content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
DS18B20 (community library) Summary DSB18XX Lib for Particle devices Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. #DS18B20 Library for Particle Core, Photon and P1 Modified DS18BXX Lib copied from Sample application using Particle Photon and Dallas DS18B20 Digital Temperature Sensor. The OneWire source code is taken from this link by @tidwelltimj. I just separated this into two classes OneWire and DS18B20. The sample code publishes a variable named tmpinfo with temperature value. Requires Particle-OneWire (included) from Wiring: Power to 3.3/5V GND to GND Signal to D2 (with 4.7k pullup resistor) Use crcCheck() to verify the sensor was successfully read. Browse Library Files
https://docs.particle.io/reference/device-os/libraries/d/DS18B20/
2022-06-25T10:20:13
CC-MAIN-2022-27
1656103034930.3
[]
docs.particle.io
SharedMethodBox From Xojo Documentation Displays a shared method box {{SharedMethodBox | name = method name | owner = class or module owner | ownertype = class/module | scope = global/public/protected/private | parameters = parameters | returntype = type of data returned | platform = all/mac/win/linux | newinversion = version where this class first appeared | modifiedinversion = version where this class has been modified | replacementreason = obsolete/deprecated | replacement = the replacement }} Shared Method TestClass.TestMethod(index as Integer, autocommit as Boolean) As RecordSet Supported for all project types and targets. Supported for all project types and targets.
http://docs.xojo.com/Template:SharedMethodBox
2022-06-25T11:13:06
CC-MAIN-2022-27
1656103034930.3
[]
docs.xojo.com
Uninstalling Aspose.Cells for SharePoint License To uninstall Aspose.Cells for SharePoint license, please use the steps below from the server console. - Retract the license solution from the farm: stsadm.exe -o retractsolution -name Aspose.Cells.SharePoint.License.wsp -immediate - Execute administrative timer jobs to complete the retraction immediately: stsadm.exe -o execadmsvcjobs - Wait for the retraction to complete. You can use Central Administration to check if the retraction has completed by going to Central Administration, then Operations and Solution Management. - Remove the solution from the SharePoint solution store: stsadm.exe -o deletesolution -name Aspose.Cells.SharePoint.License.wsp
https://docs.aspose.com/cells/sharepoint/uninstalling-aspose-cells-for-sharepoint-license/
2022-06-25T10:56:24
CC-MAIN-2022-27
1656103034930.3
[]
docs.aspose.com
. Shopify store to NetSuite store (update) The flow syncs the store information from Shopify to NetSuite. This flow is auto-triggered as soon as the store name is updated in Shopify. In NetSuite, you can find the Shopify store names in the "Celigo Shopify Store Info List" custom record. This flow is available in the Flows > General section. Shopify product ID to NetSuite item (update) The flow syncs the product ID information from Shopify and updates the same on the NetSuite item record. This flow auto-triggers when you run the “NetSuite Item to Shopify Product Add/Update” flow. This flow is available in the Flows > Inventory section. NetSuite image to Shopify image (add or update) The flow syncs the image from NetSuite to Shopify. This flow retrieves item images from the NetSuite file cabinet and adds or updates the images on the Shopify product page. Note: This flow runs only if the “NetSuite Item to Shopify Product Add/Update” flow does not sync the images. This flow is available in the Flows > Product section. Shopify order to NetSuite sales order/cash sale Add (on-demand sync) The flow performs on-demand sync up to ten orders from Shopify and creates sales orders or cash sales in NetSuite. This flow is auto-triggered when you enter order IDs in the Shopify order IDs setting (Settings > Orders section > Orders sub-tab) and click Save. You can update mappings in the “Shopify Order to NetSuite Order/CashSale Add” flow. This flow is available in the Flows > Order section. Note: If the “Shopify Order to NetSuite Order Add” and “Shopify Order to NetSuite Cash Sale Add” flows are disabled, be sure to select the correct value in Using scheduled flow, sync Shopify Orders to NetSuite as setting (Settings > Orders section > Orders sub-tab). Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360054612792-Understand-the-Shopify-NetSuite-Integration-App-dependent-flows
2022-06-25T10:57:00
CC-MAIN-2022-27
1656103034930.3
[array(['/hc/article_attachments/360080906852/Shopify_General_flow.jpg', 'Shopify_General_flow.jpg'], dtype=object) array(['/hc/article_attachments/360080906912/Shopify_product_ID.jpg', 'Shopify_product_ID.jpg'], dtype=object) array(['/hc/article_attachments/360080906892/Shopify_image.jpg', 'Shopify_image.jpg'], dtype=object) array(['/hc/article_attachments/360081009271/Shopify_on_demand_sync.jpg', 'Shopify_on_demand_sync.jpg'], dtype=object) ]
docs.celigo.com
.18) becomes Next, we demonstrate how to implement a log-sum-exp constraint (6.17).). [r,res] = mosekopt('symbcon'); % Input data Awall = 200; Afloor = 50; alpha = 2; beta = 10; gamma = 2; delta = 10; % Objective prob = []; prob.c = [1, 1, 1, 0, 0]'; % Linear constraints: % [ 0 0 0 1 1 ] == 1 % [ 0 1 1 0 0 ] <= log(Afloor) % [ 1 -1 0 0 0 ] in [log(alpha), log(beta)] % [ 0 -1 1 0 0 ] in [log(gamma), log(delta)] % prob.a = [ 0 0 0 1 1; 0 1 1 0 0; 1 -1 0 0 0; 0 -1 1 0 0 ]; prob.blc = [ 1; -inf; log(alpha); log(gamma) ]; prob.buc = [ 1; log(Afloor); log(beta); log(delta) ]; prob.blx = [ -inf; -inf; -inf; -inf; -inf]; prob.bux = [ = sparse([0 0 0 1 0; 0 0 0 0 0; 1 1 0 0 0; 0 0 0 0 1; 0 0 0 0 0; 1 0 1 0 0]); prob.g = [ 0; 1; log(2/Awall); 0; 1; log(2/Awall)]; prob.cones = [ res.symbcon.MSK_CT_PEXP, 3, res.symbcon.MSK_CT_PEXP, 3 ]; % Optimize and print results [r,res]=mosekopt('maximize',prob); exp(res.sol.itr.xx(1:3))
https://docs.mosek.com/latest/toolbox/tutorial-gp-shared.html
2022-06-25T11:02:17
CC-MAIN-2022-27
1656103034930.3
[]
docs.mosek.com
Build a paid content site with replit.web and. By the end of this tutorial, you'll be able to: Getting started To get started, create a Python repl. Our application will have the following functionality: - Users can log in with their Replit accounts. - Users can purchase PDFs. - Users can view free PDFs and PDFs that they've previously purchased. - Administrators can upload new PDFs. We've covered both replit.web and Stripe in previous tutorials, so some aspects of the following may be familiar if you've built a brick shop or a technical challenge website. We'll start our app off with the following import statements in main.py: import os, shutil import stripe from flask import Flask, render_template, render_template_string, flash, redirect, url_for, request, jsonify from flask.helpers import send_from_directory from werkzeug.utils import secure_filename from replit import db, web from functools import wraps Here we're importing most of what we'll need for our application: - Python's osand shutilpackages, which provide useful functions for working with files and directories. - Stripe's Python library. - Flask, our web framework and the heart of the application. - A Flask helper function send_from_directory, which will allow us to send PDFs to users. - A function secure_filenamefrom the Werkzeug WSGI (which Flask is built on) that we'll use when admins upload PDFs and other files. - Replit's web framework and Replit DB integration, which we'll use for user authentication and persistent data storage. - The wrapstool from Python's functools, which we'll use to make authorization decorators for restricting access to sensitive application functionality. Now that the imports are out of the way, let's start on our application scaffold. Add the following code to main.py: app = Flask(__name__, static_folder='static', static_url_path='') This code initializes our Flask application. We've added a static_folder and static_url_path so that we can serve static files directly from our repl's file pane without writing routing code for each file. This will be useful for things like images and stylesheets. Add the following code to initialize your application's secret key: # Secret key app.config["SECRET_KEY"] = os.environ["SECRET_KEY"] Our secret key will be a long, random string. You can generate one in your repl's Python console with the following two lines of code: import random, string ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20)) Rather than putting this value directly into our code, we'll retrieve it from an environment variable. This will keep it out of source control and is good practice for sensitive data. In your repl's Secrets tab, add a new key named SECRET_KEY and enter the random string you just generated as its value. Once that's done, return to main.py and add the code below to initialize our Replit database: # Database setup def db_init(): if "content" not in db.keys(): db["content"] = {} if "orders" not in db.keys(): db["orders"] = {} # Create directories if not os.path.exists("static"): os.mkdir("static") if not os.path.exists("content"): os.mkdir("content") db_init() Replit's Database can be thought of and used as one big Python dictionary that we can access with db. Any values we store in db will persist between repl restarts. We've written a function to initialize the database as we may want to do it again if we need to refresh our data during testing. Whenever we initialize our database, we will also create the content and static directories, which will contain user-uploaded files. Next we need to create our UserStore (a secondary database keyed by username), and list of admins: users = web.UserStore() ADMINS = ["YOUR-REPLIT-USERNAME-HERE"] Make sure to replace the contents of the ADMINS list with your Replit username. Finally, let's make our root page. Add the following code, and then run your repl. # Main app Replit username on the greeting message. Content upload and other admin functionality Before we do anything else with our site, we need to have some PDFs to sell. While we could manually upload our PDFs to our repl and write code to add each one to the database, it will make our site more user-friendly if we include an upload form for this purpose. This upload form should only be accessible by admins, so we can enforce some level of quality control. We'll also create a route that allows admins to refresh the application database. Access control Add the following functions to main.py, just below the line where you've assigned ADMINS: # Helper functions def is_admin(username): return username in ADMINS # Auth decorators def admin_only(f): @wraps(f) def decorated_function(*args, **kwargs): if not is_admin(web.auth.name): flash("Permission denied.", "warning") return redirect(url_for("index")) return f(*args, **kwargs) return decorated_function The code in the second function may look a bit strange if you haven't written your own decorators before. Here's how it works: admin_only is the name of our decorator. You can think of decorators as functions that take other functions as arguments. (The two code snippets below are. Now we can create the following admin routes below the definition of the index function: # Admin functionality @app.route('/admin/content-create', methods=["GET", "POST"]) @web.authenticated @admin_only def content_create(): pass @app.route('/admin/db-flush') @web.authenticated @admin_only def flush_db(): pass Note that both of these functions are protected with the @web.authenticated and @admin_only decorators, restricting their use to logged-in admins. The first function will let our admins create content, and the second will allow us to flush the database. While the second function will be useful during development, it's not something we'd want to use in a finished application, as our database will contain records of user payments. Content creation form Before we can fill in the code for content creation, we need to create the web form our admins will use. As the form creation code will include a lot of information and functionality and require several special imports, we're going to put it in its own file so we can keep a navigable codebase. In your repl's files pane, create forms.py. Enter the following import statements at the top of forms.py: from flask_wtf import FlaskForm from flask_wtf.file import FileField, FileRequired, FileAllowed from wtforms import StringField, TextAreaField, SubmitField, FloatField, ValidationError from wtforms.validators import InputRequired, NumberRange, Length from replit import db Here we're importing from WTForms, an extensive library for building web forms, and Flask WTF, a library which bridges WTForms and Flask. We're also importing our Replit database, which we'll need for uniqueness validations. The structure of our forms is dictated by the structure of our database. In our db_init function, we defined two dictionaries, "content" and "orders". The former will contain entries for each of the PDFs we have for sale. These entries will contain the PDF's filename as well as general metadata. Thus, our "content" data structure will look something like this: { "content": { "ID": { "name": "NAME", "description": "DESCRIPTION", "file": "PDF_FILENAME", "preview_image": "IMAGE_FILENAME", "price": 5, } } } The ID value will be the content's create our form. With Flask WTF, we model a form as a class inheriting from FlaskForm. This class takes in the value of Flask's request.form and applies validations to the fields therein. Add the following class definition to the bottom of forms.py: class ContentCreateForm(FlaskForm): name = StringField( "Title", validators=[ InputRequired(), Length(3) ] ) description = TextAreaField( "Description", validators=[InputRequired()] ) file = FileField( "PDF file", validators=[ FileRequired(), FileAllowed(['pdf'], "PDFs only.") ] ) image = FileField( "Preview image", validators=[ FileRequired(), FileAllowed(['jpg', 'jpeg', 'png', 'svg'], "Images only.") ] ) price = FloatField( "Price in USD (0 = free)", validators=[ InputRequired(), NumberRange(0) ] ) submit = SubmitField("Create content") def validate_name(form, field): if name_to_id(field.data) in db["content"].keys(): raise ValidationError("Content name already taken.") When admins create content, they'll specify a name, a description, and a price, as well as upload both the PDF and a preview image. We've used WTForm's validators to restrict the file types that can be uploaded for each. Should we decide to branch out from selling PDFs in the future, we can add additional file extensions to the file field's FileAllowed validator. We could also make individual fields optional by removing their InputRequired() or FileRequired() validators. The final part of our form is a custom validator to reject new PDFs with IDs that match existing PDFs. Because we're validating on ID rather than name, admins won't be able to create PDFs with the same name but different capitalization (e.g. "Sherlock Holmes" and "SHERLOCK HOLMES"). We've finished creating our form class. Now we can return to main.py and import the class with the following import statement, which you can add just below the other imports at the top of the file. from forms import name_to_id, ContentCreateForm Note that we've also imported name_to_id, which we'll use when populating the database. Admin routes We can now use our form to implement our content creation route. Populate the content_create function with this code: # Admin functionality @app.route('/admin/content-create', methods=["GET", "POST"]) @web.authenticated @admin_only def content_create(): form = ContentCreateForm() if request.method == "POST" and form.validate(): content_name = form.name.data content_id = name_to_id(content_name) content_price = form.price.data content_file = form.file.data content_filename = secure_filename(content_file.filename) content_file.save(os.path.join('content', content_filename)) image_file = form.image.data image_filename = secure_filename(image_file.filename) image_file.save(os.path.join('static', image_filename)) content_paywalled = content_price > 0 # Construct content dictionary db["content"][content_id] = { "name": content_name, "description": form.description.data, "filename": content_filename, "preview_image": image_filename, "paywalled": content_paywalled, "price": content_price, } flash("Content created!") return redirect(url_for('content', content_id=content_id)) return render_template("admin/content-create.html", form = form, **context()) First, we create an instance of ContentCreateForm. This will automatically use the values in request.form, including the uploaded files. We then check whether the current request is an HTTP POST, and we call content's ID using the helper function from forms.py, store our content's price value, and then save our PDF and image files to the content and static directories. Saving images to static will allow Flask to serve them without us writing additional code. We'll need custom code for PDFs, however, as we need to ensure they're only accessible to paying customers. We use the variable content_paywalled to determine whether this PDF should be available for free or behind a paywall. Finally, we save our content's details to the database and redirect the creator to the content page, which we'll build in the next section. At the bottom of the function, we render our content-create page and tell it which form to use. This will happen regardless of whether the initiating request was a GET or a POST. We'll create the template and define the context function when we build the application front-end. Next, we need to create our database flushing functionality. Populate the flush_db function with the following code: @app.route('/admin/db-flush') @web.authenticated @admin_only def flush_db(): # clear db del db["content"] del db["orders"] # clear users for _, user in users.items(): user["content_library"] = [] # delete content and images shutil.rmtree("content") shutil.rmtree("static") # reinit db_init() return redirect(url_for("index")) After deleting all database content and uploaded files, we call db_init() to start afresh. Keep in mind that this function should not be used if you're storing real user data unless you've made a backup. Content viewing and paywalls Now that our site admins can upload PDFs, we need a way for users to view them. We'll start by creating another helper function, just below the definition of is_admin: def owns_content(username, content_id): if "content_library" in users[username].keys() and users[username]["content_library"] is not None: return content_id in users[username]["content_library"] We have to do several checks on our user's content_library, as it can be in a few different states – the key might not exist, or it might be set to None, or it might be a list. We'll use this function to determine which content has been purchased by a given user and thus avoid writing all these checks again. Now we need to create our application's content-viewing routes. We'll start by rewriting the / route so that it renders a template rather than a greeting string. This page will contain a list of PDFs. Change the code in index to the following: # Main app @app.route("/") @web.authenticated def index(): return render_template("index.html", **context()) Then we'll write a route that displays individual PDF metadata, by adding this function just below the definition of index: @app.route("/content/<content_id>") @web.authenticated def content(content_id): return render_template("content.html", content_id=content_id, **context()) The content_id value will be the same ID that we're using in our database. This page will contain the content's name, preview image, description, and either a download link, or a purchase link, depending on whether the PDF is paywalled, and whether the current user has purchased it. Lastly, we need a route that handles downloading actual PDFs. Add the following code just below the content function definition: @app.route("/content-file/<content_id>") @web.authenticated def content_file(content_id): content = db["content"][content_id] if not content["paywalled"] or owns_content(web.auth.name, content_id): return send_from_directory("content", path=content["filename"]) else: return "Access denied." If the current user owns this PDF, or it's not paywalled, we use Flask's send_from_directory to allow them to download it. Otherwise, we return an error message. Creating the application frontend We have most of our application back-end, so now let's create the front-end. We'll do this using HTML and Jinja, Flask's front-end templating language. First, let's create the following HTML files in a new directory called templates: templates/ |__ admin/ | |__ content-create.html |__ _macros.html |__ content.html |__ index.html |__ layout.html Once you've created these files, let's populate them, starting with templates/layout.html: <!DOCTYPE html> <html> <head> <title>Books and Manuscripts</title> </head> <body> {% with messages = get_flashed_messages() %} {% if messages %} <ul class=flashes> {% for message in messages %} <li>{{ message }}</li> {% endfor %} </ul> {% endif %} {% endwith %} {% if name != None %} <p>Logged in as {{ username }}</p> {% endif %} {% block body %}{% endblock %} </body> </html> We'll use this file as the base of all our pages, so we don't need to repeat the same HTML. It contains features we want on every page, such as flashed messages, and an indication of who's currently logged in. provide our form fields with error-handling, provided by WTForms. We'll use this macro in templates/admin/content-create.html, which we'll populate with the following code: {% extends "layout.html" %} {% block body %} {% from "_macros.html" import render_field %} <h1>Upload content item</h1> <form action="/admin/content-create" method="post" enctype="multipart/form-data"> {{ render_field(form.name) }} {{ render_field(form.description) }} {{ render_field(form.file) }} {{ render_field(form.image) }} {{ render_field(form.price) }} {{ form.csrf_token }} {{ form.submit }} </form> {% endblock %} Here, {% extends "layout.html" %} tells our templating engine to use layout.html as a base template, and {% block body %} ... {% endblock %} defines the code to place inside layout.html's body block. Our render_function macro will be used to show our different form fields – some of these will be text input fields, while others will be file upload fields. Our form also has a hidden field specified by {{ form.csrf_token }}. This is a security feature WTForms provides to prevent cross-site request forgery vulnerabilities. Let's define our home page now, with a list of content items. Add the following code to templates/index.html: {% extends "layout.html" %} {% block body %} <h1>Marketplace</h1> <ul> {% for id, content in content.items() %} <li> <a href="/content/{{ id }}">{{ content.name }}</a> {% if content.paywalled %} {% if id in my_library %} (PURCHASED) {% else %} ({{ "${:,.2f}".format(content.price) }}) {% endif %} {% endif %} </li> {% endfor %} {% if admin %} <li><a href="/admin/content-create">NEW CONTENT...</a></li> {% endif %} </ul> {% if admin %} <h1>Admin functions</h1> <ul> <li><a href="/admin/db-flush">Flush database</a></li> </ul> {% endif %} {% endblock %} We display each piece of content in a list. If an item is paywalled, we show its price if the current user hasn't already purchased it, or "(PURCHASED)" if they have. In addition, we use {% if admin %} blocks to include links to admin functionality, such as content creation and database flushing, that will only display when an admin is logged in. The last page we need to create is templates/content.html, which will display information about individual PDFs: {% extends "layout.html" %} {% block body %} <h1>{{ content[content_id].name }}</h1> <img src='/{{ content[content_id].preview_image }}' style='max-width: 150px'> <p>{{ content[content_id].description }}</p> {% if content_id in my_library or not content[content_id].paywalled %} <a href="/content-file/{{ content_id }}">Download PDF</a> {% else %} <form action="/checkout/{{ content_id }}" method="POST"> <button type="submit" id="checkout-button">Buy {{ content[content_id].name }} for {{ "${:,.2f}".format(content[content_id].price) }}</button> </form> {% endif %} {% endblock %} As with the home page, we display different parts of the page depending on whether the content is paywalled, and whether the current user owns it. If the user must purchase the PDF, we include a single-button form that posts to We've referred to a lot of different variables in our front-end templates. Flask's Jinja templating framework allows us to pass the variables we need into render_template, as we did when building the application backend. Our content creation page needed a form, and our content viewing pages needed an ID. In addition, we unpack the return value of a function named context to all of our rendered pages. Define this function now with our other helper functions in main.py, just below owns_content: def context(): if "content_library" in users.current.keys() and users.current["content_library"] is not None: my_library = users.current["content_library"] else: my_library = [] return { "username": web.auth.name, "my_library": my_library, "admin": is_admin(web.auth.name), "content": db["content"] } This will give every page most of the application's state, including the full content dictionary and the current user's library. If we find we need another piece of state later, we can add it to the context helper function, and it will be available to all our pages. Run your repl now and add some content. For best results, open the site in a new tab, rather than using it in your repl's browser. If you add free PDFs, you'll be able to download them, but you won't be able to purchase paywalled PDFs yet. Integrating with Stripe Our application is fully functional for free PDFs. To have users pay for premium PDFs, we'll integrate Stripe Checkout. This will save us the trouble and risk of developing our own payment gateway or storing users' card details. To use Stripe Checkout, you will need an activated Stripe account. Create one now at if you haven't already. Once you've created a Stripe account, add the following code near the top of main.py, just below the import statements: # Stripe setup stripe.api_key = os.environ["STRIPE_KEY"] DOMAIN = "YOUR-REPL-URL-HERE" You can find your Stripe API keys on this page of the developer dashboard. Make sure that you're in test mode and copy the secret key to your clipboard. Then return to your repl and create an environment variable called STRIPE_KEY with the value you just copied from Stripe. You will also need to replace the value of DOMAIN with your repl's root URL. You can get this URL from the in-repl browser. Stripe Checkout Stripe provides detailed technical documentation and code snippets in a variety of languages, so setting up basic integration is largely a matter of copying and adapting these code snippets to our needs. We'll start by creating the /checkout/<content_id> route. This will create a new Stripe checkout session and redirect the user to a Stripe payment page. Add the following code below your content_file function definition: # Stripe integration @app.route("/checkout/<content_id>", methods=["POST"]) @web.authenticated def checkout(content_id): # Proceed to checkout try: line_items=[ { "price_data": { "currency": "usd", "product_data": { "name": db["content"][content_id]["name"], "images": [DOMAIN + "/" + db["content"][content_id]["preview_image"]] }, 'unit_amount': int(db["content"][content_id]["price"]*100), }, "quantity": 1 }, ], payment_method_types=[ 'card', ], mode='payment', success_url=DOMAIN + '/success?session_id={CHECKOUT_SESSION_ID}', cancel_url=DOMAIN + '/cancel' ) except Exception as e: return str(e) # Record order order_id = checkout_session.id db["orders"][order_id] = { "content_id": content_id, "buyer": web.auth.name } return redirect(checkout_session.url, code=303) This code is adapted from Stripe's sample integration Python code. It initiates a checkout from the pricing and product details we provide and redirects the user to Stripe's checkout website to pay. If payment is successful, it sends the user to a success_url on our site; otherwise, it sends to the user to a cancel_url. We'll define both of these shortly. We've made two key changes to the sample code. First, we've included the details for our content item in line_items: line_items=[ { "price_data": { "currency": "usd", "product_data": { "name": db["content"][content_id]["name"], "images": [DOMAIN + "/" + db["content"][content_id]["preview_image"]] }, 'unit_amount': int(db["content"][content_id]["price"]*100), }, "quantity": 1 }, ], Rather than defining individual products on Stripe's side, we're programmatically constructing our products at checkout time. This saves us from having to add our PDF metadata in two places. We provide our product's name, and the full URL of its preview image, so both can be shown on the Stripe Checkout page. As Stripe expects prices in cents, we multiply the price from our database by 100 before converting it to an integer. The second change we've made to the sample code is to record the order details in our database. We need to do this so that we can fulfill the order once it's paid for. # Record order order_id = checkout_session.id db["orders"][order_id] = { "content_id": content_id, "buyer": web.auth.name } We reuse Stripe's Checkout Session object's id as our order_id so that we can link the two later. If you run your repl now, you should be able to reach the Stripe checkout page for any paywalled content you've added. Don't try to pay for anything yet though, as we still need to build order fulfillment. Stripe fulfillment As we're selling digital goods, we can integrate fulfillment directly into our application by adding purchased content to the buyer's library as soon as payment has been made. We'll do this with a function called fulfill_order, which you can add just below the def fulfill_order(session): # Get order details content_id = db["orders"][session.id]["content_id"] buyer = db["orders"][session.id]["buyer"] # Add content to library if session.payment_status == "paid" and not owns_content(buyer, content_id): if users[buyer]["content_library"] is not None: users[buyer]["content_library"].append(content_id) else: users[buyer]["content_library"] = [content_id] This function takes a Stripe Checkout Session object, retrieves the corresponding order from our database, and then adds the order's content to the buyer's library if a payment has been made, and the buyer does not already own the content. We'll invoke this function from our /success route, which we'll define just below it. @app.route('/success', methods=['GET']) @web.authenticated def success(): # Get payment info from Stripe session = stripe.checkout.Session.retrieve(request.args.get('session_id')) # Abort if user is not buyer if web.auth.name != db["orders"][session.id]["buyer"]: return "Access denied." fulfill_order(session) return render_template_string(f'<html><body><h1>Thanks for your order, {web.auth.name}!</h1><p>Your purchase has been added to your <a href="/">library</a>.</p></body></html>') Here we retrieve the session details from the session_id GET parameter Stripe passed to our app, ensure that the current user is also the order buyer, and call fulfill_order. We then render a simple success page. You can replace this with a full Jinja template if you want to make it a bit fancier. We also need to define the /cancel route, used if the payment fails. This one is quite simple: @app.route('/cancel', methods=['GET']) @web.authenticated def cancel(): return render_template_string("<html><body><h1>Order canceled.</h1></body></html>") If you run your repl now, you should be able to purchase content. You can find test credit card numbers on the Stripe integration testing documentation page. You can use any future date as the expiry date and any CVV. Webhooks A potential problem with the way we're fulfilling orders is that a user might close the Stripe Checkout tab or lose internet connectivity after their payment has been confirmed, but before they're redirected to our /success route. If this happens, we'll have their money, but they won't have their PDF. For this reason, Stripe provides an additional method for fulfilling orders, based on webhooks. A webhook is an HTTP route intended to be used by machines rather than people. Much like we've created routes for our admins to upload PDFs, and our users to buy PDFs, we'll now create a route for Stripe's bots to notify our application of completed payments. First, you'll need to create a webhook on your Stripe Dashboard. Visit the Webhooks page and click Add endpoint. You should then see a page like this: On this page, do the following: For the Endpoint URL value, enter your repl's URL, followed by /fulfill-hook. Select the Click Add endpoint. Stripe should then redirect you to your new webhook's details page. From here you can see webhook details, logs and the signing secret. The signing secret is used to ensure that our webhook only accepts requests from Stripe – otherwise, anyone could call it with spoofed data and complete orders without paying. Reveal your webhook's signing secret and copy it to your clipboard, then return to your repl. We'll use another environment variable here. Add the following code below your cancel function definition: endpoint_secret = os.environ['ENDPOINT_SECRET'] Then create an environment variable called ENDPOINT_SECRET with the value you just copied from Stripe. For our app's webhook code, we can once again tweak Stripe's sample code. We'll use this order fulfillment code as a base. Add this code below your endpoint_secret assignment: @app.route('/fulfill-hook', methods=['POST']) def fulfill_webhook(): event = None payload = request.data sig_header = request.headers['STRIPE_SIGNATURE'] try: event = stripe.Webhook.construct_event( payload, sig_header, endpoint_secret ) except ValueError as e: # Invalid payload raise e except stripe.error.SignatureVerificationError as e: # Invalid signature raise e # Handle the event if event['type'] == 'checkout.session.completed': session = event['data']['object'] # Fulfill the purchase... fulfill_order(session) else: print('Unhandled event type {}'.format(event['type'])) return jsonify(success=True) After ensuring that the request we've received comes from Stripe, we retrieve the Checkout Session object from Stripe's fulfill_order. If you run your repl now, you should be able to purchase a PDF, close the checkout page after your payment is accepted but before being redirected, and still end up with the PDF in your library. You can also view webhook invocation logs on the Stripe Dashboard. Where next? We've built a functional if fairly basic storefront for digital goods. If you'd like to continue with this project, consider the following extensions: - Improving the site's appearance with custom CSS. - Branching out from PDFs to other files, such as audio podcasts, videos, or desktop software. - Providing a subscription option that gives users access to all PDFs for a limited time. - Converting the site into a peer-to-peer marketplace where users can all upload and purchase files from each other. You can find the code for this tutorial here:
https://docs.replit.com/tutorials/paid-content-site
2022-06-25T11:47:46
CC-MAIN-2022-27
1656103034930.3
[array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/site-functionality.gif', 'Paid content site functionality'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/create-python-repl.png', 'Create python repl'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/randomstring.png', 'Random string'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/repl-secrets.png', 'Repl secrets'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/login-button.png', 'Login button'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/forms-py-in-file-pane.png', 'Create forms.py file'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/folder-structure.png', 'Folder structure'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/open-new-window.png', 'Open in new window'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/free-pdf-download.png', 'Free pdf download'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/stripe-key.png', 'Stripe Key'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/repl-url.png', 'Repl URL'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/alice-paywall.png', 'Paywall'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/checkout-page.png', 'Checkout page'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/alice-purchased.gif', 'PDF purchased'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/add-webhook.png', 'Add webhook'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/signing-secret.png', 'Signing secret'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/tutorials/29-paid-content-site/stripe-webhook-success.png', 'Stripe webhook success'], dtype=object) ]
docs.replit.com
Sysdig On-Premises Release Notes You may also want to review the update log for Falco rules used in the Sysdig Secure Policy Editor. See Falco Rules Changelog.. Review the Sysdig On-Premises Release Support statement. Supported Web Browsers Sysdig supports, tests, and verifies the latest versions of Chrome and Firefox. Other browsers may also work, but are not tested in the same way. 5.1.2 Hotfix May 2022 Upgrade Process Supported Upgrades From: 4.0.x, 5.0.x Secure Feature: Reporting - Added the Run Now and Download(s) menu items Bugs - Fixed an “Unable to load latest task result” bug when accessing compliance benchmarks results 5.1.1 Hotfix May 2022 Upgrade Process Supported Upgrades From: 4.0.x, 5.0.x Sysdig Platform - Added the RelayStateparameter optional for SAML configuration - Upgraded the Spring Framework to version 5.2.20 in the sysdig-backendcontainer Monitor - Added the ability to choose regions with Capture Storage. Installer Improvements - Fixed an issue with MultiAZ GCP/GKE platforms that would prevent Elasticsearch from starting. - Fixed an ingress permissions issue when upgrading from 5.0.4 to 5.1.0 that would result in the Sysdig UI generating a 404 Not Found error. - Fixed an installer bug when cloudProvider.namewas set and cloudProvider.regionwas not set. - Fixed a Kafka/Zookeeper statefulset naming issue when installing or upgrading Sysdig on-premise Bugs - Monitor Alert re-notification messages now provide the latest metric value instead of the metric value at time of triggering. - Fixed a Runtime scan page issue not displaying image results based on specific Team scopes. Release 5.0.5 Hotfix for CVE-2022-22965 This hotfix upgrades the Spring Framework to version 5.2.20 in the sysdig-backend container. Release 5.1.0 March 2022 Upgrade Process Supported Upgrades From: 4.0.x, 5.0.x For the full supportability matrix, see the Release Notes on Github. There you will also find important Install instructions. Sysdig Platform Installer Improvements - Kubernetes versions 1.22 and 1.23 are now supported. - An optional cronjob for the falco-rules-installer, which runs once a month, can now be created through the Installer values file. - Users operating their own ingress controller, such as Rancher, are no longer need to manually create Ingress Objects Go HTTP APIs. Note that the Collector uses TCP and will need external configuration. - The Installer now has a pre-flight check to verify the kubectl and Kubernetes versions of the cluster with the context provided by the user. Secure API Docs - API documentation for Sysdig Secure are now enabled by default. Bugs - Fixed an issue with Secure Events not displaying the correct number of events in the dashboard. - Fixed an issue that prevented Rapid Response being enabled with a Secure Team created with LDAP. - Fixed a network issue that would sometimes occur during an upgrade which would cause PostgreSQL to timeout. - Fixed an issue when the nats-streaming-initcontainer failed to start due to permission problem when storageClassProvisioneris set to hostPath. - Fixed a Compliance Database Password issue during upgrades from on-prem 4.0.x to on-prem 5.0.x - Fixed an issue with the StatefulSet definition when upgrading from 4.0.x to 5.0.x on a Kubernetes cluster prior to 1.18.x Release 4.0.7/5.0.4 Hotfix for CVE-2021-44228 in Apache’s log4j (3.6.4, 4.0.7, 5.0.4) The patch relese upgrades all components that compose Sysdig’s Platform running Apache’s vulnerable Log4j library to 2.16. Note on ElasticSearch: This is using Log4j v2.11.1. An additional JVM parameter has been added through the Installer in accordance with the recommendations from Elastic. In addition, the impacted class from the Log4j library has been removed completely. Security scanners may still list this as vulnerable but in this case it will be a false positive. Elastic currently does not offer a way to fully remove or upgrade this component. Release 4.0.6/5.0.3 Hotfix for CVE-2021-44228 in Apache’s log4j (3.6.3, 4.0.6, 5.0.3). We have released a patch version of our self hosted-software which upgrades the vulnerable version of log4j or adds additional mitigating controls suggested by vendors. - 3.6.3 - 4.0.6 - 5.0.3 Please reach out to support or the customer success team for assistance with your upgrade. Release 5.0.2 Hotfix December 2021 Upgrade Process Supported Upgrades From: 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4, 4.0.5, 5.0.0, 5.0.1 For the full supportability matrix, see the Release Notes on Github. There you will also find important Install instructions. Fixes - Fixed a version-comparison bug in RedHat rpm packages. - Enabled a retention manager for Secure-only on-prem installations to handle data retention. Release 5.0.1 Hotfix November 2021 Upgrade Process Supported Upgrades From: 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4, 4.0.5, 5.0.0 For the full supportability matrix, see the Release Notes on Github. There you will also find important Install instructions. Fixes - Fixed missing field “Last Evaluation Date” in the scanning policy evaluation results and Scheduled Reports - Kubernetes environment / labels are no longer mandatory to generate a scanning Scheduled Report - Fixed CVSS filters in scanning Scheduled Reports - Fixed an issue in scanning Scheduled Reports when scanning Red Hat images that caused related Red-Hat advisories (RHSA) to not be displayed - Fixed priority sorting for ‘Unknown’ severity vulnerabilities that are now considered less severe than ‘Negligible’ in scanning Scheduled Reports Release 4.0.5 Hotfix October 28, 2021 Upgrade Process Supported Upgrades From: 3.6.2, 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4 For the full supportability matrix, see the Release Notes on Github. There you will also find important Install instructions. Fixes - Fixed Scheduled Reports not displaying last evaluation datefield - Fixed an issue in 4.0.x Scheduled Reports when scanning Red Hat images, causing vulnerabilities missing related Red-Hat advisory (RHSA) to not be displayed Release 4.0.4 Hotfix September 29, 2021 Upgrade Process Supported Upgrades From: 3.6.2, 4.0.0, 4.0.1, 4.0.2, 4.0.3 For the full supportability matrix, see the Release Notes on Github. There you will also find important Install instructions. Fixes - Fixed a timeout issue for policy advisor and scanning database init containers occuring in some environments - Fixed a certificate handling issue at network security component Release 5.0 September 7, 2021 Known limitations: The Network Security Policy tool is not supported in OpenShift 3.11 with this release. Version 5.0.0 does not yet support Kubernetes 1.22. Upgrade Process **Supported Upgrades From: **4.0.x For the full supportability matrix, see the Release Notes on Github. There you will also find important Install and Upgrade instructions. Sysdig Platform Define S3 Bucket Path for Storing Captures Sysdig Platform users can now define a custom path in the S3 bucket they are using for storing captures. This is useful to those who want to reuse a certain bucket used for other purposes or send captures from different installations to the same S3 bucket. For more information, see (On-Prem) Configure Custom S3 Endpoint.(On-Prem) Configure Custom S3 Endpoint Webhook Channel Enhancements Sysdig supports the following on a Webhook channel integration: >>IMAGE. Microsoft Team Channel You can now use Microsoft Team s as a notification channel in Sysdig Monitor. See Configure a Microsoft Teams Channel for more details._1<< Sysdig Monitor Workload Label Sysdig Monitor now supports two new labels, kubernetes.workload.name and kubernetes.workload.type which can be used for scoping Dashboards and configuring Gropings. . Sysdig Secure Sysdig Secure for cloud Sysdig Secure for cloud is available with Cloud Risk Insights for AWS, Cloud Security Posture Management based on Cloud Custodian for AWS and multi-cloud threat detection for AWS using Falco. . Falco Policy Tuner. Tunable Exclusions Available in Insights Details We’ve added the ability to identify and add exceptions using the Policy Tuner in the Insights module. Now you can receive policy tuning recommendations directly within the Insights view, enhancing usability, ease, and refinement of results. See also: Insights and Runtime Policy Tuning . New Scan Results Page Layout We have reorganized the visual layout of the Scan Results summaries to clearly distinguish policy evaluation from vulnerability matching and to better summarize the information. Kubernetes Network Security: New Configuration and Improved User Experience Sysdig’s Kubernetes Network Policy tool has been updated to include additional fine-tuning configurations and an improved user experience. >>IMAGE. Activity Audit Improved The Activity Audit user interface was enhanced as follows: Activity Audit entry point moved under the Investigate menu Trace feature, used for kube exec,is now also available for parent commands The filter selector is also available in-line, with no need to open the detail view Lateral Tree view removed and replaced with the Scope menu above, in alignment with the Event panel Alert Notification Channel for Microsoft Teams Microsoft Teams is now available as an Alert Notification Channel in Sysdig Secure for Runtime Policies. See also: Manage Policies Internal Scanning Date Improvements Scanning policies have improved the reliability of the Max days since creation and Max days since fix rule gate parameters. The information is now included in the inline-scan JSON report and available in the Jenkins plugin. Reporting Improved with Multi-Select Option Added the option to select multiple policies and multiple package types as part of a scheduled scanning report. Release 4.0.3 August 27, 2021 This release is a hot-fix only release. Upgrade Process Supported Upgrades from: 3.6.2, 4.0.0, 4.0.1, 4.0.2 For the full supportability matrix, see the Release Notes on Github. Other upgrade notes are maintained in the GitHub upgrade folder. Installation Instructions Full installation instructions for Kubernetes environments: here. Defect Fixes Inline Scanning Fix for Sysdig Secure Fixed an issue when scanning long Java manifest files that caused the scan to fail. LDAP Improvements for Sysdig Platform Fixed an issue with the LDAP sync Job running out of shared memory. The LDAP sync will no longer stop if it encounters an intermittent issue or error, but will allow the sync to complete. 4.0.2 June 29, 2021 This release is a hot-fix only release for Sysdig Secure features. Upgrade Process Supported Upgrades From: 3.6.2, 4.0.0, 4.0.1. For the full supportability matrix, see the Release Notes on Github. Improvements CSV Runtime Reports The runtime labels that were described in a single CSV column (JSON encoded) will now be represented using one column per label. If the same vulnerability, same package, same image is found in several runtime contexts, the CSV will separate each runtime context in a separate row, instead of building a JSON array with several objects nested. See also: Scheduled Reports. Defect Fixes Fixed Incorrect Fingerprinting Causing False Positives in Scanning Fixed incorrect version detection for Apache Struts 2 packages leading to false positives. Fixed Metadata Retrieval Issue in Scanning Fixed incorrect metadata retrieval for corner cases when imageIDs are associated with several digests. Improved Memory Usage Reduced Redis memory consumed by scanning by optimizing the usage of the scanning API cache. Fixed Subscription Alert Entries Fixed scanning alerts triggers for images discovered via the Node Image Analyzer or Inline Scan container. Readable Filenames for Scanning Reports The scheduled scanning reports now generate report files named after the report name i.e. my-daily-critical-vulns-2021-05-04.zip Release 4.0.1 May 05, 2021 This release is a hotfix-only release for Sysdig Secure features. Upgrade Process Supported Upgrades From: 3.6.2 For the full supportability matrix, see the Release Notes on Github. Improvements Improved RHEL Vulnerability Matching The RedHat OVAL source feed interpretation and the matching algorithm have been improved to handle special RedHat packages versioning rules. This should effectively translate into fewer false positives and more accurate fix versions for RH-based packages. Defect Fixes Security Fix A SQL injection vulnerability discovered in 4.0.0 has been fixed in 4.0.1. Scan Results The vulnerability list on the UI shows a different number of vulnerabilities as compared to the summary PDF report for the same image. This issue has been fixed as part of Improved RHEL Vulnerability Matching. Secure Audit Reporting Errors Secure Audit Reporting displayed intermittent errors for custom agent versions. Fixed the agent version parsing to correctly assess feature support. Release 4.0.0 April 06, 2021 Upgrade Process Supported Upgrades From: 3.6.2 For the full supportability matrix, see the Release Notes on Github. Migrating MySQL to PostgreSQL For consolidation and to meet higher performance requirements, upgrading to v4.0.0 from v3.x.x involves migrating MySQL to the PostgreSQL database. The migration process is seamless and no user intervention is expected. For more information, see Migration Documentation on Github. Related Documents Deprecations Deprecating “Scan Image” Reaction in Alerts When setting up runtime alerts in previous versions, there was an option to trigger “scan image” when an unscanned image was detected. This has been deprecated in the UI in favor of the Node Image Analyzer, which is bundled by default with the Sysdig agent as an additional container per node. See also: Manage Scanning Alerts. Defect Fixes Large SAML Metadata An issue was detected in an earlier version where large SAML metadata could not be saved due to limits in the database field size. This issue is now fixed and Sysdig now supports large SAML metadata. Single Sign-On for Monitor and Secure When a user logs in to Sysdig products successively, a confusing error message related to SAML was displayed if: If both Secure and Monitor have been configured with SSO. The Create User on login feature has been turned on for both products. This issue is fixed with this release. When a user created in one product logs in to another, and if the Create user on login feature is turned on, no error message is thrown. The user is added to the appropriate team in the product and can log in to the other. Sysdig Platform Monitor UI Displays On-Prem License Information The on-prem license information is now displayed on the Monitor UI. Additionally, users will be warned of imminent license expiration on the UI. Changes to Auditing Sysdig Platform Activities Due to the changes in the underlying database (PostgreSQL instead of MySQL), the existing Sysdig auditing data will be dropped when performing the upgrade from 3.x to 4.0 on-premise version. The audit data is not migrated due to the potentially large size of the table, which could prolong the upgrade process. The data remains available in the MySQL database. If you require the data, do the following: Before upgrading, dump the audit_eventstable from MySQL. When the upgrade is completed, import the data back to the new database if you desire. Contact your Sysdig contact for details on how to perform this operation. Sysdig Monitor Improved Alerts The Alert interface has been improved to allow faster browsing and easier management. For more information, see Alerts... Host Overview To complement Sysdig Kubernetes Overviews, Hosts Overview has been released. Host Overview provides a unified view of the performance and health of physical hosts in your infrastructure.Hosts Overview Sysdig Secure Serverless Agent Preview Feature The 1.0.x serverless agent is supported as a preview feature with Sysdig Platform 4.0. Note that there is no guarantee of forward or backwards compatibility with this preview release._13<< . Network Micro-Segmentation: Support for CronJobs, Weave, & Cilium CNIs The Sysdig Network Security Policy Tool has been upgraded to add support for CronJob pod Owners. . New Product: Rapid Response Rapid Response is an Endpoint Detection and Response (EDR) solution built for cloud-native workloads, which gives security engineers the ability to respond to incidents directly via a remote shell. The shell uses the underlying host tooling already installed, such as kubectl, Docker commands, cloud CLIs, etc. Users can also mount their own scripts to use any familiar tooling. Rapid Response requires a component installed on the host machine. This component provides end-to-end encrypted communication using a passphrase only your team knows. The Rapid Response feature is disabled by default and can only be accessed to teams that have the feature enabled. Admins can see all user activity, including access to audit logs, and can initiate a rapid response session. Advanced users can view only their own user activity, including their audit logs, and can initiate a rapid response session. See also: Rapid Response: Installation and Rapid Response_15<<_16<<) CIS AWS Cloud Benchmark Released_17<< See also: AWS Foundations Benchmarks._18<< See also: Event Forwarding.. Improved UI for New Users We have added introductory splash screens throughout the product to help you get started when using a feature for the first time. UI Improvement on Rules Library and Rule Details Usability improvement so you can see in which policies a rule is used, from both the Rules Library list and the Rule Detail view. See Manage Rules for details. Deprecation Notice: Legacy Commands Audit & Legacy Policy events The “Commands Audit” feature was deprecated in favor of Activity Audit in November 2019. This feature will be completely removed from the On-prem distribution in version 4.1. Sysdig agent version 9.5.0+, released in January 2020, is required by the Activity Audit feature. The “Policy Events” feature was deprecated in favor of the new Events feed in June 2020. This feature will be completely removed from the On-prem distribution in version 4.1. Sysdig agent version 10.3.0+ is recommended._19<< See Perform Inline Malware Scanning for recommended parameters and output options. Release 3.6.2 December 14, 2020 This release contains bug fixes and minor improvements. Upgrade Process Supported Upgrade From: 3.2.2, 3.5.1, (3.6.0 or 3.6.1 if it was installed) For the full supportability matrix see the GitHub documentation. Bug Fixes Fixed email notifications error In some cases, including alerts with very large scopes and some others, email notifications were not sent due to a bug in the email renderer. This issue has been fixed. Fixed Kubernetes metadata display delay In 3.6.0 and 3.6.1 releases, upon connecting an agent, it would take 1h for Kubernetes metadata to appear. With this bug fixed, the metadata is displayed a couple of minutes after connecting the agent. Fixed dashboard display error when switching teams When the user switched teams, the dashboard menu was not displayed and required the user to reload the application. This has been fixed. Improvements to the security setup of our Intercom integrations We have improved the security of the Sysdig Intercom integration, as in some cases, the conversations could leak between different users. Fix to Activity Audit Janitor Fixed an Activity Audit Janitor error that stopped the AA clean-up process when a particular set of Sysdig Secure features were not enabled. Improvements Increased Decimal Precision from 4 to 6 With this release, we increased the decimal precision from 4 to 6 decimal places. This feature is mostly useful for customers using Prometheus metrics, as by convention, the metrics for time are given in seconds in Prometheus exporters, which does not work well for low numbers (for example - latencies in microseconds)._20<< See also: Event Forwarding. Release 3.5.3 December 14, 2020 (Replicated Only) This release is a bug fix only release. Upgrade Process Sysdig Platform v 3.5.3 has been tested and qualified against the same components as in v. 3.5.1. Supported Upgrade from: 3.5.1, 3.2.x, 3.0 Bug Fixes Sysdig Platform Fixed email notifications error In some cases, including alerts with very large scopes and some others, email notifications were not sent due to a bug in the email renderer. This issue has been fixed. Improvements to the security setup of our Intercom integrations We have improved the security of the Sysdig Intercom integration, as in some cases, the conversations could leak between different users. Sysdig Secure Events Forwarder improvement Fixed a crash condition in the Events Forwarder service stemming from a microservices connectivity issue. Release 3.6.1 November 23, Secure The following improvements were introduced in release 3.6.1: Node Image Analyzer: Scan “Repo-less” Images Added support to scan images that lack a Repo tag, such as OpenShift 4.x distribution images. Audit Tap Forwarding: Fixed Splunk Event Timestamp Metadata The format of the “time” field included in the Splunk event metadata for forwarded Audit Tap events is now increased to millisecond granularity. Fixed False Positives on Java Libraries Related to log4j Fixed an issue that resulted in log4j-jboss-logmanager and log4j-1.2-api being incorrectly detected as log4j, possibly generating vulnerability false positives. NOTE: Inline Scanner v2.1 Inline Scanner v2.1 has been released. This component is independent of the Sysdig Platform version you are running–it can be used with Sysdig On-Prem version 3.6.1 and with earlier versions. Inline Scanner 2.1 includes the following enhancements: NEW Added ability to analyze scratch-based images FIXES Fixed a bug retrieving the PDF output for previously- scanned images Addressed several vulnerabilities found in the inline scanner container See also: Integrate with CI/CD Tools. Release 3.6.0 November 10, Platform Interactive Session Expiration Installation-Wide With this release, you can define a period of interactive-session expiration, so that when a user is idle for a defined period of time, the session terminates. This helps enterprises with strict security and compliance requirements comply with relevant security controls, such as NIST or PCI-DSS 8.1.8 . Currently, this feature is available for on-premises only and is configured per installation. See also: Configure Interactive Session Expiration. Minor Enhancements and Fixes around Users and Teams Team Search Available when Switching Teams You can now search for Teams on the Team Switcher. This feature is especially handy for Admins who are members of many teams. See also: Switching Teams in the UI. User search now supports many more users With this release, we have enhanced the performance for listing and search for users on both Settings>Usersand Settings>Teamspages. We now support tens of thousands of users comfortably. LDAP: Search for users by both username and email address For enterprises using LDAP, this release enables search on both username and user email address in the Settings > Usersand Settings > Teamspages. Users are listed by name but can be searched by email as well. LDAP: Default team role respected This fix ensures that when LDAP users are created upon login, the default user role for the team is respected.. Sysdig Secure_21<< This feature is a beta release. A Sysdig Secure admin must enable it from the Sysdig Labs interface under Settings. . Replacing RHSA Advisories with CVE Advisories In new images scanned, RHSA advisories will be replaced with CVE advisories. Benchmarks support for Kubernetes Benchmark 1.6 Kubernetes Bench upgraded to version 1.6 Using the Kubernetes benchmark, we now provide customer-selected benchmark checks for GKE and EKS (rather than just the Kubernetes default). Vulnerability Exceptions Handling Enhanced The Vulnerability Exceptions feature in Sysdig Secure has been redesigned and enhanced. . Event Forwarding: Kafka and Webhook Added Two new supported integrations have been added to the Sysdig Secure Event Forwarder: _25<< Captures Filter on the Policies Page Policies can now be filtered to display if a capture is associated with an active or inactive policy. _27<< Image Scan Results Page Redesigned to Improve Load Times & User Experience The user interface is cleaned up, reorganized, and provides the following functional improvements: Load times are significantly decreased because the last known evaluation for the image is automatically fetched View the latest evaluation time directly in the scan summary Evaluated at Use the new Re-evaluatebutton to fetch current data if desired View the image origin/reporting mechanism in the new “Added By” field. Possible values are: Sysdig Secure UI, Node Image Analyzer, API, Sysdig Inline Scanner, or Scanning alert. Copy the Image Digest and Image ID to the clipboard using a quick pop-up panel. Forwarding the Activity Audit Information The Sysdig Secure Event Forwarder has added support to forward Activity Audit data to external platforms. Sysdig Monitor Time Navigation in Events Feed You can now browse and find historic events easily by using time navigation. Zooming Out Dashboards You now have the ability to zoom out Dashboards. This feature doubles the selected timeframe for a better context surrounding a problem when troubleshooting an incident. Release 3.5.1 August 24, 2020 NOTE: Version 3.5.1 includes a fix for vulnerabilities that were detected in version 3.5.0. It is recommended to skip version 3.5.0 and install version 3.5.1 instead. As of this release, all on-premises installs and upgrades include oversight services from Sysdig support. Sysdig Platform has been tested and qualified against the following: *MySQL8: You can use MySQL8 for non-HA setups using the flag useMySQL8: true * Postgres: Upgrading to 3.5.0 will also involve an automatic Postgres version upgrade from 10.6.x to 12.x. Depending on your database size, the upgrade could take some time. See Postgres Version Update v10.x to 12.x for details. Related Documents Sysdig Platform Endpoint for Feeds Update Has Changed We no longer point to ancho.re for feeds update but to[](http://). This could require a change to your firewall rules, as an exception to your proxy for ancho.re would impact the feeds update. Sysdig Secure Note that the Secure Overview is not available with Replicated installations. New Sysdig Secure Overview Page The Sysdig Secure Overview provides an at-a-glance view of the critical areas of your security posture. .._32<< See also: Feeds Status. Secure Events Feed Overhaul The Events feed in Sysdig Secure (formerly called Policy Events) has been redesigned, both visually and functionally. Team, Role, and Channel Updates A variety of enhancements have been added to the team, role, and notification channel options. .. Menu Update The ordering of the side menu has been changed. _37<<_38<<_39<<_40<< See also: Vulnerability Databases Used._42<<_43<< Sysdig Monitor. Create a New Panel.. Sysdig Monitor Rebranding The Monitor app has been refreshed with new logos and icons. The navigation pane has been re-organized. The Explore tab is moved below Dashboards. The New Get Started Page The Get Started page provides the key steps to ensure that you are getting the most value out of Sysdig Monitor. We’ll update this page with new steps as we add new features to Sysdig Monitor. . Configurable Default Team Role You can now define the default user role to apply when a new member is added to the team. The Admin can change this default on a per-team basis. See also: Create a Team. Default Dashboards for Istio 1.5 Default dashboards (Overview and Services dashboards) are now available for Istio v1.5 in addition to the existing ones for Istio v1.0. Release 3.2.2, June 11, 2020 This is a hotfix release for Benchmarks. See Defect Fixes for details. Upgrade Process. Use of release 3.2.1-onprem requires first upgrading your Replicated Console to version 2.42.4 or newer. Release 3.2.0, March 04, 2020 Upgrade Process Data Retention for details.Data Retention_48<< See User and Team Administration for details._49<< See Review Vulnerability Summaries for details.Review Vulnerability Summ. Sysdig agent version 9.5.0+ is required to enable this new data source. You can now filter the activity (On-Prem) Configure Custom S3 Endpoint. Release 3.0.0, December 19, 2019 Upgrade Process Sysdig Platform has been tested and qualified against the following: Related Documents Sysdig Secure_51<<_52<<_53<< Activity Audit is a Preview Beta feature. Contact your customer success manager to learn more about rolling out this feature._54<<. Please contact your Sysdig Technical Account Manager or email support to enable Overview for on-premises environments. Pre-Defined Dashboards_59<<.) 2.5.0-3.2.2 and Installer Upgrade (2.5.0+) for details.. _64<<. Units for Metrics The format of metric units are the same for the following: The CPU and Memory metrics for Host and Container. Kube-state CPU and Memory metrics. . Default Dashboard for Cluster and Node Capacity Kubernetes Cluster and Node Capacity Dashboard has been refreshed to add actual usage of CPU and Memory compared to Requests, Limits and Allocatable capacity. . Bug Fixes Export CSV/JSON was missing columns, not all data was exported as expected. All columns from the dashboard should exist in the exported output. All data and columns are is now exported as expected. Sysdig Secure_67<< Falco Lists Easily browse, append, and re-use lists to create new rules. Lists can also be updated directly via API if users want to add existing feeds of malicious domains, or IPs. Falco Macros Easily browse, append, and re-use macros to create new rules.. Sysdig Monitor Enhanced Dashboard Menu The Dashboard menu features a drawer-style popover that displays on-demand to provide maximum real estate for your Dashboards. The menu displays an alphabetical list of Dashboards you own and those shared by your team. With the popover menu, you can add new Dashboards and search for existing ones. Click a Dashboard name to access the relevant Dashboard page where you can continue with the regular Dashboard settings. _73<<. See Collecting Prometheus Metrics from Remote Hosts for details Enhancements to Kafka App Check Kafka integrations can now support authentication and SSL/TLS. If the. Sysdig Secure_74<< Release 2304 replaces version 2172 and 2266 which were released on May 28, 2019 and June 17, 2019. If you installed 2172 or 2266, upgrade to 2304. Upgrade Process Review the Migration Path tables in On-Premises Upgrades.: quay.io/sysdig/sysdigcloud-backend:2266-allinone-java quay.io/sysdig/sysdigcloud-backend:2266-nginx quay.io/sysdig/sysdigcloud-backend:2266-email-renderer ‘Notify’ ‘scope" Topology Views..Data Retention Sysdig Secure Global Whitelists Sysdig Secure allows users to manage CVEs and images that may impact builds by defining them as globally trusted or blacklisted. See Manage Vulnerability Exceptions and Global Lists This release supports upgrades from: 987, 1149, 1245, 1402 (1511), 1586 Upgrade Process for Sysdig in Kubernetes Environments. If you are upgrading from an earlier version of the agent, note that you must also download the latest sysdig-agent-daemonset-v2.yamlf This release supports upgrades from: 1149. 1245, 1402, 1511, and 1586. Performance Issues A performance issue was found when creating snapshots for large number of teams and large number of custom metrics. This issue has been fixed. Release 1586, January 21, 2019 (Benchmarks) ‘%’ ‘Standard User’ role and RBAC changes Introduces new ‘Standard User’ role for developers that includes edit access to dashboards, alerts, events but NO access to Explore. Renames ‘Edit user’ role to ‘Advanced user’ and ‘Read only’ role to ‘View ‘warning’ . _96<<. ‘S ‘Advanced’ section of the ‘Settings’!. _99<<). ‘Compare to’ for number panels Metric number panels now feature a configurable ‘Compare. Public URL dashboards Ever want to share a killer dashboard with a colleague who is not a Sysdig Monitor user? Now you can! Just pick, click, and send your URL. Team Manager role We’ve introduced a new ‘Team ‘Read user’ or ‘Edit ‘Teams’ and ‘User’
https://docs.sysdig.com/en/docs/release-notes/sysdig-on-premises-release-notes/
2022-06-25T11:10:10
CC-MAIN-2022-27
1656103034930.3
[array(['/image/webhook-channel.png', None], dtype=object) array(['/image/customized-session-expiration.png', None], dtype=object) array(['/image/workload-label.png', None], dtype=object) array(['/image/insights_launch4.gif', None], dtype=object) array(['/image/insight_tune_2.png', None], dtype=object) array(['/image/insight_tune3.png', None], dtype=object) array(['/image/scan_results_summary.png', None], dtype=object) array(['/image/netsec_rn.png', None], dtype=object) array(['/image/deprecated_alert.png', None], dtype=object) array(['/image/on-prem-license-info.png', None], dtype=object) array(['/image/alert_revamp.gif', None], dtype=object) array(['/image/explore_ui.gif', None], dtype=object) array(['/image/groupings.gif', None], dtype=object) array(['/image/serverless_arch_2__1_.jpg', None], dtype=object) array(['/image/cronjob.png', None], dtype=object) array(['/image/scan_reports_edit.png', None], dtype=object) array(['/image/ac_assignments.png', None], dtype=object) array(['/image/aws_bench2_1.jpg', None], dtype=object) array(['/image/new_events_json.png', None], dtype=object) array(['/image/malware.png', None], dtype=object) array(['/image/new_events_json.png', None], dtype=object) array(['/image/compliance_1.png', None], dtype=object) array(['/image/labs.png', None], dtype=object) array(['/image/vuln_except.png', None], dtype=object) array(['/image/event_forward_2.gif', None], dtype=object) array(['/image/image_exclude.png', None], dtype=object) array(['/image/cap_filter.png', None], dtype=object) array(['/image/capture_inspect.png', None], dtype=object) array(['/image/scan_results_1.jpg', None], dtype=object) array(['/image/forward_audit.png', None], dtype=object) array(['/image/secure_overview_beta.png', None], dtype=object) array(['/image/get_started_1.png', None], dtype=object) array(['/image/feeds_status.png', None], dtype=object) array(['/image/new-events-feed-4.png', None], dtype=object) array(['/image/default_role.png', None], dtype=object) array(['/image/newruntime3.png', None], dtype=object) array(['/image/rules_library1_1.png', None], dtype=object) array(['/image/scan_menu_1.png', None], dtype=object) array(['/image/tekton-sysdig.png', None], dtype=object) array(['/image/ecrrn.png', None], dtype=object) array(['/image/vulndb.png', None], dtype=object) array(['/image/first.png', None], dtype=object) array(['/image/second.png', None], dtype=object) array(['/image/third.png', None], dtype=object) array(['/image/rn_new_dashboard.png', None], dtype=object) array(['/image/monitor_app.png', None], dtype=object) array(['/image/get-started-monitor_1.png', None], dtype=object) array(['/image/screen_shot_2020-02-06_at_1_07_50_pm.png', None], dtype=object) array(['/image/rbac.png', None], dtype=object) array(['/image/vuln-comparison__1_.gif', None], dtype=object) array(['/image/captures_redesigned_rn.png', None], dtype=object) array(['/image/rn1.png', None], dtype=object) array(['/image/rn2.png', None], dtype=object) array(['/image/rn3.png', None], dtype=object) array(['/image/rn4.png', None], dtype=object) array(['/image/rn5.png', None], dtype=object) array(['/image/rn6.png', None], dtype=object) array(['/image/file_attribs.png', None], dtype=object) array(['/image/overview_ui_1.png', 'Cluster Overview'], dtype=object) array(['/image/event_scope_dashboard.gif', 'Filter Events by Scope in Dashboards'], dtype=object) array(['/image/fav_star.png', None], dtype=object) array(['/image/fav_dash.png', None], dtype=object) array(['/image/package-reports.gif', None], dtype=object) array(['/image/trigger_params.png', None], dtype=object) array(['/image/securedash.png', None], dtype=object) array(['/image/384762126.png', None], dtype=object) array(['/image/384762121.gif', None], dtype=object) array(['/image/384762662.gif', None], dtype=object) array(['/image/384762656.png', None], dtype=object) array(['/image/384762650.png', None], dtype=object) array(['/image/384762644.png', None], dtype=object) array(['/image/384762638.png', None], dtype=object) array(['/image/384762987.gif', None], dtype=object) array(['/image/384762982.gif', None], dtype=object) array(['/image/384762977.png', None], dtype=object) array(['/image/384762972.png', None], dtype=object) array(['/image/384762967.png', None], dtype=object) array(['/image/360972316.png', None], dtype=object) array(['/image/360874010.png', None], dtype=object) array(['/image/360775705.png', None], dtype=object) array(['/image/334856230.png', None], dtype=object) array(['/image/329809925.png', None], dtype=object) array(['/image/384762992.png', None], dtype=object) array(['/image/313229658.gif', None], dtype=object) array(['/image/313229662.png', None], dtype=object) array(['/image/384763010.png', None], dtype=object) array(['/image/384763004.png', None], dtype=object) array(['/image/384762998.png', None], dtype=object) array(['/image/384763022.png', None], dtype=object) array(['/image/384762390.gif', None], dtype=object) array(['/image/384762376.png', None], dtype=object) array(['/image/384762369.png', None], dtype=object) array(['/image/384762362.gif', None], dtype=object) array(['/image/384762355.gif', None], dtype=object) array(['/image/384762348.gif', None], dtype=object) array(['/image/384762341.gif', None], dtype=object) array(['/image/384762334.png', None], dtype=object) array(['/image/5d7f8c8ad5aa3.png', None], dtype=object) array(['/image/384762292.gif', None], dtype=object) array(['/image/384762334.png', None], dtype=object) array(['/image/384762327.png', None], dtype=object) array(['/image/384762320.png', None], dtype=object) array(['/image/384762313.png', None], dtype=object) array(['/image/384762215.png', None], dtype=object) array(['/image/384762306.png', None], dtype=object) array(['/image/384762299.png', None], dtype=object) array(['/image/384762285.gif', None], dtype=object) array(['/image/384762278.png', None], dtype=object) array(['/image/384762264.png', None], dtype=object) array(['/image/384762271.png', None], dtype=object) array(['/image/384762257.gif', None], dtype=object) array(['/image/384762250.png', None], dtype=object) array(['/image/384762243.gif', None], dtype=object) array(['/image/384762236.png', None], dtype=object) array(['/image/384762229.png', None], dtype=object) array(['/image/384763016.png', None], dtype=object)]
docs.sysdig.com
This article explains how to add or remove user access to your Celigo integrator.io account. Only an account owner or administrator can add users, remove users, or change user permissions. Note: Account owners can't permanently delete their own accounts. Account owners can delete all flows and remove all users from the account. There is no way to fully delete an account owner's account. Invite a user to your account - Sign into integrator.io and click the avatar icon in the upper right. - Click My account (or My profile if you are an administrator). - Select the Users tab, and click + Invite user. - Enter the new user’s email address, and configure the permissions. - Click Save. The user receives an email with a confirmation link. Remove a user - Log in to integrator.io and click the avatar icon in the upper right. - Click My account (or My profile if you are an administrator). - Select the Users tab. - Click the Actions overflow menu next to the user and select Remove user from account. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360053966912-Add-or-remove-integrator-io-account-users
2022-06-25T11:45:22
CC-MAIN-2022-27
1656103034930.3
[array(['/hc/article_attachments/360086745532/profile.png', 'profile.png'], dtype=object) array(['/hc/article_attachments/360086827691/UsersTab.png', 'UsersTab.png'], dtype=object) array(['/hc/article_attachments/360086745532/profile.png', 'profile.png'], dtype=object) array(['/hc/article_attachments/360086746312/UsersTabActions.png', 'UsersTabActions.png'], dtype=object) ]
docs.celigo.com
Use a single COPY command to load from multiple files Amazon Redshift can automatically load in parallel from multiple compressed data files. However, if you use multiple concurrent COPY commands to load one table from multiple files, Amazon Redshift is forced to perform a serialized load. This type of load is much slower and requires a VACUUM process at the end if the table has a sort column defined. For more information about using COPY to load data in parallel, see Loading data from Amazon S3.
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-single-copy-command.html
2022-06-25T10:19:07
CC-MAIN-2022-27
1656103034930.3
[]
docs.aws.amazon.com
You can specify the ID pointing to the search field in the storefront. This is necessary if the value used in your storefront differs from the default one, which is search. First, you should know the ID used in your storefront: Go the storefront of your Magento store Click on the search field with the right mouse button, then click on ‘Inspect Element‘ in the context menu. You should see a developer console with a highlighted line, which should look similar to: <input id="mysearch" type="text" name="q" value="" class="input-text"/> The value stated in the ‘id’ attribute the one you need (‘mysearch’ in this example). Now, give this value to Searchanise: - Go to your Magento store admin panel - Go to System → Configuration - In the left pane, find SEARCHANISE → Settings and click on it - Enter the obtained id value in the field Search input HTML DOM ID - Click on Save Config
https://docs.searchanise.io/set-the-search-box-id-magento1/
2022-06-25T10:14:45
CC-MAIN-2022-27
1656103034930.3
[]
docs.searchanise.io
ContextMixin¶ Attributes A dictionary to include in the context. This is a convenient way of specifying some context in as_view(). Example usage: from django.views.generic import TemplateView TemplateView.as_view(extra_context={'title': 'Custom Title'}) Methods Returns a dictionary representing the template context. The keyword arguments provided will make up the returned context. Example usage: def get_context_data(self, **kwargs): context =. TemplateResponseMixin¶ 'text/html'.. The first template that is found will be used. The default implementation will return a list containing template_name (if it is specified).
https://django.readthedocs.io/en/latest/ref/class-based-views/mixins-simple.html
2022-09-25T08:51:06
CC-MAIN-2022-40
1664030334515.14
[]
django.readthedocs.io
2D Stabilization Panel¶ The purpose of this feature is to smooth out jerky camera handling on existing real-world footage. To activate the 2D stabilizer, you need to set the toggle in the panel, and additionally you need to enable Show Stable in the Clip Display pop-over. Then you’ll need to set up some tracking points to detect the image movements. The 2D Stabilization panel is used to define the data used for 2D stabilization of the shot. Several options are available in this panel: you may add a list of tracks to determine lateral image shifts and another list of tracks to determine tilting and zooming movements. Based on the average contribution of these tracks, a compensating movement is calculated and applied to each frame. When the footage includes panning and traveling movements, the stabilizer tends to push the image out of the visible area. This can be compensated by animating the parameters for the intentional, “expected” camera movement. Note To activate the 2D stabilizer, you need to set the toggle in the panel, and additionally you need to enable Show Stable in the Clip Display pop-over. Options¶ - Anchor Frame Reference point to anchor stabilization: other frames will be adjusted relative to this frame’s position, orientation and scale. You might want to select a frame number where your main subject is featured in an optimal way. - Stabilization Type - Rotation In addition to location, stabilizes detected rotation around the rotation pivot point, which is the weighted average of all location tracking points. - Scale Compensates any scale changes relative to center of rotation. - Tracks For Stabilization - Location List of tracks to be used to compensate for camera jumps, or location movement. - Rotation/Scale List of tracks to be used to compensate for camera tilts and scale changes. - Autoscale Finds smallest scale factor which, when applied to the footage, would eliminate all empty black borders near the image boundaries. - Max Limits the amount of automatic scaling. - Expected Position X/Y Known relative offset of original shot, will be subtracted, e.g. for panning shots. - Expected Rotation Rotation present on original shot, will be compensated, e.g. for deliberate tilting. - Expected Zoom Explicitly scale resulting frame to compensate zoom of original shot. - Influence The amount of transformation applied to the footage can be controlled. In some cases it is not necessary to fully compensate camera jumps. The amount of stabilization applied to the footage can be controlled. In some cases you may not want to fully compensate some of the camera’s jumps. Please note that these “* Influence” parameters do control only the compensation movements calculated by the stabilizer, not the deliberate movements added through the “Expected *”-parameters. - Interpolate The stabilizer calculates compensation movements with sub-pixel accuracy. Consequently, a resulting image pixel needs to be derived from several adjacent source footage pixels. Unfortunately, any interpolation causes some minor degree of softening and loss of image quality. - Nearest No interpolation, uses nearest neighboring pixel. No interpolation, use nearest neighboring pixel. This setting basically retains the original image’s sharpness. The downside is we also retain residual movement below the size of one pixel, and compensation movements are done in 1 pixel steps, which might be noticeable as irregular jumps. - Bilinear Simple linear interpolation between adjacent pixels. - Bicubic Highest quality interpolation, most expensive to calculate.
https://docs.blender.org/manual/en/2.82/movie_clip/tracking/clip/properties/stabilization/panel.html
2022-09-25T08:07:40
CC-MAIN-2022-40
1664030334515.14
[]
docs.blender.org
Webhooks NetBox can be configured to transmit outgoing webhooks to remote systems in response to internal object changes. The receiver can act on the data in these webhook messages to perform related tasks. For example, suppose you want to automatically configure a monitoring system to start monitoring a device when its operational status is changed to active, and remove it from monitoring for any other status. You can create a webhook in NetBox for the device model and craft its content and destination URL to effect the desired change on the receiving system. Webhooks will be sent automatically by NetBox whenever the configured constraints are met. Each webhook must be associated with at least one NetBox object type and at least one event (create, update, or delete). Users can specify the receiver URL, HTTP request type ( GET, POST, etc.), content type, and headers. A request body can also be specified; if left blank, this will default to a serialized representation of the affected object. Security Notice Webhooks support the inclusion of user-submitted code to generate the URL, custom headers, and payloads, which may pose security risks under certain conditions. Only grant permission to create or modify webhooks to trusted users. as).
https://docs.netbox.dev/en/stable/integrations/webhooks/
2022-09-25T09:11:17
CC-MAIN-2022-40
1664030334515.14
[]
docs.netbox.dev
Users API Keeping your user profile up to date helps SendGrid verify who you are and share important communications with you. You can learn more in the SendGrid Account Details documentation. Get a user's account information. GET /v3/user/account Base url: This endpoint allows you to retrieve your user account details. Your user's account information includes the user's account type and reputation. Authentication - API Key Headers Responses The type of account for this user.Allowed Values: free, paid The sender reputation for this.
https://docs.sendgrid.com/api-reference/users-api/get-a-users-account-information
2022-09-25T08:23:35
CC-MAIN-2022-40
1664030334515.14
[]
docs.sendgrid.com
Excerpt from the essay “Dalit lives” – EV]. (An excerpt from Leven als Dalit | Dalit lives)
https://www.titojoe-docs.nl/dalit-lives/excerpt-from-the-essay-dalit-lives/
2019-06-15T23:10:58
CC-MAIN-2019-26
1560627997501.61
[]
www.titojoe-docs.nl
- Informix 4GL Rapid Development System - Publisher Page - IBM - Category - Application Development Software - Release - TKU 2019-04-1 - Change History - IBM Informix 4GL Rapid Development System - Change History - Reports & Attributes - IBM Informix 4GL Rapid Development System - Reports & Attributes - Publisher Link - IBM Product Description Informix-4GL is a 4GL programming language developed by Informix during the mid-1980s. It includes embedded SQL, a report writer language, a form language, and a limited set of imperative capabilities (functions, if and while statements, and supports arrays etc.). The language is particularly close to a natural language and is easy to learn and use. It has two versions of compiler which either produce 1) intermediate byte code for an interpreter (known as the rapid development system), or 2) C Programming Language code for compilation with a C compiler into machine-code (which executes faster, but compiles slower, and executables are bigger). It is specifically designed to run as a client on a network, connected to an IBM Informix database engine service. It has a mechanism for calling C Programming Language functions and conversely, to be called from executing C programs. The RDS version also features an interactive debugger for Dumb terminals. A particular feature is the comprehensive error checking which is built into the final executable and the extremely helpful error messages produced by both compilers and executables. It also features embedded modal statements for changing compiler and executable behaviour (e.g. causing the compiler to include memory structures matching database schema structures and elements, or to continue executing in-spite of error conditions, which can be trapped later on).
https://docs.bmc.com/docs/display/Configipedia/IBM+Informix+4GL+Rapid+Development+System
2019-06-15T23:56:13
CC-MAIN-2019-26
1560627997501.61
[]
docs.bmc.com
DTR architectureEstimated reading time: 3 minutes These are the docs for DTR version 2.0 To select a different version, use the selector below. Docker Trusted Registry (DTR) is a Dockerized application that runs on a Docker Universal Control Plane cluster. Containers When you install DTR on a node, the following containers are started: Networks To allow containers to communicate, when installing DTR the following networks are created: The communication between all DTR components is secured using TLS. Also, when installing DTR, two Certificate Authorities (CAs) are created. These CAs are used to create the certificates used by Etcd and RethinkDB when communicating across nodes. Volumes DTR uses these named volumes for persisting data: If you don’t create these volumes, when installing DTR they are created with the default volume driver and flags. Image storage By default, Docker Trusted Registry stores images on the filesystem of the host where it is running. You can also configure DTR to using these cloud storage backends: - Amazon S3 - OpenStack Swift - Microsoft Azure For highly available installations, configure DTR to use a cloud storage backend or a network filesystem like NFS. High-availability support For load balancing and high-availability, you can install multiple replicas of DTR, and join them to create a cluster. Learn more about high availability.
https://docs.docker.com/v17.12/datacenter/dtr/2.0/architecture/
2019-06-15T22:35:38
CC-MAIN-2019-26
1560627997501.61
[array(['images/architecture-1.png', None], dtype=object)]
docs.docker.com
tns device ios Description Lists all recognized connected iOS devices with serial number and index. WARNING: You can run this command only on Windows and macOS systems. To view the complete help for this command, run $ tns help device ios Commands Options --available-devices- Lists all available emulators for iOS. --timeout- Sets the time in milliseconds for the operation to search for connected devices before completing. If not set, the default value is 4000. The operation will continue to wait and listen for newly connected devices and will list them after the specified time expires. Command Limitations - You can run $ tns device ioson Windows and OS X systems.
https://docs.nativescript.org/tooling/docs-cli/device/device-ios
2019-06-15T22:30:12
CC-MAIN-2019-26
1560627997501.61
[]
docs.nativescript.org
Reconciling form, field, and view objects using the objects list If you know the objects that are modified in the new definition or you want to compare your overlay objects manually with the old definition and new definition, you can directly open the Objects list from All Objects and reconcile them. You can open a single instance of the form object to reconcile form, field, and view properties. When you open any form, field, or view object for reconciliation, the properties for other other objects are also displayed for reconciliation. For example, when you open a form for reconciliation, you can view all the fields and views on that form which needs to be reconciled. When you double-click any property from the Difference list, you either see the Editor tab or Properties tab for reconciliation based on the type of object. - In the AR System Navigator, open the Objects list for the object which you want to reconcile. For example, Forms, Fields. - Find the object from the objects list. - Right click the object and click View Differences with. - Select the server with which you want to compare your definition. - The Differences List with all the properties that need to be reconciled is displayed. For information about the icons in the Differences List, see Reviewing the changes using the Differences List in BMC Remedy ITSM Deployment online documentation. - Double-click the property for which you want to see the differences in 3-way reconciliation utility. You must either use the Editor tab or Properties tab for 3-way reconciliation. The Editor tab displays the list of definitions and the Properties tab displays the list of properties. - Check and reconcile the properties by replicating them from new definition to overlay definition. You can reconcile your properties using the following options: - Click Move from Oldto replace your overlay object property with the old object property. Click Move from Newto replace your overlay object property with the new object property introduced after upgrade. Note You can use the above sub steps only for specific node level only. - Right-click to use the Copy and Paste option only where you want to directly replace the overlay object property with the new object property. You can use this option only for the entire section. For example, you cannot copy individual check box from new definition and paste it in the overlay definition. For more information about existing copy and paste feature in BMC Remedy Developer Studio, see Saving and copying objects. - Right-click the panel that you want to reconcile from the New Definition and select Copy. Right-click the same panel in the overlay definition and select Paste. The existing overlay object is updated to replace the old panel with the changes from new panel. Note For lists, permissions, and associated forms, existing overlay object is updated to merge the changes from new definition without losing the customizations. Edit the changes from new definition property into your overlay definition property. You must compare your new definition property with the overlay definition property and modify your overlay object manually to reconcile the changes. Note For complex properties, you must verify the property values in new definition and reconcile the changes in your overlay definition by editing the property inline. After updating the properties, click Apply to apply the changes to your overlay. If you go the next difference without applying the changes, your changes are lost. Also, you must save the entire object to save the changes. Click the Next Difference and Previous Difference icons to navigate through the Differences List instead of scrolling through the list. You can also go to the top of the list by using First Diff. Note You can also click the Next Field Difference and Previous Field Difference icons to navigate from one field difference to another field difference in the list. - Copy or edit all the properties for the object as mentioned in step 7. - Save the changes to complete reconciliation. - Yes — the Reconciled check box for the selected fields is selected in the Objects to Reconcile list indicating that the object is reconciled. - No — the Reconciled check box is not selected for any field in the Objects to Reconcile list indicating that you have not completed reconciliation. Important After saving the changes and closing the object, when you open the object again from the list, the Objects to reconcile list does not display the properties which are already reconciled. When you turn the Only show changes to reconcile filter OFF, you will see all the changes including the ones which are already reconciled. Reconciling objects during upgrade If you are upgrading your AR System and see the list of AR customizations that need to be reconciled, see Performing AR reconciliation in BMC Remedy ITSM Deployment online documentation.
https://docs.bmc.com/docs/ars1805/reconciling-form-field-and-view-objects-using-the-objects-list-804715813.html
2019-06-16T00:01:12
CC-MAIN-2019-26
1560627997501.61
[]
docs.bmc.com
Message-ID: <515814389.140557.1560640783270.JavaMail.confluence@docs> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_140556_520466682.1560640783269" ------=_Part_140556_520466682.1560640783269 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: In this UI Quick Start we will learn how to create a basic character and= apply an animation! After learning thes= e steps and completing the task you can then go on to more difficult topics= like Blendspaces or Mannequin for player functionality. The UI system is your gateway to both the HUD's and menus that you c= reate for your game. By default we use Scaleform for our UI creation and ex= ecute this through Adobe Flash. Within the overview you will find topics on= localization of fonts and text to adhere to the different regions you will= service your game to. Most of the setup for UI's is contained within Flow Graph and is exe= cuted as level scripting logic. This is why you will want to examine the in= terface of Flow Graph closely to understand how you can create complex UI's= without touching code..
https://docs.cryengine.com/exportword?pageId=26872538
2019-06-15T23:19:43
CC-MAIN-2019-26
1560627997501.61
[]
docs.cryengine.com
Pub/Sub (Publish and Subscribe)Pub/Sub (Publish and Subscribe) In a publish and subscribe system, events are written to a stream by one endpoint and can be reacted to by any number of other endpoints, or even by the endpoint that wrote the event. Loose CouplingLoose Coupling Pub/Sub pattern is a messaging pattern that enables a loose coupling of different parts of a system, allowing individual parts to be changed independently without concerns for other parts being affected by the change. Autonomous ServicesAutonomous Services Pub/Sub is the essential pattern that allows services to be autonomous. Without the level of decoupling provided by pub/sub, services cannot be taken offline independent of the services they interact with. An outage in one service would cause outages in the other services. Without pub/sub, there is no possibility of autonomy. And without autonomy, the use of the term service or microservice to describe an architecture is largely a mistake. A service architecture without pub/sub is arguably not a service architecture, and is more appropriately called a distributed monolith. Message ContractsMessage Contracts As is the case with all messaging patterns, as long as the structure and content of the messages flowing between parts of a system do not change, individual parts are free to change without regard for other parts of the system. This is why is critical to pay significant attention to getting the message schemas right so that they do not need to change later due to a need to correct oversights. The schemas of the messages that are exchanged between parts of a system are the contracts that each subsystem or service agree to respect. Any unplanned changes made to the messages schemas breaks the contract and obviates any expectation that the parts of the system will continue to function correctly. Event SourcingEvent Sourcing Because a system based on pub/sub provides ample opportunity to simplify application data storage by using event sourcing. Because there is pub/sub, there is events. And once a system is based on events, event sourcing can be leveraged. WARNING Pub/sub only applies to events and event streams/categories. It's not a pattern that should be used for commands and command streams/categories.
http://docs.eventide-project.org/core-concepts/pub-sub.html
2019-06-15T23:42:33
CC-MAIN-2019-26
1560627997501.61
[array(['/assets/img/pub-sub.4de9c527.png', 'Publish and Subscribe'], dtype=object) ]
docs.eventide-project.org
All content with label amazon+cluster+faq+infinispan+interactive., write_behind, ec2, eap, 缓存, eap6, hibernate, aws, getting_started, interface, clustering, setup, eviction, gridfs, out_of_memory, mod_jk, concurrency, jboss_cache, import, events, l, configuration, hash_function, batch, buddy_replication, loader, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, ssl, cachestore, data_grid, cacheloader, hibernate_search, development, websocket, transaction, async, xaresource, build, domain, searchable, subsystem, demo, installation, scala, mod_cluster, client, as7, non-blocking, migration, filesystem, jpa, http, tx, gui_demo, eventing, client_server, infinispan_user_guide, standalone, hotrod, ejb, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, 2lcache, as5, jgroups, locking, rest, hot_rod more » ( - amazon, - cluster, - faq, - infinispan, - interactive ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+cluster+faq+infinispan+interactive
2019-06-15T23:38:45
CC-MAIN-2019-26
1560627997501.61
[]
docs.jboss.org
All content with label deadlock+examples+gridfs+hotrod+infinispan+installation+s. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, archetype, lock_striping, jbossas, nexus, guide, listener, cache, amazon, s3, grid, memcached, test, jcache, api, xsd, ehcache, maven, documentation, jboss, wcm, write_behind, ec2, 缓存, hibernate, aws, getting, getting_started, interface, clustering, setup, eviction, ls, out_of_memory, concurrency, jboss_cache, import, events, configuration, hash_function, buddy_replication, loader, xa, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, read_committed, xml, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, demo, cache_server, scala, client, as7, migration, filesystem, jpa, tx, gui_demo, eventing, client_server, testng, murmurhash, infinispan_user_guide, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, locking, favourite, rest, hot_rod more » ( - deadlock, - examples, - gridfs, - hotrod, - infinispan, - installation, - s ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/deadlock+examples+gridfs+hotrod+infinispan+installation+s
2019-06-15T23:11:02
CC-MAIN-2019-26
1560627997501.61
[]
docs.jboss.org
All content with label expiration+grid+hot_rod+infinispan+jboss_cache+listener+read_committed+release+scala+xml. Related Labels: podcast, publish, datagrid, coherence, interceptor, server, replication,, cloud, mvcc, tutorial, notification, presentation, jbosscache3x, distribution, jira, cachestore, data_grid, cacheloader, hibernate_search, cluster, development, br, websocket, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, client, non-blocking, migration, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, infinispan_user_guide, standalone, hotrod, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, store, whitepaper, jta, faq, 2lcache, as5, jgroups, locking, rest more » ( - expiration, - grid, - hot_rod, - infinispan, - jboss_cache, - listener, - read_committed, - release, - scala, - xml ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/expiration+grid+hot_rod+infinispan+jboss_cache+listener+read_committed+release+scala+xml
2019-06-15T23:31:41
CC-MAIN-2019-26
1560627997501.61
[]
docs.jboss.org
This release is focused on bug fixes and performance improvements. Search the console within Internet Explorer Audit & Reporting visual issues Inviting users from another enterprise account caused confusion Reduce the number of Audit Event alerts on multi-record operations German translations missing in several screens Subnode beneath the root node is being highlighted by default when logging in
https://docs.keeper.io/release-notes/desktop-platforms/admin-console/admin-console-14.0.2
2019-06-15T22:30:39
CC-MAIN-2019-26
1560627997501.61
[]
docs.keeper.io
Contents Now Platform Custom Business Applications Previous Topic Next Topic Script Debugger multiple developer support Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Script Debugger multiple developer support The Script Debugger allows multiple developers to debug their own transactions without affecting each other. The Script Debugger only allows developers to see and interact with items related to their current debugging session such as: Breakpoints Call stack Transactions Status The Script Debugger prevents one developer from seeing or modifying another debug session. Administrators, however, can impersonate another user, open the Script Debugger, and debug transactions generated by the impersonated user. The Script Debugger displays the debug session user at the bottom left of the user interface. Figure 1. Sample Script Debugger user Concurrent Script Debugger usage By default, the system supports debugging [(The number of semaphores on the instance) / 4] concurrent transactions. Administrators can specify the number of concurrent transactions the system can debug by setting the glide.debugger.config.max_node_concurrency system property. The system can debug up to [(The number of semaphores on the instance) - 2] concurrent transactions. Administration of debugging sessions Debugging sessions can remain actively debugging (in the EXECUTION_PAUSED or WAITING_FOR_BREAKPOINT statuses) until: The user pauses the Script Debugger. The user closes the Script Debugger. The user session ends. Administrators can view the currently running debugger sessions by navigating to the page xmlstats.do. Administrators can stop all currently running debugging sessions by navigating to the page debugger_reset.do. Only users with the admin role can access this page. Related ConceptsScript Debugger impersonation support On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/istanbul-application-development/page/script/debugging/concept/multiple-developer-support.html
2019-06-15T23:06:02
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
Kotyle — measures vessel capacity¶ K. Kotyle can be downloaded from the Mercurial repository. For a quick start, look at the GIMP plugins and Μετρω, the Kotyle measurement tool pages. Contents¶ - Introduction, or why would you want to know capacity - Describing the profile of a vessel in a digital format - GIMP plugins - Μετρω, the Kotyle measurement tool - Calculating the volume of a vessel - Calculating the surface of a vessel - Dealing with weight and density of ceramic vessels and sherds - Other programs that calculate vessel capacity Indices and tables¶ Logo: Kotyle with satyr, from W. Lamb, Seven vases from the Hope collection, in Journal of Hellenic Studies 38, 1918.
https://kotyle.readthedocs.io/en/latest/
2019-06-15T22:47:31
CC-MAIN-2019-26
1560627997501.61
[]
kotyle.readthedocs.io
On-Demand Capacity Reservations On-Demand Capacity Reservations enable. When you no longer need the reservation, cancel the Capacity Reservation to stop incurring charges for it. When you create a Capacity Reservation, you specify the Availability Zone in which you want to reserve the capacity, the number of instances for which you want to reserve capacity, and the instance attributes, including the instance type, tenancy, and platform/OS. Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don't have any running instances that match the attributes of the Capacity Reservation, it remains unused until you launch an instance with matching attributes. In addition, you can use your Regional RIs with your Capacity Reservations to benefit from billing discounts. This gives you the flexibility to selectively add capacity reservations and still get the Regional RI discounts for that usage. AWS automatically applies your RI discount when the attributes of a Capacity Reservation match the attributes of an active Regional RI. Contents Differences between Capacity Reservations and RIs The following table highlights some key differences between Capacity Reservations and RIs: Capacity Reservation Limits The number of instances for which you are allowed to reserve capacity is based on your account's On-Demand Instance limit. You can reserve capacity for as many instances as that limit allows, minus the number of instances that are already running. Capacity Reservation Limitations and Restrictions Before you create Capacity Reservations, take note of the following limitations and restrictions. Active and unused Capacity Reservations count towards your On-Demand Instance limits Capacity Reservations can't be shared across AWS accounts Capacity Reservations are not transferable from one AWS account to another Zonal RI billing discounts do not apply to Capacity Reservations Capacity Reservations can't be created in Placement Groups Capacity Reservations can't be used with Dedicated Hosts
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
2019-06-15T23:03:58
CC-MAIN-2019-26
1560627997501.61
[]
docs.aws.amazon.com
For Agent 6, there are differences in hostname resolution. For more information, see differences in hostname resolution between Agent v5 and Agent v6. The Datadog Agent collects potential host names from many different sources. To see all the names the Agent is detecting, run the Agent status command. For example: $ sudo /etc/init.d/datadog-agent status ... primarily uses to identify itself to Datadog. The other names are submitted as well, but only as candidates for aliasing. The canonical hostname is picked according to the following rules. The first match is selected. If the name is recognized as obviously non-unique (like provided by an internal DNS server or a config-managed hosts file (myhost.mydomain). Datadog creates aliases for hostnames when there are multiple uniquely identifiable names for a single host. The names collected by the Agent (detailed above) are added as aliases for the chosen canonical name. See a list of all the hosts in your account from the Infrastructure List. See the list of aliases associated with each host in the Inspect panel, which is accessed by clicking the “Inspect” button while hovering over a host row:
https://docs.datadoghq.com/agent/faq/how-datadog-agent-determines-the-hostname/
2019-06-15T22:58:26
CC-MAIN-2019-26
1560627997501.61
[]
docs.datadoghq.com
Contents Now Platform Capabilities Previous Topic Next Topic Allocating users to a Per-User subscription Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Allocating users to a Per-User subscription You use the Subscriptions module to allocate users to all Per-User subscriptions that your organization has purchased. Example: allocate users to a Per-User subscriptionIn this example, your organization purchased a subscription for 200 fulfiller users. You, the usage admin, can therefore specify the 200 users—the subscribed users—who have the right to use the application family on your production instance. You can allocate users either individually or by adding user sets to the subscription.Note: User sets are the preferred automated method for managing the pool of users who are allocated to a subscription. Once you configure and allocate a user set, the system regularly updates the list of members based on the conditions that you specified. You do not need to manage individual users.Figure 1. Allocating users to a Per-User subscription Guidelines for allocating users You allocate users only to Per-User subscriptions. The instance auto-allocates and reports on monthly usage for all other subscription types. When you add a user set to a subscription, the system allocates all users in the user set, up to the purchased subscription limit. In the case that adding the users in a user set would exceed the subscription limit, then none of the users in the user set are subscribed. Instead, all users in the user set are set to the Pending state and are listed on the Pending Users related list. Users that are currently subscribed are not affected. See Manage users in the Pending state. You can unsubscribe any user as needed. You can exclude users. Excluded users cannot be allocated to a subscription individually or through a user set. At any time, you can remove a user from the list of excluded users. Note: If you unsubscribe a user who is a member of a user set that is associated with an auto-synced subscription, the user will be resubscribed in the next synchronization cycle. To remove such a user, exclude the user. See Exclude a user from a subscription. Methods for allocating users Define user sets and then add user sets to the subscription Allocate individual users to the subscription (as many as are required) Overview of the procedure for allocating users Allocate users to the subscription using one or both of the following methods: Build one or more user sets and add the user sets to the subscription Allocate individual users After you have allocated users to the subscription, you can add or remove user sets and individual users as needed. In addition, you can exclude individual users as needed. Subscription user setsUser sets make it easy to specify which users should be allocated to a subscription. You can use roles, user groups, or both roles and groups as the conditions for a user to be a member of a user set (for example, everyone with the fulfiller role in the "IT department" group). Allocate an individual user from the Subscription formIn addition to adding user sets to a subscription, you can add individual users directly. Users that you allocate directly are subscribed immediately.Allocate an individual user from the User Record formTo simplify the process of setting up user capabilities, you can allocate or deallocate an individual user to a subscription while viewing the user data on the User form. Users that you allocate directly are subscribed immediately.Manage users in the Pending stateUsers are set to the Pending state when adding the users would exceed the subscription limit (the Purchased value). Unsubscribe a userWhen you remove a user from a subscription, the user is unsubscribed.Exclude a user from a subscriptionYou can exclude users. Excluded users cannot be allocated to a specified subscription. The auto-sync process does not add excluded users.Remove a user from the Excluded listYou can unexclude a user so that the user can be allocated to a subscription. Related ReferenceTypes of subscriptions On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/subscription-management/concept/allocating-users.html
2019-06-15T23:11:11
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
The Kubernetes-native platform (v2). The Package manager for Kubernetes. The Kubernetes-native Service Broker. A common architecture pattern of multi-process applications is to have one process serve public requests while having multiple other processes supporting the public one to, for example, perform actions on a schedule or process work items from a queue. To implement this system of apps in Deis Workflow, set up the apps to communicate using DNS resolution, as shown above, and hide the supporting processes from public view by removing them from the Deis Workflow router. See Deis Blog: Private Applications on Workflow for more details, which walks through an example of removing an app from the router. Deis Workflow supports deploying a single app composed of a system of processes. Each Deis. Deis Workflow apps, then, can simply send requests to the domain name given to the service, which is "app-name.app-namespace".
https://docs.teamhephy.com/applications/inter-app-communication/
2019-06-15T23:40:30
CC-MAIN-2019-26
1560627997501.61
[]
docs.teamhephy.com
Chemical Summary for Lemon peel oil from > Chemical Search Toxicity of Lemon peel oil Toxicity Summary by Category PAN Bad Actor Acute Toxicity Carcinogen Cholinesterase Inhibitor Water Contaminant Developmental or Reproductive Toxin Endocrine Disruptor Not Listed No Identification and Use of Lemon peel oil Chemical Class and Use Type Chemical Class Oil - essential Use Type Unknown Other Names for this Chemical 040518 [US EPA PC Code, Text ]; 40518 [US EPA PC Code, Numeric ]; 8020-19-7 (CAS number); 8020197; 8020197 (CAS number without hyphens); Citrus lemon peel oil; Lemon peel oil; Lemonpeeloil; Oils, essential, lemon Additional Information for Lemon peel oil.
http://docs.pesticideinfo.org/Summary_Chemical.jsp?Rec_Id=PRI3842
2019-06-15T23:19:39
CC-MAIN-2019-26
1560627997501.61
[]
docs.pesticideinfo.org
Configure index storage You configure indexes in indexes.conf. How you edit indexes.conf depends on whether you're using index replication, also known as "clustered indexing": -". - For clustered indexes, edit a copy of indexes.confon the cluster master node and then distribute it to all the peer nodes, as described in "Configure the peer indexes". This table lists the key indexes.conf attributes affecting buckets and what they configure. It also provides links to other topics that show how to use these attributes. For the most detailed information on these attributes, as well as others, always refer to the indexes.conf spec file. Note: For non-clustered indexes only, you can use Splunk Manager to configure the path to your indexes. Go to Splunk Manager > System settings > General settings. Under the section Index settings, set the field Path to indexes. After doing this, you must restart Splunk from the CLI, not from within Manager.!
https://docs.splunk.com/Documentation/Splunk/5.0.18/Indexer/Configureindexstorage
2019-06-15T22:58:58
CC-MAIN-2019-26
1560627997501.61
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Deploying Applications "Application" (as used in the Mule ESB environment) packages one or many flows into a deployment unit that can be managed together. Flows that share a common development life cycle and/or share components are typically packaged in a single application. Use the Deployments Tab in the management console to perform application provisioning, including deploying, undeploying, and redeploying applications to specified target Mule ESB servers. You can also use the Deployments Tab to to deploy, undeploy, and redeploy applications to a cluster of Mule ESB servers (nodes). For further details, see Deploying, Redeploying, or Undeploying an Application To or From a Cluster. To provision an application, you create a new deployment. Deployments may contain one or more applications. This feature allows you to provision a group of flows simultaneously to any number of run time Mule ESB instances. For example, you could create a deployment and separately provision it to your staging and UAT (user acceptance testing) environments. Before deployment takes place, applications are automatically stored in the Mule Repository and versioned. Use the Deployments Tab in the management console to perform application provisioning, including deploying, undeploying, and redeploying applications to specified target servers. The console lets you create deployments, which are groups for organizing web applications for deployment to target servers. Applications can be added to deployment groups and kept in the repository, to be deployed (or undeployed) at some later time. You can also deploy applications at the same time you add them to the repository. Viewing and Managing Deployments The Deployments Tab provides two filtered views in the navigation tree: Deployments and Repository. Deployments lists all provisioned applications, the servers to which they are provisioned, and the current status of the deployment. Click the Deployments button in the navigation tree to view and manage deployments. Deployments also lists the clusters to which applications are provisioned, and the current status of the deployment. The Repository view shows all applications loaded into the repository and whether or not they have been provisioned. Once applications have been added to the repository, use the Repository button in the navigation tree to view and manage them. For how to work with the repository, see Maintaining the Server Application Repository. When you open the Applications screen, it displays a list of all deployments, as follows: Deployments to clusters are also displayed. To perform a deployment action on an existing deployment, locate it in the list and check the box to the left of the row. Then, select the deployment action you wish to perform. The available actions depend on the current state of the deployment. In the examples above, several deployments are selected. The appropriate buttons are highlighted for these selected deployments, depending on their status. Deployment Provisioning Actions The table below lists the different deployment actions you can perform and what the expected impact will be. Keep in mind that if during the deploy operation, one application fails to deploy, the entire deployment fails. Similarly, the undeploy and redeploy operations fail if just one application cannot be undeployed or redeployed. However, if a server in a deployment is down at the time of the operation, the operation (deploy, undeploy, or redeploy) continues and just skips the down server. Note, too, that the redeploy operation redeploys all applications in the deployment, regardless of whether the application is deployed or undeployed. Reconciled Deployments Deployments may appear as reconciled or not reconciled. When a deployment is marked as reconciled, it indicates that the applications in the deployment unit have all been successfully deployed to all servers specified for that deployment. On the other hand, when a deployment is marked as not reconciled, it indicates that at least one application in the deployment unit did not successfully deploy to at least one server specified for the deployment. It might also indicate that at least one of the servers for the deployment is currently down. Creating a New Deployment Group Click the New button to create a new deployment group and specify servers and applications for that group. The figure below shows the options you have for specifying applications and servers for a group. You can also specify a cluster for the group. You enter a name for the deployment, plus select the server or servers (or server groups) for the deployment. Selected servers and server groups appear in the box beneath Servers, and can be removed by clicking the red X to the right of the server name. You can also select a cluster for the deployment. You also add applications to the deployment, either by selecting them from the repository or uploading new applications. (See Maintaining the Server Application Repository for more information on the advanced options when adding applications to the repository.) Added applications appear listed in the box beneath Applications, and can be removed by clicking the red X to the right of the application name. Once you complete the deployment specification, click Deploy to save and deploy simultaneously. If you want to deploy the application later, just select Save and the application is saved in an Undeployed state. If you cancel without saving, the deployment is discarded. Changing Deployed Applications After you save a deployment, the same screen shows the current deployed status for the deployment (either deployed or undeployed) and lets you edit the deployment, such as add additional servers or applications. If you make any changes to the configuration of the deployment, you will need to redeploy for it to take effect. Open the deployments screen from the main Applications panel by clicking the deployment name in the deployments table. Depending on the status, you can use this screen to deploy, undeploy, or redeploy the group of applications. You can also deploy, undeploy, or redeploy the group of applications to a cluster.
https://docs.mulesoft.com/mule-management-console/v/3.5/deploying-applications
2017-04-23T09:56:06
CC-MAIN-2017-17
1492917118519.29
[array(['./_images/applications.png', 'applications'], dtype=object) array(['./_images/applications_cluster.png', 'applications_cluster'], dtype=object) array(['./_images/add-deployment.png', 'add-deployment'], dtype=object) array(['./_images/add-deployment_cluster.png', 'add-deployment_cluster'], dtype=object) array(['./_images/edit-deployment.png', 'edit-deployment'], dtype=object) array(['./_images/edit-deployment_cluster.png', 'edit-deployment_cluster'], dtype=object) ]
docs.mulesoft.com
WebServerURLPrefix Synopsis [Startup] WebServerURLPrefix=n n is an alphanumeric string to be used in a URL. As a guideline, WebServerURLPrefix should be shorter than 80 characters. The default is an empty string. Description WebServerURLPrefix is used by Studio when constructing URLs. This should match the CSP Server Instance setting. This is only one of the steps required to set up a remote web server to access one or more InterSystems IRIS instances. For details, see the “Connecting to Remote Servers” chapter in the System Administration Guide. Changing This Parameter On the Startup page of the Management Portal (System Administration > Configuration > Additional Settings > Startup), in the WebServerURLPrefix row, select Edit. Enter an InterSystems IRIS instance name. Instead of using the Management Portal, you can change WebServerURLPrefix in the Config.Startup Opens in a new window class (as described in the class reference) or by editing the CPF in a text editor (as described in the Editing the Active CPF section of the “Introduction to the Configuration Parameter File” chapter in this book).
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_WEBSERVERURLPREFIX
2021-06-12T18:52:07
CC-MAIN-2021-25
1623487586239.2
[]
docs.intersystems.com
Location Documentation Home Palo Alto Networks Support Live Community Knowledge Base Panorama Panorama Administrator's Guide Set Up Panorama Set Up Authentication Using Custom Certificates How Are SSL/TLS Connections Mutually Authenticated? Document: Panorama Administrator's Guide How Are SSL/TLS Connections Mutually Authenticated? Download PDF Last Updated: Mon May 24 16:03:08 PDT 2021 Current Version: 9.1 Version 10.1 Version 10.0 Version 9.1 Version 9.0 Version 8.1 Version 8.0 (EoL) How Are SSL/TLS Connections Mutually Authenticated? In a regular SSL connection, only the server needs to identify itself to the client by presenting its certificate. However, in mutual SSL authentication, the client presents its certificate to the server as well. Panorama, the primary Panorama HA peer, Log Collectors, WildFire appliances, and PAN-DB appliances can act as the server. Firewalls, Log Collectors, WildFire appliances, and the secondary Panorama HA peer can act as the client. The role that a device takes on depends the deployment. For example, in the diagram below, Panorama manages a number of firewalls and a collector group and acts as the server for the firewalls and Log Collectors. The Log Collector acts as the server to the firewalls that send logs to it. To deploy custom certificates for mutual authentication in your deployment, you need: SSL/TLS Service Profile —An SSL/TLS service profile defines the security of the connections by referencing your custom certificate and establishing the SSL/TLS protocol versions used by the server device to communicate with client devices. Server Certificate and Profile —Devices in the server role require a certificate and certificate profile to identify themselves to the client devices. You can deploy this certificate from your enterprise public key infrastructure (PKI), purchase one from a trusted third-party CA, or generate a self-signed certificate locally. The server certificate must include the IP address or FQDN of the device’s management interface in the certificate common name (CN) or Subject Alt Name. The client firewall or Log Collector matches the CN or Subject Alt Name in the certificate the server presents against the server’s IP address or FQDN to verify the server’s identity. Additionally, use the certificate profile to define certificate revocation status (OCSP/CRL) and the actions taken based on the revocation status. Client Certificates and Profile —Each managed device. The client certificate behavior also applies to Panorama HA peer connections. You can configure the client certificate and certificate profile on each client device or push the configuration from Panorama to each device as part of a template. SSL/TLS Authentication Recommended For You Recommended Videos Recommended videos not found.
https://docs.paloaltonetworks.com/panorama/9-1/panorama-admin/set-up-panorama/set-up-authentication-using-custom-certificates/how-are-ssltls-connections-mutually-authenticated.html
2021-06-12T18:20:29
CC-MAIN-2021-25
1623487586239.2
[]
docs.paloaltonetworks.com
password_hash: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..." ssh_authorized_keys: - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......." groups: [ sudo, docker ].
https://docs.fedoraproject.org/fi/fedora-coreos/migrate-ah/
2021-06-12T18:42:08
CC-MAIN-2021-25
1623487586239.2
[]
docs.fedoraproject.org
Figure 16: Groups tab Show Groups Select this checkbox (on tab header) to make this tab visible on Ad hoc Report Wizard/ Smart View. Initial Number of Groups Specify the Initial number of groups a user can specify on Ad hoc Report Wizard/ Smart View. Show Option to Rank Groups Grouped records can be ordered based on value of Ranking field. For example, groups are required to be in descending order of “Total Sales” for the group. If “Total Sales” of “East Region” is 1000 units and “Total Sales” of “West Region” is 1900 units, then, “West Region” group detail will appear before “East Region” group details. Check this checkbox to provide Ranking Groups related options on ad hoc report wizard. Show When Select this checkbox if you want your user to get option to place Show When option (to specify certain criteria to meet in order to show this field) on Ad hoc Report Wizard/ Smart View. GroupBy Options Intellicus can group the data by date, number and character. Select the options you want your users to use.
https://docs.intellicus.com/documentation/using-intellicus-19-0/ad-hoc-reports-19-0/configuring-ad-hoc-reporting-19-0/properties-for-ad-hoc-smart-report/groups/
2021-06-12T16:57:19
CC-MAIN-2021-25
1623487586239.2
[array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Groups-tab.png', 'Groups tab'], dtype=object) ]
docs.intellicus.com
You can monitor the network usage of devices and operating systems for a specific Edge. Clickto view the following: At the top of the page, you can choose a specific time period to view the details of clients used for the selected duration. Click Operating Systems to view the report based on the Operating Systems used in the devices. devices or operating systems. To view drill-down reports with more details, click the links displayed in the metrics column. The following image shows a detailed report of top clients. Click the arrows displayed next to Top Applications to navigate to the Applications tab.
https://docs.vmware.com/en/VMware-SD-WAN/4.4/VMware-SD-WAN-Administration-Guide/GUID-0E2010D4-EB43-4EE7-906B-8CAAC5B5DEB4.html
2021-06-12T18:52:03
CC-MAIN-2021-25
1623487586239.2
[array(['images/GUID-3A597248-3E38-49AB-8932-6514FF3063AF-low.png', None], dtype=object) array(['images/GUID-C24A9BC8-F7AD-467C-905E-6EEF0F0EC9E5-low.png', None], dtype=object) ]
docs.vmware.com
Introduction¶). We offer recipes to deploy TORQUE (optionally with MAUI), SLURM, SGE, HTCondor, Mesos, Nomad and Kubernetes clusters that can be self-managed with CLUES: it starts with a single-node cluster and working nodes will be dynamically deployed and provisioned to fit increasing load (number of jobs at the LRMS). Working nodes will be undeployed when they are idle. This introduces a cost-efficient approach for Cluster-based computing. Installation¶ Requisites¶ The program ec3 requires Python 2.6+, PLY, PyYAML, Requests, jsonschema and an IM server, which is used to launch the virtual machines. PyYAML is usually available in distribution repositories ( python-yaml in Debian; PyYAML in Red Hat; and PyYAML in pip). PLY is usually available in distribution repositories ( python-ply and ply in pip). Requests is usually available in distribution repositories ( python-requests and requests in pip). jsonschema is usually available in distribution repositories ( python-jsonschema and jsonschema in pip). By default ec3 uses our public IM server in appsgrycap.i3m.upv.es. Optionally you can deploy a local IM server following the instructions of the `IM manual`_. Also sshpass command is required to provide the user with ssh access to the cluster. Installing¶ First you need to install pip tool. To install them in Debian and Ubuntu based distributions, do: sudo apt update sudo apt install python-pip In Red Hat based distributions (RHEL, CentOS, Amazon Linux, Oracle Linux, Fedora, etc.), do: sudo yum install epel-release sudo yum install which python-pip Then you only have to call the install command of the pip tool with the ec3-cli package: sudo pip install ec3-cli You can also download the last ec3 version from this git repository: git clone Then you can install it calling the pip tool with the current ec3 directory: sudo pip install ./ec3 Basic example with Amazon EC2¶ First create a file auth.txt with a single line like this: id = provider ; type = EC2 ; username = <<Access Key ID>> ; password = <<Secret Access Key>> Replace <<Access Key ID>> and <<Secret Access Key>> with the corresponding values for the AWS account where the cluster will be deployed. It is safer to use the credentials of an IAM user created within your AWS account. This file is the authorization file (see Authorization file), and can have more than one set of credentials. Now we are going to deploy a cluster in Amazon EC2 with a limit number of nodes = 10. The parameter to indicate the maximum size of the cluster is called ec3_max_instances and it has to be indicated in the RADL file that describes the infrastructure to deploy. In our case, we are going to use the ubuntu-ec2 recipe, available in our github repo. The next command deploys a TORQUE cluster based on an Ubuntu image: $ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -y WARNING: you are not using a secure connection and this can compromise the secrecy of the passwords and private keys available in the authorization file. Creating infrastructure Infrastructure successfully created with ID: 60 ▄▟▙▄¨ Front-end state: running, IP: 132.43.105.28 If you deployed a local IM server, use the next command instead: $ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -u This can take several minutes. After that, open a ssh session to the front-end: $ ec3 ssh mycluster Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-24-generic x86_64) * Documentation: ubuntu@torqueserver:~$ Also you can show basic information about the deployed clusters by executing: $ ec3 list name state IP nodes --------------------------------------------- mycluster configured 132.43.105.28 0 EC3 in Docker Hub¶ EC3 has an official Docker container image available in Docker Hub that can be used instead of installing the CLI. You can download it by typing: $ sudo docker pull grycap/ec3 You can exploit all the potential of EC3 as if you download the CLI and run it on your computer: $ sudo docker run grycap/ec3 list $ sudo docker run grycap/ec3 templates To launch a cluster, you can use the recipes that you have locally by mounting the folder as a volume. Also it is recommendable to mantain the data of active clusters locally, by mounting a volume as follows: $ sudo docker run -v /home/user/:/tmp/ -v /home/user/ec3/templates/:/etc/ec3/templates -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 launch mycluster torque ubuntu16 -a /tmp/auth.dat Notice that you need to change the local paths to the paths where you store the auth file, the templates folder and the .ec3/clusters folder. So, once the front-end is deployed and configured you can connect to it by using: $ sudo docker run -ti -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 ssh mycluster Later on, when you need to destroy the cluster, you can type: $ sudo docker run -ti -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 destroy mycluster Additional information¶ - EC3 Command-line Interface. - Templates. - Information about available templates: ec3 templates [--search <topic>] [--full-description].
https://ec3.readthedocs.io/en/devel/intro.html
2021-06-12T17:38:31
CC-MAIN-2021-25
1623487586239.2
[]
ec3.readthedocs.io
This reference architecture shows a recommended architecture for IoT applications on Azure using PaaS (platform-as-a-service) components. IoT applications can be described as things (devices) sending data that generates insights. These insights generate actions to improve a business or process. An example is an engine (the thing) sending temperature data. This data is used to evaluate whether the engine is performing as expected (the insight). The insight is used to proactively prioritize the maintenance schedule for the engine (the action). This reference architecture uses Azure PaaS (platform-as-a-service) components. Another recommended option for building IoT solutions on Azure is: - Azure IoT Central. IoT Central is a fully managed SaaS (software-as-a-service) solution. It abstracts the technical choices and lets you focus on your solution exclusively. This simplicity comes with a tradeoff in being less customizable than a PaaS-based solution. At a high level, there are two ways to process telemetry data, hot path and cold path. The difference has to do with requirements for latency and data access. - The hot path analyzes data in near-real-time, as it arrives. In the hot path, telemetry must be processed with very low latency. The hot path is typically implemented using a stream processing engine. The output may trigger an alert, or be written to a structured format that can be queried using analytical tools. - The cold path performs batch processing at longer intervals (hourly or daily). The cold path typically operates over large volumes of data, but the results don't need to be as timely as the hot path. In the cold path, raw telemetry is captured and then fed into a batch process. Architecture This architecture consists of the following components. Some applications may not require every component listed here. IoT devices. Devices can securely register with the cloud, and can connect to the cloud to send and receive data. Some devices may be edge devices that perform some data processing on the device itself or in a field gateway. We recommend Azure IoT Edge for edge processing. Cloud gateway. A cloud gateway provides a cloud hub for devices to connect securely to the cloud and send data. It also provides device management, capabilities, including command and control of devices. For the cloud gateway, we recommend IoT Hub. IoT Hub is a hosted cloud service that ingests events from devices, acting as a message broker between devices and backend services. IoT Hub provides secure connectivity, event ingestion, bidirectional communication, and device management. Device provisioning. For registering and connecting large sets of devices, we recommend using the IoT Hub Device Provisioning Service (DPS). DPS lets you assign and register devices to specific Azure IoT Hub endpoints at scale. Stream processing. Stream processing analyzes large streams of data records and evaluates rules for those streams. For stream processing, we recommend Azure Stream Analytics. Stream Analytics can execute complex analysis at scale, using time windowing functions, stream aggregations, and external data source joins. Another option is Apache Spark on Azure Databricks. Machine learning allows predictive algorithms to be executed over historical telemetry data, enabling scenarios such as predictive maintenance. For machine learning, we recommend Azure Machine Learning. Warm path storage holds data that must be available immediately from device for reporting and visualization. For warm path storage, we recommend Cosmos DB. Cosmos DB is a globally distributed, multi-model database. Cold path storage holds data that is kept longer-term and is used for batch processing. For cold path storage, we recommend Azure Blob Storage. Data can be archived in Blob storage indefinitely at low cost, and is easily accessible for batch processing. Data transformation manipulates or aggregates the telemetry stream. Examples include protocol transformation, such as converting binary data to JSON, or combining data points. If the data must be transformed before reaching IoT Hub, we recommend using a protocol gateway (not shown). Otherwise, data can be transformed after it reaches IoT Hub. In that case, we recommend using Azure Functions, which has built-in integration with IoT Hub, Cosmos DB, and Blob Storage. Business process integration performs actions based on insights from the device data. This could include storing informational messages, raising alarms, sending email or SMS messages, or integrating with CRM. We recommend using Azure Logic Apps for business process integration. User management restricts which users or groups can perform actions on devices, such as upgrading firmware. It also defines capabilities for users in applications. We recommend using Azure Active Directory to authenticate and authorize users. Security monitoring Azure Security Center for IoT provides an end-to-end security solution for IoT workloads and simplifies their protection by delivering unified visibility and control, adaptive threat prevention, and intelligent threat detection and response across workloads from leaf devices through Edge as well as up through the clouds. Scalability considerations An IoT application should be built as discrete services that can scale independently. Consider the following scalability points: IoTHub. For IoT Hub, consider the following scale factors: - The maximum daily quota of messages into IoT Hub. - The quota of connected devices in an IoT Hub instance. - Ingestion throughput — how quickly IoT Hub can ingest messages. - Processing throughput — how quickly the incoming messages are processed. Each IoT hub is provisioned with a certain number of units in a specific tier. The tier and number of units determine the maximum daily quota of messages that devices can send to the hub. For more information, see IoT Hub quotas and throttling. You can scale up a hub without interrupting existing operations. Stream Analytics. Stream Analytics jobs scale best if they are parallel at all points in the Stream Analytics pipeline, from input to query to output. A fully parallel job allows Stream Analytics to split the work across multiple compute nodes. Otherwise, Stream Analytics has to combine the stream data into one place. For more information, see Leverage query parallelization in Azure Stream Analytics. IoT Hub automatically partitions device messages based on the device ID. All of the messages from a particular device will always arrive on the same partition, but a single partition will have messages from multiple devices. Therefore, the unit of parallelization is the partition ID. Functions. When reading from the Event Hubs endpoint, there is a maximum of function instance per event hub partition. The maximum processing rate is determined by how fast one function instance can process the events from a single partition. The function should process messages in batches. Cosmos DB. To scale out a Cosmos DB collection, create the collection with a partition key and include the partition key in each document that you write. For more information, see Best practices when choosing a partition key. - If you store and update a single document per device, the device ID is a good partition key. Writes are evenly distributed across the keys. The size of each partition is strictly bounded, because there is a single document for each key value. - If you store a separate document for every device message, using the device ID as a partition key would quickly exceed the 10-GB limit per partition. Message ID is a better partition key in that case. Typically you would still include device ID in the document for indexing and querying. Azure Time Series Insights (TSI) is an analytics, storage and visualization service for time-series data, providing capabilities including SQL-like filtering and aggregation, alleviating the need for user-defined functions. Time Series Insights provides a data explorer to visualize and query data as well as REST Query APIs. In addition to time series data, TSI is also well-suited for solutions that need to query aggregates over large sets of data. With support for multi layered storage, rich APIs, model and it’s integration with Azure IoT ecosystem, explorer for visualizations, and extensibility through Power BI, etc. TSI is our recommendation for time series data storage and analytics. Security considerations Trustworthy and secure communication All information received from and sent to a device must be trustworthy. Unless a device can support the following cryptographic capabilities, it should be constrained to local networks and all internetwork communication should go through a field gateway: - Data encryption with a provably secure, publicly analyzed, and broadly implemented symmetric-key encryption algorithm. - Digital signature with a provably secure, publicly analyzed, and broadly implemented symmetric-key signature algorithm. - Support for either TLS 1.2 for TCP or other stream-based communication paths or DTLS 1.2 for datagram-based communication paths. Support of X.509 certificate handling is optional and can be replaced by the more compute-efficient and wire-efficient pre-shared key mode for TLS, which can be implemented with support for the AES and SHA-2 algorithms. - Updateable key-store and per-device keys. Each device must have unique key material or tokens that identify it toward the system. The devices should store the key securely on the device (for example, using a secure key-store). The device should be able to update the keys or tokens periodically, or reactively in emergency situations such as a system breach. - The firmware and application software on the device must allow for updates to enable the repair of discovered security vulnerabilities. However, many devices are too constrained to support these requirements. In that case, a field gateway should be used. Devices connect securely to the field gateway through a local area network, and the gateway enables secure communication to the cloud. Physical tamper-proofing It is strongly recommended that device design incorporates features that defend against physical manipulation attempts, to help ensure the security integrity and trustworthiness of the overall system. For example: - Choose microcontrollers/microprocessors or auxiliary hardware that provides secure storage and use of cryptographic key material, such as trusted platform module (TPM) integration. - Secure boot loader and secure software loading, anchored in the TPM. - Use sensors to detect intrusion attempts and attempts to manipulate the device environment with alerting and potentially "digital self-destruction" of the device. For additional security considerations, see Internet of Things (IoT) security architecture. Monitoring and logging Logging and monitoring systems are used to determine whether the solution is functioning and to help troubleshoot problems. Monitoring and logging systems help answer the following operational questions: - Are devices or systems in an error condition? - Are devices or systems correctly configured? - Are devices or systems generating accurate data? - Are systems meeting the expectations of both the business and end customers? Logging and monitoring tools are typically comprised of the following four components: - System performance and timeline visualization tools to monitor the system and for basic troubleshooting. - Buffered data ingestion, to buffer log data. - Persistence store to store log data. - Search and query capabilities, to view log data for use in detailed troubleshooting. Monitoring systems provide insights into the health, security, and stability, and performance of an IoT solution. These systems can also provide a more detailed view, recording component configuration changes and providing extracted logging data that can surface potential security vulnerabilities, enhance the incident management process, and help the owner of the system troubleshoot problems. Comprehensive monitoring solutions include the ability to query information for specific subsystems or aggregating across multiple subsystems. Monitoring system development should begin by defining healthy operation, regulatory compliance, and audit requirements. Metrics collected may include: - Physical devices, edge devices, and infrastructure components reporting configuration changes. - Applications reporting configuration changes, security audit logs, request rates, response times, error rates, and garbage collection statistics for managed languages. - Databases, persistence stores, and caches reporting query and write performance, schema changes, security audit log, locks or deadlocks, index performance, CPU, memory, and disk usage. - Managed services (IaaS, PaaS, SaaS, and FaaS) reporting health metrics and configuration changes that impact dependent system health and performance. Visualization of monitoring metrics alert operators to system instabilities and facilitate incident response. Tracing telemetry Tracing telemetry allows an operator to follow the journey of a piece of telemetry from creation through the system. Tracing is important for debugging and troubleshooting. For IoT solutions that use Azure IoT Hub and the IoT Hub Device SDKs, tracing datagrams can be originated as Cloud-to-Device messages and included in the telemetry stream. Logging Logging systems are integral in understanding what actions or activities a solution has performed, failures that have occurred, and can provide help in fixing those failures. Logs can be analyzed to help understand and remedy error conditions, enhance performance characteristics, and ensure compliance with governing rule and regulations. Though plain-text logging is lower impact on upfront development costs, it is more challenging for a machine to parse/read. We recommend structured logging be used, as collected information is both machine parsable and human readable. Structured logging adds situational context and metadata to the log information. In structured logging, properties are first class citizens formatted as key/value pairs, or with a fixed schema, to enhance search and query capabilities. DevOps considerations Use the Infrastructure as code (IaC). IaC is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) with a declarative approach. Templates should be versioned and part of the release pipeline. The most reliable deployment processes are automated and idempotent. One way is to create Azure Resource Manager template for provisioning the IoT resources and the infrastructure. To automate infrastructure deployment, you can use Azure DevOps Services, Jenkins, or other CI/CD solutions. Azure Pipelines is part of Azure DevOps Services and runs automated builds, tests, and deployments. Consider staging your workloads by deploying to various stages and running validations at each stage before moving on to the next one; that way you can push updates to your production environments in a highly controlled way and minimize unanticipated deployment issues. Blue-green deployment and Canary releases are recommended deployment strategies for updating live production environments. Also consider having a good rollback strategy for when a deployment fails; for example you could automatically redeploy an earlier, successful deployment from your deployment history, the --rollback-on-error flag parameter in Azure CLI is good example. Consider monitoring your solution by using Azure Monitor. Azure Monitor is the main source of monitoring and logging for all your Azure services, it provides diagnostics information for Azure resources. You can for example, monitor the operations that take place within your IoT hub. There are specific metrics and events that Azure Monitor supports, as well as services, schemas, and categories for Azure Diagnostic Logs. For more information, see the DevOps section in Microsoft Azure Well-Architected Framework. Cost considerations In general, use the Azure pricing calculator to estimate costs. Other considerations are described in the Cost section in Microsoft Azure Well-Architected Framework. There are ways to optimize costs associated the services used in this reference architecture. Azure IoT Hub In this architecture, IoT Hub is the cloud gateway that ingests events from devices. IoT Hub billing varies depending on the type of operation. Create, update, insert, delete are free. Successful operations such as device-to-cloud and cloud-to-device messages are charged. Device-to-cloud messages sent successfully are charged in 4-KB chunks on ingress into IoT Hub. For example, a 6-KB message is charged as two messages. IoT Hub maintains state information about each connected device in a device twin JSON document. Read operations from a device twin document are charged. IoT Hub offers two tiers: Basic and Standard. Consider using the Standard tier if your IoT architecture uses bi-directional communication capabilities. This tier also offers a free edition that is most suited for testing purposes. If you only need uni-directional communication from devices to the cloud, use the Basic tier, which is cheaper. For more information, see IoT Hub Pricing. Azure Stream Analytics Azure Stream Analytics is used for stream processing and rules evaluation. Azure Stream Analytics is priced by the number of Streaming Units (SU) per hour, which takes into compute, memory, and throughput required to process the data. Azure Stream Analytics on IoT Edge is billed per job. Billing starts when a Stream Analytics job is deployed to devices regardless of the job status, running, failed, or stopped. For more information about pricing, see Stream Analytics pricing. Azure Functions Azure Functions is used to transform data after it reaches the IoT Hub. From a cost perspective, the recommendation is to use consumption plan because you pay only for the compute resources you use. You are charged based on per-second resource consumption each time an event triggers the execution of the function. Processing several events in a single execution or batches can reduce cost. Azure Logic Apps In this architecture, Logic Apps is used for business process integration. Logic apps pricing works on the pay-as-you-go model. Triggers, actions, and connector executions are metered each time a logic app runs. All successful and unsuccessful actions, including triggers, are considered as executions. For instance, your logic app processes 1000 messages a day. A workflow of five actions will cost less than $6. For more information, see Logic Apps pricing. Data Storage For cold path storage, Azure Blob Storage is the most cost-effective option. For warm path storage, consider using Azure Cosmos DB. For more information, see Cosmos DB pricing. Next steps For a more detailed discussion of the recommended architecture and implementation choices, see Microsoft Azure IoT Reference Architecture. For detailed documentation of the various Azure IoT services, see Azure IoT Fundamentals.
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/iot
2021-06-12T18:12:13
CC-MAIN-2021-25
1623487586239.2
[]
docs.microsoft.com
The MCP 2019.2.7 update includes the following changes in the minor versions of the MCP components compared to the MCP 2019.2.6 update. Updated minor versions of the MCP components Updated packages from the Mirantis and mirrored repositories Note For the full list of the versions of the major MCP components, see Major components versions. Note All 2019.2.7 packages are available at.
https://docs.mirantis.com/mcp/q4-18/mcp-release-notes/mu/mu-7/mu-7-packages.html
2021-06-12T18:00:02
CC-MAIN-2021-25
1623487586239.2
[]
docs.mirantis.com
scipy.cluster.hierarchy.linkage¶ scipy.cluster.hierarchy. linkage(y, method='single', metric='euclidean', optimal_ordering=False)[source]¶ Perform hierarchical/agglomerative clustering. The input y may be either a 1-D condensed distance matrix or a 2-D array of observation vectors. If y is a 1-D condensed distance matrix, then y must be a \(\binom{n}. - Parameters - yndarray A condensed distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdistreturns. Alternatively, a collection of \(m\) observation vectors in \(n\) dimensions may be passed as an \(m\) by \(n\) array. All elements of the condensed distance matrix must be finite, i.e., no NaNs or infs. - methodstr, optional The linkage algorithm to use. See the Linkage Methodssection below for full descriptions. - metricstr or function, optional The distance metric to use in the case that y is a collection of observation vectors; ignored otherwise. See the pdistfunction for a list of valid distance metrics. A custom distance function can also be used. - optimal_orderingbool, optional If True, the linkage matrix will be reordered so that the distance between successive leaves is minimal. This results in a more intuitive tree structure when the data are visualized. defaults to False, because this algorithm can be slow, particularly on large datasets [2]. See also the optimal_leaf_orderingfunction. New in version 1.0.0. - Returns - Zndarray The hierarchical clustering encoded as a linkage matrix. [1] for details about the algorithms. Methods ‘centroid’, ‘median’, and ‘ward’ are correctly defined only if Euclidean pairwise metric is used. If y is passed as precomputed pairwise distances, then it is the user’s responsibility to assure that these distances are in fact Euclidean, otherwise the produced result will be incorrect. References - 1 Daniel Mullner, “Modern hierarchical, agglomerative clustering algorithms”, arXiv:1109.2378v1. - 2 Ziv Bar-Joseph, David K. Gifford, Tommi S. Jaakkola, “Fast optimal leaf ordering for hierarchical clustering”, 2001. Bioinformatics DOI:10.1093/bioinformatics/17.suppl_1.S22()
https://docs.scipy.org/doc/scipy-1.5.3/reference/generated/scipy.cluster.hierarchy.linkage.html
2021-06-12T17:06:58
CC-MAIN-2021-25
1623487586239.2
[]
docs.scipy.org
scipy.interpolate.interp2d.__call__¶ interp2d. __call__(self, x, y, dx=0, dy=0, assume_sorted=False)[source]¶ Interpolate the function. - Parameters - x1-D array x-coordinates of the mesh on which to interpolate. - y1-D array y-coordinates of the mesh on which to interpolate. - dxint >= 0, < kx Order of partial derivatives in x. - dyint >= 0, < ky Order of partial derivatives in y. - assume_sortedbool, optional If False, values of x and y can be in any order and they are sorted first. If True, x and y have to be arrays of monotonically increasing values. - Returns - z2-D array with shape (len(y), len(x)) The interpolated values.
https://docs.scipy.org/doc/scipy-1.6.1/reference/generated/scipy.interpolate.interp2d.__call__.html
2021-06-12T16:36:16
CC-MAIN-2021-25
1623487586239.2
[]
docs.scipy.org
What happens during an Interana upgrade Interana performs all software upgrades for Managed Edition customers. This article gives you a behind-the-scenes look at what happens during a software upgrade. Interana software upgrade process You may be wondering when an Interana upgrade typically happens, and why there's a need for system downtime during these procedures. This document explains the inner workings of an Interana software upgrade, how long you can expect the system to be down, and when Interana upgrades typically take place. What happens during an upgrade Before you can understand why your Interana cluster needs to be unavailable for a set period of time during an upgrade, or why we require a delegated upgrade schedule, we first need to explain the inner workings of the Interana upgrade process. To upgrade Interana, the following tasks are performed: - Check the most recent cluster backup. - Stop the system services and freeze the file systems. - Do one of the following: - For Azure customers, another backup of the cluster is performed. - For AWS customers, a backup of Interana MySQL—on the config node—is taken. - Install the Interana upgrade. - Update Nginx. - Reapply options specific to your cluster configuration. - Restart Interana and the related services. Necessary system downtime The tasks outlined in What happens during an upgrade require that the Interana cluster be unavailable until the entire upgrade process is complete. The required system downtime depends upon the size of your cluster, and can range from twelve minutes to two hours. Interana upgrade schedule The default Interana upgrade schedule is at 6:00 pm, Pacific Standard Time (PST). If the default upgrade schedule is not convenient for your time zone, contact your Interana Customer Success Manager (or [email protected]) to establish a backup time that meets your needs.
https://docs.scuba.io/2/Guides/Managed_Edition_Guide/What_happens_during_an_Interana_upgrade
2021-06-12T17:54:24
CC-MAIN-2021-25
1623487586239.2
[]
docs.scuba.io
Installing. - Start the LifeKeeper GUI Server on each server (see Starting/Stopping the GUI Server). Note: Once the GUI Server has been started following an initial installation, starting and stopping LifeKeeper will start and stop all LifeKeeper daemon processes including the GUI Server. - If you plan to allow users other than root to use the GUI, then you need to Configure GUI Users. Running the GUI You can run the LifeKeeper GUI: - on the LifeKeeper server in the cluster.
https://docs.us.sios.com/spslinux/9.5.0/en/topic/configuring-the-lifekeeper-gui
2021-06-12T18:29:25
CC-MAIN-2021-25
1623487586239.2
[]
docs.us.sios.com
Installation Assumptions - All pre-installation steps have been completed. - Your Kubernetes cluster is running and has appropriate resources available as defined in the pre-installation sizing considerations. STEP 1: Installing the Helm Chart - Create a secret using the Harbor repository that was setup for you by Zebrium. Zebrium will provide you with your Harbor USERNAME and PASSWORD. kubectl create secret docker-registry regcred --docker-server=harbor.ops.zebrium.com --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL> [ --namespace <NAMESPACE> ] - Update your Helm chart override.yaml file with the secret from Step 1. global.imagePullSecret="<SECRET>" - Add the Harbor repo to your Kubernetes cluster helm repo add --username <USERNAME> --password <PASSWORD> <REPO_NAME> helm repo update - Install the Zebrium On Prem Software helm upgrade <RELEASE_NAME> -i --namespace <NAMESPACE> --version <VERSION> <REPO_NAME>/zebrium-onprem -f <override.yaml> STEP 2: Configuring Your Account Zebrium On Prem currently supports a single account where all data will be ingested. This account name was defined when your Helm chart was created. You must now create this same account name within your On Prem instance using the API. Note: Substitute <COMPANY>, <DEPLOYMENT_NAME> and <FQHN> with those values defined in the Helm chart when completing the pre-installation steps. curl -X POST --data '{ "action":"create_account","customer":"<COMPANY>","deployment_name":"<DEPLOYMENT_NAME>"}' http://<FQHN>/report/v1/deployment The API will return a payload detailing the account information. Note: the zapi_token field returned from the API needs to be saved. This token will be used to ingest data into Zebrium On Prem in Step 5. below. STEP 3: Configuring Outbound Notifications If you would like Root Cause report notifications to be sent to a destination other than the Slack webhook defined in your Helm chart, you can configure additional notification channels using the API. The full API definition is located here. Supported outbound notification channels include: - Slack - Microsoft Teams - Webhook (with the zebrium_incident payload) Here is an example that creates a webhook notification channel that will send the zebrium_incident payload to an endpoint using Token-based authentication: curl -X POST --data '{ "channel_type":"webhook", "customer":"<COMPANY>", "name":"incident webhook", "webhook":"", "auth_scheme":"token", "auth_header_prefix":"Token", "auth_token":"HDTSHSHDBSGSRERWJDJSODYDNDYDD" }' http://<FQHN>/report/v1/outbound_channel.create Note: the payload of the zebrium_incident webhook is defined here. STEP 4: Configuring AutoSupport (optional) your Kubernetes cluster. The log collector is deployed as a Kubernetes Helm chart as follows: kubectl create namespace zebrium helm install zlog-collector zlog-collector --namespace zebrium --repo --set zebrium.collectorUrl= STEP 5: Ingesting data into your Zebrium On Prem instance There are three supported methods to ingest data into Zebrium On Prem using your ZAPI Token and Endpoint: - Zebrium CLI command - Kubernetes Log Collector - Elastic integration using Logstash Obtaining your ZAPI Token and Endpoint The ZAPI Token was provided in the response payload when creating your account. You can obtain your ZAPI Token anytime by using the List Deployments API here. The ZAPI Endpoint IP address can be obtained using the following command: kubectl [ -n <namespace> ] get service zapi Failure Domain Boundary Because Zebrium On Prem currently supports a single account with only one deployment, if you intend to ingest data from unrelated services/applications it is important to specify the ze_deployment_name label which essentially defines a failure domain boundary for anomaly correlation. You will see in the examples provided below, how to specify the ze_deployment_name label for each of the three methods that can be used to ingest data. Note: The ze_deployment_name must be a single word lowercase characters. Using the CLI to ingest data Instructions for downloading, configuring and using the Zebrium CLI are located here. Here is an example that ingests a Jira log file into the atlassian failure domain (ze_deployment_name): ~/zapi/bin/ze up --file=jira.log --log=jira --ids=zid_host=jiraserver,ze_deployment_name=atlassian --auth=97453627rDGSDE67FDCA77BCE44 --url= Using the Kubernetes Log Collector to ingest data If your application to be "monitored" is Kubernetes-based, this is the preferred method for sending logs to Zebrium On Prem. The log collector is deployed as a Kubernetes Helm chart as follows: kubectl create namespace zebrium helm install zlog-collector zlog-collector --namespace zebrium --repo --set zebrium.collectorUrl= Note: Remember to substitute ZAPI_ENDPOINT, ZAPI_TOKEN and ZE_DEPLOYMENT_NAME Using Logstash to ingest data Instructions for configuring this integration are located here. Note: Please contact Zebrium Support for assistance with configuration.
https://docs.zebrium.com/docs/ze_onprem/getting_started/installation/
2021-06-12T17:34:53
CC-MAIN-2021-25
1623487586239.2
[]
docs.zebrium.com
The fastest way to get started is to install via npm. npm i @motor-js/engine Start by wrapping your application with the Motor component, at the root of your project. This handles connection to the Qlik engine and is needed for any of the hooks to work. You can either set the configuration to your Qlik site directly in this component, or pass the engine object (useful if you are handling custom authentication and connection). // 1. Import the Motor componentimport { Motor } from @motor-js/enginefunction App() {return (// 2. Use at the root of your project<Motorconfig={host: "myqliksite.qlik.com",secure: true,port: 433,prefix: "",appId: 'myAwesomeApp',}><App /></Motor>)} Next, import the hooks or contexts into your project and you are good to go. In the below example we are extracting data from the useList hook. import { useList } from "@motor-js/engine"const Filter = () => {const dimension = ['Country'];const {listData,} = useList({dimension,});console.log(listData);return (<div></div>);}; Use the link below to jump to more details on how to use the hooks in the package. If you are connecting to a Qlik cloud SAAS instance, your configuration will look slightly different. We need to set the qcs entry to true and also add the web integration id, generated in your Qlik site. import { Motor }from '@motor-js/core'<Motorconfig={{host: 'motor.eu.qlikcloud.com',secure: true,port: null,prefix: '',appId: '39e218c1-42ee-4f38-9451-1b8f850505d5',webIntId: '...',qcs: true,}}>>// ...</Motor> For more information on how to generate your first web integration id in Qlik SAAS, head to the Qlik docs.
https://docs.motor.so/motor-js-engine/getting-started
2021-06-12T17:18:32
CC-MAIN-2021-25
1623487586239.2
[]
docs.motor.so
[−][src]Crate substrate_wasmtime Wasmtime's embedding API This crate contains an API used to interact with WebAssembly modules. For example you can compile modules, instantiate them, call them, etc. As an embedder of WebAssembly you can also provide WebAssembly modules functionality from the host by creating host-defined functions, memories, globals, etc, which can do things that WebAssembly cannot (such as print to the screen). The wasmtime crate draws inspiration from a number of sources, including the JS WebAssembly API as well as the proposed C API. As with all other Rust code you're guaranteed that programs will be safe (not have undefined behavior or segfault) so long as you don't use unsafe in your own program. With wasmtime you can easily and conveniently embed a WebAssembly runtime with confidence that the WebAssembly is safely sandboxed. An example of using Wasmtime looks like: use anyhow::Result; use wasmtime::*; fn main() -> Result<()> { // All wasm objects operate within the context of a "store" let store = Store::default(); // Modules can be compiled through either the text or binary format let wat = r#" (module (import "" "" (func $host_hello (param i32))) (func (export "hello") i32.const 3 call $host_hello) ) "#; let module = Module::new(store.engine(), wat)?; // Host functions can be defined which take/return wasm values and // execute arbitrary code on the host. let host_hello = Func::wrap(&store, |param: i32| { println!("Got {} from WebAssembly", param); }); // Instantiation of a module requires specifying its imports and then // afterwards we can fetch exports by name, as well as asserting the // type signature of the function with `get0`. let instance = Instance::new(&store, &module, &[host_hello.into()])?; let hello = instance .get_func("hello") .ok_or(anyhow::format_err!("failed to find `hello` function export"))? .get0::<()>()?; // And finally we can call the wasm as if it were a Rust function! hello()?; Ok(()) } Core Concepts There are a number of core types and concepts that are important to be aware of when using the wasmtime crate: Reference counting - almost all objects in this API are reference counted. Most of the time when and object is cloned you're just bumping a reference count. For example when you clone an Instancethat is a cheap operation, it doesn't create an entirely new instance. Store- all WebAssembly object and host values will be "connected" to a store. A Storeis not threadsafe which means that itself and all objects connected to it are pinned to a single thread (this happens automatically through a lack of the Sendand Synctraits). Similarly wasmtimedoes not have a garbage collector so anything created within a Storewill not be deallocated until all references have gone away. See the Storedocumentation for more information. Module- a compiled WebAssembly module. This structure represents in-memory JIT code which is ready to execute after being instantiated. It's often important to cache instances of a Modulebecause creation (compilation) can be expensive. Note that Moduleis safe to share across threads. Instance- an instantiated WebAssembly module. An instance is where you can actually acquire a Funcfrom, for example, to call. Each Instance, like all other Store-connected objects, cannot be sent across threads. There are other important types within the wasmtime crate but it's crucial to be familiar with the above types! Be sure to browse the API documentation to get a feeling for what other functionality is offered by this crate. Example Architecture To better understand how Wasmtime types interact with each other let's walk through, at a high-level, an example of how you might use WebAssembly. In our use case let's say we have a web server where we'd like to run some custom WebAssembly on each request. To ensure requests are isolated from each other, though, we'll be creating a new Instance for each request. When the server starts, we'll start off by creating an Engine (and maybe tweaking Config settings if necessary). This Engine will be the only engine for the lifetime of the server itself. Next, we can compile our WebAssembly. You'd create a Module through the Module::new API. This will generate JIT code and perform expensive compilation tasks up-front. After that setup, the server starts up as usual and is ready to receive requests. Upon receiving a request you'd then create a Store with Store::new referring to the original Engine. Using your Module from before you'd then call Instance::new to instantiate our module for the request. Both of these operations are designed to be as cheap as possible. With an Instance you can then invoke various exports and interact with the WebAssembly module. Once the request is finished the Store, Instance, and all other items loaded are dropped and everything will be deallocated. Note that it's crucial to create a Store-per-request to ensure that memory usage doesn't balloon accidentally by keeping a Store alive indefinitely. Advanced Linking Often WebAssembly modules are not entirely self-isolated. They might refer to quite a few pieces of host functionality, WASI, or maybe even a number of other wasm modules. To help juggling all this together this crate provides a Linker type which serves as an abstraction to assist in instantiating a module. The Linker type also transparently handles Commands and Reactors as defined by WASI. WASI The wasmtime crate does not natively provide support for WASI, but you can use the wasmtime-wasi crate for that purpose. With wasmtime-wasi you can create a "wasi instance" and then add all of its items into a Linker, which can then be used to instantiate a Module that uses WASI. Examples In addition to the examples below be sure to check out the online embedding documentation as well as the online list of examples An example of using WASI looks like: use wasmtime_wasi::{Wasi, WasiCtx}; let store = Store::default(); let mut linker = Linker::new(&store); // Create an instance of `Wasi` which contains a `WasiCtx`. Note that // `WasiCtx` provides a number of ways to configure what the target program // will have access to. let wasi = Wasi::new(&store, WasiCtx::new(std::env::args())?); wasi.add_to_linker(&mut linker)?; // Instantiate our module with the imports we've created, and run it. let module = Module::from_file(store.engine(), "foo.wasm")?; let instance = linker.instantiate(&module)?; // ... An example of reading a string from a wasm module: use std::str; let store = Store::default(); let log_str = Func::wrap(&store, |caller: Caller<'_>, ptr: i32, len: i32| { let mem = match caller.get_export("memory") { Some(Extern::Memory(mem)) => mem, _ => return Err(Trap::new("failed to find host memory")), }; // We're reading raw wasm memory here so we need `unsafe`. Note // though that this should be safe because we don't reenter wasm // while we're reading wasm memory, nor should we clash with // any other memory accessors (assuming they're well-behaved // too). unsafe { let data = mem.data_unchecked() .get(ptr as u32 as usize..) .and_then(|arr| arr.get(..len as u32 as usize)); let string = match data { Some(data) => match str::from_utf8(data) { Ok(s) => s, Err(_) => return Err(Trap::new("invalid utf-8")), }, None => return Err(Trap::new("pointer/length out of bounds")), }; assert_eq!(string, "Hello, world!"); println!("{}", string); } Ok(()) }); let module = Module::new( store.engine(), r#" (module (import "" "" (func $log_str (param i32 i32))) (func (export "foo") i32.const 4 ;; ptr i32.const 13 ;; len call $log_str) (memory (export "memory") 1) (data (i32.const 4) "Hello, world!")) "#, )?; let instance = Instance::new(&store, &module, &[log_str.into()])?; let foo = instance.get_func("foo").unwrap().get0::<()>()?; foo()?;
https://docs.rs/substrate-wasmtime/0.19.0/substrate_wasmtime/
2021-06-12T16:54:28
CC-MAIN-2021-25
1623487586239.2
[]
docs.rs
@ (or its alias value) may be used. Also note that name accepts an array of Strings, allowing for multiple names (i.e. a primary bean name plus one or more aliases) for a single bean. @Bean({"b1", "b2"}) // bean available as 'b1' and 'b2', but not 'myBean' public MyBean myBean() { // instantiate and configure MyBean obj return obj; } Note that the @Bean annotation does not provide attributes for profile, @AliasFor(value="name") public abstract String[] value name(). Intended to be used when no other attributes are needed, for example: @Bean("customBeanName"). name() @AliasFor(value="value") public abstract String[] name If left unspecified, the name of the bean is the name of the annotated method. If specified, the method name is ignored. The bean name and aliases may also be configured via the value() attribute if no other attributes are declared. value() @Deprecated public abstract Autowire autowire @Beanfactory method argument resolution and @Autowiredprocessing supersede name/type-based bean property injection Note that this autowire mode is just about externally driven autowiring based on bean property setter methods by convention, analogous to XML bean definitions. The default mode does allow for annotation-driven autowiring. "no" refers to externally driven autowiring only, not affecting any autowiring demands that the bean class itself expresses through annotations. Autowire.BY_NAME, Autowire.BY_TYPE public abstract()
https://docs.spring.io/spring-framework/docs/5.1.8.RELEASE/javadoc-api/org/springframework/context/annotation/Bean.html
2021-06-12T17:51:19
CC-MAIN-2021-25
1623487586239.2
[]
docs.spring.io
Install Ping¶ The install pings contain some data about the system and the installation process, sent whenever the installer exits 1. Stub Ping¶ The Stub Installer sends a ping just before it exits, in function SendPing of stub.nsi. This is sent as an HTTP GET request to DSMO (download-stats.mozilla.org). Ingestion is handled in gcp-ingestion at class StubUri within ParseUri. Several of the fields are codes which are broken out into multiple boolean columns in the database table. Full Install Ping¶ The Full Installer sends a ping just before it exits, in function SendPing of installer.nsi. This is an HTTP POST request with a JSON document, sent to the standard Telemetry endpoint (incoming.telemetry.mozilla.org). To avoid double counting, the full installer does not send a ping when it is launched from the stub installer, so pings where installer_type = "full" correspond to installs that did not use the stub. Querying the install pings¶ The pings are recorded in the firefox_installer.install table, accessible in Redash 2 using the default “Telemetry (BigQuery)” data source. Some of the columns are marked [DEPRECATED] because they involve features that were removed, mostly when the stub installer was streamlined in Firefox 55. These columns were not removed to keep compatibility and so we could continue to use the old data, but they should no longer be used. The columns are annotated with “(stub)”, “(full)”, or “(both)” to indicate which types of installer provide meaningful values. See also the JSON schema. - submission_timestamp (both) Time the ping was received - installer_type (both) Which type of installer generated this ping (full or stub) - installer_version (full) Version of the installer itself 3 - build_channel (both) Channel the installer was built with the branding for (“release”, “beta”, “nightly”, or “default”) - update_channel (both) Value of MOZ_UPDATE_CHANNEL for the installer build; should generally be the same as build_channel - version, build_id (both) Version number and Build ID of the installed product, from application.ini. This is not the version of the installer itself. stub: 0 if the installation failed - locale (both) Locale of the installer and of the installed product, in AB_CD format - from_msi (full) True if the install was launched from an MSI wrapper. - _64bit_build (both) True if a 64-bit build was selected for installation. stub: This means the OS is 64-bit, the RAM requirement was met, and no third-party software that blocks 64-bit installations was found full: Hardcoded based on the architecture to be installed - _64bit_os (both) True if the version of Windows on the machine was 64-bit. - os_version (both) Version number of Windows in major.minor.buildformat 5 - service_pack (stub) Latest Windows service pack installed on the machine. - server_os (both) True if the installed OS is a server version of Windows. - admin_user (both) True if the installer was run by a user with administrator privileges (and the UAC prompt was accepted). Specifically, this reports whether HKLM was writeable. - default_path (both) True if the default installation path was not changed. - set_default (both) True if the option to set the new installation as the default browser was left selected. - new_default (both) True if the new installation is now the default browser (registered to handle the http protocol). full: Checks the association using AppAssocReg::QueryCurrentDefaultand HKCU. - old_default (both) True if firefox.exe in a different directory is now the default browser, mutually exclusive with new_default. The details are the same as new_default. - had_old_install (both) True if at least one existing installation of Firefox was found on the system prior to this installation. full: Checks for the installation directory given in the Software\Mozilla\${BrandFullName}registry keys, either HKLM or HKCU stub: Checks for the top level profile directory %LOCALAPPDATA%\Mozilla\Firefox - old_version, old_build_id (stub) Version number and Build ID (from application.ini) of a previous installation of Firefox in the install directory, 0 if not found - bytes_downloaded (stub) Size of the full installer data that was transferred before the download ended (whether it failed, was cancelled, or completed normally) - download_size (stub) Expected size of the full installer download according to the HTTP response headers - download_retries (stub) Number of times the full installer download was retried or resumed. 10 retries is the maximum. - download_time (stub) Number of seconds spent downloading the full installer 9 - download_latency (stub) Seconds between sending the full installer download request and receiving the first response data - download_ip (stub) IP address of the server the full installer was download from (can be either IPv4 or IPv6) - manual_download (stub) True if the user clicked on the button that opens the manual download page. The prompt to do that is shown after the installation fails or is cancelled. - intro_time (both) Seconds the user spent on the intro screen. stub: [DEPRECATED] The streamlined stub no longer has this screen, so this should always be 0. - options_time (both) Seconds the user spent on the options screen. stub: [DEPRECATED] The streamlined stub no longer has this screen, so this should always be 0. - preinstall_time (stub) Seconds spent verifying the downloaded full installer and preparing to run it - install_time (both) full: Seconds taken by the installation phase. stub: Seconds taken by the full installer. - finish_time (both) full: Seconds the user spent on the finish page. stub: Seconds spent waiting for the installed application to launch. - succeeded (both) True if a new installation was successfully created. False if that didn’t happen for any reason, including when the user closed the installer window. - disk_space_error (stub) [DEPRECATED] True if the installation failed because the drive we’re trying to install to does not have enough space. The streamlined stub no longer sends a ping in this case, because the installation drive can no longer be selected. - no_write_access (stub) [DEPRECATED] True if the installation failed because the user doesn’t have permission to write to the path we’re trying to install to. The streamlined stub no longer sends a ping in this case, because the installation directory can no longer be selected. - user_cancelled (both) True if the installation failed because the user cancelled it or closed the window. - out_of_retries (stub) True if the installation failed because the download had to be retried too many times (currently 10) - file_error (stub) True if the installation failed because the downloaded file couldn’t be read from - sig_not_trusted (stub) True if the installation failed because the signature on the downloaded file wasn’t valid or wasn’t signed by a trusted authority - sig_unexpected (stub) True if the installation failed because the signature on the downloaded file didn’t have the expected subject and issuer names - install_timeout (stub) True if the installation failed because running the full installer timed out. Currently that means it ran for more than 150 seconds for a new installation, or 165 seconds for a paveover installation. - new_launched (both) True if the installation succeeded and tried to launch the newly installed application. - old_running (stub) [DEPRECATED] True if the installation succeeded and we weren’t able to launch the newly installed application because a copy of Firefox was already running. This should always be false since the check for a running copy was removed in Firefox 74. - attribution (both) Any attribution data that was included with the installer - profile_cleanup_prompt (stub) 0: neither profile cleanup prompt was shown 1: the “reinstall” version of the profile cleanup prompt was shown (no existing installation was found, but the user did have an old Firefox profile) 2: the “paveover” version of the profile cleanup prompt was shown (an installation of Firefox was already present, but it’s an older version) - profile_cleanup_requested (stub) True if either profile cleanup prompt was shown and the user accepted the prompt - funnelcake (stub) Funnelcake ID - ping_version (stub) Version of the stub ping, currently 8 - silent (full) True if the install was silent (see Full Installer Configuration) Footnotes¶ - 1 No ping is sent if the installer exits early because initial system requirements checks fail. - 2 A Mozilla LDAP login is required to access Redash. - 3 The version of the installer would be useful for the stub, but it is not currently sent as part of the stub ping. - 4 If the installation failed or was cancelled, the full installer will still report the version number of whatever was in the installation directory, or ""on if it couldn’t be read. - 5 Previous versions of Windows have used a very small set of build numbers through their entire lifecycle.. - 6 default_pathshould always be true in the stub, since we no longer support changing the path, but see bug 1351697. - 7 We no longer attempt to change the default browser setting in the streamlined stub, so set_default should always be false. - 8 We no longer attempt to change the default browser setting in the streamlined stub, so new_default should usually be false, but the stub still checks the association at Software\Classes\http\shell\open\commandin HKLM or HKCU. - 9 download_timewas previously called download_phase_time, this includes retries during the download phase. There was a different download_timefield that specifically measured only the time of the last download, this is still submitted but it is ignored during ingestion.
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/data/install-ping.html
2021-06-12T18:50:55
CC-MAIN-2021-25
1623487586239.2
[]
firefox-source-docs.mozilla.org
Creating a Data Source Using Apache Spark You can connect directly to Apache Spark using Amazon QuickSight, or you can connect to Spark through Spark SQL. Using the results of queries, or direct links to tables or views, you create data sources in Amazon QuickSight. You can either directly query your data through Spark, or you can import the results of your query into SPICE. Before you use Amazon QuickSight with Spark products, you must configure Spark for Amazon QuickSight. Amazon QuickSight requires your Spark server to be secured and authenticated using LDAP, which is available to Spark version 2.0 or later. If Spark is configured to allow unauthenticated access, Amazon QuickSight refuses the connection to the server. To use Amazon QuickSight as a Spark client, you must configure LDAP authentication to work with Spark. The Spark documentation contains information on how to set this up. To start, you need to configure it to enable front-end LDAP authentication over HTTPS. For general information on Spark, see the Apache Spark website. For information specifically on Spark and security, see Spark security documentation. To make sure that you have configured your server for Amazon QuickSight access, follow the instructions in Network and Database Configuration Requirements.
https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-source-spark.html
2018-09-18T20:17:26
CC-MAIN-2018-39
1537267155676.21
[]
docs.aws.amazon.com
Renew your Project Online plans in a larger organization In August of 2016, we announced some changes in the Project Online plans that will be available to you, as some plans expired at the end of 2016 and new ones became available. Many of you may now need to renew your retired Project Online plans and move your users to the new available plans. For most organizations, the Project Online license renewal process can help you through purchasing and reassigning licenses to your users. However, if you need to renew 3000 or more Project Online licenses, you need to reassign user licenses with the steps documented in this article. These steps are: Step 1: Determine my current licenses and users Step 2: Determine which new Project Online SKUs you need for your users Step 3: Buy the Project Online Skus that you need Step 4: Assign the new licenses to your users Step 5: Verify you have moved your users to the new SKUs You'll need Windows PowerShell to do this For the procedures in this article, you'll need to run scripts that will require you to connect to Office 365 from Windows PowerShell. You'll need to install the following: The 64-bit version of the Microsoft Online Services Sign-in Assistant for IT Professionals RTW. The 64-bit version of the Windows Azure Active Directory Module for Windows PowerShell (64-bit version). For more information, see Connect to Office 365 PowerShell. After you complete your installation, open the Windows Azure Active Directory Module for Windows PowerShell on your desktop and type the following at the prompt: Connect-MsolService This lets you to enter your credentials needed to connect to Office 365. Step 1: Determine my current licenses and users As a first step, you need to know which Project Online licenses you have and which users they are assigned to. This will help you to determine which new Project Online licenses they will need. We suggest using the Manage your Office 365 Licenses script that you can download from the Microsoft Code Gallery. This script lets you create a comprehensive report of assign skus and enabled plans that prints out to a .CSV file. We can also use it for replacing your users assigned sku, which is described later in this article. Make sure to run Get-Help on the script to get more information about usage and examples. After downloading the Manage-MSOLLicense.ps1 file that contains the script, open your Microsoft Azure Active Directory Module for Windows PowerShell, log in, and enter the following cmd to run the script: ./Manage-MSOLLicense.ps1 -IAgreeToTheDisclaimer -Report -Logfile .\MyReport.log This will agree to the disclaimer, create a log file called MyReport.log and save it to the current location, and will create a License Report CSV file and save it to the default location. If you open the log file, it will contain the output displayed in the module when you run the script. If you open the License Report CSV file in Excel, you will see a listing of your users and the SKUs that are assigned to them: For example, in the graphic above, you can tell that each user listed has both a Project Online Premium and an Office 365 Enterprise E5 sku assigned to them. MOD385910 is the org id. You can use your column filters in Excel to easily group users that are assigned specific licenses. For example, you could find out which users are all using the Project Lite, Project Online, and Project Online with Project Pro for Office 365 Skus. Project Online SKU strings The following tables lists the possible Project Online Sku strings that you will see in the script results. You can use the following table to help you determine which Project Online skus are based on the Sku strings. You will also need to know what the new Project Online sku strings mean when you need to assign them to your users later in this article. Step 2: Determine which new Project Online SKUs you need for your users Now that you know which Skus are assigned to specific users, you need to determine which new Project Online plans to renew them to. You first need to know your new Project Online plans and what they do. Look to the following resources to provide you more information about how to best select the Project Online Skus for your users: If you are looking to provide your users with similar functionality as they did in their retired Project Online SKU, this table gives you some general guidance, but look to the above resources for more detail: Step 3: Buy the Project Online Skus that you need Now that you know what you need, you can now purchase the needed number of licenses you need for each new Project Online Skus. You can do this through the Office 365 Admin Center through the Billing page. You will want to Buy licenses for your Office 365 for business subscription. Step 4: Assign the new licenses to your users After purchasing the Project Online Skus that you need, you now need to assign them to your users. You can use the Manage-MSOLLicense script you ran earlier to do this, but it will require additional parameters. $users=Get-MSOLUser ./Manage-MSOLLicense.ps1 -IAgreeToTheDisclaimer -users $users -Logfile c:\temp\license.log -NewSKU orgID:NewSKU -ExistingSKU orgID:ExistingSKU Example 1 In a very simple example, your company (Contoso) wants to assign new Project Online Essential Skus to all of its users who currently have Project Lite Skus. I would run the script as the following: $users=Get-MSOLUser ./Manage-MSOLLicense.ps1 -IAgreeToTheDisclaimer -users $users -Logfile c:\temp\license.log -NewSKU CONTOSO:PROJECTESSENTIALS -ExistingSKU CONTOSO:PROJECT_ESSENTIALS After populating the $users variable with your Office 365 users in your tenant, the script first agrees to the disclaimer, sets the log file location, and then sets the new Sku as Project Online Essentials (PROJECTESSENTIALS) for all users who have the old Project Lite Sku (PROJECT_ESSENTIALS) . Example 2 In another example, let's say that at Contoso, you want to update all of your users in your HR department from their old Project Online Plan 2 licenses and assign them new Project Online Premium licenses. However, there are a few users in HR that have already been assigned Project Online Premium licenses, and we don't want to make any changes to these users. You would run the following script: C:\PS>$users=Get-MSOLUser | where {($_.Department -like "*HR") -and ($_.Licenses.accountskuid -notlike "*PROJECTPREMIUM")} ./Manage-MSOLLicense.ps1 -IAgreeToTheDisclaimer -users $users -Logfile c:\temp\license.log -NewSKU CONTOSO:PROJECTPREMIUM -ExistingSKU CONTOSO:PROJECTONLINE_PLAN_2 This command reads all of your users who are in the HR department and who do not already have a Project Online Premium Sku (PROJECTPREMIUM) . The script then agrees to the disclaimer, sets the log file location, and then sets the new Sku as Project Online Premium for all users who have the old Project Online Plan 2 sku. Note As mentioned previously, run the Get-Help command to take a look at detailed usage information and additional examples. It will also provide you information about additional uses of the script that are not needed for this article. Step 5: Verify you have moved your users to the new SKUs When you have completed assigning your Project Online Skus to your users, you need to verify that your users no longer have any of the old Project Online Skus assigned to them. You can do this by simply running the script again to generate a new license report: ./Manage-MSOLLicense.ps1 -IAgreeToTheDisclaimer -Report -Logfile .\MyReport.log After running it, open the newly generated License Report in Excel and search for any occurrences of the old Project Online skus. Also verify that if you have any unassigned old Project Online skus, that you cancel them in the Office 365 Admin Center on the Billing page. You can search for retired skus by simply searching the file for occurrences of the retired Project Online sku strings. If you have any issues in trying to move to your new Project Online skus, you can Contact support for business products - Admin Help for assistance. Related Topics Brian Smith's Project Support Blog: How to handle Project Online sku changes
https://docs.microsoft.com/en-us/ProjectOnline/renew-your-project-online-plans-in-a-larger-organization?redirectSourcePath=%252fen-gb%252farticle%252frenew-your-project-online-plans-in-a-larger-organization-307b0b6d-0256-4f3f-b780-bff36149aca1
2018-09-18T19:21:11
CC-MAIN-2018-39
1537267155676.21
[array(['media/311ce100-df60-4b98-b19b-b25c3e3bd213.png', 'Output from running the Manage-MSOLLicense script.'], dtype=object) array(['media/67f9e937-0506-4223-bc34-635bf1ef0777.png', 'License Report'], dtype=object) ]
docs.microsoft.com
This module is a port of a growing fragment of the numeric header in Alexander Stepanov's Standard Template Library, with a few additions. Format flags for CustomFloat. Adds a sign bit to allow for signed numbers. Store values in normalized form by default. The actual precision of the significand is extended by 1 bit by assuming an implicit leading bit of 1 instead of 0. i.e. 1.nnnn instead of 0.nnnn. True for all IEE754 types Stores the significand in IEEE754 denormalized form when the exponent is 0. Required to express the value 0. Allows the storage of IEEE754 infinity values. Allows the storage of IEEE754 Not a Number values. If set, select an exponent bias such that max_exp = 1. i.e. so that the maximum value is >= 1.0 and < 2.0. Ignored if the exponent bias is manually specified. If set, unsigned custom floats are assumed to be negative. If set, 0 is the only allowed IEEE754 denormalized number. Requires allowDenorm and storeNormalized. Include all of the IEEE754 options. Include none of the above options. Allows user code to define custom floating-point formats. These formats are for storage only; all operations on them are performed by first implicitly extracting them to real first. After the operation is completed the result can be stored in a custom floating-point value via assignment.)); Implements the secant method for finding a root of the function fun starting from points [xn_1, x_n] (ideally close to the root). Num may be float, double, or real. import std.math : approxEqual, cos; signs or at least one of them equals ±0,. Find root of a real function f(x) by bracketing, allowing the termination condition to be specified. x) of the root, while the second pair of elements are the corresponding function values at those points. If an exact root was found, both of the first two elements will contain the root, and the second pair of elements will be 0. Find a real minimum of a real function f(x) via bracketing. Given a function f and a range (ax .. bx), returns the value of x in the range which is closest to a minimum of f(x). f is never evaluted at the endpoints of ax and bx. If f(x) has more than one minimum in the range, one will be chosen arbitrarily. If f(x) returns NaN or -Infinity, (x, f(x), NaN) will be returned; otherwise, this algorithm is guaranteed to succeed. axand bxshall be finite reals. relToleranceshall be normal positive real. absToleranceshall be normal positive real no less then T.epsilon*2. x, y = f(x)and error = 3 * (absTolerance * fabs(x) + relTolerance). The method used is a combination of golden section search and successive parabolic interpolation. Convergence is never much slower than that for a Fibonacci search. findRoot, std.math.isNormal import std.math : approxEqual; auto ret = findLocalMin((double x) => (x-4)^^2, -1e7, 1e7); assert(ret.x.approxEqual(4.0)); assert(ret.y.approxEqual(0.0));). Computes the dot product of input ranges a and b. The two ranges must have the same length. If both ranges define length, the check is done once; otherwise, it is done at each iteration. Computes the cosine similarity of input ranges a and b. The two ranges must have the same length. If both ranges define length, the check is done once; otherwise, it is done at each iteration. If either range has all-zero elements, return 0.. trueif normalization completed normally, falseif all elements in rangewere zero or if rangeis empty. double[] a = []; assert(!normalize(a)); a = [ 1.0, 3.0 ]; assert(normalize(a)); writeln(a); // [0.25, 0.75] a = [ 0.0, 0.0 ]; assert(!normalize(a)); writeln(a); // [0.5, 0.5] Compute the sum of binary logarithms of the input range r. The error of this method is much smaller than with a naive sum of log2.. import std.math : approxEqual; double[] p = [ 0.0, 0, 0, 1 ]; writeln(jensenShannonDivergence(p, p)); // 0 double[] p1 = [ 0.25, 0.25, 0.25, 0.25 ]; writeln: "Hello", "new", and "world"; "Hello", "new"), ( "Hello", "world"), and ( "new", "world"); "Hello", "new", "world"). gapWeightedSimilarity(s, t, 1)simply counts all of these matches and adds them up, returning 7. string[] s = ["Hello", "brave", "new", "world"]; string[] t = ["Hello", "new", "world"]; assert(gapWeightedSimilarity(s, t, 1) == 7); ); "Hello", "new"), ( "Hello", "world"), and ( "Hello", "new", "world") from the tally. That leaves only 4 matches. ngaps.)); Constructs an object given two ranges s and t and a penalty lambda. Constructor completes in Ο( s.length * t.length) time and computes all matches of length 1. this. Computes the match of the popFront length. Completes in Ο( s.length * t.length) time. popFront). Computes the greatest common divisor of a and b by using an efficient algorithm such as Euclid's or Stein's algorithm. writeln(gcd(2 * 5 * 7 * 7, 5 * 7 * 11)); // 5 * 7 const int a = 5 * 13 * 23 * 23, b = 13 * 59; writeln(gcd(a, b)); // 13. Create an Fft object for computing fast Fourier transforms of power of two sizes of size or smaller. size must be a power of two.. Inverse FFT that allows a user-supplied buffer to be provided. The buffer must be a random access range with slicing, and its elements must be some complex-like type. Convenience functions that create an Fft object, run the FFT or inverse FFT and return the result. Useful for one-off FFTs. © 1999–2017 The D Language Foundation Licensed under the Boost License 1.0.
http://docs.w3cub.com/d/std_numeric/
2018-09-18T19:08:59
CC-MAIN-2018-39
1537267155676.21
[]
docs.w3cub.com
All content with label elytron+wildfly. Related Labels: high, realm, jboss, managed, tutorial, standalone, eap, eap6, ssl, load, security, modcluster, balancing, getting_started, cli, availability, wildfly81, cluster, mod_jk, domain, httpd, ha, mod_cluster, as7, http more » ( - elytron, - wildfly ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/elytron+wildfly
2018-09-18T20:02:39
CC-MAIN-2018-39
1537267155676.21
[]
docs.jboss.org
Web Control. Web Enable Theming Control. Web Enable Theming Control. Web Enable Theming Control. Property Enable Theming Definition Gets or sets a value indicating whether themes apply to this control. public: virtual property bool EnableTheming { bool get(); void set(bool value); }; [System.ComponentModel.Browsable(true)] public override bool EnableTheming { get; set; } member this.EnableTheming : bool with get, set Public Overrides Property EnableTheming As Boolean Remarks.
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.webcontrol.enabletheming?view=netframework-4.7.2
2018-09-18T19:43:08
CC-MAIN-2018-39
1537267155676.21
[]
docs.microsoft.com
Securing Cassandra Cassandra provides various security features to the open source community. Cassandra provides these security features to the open source community. - Authentication based on internally controlled rolename/passwords Cassandra authentication is roles-based and stored internally in Cassandra system tables. Administrators can create, alter, drop, or list roles using CQL commands, with an associated password. Roles can be created with superuser, non-superuser, and login privileges. The internal authentication is used to access Cassandra keyspaces and tables, and by cqlsh and DevCenter to authenticate connections to Cassandra clusters and sstableloader to load SSTables. - Authorization based on object permission management Authorization grants access privileges to Cassandra cluster operations based on role authentication. Authorization can grant permission to access the entire database or restrict a role to individual table access. Roles can grant authorization to authorize other roles. Roles can be granted to roles. CQL commands GRANT and REVOKE are used to manage authorization. - Authentication and authorization based on JMX username/passwords JMX (Java Management Extensions) technology provides a simple and standard way of managing and monitoring resources related to an instance of a Java Virtual Machine (JVM). This is achieved by instrumenting resources with Java objects known as Managed Beans (MBeans) that are registered with an MBean server. JMX authentication stores username and associated passwords in two files, one for passwords and one for access. JMX authentication is used by nodetool and external monitoring tools such as jconsole.In Cassandra 3.6 and later, JMX authentication and authorization can be accomplished using Cassandra's internal authentication and authorization capabilities. - SSL encryption Cassandra provides secure communication between a client and a database cluster, and between nodes in a cluster. Enabling SSL encryption ensures that data in flight is not compromised and is transferred securely. Client-to-node and node-to-node encryption are independently configured. Cassandra tools (cqlsh, nodetool, DevCenter) can be configured to use SSL encryption. The DataStax drivers can be configured to secure traffic between the driver and Cassandra. - General security measures Typically, production Cassandra clusters will have all non-essential firewall ports closed. Some ports must be open in order for nodes to communicate in the cluster. These ports are detailed.
https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureIntro.html
2018-09-18T19:31:33
CC-MAIN-2018-39
1537267155676.21
[]
docs.datastax.com
The <bdi> (bidirectional isolation) isolates a span of text that might be formatted in a different direction from other text outside it. This element is useful when embedding text with an unknown directionality, from a database for example, inside text with a fixed directionality. Like all other HTML elements, this element has the global attributes, with a slight semantic difference: the dir attribute is not inherited. If not set, its default value is the auto which let the browser decide the direction based on the element's content.. <p dir="ltr">This arabic word <bdi>ARABIC_PLACEHOLDER</bdi> is automatically displayed right-to-left.</p> This arabic word REDLOHECALP_CIBARA is automatically displayed right-to-left. <bdo> direction, unicode-bidi © 2005–2018 Mozilla Developer Network and individual contributors. Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
http://docs.w3cub.com/html/element/bdi/
2018-09-18T19:11:51
CC-MAIN-2018-39
1537267155676.21
[]
docs.w3cub.com
- Limitations Aug 31, 2017 Configuring appliances in a WCCP cluster has the following limitations: - All appliances within a cluster must be the same model and use the same software release. - Parameter synchronization between appliances within the cluster is not automatic. Use Command Center to manage the appliances as a group. - SD-WAN traffic shaping is not effective, because it relies on controlling the entire link as a unit, and none of the appliances are in a position to do this. Router QoS can be used instead. - The WCCP-based load-balancing algorithms do not vary dynamically with load, so achieving a good load balance can require some tuning. - The hash method of cache assignment is not supported. Mask assignment is the supported method. - While the WCCP standard allows mask lengths of 1-7 bits, the appliance supports masks of 1-6 bits. - Multicast service groups are not supported; only unicast service groups are supported. - All routers using the same service group pair must support the same forwarding method (GRE or L2). - The forwarding and return method negotiated with the router must match: both must be GRE or both must be L2. Some routers do not support L2 in both directions, resulting in an error of “Router’s forward or return or assignment capability mismatch.” In this case, the service group must be configured as GRE. - SD-WAN VPX does not support WCCP clustering. - The appliance supports (and negotiates) only unweighted (equal) cache assignments. Weighted assignments are not supported. - Some older appliances, such as the SD-WAN 700, do not support WCCP clustering. - (SD-WAN 4000/5000 only) Two accelerator instances are required per interface in L2 mode. No more than three interfaces are supported per appliance (and then only on appliances with six or more accelerator instances.) - (SD-WAN 4000/5000 only) WCCP control packets from the router must match one of the router IP addresses configured on the appliance for the service group. In practice, the router’s IP address for the interface that connects it to the appliance should be used. The router’s loopback IP should not be used. Support: Feedback and forums:
https://docs.citrix.com/en-us/netscaler-sd-wan/9-3/hardware-platforms/4100-and-5100-wanop-appliances/deployment-modes/br-adv-wccp-mode-con/cb-wccp-cluster-wrapper-con/cb-wccp-cluster-limitations-con.html
2018-09-18T20:27:26
CC-MAIN-2018-39
1537267155676.21
[]
docs.citrix.com
Permissions and ACL Enforcement When user impersonation is enabled, permissions and ACL restrictions are applied on behalf of the submitting user. In the following example, “foo_db” database has a table “drivers”, which only user “foo” can access: A Beeline session running as user “foo” can access the data, read the drivers table, and create a new table based on the table: Spark queries run in a YARN application as user “foo”: All user permissions and access control lists are enforced while accessing tables, data or other resources. In addition, all output generated is for user “foo”. For the table created in the preceding Beeline session, the owner is user “foo”: The per-user Spark Application Master ("AM") caches data in memory without other users being able to access the data--cached data and state are restricted to the Spark AM running the query. Data and state information are not stored in the Spark Thrift server, so they are not visible to other users. Spark master runs as yarn-cluster, but query execution works as though it is yarn-client (essentially a yarn-cluster user program that accepts queries from STS indefinitely).
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/developing-spark-applications/content/permissions_and_acl_enforcement.html
2018-09-18T20:29:20
CC-MAIN-2018-39
1537267155676.21
[array(['../how-to/images/initial-tables-acl.png', None], dtype=object) array(['../how-to/images/basic-beeline.png', None], dtype=object) array(['../how-to/images/foo-am-spark-app.png', None], dtype=object) array(['../how-to/images/tables-acl.png', None], dtype=object)]
docs.hortonworks.com
A data object, representing the times associated with a benchmark measurement. Default caption, see also Benchmark::CAPTION Default format string, see also Benchmark::FORMAT System CPU time of children User CPU time of children Label Elapsed real time System CPU time Total time, that is utime + stime + cutime + cstime User CPU time 426 lib/benchmark.rb, line 49 522 lib/benchmark.rb, line 536 Ruby Core © 1993–2017 Yukihiro Matsumoto Licensed under the Ruby License. Ruby Standard Library © contributors Licensed under their own licenses.
http://docs.w3cub.com/ruby~2.5/benchmark/tms/
2018-09-18T18:58:54
CC-MAIN-2018-39
1537267155676.21
[]
docs.w3cub.com
Search This Document Browse the Internet For more information on the charges associated with using the browser, contact your wireless service provider. - On the home screen, click the Browser icon. - In the address bar, type a web address or search terms. - Press the key on the keyboard. To stop loading a webpage, press the key > Stop. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/41695/1773287.jsp
2013-12-05T07:59:15
CC-MAIN-2013-48
1386163041955
[]
docs.blackberry.com
/usr/bin/perldirectory. By default, this instance of Perl is a standard environment that runs Perl 5.8.8 on RHEL/CentOS 3, 4, and 5; and Perl 5.10 on RHEL/CentOS 6. This particular instance of Perl is used by CGI scripts and operating system maintenance scripts. /usr/local/cpanel/scripts) and in CGI scripts in cPanel & WHM. These CGI scripts in cPanel & WHM include WHM plugins that use the #!/usr/bin/cpenv perlshebang. cPanel & WHM adds modules to this environment and excludes this instance in YUM updates. You can manage modules for this Perl instance with the cpancommand or with the Home >> Software >> Module Installers feature in WHM. /usr/libor the /usr/local/libdirectories. If you have scripts that need to work with cPanel code, we recommend that you make the following changes to the top of your Perl scripts. This modification allows the scripts to work with all cPanel installs: Custom modules will not be installed toCustom modules will not be installed to#!/bin/sh eval 'if [ -x /usr/local/cpanel/3rdparty/bin/perl ]; then exec /usr/local/cpanel/3rdparty/bin/perl -x -- $0 ${1+"$@"}; else exec /usr/bin/perl -x $0 ${1+"$@"}; fi;' if 0; #!/usr/bin/perl /usr/local/cpanel. Instead, the custom module will be installed to the system's Perl at /usr/bin/perl. /usr/local/cpanel/3rdparty/perl/514/bin/perl. In addition, we have changed the Perl scripts that we distribute with cPanel & WHM to use this version of perl. These scripts include the cPanel system maintenance scripts (in /usr/local/cpanel/scripts) and in CGI scripts in cPanel & WHM. This change allows you to manage your system's Perl binary independently of cPanel & WHM. Ultimately, you can no longer expect all of the same modules to be installed into @INCfor the system's Perl binary located at /usr/bin/perlin cPanel & WHM 11.36. This also means that systems that have been provisioned after this change can manage the system's Perl binary via an RPM. For more information, read Prepare Your Perl Scripts For 11.36. /usr/local/cpanel/build-tools/buildperl. This binary is a compiled Perl script that implements many of the switches required for Perl modules to be compiled against it. When you use use the buildperlutility, you must specify a variety of MakeMakeroptions: SITEPREFIX=/usr/local/cpanel/perl PERL_LIB=/usr/local/cpanel/perl PERL_ARCHLIB=/usr/local/cpanel/perl/x86_64-linux SITELIBEXP=/usr/local/cpanel/perl SITEARCHEXP=/usr/local/cpanel/perl/x86_64-linux INSTALLPRIVLIB=/var/cpanel/lib/perl5 INSTALLSITELIB= /var/cpanel/lib/perl5 INSTALLARCHLIB= /var/cpanel/lib/perl5 PERL_SRC=/usr/local/cpanel/perl INSTALLMAN3DIR=/usr/local/cpanel/3rdparty/man scripts/perlinstaller, have always installed the modules to the /usr/libdirectory or to the /usr/local/cpaneldirectory. These paths are the search paths that the system perl binary uses at /usr/bin/perl, and they will not change. To install a module: /usr/local/cpanel/build-tools/buildperl Makefile.PL SITEPREFIX=/usr/local/cpanel/perl PERL_LIB=/usr/local/cpanel/perl PERL_ARCHLIB=/usr/local/cpanel/perl/x86_64-linux INSTALLPRIVLIB= /var/cpanel/lib/perl5 INSTALLSITELIB= /var/cpanel/lib/perl5 SITELIBEXP=/usr/local/cpanel/perl SITEARCHEXP=/usr/local/cpanel/perl/x86_64-linux INSTALLARCHLIB= /var/cpanel/lib/perl5 PERL_SRC=/usr/local/cpanel/perl INSTALLMAN3DIR=/usr/local/cpanel/3rdparty/man make make test make install make testcommand if the makecommand finishes without an error. make installcommand if the make testcommand finishes without an error. /usr/local/cpanel/3rdparty/perl/514/bin/cpan -ifollowed by the name of the module to be installed. For example: /usr/local/cpanel/3rdparty/perl/514/bin/cpan -i Module::Name #!/usr/local/cpanel/3rdparty/bin/perlat the beginning of your scripts. The shebang line, #!/usr/local/cpanel/3rdparty/bin/perl, is a symlink to the latest Perl Shipped by cPanel. The default search order is: To insert your path at the top of this list, you must add the following block into your code.To insert your path at the top of this list, you must add the following block into your code./usr/local/cpanel /usr/local/cpanel/3rdparty/perl/514/lib/perl5/cpanel_lib /usr/local/cpanel/3rdparty/perl/514/lib/perl5/5.14.3 /opt/cpanel/perl5/514/site_lib BEGIN { unshift @INC, '/path/to/my/lib'; } #!/bin/sh eval 'if [ -x /usr/bin/local/cpanel/3rdparty/bin/perl ]; then exec /usr/local/cpanel/3party/bin/local -x --$0 ${1+”$@”}; else exex /usr/bin/perl -x $0 ${1+”$@”}; fi;' if 0; #!/usr/bin/perl/ /usr/bin/perl. Instead, the Perl Module Magic User Loader is located at /usr/bin/perlml. To continue to use the Perl Magic User Loader, the system administrator must change all of the CGI scripts from #!/usr/bin/perlto #! /usr/bin/perlml. WHM's Install a Perl Module screen includes a deprecation message for the Perl Module Magic User Loader. Users can still disable the Magic Loader, but they will not be able to re-enable it. For more information on the included Perl modules, read metacpan.org.For more information on the included Perl modules, read metacpan.org./usr/local/cpanel/3rdparty/bin/perl -V If the utility responds with the version of the module that you installed, then the installation was successful. If you encounter errors, the problem may be one or more of the following:If the utility responds with the version of the module that you installed, then the installation was successful. If you encounter errors, the problem may be one or more of the following:/usr/local/cpanel/build-tools/buildperl -e'use ; print "$ ::VERSION\n";' /usr/local/cpanel/scripts/postupcpscript as a hook that validates your application's functionality externally. We also strongly recommend that this hook notify you of any failures so that you can address them yourself or file a support ticket with our technical support staff.
http://docs.cpanel.net/twiki/bin/view/SoftwareDevelopmentKit/InstallingInternalPerlModules
2013-12-05T08:07:44
CC-MAIN-2013-48
1386163041955
[]
docs.cpanel.net
Nuclear Energy, Nonproliferation, and Disarmament Document List > NRDC et al letter to Nuclear Regulatory Commission RE: the NRC's response to the Fukushima disaster This letter to Commissioners Ostendorff, Magwood, and Svinicki of the Nuclear Regulatory Commission about the NRC's response to the Fukushima disaster was sent May 9, 2011 and signed by the Natural Resources Defense Council, Physicians for Social Responsibility, Public Citizen, Nuclear Information and Resource Service, Project on Governmetn Oversight, Riverkeeper, Inc., and Pilgrim Watch.
http://docs.nrdc.org/nuclear/nuc_11080903.asp
2013-12-05T08:07:22
CC-MAIN-2013-48
1386163041955
[]
docs.nrdc.org
. BNP Paribas Wealth Management S.A. Company Profile and SWOT Analysis By bharatbookseo on July 25, 2012, 247 views, 0 comments Tags: Market Research Reports, Asset, HNWIs, BNP Paribas, Banking, Wealthy, finance Indian Luxury Boats Industry Outlook to 2020 By 3271886 on May 30, 2012, 217 views, 0 comments Tags: luxury boats, yacht, 33 footer, Ashim Mongia, Marine Solutions, West Coast Marina, Ocean Blue, Aquasail, Yachting Association of India, Boat Shows, high net worth individuals, HNWI
http://docs.thinkfree.com/docs/?selSrchType=tag&q=HNWI
2013-12-05T07:57:08
CC-MAIN-2013-48
1386163041955
[]
docs.thinkfree.com
Intermediate After checking out the current page and its linked sections, you should have a better understanding of the following:After checking out the current page and its linked sections, you should have a better understanding of the following:Note: This page assumes that you’ve experimented with Kubernetes before. At this point, you should have basic experience interacting with a Kubernetes cluster (locally with Minikube, or elsewhere), and using API objects like Deployments to run your applications. If not, you should review the Beginner App Developer topics first. - Additional Kubernetes workload patterns, beyond Deployments - What it takes to make a Kubernetes application production-ready - Community tools that can improve your development workflow Learn additional workload patterns As your Kubernetes use cases become more complex, you may find it helpful to familiarize yourself with more of the toolkit that Kubernetes provides. Basic workload objects like DeploymentsAn API object that manages a replicated application. make it straightforward to run, update, and scale applications, but they are not ideal for every scenario. The following API objects provide functionality for additional workload types, whether they are persistent or terminating. Persistent workloads Like Deployments, these API objects run indefinitely on a cluster until they are manually terminated. They are best for long-running applications. StatefulSetsManages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. - Like Deployments, StatefulSets allow you to specify that a certain number of replicas should be running for your application.Note: It’s misleading to say that Deployments can’t handle stateful workloads. Using PersistentVolumesAn API object that represents a piece of storage in the cluster. Available as a general, pluggable resource that persists beyond the lifecycle of any individual Pod. , you can persist data beyond the lifecycle of any individual Pod in your Deployment. However, StatefulSets can provide stronger guarantees about “recovery” behavior than Deployments. StatefulSets maintain a sticky, stable identity for their Pods. The following table provides some concrete examples of what this might look like: In practice, this means that StatefulSets are best suited for scenarios where replicas (Pods) need to coordinate their workloads in a strongly consistent manner. Guaranteeing an identity for each Pod helps avoid split-brain side effects in the case when a node becomes unreachable (network partition). This makes StatefulSets a great fit for distributed datastores like Cassandra or Elasticsearch. DaemonSetsEnsures a copy of a Pod is running across a set of nodes in a cluster. - DaemonSets run continuously on every node in your cluster, even as nodes are added or swapped in. This guarantee is particularly useful for setting up global behavior across your cluster, such as: - Logging and monitoring, from applications like fluentd - Network proxy or service mesh Terminating workloads In contrast to Deployments, these API objects are finite. They stop once the specified number of Pods have completed successfully. JobsA finite or batch task that runs to completion. - You can use these for one-off tasks like running a script or setting up a work queue. These tasks can be executed sequentially or in parallel. These tasks should be relatively independent, as Jobs do not support closely communicating parallel processes. Read more about Job patterns. CronJobsManages a Job that runs on a periodic schedule. - These are similar to Jobs, but allow you to schedule their execution for a specific time or for periodic recurrence. You might use CronJobs to send reminder emails or to run backup jobs. They are set up with a similar syntax as crontab. Other resources For more info, you can check out a list of additional Kubernetes resource types as well as the API reference docs. There may be additional features not mentioned here that you may find useful, which are covered in the full Kubernetes documentation. Deploy a production-ready workload The beginner tutorials on this site, such as the Guestbook app, are geared towards getting workloads up and running on your cluster. This prototyping is great for building your intuition around Kubernetes! However, in order to reliably and securely promote your workloads to production, you need to follow some additional best practices. Declarative configuration You are likely interacting with your Kubernetes cluster via kubectlA command line tool for communicating with a Kubernetes API server. . kubectl can be used to debug the current state of your cluster (such as checking the number of nodes), or to modify live Kubernetes objects (such as updating a workload’s replica count with kubectl scale). When using kubectl to update your Kubernetes objects, it’s important to be aware that different commands correspond to different approaches: - Purely imperative - Imperative with local configuration files (typically YAML) - Declarative with local configuration files (typically YAML) There are pros and cons to each approach, though the declarative approach (such as kubectl apply -f) may be most helpful in production. With this approach, you rely on local YAML files as the source of truth about your desired state. This enables you to version control your configuration, which is helpful for code reviews and audit tracking. For additional configuration best practices, familiarize yourself with this guide. Security You may be familiar with the principle of least privilege—if you are too generous with permissions when writing or using software, the negative effects of a compromise can escalate out of control. Would you be cautious handing out sudo privileges to software on your OS? If so, you should be just as careful when granting your workload permissions to the Kubernetes APIThe application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster. server! The API server is the gateway for your cluster’s source of truth; it provides endpoints to read or modify cluster state. You (or your cluster operatorA person who configures, controls, and monitors clusters. ) can lock down API access with the following: - ServiceAccountsProvides an identity for processes that run in a Pod. - An “identity” that your Pods can be tied to - RBACManages authorization decisions, allowing admins to dynamically configure access policies through the Kubernetes API. - One way of granting your ServiceAccount explicit permissions For even more comprehensive reading about security best practices, consider checking out the following topics: - Authentication (Is the user who they say they are?) - Authorization (Does the user actually have permissions to do what they’re asking?) Resource isolation and management If your workloads are operating in a multi-tenant environment with multiple teams or projects, your container(s) are not necessarily running alone on their node(s). They are sharing node resources with other containers which you do not own. Even if your cluster operator is managing the cluster on your behalf, it is helpful to be aware of the following: - NamespacesAn abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster. , used for isolation - Resource quotas, which affect what your team’s workloads can use - Memory and CPU requests, for a given Pod or container - Monitoring, both on the cluster level and the app level This list may not be completely comprehensive, but many teams have existing processes that take care of all this. If this is not the case, you’ll find the Kubernetes documentation fairly rich in detail. Improve your dev workflow with tooling As an app developer, you’ll likely encounter the following tools in your workflow. kubectl kubectl is a command-line tool that allows you to easily read or modify your Kubernetes cluster. It provides convenient, short commands for common operations like scaling app instances and getting node info. How does kubectl do this? It’s basically just a user-friendly wrapper for making API requests. It’s written using client-go, the Go library for the Kubernetes API. To learn about the most commonly used kubectl commands, check out the kubectl cheatsheet. It explains topics such as the following: - kubeconfig files - Your kubeconfig file tells kubectl what cluster to talk to, and can reference multiple clusters (such as dev and prod). The various output formats available - This is useful to know when you are using kubectl getto list information about certain API objects. The JSONPath output format - This is related to the output formats above. JSONPath is especially useful for parsing specific subfields out of kubectl getoutput (such as the URL of a ServiceA way to expose an application running on a set of Pods as a network service. ). kubectl runvs kubectl apply- This ties into the declarative configuration discussion in the previous section. For the full list of kubectl commands and their options, check out the reference guide. Helm To leverage pre-packaged configurations from the community, you can use Helm chartsA package of pre-configured Kubernetes resources that can be managed with the Helm tool. . Helm charts package up YAML configurations for specific apps like Jenkins and Postgres. You can then install and run these apps on your cluster with minimal extra configuration. This approach makes the most sense for “off-the-shelf” components which do not require much custom implementation logic. For writing your own Kubernetes app configurations, there is a thriving ecosystem of tools that you may find useful. Explore additional resources References Now that you’re fairly familiar with Kubernetes, you may find it useful to browse the following reference pages. Doing so provides a high level view of what other features may exist: In addition, the Kubernetes Blog often has helpful posts on Kubernetes design patterns and case studies. What’s next If you feel fairly comfortable with the topics on this page and want to learn more, check out the following user journeys: - Advanced.
https://v1-15.docs.kubernetes.io/docs/user-journeys/users/application-developer/intermediate/
2020-08-03T23:00:18
CC-MAIN-2020-34
1596439735836.89
[]
v1-15.docs.kubernetes.io
iOS Scrollable Area From Xojo Documentation Contents A scrollable area allows users to scroll a container that otherwise has more content than can fit in the visible area. When users interacts with content in the scrollable area, horizontal or vertical scroll indicators briefly display to show users that they can scroll around to reveal more content. Other than the transient scroll indicators, a scrollable area has no visual appearance. Refer to iOSScrollableArea in the Language Reference for the full list of events, properties and methods. Properties - This is a Container Control to display in the scrollable area. If the container has more content than can fit in the view, the user will be able to scroll to see it. Usage To use a scrollable area, drag the control from the Library to the layout. In the Inspector choose an existing Container Control for the Content property. Use the Content property to get a reference to the Container Control at runtime so that you can call methods on it (or access its controls). For example: Var cc As MyContainer cc = MyContainer(ScrollableArea1.Content) // cast to MyContainer // Now you can call methods on it cc.UpdateData There may be situations where you need to dynamically alter the size of the Container Control at run-time, perhaps because the size of its content has increased. You can do this by adding a Height constraint to the Content area like this: Var contentConstraint As iOSLayoutConstraint contentConstraint = New iOSLayoutConstraint(ScrollableArea1.Content, _ iOSLayoutConstraint.AttributeTypes.Height, _ iOSLayoutConstraint.RelationTypes.Equal, _ Nil, _ iOSLayoutConstraint.AttributeTypes.None, _ 1, _ newHeight) Self.AddConstraint(contentConstraint) Example Projects - Examples/iOS/Controls/ScrollableArea See Also iOSScrollableArea class; UserGuide:iOS Container Control, UserGuide:iOS UI topics
http://docs.xojo.com/UserGuide:iOS_Scrollable_Area
2020-08-04T00:22:46
CC-MAIN-2020-34
1596439735836.89
[]
docs.xojo.com
You can create custom code to do things like calculations, data comparisons, or transformations. You can tell Flow Builder when to trigger your hook. Create your hook Step 1: Find the records Note: If you already have the flow you want open in Flow Builder, skip to “Step 2: Create your hook.” - Go go to integrator.io. - Click the integration tile that has the flow with the records you want to map. - In the list, find the flow you want. - On the right, under “Actions,” click the down arrow > Edit . Step 2: Create your hook Note: The Define Hook editor is a dynamic form. The fields in the editor may vary based on the app and the record type. - To the right of your app’s card, click JS . The Define Hooks editor will pop up on your screen. - To define your hook, fill out the fields in the editor: - Hook type: - Script: Your code is managed and executed by integrator.io. - Stack: Your code will be hosted on either your own server orAWS lambda. - Pre save page: This function will be invoked at the end of the data record, after the transformations and the filters. It’s the last step before the export record is passed to the destination app.The functions will be in a drop-down at the top of your script editor: - Pre map: The function your write in your script or stack will be triggered before the mapping. - Post map: The function you write in your script or stack will be triggered after the mapping. - Post Submit: The function you write in your script or stack will be triggered after the record is submitted. - Post aggregate: The function you write in your script or stack will be triggered after all of the data has been gathered. - Script or stack record: To the right of the “Pre save page,” select your script or stack from the drop-down. - Add a script: If you’re using a script, you can add a new script. Click Add . - Edit script: If you’re using a script, you can edit it. Click Edit . - Click Save. Check out our community forum for tips on creating hooks. You can ask question and post answers. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360025638472-Create-a-hook-for-your-export-data-record
2020-08-03T23:58:47
CC-MAIN-2020-34
1596439735836.89
[]
docs.celigo.com
Pane Title - 3 minutes to read You can show a title above the pane to distinguish between multiple panes. End users can use the expand button next to the title to hide or show panes. This document explains how to: Add a Pane Title. NOTE The Chart Control can hide its elements if there is insufficient space to display them. Elements are hidden in the following order: To make the Chart Control always display its elements, disable the ChartControl.AutoLayout property.; } The code above uses the following API members: Align a Pane Title; Use the following API members to align pane titles: Customize a Title's Text Appearance; The code above uses the following properties:
https://docs.devexpress.com/WindowsForms/120591/controls-and-libraries/chart-control/chart-elements/diagram/panes/pane-title
2020-08-04T01:09:03
CC-MAIN-2020-34
1596439735836.89
[array(['/WindowsForms/images/pane-title133831.png', 'pane-title'], dtype=object) array(['/WindowsForms/images/design-time-pane-title-settings133847.png', 'design-time-pane-title-settings'], dtype=object) array(['/WindowsForms/images/pane-title-alignment133854.png', 'pane-title-alignment'], dtype=object) array(['/WindowsForms/images/pane-title-appearance133855.png', 'pane-title-appearance'], dtype=object) ]
docs.devexpress.com
Standard Rust log crate adapter to slog-rs This crate allows using slog features with code using legacy log statements. log crate expects a global logger to be registered (popular one is env_logger) as a handler for all info!(...) and similar. slog-stdlog will register itself as log global handler and forward all legacy logging statements to slog's Logger. That means existing logging debug! (even in dependencies crates) work and utilize slog composable drains. See init() documentation for minimal working example.
https://docs.rs/crate/slog-stdlog/1.0.0
2020-08-03T23:51:20
CC-MAIN-2020-34
1596439735836.89
[]
docs.rs
Using the driver selection GUI - PnP driver paths In defining computer settings for a Microsoft Windows system package, you can use the driver selection GUI to browse to the data store and select PnP drivers. Note This information applies to Windows operating systems other than Windows 2008. For information about specifying the location of Windows 2008 PnP drivers, see Computer settings - Windows 2008. To select PnP driver paths - On the Computer Settings tab, next to PnP driver paths, click Browse. For Select DataStore, browse to the data store that contains the PnP drivers. Note Browsing the data store for the PnP drivers has the following requirements: - The drivers must exist in the same data store as the rest of the installation files for this system package. - A server object whose name matches the LOCATION property of the data store instance you selected must exist in the BMC Server Automation environment. In the left pane, expand the root folder down to the folder that contains drivers or subfolders with drivers. Note If you filled in the Path to $OEM$ directory, the navigation panel uses that $OEM$ path as its root. If you did not fill in the Path to $OEM$ directory, the navigation panel uses the data store root as its root. All drivers in the folder and its subfolders appear in the right panel. Note The selected folder must contain files with the .inf extension. If you select a folder that does not contain any .inf files, the system displays an error message. Use the arrow keys to further select drivers. Then click OK. The paths appear in the PnP driver paths field on the Computer Settings panel and the required entries are added to the Unattend Entries file. The paths appear in NSH format. Note If you do not fill in the Path to $OEM$ directory, the system checks to see if the directories you specify are directly beneath the i386 or amd64 directory. If they are not, they are automatically copied to a location directly beneath the i386 or amd64 directory. (For this to work, the system package type must specify the OS installer path. Otherwise, an error message appears.) For more information, see When to use Specify path to $OEM$ directory. - If you have not associated this system package with this data store, a message appears prompting whether to set this data store as the default data store for this system package. To display the contents of this data store every time you start the driver selection GUI from this system package, click Yes. (Note that if you associate this data store with this system package, this data store is displayed by default if you subsequently browse for mass storage drivers.)
https://docs.bmc.com/docs/ServerAutomation/86/using/creating-and-modifying-bmc-server-automation-jobs/panel-reference-for-provision-jobs/system-package-panels-os-specific/defining-settings-for-microsoft-windows-servers/computer-settings-windows-operating-systems-earlier-than-windows-2008/using-the-driver-selection-gui-pnp-driver-paths
2020-08-04T00:26:41
CC-MAIN-2020-34
1596439735836.89
[]
docs.bmc.com
LibreOffice » o3tl View module in: cgit Doxygen Very basic template functionality, a bit like boost or stl, but specific to LibO o3tl stands for "OOo o3, get it? template library" From The scope for o3tl is admittedly kind of ambitious, as it should contain "...very basic (template) functionality, comparable to what's provided by boost or stl, but specific to OOo (what comes to mind are e.g. stl adapters for our own data types and UNO, and stuff that could in principle be upstreamed to boost, but isn't as of now)." o3tl/inc/o3tl/cow_wrapper.hxx A copy-on-write wrapper. o3tl/inc/o3tl/lazy_update.hxx This template collects data in input type, and updates the output type with the given update functor, but only if the output is requested. Useful if updating is expensive, or input changes frequently, but output is only comparatively seldom used. o3tl/inc/o3tl/range.hxx Represents a range of integer or iterator values. o3tl/inc/o3tl/vector_pool.hxx Simple vector-based memory pool allocator. o3tl/inc/o3tl/functional.hxx Some more templates, leftovers in spirit of STLport's old functional header that are not part of the C++ standard (STLport has been replaced by direct use of the C++ STL in LibreOffice). Generated by Libreoffice CI on lilith.documentfoundation.org Last updated: 2020-08-01 10:38:02 | Privacy Policy | Impressum (Legal Info)
https://docs.libreoffice.org/o3tl.html
2020-08-04T00:24:10
CC-MAIN-2020-34
1596439735836.89
[]
docs.libreoffice.org
Debug Example : Setting conditional breakpoint in catch block¶ For this Debug example, we will be setting a breakpoint in the catch section of try-catch block. By doing this, we will only be triggering our breakpoint when an exception has been thrown. There will be two examples shown: one without using the Condition field of the breakpoint, and one using this field to make it so the breakpoint only triggers for certain types of exceptions. We will be using the following file for both examples <% Object[] objects = { "1" , "2" , null , true , "9999999999999999", Long.MAX_VALUE , "1.3" }; for( Object object : objects ){ try { Integer.parseInt( (String) object ); } catch ( Exception e ) { out.print( "Exception thrown of type " + e.getClass().getName() + " for object of value " + object + "<br/>" ); } } %> When this file executes through normally, we get the following output: As you can see, we regularly have 5 exceptions being caught, of which there are two alternating types: NumberFormatException and ClassCastException. Unconditional Breakpoint in catch block¶ For this example, we will use a breakpoint with the following configuration: Interactive Debugger View¶ When this breakpoint is triggered for the page with the code above, we will get 5 pauses on line 7 for each exception that is caught within the looped try-catch block. Note for this example, there is a copy of objectArrayExcep.jsp in the /opt/ directory. I have configured this as a source, so FusionReactor displays that while viewing the paused thread's details. Conditional Breakpoint in catch block¶ The caught exception variable must be called at some point in your code or may be optimized away during compilation, meaning the Exception variable may not exist. Suppose that we do want to worry about debugging on ClassCastException but instead only want to pause in the catch block for NumberFormatException. We can do this by adding a Condition to the breakpoint. Conditions are written in the Groovy scripting language (for .jsp and Java classes) or ColdFusion scripting language (for any ColdFusion file extension locations). For our objectArrayExcep.jsp example, we would do the following: This means that the breakpoint will trigger on line 7 if the variable e is of class type NumberFormatException. Therefore, for our new breakpoint we would only get 3 pauses, all for NumberFormatException cases. This can be used in combination with any trigger type ( for example, emails ) to tailor the breakpoint to the cases you are more concerned about.
https://docs.fusion-reactor.com/Debugger/Debug-Example-4/
2020-08-04T00:20:39
CC-MAIN-2020-34
1596439735836.89
[array(['/attachments/245553613/245553636.png', None], dtype=object) array(['/attachments/245553613/245553618.png', None], dtype=object) array(['/attachments/245553613/245553624.png', None], dtype=object)]
docs.fusion-reactor.com
JDBC Error History¶ The JDBC Error History page is almost identical to the History page, except that it only shows JDBC transactions flagged by FusionReactor as being in error. The error is usually detected by FusionReactor observing an exception being raised during the execution of a JDBC operation. Note The JDBC Error History is the same as Transaction->Error History but filtered to only show transactions of JDBC type.
https://docs.fusion-reactor.com/JDBC/JDBC-Error-History/
2020-08-04T00:11:38
CC-MAIN-2020-34
1596439735836.89
[]
docs.fusion-reactor.com
Clustering Algorithm Guide Sigalisers are the clustering algorithms in Moogsoft AIOps that group alerts based on factors such as time, language, similarity and proximity. The clustering algorithms available include: You can configure and run multiple different clustering algorithms on the same instance of Moogsoft AIOps. The algorithms you choose depend on your specific use cases and the type of Situations you want your operators to receive. AIOps UI, see Configure a Cookbook Recipe and Configure a Cookbook. You can also configure Cookbook and its Recipes via the Graze API. AIOps UI, see Configure Tempus. You can also configure Tempus via the Graze API. Feedback Warning Feedback is a Beta feature. Feedback is the neural-based algorithm that learns and unlearns actions based on user feedback. See Feedback for more information. Type: Neural/learns user feedback. Use cases: Feedback is currently a prototype and should not be used in production environments. You can use it if the other clustering algorithms did not correlate anything, as you can teach it what to cluster. For example, if you have a set of alerts that you want to cluster but they didn't cluster through time, attribute similarity or topological proximity, you can teach the system and it learns to cluster those alerts. Alternatively, you might want to use Feedback if you want to manually create Situations and teach Moogsoft AIOps to cluster the same type of alerts. Another use case is to use Feedback alongside Tempus. If you have several team members looking at time-based correlation with an inherent degree of fuzziness, they can use Feedback to train the system to remember good Situations and forgot about bad Situations and persist that behavior in future. For example, you could teach it to remember when there was a server failure but to ignore the printer ink failure and persist that behavior. Benefits: Feedback offers the following advantages: No enrichment required. See Enrichment Overview. Allows operators to push domain knowledge back into the system. Can be trained to only create the Situations you are interested in. Configuration: Both UI and backend configuration. See Configure Feedback for more information.
https://docs.moogsoft.com/AIOps.7.3.0/clustering-algorithm-guide.html
2020-08-03T23:32:44
CC-MAIN-2020-34
1596439735836.89
[]
docs.moogsoft.com
Downloading New Versions The Progress® Telerik® UI for ASP.NET Core Visual Studio (VS) extensions enable you to keep your projects updated. Latest Version Acquirer Tool The Latest Version Acquirer tool automatically retrieves the latest Telerik UI for ASP.NET Core distribution which is available on the Telerik website. Once a day, upon loading a project with Telerik UI for ASP.NET Core controls, the extensions query the Telerik website for a new version of Telerik UI for ASP.NET Core. When a new version is detected, a notification is displayed. Clicking Update Now starts the Latest Version Acquirer tool which prompts for your Telerik credentials on its first page. If you do not have a account, you can create one through the Create an account for free link. The Download Process - Go to the release notes of the Telerik UI for ASP.NET Core distribution to get information on the latest available versions. - To avoid having to enter your Telerik credentials multiple times, use the Save my password checkbox. The persistence is securely done and the credentials are saved in a per-user context. Other users on the machine do not have access to your stored credentials. In the dialog that appears, confirm the download. As a result, the latest version automatically starts to download. Click OK when the download process of the latest version completes. To access the latest version of Telerik UI for ASP.NET Core, after the download completes, go to New Project Wizard. - The Download buttons of the New Project Wizard launch the Latest Version Acquirer tool. - The Latest Version Acquirer tool downloads the .zipfiles which contain the latest Telerik UI for ASP.NET Core binaries and any resources that are vital for a Telerik UI for ASP.NET Core application. These get unpacked in the %APPDATA%\Telerik\Updatesfolder by default. If you find the list of the offered packages too long and you do not need the older versions, close the VS and use the Windows Explorer to delete these distributions.
https://docs.telerik.com/aspnet-core/installation/vs-integration/latest-version-retrieval
2020-08-03T23:59:09
CC-MAIN-2020-34
1596439735836.89
[array(['../../installation/vs-integration/images/lva_notification.png', 'Getting the latest version notification'], dtype=object) array(['../../installation/vs-integration/images/lva1.png', 'Getting the latest version dialog'], dtype=object) ]
docs.telerik.com
Use this SDK to interact with the LaunchKey Platform API in your Python application. Basic documentation can also be found in the SDK README file. The SDK is available in these locations: Before you can begin using the Platform API, you need an service. If you have not created an service yet, you can use our Help Center to create one. The LaunchKey SDK is broken into credential based factories with access to functionality based clients. Factories are based on the credentials supplied. The Organization Factory uses Organization credentials, the Directory Factory uses Directory credentials, and the Service Factory uses Service credentials. Each factory provides clients which are accessible to the factory. The availability is based on the hierarchy of the entities themselves. Below is a matrix of available services for each factory. from launchkey.factories import ServiceFactory, DirectoryFactory directory_id = "37d98bb9-ac71-44b7-9ac0-5d75e31e627a" directory_private_key = open('directory_private_key.key').read() service_id = "9ecc57e0-fb0f-4971-ba12-399b630158b0" service_private_key = open('service_private_key.key').read() directory_factory = DirectoryFactory(directory_id, directory_private_key) directory_client = directory_factory.make_directory_client() service_factory = ServiceFactory(service_id, service_private_key) service_client = service_factory.make_service_client() from launchkey.factories import OrganizationFactory organization_id = "bff1602d-a7b3-4dbe-875e-218c197e9ea6" organization_private_key = open('organization_private_key.key').read() directory_id = "37d98bb9-ac71-44b7-9ac0-5d75e31e627a" service_id = "9ecc57e0-fb0f-4971-ba12-399b630158b0" user_name = "myuser" organization_factory = OrganizationFactory(organization_id, organization_private_key) directory_client = organization_factory.make_directory_client(directory_id) service_client = organization_factory.make_service_client(service_id)
https://docs.launchkey.com/service-sdk/python/sdk-v3/index.html
2018-11-12T22:34:13
CC-MAIN-2018-47
1542039741151.56
[]
docs.launchkey.com
Dictionary Integration¶ You can integrate the Dictionary from Signavio Process Manager to work with Signavio Workflow Accelerator. Doing so allows you to pull data from Dictionary entries and use them in your workflows. Note Your Signavio Workflow Accelerator organization needs to be connected to a Signavio Process Manager workspace before you can use this feature. Activating the Dictionary integration¶ In Signavio Workflow Accelerator, select Services & Connectors from the top-right user menu. Select Signavio Process Manager Integration, then select a user that also has a Signavio Process Manager account. Make sure this user is able to see all desired dictionary entries, as all requests to retrieve dictionary items will be done with this user. (Don’t worry–you can always change this user later.) In the dropdown, select which dictionary categories you want to use. Open one of these categories to see which fields are now available to use in Signavio Workflow Accelerator. Using Dictionary categories with forms¶ In Signavio Workflow Accelerator, under the Process tab, open the process editor. Select a process element that requires configuration, such as a user task. Under the Configuration tab, the categories you imported from Dictionary will be displayed as fields. Drag fields to your form to use them. Now, when you execute your case, you will see a field where you can type and search for entries. Once you find the entry you want to use, simply click it to use it. Hint You can also use Dictionary entries in gateway conditions, script tasks, and emails the same as any other fields. Additional info¶ Whenever you select an entry used in the task form during the case execution, Signavio Workflow Accelerator takes a snapshot of that entry. If the entry is changed later on, the snapshot in the case is NOT automatically updated. This is so that Signavio Workflow Accelerator can properly track past decisions that were based on dictionary entries. If a selected entry is set as the default value for a field, Signavio Workflow Accelerator will take a snapshot when the case is started. Whenever you add or remove attributes to or from the dictionary category, you have to press the Reload integration button. The new attributes can then be used in the workflow editor after reloading the integration. They will become available as nested fields of the category fields. Note - Attributes that have been removed from a dictionary category will still show up in old cases. You have to manually remove old dictionary attributes anywhere they are used in a workflow. - Dictionary categories can be deactivated at any time. Once they have been deactivated, they can no longer be used in the workflow editor. You also have to manually remove deactivated categories anywhere they are used in a workflow. Old cases will still show data from deactivated categories. Troubleshooting¶ - When setting up the Dictionary integration, you may run into the following error message: - Could not set up the Dictionary integration for the following reason: The tenant ID is not configured for your Process Manager workspace. Please contact customer support. To resolve this, you need to try to sync the configuration from Process Manager again. Open Setup > Manage approval workflows in Process Manager and click Synchronize configuration now. If that option is not available, you should contact the Signavio Support Team to set the tenant ID for you.
https://docs.signavio.com/userguide/workflow/en/integration/dictionary.html
2018-11-12T23:10:44
CC-MAIN-2018-47
1542039741151.56
[]
docs.signavio.com
Descriptive information about fields or filters in a report template can be added to the Field Hints portion of the Template Configuration panel. For example, a circulation report template might include the field, Circ ID. You can add content to the Field hints to further define this field for staff and provide a reminder about the type of information that they should select for this field. To view a field hint, click the Column Picker, and select Field Hint. The column will be added to the display. To add or edit a field hint, select a filter or field, and click Change Field Hint. Enter text, and click Ok.
http://docs.evergreen-ils.org/dev/_field_hints.html
2018-11-12T23:27:05
CC-MAIN-2018-47
1542039741151.56
[]
docs.evergreen-ils.org
. Minimum software requirements for SharePoint Server 2019 This section provides minimum software requirements for each server in the farm. Minimum requirements for a database server in a farm One of the following: Microsoft SQL Server 2016 RTM Standard or Enterprise Editions Microsoft SQL Server 2017 RTM Standard or Enterprise Editions for Application Server role Microsoft .NET Framework version 4.7.2 Microsoft SQL Server 2012 Service Pack 4: Install,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Health,Web-Http-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Http-Tracing,Web-Security,Web-Basic-Auth,Web-Windows-Auth,Web-Filtering,Web-Performance,Web-Stat-Compression,Web-Dyn-Compression,Web-Mgmt-Tools,Web-Mgmt-Console,WAS,WAS-Process-Model,WAS-NET-Environment,WAS-Config-APIs,Windows-Identity-Foundation,Xps-Viewer -IncludeManagementTools -Verbose Service Pack 4 (SP4) Native Client (installs with Microsoft SQL Server 2012 Feature Pack)4 Native Client.
https://docs.microsoft.com/en-us/sharepoint/install/hardware-and-software-requirements-2019
2018-11-12T23:12:23
CC-MAIN-2018-47
1542039741151.56
[]
docs.microsoft.com
Media Frame Media Source Group Frame Media Source Group Frame Media Source Group Frame Media Source Group Frame Class Source Group Definition Represents a group of media frame sources that can be used simultaneously by a MediaCapture. public : sealed class MediaFrameSourceGroup : IMediaFrameSourceGroup struct winrt::Windows::Media::Capture::Frames::MediaFrameSourceGroup : IMediaFrameSourceGroup public sealed class MediaFrameSourceGroup : IMediaFrameSourceGroup Public NotInheritable Class MediaFrameSourceGroup Implements IMediaFrameSourceGroup // This class does not provide a public constructor. - Attributes - Windows 10 requirements Remarks Get an instance of this class by calling FindAllAsync and selecting an instance from the returned list. If you know the unique identifier of a media frame source group, you can get an instance of this class by calling FromIdAsync. Initialize a MediaCapture object to use the selected MediaFrameSourceGroup by assigning the group to the SourceGroup property of a MediaCaptureInitializationSettings object, and then passing that settings object into InitializeAsync. For how-to guidance on using MediaFrameSource to capture frames, see Process media frames with MediaFrameReader.
https://docs.microsoft.com/en-us/uwp/api/windows.media.capture.frames.mediaframesourcegroup
2018-11-12T22:39:08
CC-MAIN-2018-47
1542039741151.56
[]
docs.microsoft.com
11.4 Futures Parallelism with Futures in The Racket Guide introduces futures.Parallelism with Futures in The Racket Guide introduces futures. Currently, parallel support for future is enabled by default for Windows, Linux x86/x86_64, and Mac OS x86/x86_64. To enable support for other platforms, use --enable-futures with configure when building Racket. The future and touch functions from racket/future provide access to parallelism as supported by the hardware and operating— A future never runs in parallel if all of the custodians that allow its creating thread to run are shut down. Such futures can execute through a call to touch, however. 11.4.1 Creating and Touching Futures Between a call to future and touch for a given future, the given thunk may run speculatively in parallel to other computations, as described above. With a normal future, certain circumstances might prevent the logging of unsafe operations. For example, when executed with debug-level logging, might log three messages, one for each printf invocation. However, if the touch is performed before the future has a chance to start running in parallel, the future thunk evaluates in the same manner as any ordinary thunk, and no unsafe operations are logged. Replacing future with would-be-future ensures the logging of all three calls to printf. 11.4.2 Future Semaphores A future semaphore is similar to a plain semaphore, but future-semaphore operations can be performed safely in parallel (to synchronize parallel computations). In contrast, operations on plain semaphores are not safe to perform in parallel, and they therefore prevent a computation from continuing in parallel. 11.4.3 Future Performance Logging Racket traces use logging (see Logging) extensively to report information about how futures are evaluated. Logging output is useful for debugging the performance of programs that use futures. Though textual log output can be viewed directly (or retrieved in code via trace-futures), it is much easier to use the graphical profiler tool provided by future-visualizer. Future events are logged with the topic 'future. In addition to its string message, each event logged for a future has a data value that is an instance of a future-event prefab structure: The future-id field is an exact integer that identifies a future, or it is #f when action is 'missing. The future-id field is particularly useful for correlating logged events. The proc-id fields is an exact, non-negative integer that identifies a parallel process. Process 0 is the main Racket process, where all expressions other than future thunks evaluate. The time field is an inexact number that represents time in the same way as current-inexact-milliseconds. The action field is a symbol: 'create: a future was created. 'complete: a future’s thunk evaluated successfully, so that touch will produce a value for the future immediately. 'start-work and 'end-work: a particular process started and ended working on a particular future. 'start-0-work: like 'start-work, but for a future thunk that for some structural reason could not be started in a process other than 0 (e.g., the thunk requires too much local storage to start). 'start-overflow-work: like 'start-work, where the future thunk’s work was previously stopped due to an internal stack overflow. 'sync: blocking (processes other than 0) or initiation of handing (process 0) for an “unsafe” operation in a future thunk’s evaluation; the operation must run in process 0. 'block: like 'sync, but for a part of evaluation that must be delayed until the future is touched, because the evaluation may depend on the current continuation. 'touch (never in process 0): like 'sync or 'block, but for a touch operation within a future thunk. 'overflow (never in process 0): like 'sync or 'block, but for the case that a process encountered an internal stack overflow while evaluating a future thunk. 'result or 'abort: waiting or handling for 'sync, 'block, or 'touch ended with a value or an error, respectively. 'suspend (never in process 0): a process blocked by 'sync, 'block, or 'touch abandoned evaluation of a future; some other process may pick up the future later. 'touch-pause and 'touch-resume (in process 0, only): waiting in touch for a future whose thunk is being evaluated in another process. 'missing: one or more events for the process were lost due to internal buffer limits before they could be reported, and the time-id field reports an upper limit on the time of the missing events; this kind of event is rare. Assuming no 'missing events, then 'start-work, 'start-0-work, 'start-overflow-work is always paired with 'end-work; 'sync, 'block, and 'touch are always paired with 'result, 'abort, or 'suspend; and 'touch-pause is always paired with 'touch-resume. In process 0, some event pairs can be nested within other event pairs: 'sync, 'block, or 'touch with 'result or 'abort; and 'touch-pause with 'touch-resume. An 'block in process 0 is generated when an unsafe operation is handled. This type of event will contain a symbol in the unsafe-op-name field that is the name of the operation. In all other cases, this field contains #f. The prim-name field will always be #f unless the event occurred on process 0 and its action is either 'block or 'sync. If these conditions are met, prim-name will contain the name of the Racket primitive which required the future to synchronize with the runtime thread (represented as a symbol). The user-data field may take on a number of different values depending on both the action and prim-name fields: 'touch on process 0: contains the integer ID of the future being touched. 'sync and prim-name = |allocate memory|: The size (in bytes) of the requested allocation. 'sync and prim-name = jit_on_demand: The runtime thread is performing a JIT compilation on behalf of the future future-id. The field contains the name of the function being JIT compiled (as a symbol). 'create: A new future was created. The field contains the integer ID of the newly created future.
https://docs.racket-lang.org/reference/futures.html
2018-11-12T23:16:27
CC-MAIN-2018-47
1542039741151.56
[]
docs.racket-lang.org
obspy.imaging - Plotting routines for ObsPy¶ This module provides routines for plotting and displaying often used in seismology. It can currently plot waveform data, generate spectrograms and draw beachballs. The module obspy.imaging depends on the plotting module matplotlib. Seismograms¶ This submodule can plot multiple Trace in one Stream object and has various other optional arguments to adjust the plot, such as color and tick format changes. Additionally the start and end time of the plot can be given as UTCDateTime objects. Examples files may be retrieved via. >>> from obspy import read >>> st = read() >>> print(st) 3 Trace(s) in Stream: BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples BW.RJOB..EHN | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples BW.RJOB..EHE | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples >>> st.plot(color='gray', tick_format='%I:%M %p', ... starttime=st[0].stats.starttime, ... endtime=st[0].stats.starttime+20) (Source code, png, hires.png) Spectrograms¶ The obspy.imaging.spectrogram submodule plots spectrograms. The spectrogram will on default have 90% overlap and a maximum sliding window size of 4096 points. For more info see obspy.imaging.spectrogram.spectrogram(). >>> from obspy import read >>> st = read() >>> st[0].spectrogram(log=True) (Source code, png, hires.png) Beachballs¶ Draws a beach ball diagram of an earthquake focal mechanism. Note ObsPy ships with two engines for beachball generation. - obspy.imaging.beachball is based on the program from the Generic Mapping Tools (GMT) and the MATLAB script bb.m written by Andy Michael and Oliver Boyd, which both have known limitations. - obspy.imaging.mopad_wrapper is based on the the Moment tensor Plotting and Decomposition tool (MoPaD) [Krieger2012]. MoPaD is more correct, however it consumes much more processing time. The function calls for creating beachballs are similar in both modules. The following examples are based on the first module, however those example will also work with MoPaD by using >>> from obspy.imaging.mopad_wrapper import beachball and >>> from obspy.imaging.mopad_wrapper import beach respectively. Examples The focal mechanism can be given by 3 (strike, dip, and rake) components. The strike is of the first plane, clockwise relative to north. The dip is of the first plane, defined clockwise and perpendicular to strike, relative to horizontal such that 0 is horizontal and 90 is vertical. The rake is of the first focal plane solution. 90 moves the hanging wall up-dip (thrust), 0 moves it in the strike direction (left-lateral), -90 moves it down-dip (normal), and 180 moves it opposite to strike (right-lateral). >>> from obspy.imaging.beachball import beachball >>> np1 = [150, 87, 1] >>> beachball(np1) <matplotlib.figure.Figure object at 0x...> (Source code, png, hires.png) The focal mechanism can also be specified using the 6 independent components of the moment tensor (M11, M22, M33, M12, M13, M23). For obspy.imaging.beachball.beachball() (1, 2, 3) corresponds to (Up, South, East) which is equivalent to (r, theta, phi). For obspy.imaging.mopad_wrapper.beachball() the coordinate system can be chosen and includes the choices ‘NED’ (North, East, Down), ‘USE’ (Up, South, East), ‘NWU’ (North, West, Up) or ‘XYZ’. >>> from obspy.imaging.beachball import beachball >>> mt = [-2.39, 1.04, 1.35, 0.57, -2.94, -0.94] >>> beachball(mt) <matplotlib.figure.Figure object at 0x...> (Source code, png, hires.png) For more info see obspy.imaging.beachball.beachball() and obspy.imaging.mopad_wrapper.beachball(). Plot the beach ball as matplotlib collection into an existing plot. >>> import matplotlib.pyplot as plt >>> from obspy.imaging.beachball import beach >>> >>> np1 = [150, 87, 1] >>> mt = [-2.39, 1.04, 1.35, 0.57, -2.94, -0.94] >>> beach1 = beach(np1, xy=(-70, 80), width=30) >>> beach2 = beach(mt, xy=(50, 50), width=50) >>> >>> plt.plot([-100, 100], [0, 100], "rv", ms=20) [<matplotlib.lines.Line2D object at 0x...>] >>> ax = plt.gca() >>> ax.add_collection(beach1) >>> ax.add_collection(beach2) >>> ax.set_aspect("equal") >>> ax.set_xlim((-120, 120)) (-120, 120) >>> ax.set_ylim((-20, 120)) (-20, 120) (Source code, png, hires.png) For more info see obspy.imaging.beachball.beach() and obspy.imaging.mopad_wrapper.beach(). Saving plots into files¶ All plotting routines offer an outfile argument to save the result into a file. The outfile parameter is also used to automatically determine the file format. Available output formats mainly depend on your matplotlib settings. Common formats are png, svg, pdf or ps. >>> from obspy import read >>> st = read() >>> st.plot(outfile='graph.png')
https://docs.obspy.org/packages/obspy.imaging.html
2018-11-12T22:25:02
CC-MAIN-2018-47
1542039741151.56
[array(['../_images/obspy-imaging-1.png', '../_images/obspy-imaging-1.png'], dtype=object) array(['../_images/obspy-imaging-2.png', '../_images/obspy-imaging-2.png'], dtype=object) ]
docs.obspy.org
General Transformations General transformations are devices that allow us to convert any data structure into a retroactive data structure. General transformation for partial retroactivity Implemented, with an O(r) overhead. This implementation uses the rollback method to implement retroactivity. It stores up to r prior operations as well as the state of the data structure before those operations, so that these operations can be reversed. When an operation is removed or inserted, the current state of the data structure is “refreshed” from the past state by applying each operation in sequence. Implementing this proved to be an entertaining exercise in abstraction: it needs to be able to wrap any data structure, and allow any form of operation on that data structure. So, operations are represented – and passed as input – using Python functions: an operation is any function which takes in a data structure and returns a new data structure. General transformation from partial to full retroactivity Implemented, with an O(m) overhead. This implementation stores a list of partially-retroactive data structures, applying or deleting operations from those partially-retroactive data structures when relevant. When the fully-retroactive data structure is queried, we simply query the relevant partially-retroactive data structure. The better O(√m) implementation requires an implementation of persistence.
https://python-retroactive-data-structures.readthedocs.io/en/latest/general/
2018-11-12T23:17:43
CC-MAIN-2018-47
1542039741151.56
[]
python-retroactive-data-structures.readthedocs.io
This is the 2.3 Beta release version of Magento documentation. Content in this version is subject to change. For additional versions, see Magento Documentation and Resources. Configuring Admin Security Magento recommends that you take a multifaceted approach to protect the security of your store. You can begin by using a custom Admin URL that is not easy to ascertain, rather than the obvious “AdminThe password-protected back office of your store where orders, catalog, content, and configurations are managed.” or “Backend.” By default, passwords that are used to log in to the Admin must be seven or more characters long, and include both letters and numbers. As a best practice, use only strong Admin passwords that include a combination of letters, numbers, and symbols. For increased security, consider implementing two-factor authentication that generates a token on a separate device. To learn more, see the selection of security-related extensions on Magento Marketplace. The Admin security configuration gives you the ability to add a secret key to URLs, require passwords to be case sensitive, and to limit the length of Admin sessions, the lifetime of passwords, and the number of loginThe process of signing into an online account.. To configure Admin security: A quick rating takes only 3 clicks. Add a comment to help us improve Magento even more.
https://docs.magento.com/m2/ce/user_guide/stores/security-admin.html
2018-11-12T23:22:27
CC-MAIN-2018-47
1542039741151.56
[]
docs.magento.com
YITH WooCommerce Authorize.net gateway integrates this payment gateway to your e-commerce, without the stress of the difficult task to connect your website to the payment processing network. This guide aims to illustrate the plugin features and help you solve any possible doubt raised by using this plugin.
https://docs.yithemes.com/yith-woocommerce-authorizenet-payment-gateway/
2018-11-12T22:35:17
CC-MAIN-2018-47
1542039741151.56
[]
docs.yithemes.com
With Arcadia Instant, users with administrative role can define distinct color custom color palettes. In the New Custom Color interface, perform these actions: Leaf. For Color 1, enter a hexadecimal color code. We used #25351f. Click Add More. For Color 2, enter an rgb color code. We used rgb(40,65,14). Click Add More. For Color 3, use the color picker. We picked the color #476b17. Click Add More for other colors. Altogether, we added these colors to the custom color Leaf: Click Save. Note that the new color, Leaf, appears in the list of colors on the Manage Custom Colors interface. It's type is distinct, and you have an option to edit and delete it.
http://docs.arcadiadata.com/4.1.0.0/pages/topics/custom-colors-distinct.html
2018-11-12T22:41:42
CC-MAIN-2018-47
1542039741151.56
[]
docs.arcadiadata.com
std::string OEToASCI,.
https://docs.eyesopen.com/toolkits/csharp/lexichemtk/OEIUPACFunctions/OEToASCII.html
2018-11-12T23:17:10
CC-MAIN-2018-47
1542039741151.56
[]
docs.eyesopen.com
Multilingual Support is set up at a partition level within a LANSA system. All new partitions created on the IBM i on Japanese and French machines are automatically created as multilingual. All Visual LANSA partitions are automatically created as multilingual. Multilingual support is necessary if your application will: For applications in bi-directional or DBCS languages you must use multilingual support, regardless of whether or not the resulting applications are truly multilingual (that is, able to operate in more than one language). Note: Although it is not recommended, it is only possible to use a national language without multilingual support if the language is derived from the Latin alphabet (for example, English).
https://docs.lansa.com/14/en/lansa070/content/lansa/mulb1_0010.htm
2018-11-12T22:43:54
CC-MAIN-2018-47
1542039741151.56
[]
docs.lansa.com
This tutorial shows you how to install JUCE and how to create a new cross-platform JUCE project using the Projucer. You also learn how to export the project to an IDE such as Xcode or Visual Studio to develop, run and debug your JUCE application. Level: Beginner Platforms: Windows, macOS, Linux, iOS, Android Download JUCE. Unpack the JUCE folder and place it to some location on your computer. Your user home folder is a convenient place. Go into the JUCE folder you just installed. Launch the Projucer, which is located there. The first time you launch Projucer, you are presented with the new project wizard. (You can also launch the wizard later by selecting New Project... from the Projucer's main menu.) On the first page of the wizard, select which type of project you want to create:. Below is an overview over all currently supported project types. There are also many example projects which can serve as an alternative starting point for your project. They are located in the subfolder JUCE/examples. You will be directed to this subfolder if you click on the Open Example Project button in the lower right corner. After you have selected the appropriate project type, the Projucer will take you through some additional settings, such as the target platform of your application and various project paths. In the second page of the wizard, you can set everything up: modulessubfolder located inside the JUCE folder you installed earlier. The Projucer currently has exporters for the following IDEs, build systems, and platforms: After you have created your project, you can launch the native IDE with your project directly from the Projucer. Use the button near the top: Now that you have opened your IDE (Xcode, Visual Studio, and so on), you can compile and run your JUCE app, and get started with coding!. You can read about this and more features for managing your Projucer project in Tutorial: Projucer Part 2: Manage your Projucer projects. To open up an existing Projucer project, you can either double-click on the .jucer file contained in the project folder or click on Open Existing Project from the wizard. (You can also navigate to Open... from the Projucer's main menu.) You may come across Projucer Instant Project (PIP) files when following other JUCE tutorials. These are essentially header files with the usual .h extension that provide metadata to the Projucer in order to automatically create a project with the correct modules and exporters from a single file. PIP files can be opened similarly by either selecting the file from the Open... dialog of the Projucer's main menu or a simple drag-and-drop onto the Projucer interface window. After reading this tutorial, you should be able to:
https://docs.juce.com/master/tutorial_new_projucer_project.html
2018-11-12T22:23:33
CC-MAIN-2018-47
1542039741151.56
[]
docs.juce.com
into the Admin UI on your Discover appliance. - In the Network Settings section, click Flow Networks. - In the Shared SNMP Credentials section, click Add SNMP Credentials. - Type the IPv4 CIDR block in the CIDR field. -?
https://docs.extrahop.com/7.0/shared-snmp-netflow/
2018-01-16T13:21:07
CC-MAIN-2018-05
1516084886436.25
[]
docs.extrahop.com
Microsoft Security Bulletin MS15-086 - Important Vulnerability in System Center Operations Manager Could Allow Elevation of Privilege (3075158) Published: August 11, 2015 Version: 1.0 Executive Summary This security update resolves a vulnerability in Microsoft System Center Operations Manager. The vulnerability affected versions of Microsoft System Center 2012 Operations Manager and Microsoft System Center 2012 Operations Manager R2. For more information, see the Affected Software section. The security update addresses the vulnerability by modifying how System Center Operations Manager accepts input. For more information about the vulnerability, see the Vulnerability Information section. For more information about this update, see Microsoft Knowledge Base Article 3075158.. Vulnerability Information System Center Operations Manager Web Console XSS Vulnerability - CVE-2015-2420 An elevation of privilege vulnerability exists in Microsoft System Center Operations Manager that is caused by the improper validation of input. An attacker who successfully exploited this vulnerability could inject a client-side script into the user's browser. The script could spoof content, disclose information, or take any action that the user could take on the affected website on behalf of the targeted user. An attacker could exploit this vulnerability by convincing a user to visit an affected website by way of a specially crafted URL. This can be done through any medium that can contain URL web links that are controlled by the attacker, such as a link in an email, a link on a website, or a redirect on a website. Additionally, compromised websites and websites that accept or host user-provided content or advertisements could contain specially crafted content that could exploit this vulnerability. In all cases, however, an attacker would have no way to force users to visit such websites. Instead, an attacker would have to convince users to visit a website, typically by getting them to click a link in an email or Instant Messenger message that directs them to the affected website by way of a specially crafted URL. Users who are authorized to access System Center Operations Manager web consoles are primarily at risk from this vulnerability. The update addresses the vulnerability by modifying the way that System Center Operations Manager accepts input.-05 13:11Z-07:00.
https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2015/ms15-086
2018-01-16T14:30:52
CC-MAIN-2018-05
1516084886436.25
[]
docs.microsoft.com
Firefox Puppeteer¶ Firefox Puppeteer is a library built on top of the Marionette Python client. It aims to make automation of Firefox’s browser UI simpler. It does not make sense to use Firefox Puppeteer if: - You are manipulating something other than Firefox (like Firefox OS) - You are only manipulating elements in content scope (like a webpage) Roughly speaking, Firefox Puppeteer provides a library to manipulate each visual section of Firefox’s browser UI. For example, there are different libraries for the tab bar, the navigation bar, etc. Installation¶ For end-users Firefox Puppeteer can be easily installed as a Python package from PyPI. If you want to contribute to the project we propose that you clone the mozilla-central repository and run the following commands: $ cd testing/puppeteer/firefox $ python setup.py develop In both cases all necessary files including all dependencies will be installed. Versioning¶ Puppeteer versions as regularly released from the Python source code, will follow a specific versioning schema. It means the major version number will always be identical with the supported Firefox version. Minor releases - the second part of the version number - are done throughout the life-cycle of a Firefox version when Puppeteer itself needs API changes for back-end and front-end modules. The last part of the version number is the patch level, and is only used for bugfix releases without any API changes. Examples: firefox_puppeteer_45.0.0 - First release for Firefox 45.0 and Firefox 45.xESR firefox_puppeteer_46.2.0 - Second release for Firefox 46.0 caused by API changes firefox_puppeteer_47.0.1 - First bugfix release for the new Firefox 47.0 support Libraries¶ The following libraries are currently implemented. More will be added in the future. Each library is available from an instance of the FirefoxTestCase class. - About Window - Deck - Page Info Window - Notifications - Tabbar - Toolbars - BrowserWindow - Update Wizard Dialog - UpdateWizardDialog - Wizard - CheckingPanel - DownloadingPanel - DummyPanel - ErrorPatchingPanel - ErrorPanel - ErrorExtraPanel - FinishedPanel - FinishedBackgroundPanel - IncompatibleCheckPanel - IncompatibleListPanel - InstalledPanel - LicensePanel - ManualUpdatePanel - NoUpdatesFoundPanel - PluginUpdatesFoundPanel - UpdatesFoundBasicPanel - Windows - AppInfo - Keys - Localization - Places - Security - SoftwareUpdate - Utils
http://firefox-puppeteer.readthedocs.io/en/aurora/
2018-01-16T13:05:03
CC-MAIN-2018-05
1516084886436.25
[]
firefox-puppeteer.readthedocs.io
Ticket #2618 (new task) An Update On Root Factors In decorative pillows Description Attractive Decorative Pillows and Rugs for the Contemporary Home - Home Improvement Articles Hiring a marquee in Norfolk is all well and good, nonetheless it?s not really a good deal of wedding if there?s not even attempt to sit on or eat your canap?s off. When organising the next wedding, business launch or public event, put away a small amount of amount of time in order so consider what new angles case hire of a few extra accessories would bring. Table linen in alluring design and finish are great for both indoors and outdoors. These can be employed to drape both round and square tables. Ideal for adding beautiful accents to the tables, table clothes and table runners can be complemented with matching placemats and napkins. Table runners could be placed diagonally through the tables. These can also be looped over one corner of the table to make an attractive and unique look on the table milieu. The array in designer look and finished would work for special occasions and events. These can be availed from online and stores at amazing prices. A well decorated table setting makes dining a memorable experience. Besides enhancing the tables in homes, it is wise to beautify the living room at the same time. This helps to add vibrancy towards the room and offers best of comfort and luxury. The collection of decorative pillows are manufactured from superior quality fabrics and stitched to perfection. The range include quilts, throws, designer pillows and cushions and also pillows shams. These are accessible in dyed prints and block prints that contributes vibrancy and pleasant appeal to the array. Suitable for daily use, these bed linen doubles for special occasions too. The entire range is simple to completely clean and keep and it is known for color fastness and shrink resistance. One can pick the entire range of bedding in patchwork patterns and embroidery works, that best matches their residence interiors. Available in contemporary desigs nad shades, these reflect one's aesthetic taste and attrcats the eyes from the beholder. If your affordability is small , you need to change the look of your room throw pillows may be the strategy to use, it allows you better buying power. It will give you a chance to get pillows tailor made having a dream fabric, buy a down filled pillow forms as opposed to poly-fill. You could buy a yard of one's favorite fabric and in accordance with the size of the pillows you should be able to get from 2 to 4 pillows. There are a variety of curtain rods you'll be able to avail at present. There are expensive in addition to inexpensive decorative curtain rods out there. It is your wise preferences that assist that you determine which of which must be sharpening your drapery panel. If you don't wish to put a great deal of emphasize on your curtain rods, there is an simplest ones which may have simple rod ends with simple brackets which have round shapes or even an oval shapes not having very detailed finials and finishes. On the other hand, there are expensive in addition to lavish decorative curtain rods too which can be certainly going to be a complete bit of decorative item for an elaborately designed home. You get powder coated durable decorative curtain rods which may have detailed and elaborate finials that are gonna be a fantastic accessory for a window treatment. Such decorative curtain rods appear in packages like they will have an extender rod, 3 brackets in addition to hardware. Should you adored this article in addition to you wish to get more info about decorative needlepoint pillows i implore you to stop by the web site.
http://docs.openmoko.org/trac/ticket/2618
2018-01-16T12:59:00
CC-MAIN-2018-05
1516084886436.25
[]
docs.openmoko.org
Recently Viewed Topics Reset Registration and Erase Settings To reset the registration information, shut down the nessusd service first. Next, run the nessuscli fix --reset command. You will be prompted for confirmation. If you have not shut down the nessusd service, the nessuscli fix --reset command will exit. Note: Performing nessuscli fix --reset does not reset the managed function. # /sbin/service nessusd stop # /opt/nessus/sbin/nessuscli fix --reset Resetting Nessus configuration will permanently erase all your settings and cause Nessus to become unregistered. Do you want to proceed? (y/n) [n]: y Successfully reset Nessus configuration. Perform a Full Reset If you intend to change a scanner from being managed by SecurityCenter to being linked to Tenable.io, you must perform a full reset, which resets Nessus to a fresh state. Caution: Performing a full reset deletes all registration information, settings, data, and users. Contact Tenable support before performing a full reset. This action cannot be undone. To perform a full reset, shut down the nessusd service first. Next, run the nessuscli fix --reset-all command. You will be prompted for confirmation. # /sbin/service nessusd stop # /opt/nessus/sbin/nessuscli fix --reset-all WARNING: This option will reset Nessus to a fresh state, permanently erasing the following: * All scans, scan data, and policies * All users and any user settings * All preferences and settings * Registration information (Nessus will become unregistered) * Master password for this Nessus installation, if there is one Are you sure you want to proceed? (y/n) [n]:
https://docs.tenable.com/nessus/commandlinereference/Content/ResetRegistrationAndEraseSettings.htm
2018-01-16T13:01:26
CC-MAIN-2018-05
1516084886436.25
[]
docs.tenable.com
Defining your command driver¶ It’s possible to run your OptparseInterfaces using the pyqi command, as illustrated in Running our Command via its OptparseInterface, but that mechanism is clunky and not how you’d want your users to interact with your software. To handle this more gracefully, you can create a shell script that can be distributed with your package and used as the primary driver for all OptparseInterfaces. Creating the driver shell script¶ To define a driver command for your project, create a new file named as you’d like your users to access your code. For example, the driver for the biom-format package is called biom, and the driver for the pyqi package is called pyqi. In this example our driver name will be my-project. Add the following two lines to that file, replacing my-project with your driver name: #!/bin/sh exec pyqi --driver-name my-project --command-config-module my_project.interfaces.optparse.config -- "$@" The value passed with --command-config-module must be the directory where the OptparseInterface configuration files can be found. If you followed the suggestions in Organizing your repository the above should work. The driver script should then be made executable with: chmod +x my-project You’ll next need to ensure that the directory containing this driver file is in your PATH environment variable. Again, if you followed the recommendations in Organizing your repository and if your project directory is under $HOME/code, you can do this by running: export PATH=$HOME/code/my-project/scripts/:$PATH You should now be able to run: my-project This will print a list of the commands that are available via the driver script, which will be all of the Commands for which you’ve defined OptparseInterfaces. If one of these commands is called my-command, you can now run it as follows to get the help text associated with that command: my-project my-command -h The command names that you pass to the driver (my-command, in this example) match the name of the OptparseInterface config file, minus the .py. The driver also matches the dashed version of a command name, so my-command and my_command both map to the same command. Configuring bash completion¶ One very useful feature for your driver script is to enable tab-completion of commands and command line options (meaning that when a user starts typing the name of a command or an option, they can hit the tab key to complete it without typing the full name, if the name is unique). pyqi facilitates this with the pyqi make-bash-completion command. There are two steps in enabling tab completion. First, you’ll need to generate the tab completion file, and then you’ll need to edit your $HOME/.bash_profile file. To create the tab completion file for my-project, run the following commands (again, this is assuming that your OptparseInterface config files are located as described in Organizing your repository): mkdir ~/.bash_completion.d pyqi make-bash-completion --command-config-module my_project.interfaces.optparse.config --driver-name my-project -o ~/.bash_completion.d/my-project Then, add the following lines to your $HOME/.bash_profile file: # enable bash completion for pyqi-based scripts for f in ~/.bash_completion.d/*; do source $f; done When you open a new terminal, tab completion should work for the my-project commands and their options.
http://pyqi.readthedocs.io/en/latest/tutorials/defining_your_command_driver.html
2018-01-16T12:57:46
CC-MAIN-2018-05
1516084886436.25
[]
pyqi.readthedocs.io
Groupwise Registration¶ Groupwise registration methods try to mitigate uncertainties associated with any one image by simultaneously registering all images in a population. This incorporates all image information in registration process and eliminates bias towards a chosen reference frame. The method described here uses a 3D (2D+time) and 4D (3D+time) free-form B-spline deformation model and a similarity metric that minimizes variance of intensities under the constraint that the average deformation over images is zero. This constraint defines a true mean frame of reference that lie in the center of the population without having to calculate it explicitly. The method can take into account temporal smoothness of the deformations and a cyclic transform in the time dimension. This may be appropriate if it is known a priori that the anatomical motion has a cyclic nature e.g. in cases of cardiac or respiratory motion. Note that brain registration is a difficult to task because of complex anatomical variations and almost a scientific topic in itself. Entire registration packages are dedicated to just brain image processing. In this section we are less strict with the end result and focus on illustrating the groupwise registration method in SimpleElastix. Consider the following mean image: Elastix takes a single N+1 dimensional image for groupwise registration. Therefore we need to first concatenate the images along the higher dimension. SimpleITK makes this very easy with the JoinSeries image filter. The registration step is business as usual: import SimpleITK as sitk # Concatenate the ND images into one (N+1)D image population = ['image1.hdr', ..., 'imageN.hdr'] vectorOfImages = sitk.VectorOfImage() for filename in population vectorOfImages.push_back(sitk.ReadImage(filename)) image = sitk.JoinSeries(vectorOfImages) # Register elastixImageFilter = sitk.ElastixImageFilter() elastixImageFilter.SetFixedImage(image) elastixImageFilter.SetMovingImage(image) elastixImageFilter.SetParameterMap(sitk.GetDefaultParameterMap('groupwise')) elastixImageFilter.Execute() While the groupwise transform works only on the moving image we need to pass a dummy fixed image is to prevent elastix from throwing errors. This does not consume extra memory as only pointers are passed internally. The result image is shown in Figure 13. It is clear that anatomical correpondence is obtained in many regions of the brain. However, there are a some anatomical regions that have not been registered correctly, particularly near Corpus Collosum. Generally these kinds of difficult registration problems require a lot of parameter tuning. No way around that. In a later chapter we introduce methods for assessment of registration quality. Tip We can use the JoinSeries() SimpleITK method to construct a 4D image from multiple 3D images and the Extract() SimpleITK method to pick out a 3D image from a result 4D image. Note that the JoinSeries method may throw an error “Inputs do not occupy the same physical space!” if image information is not perfectly aligned. These can be caused by slight differences between the image origins, spacing, or axes. The tolerance that SimpleITK uses for these settings can be adjusted using :code: sitk.ProcessObject.SetGlobalDefaultDirectionTolerance(x) and :code: sitk.ProcessObject.SetGlobalDefaultCoordinateTolerance(x). We may need to change the image origins to make sure they are the same. This can be done by copying the origin of one of the images origin = firstImage.GetOrigin() and setting it to the others otherImages.SetOrigin(origin)
http://simpleelastix.readthedocs.io/GroupwiseRegistration.html
2018-01-16T13:11:13
CC-MAIN-2018-05
1516084886436.25
[]
simpleelastix.readthedocs.io
This section focuses on the most important features of ApSIC Xbench and aims to be a quick view of what ApSIC Xbench can do. We strongly recommend reading this chapter to get the most out of ApSIC Xbench with the minimum learning effort. After getting some familiarity with the basic capability of the product, we do encourage you to read the documentation with more detail to learn about many useful features that exist in the product.
https://docs.xbench.net/user-guide/quick-tips/
2018-01-16T13:15:16
CC-MAIN-2018-05
1516084886436.25
[]
docs.xbench.net
You're viewing Apigee Edge documentation. View Apigee X documentation. The following sections introduce you to API products and related key concepts. What is an API product? As an API provider, you create API products to bundle your APIs and make them available to app developers for consumption. You can think of API products as your product line. Specifically, an API product bundles together the following: - Collection of API resources (URIs) - Service plan - Metadata specific to your business for monitoring or analytics (optional) The API resources bundled in an API product can come from one or more APIs, so you can mix and match resources to create specialized feature sets, as shown in the following figure. You can create multiple API products to address use cases that solve specific needs. For example, you can create an API product that bundles a number of mapping resources to enable developers to easily integrate maps into their applications. In addition, you can set different properties on each API product, such as different pricing levels. For example, you might offer the following API product combinations: - An API product offering a low access limit, such as 1000 requests per day, for a bargain price. A second API product providing access to the same resources, but with higher access limit and a higher price. - A free API product offering read-only access to resources. A second API product providing read/write access to the same resources for a small charge. In addition, you can control access to the API resources in an API product. For example, you can bundle resources that can be accessed by internal developers only or by paying customers only.. App developers access your API products by registering their apps, as described in Registering apps. match those associated with the access token presented by the app. Understand key concepts Review the following key concepts before you create your API products. API keys When you register a developer's app in your organization, the app must be associated with at least one API product. As a result of pairing an app with one or more API products, Edge assigns the app a unique consumer key. The consumer key or access token act as request credentials. The app developer embeds the consumer key into the app, so that when the app makes a request to an API hosted by Edge, the app passes the consumer key in the request in one of the following ways: - When the API uses API key verification, the app must pass the consumer key directly. - When the API uses OAuth Token verification, the app must pass a token which has been derived from the consumer key. API key enforcement doesn't happen automatically. Whether using the consumer key or OAuth tokens as request credentials, the API Proxy validates the request credentials in your API proxies by including a VerifyAPIKey policy or an OAuth/VerifyAccessToken policy, in the appropriate flow. If you do not include a credential enforcement policy in your API Proxy, any caller can invoke your APIs. For more information, see Verify API Key policy. To verify the credentials passed in the request, Edge performs the following steps: - Get the credentials that are passed with the request. In the case of OAuth token verification, Edge verifies that the token is not expired, and then looks up the consumer key that was used to generate the token. - Retrieve the list of API products to which the consumer key has been associated. - Confirm that the current API Proxy is included in the API Product, and if the current resource path (URL path) is enabled on the API Product. - Verify that the consumer key is not expired or revoked, check that the app is not revoked, and check that the app developer is active. If all of the above checks pass, the credential verification succeeds. Bottom line, Edge automatically generates consumer keys, but API publishers have to enforce key checking in API proxies by using appropriate policies. Automatic versus manual approval By default, all requests to obtain a key to access an API product from an app are automatically approved. Alternatively, you can configure the API product to approve keys manually. In this case, you will have to approve key requests from any app that adds the API product. For more information, see Register apps and manage API keys. popular and receiving a large amount of requests. For information on configuring quota, see Quota policy. For information on on using product quota settings in quota policies, see the following community article How do the quota settings on an API product interact with quota policies in an API proxy?. OAuth scopes. Access levels When defining an API product, you can set the following acess levels.
https://docs.apigee.com/api-platform/publish/what-api-product?hl=fr_CA
2022-05-16T21:33:26
CC-MAIN-2022-21
1652662512249.16
[]
docs.apigee.com
System Requirements General Requirements CPU Exasol requires 64-bit Intel platforms with SSSE3 featured CPUs (Xeon Woodcrest upwards). Firmware Interface Exasol currently only supports the classic Basic Input/Output System (BIOS) firmware interface. Deactivate the Unified Extensible Firmware Interface (UEFI) if applicable. Network Depending on the network topology, the servers need to be equipped with two or more dedicated Ethernet network interface cards. Every EXASolution cluster node is wired to at least two distinct network types: - Private network, used for cluster-internal communication. - Public network, used for client-access connections. For more information about private and public networks, refer to the Network Planning section. It is also possible to have additional networks for purposes of fail safety or link bonding. For more information about the network setup that will cover the performance requirements of most customers, refer to the Minimum Network Setup section. Storage Subsystem - SAS (Serial Attached SCSI) is the preferred bus technology and hard drive interface - SSDs (Solid-State Drives) are supported but not certified. - Use of hardware RAID controllers with RAID-1 pairs (mirroring) is recommended for added hardware reliability in the case of a failure - The more spindles EXAStorage has, the more parallelism there will be on read write operations Out-of-band Management - The preferred protocol to connect to Lights-out-Management interfaces is IPMI (Intelligent Platform Management Interface, version 2.0). - Enterprise license recommended Cluster Node Requirements - EXASolution clusters are composed of one or more database nodes and at least one management node (called "license server"). Database nodes are the powerhouse of a cluster and operate both the EXASolution instances as well as the EXAStorage volumes. - Database nodes are expected to be equipped with homogeneous hardware. The Exasol database and EXAStorage are designed and optimized to distribute processing load and database payload equally across the cluster. Inconsistent hardware (especially differences in RAM and disk sizes) may cause undesired effects, ranging from poor performance to service disruptions. - License servers are the only nodes of a cluster that boot from local disk. They are installed from an EXASuite installation ISO image. - Database servers are installed and booted from the license server over the network via PXE (DHCP + TFTP). License Server Requirements Minimum - Single socket quad core Intel CPU - 8 GB RAM - 2x 200 GB hard disks Don't use more than 2 TB of disk for the management node. Exasol supports only MBR partitioning for the management node. For the disk size more than 2 TB, GPT is used. - 2x network adapters with at least 1 GB/s - DVD drive for installation (alternatively: virtual media on lights-out-management) Recommended - Hardware RAID controller - Lights-out-management interface BIOS settings - Boot order: DVD, hard disk drive - Power management: maximum performance (static) - Processor options: hyper-threading enabled - Disable UEFI (GPT) License Server Requirement for Virtual Machine If you are installing Exasol in a virtual machine, the following table provides you with the license server's minimum requirements. Network Interface - The first network interface should be connected to the private network - The second network interface should be connected to the public network The steps to install Exasol in a virtual machine is similar to installing Exasol on hardware. Refer to Install Exasol on Hardware for detailed instructions. Data Node Requirements Minimum - 2x Quad core Intel CPU (XEON) - 16 GB RAM (32 GB RAM recommended) - Hardware RAID controller - 2x 128 GB SAS hard disks in RAID-1 (Operating system) - 4x 300 GB SAS hard disks in RAID-1 (Database payload) - 2x network adapters with at least 1 GB/s (PXE enabled on NET-0) Recommended - Lights-out-management interface BIOS settings - Boot order: PXE (network boot) from NET-0 - Power management: maximum performance (static) - Processor options: hyper-threading enabled, virtualisation enabled - Disable UEFI (GPT)
https://docs.exasol.com/db/latest/administration/on-premise/installation/system_requirements.htm
2022-05-16T21:27:36
CC-MAIN-2022-21
1652662512249.16
[]
docs.exasol.com