Glob Documentation

Quick Start

Installing your theme. Installing sample demo data (optional). Translating the theme to your language.

Front Page Setup
1. Create your new page and give it a name, e.g. Home, and set the page template to Home Widget Builder Template.
2. Go to Settings -> Reading and change your front page to the Home page you recently created above.

Customizer
Once you have activated your theme, you should take a few minutes to read over and configure the Theme Customization page. Navigate to Appearance > Customize to load the theme settings. From here you can change the site logo, theme layout, typography, colors, etc.

Widget Areas
The Glob includes 11 widget areas:
- Header Right
- Sidebar
- Home Hero Top
- Home Content Top
- Home Content Left
- Home Content Right
- Home Content Bottom
- Footer 1
- Footer 2
- Footer 3
- Footer 4

Widgetized Homepage Template
The Glob includes a widgetized homepage template with 5 dedicated widget locations where you can place WordPress widgets, custom widgets or any content you want. You can arrange the widgets on the front page as you like and create a unique and amazing magazine layout that displays your content in an effective way. To enable this functionality, you need to add or assign widgets to the predefined widget areas in Appearance > Widgets.

Custom Widgets
The Glob theme comes with 6 custom widgets for post layout display; they can be found under Appearance > Widgets.

Theme Settings
You'll find all the settings for Glob at Appearance => Customize => Theme Options. We'll cover each section here in the documentation: Header Option, Breaking Option, Archive Layout Option, Single Post Option, Footer Option.
https://docs.famethemes.com/article/109-glob-documentation
2022-05-16T08:17:34
CC-MAIN-2022-21
1652662510097.3
[]
docs.famethemes.com
influx bucket create

This page documents an earlier version of InfluxDB. InfluxDB v2.2 is the latest stable version. View this page in the v2.2 documentation.

The influx bucket create command creates a bucket in InfluxDB.

Usage: influx bucket create [flags]
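As an illustration (the bucket name, organization name, and retention period below are placeholders, and the long-form flags are taken from the v2.0 CLI reference), a bucket with a 72-hour retention period could be created like this:

```sh
# Create a bucket named "my-bucket" in the "my-org" organization,
# keeping data for 72 hours before it is expired:
influx bucket create \
  --name my-bucket \
  --org my-org \
  --retention 72h
```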
https://docs.influxdata.com/influxdb/v2.0/reference/cli/influx/bucket/create/
2022-05-16T07:45:49
CC-MAIN-2022-21
1652662510097.3
[]
docs.influxdata.com
Defining Repository-Level Detection Rules

Some repositories may have specific requirements for vulnerability detection which differ from the rules defined globally. To address repository-specific needs, you can use a per-repository configuration file. This file is committed to an individual repository and affects only the repository where it is committed. When this option is enabled, Security for Bitbucket looks for a file named soteri-security.yml at the root of the repository. You will need to create and commit this file yourself if the repository has no soteri-security.yml yet. Rules and options configured in this file are used both when repository content is updated (if the Soteri - Scan Commits hook is enabled) and when you trigger a scan manually from the Security Scan tab of the repository.

As of version 3.10.0, this file is always loaded from the latest commit of the default branch. This ensures that scans of even historical branches are conducted with the most up-to-date configuration.

Per-repository configuration is disabled by default. The global settings page has a toggle to enable custom repository rules support.

Supported Configuration

This table documents all supported soteri-security.yml configuration options. Below is an example of a soteri-security.yml config:

inherit_builtin_rules: false
inherit_custom_rules: false
rules:
  - RSA_PRIVATE_KEY
  - SSH_PRIVATE_KEY
custom_rules:
  # Comments are supported.
  BITCOIN_ADDRESS: '^[13][a-km-zA-HJ-NP-Z0-9]{26,33}$'
  YOUTUBE_LINKS: '<a\s+(?:[^>]*)href=\"((?:
allowlist_paths:
  - config/server.yml
  - (.*)\.insecure

Pre-Receive Hook

When Security for Bitbucket is installed and enabled, any changes which might invalidate soteri-security.yml on the default branch will be blocked by an included pre-receive hook. Changes which can be rejected must both:

Target a repository's default branch. This includes:
- Direct pushes to the default branch.
- The merging of pull requests which target the default branch.
Modify soteri-security.yml such that it is no longer valid. Examples of potential reasons soteri-security.yml could be invalid:
- It doesn't parse as valid YAML. You can use online tools to check that your YAML is valid before committing, like this one. Note that the YAML standard forbids tab characters. You must use spaces in soteri-security.yml.
- Properties don't parse as expected (e.g. inherit_builtin_rules must be parsable as a boolean).
- Provided regular expressions are invalid (e.g. for allowlist_paths or any defined custom_rules). You can use online tools to validate that your regexes are valid before committing, like this one.
- Rule names that don't have regular expressions associated with them.
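One of these failure modes can be caught before committing without any tooling: since the YAML standard forbids tab characters, any line containing a tab is a guaranteed rejection. The helper below is a hypothetical sketch, not part of Security for Bitbucket, and full validation still requires a real YAML parser:

```python
def find_tab_lines(text: str) -> list[int]:
    """Return the 1-based line numbers that contain a tab character.

    Any such line makes a YAML file invalid, so this catches one
    guaranteed-rejection case for soteri-security.yml up front.
    """
    return [
        lineno
        for lineno, line in enumerate(text.splitlines(), start=1)
        if "\t" in line
    ]

# Spaces are fine; the tab-indented variant would be rejected.
good = "inherit_builtin_rules: false\nrules:\n  - RSA_PRIVATE_KEY\n"
bad = "rules:\n\t- RSA_PRIVATE_KEY\n"
```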
https://docs.soteri.io/security-for-bitbucket/3.15.1/Defining-Repository-Level-Detection-Rules.14594277382.html
2022-05-16T09:27:06
CC-MAIN-2022-21
1652662510097.3
[]
docs.soteri.io
After you allocate the newValues buffer, fill the buffer with the desired element value for each element in the specified range, in row-major order. For each element to be modified, fill the buffer as follows:
- The first 4 bytes should describe the size of the data. This is only applicable for variable-length element data types.
- The remaining bytes for each element are allocated to hold the maximum size of the element data type. This space should contain the new element value.
Therefore, for each element, space is allocated as:
- (MAX_SIZE_OF_ELEMENT_DATA_TYPE + 4 bytes) for variable-length element data types.
- (MAX_SIZE_OF_ELEMENT_DATA_TYPE) for fixed-length data types.
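The layout above can be sketched as follows. This is an illustration in Python rather than C; the little-endian byte order, the zero padding byte, and the MAX_SIZE value are assumptions of the sketch, since the notes above only specify the 4-byte size prefix and the maximum-size slot:

```python
import struct

MAX_SIZE = 10  # assumed maximum size of the element data type, in bytes

def pack_variable_length(value: bytes) -> bytes:
    """Variable-length layout: a 4-byte size field, then a MAX_SIZE slot
    holding the value (padded out; the padding byte is an assumption)."""
    if len(value) > MAX_SIZE:
        raise ValueError("value exceeds the element type's maximum size")
    return struct.pack("<I", len(value)) + value.ljust(MAX_SIZE, b"\x00")

def pack_fixed_length(value: bytes) -> bytes:
    """Fixed-length layout: no size field, just the MAX_SIZE slot."""
    if len(value) != MAX_SIZE:
        raise ValueError("fixed-length values must fill the slot exactly")
    return value

# Elements are laid out back to back in row-major order: each
# variable-length element occupies MAX_SIZE + 4 bytes, each
# fixed-length element exactly MAX_SIZE bytes.
buffer = pack_variable_length(b"ab") + pack_variable_length(b"xyz")
```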
https://docs.teradata.com/r/Teradata-VantageTM-SQL-External-Routine-Programming/July-2021/C-Library-Functions/FNC_SetArrayElementsWithMultiValues/Usage-Notes/Filling-the-newValues-Buffer
2022-05-16T08:59:44
CC-MAIN-2022-21
1652662510097.3
[]
docs.teradata.com
A unique advantage of TX Spell .NET is that a single instance of the spell check component can be used to provide spell checking capabilities to an unlimited number of controls. To enable spell checking for a TextControl, simply set the SpellChecker property of the TextControl to the specific TXSpellChecker instance. The source code is contained in the following directories:

If a customizable SpellCheckDialog is used, the Proofing.TXSpellChecker.SpellCheckDialogClosing event can be used to link to another TextControl when the current control has been checked completely. This allows the usage of one instance of the dialog to spell check an unlimited number of TextControls.

private void txSpellChecker1_SpellCheckDialogClosing(object sender,
    TXTextControl.Proofing.SpellCheckDialogClosingEventArgs e)
{
    if (e.StateManager.NextMisspelledWord != null)
    {
        // The dialog was closed by the user before checking all misspelled words.
        return;
    }
    e.Cancel = SetNextTextControl(e.StateManager, curTextControl);
}

Private Sub txSpellChecker1_SpellCheckDialogClosing(sender As Object, _
    e As TXTextControl.Proofing.SpellCheckDialogClosingEventArgs)
    If e.StateManager.NextMisspelledWord IsNot Nothing Then
        ' The dialog was closed by the user before checking all misspelled words.
        Return
    End If
    e.Cancel = SetNextTextControl(e.StateManager, curTextControl)
End Sub

Start the sample application and click on the Spell Check Dialog button. The dialog checks each of the four TextControls and every contained TextPart such as headers, footers or text frames.
https://docs.textcontrol.com/spell/windows-forms/article.winforms.textcontroldialogonmultiplecontrols.htm
2022-05-16T08:36:31
CC-MAIN-2022-21
1652662510097.3
[]
docs.textcontrol.com
Product Index

Make your Genesis 3 Female images look Sweet 'N Sassy and you're sure to get them noticed! Don't spend all that time on new clothing or a fabulous skin for your new girl and then apply just any old pose. You want a pose that has some pizzazz so that your work gets shown off right and in the best way possible! That's why we offer you 25 Sweet 'N Sassy poses, with 25 mirror poses to choose from for just the right look! Plus you also get 7 Sweet 'N Sassy face expressions to jazz up your model and bring her alive. If Sweet 'N Sassy is the look you need, then Sweet 'N Sassy is the pose collection made just for you! Perfect for vendors showing off their new outfit promos or their lovely characters. Great for anyone who loves to show off the beauty of Genesis 3 Female in.
http://docs.daz3d.com/doku.php/public/read_me/index/24503/start
2022-05-16T09:02:42
CC-MAIN-2022-21
1652662510097.3
[]
docs.daz3d.com
Nested collections and sidebar menu

Here is the Shopify official documentation about Products. A nested collection is composed of a parent collection and child collections.

Create the parent collection
- Navigate to Shopify Admin > Products > Collections.
- Click Create Collection and input the parent collection name.
- Select Manual in the Collection Type block.
- Select collection.nested-collection in the Theme templates block.

Create navigation to add child collections
- Navigate to Shopify Admin > Online Store > Navigation.
- Click the Add menu button and input a navigation title.
- Click Add menu item below and select the parent collection created in the step above.
- Add child menu items with the child collections.

Set the menu as the collection sidebar menu in theme settings
- Navigate to Shopify Admin > Online Store > Themes > Customize.
- Switch to the Theme Settings tab and click Collection Page.
- Select the menu created above in the Sidebar menu setting.
- Save settings.
http://docs.flexkitux.com/pages/collections-nested
2022-05-16T08:57:30
CC-MAIN-2022-21
1652662510097.3
[]
docs.flexkitux.com
Replicate an organization

The state of an organization consists of metadata (dashboards, buckets, and other resources) and data (time series). An organization's state at a point in time can be replicated to another organization by copying both metadata and data. To replicate the state of an organization:

1. Create a new organization using the InfluxDB Cloud sign up page. Use a different email address for the new organization.

2. Replicate metadata. Use an InfluxDB template to migrate metadata resources. Export all resources, like dashboards and buckets, to a template manifest with influx export all. Then, apply the template to the new organization.

3. Replicate data. Use one of the methods below to copy data to the new organization:

Export data to CSV
- Perform a query to return all specified data.
- Save the results as CSV (to a location outside of InfluxDB Cloud).
- Write the CSV data into a bucket in the new organization using the influx write command.

Write data with Flux
- Perform a query to return all specified data.
- Write results directly to a bucket in the new organization with the Flux to() function.

If writes are prevented by rate limiting, use the influx write --rate-limit flag to control the rate of writes. For more information on rate limits in InfluxDB Cloud, see Exceeded rate.

4. Re-invite users.
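The CSV route can be sketched with two CLI calls. The bucket names, organization name, time range, and rate-limit value below are placeholders; the flags are taken from the influx CLI reference:

```sh
# Query all data in the source bucket for the chosen range and save the
# annotated CSV output outside of InfluxDB Cloud:
influx query 'from(bucket: "src-bucket") |> range(start: 2020-01-01T00:00:00Z)' \
  --raw > export.csv

# Write the CSV into a bucket in the new organization, throttled to stay
# under the write rate limits:
influx write --org new-org --bucket dst-bucket \
  --format csv --file export.csv --rate-limit "5 MB / 5 min"
```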
https://docs.influxdata.com/influxdb/cloud/organizations/migrate-org/
2022-05-16T09:21:51
CC-MAIN-2022-21
1652662510097.3
[]
docs.influxdata.com
CardViewSettings.GroupSummary Property

Provides access to group summary items.

Namespace: DevExpress.Web.Mvc
Assembly: DevExpress.Web.Mvc5.v20.2.dll

Declaration
public ASPxCardViewSummaryItemCollection GroupSummary { get; }
Public ReadOnly Property GroupSummary As ASPxCardViewSummaryItemCollection

Remarks
Group summaries are displayed within group rows when data grouping is applied. Summary items are stored within the GroupSummary collection, represented by an ASPxCardViewSummaryItemCollection object. This collection provides methods and properties that allow you to add, remove and access summary items.

NOTE: In database server mode, a summary cannot be calculated for unbound columns whose values are calculated by events (see ASPxCardView.CustomUnboundColumnData). Only columns with unbound expressions (see CardViewColumn.UnboundExpression) support summary calculation.
https://docs.devexpress.com/AspNetMvc/DevExpress.Web.Mvc.CardViewSettings.GroupSummary
2021-02-25T08:16:49
CC-MAIN-2021-10
1614178350846.9
[]
docs.devexpress.com
Stylus.PreviewStylusInRange Attached Event

Definition
Occurs when the stylus comes within range of the tablet.

See AddPreviewStylusInRangeHandler and RemovePreviewStylusInRangeHandler.

Examples
The following example demonstrates how to determine whether the stylus is inverted. This example assumes that there is a TextBox called textBox1 and that the PreviewStylusInRange event is connected to the event handlers.

void textbox1_PreviewStylusInRange(object sender, StylusEventArgs e)
{
    if (e.StylusDevice.Inverted)
    {
        textbox1.AppendText("Pen is inverted\n");
    }
    else
    {
        textbox1.AppendText("Pen is not inverted\n");
    }
}

Private Sub textbox1_PreviewStylusInRange(ByVal sender As Object, _
    ByVal e As StylusEventArgs) Handles textbox1.PreviewStylusInRange
    If e.StylusDevice.Inverted Then
        textbox1.AppendText("Pen is inverted" & vbLf)
    Else
        textbox1.AppendText("Pen is not inverted" & vbLf)
    End If
End Sub
https://docs.microsoft.com/en-us/dotnet/api/system.windows.input.stylus.previewstylusinrange?view=netcore-3.1
2021-02-25T09:27:49
CC-MAIN-2021-10
1614178350846.9
[]
docs.microsoft.com
npm run[-script]

As of npm@2.0.0, you can use custom arguments when executing scripts. The special option -- is used by getopt to delimit the end of the options. npm will pass all the arguments after the -- directly to your script:

npm run test -- --grep="pattern"

The arguments will only be passed to the script specified after npm run and not to any pre or post script.

The env script is a special built-in command that can be used to list environment variables that will be available to the script at runtime. If an "env" command is defined in your package, it will take precedence over the built-in.

If you try to run a script without having a node_modules directory and it fails, you will be given a warning to run npm install, just in case you've forgotten.

You can use the --silent flag to prevent showing npm ERR! output on error.

You can use the --if-present flag to avoid exiting with a non-zero exit code when the script is undefined. This lets you run potentially undefined scripts without breaking the execution chain.
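As an illustration, given a hypothetical package.json like the one below, `npm run test -- --grep="pattern"` appends `--grep="pattern"` to the `test` command only; the `pretest` and `posttest` scripts still run, but without the extra argument. And `npm run docs --if-present` would exit cleanly, because no `docs` script is defined here:

```json
{
  "scripts": {
    "pretest": "eslint .",
    "test": "tap test/*.js",
    "posttest": "echo tests finished"
  }
}
```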
https://docs.npmjs.com/cli/v6/commands/npm-run-script/
2021-02-25T07:08:12
CC-MAIN-2021-10
1614178350846.9
[]
docs.npmjs.com
9. Poppy humanoid: wiring arrangement

9.1. Data and power buses

Before you can start up your robot, let's have a look at the cables. You must connect wires from motors to motors by forming 2 different buses:
- Upper body (from head_y to abs_x)
- Lower body (from r_hip_z and l_hip_z to r_ankle_y and l_ankle_y)

These 2 buses are fully disconnected from each other (1); they each have their own:
- Power supply (SMPS2Dynamixel + wall socket)
- Dynamixel hub to plug up to 6 motors
- USB2AX

The drawing below shows the 2 data and power buses: cables connecting motors are in red, data hubs and SMPS power injection in green.

(1) N.B.: If you own a single power supply unit, you may connect both power buses together by adding a 4-wire cable and cutting out its data bus so that only 1 SMPS2Dynamixel is powered, as shown in the video. But this is unnecessary in most cases since robots are sold with 2 power supply units.

Both USB2AX adapters have to be plugged into the USB sockets of the Raspi3 at the bottom of the head. If you mess up the wiring, at first startup the software will report missing motors or too many motors on the same bus.

9.2. Power supplies for Poppy Humanoid

The robot requires 3 power supply cables:
- 12V power supply for the SMPS2Dynamixel of the upper body
- 12V power supply for the SMPS2Dynamixel of the lower body
- 5V micro USB power supply for the Raspberry Pi 3

For now you can plug the 3 cables into the robot, but wait a bit before connecting them to the wall socket. Indeed, there are a few things we need to set up before we can start the software.

9.3. Connect to the robot

Let's get started with the software! Please check out the dedicated section: Getting started with Poppy software. Psst, before you leave: don't forget to fasten the last screws to fix your robot's face after you have it working from the software.

Next: 10. Getting started with Poppy software >> << Back to the assembly guide
https://docs.poppy-project.org/en/assembly-guides/poppy-humanoid/wiring_arrangement.html
2021-02-25T07:20:08
CC-MAIN-2021-10
1614178350846.9
[array(['../../img/humanoid/humanoid-wires.png', None], dtype=object) array(['img/wires_1.jpg', 'Rear wiring'], dtype=object)]
docs.poppy-project.org
Omnibus GitLab based images

GitLab maintains a set of official Docker images based on our Omnibus GitLab package. These images include:

A complete usage guide to these images is available, as well as the Dockerfile used for building the images.

Cloud native images

GitLab is also working towards a cloud native set of containers, with a single image for each component service. We intend for these images to eventually replace the Omnibus GitLab based images.
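A typical launch of an Omnibus-based image follows the pattern from the usage guide referenced above. This is a sketch: the hostname, host paths, image edition, and tag are placeholders you would adapt to your deployment:

```sh
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ee:latest
```

The three volume mounts keep configuration, logs, and application data on the host, so the container itself stays disposable across upgrades.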
https://gitlab-docs.creationline.com/ee/install/docker.html
2021-02-25T07:04:00
CC-MAIN-2021-10
1614178350846.9
[]
gitlab-docs.creationline.com
- Migrating from self-managed GitLab to GitLab.com
- Migrating from GitLab.com to self-managed GitLab
- Migrating between two self-managed GitLab instances
- Migrating from Jira (issues only)
https://gitlab-docs.creationline.com/ee/user/project/import/index.html
2021-02-25T07:35:01
CC-MAIN-2021-10
1614178350846.9
[]
gitlab-docs.creationline.com
Step 1 — Navigate to your website's WordPress Dashboard > Elementor > Settings.
Step 2 — Tick the post types you want to edit with Elementor in the Post Types section and click the Save Changes button.
Step 3 — Then proceed to your post type (it can be Products or any other type), hover over it and click Edit.
Step 4 — Now you can click the Edit with Elementor button and start working!
https://docs.hibootstrap.com/docs/varn-theme-documentation/elementor/how-to-enable-elementor-editor-for-different-custom-post-types/
2021-02-25T08:16:07
CC-MAIN-2021-10
1614178350846.9
[array(['https://docs.hibootstrap.com/wp-content/uploads/2020/06/elementor_post_1-1024x449.png', None], dtype=object) ]
docs.hibootstrap.com
About This Book

This book explains how the concept of virtual documents allows InterSystems IRIS® to provide efficient support for Electronic Data Interchange (EDI) formats and for XML documents. This book provides the common information that applies to all virtual document formats. This book contains the following sections:
- Using Virtual Documents in a Production
- Controlling Message Validation
- Creating Custom Schema Categories
- Syntax Guide for Virtual Property Paths

For a detailed outline, see the table of contents. The following books provide related information:
- Developing Productions describes specific development practices in detail.
- Best Practices for Creating Productions describes best practices for organizing and developing productions.
- Routing EDIFACT Documents in Productions describes how to work with EDIFACT documents as virtual documents.
- Routing X12 Documents in Productions describes how to work with X12 documents as virtual documents.
- Routing XML Virtual Documents in Productions describes how to work with XML documents as virtual documents.
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=EEDI_PREFACE
2021-02-25T08:47:34
CC-MAIN-2021-10
1614178350846.9
[]
docs.intersystems.com
Current User

After a successful Login or Register request, the Current User is automatically set. The Current User is automatically logged in every time your application loads, if there was one when the app closed. You will need to Logout to clear it.

Auto Login

The easiest way to implement Auto Login in your application is by checking whether there is a Current User while your Launch Screen loads and performing a Screen Navigation to the authenticated users' section of your app.
- Create a new Screen Navigation that presents the first screen that your authenticated users should see when they open the app. Tip: We would suggest setting its type to Set as Root Screen.
- Open the Screen that you have set as Launch Screen. This is the screen that will be loaded first when your application starts.
- Create a new Function that performs the Screen Navigation from step 1, if the Current User is Logged In.
- Create a new Execute a Function Event Action in the When Screen First Loads Event.

Bind Current User values

You can use the values of the Current User properties when you need to bind dynamic parameter values.

Code Blocks

Current User
Using the Current User Code Block, you can access all the Current User's properties. You can connect it to Get Value and Set Value Code Blocks to access or change a property's value.

Get Current User Id
The Get Current User Id Code Block returns the Record Id of the Current User in case you need to use it in a Function.

Current User Is Logged In
You can use this Code Block in combination with an If Code Block in order to execute different flows in your app, based on whether the user is authenticated. You can check the Auto Login section. For example, you can allow the press of a button for authenticated users, and navigate non-authenticated users to a Login Screen.
https://docs.kodika.io/kodika-server/user-authentication/current-user
2021-02-25T07:08:57
CC-MAIN-2021-10
1614178350846.9
[array(['/assets/kodika-server/user-authentication/current-user/[email protected]', None], dtype=object) array(['/assets/kodika-server/user-authentication/current-user/[email protected]', None], dtype=object) array(['/assets/kodika-server/user-authentication/current-user/[email protected]', None], dtype=object) ]
docs.kodika.io
Difference between revisions of "Two-Factor setup"
Revision as of 20:42, 24 June 2020

How to set up your two-factor authentication. Before starting, download and install the 'Google Authenticator' app on your smartphone or tablet (iOS, Android, Blackberry, Windows Phone).

After you have successfully logged into Niagara, either using a password or keys, type this command on the command line:

$ autenticator_setup

Your terminal screen will clear and display this message:

****************************************************************
You have about to setup your Two-factor authentication.
Please download and install the 'Google Authenticator' app
in your smartphone or tablet (iOS, Android, Blackberry,
Windows Phone), then press 'Enter' to continue or Ctrl-C
to exit without setup.
****************************************************************

Press <enter> to continue. You will get this next message. Read it carefully:

****************************************************************
Google Authenticator is going to generate your new secret key.
A lot of output will scroll past, including a large QR code.
At this point, 60 seconds in your app. If the QR code is too
big to see it completely, you may reduce the font size of your
terminal window and enlarging the window again.
Press 'Enter' to continue or Ctrl-C to exit without setup.
****************************************************************

Press <enter> to continue. You will get this next message.
Read it carefully:

Warning: pasting the following URL into your browser exposes the OTP secret to Google:
|0&cht=qr&chl=otpauth://totp/[email protected]%3Fsecret%3DVCFN4IAS4Z6OYPHHOUQBHFO37A%26issuer%3Dnia-login07.scinet.local

[a large QR code is rendered here in the terminal as block characters]

Your new secret key is: VCFN4IAS4Z6OYPHHOUQBHFO37A
Your verification code is 900480
Your emergency scratch codes are:
22000145
31163391
78565881
89503548
88588782
45712462
67599332

****************************************************************

Make sure you record the secret key, verification code, and the emergency scratch codes in a safe place. Keep your scratch codes the good old-fashioned way: on a piece of paper in your wallet. Do not keep them in your phone or your computer. The scratch codes are used when you cannot generate a new One-Time Code, for example if you have lost your phone. A scratch code will give you access to the system while you recover your phone; scratch codes can be used only once.

Please login again. You will be prompted for the One-Time Code after you successfully authenticate. Press 'Enter' to continue

You are all set! You can now log out and log in again. This time the system will ask for your One-Time Password. You just get it from your smartphone (or tablet). Your data is now safely protected.

The QR code will probably not fit your screen, so you may not be able to scan it.
The URL above,..., will show you a smaller version of the QR code in your browser. Please bear in mind that pasting that URL into your browser exposes the OTP secret to Google. Another way to make your QR code fit the screen is to reduce the font size of your terminal window; your window will shrink, but then you can maximize the window and most likely you will be able to scan the code.
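For the curious, the One-Time Password your app shows is just RFC 6238 TOTP computed from a base32 secret key like the one above (real secrets may need '=' padding added before base32-decoding). This sketch is an illustration, not part of the SciNet setup; it assumes the defaults authenticator apps use: HMAC-SHA1, 30-second steps, 6 digits:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded secret."""
    if for_time is None:
        for_time = int(time.time())
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 reference secret (the base32 encoding of the ASCII string "12345678901234567890"), the code at Unix time 59 is 287082, matching the RFC's published test vector.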
https://docs.scinet.utoronto.ca/index.php?title=Two-Factor_setup&oldid=2667&diff=prev
2021-02-25T08:08:59
CC-MAIN-2021-10
1614178350846.9
[]
docs.scinet.utoronto.ca
The bit level data interchange format

Introduction

Bitproto is a fast, lightweight and easy-to-use bit level data interchange format for serializing data structures. The protocol describing syntax looks like the great protocol buffers, but at the bit level:

message Data {
    uint3 the = 1
    uint3 bit = 2
    uint5 level = 3
    uint4 data = 4
    uint11 interchange = 6
    uint6 format = 7
}  // 32 bits => 4B

The Data above is called a message; it consists of 7 fields and will occupy a total of 4 bytes after encoding. This image shows the layout of data fields in the encoded bytes buffer:

Features

- Supports bit level data serialization.
- Supports protocol extensibility, for backward-compatibility.
- Very easy to start: protocol syntax is similar to the well-known protobuf; generated code comes with a very simple serialization API.
- Supports the following languages:
  - C (ANSI C): no dynamic memory allocation.
  - Go: no reflection or type assertions.
- Blazing fast encoding/decoding (benchmark).

Code Example

Code example to encode a bitproto message in C:

struct Data data = {};
unsigned char s[BYTES_LENGTH_DATA] = {0};
EncodeData(&data, s);

And the decoding example:

struct Data data = {};
DecodeData(&data, s);

Simple and green, isn't it? Code patterns of bitproto encoding are exactly similar in C, Go and Python. Please check out the quickstart document for further guidance.

Why bitproto?

There is protobuf, why bitproto?

Origin

Bitproto was originally made when I was working with embedded programs on microcontrollers, where many programming constraints usually exist:
- tight communication size,
- limited compiled code size,
- preferably no dynamic memory allocation.

Protobuf does not live natively in the embedded field; it doesn't target ANSI C out of the box.

Scenario

It's recommended to use bitproto over protobuf when:
- Working on or with microcontrollers.
- You want bit-level message fields.
- You want to know clearly how many bytes the encoded data will occupy.
For scenarios other than the above, I recommend using protobuf over bitproto.

Vs Protobuf

The differences between bitproto and protobuf are:
- bitproto supports bit level data serialization, like the bit fields in C.
- bitproto doesn't use any dynamic memory allocations. Few protobuf C implementations support this; nanopb is one exception.
- bitproto doesn't support varying sized data; all types are fixed sized.
- bitproto won't encode typing or size reflection information into the buffer. It only encodes the data itself, without any additional data; the encoded data is arranged like it's arranged in memory, with fixed size, without paddings. Think of setting the aligned attribute to 1 on structs in C.
- Protobuf works well on backward compatibility. For bitproto, this was the main shortcoming of bitproto serialization until v0.4.0; since that version, it supports message extensibility by adding two bytes indicating the message size at the head of the message's encoded buffer. This breaks the traditional data layout design by encoding some minimal reflection size information in, so it is designed as an optional feature.

Shortcomings

Known shortcomings of bitproto:
- bitproto doesn't support varying sized types. For example, a uint37 always occupies 37 bits even if you assign it a small value like 1. This means there will be lots of zero bytes if the meaningful data occupies little of this type. For instance, there will be n-1 bytes left zero if only one byte of a type with n bytes size is used. Generally, we actually don't care much about this, since there are not so many bytes in communication with embedded devices. The protocol itself is meant to be designed tight and compact. Consider wrapping a compression mechanism like zlib around the encoded buffer if you really care.
- bitproto can't provide the best encoding performance with extensibility.
There's an optimization mode in bitproto that generates plain encoding/decoding statements directly at code-generation time: since all types in bitproto are fixed-sized, how to encode can be determined early, at code-generation time. This mode gives a huge performance improvement, but I still haven't found a way to make it work together with bitproto's extensibility mechanism.

Content list

- Quickstart
- C Guide
- Go Guide
- Python Guide
- Compiler
- Language Guide
- Performance
- Frequently Asked Questions
- Changelog
- License
https://bitproto.readthedocs.io/en/latest/
2021-02-25T07:48:54
CC-MAIN-2021-10
1614178350846.9
Managing drive arrays

Drives are iSCSI-based block devices attached to servers at runtime. They can be operating system drives or unformatted drives. Drive arrays are collections of identical drives that users can control as a single entity.

Creating a drive array using the UI

There are multiple ways to create a drive array from the Infrastructure Editor:

- Click on an instance array and go to the DriveArrays tab.
- Click on the Create DriveArray button.

Listing drive arrays of an instance array using the UI

- Click on an instance array and go to the DriveArrays tab.

Adding a new drive

1. In the Bigstep Infrastructure Editor, click on the Infrastructure button on the left bar and select Drive Array.
2. The new Drive Array will now be visible in the interface.
3. Click on the new Drive Array in order to customize it. You can tweak the size, storage type, operating system template, file system and block size, as well as attach it to an existing Instance Array or Container Array. You can also attach it to an array by simply dragging and dropping it in the interface.
4. An alternative way to create a Drive Array is from the Instance Array overview, in the DriveArrays tab. Click on Create DriveArray and a new one will be attached to the server with default settings. You can customize the Drive Array by clicking on its icon in the interface.
5. After configuring and attaching the drive, click on Deploy Changes in order to make it active.
6. The new disk will be attached to your server after the deploy. A server reboot might be needed, as well as additional configuration from your operating system, before it can be used.

Removing a drive

1. In order to delete an existing Drive Array, click on its icon in the infrastructure editor.
2. Click on Delete DriveArray.
3. Confirm that you want to delete the drive.
4. Click on Deploy Changes in the infrastructure editor.
5. As a safety measure, you will be warned about data loss.
To confirm, manually type destroydata before the actual deploy starts.

6. The drive will be deleted at the end of the deploy.

Expanding disk size

1. From the infrastructure editor, click on the DriveArray whose drive size you want to change. This opens the DriveArray overview panel.
2. Drag the Default drive size slider to increase the capacity, or simply type the desired size in GB. It is not possible to reduce the size of a drive, since this is a very risky operation on most operating systems.
3. Click on Save, then on Deploy changes.
4. Resizing a disk requires that the server be powered off. If any affected servers in the infrastructure are still powered on, you will be prompted to Hard power off or Soft power off them before the deploy. We recommend the Soft power off option.
5. The resized disk will be available after the deploy. Depending on the operating system on your server, additional configuration might be required.

Creating a drive array using the CLI

    metalcloud-cli drive-array create -ia gold -infra complex-demo -size 100000 -label da

In order for a drive array to be accessible, you need to push the Deploy changes button in the UI, or run the following metalcloud-cli command:

    metalcloud-cli infrastructure deploy -id complex-demo

or

    metalcloud-cli infrastructure deploy -id complex-demo -autoconfirm

Typically, drive arrays will expand with their instance array.
To stop that from happening, use -no-expand-with-ia.

Listing drive arrays of an infrastructure using the CLI

    $ metalcloud-cli drive-array list -infra complex-demo
    Drive Arrays I have access to as user [email protected]:
    +-------+-------------------+---------+-----------+-----------+------------------+---------+------------------+
    | ID    | LABEL             | STATUS  | SIZE (MB) | TYPE      | ATTACHED TO      | DRV_CNT | TEMPLATE         |
    +-------+-------------------+---------+-----------+-----------+------------------+---------+------------------+
    | 47859 | da                | ordered | 100000    | iscsi_ssd | gold (#37135)    | 1       |                  |
    | 45928 | drive-array-45928 | active  | 40960     | iscsi_ssd | workers (#35516) | 2       | CentOS 7.4 (#78) |
    | 45929 | drive-array-45929 | active  | 40960     | iscsi_ssd | master (#35517)  | 1       | CentOS 7.4 (#78) |
    | 47799 | gold-da           | ordered | 100000    | iscsi_ssd | gold (#37135)    | 1       |                  |
    | 47858 | test              | ordered | 40960     | iscsi_ssd |                  | 1       |                  |
    +-------+-------------------+---------+-----------+-----------+------------------+---------+------------------+
    Total: 5 Drive Arrays

All drive arrays with status "ordered" are not yet accessible.

Deleting a drive array via the CLI

To delete a drive array, use:

    metalcloud-cli drive-array delete -id 47859

Manually logging into the iSCSI target

Most of the time the drives will simply appear in the operating system after a reboot. However, sometimes manual intervention is required. Refer to "Manually managing iSCSI connections" for more information.
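The listing output above can be post-processed to find drive arrays still in the "ordered" state (and therefore not yet accessible). A small, illustrative parser, assuming the table format shown in the sample output:

```python
# Parse the ASCII table printed by `metalcloud-cli drive-array list` and
# return the IDs of rows whose STATUS column is "ordered". Column positions
# are assumptions based on the sample output above; a structured output mode
# of the CLI, if available, would be more robust than scraping text.
def ordered_drive_arrays(table_text):
    ids = []
    for line in table_text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 3 and cells[2] == "ordered":
            ids.append(cells[0])
    return ids

sample = """\
| 47859 | da                | ordered | 100000 | iscsi_ssd | gold (#37135)    |
| 45928 | drive-array-45928 | active  | 40960  | iscsi_ssd | workers (#35516) |
"""
print(ordered_drive_arrays(sample))  # ['47859']
```

Header and `+---+` separator rows are skipped automatically, since their STATUS cell never equals "ordered".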
https://docs.bigstep.com/en/latest/guides/managing_drive_arrays.html
2021-02-25T08:12:42
A role must be defined in the Roles Manager portlet in the Teradata Viewpoint portal before it can be assigned to a user. The Roles tab provides a list of available roles; roles can be selected and assigned to the current user account.

You can assign roles to an existing Teradata Viewpoint user account. If your Teradata Viewpoint portal is configured to use auto-provisioning, a user account is created automatically the first time a user logs on to Teradata Viewpoint. By default, auto-provisioned accounts are authenticated externally and assigned a default role. The authentication source (for example, LDAP) and default role are set during configuration.

- From the Admin menu, open the Roles Manager portlet.
- From the Users view, browse the list of users or use the filters to find users.
- Select a user name.
- Click the Roles tab. Available roles are listed in the AVAILABLE PORTAL ROLES pane. Roles assigned to the user are shown in the ROLES FOR <User> pane.
- Select a role from the AVAILABLE PORTAL ROLES list, or select multiple roles by pressing Shift or Ctrl.
- Click. The selected roles appear in the ROLES FOR <User> pane. In the AVAILABLE PORTAL ROLES pane, the assigned roles are dimmed.
- Click Apply.
https://docs.teradata.com/r/2s9WVVrAx1CTcNqi_pPisA/LRtr6vKusUvxYPydzHFFIg
2021-02-25T08:09:39
The Windows HTML Help Edition of the PHP Manual exceeds the presentational and interactive capabilities offered by other editions (including the online manual). This is possible because CHM files can currently only be viewed on Windows using Internet Explorer, so we can develop for one browser family and one operating system. Viewers for other OSes may be developed by third parties in the future.
http://docs.zhangziran.com/php53/chm.specialities.html
2021-02-25T07:37:55
Class: Aws::Rekognition::Types::ProtectiveEquipmentSummarizationAttributes

- Inherits: Struct
- Defined in: gems/aws-sdk-rekognition/lib/aws-sdk-rekognition/types.rb

Overview

When making an API call, you may pass ProtectiveEquipmentSummarizationAttributes data as a hash:

    {
      min_confidence: 1.0, # required
      required_equipment_types: ["FACE_COVER"], # required, accepts FACE_COVER, HAND_COVER, HEAD_COVER
    }

Constant Summary

- SENSITIVE = []

Instance Attribute Summary

- #min_confidence ⇒ Float: The minimum confidence level for which you want summary information.
- #required_equipment_types ⇒ Array<String>: An array of personal protective equipment types for which you want summary information.

Instance Attribute Details

#min_confidence ⇒ Float

The minimum confidence level for which you want summary information. The confidence level applies to person detection, body part detection, equipment detection, and body part coverage. Amazon Rekognition doesn't return summary information with a confidence lower than this specified value. There isn't a default value.

Specify a MinConfidence value between 50% and 100%, as DetectProtectiveEquipment returns predictions only where the detection confidence is between 50% and 100%. If you specify a value that is less than 50%, the results are the same as specifying a value of 50%.

#required_equipment_types ⇒ Array<String>

An array of personal protective equipment types for which you want summary information. If a person is detected wearing a required equipment type, the person's ID is added to the PersonsWithRequiredEquipment array field returned in ProtectiveEquipmentSummary by DetectProtectiveEquipment.
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Rekognition/Types/ProtectiveEquipmentSummarizationAttributes.html
2021-02-25T09:18:52
Managing assets

Assets are files that are served at the appropriate times via the DHCP, TFTP and HTTP mechanisms during PXE or ONIE. There are 3 types of assets:

- Text files stored in the DB (size limited to 4 MB). These can also hold references to variables or secrets. MIME type: text/plain.
- Binary files stored in the DB (size limited to 64 MB). MIME type: application/octet-stream.
- External files referenced as a URL. MIME type: text/plain or application/octet-stream.

Using text assets as file templates

If enabled, text files stored in the DB can act as templates. When they need to be served to the user via HTTP or TFTP, a search-and-replace is executed for strings matching the provided list of params, with the format {{<VARIABLE_NODE>}}. See Managing Variables and Secrets for more details.

Creating a URL asset

    metalcloud-cli asset create -url "%repoURL%/.tftp/boot/uefi-windows/bootx64.efi" -filename "bootx64.efi" -mime "application/octet-stream" -usage "bootloader" -return-id

Creating a text asset

Creating an asset from a pipe:

    echo "test" | metalcloud-cli asset create -filename "test.xxx" -mime "text/plain" -pipe -return-id

Creating a binary asset from a pipe

    cat data.bin | metalcloud-cli asset create -filename "data.bin" -mime "application/octet-stream" -pipe -return-id

Associating assets with templates

Before assets can be used, they need to be associated with a specific OS template. Associating an asset with a template at a specific path (/bootx64.efi):

    metalcloud-cli asset associate -id "bootx64.efi" -template-id windows2019 -path "/bootx64.efi"

During the install process, the file will be accessible at {{HTTP_SERVER_ENDPOINT}}/bootx64.efi. For more information, visit Creating a local install OS template.
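The {{VARIABLE}} search-and-replace described under "Using text assets as file templates" can be sketched in a few lines. This illustrates the mechanism only, not the actual metalcloud implementation, which may resolve variables and secrets differently:

```python
import re

# Replace {{NAME}} placeholders with values from a params dict, leaving any
# unknown placeholders untouched so they remain visible to the consumer.
def render_template(text, params):
    def repl(match):
        key = match.group(1)
        return str(params.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", repl, text)

rendered = render_template("{{HTTP_SERVER_ENDPOINT}}/bootx64.efi",
                           {"HTTP_SERVER_ENDPOINT": "http://10.0.0.1"})
print(rendered)  # http://10.0.0.1/bootx64.efi
```

The example endpoint value 10.0.0.1 is made up for illustration.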
https://docs.bigstep.com/en/latest/advanced/managing_assets.html
2021-02-25T07:48:13
Modifying the Cloud Probe configuration

The Maintenance Tool enables you to modify the configuration of the Real User Cloud Probe, such as updating the connection details to the Real User Collector or changing the monitored network card. You can run the Maintenance Tool both on Linux (in a GUI or from the command line) and on Windows.

To modify the Cloud Probe configuration

- Extract the files from the Cloud Probe binary download.
- Open the Cloud Probe Maintenance Tool:
  - On Linux, run the CloudProbeMaintenanceTool.sh file located in the utility folder.
  - On Windows, run the CloudProbeMaintenanceTool.cmd file located in the utility folder.
- Click the Configuration tab and modify the Cloud Probe configuration details:
  - (Optional) Modify any of the following connection details for the Real User Collector, and then click Next:
    - IP address or DNS name of the computer with the Collector installation
    - Port number of the Collector (by default, 443)
    - Collector user name
    - Collector password
  - (Optional) Modify the Cloud Probe name and select the monitored network card from the list. Click Next. The Cloud Probe name can have up to 60 alphanumeric characters, hyphens (-), and underscores (_). This name is displayed in the Real User Collector details.
- To complete the configuration, click Finish. A notice on the window indicates when the configuration is completed.

With the Cloud Probe Maintenance Tool, you can also view and download the Cloud Probe's system logs (see the Logs tab) and encrypt passwords (see the Encrypt tab).

To modify the Cloud Probe configuration from the command line (Linux)

- Navigate to the Maintenance Tool, located in the installation directory (by default, /opt/bmc/CloudProbe/cloudprobe).
- Run the following command to display all the configuration options:

    ./CloudProbeMaintenanceTool.sh -silent -help

- Run the command for the configuration options you want to change.
Note: If you need to change a password, ensure that you first use the -encrypt command to obtain an encrypted password:

    ./CloudProbeMaintenanceTool.sh -silent -encrypt -password=<password> -confirm_password=<password>

Replace the variable <password> with your password. After you run the command, use the generated encrypted password when you reconfigure the Real User Collector password.

Related topics

- Starting and stopping the Real User Cloud Probe
- Configuring traffic filtering rules on a Cloud Probe
- Configuring confidentiality policies on a Cloud Probe
- Setting up the monitored NIC for the Cloud Probe
https://docs.bmc.com/docs/applicationmanagement/113/modifying-the-cloud-probe-configuration-772581930.html
2021-02-25T08:20:00
While learning is great and reputation points are valuable for users, the points given for accepted answers and upvotes are few. Someone could complete a few courses and reach a high level, while someone who is actively answering questions gets only a few points. I believe the point system needs an update to provide more points for active contribution in the forum.
https://docs.microsoft.com/en-us/answers/content/idea/242970/add-more-reputation-points-for-accepted-answers-an.html
2021-02-25T09:18:10
To enhance community engagement, it would be nice to create a career path through this website. Stack Overflow, for example, has its own job portal, and active participants there have a better chance to land a job. While many of us participate here to learn more and help the community, it is possible to motivate experts to join and participate in our platform when they see it is a means to improve their career.

The key benefit of a forum is learning and capacity building. For example, I am participating here because I look into questions and understand the challenges IT professionals face, and while I look into the solution I learn a lot; when I come across a similar problem in the real world, I will fix it right away. That is how I learn and gain experience. This is the same for everyone who actively participates in forums. Now, there should be a model to prove it, and badges and the number of contributions are one way to do it.

In the next step, this profile could be integrated into a LinkedIn profile and show verified expertise for users. For example, consider a user who is very active in the Microsoft Windows forum by number of answers and responses; it would be possible to add a tag to his/her LinkedIn profile, like "expertise in Microsoft Edge verified by Microsoft Q&A". Then those who are looking for talent could find them more easily on LinkedIn by verified expertise, and it would also motivate others to be active in this forum. Since LinkedIn is part of Microsoft, doing so is not complex and just requires implementation of some APIs, and this could be an experience to extend to other forums too.
https://docs.microsoft.com/en-us/answers/content/idea/277069/integration-with-linkedin.html
2021-02-25T09:05:39
Landing Page Design - Purlem's Editor

Purlem's landing page editor makes it easy to modify the content, colors, images, and form on your PURL landing page. Don't like your current design? Select from several different templates to get started quickly. Want more control over the design? Advanced users may want to create their own templates or work directly with the HTML.

Content

Adding or modifying content on your landing page is as easy as updating the text areas provided under the Content tab in the editor. You can also use the gear dropdown to pull in variable content specific to the visitor.

Colors

Each Purlem template allows you to modify the colors. To do so, simply click on the Colors tab within the editor.

Images

You can upload new images to your landing page under the Images tab. You can also define variable images, or hide the image, using the gear dropdown.

Form

The editor also gives you full control to modify the landing page form. You can add new fields, change the type and size of the input, and reorder the fields by simply clicking and dragging.

Adding a Field

To add a new field, click on the field you would like to add under the Form tab.

Editing a Field

To edit an existing field (change the input type, requirement, and select options), roll over the field and select the edit button.

Deleting a Field

To delete a field, roll over the field, select the edit button, and choose the Delete link at the bottom right of the page.
https://docs.purlem.com/article/39-landing-page-design-purlems-editor
2021-02-25T08:29:39
webservice_parser

sunpy.net.helio.webservice_parser(service='HEC')

Quickly parses important contents from the HELIO registry. Uses the link contained in registry_links with 'service' appended, and scrapes the web-service links contained on that webpage.

Parameters:
    service (str): Indicates which particular HELIO service is used. Defaults to HEC.

Returns:
    links (list or NoneType): List of URLs to registries containing WSDL endpoints.

Examples:

    >>> from sunpy.net.helio import parser
    >>> parser.webservice_parser()  # doctest: +SKIP
    ['', '', '', '', '', '', '', '']
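The scraping step the docstring describes — collecting web-service links from a registry page — can be illustrated with the standard-library HTML parser. This is a sketch of the general approach only (the class name and the example.org URL below are made up); sunpy's actual selection logic may differ:

```python
from html.parser import HTMLParser

# Collect hrefs that look like WSDL endpoints from an HTML registry page.
class WSDLLinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.endswith("?wsdl"):
                    self.links.append(value)

page = '<a href="http://example.org/hec?wsdl">HEC</a> <a href="/about">about</a>'
collector = WSDLLinkCollector()
collector.feed(page)
print(collector.links)  # ['http://example.org/hec?wsdl']
```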
http://docs.sunpy.org/en/stable/api/sunpy.net.helio.webservice_parser.html
2019-07-15T20:22:47
CC-MAIN-2019-30
1563195524111.50
This page introduces the different material types in V-Ray for Rhino.

Overview

There are a number of different materials for use with V-Ray for Rhinoceros.
https://docs.chaosgroup.com/display/VNFR/Materials
2019-07-15T20:13:42
If the actual working times and the planned working times do not match, you have the possibility to edit them retrospectively. Go to the "Reports" section and choose the relevant department, employee and time frame. You will now see a list of all assignments. Click on the date or the time to change it. The same applies to the break time. The working times will update automatically.
http://docs.staffomatic.com/en/articles/1093162-how-can-i-edit-actual-and-planned-working-times
2019-07-15T21:08:19
STAFFOMATIC provides three different calendar views: calendar, list and employee. You can also switch between day, week and month view.

Calendar view: if you only have a few shifts to plan per day, it is recommended to use the calendar view.

List view: if you want to manage several shifts at once, we advise you to use the list view, sorted by departments.

Employee view: this view arranges the schedule by employees.
http://docs.staffomatic.com/en/articles/868539-what-calendar-views-are-there
2019-07-15T21:10:04
Part 4 - Viewing Device Data

In Part 1, Part 2, and Part 3 of this walkthrough we've created an API for our lōm smart plant mobile app that allows users to sign up, log in, and register devices. In this part we're going to add the ability to view a list of devices owned by the current user and cover how to request historical device data.

The first endpoint we'll create is GET /devices, which will return a list of all devices owned by the currently authenticated user.

- Set the Method to GET.
- Set the Route to /devices.
- Set the Description to anything you want.
- Set the Access Control to Any authenticated user.

Like all endpoints we've created, this one will also need a workflow with a matching endpoint trigger. Next, add a Get Device node that we'll use to look up the user's devices.

- Set the Find by... to Tag Query. This allows us to query devices by their tags.
- Check the Return multiple devices? checkbox.
- Add a tag query, set the Key Template to "owner", and set the Value Template to {{ experience.user.id }}.
- Check the Return tags as an object map instead of an array checkbox. This changes how tags are returned: it makes them an object map, which is easier to work with in most cases.
- Set the Result Path to data.devices.

In Part 3 of this walkthrough, we showed how devices were tagged with the owner when they were created. We can now use this tag to find all devices owned by the user that requested this endpoint. Losant automatically adds the Experience User to the payload whenever an authenticated route is requested, so we can get the user ID at experience.user.id.

All that's left to do for this endpoint is to reply with the devices we just looked up.

- Set the Response Code Template to 200, which is the HTTP status code for OK.
- Change the Response Body Source radio to Payload Path.
- Change the Response Body Payload Path to data.devices. This is the location on the payload where the Get Device node put its result.
We're going to simply reply with the same value.

- Add a Content-Type header with the value application/json.

At this point, the lōm mobile app can now request all of the devices owned by the currently logged-in user. You can test this route by requesting the devices created in Part 3 of this walkthrough.

    curl -H "Authorization: Bearer USER_TOKEN" \
    [
      {
        "name": "My Awesome Plant",
        "tags": {
          "owner": ["58e016591c3ce300017cc5d4"],
          "manufacturerId": ["000000"]
        },
        "attributes": [
          { "name": "moisture", "dataType": "number" }
        ],
        "id": "58e4605f29eeec0001d383df"
        ...
      }
    ]

The lōm mobile app now allows the user to select a device from the list to show historical moisture data collected by that device. To support this, we need to create a new endpoint to get this data. We'll call it GET /devices/{id}/data. The device ID for each device is available in the result of the GET /devices endpoint above.

- Set the Method to GET.
- Set the Route to /devices/{id}/data. This route makes use of route parameters, which provide a way to pass variables to the endpoint. In this case, we'll be passing the Losant device ID.
- Set the Access Control to Any authenticated user.

Next, create a workflow for this endpoint with a matching endpoint trigger. The first thing this workflow needs to do once it's triggered is determine whether or not the current user has access to the device ID that was passed in. We can do this by getting the device that matches the route parameter and then checking that its owner tag matches the ID of the current user. First, add a Get Device node to the canvas.

- Set the ID Template to {{ data.params.id }}.
- Check the Return tags as an object map instead of an array checkbox.
- Set the Result Path to data.device.

Losant automatically parses the route parameters and puts them on the payload. In this case, our device ID parameter id can be found at data.params.id.
The next step is to add a Conditional node to check that this device has an owner tag that matches the current user. Set the Expression to the following:

    {{ data.device }} && {{ data.device.tags.owner.[0] }} === {{ experience.user.id }}

This expression first checks to see if any device was returned at all. Since this route could be requested with any value as the ID, for example /devices/not-a-real-value/data, the result of the Get Device node could be null. It then checks that the value of the owner tag matches the current Experience User. Since Losant devices can have duplicate values for the same tag, the values are always returned as an array, which is why the syntax is data.device.tags.owner.[0]. If the device can't be found, or the owner tag doesn't match, simply use an Endpoint Reply node to respond with a 403 (Forbidden).

If we did get a valid device ID, we can now use the Time Series node to request historical data for the device.

- Select the Use Device ID(s) specified on the current payload radio.
- Set the Device ID(s) JSON Path to data.params.id. This will perform a time-series query against the device that was passed in through the route parameters.
- Set the Time Range to whatever range makes sense for the application. In this example, it's showing the last 24 hours of data.
- Set the Resolution to whatever makes sense for the application. In this example, it will return 24 hours of data with a data point for each 5-minute interval.
- Set the Payload Path for Value to data.resultData.

The Time Series node provides powerful aggregation support for data that has been reported by your devices. The duration, resolution, and aggregation can all be changed to match your desired result. The last step is to reply with this data using an Endpoint Reply node.

- Set the Response Code Template to 200, which is the HTTP status code for OK.
- Change the Response Body Source radio to Payload Path.
- Set the Response Body Payload Path to data.resultData.
This is where the Time Series node put its result, so this reply simply sends the result back to the client.

- Add a Content-Type header with the value application/json.

You can now test this endpoint to see data that has been reported by your application's devices.

    curl -H "Authorization: Bearer USER_TOKEN" \
    [
      { "time": "2017-04-05T05:05:00.000Z", "sum": 235, "count": 5, "value": 47 },
      { "time": "2017-04-05T05:00:00.000Z", "sum": 235, "count": 5, "value": 47 }
      ...
    ]

The result of this API request will be a data point for each five-minute interval over the last 24 hours. Since the aggregation in the Time Series node was set to Mean, the value above is the average. The count is the number of data points collected in that five-minute interval. The sum is the value of every point collected in the five-minute interval added together.

The lōm mobile app now has an API that supports all required features. Users can sign up, log in, register devices, see a list of their devices, and view historical data for each device. This concludes the Losant Application Experience walkthrough. If you have additional questions about experiences, please visit our forums.
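The Conditional node's owner check from earlier in this part can be mirrored in plain Python to make its logic concrete. This illustrates the expression's semantics only; it is not code that runs inside Losant:

```python
# Mirror of: {{ data.device }} && {{ data.device.tags.owner.[0] }} === {{ experience.user.id }}
def user_can_access(device, user_id):
    if device is None:                    # unknown device ID -> no device found
        return False
    owners = device.get("tags", {}).get("owner", [])
    return bool(owners) and owners[0] == user_id   # tag values are arrays in Losant

device = {"tags": {"owner": ["58e016591c3ce300017cc5d4"]}}
print(user_can_access(device, "58e016591c3ce300017cc5d4"))  # True
print(user_can_access(None, "58e016591c3ce300017cc5d4"))    # False -> reply 403
```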
https://docs.losant.com/experiences/walkthrough/part4/
2019-07-15T21:13:50
Principles

At NewTecnia Solutions SLU, we keep only the most basic information about you. All data you store on our servers is encrypted and can only be decrypted by you logging into our systems. NewTecnia Solutions SLU is a Spanish company located at C/Halcon 8, Las Rozas de Madrid, Madrid, Spain 28232. NewTecnia Solutions SLU complies with Spanish and EU privacy laws.

Non-Owners

If you are a non-owner member of a team, business, or family account, you cannot provide us with Secure Data through your use of any of our services, apps or APIs.

Service Data

We inevitably acquire Service Data about your usage of any of our services, apps or APIs.

Data Processing Agreement (GDPR)

Our .eu and .ca services fully comply with the GDPR, including the third-country data transfer requirements. Our .com service complies with everything except for the third-country data transfer requirements.

Data Location and Transfer

Our services and apps are held on servers located within the United States. We may, for technical purposes, store, relocate, or replicate data and/or services to any of the data centers in any of the countries and regions provided by our cloud providers.

Customer support system

Our customer support and email services are hosted primarily in the United States. Any information you choose to send us through email or our customer support system may pass through and be stored on a variety of intermediate services.

Keep your account(s) in good standing or you may eventually lose access.

Data Portability

You may export your data at any time during the life of your account. If you discontinue payment, your account will enter a frozen (read-only) state for a period of 3 months, during which you may still retrieve and export your data.
Client applications, including web browsers, will store information about your account to assist with future sign-ins and to keep some information available to you when you are not signed in. Users may remove all such information from their devices. We may update this policy, informing you by email of substantive changes. Previous versions will be made available from this page.
https://docs.pkhub.io/privacy/
2019-07-15T19:53:10
CC-MAIN-2019-30
1563195524111.50
View on-premises report server reports and KPIs in the Power BI mobile apps

You can view on-premises Power BI reports, Reporting Services mobile reports, and KPIs in the Power BI mobile apps.

To try the samples, tap the global navigation button in the upper-left corner, then tap the gear icon in the upper right. Tap Reporting Services samples, then browse to interact with the sample KPIs and mobile reports.

Connect to an on-premises report server

On your mobile device, open the Power BI app. If you haven't signed in to Power BI yet, tap Report Server. If you've already signed in to the Power BI app, tap the global navigation button, then tap the gear icon in the upper right. Enter your user name and password, and use this format for the server address:

http://<servername>/reports OR https://<servername>/reports

Include http or https in front of the connection string. (Optional) Under Advanced options, you can give the server a friendly name, if you'd like. Now you see the server in the left navigation bar--in this example, called "power bi report server."

Connect to an on-premises report server in iOS

If you're viewing Power BI in the iOS app, your favorite KPIs and reports from the web portal are all on this page, along with Power BI dashboards in the Power BI service.

Remove a connection to a report server

- At the bottom of the left navigation bar, tap Settings.
- Tap the server name you don't want to be connected to.
- Tap Remove Server.
https://docs.microsoft.com/en-us/power-bi/consumer/mobile/mobile-app-ssrs-kpis-mobile-on-premises-reports
2019-07-15T21:26:14
[Image: Report Server home in the mobile apps]
Advanced scheduling involves configuring a pod so that the pod is required to run on particular nodes or has a preference to run on particular nodes. Generally, advanced scheduling is not necessary, as the OpenShift Container Platform automatically places pods in a reasonable manner. For example, the default scheduler attempts to distribute pods across the nodes evenly and considers the available resources in a node. However, you might want more control over where a pod is placed. If a pod needs to be on a machine with a faster disk speed (or prevented from being placed on that machine) or pods from two different services need to be located so they can communicate, you can use advanced scheduling to make that happen. To ensure that appropriate new pods are scheduled on a dedicated group of nodes and prevent other new pods from being scheduled on those nodes, you can combine these methods as needed. There are several ways to invoke advanced scheduling in your cluster: Pod affinity allows a pod to specify an affinity (or anti-affinity) towards a group of pods (for an applicationโ€™s latency requirements, due to security, and so forth) it can be placed with. The node does not have control over the placement. Pod affinity uses labels on nodes and label selectors on pods to create rules for pod placement. Rules can be mandatory (required) or best-effort (preferred). Node affinity allows a pod to specify an affinity (or anti-affinity) towards a group of nodes (due to their special hardware, location, requirements for high availability, and so forth) it can be placed on. The node does not have control over the placement. Node affinity uses labels on nodes and label selectors on pods to create rules for pod placement. Rules can be mandatory (required) or best-effort (preferred). See Using Node Affinity. Node selectors are the simplest form of advanced scheduling. 
Like node affinity, node selectors also use labels on nodes and label selectors on pods to allow a pod to control the nodes on which it can be placed. However, node selectors do not have required and preferred rules that node affinities have. See Using Node Selectors. Taints/Tolerations allow the node to control which pods should (or should not) be scheduled on them. Taints are labels on a node and tolerations are labels on a pod. The labels on the pod must match (or tolerate) the label (taint) on the node in order to be scheduled. Taints/tolerations have one advantage over affinities. For example, if you add to a cluster a new group of nodes with different labels, you would need to update affinities on each of the pods you want to access the node and on any other pods you do not want to use the new nodes. With taints/tolerations, you would only need to update those pods that are required to land on those new nodes, because other pods would be repelled.
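The taint/toleration pairing described above can be sketched as a pod spec. This is a minimal, hypothetical example: the key `dedicated`, value `groupName`, and image name are placeholders, not values from this document. A node tainted with `dedicated=groupName:NoSchedule` repels all new pods except those carrying a matching toleration:

```yaml
# Hypothetical pod that tolerates the taint dedicated=groupName:NoSchedule,
# so it may be scheduled onto nodes carrying that taint.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-example
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "groupName"
    effect: "NoSchedule"
```

The taint itself would be applied to the node separately, for example with `oc adm taint nodes <node> dedicated=groupName:NoSchedule`.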
https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/scheduler-advanced.html
2019-07-15T20:21:05
Data Structure

This document intends to explain every aspect of the structure and its fields. The integration exposes the following methods. Each request sent to the service URL requires a node called rqXML. The current method's Input object travels inside this node.

Typical Exchange Message Scenario

A typical message exchange flow between Providers and Sellers can be summarized as:

Retrieval and purchase of a car rental product:
- Agencies retrieve the available product from the provider using OTA_VehAvailRate.
- Once the final customer selects an option, a pre-booking must be done using OTA_VehRateRule.
- Finally, when the customer agrees to purchase the option, the booking is created using OTA_VehRes.

Manage bookings:
- The information related to a previously created booking can be retrieved using OTA_VehRetRes.
- A previously made reservation can be cancelled using OTA_VehCancel.

Office mapping:
- Agencies can retrieve the available offices using OTA_VehLocSearch.

Data structure content:
- Common-Elements
- Availability
- Pre-Booking (Rate Rule)
- Booking
- Routes (Offices)
- OTA VehRetRes (GetBooking Details)
- Cancel Booking
- StaticConfiguration
- RunTimeConfiguration
- Code List
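As a rough illustration of the envelope described above: every call wraps the method's Input object in the rqXML node. The method name shown (OTA_VehAvailRateRQ) and the inner elements are illustrative placeholders only; consult each method's own page for the actual Input structure:

```xml
<rqXML>
  <!-- The current method's Input object travels inside this node.
       Element names below are illustrative, not taken from this section. -->
  <OTA_VehAvailRateRQ>
    <POS>...</POS>
    <VehAvailRQCore>...</VehAvailRQCore>
  </OTA_VehAvailRateRQ>
</rqXML>
```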
https://docs.travelgatex.com/legacy/docs/car/data-structure/
2019-07-15T20:55:01
Toctree and the Hierarchical Structure of a Manual

You can define what should be included in the menu with the .. toctree:: directive. Only .rst files that are included in a toctree are included in the menu. The toctree directive can also be used to display a table of contents on the current page (if :hidden: is not added to the toctree). The first headline of an .rst file is its "doctitle". That is the document's title property. The title and the following headlines are used for cross-references and appear in menus and tables of contents.

General Rules for Using .. toctree::

Each .rst file should have a doctitle, for example:

==========
Some Title
==========

Do not use any additional headlines in the file if it contains a .. toctree:: directive.

Note: What we call "headlines" here is called "sections" in reST jargon; see Headlines and Sections.

How it works

2017-02-13 by Martin Bless

TYPO3 documentation usually starts with the file PROJECT/Documentation/Index.rst. The text may go into more than one text file, and these can be "pulled in" and referenced by the .. toctree:: directive.

Note:
- Each .. toctree:: directive creates a sublevel of headlines in the menu.
- The sublevel refers to the current level.

Problem

Sometimes you don't get what you expect:

================
My Documentation
================

Introduction
============

This project does something very useful ...
See the individual chapters.

.. toctree::

   Chapter-1
   Chapter-2
   Chapter-3

The example feels very natural. We are thinking of the introduction followed by the single chapters. Unfortunately, we get something different. The chapters will all be a subpart of Introduction and not at the same level. It is exactly what the Sphinx documentation states, and there is no easy way to "tweak" this behavior.

Solution

Use these rules of thumb:

- All or nothing: Pull in all content of a given level via toctree or don't use toctree at all.
- Or, in other words: do not use a headline ("section") in a document before a .. toctree:: directive unless you really want to place the pulled-in documents at a sublevel of that section.

Here is how we can fix the example: move the introduction to an extra file and pull it in just like the others. Fixed example:

================
My Documentation
================

You can have text here. But don't introduce headlines, if you
want to have the pulled in files at the same level.

.. toctree::

   Introduction
   Chapter-1
   Chapter-2
   Chapter-3

Now the document titles (not shown here) of the files Introduction.rst, Chapter-1.rst, Chapter-2.rst and Chapter-3.rst will all be at the sublevel of My Documentation in the menu.
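If you do want to control how deep the menu goes, or how the block is labeled, toctree accepts standard Sphinx options. The sketch below reuses the same placeholder chapter files as above; :maxdepth: limits how many headline levels appear in the menu, and :caption: labels the block:

```rst
================
My Documentation
================

.. toctree::
   :maxdepth: 2
   :caption: Contents

   Introduction
   Chapter-1
   Chapter-2
   Chapter-3
```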
https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/WritingReST/MenuHierarchy.html
2019-07-15T20:58:20
There are some situations with iOS where your game can work perfectly in the Unity editor but then doesn't work, or maybe doesn't even start, on the actual device. The problems are often related to code or content quality. This section describes the most common scenarios.

There are a number of reasons why this may happen. Typical causes include:

This message typically appears on iOS devices when your application receives a NullReferenceException. There are two ways to figure out where the fault happened:

Unity includes software-based handling of the NullReferenceException. The AOT compiler includes quick checks for null references each time a method or variable is accessed on an object. This feature affects script performance, which is why it is enabled only for development builds (enable the script debugging option in the Build Settings dialog). If everything was done right and the fault actually is occurring in .NET code, then you won't see EXC_BAD_ACCESS anymore. Instead, the .NET exception text will be printed in the Xcode console (or else your code will just handle it in a "catch" statement). Typical output might be:

Unhandled Exception: System.NullReferenceException: A null value was found where an object instance was required.
  at DayController+$handleTimeOfDay$121+$.MoveNext () [0x0035a] in DayController.js:122

This indicates that the fault happened in the handleTimeOfDay method of the DayController class, which works as a coroutine. Also, if it is script code then you will generally be told the exact line number (e.g. "DayController.js:122"). The offending line might be something like the following:

Instantiate(_img);

This might happen if, say, the script accesses an asset bundle without first checking that it was downloaded correctly.
Native stack traces are a much more powerful tool for fault investigation, but using them requires some expertise. Also, you generally can't continue after these native (hardware memory access) faults happen. To get a native stack trace, type bt all into the Xcode Debugger Console. Carefully inspect the printed stack traces; they may contain hints about where the error occurred. You might see something like:

...
Thread 1 (thread 11523):
#0 0x006267d0 in m_OptionsMenu_Start ()
#1 0x002e4160 in wrapper_runtime_invoke_object_runtime_invoke_void__this___object_intptr_intptr_intptr ()
#2 0x00a1dd64 in mono_jit_runtime_invoke (method=0x18b63bc, obj=0x5d10cb0, params=0x0, exc=0x2fffdd34) at /Users/mantasp/work/unity/unity-mono/External/Mono/mono/mono/mini/mini.c:4487
#3 0x0088481c in MonoBehaviour::InvokeMethodOrCoroutineChecked ()
...

Firstly, you should find the stack trace for "Thread 1", which is the main thread. The very first lines of the stack trace will point to the place where the error occurred. In this example, the trace indicates that the NullReferenceException happened inside the OptionsMenu script's Start method. Looking carefully at this method implementation would reveal the cause of the problem. Typically, NullReferenceExceptions happen inside the Start method when incorrect assumptions are made about initialization order. In some cases only a partial stack trace is seen on the Debugger Console:

Thread 1 (thread 11523):
#0 0x0062564c in start ()

This indicates that native symbols were stripped during the Release build of the application. The full stack trace can be obtained with the following procedure:

This usually happens when an external library is compiled with the ARM Thumb instruction set. Currently, such libraries are not compatible with Unity. The problem can be solved easily by recompiling the library without Thumb instructions.
You can do this for the library's Xcode project with the following steps: If the library source is not available, you should ask the supplier for a non-Thumb version of the library.

Sometimes, you might see a message like Program received signal: "0". This warning message is often not fatal and merely indicates that iOS is low on memory and is asking applications to free up some memory. Typically, background processes like Mail will free some memory and your application can continue to run. However, if your application continues to use memory or asks for more, the OS will eventually start killing applications, and yours could be one of them. Apple does not document what memory usage is safe, but empirical observations show that applications using less than 50% of all device RAM do not have major memory usage problems. The main metric you should rely on is how much RAM your application uses. Your application memory usage consists of three major components:

Note: The internal profiler shows only the heap allocated by .NET scripts. Total memory usage can be determined via Xcode Instruments as shown above. This figure includes parts of the application binary, some standard framework buffers, Unity engine internal state buffers, the .NET runtime heap (the number printed by the internal profiler), the GLES driver heap and some other miscellaneous data. The other tool displays all allocations made by your application and includes both native heap and managed heap statistics. The important statistic is the Net bytes value. To keep memory usage low:

Querying the OS about the amount of free memory may seem like a good idea to evaluate how well your application is performing. However, the free memory statistic is likely to be unreliable, since the OS uses a lot of dynamic buffers and caches.
The only reliable approach is to keep track of memory consumption for your application and use that as the main metric. Pay attention to how the graphs from the tools described above change over time, especially after loading new levels.

There could be several reasons for this. You need to inspect the device logs to get more details. Connect the device to your Mac, launch Xcode and select Window > Devices and Simulators from the menu. Select your device in the window's left toolbar, then click on the Show the device console button and review the latest messages carefully. Additionally, you may need to investigate crash reports; you can find out how to obtain crash reports here.

There is a poorly-documented time limit for an iOS application to render its first frames and process input. If your application exceeds this limit, it will be killed by SpringBoard. This may happen in an application with a first scene which is too large, for example. To avoid this problem, it is advisable to create a small initial scene which just displays a splash screen, waits a frame or two with yield, and then starts loading the real scene. This can be done with code as simple as the following:

IEnumerator Start() {
    yield return new WaitForEndOfFrame();
    // Do not forget the using UnityEngine.SceneManagement directive
    SceneManager.LoadScene("Test");
}

Currently Type.GetProperty() and Type.GetValue() are supported only for the .NET 2.0 Subset profile. You can select the .NET API compatibility level in the Player settings.
Note: Type.GetProperty() and Type.GetValue() might be incompatible with managed code stripping and might need to be excluded (you can supply a custom non-strippable type list during the stripping process to accomplish this). For further details, see the iOS player size optimization guide.

The Mono .NET implementation for iOS is based on AOT (ahead-of-time compilation to native code) technology, which has its limitations. It compiles only those generic type methods (where a value type is used as a generic parameter) which are explicitly used by other code. When such methods are used only via reflection or from native code (i.e. the serialization system), they get skipped during AOT compilation. The AOT compiler can be hinted to include code by adding a dummy method somewhere in the script code. This can refer to the missing methods and so get them compiled ahead of time.

void _unusedMethod() {
    var tmp = new SomeType<SomeValueType>();
}

Note: Value types are basic types, enums and structs.

.NET Cryptography services rely heavily on reflection and so are not compatible with managed code stripping, since stripping involves static code analysis. Sometimes the easiest solution to the crashes is to exclude the whole System.Security.Cryptography namespace from the stripping process. The stripping process can be customized by adding a custom link.xml file to the Assets folder of your Unity project. This specifies which types and namespaces should be excluded from stripping. Further details can be found in the iOS player size optimization guide.
<linker>
  <assembly fullname="mscorlib">
    <namespace fullname="System.Security.Cryptography" preserve="all"/>
  </assembly>
</linker>

You should consider the above advice, or try working around this problem by adding extra references to specific classes to your script code:

object obj = new MD5CryptoServiceProvider();

This error usually happens if you use lots of recursive generics. You can hint to the AOT compiler to allocate more trampolines of type 0, type 1 or type 2. Additional AOT compiler command line options can be specified in the Other Settings section of the Player settings. For type 0 trampolines specify ntrampolines=ABCD, where ABCD is the number of new trampolines required (i.e. 4096). For type 1 trampolines specify nrgctx-trampolines=ABCD, and for type 2 trampolines specify nimt-trampolines=ABCD.

Some recent Xcode releases introduced changes in the PNG compression and optimization tool. These changes might cause false positives in Unity iOS runtime checks for splash screen modifications. If you encounter such problems, try upgrading Unity to the latest publicly available version. If this does not help, you might consider the following workaround: If this still does not help, try disabling PNG re-compression in Xcode:

The most common mistake is to assume that WWW downloads always happen on a separate thread. On some platforms this might be true, but you should not take it for granted. The best way to track WWW status is either to use the yield statement or to check the status in the Update method. You should not use busy while loops for that.

Some operations with the UI will result in iOS redrawing the window immediately (the most common example is adding a UIView with a UIViewController to the main UIWindow). If you call a native function from a script, it will happen inside Unity's PlayerLoop, resulting in PlayerLoop being called recursively.
In such cases, you should consider using the performSelectorOnMainThread method with waitUntilDone set to false. It will inform iOS to schedule the operation to run between Unity's PlayerLoop calls.

If your application runs OK in the editor but you get errors in your iOS project, this may be caused by missing DLLs (e.g. I18N.dll, I18N.West.dll). In this case, try copying those DLLs from within Unity.app to your project's Assets\Plugins folder. The location of the DLLs within the Unity app is: Unity.app\Contents\Frameworks\Mono\lib\mono\unity. You should then also check the stripping level of your project to ensure the classes in the DLLs aren't being removed when the build is optimized. Refer to the iOS Optimization page for more information on iOS stripping levels.

Typically, such a message is received when a managed function delegate is passed to a native function, but the required wrapper code wasn't generated when the application was built. You can help the AOT compiler by hinting which methods will be passed as delegates to the native code. This can be done by adding the MonoPInvokeCallbackAttribute custom attribute. Currently, only static methods can be passed as delegates to the native code.
Sample code:

using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;
using AOT;

public class NewBehaviourScript : MonoBehaviour {
    [DllImport ("__Internal")]
    private static extern void DoSomething (NoParamDelegate del1, StringParamDelegate del2);

    delegate void NoParamDelegate ();
    delegate void StringParamDelegate (string str);

    [MonoPInvokeCallback(typeof(NoParamDelegate))]
    public static void NoParamCallback() {
        Debug.Log ("Hello from NoParamCallback");
    }

    [MonoPInvokeCallback(typeof(StringParamDelegate))]
    public static void StringParamCallback(string str) {
        Debug.Log(string.Format("Hello from StringParamCallback {0}", str));
    }

    // Use this for initialization
    void Start() {
        DoSomething(NoParamCallback, StringParamCallback);
    }
}

This error usually means there is too much code in a single module. Typically, it is caused by having lots of script code or by including big external .NET assemblies in the build. Enabling script debugging might make things worse, because it adds quite a few additional instructions to each function, so it is easier to hit that limit. Enabling managed code stripping in the Player settings might help with this problem, especially if big external .NET assemblies are involved. But if the issue persists, the best solution is to split the user script code into multiple assemblies. The easiest way to do this is to move some code to the Plugins folder. Code at this location is placed in a different assembly. Also, check the information about how special folder names affect script compilation.
https://docs.unity3d.com/Manual/TroubleShootingIPhone.html
2019-07-15T20:08:07
To create an employee without an email address, click on 'Employees' in the menu. On the right you will see a red button '+ invite employee'. A window will open where you can choose the option 'employee without email address'. Enter all information and click on 'create'.

Another possibility would be to create subaddresses of your own email address. For example: [email protected], [email protected], [email protected], etc. Instead of a number you could also use the name of the person.
http://docs.staffomatic.com/en/articles/824818-how-do-i-invite-an-employee-without-an-email-address
2019-07-15T21:08:16
WordPress

Acquire is a quick, effective way to communicate with customers. Start using it on your WordPress site by installing the free Acquire plugin for WordPress. It takes less than 3 minutes to install and can increase conversions.

Step 1

1. Log in to your WordPress account and open your dashboard, then select Plugins in the left sidebar.
2. In the "Add New" section, search for "Acquire" with the search bar at the top right.
3. Install and activate the plugin. Then find Acquire in the left sidebar and pick a widget to turn the chat on!

At this stage, it's done. Once the plugin has been activated, you'll return to the Installed Plugins page.

Step 2

Scroll down and you'll see the Acquire settings. To add the chat widget to your website, link up your Acquire account here: enter your account ID in the text box and click the "Activate" button. If you don't know how to find your Acquire account ID, please visit the Acquire Integration tutorial. To register for an Acquire account, please click the blue "click here" link.

Get Account ID

1. If you do not have an Acquire account, register here.
2. After logging in to the Acquire dashboard, click your username [YOUR_NAME] in the top-right corner to find your Acquire ACCOUNT ID.

Step 3

1. After your account ID is linked up with Acquire, you can check the current status.
2. Visit your website homepage and click on the small icon displayed in the bottom corner, and start a chat there.
3. Chats will appear in the Acquire portal, where agents can start communicating with customers.

Acquire live chat makes user communication on your WordPress site easy, and you can handle all chats from the Acquire dashboard. If you are experiencing any trouble with this integration, please contact us via Acquire or at [email protected]
https://docs.acquire.io/wordpress
2019-07-15T20:30:18
This page provides information on the Bump Normals Render Element, which creates a normal-style image of the camera view from the bumps and normals in the scene.

Overview

The V-Ray Bump Normals render element stores the camera view as a normal map. The normals are generated using screen space (the Screen coordinate system in Maya). This render element is similar to the Normals (vrayRE_Normals) Render Element, which does not include bump maps. The Bump Normals Render Element is useful for adjusting lighting that comes from a particular direction. For example, faces of objects pointing toward the camera will be predominantly blue in vrayRE_BumpNormals, so the lighting on such objects can be adjusted by using the blue channel in compositing software.

UI Path: ||Render Settings window|| > Render Elements tab > Bump Normals (e.g. saved as .bumpnormals.vrimg)

Filtering - Applies the image filter to this channel. Image filter settings are in the Image Sampler rollout in the V-Ray tab of the Render Settings.

This render element allows compositing to react to the pixels as if they were on the surface of the model.

Bump Normals Render Element
The World Positions pass
Original Beauty Composite
The resulting relit composite (2 point lights of varying intensities and colors were used along with a directional light)
https://docs.chaosgroup.com/pages/viewpage.action?pageId=39814298
2019-07-15T20:32:06
REST API

For day-to-day usage you probably want to use the CLI app. The REST API allows you to write custom tooling or integrate PKHub features directly into your application. We used Swagger to define our REST API, which allows you to generate REST bindings for any number of languages. Generators exist for Java, Golang, Scala, Clojure, Ruby, Python, JavaScript, and more. Our Swagger JSON definition is provided at: Swagger.json
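Since the API is defined with Swagger, you can also inspect the definition programmatically instead of generating bindings. The sketch below assumes you have already downloaded the Swagger JSON to a local file; the structure it relies on (a `paths` object keyed by route, each holding HTTP methods) is standard Swagger 2.0/OpenAPI, not anything specific documented here:

```python
import json

def list_endpoints(swagger_doc: dict):
    """Return (METHOD, path, summary) tuples from a Swagger/OpenAPI definition."""
    endpoints = []
    for path, methods in swagger_doc.get("paths", {}).items():
        for method, op in methods.items():
            # Skip non-operation keys such as "parameters"
            if method.lower() in {"get", "post", "put", "delete", "patch"}:
                endpoints.append((method.upper(), path, op.get("summary", "")))
    return sorted(endpoints)

# With the real definition you would use:
#   swagger_doc = json.load(open("swagger.json"))
doc = json.loads('{"paths": {"/secrets": {"get": {"summary": "List secrets"}}}}')
print(list_endpoints(doc))
```

This gives a quick overview of the available routes before you commit to generating a full client.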
https://docs.pkhub.io/rest-api/
2019-07-15T20:09:39
Introduction

Stein Expedite helps you:

- display data from Google Sheets on your website, using handlebars-like {{ }} syntax
- directly link a form to a Google Sheet.

All straight through the simple, beloved HTML. Expedite is a part of the Stein suite, and is thus powered by the Stein API. To utilize Expedite, though, you don't need to learn the details of the core API. First, create an API for your sheet via the interface. It's nothing technical, and wouldn't take you more than a couple of minutes. Read on to learn how to install Expedite on your site.
https://docs.steinhq.com/expedite-introduction
2019-07-15T20:55:29
Hevo sends out alerts to its users for any changes that occur on pipelines, models, workflows, and destinations. Hevo also sends out periodic status updates on these entities. The alerts are delivered to users over email or Slack.

As a user, at times you might want to receive alerts over Slack only, but not over email, or vice versa. You might also want to opt out of certain kinds of notifications completely. Hevo allows you to set up alert preferences as per your need.

To customize your alert preferences, select the Admin tab at the bottom left of the screen and click on Settings. In the screen that opens, select Alerts on the top bar and then head to the Preferences tab on the sidebar. Select the check boxes for the alerts you want to receive, under the channel you want to receive them on. Once you are done with the selection, click on Save Preference. You will now receive alerts only according to the set preferences.
https://docs.hevodata.com/hc/en-us/articles/360008259754-How-to-customise-your-Alert-Preferences
2019-07-15T21:17:22
Opening a Device

Before using a device, you must initialize it by using the open (MCI_OPEN) command. This command loads the driver into memory (if it isn't already loaded) and retrieves the device identifier you will use to identify the device in subsequent MCI commands. You should check the return value of the mciSendString or mciSendCommand function before using a new device identifier to ensure that the identifier is valid. (You can also retrieve a device identifier by using the mciGetDeviceID function.)

Like all MCI command messages, MCI_OPEN has an associated structure. These structures are sometimes called parameter blocks. The default structure for MCI_OPEN is MCI_OPEN_PARMS. Certain devices (such as waveform and overlay) have extended structures (such as MCI_WAVE_OPEN_PARMS and MCI_OVLY_OPEN_PARMS) to accommodate additional optional parameters. Unless you need to use these additional parameters, you can use the MCI_OPEN_PARMS structure with any MCI device. The number of devices you can have open is limited only by the amount of available memory.

Using an Alias

When you open a device, you can use the "alias" flag to specify a device identifier for the device. This flag lets you assign a short device identifier for compound devices with lengthy filenames, and it lets you open multiple instances of the same file or device. For example, the following command assigns the device identifier "birdcall" to the lengthy filename C:\NABIRDS\SOUNDS\MOCKMTNG.WAV:

mciSendString( "open c:\nabirds\sounds\mockmtng.wav type waveaudio alias birdcall", lpszReturnString, lstrlen(lpszReturnString), NULL);

In the command-message interface, you specify an alias by using the lpstrAlias member of the MCI_OPEN_PARMS structure.

Specifying a Device Type

When you open a device, you can use the "type" flag to refer to a device type, rather than to a specific device driver.
The following example opens the waveform-audio file C:\WINDOWS\CHIMES.WAV (using the "type" flag to specify the waveaudio device type) and assigns the alias "chimes": mciSendString( "open c:\windows\chimes.wav type waveaudio alias chimes", lpszReturnString, lstrlen(lpszReturnString), NULL); In the command-message interface, the functionality of the "type" flag is supplied by the lpstrDeviceType member of the MCI_OPEN_PARMS structure. Simple and Compound Devices MCI classifies device drivers as compound or simple. Drivers for compound devices require the name of a data file for playback; drivers for simple devices do not. Simple devices include cdaudio and videodisc devices. There are two ways to open simple devices: Specify a pointer to a null-terminated string containing the device name from the registry or the SYSTEM.INI file. For example, you can open a videodisc device by using the following command: mciSendString("open videodisc", lpszReturnString, lstrlen(lpszReturnString), NULL); In this case, "videodisc" is the device name from the registry or the [mci] section of SYSTEM.INI. - Specify the actual name of the device driver. Opening a device using the device-driver filename, however, makes the application device-specific and can prevent the application from running if the system configuration changes. If you use a filename, you do not need to specify the complete path or the filename extension; MCI assumes drivers are located in a system directory and have the .DRV filename extension. Compound devices include waveaudio and sequencer devices. The data for a compound device is sometimes called a device element. This document, however, generally refers to this data as a file, even though in some cases the data might not be stored as a file. There are three ways to open a compound device: - Specify only the device name. This lets you open a compound device without associating a filename. 
Most compound devices process only the capability (MCI_GETDEVCAPS) and close (MCI_CLOSE) commands when they are opened this way. - Specify only the filename. The device name is determined from the associations in the registry. - Specify the filename and the device name. MCI ignores the entries in the registry and opens the specified device name. To associate a data file with a particular device, you can specify the filename and device name. For example, the following command opens the waveaudio device with the filename MYVOICE.SND: mciSendString("open myvoice.snd type waveaudio", lpszReturnString, lstrlen(lpszReturnString), NULL); In the command-string interface, you can also abbreviate the device name specification by using the alternative exclamation-point format, as documented with the open command. Opening a Device Using the Filename Extension If the open (MCI_OPEN) command specifies only the filename, MCI uses the filename extension to select the appropriate device from the list in the registry or the [mci extensions] section of the SYSTEM.INI file. The entries in the [mci extensions] section use the following form: filename_extension = device_name MCI implicitly uses device_name if the extension is found and if a device name has not been specified in the open command. The following example shows a typical [mci extensions] section: [mci extensions] wav=waveaudio mid=sequencer rmi=sequencer Using these definitions, MCI opens the waveaudio device if the following command is issued: mciSendString("open train.wav", lpszReturnString, lstrlen(lpszReturnString), NULL); New Data Files To create a new data file, simply specify a blank filename. MCI does not save a new file until you save it by using the save (MCI_SAVE) command. When creating a new file, you must include a device alias with the open (MCI_OPEN) command. 
The following example opens a new waveaudio file, starts and stops recording, then saves and closes the file: mciSendString("open new type waveaudio alias capture", lpszReturnString, lstrlen(lpszReturnString), NULL); mciSendString("record capture", lpszReturnString, lstrlen(lpszReturnString), NULL); mciSendString("stop capture", lpszReturnString, lstrlen(lpszReturnString), NULL); mciSendString("save capture orca.wav", lpszReturnString, lstrlen(lpszReturnString), NULL); mciSendString("close capture", lpszReturnString, lstrlen(lpszReturnString), NULL); Sharable Devices The "sharable" (MCI_OPEN_SHAREABLE) flag of the open (MCI_OPEN) command lets multiple applications access the same device (or file) and device instance simultaneously. If your application opens a device or file as sharable, other applications can also access it by opening it as sharable. The shared device or file gives each application the ability to change the parameters governing its operating state. Each time a device or file is opened as sharable, MCI returns a unique device identifier, even though the identifiers refer to the same instance. If your application opens a device or file without specifying that it is sharable, no other application can access it until your application closes it. Also, if a device supports only one open instance, the open command will fail if you specify the sharable flag. If your application opens a device and specifies that it is sharable, your application should not make any assumptions about the state of this device. Your application might need to compensate for changes made by other applications accessing the device. Most compound files are not sharable; however, you can open multiple files, or you can open a single file multiple times. If you open a single file multiple times, MCI creates an independent instance for each, with each instance having a unique operating status. 
If you open multiple instances of a file, you must assign a unique device identifier to each. You can use an alias, as described in the following section, to assign a unique name for each file.
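For C programs using the command-string interface, the open/type/alias pattern shown above can be wrapped in a small helper that composes the command text. This is a sketch, not part of the MCI API: the helper name is hypothetical, and only the string building is shown, since the actual mciSendString call requires Windows.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper (not an MCI function): composes the "open" command
 * string with a device type and alias, exactly as it would be passed to
 * mciSendString(). */
static int build_open_command(char *buf, size_t size, const char *path,
                              const char *type, const char *alias)
{
    return snprintf(buf, size, "open %s type %s alias %s", path, type, alias);
}

/* Usage on Windows would then be:
 *   char cmd[256], ret[128];
 *   build_open_command(cmd, sizeof cmd,
 *                      "c:\\windows\\chimes.wav", "waveaudio", "chimes");
 *   mciSendString(cmd, ret, sizeof ret, NULL);
 */
```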
https://docs.microsoft.com/en-us/windows/win32/multimedia/opening-a-device
2019-07-15T21:40:51
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
Wait until JMESPath query BundleTasks[].State returns complete for all elements when polling with describe-bundle-tasks. It will poll every 15 seconds until a successful state has been reached. This will exit with a return code of 255 after 40 failed checks. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. bundle-task-complete [--bundle-ids <value>] [--filters <value>] [--dry-run | --no-dry-run] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --bundle-ids (list) One or more bundle task IDs. Default: Describes all your bundle tasks. Syntax: "string" "string" ... --filters (list) One or more filters.
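A typical invocation looks like the following sketch; the bundle task ID is a placeholder, and in practice you would use the ID returned by `aws ec2 bundle-instance`:

```shell
# Hypothetical bundle task ID -- substitute the one returned by
# `aws ec2 bundle-instance`.
BUNDLE_ID="bun-12345678"

# Blocks, polling describe-bundle-tasks every 15 seconds (up to 40 checks),
# until BundleTasks[].State is "complete"; exits 255 if it never completes.
aws ec2 wait bundle-task-complete --bundle-ids "$BUNDLE_ID"
```

Because the command exits non-zero on failure, it can be chained with `&&` in scripts that must not proceed until bundling finishes.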
https://docs.aws.amazon.com/cli/latest/reference/ec2/wait/bundle-task-complete.html
2019-02-15T23:42:10
CC-MAIN-2019-09
1550247479627.17
[]
docs.aws.amazon.com
Caching We cache all your requests so that you don't have to pay for the same request again. What counts as a unique screenshot? A unique screenshot is any combination of url and parameters that you have not requested before. How long do we cache? The maximum TTL for any request is 7 days, so after 7 days your cached screenshot / PDF will be deleted.
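Conceptually, the cache key is the full combination of URL and parameters: any change to either produces a new "unique screenshot". The sketch below illustrates that idea; it is not Capture's actual implementation, and the function name is hypothetical:

```python
import hashlib
import json

def cache_key(url: str, params: dict) -> str:
    """Identical url + params -> identical key -> the cached result is reused.

    Any change to the URL or to any parameter yields a different key,
    i.e. a new (billable) screenshot request.
    """
    payload = json.dumps({"url": url, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```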
https://docs.capture.techulus.in/docs/caching
2019-02-16T00:15:04
CC-MAIN-2019-09
1550247479627.17
[]
docs.capture.techulus.in
Revision history of "JDocumentHTML::parse/11.1" Moved page JDocumentHTML::parse/11.1 to API17:JDocumentHTML::parse without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JDocumentHTML::parse/11.1&action=history
2015-10-04T13:16:33
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Difference between revisions of "Vulnerable Extensions List" From Joomla! Documentation Revision as of 13:56, 16 April aiContactSafe 2.0.19 - 6 RSfiles - 7 Multiple Customfields Filter for Virtuemart - 8 Collector - 9 tz guestbook - 10 extplorer - 11 JooProperty - 12 Multiple Customfields Filter for Virtuemart - 13 ag google analytic - 14 sh404sef <3.7.0 - 15 Login Failed Log - 16 jNews - 17 Joombah Jobs - 18 commedia - 19 Kunena - 20 Icagenda - 21 JTag [joomlatag] - 22 Freestyle Support - 23 ACEFTP - 24 MijoFTP - 25 spider calendar lite - 26 RokModule - 27 ICagenda - 28 En Masse cart - 29 JCE (joomla content editor) - 30 RSGallery2 - 31 osproperty - 32 KSAdvertiser - 33 Shipping by State for Virtuemart - 34 ownbiblio 1.5.3 - 35 Ninjaxplorer <=1.0.6 - 36 Phoca Fav Icon - 37 estateagent improved - 38 bearleague - 39 JLive! Chat v4.3.1 - 40 virtuemart 2.0.2 - 41 JE testimonial - 42 JaggyBlog - 43 Quickl Form - 44 com_advert - 45 Joomla Discussions Component - 46 HD Video Share (contushdvideoshare) - 47 Simple File Upload 1.3 - 48 - 49 January 2011 - Jan 2012 Reported Vulnerable Extensions - 50 Simple File Upload 1.3 - 51 Dshop - 52 QContacts 1.0.6 - 53 Jobprofile 1.0 - 54 JX Finder 2.0.1 - 55 wdbanners - 56 JB Captify Content J1.5 and J1.7 - 57 JB Microblog - 58 JB Slideshow <3.5.1, - 59 JB Bamboobox - 60 RokModule - 61 hm community - 62 Alameda - 63 Techfolio 1.0 - 64 Barter Sites 1.3 - 65 Jeema SMS 3.2 - 66 Vik Real Estate 1.0 - 67 yj contact - 68 NoNumber Framework - 69 Time Returns - 70 Simple File Upload - 71 Jumi - 72 Joomla content editor - 73 Google Website Optimizer - 74 Almond Classifieds - 75 joomtouch - 76 RAXO All-mode PRO - 77 V-portfolio - 78 obSuggest - 79 Simple Page - 80 JE Story - 81 appointment booking pro - 82 acajoom - 83 gTranslate - 84 alpharegistration - 85 Jforce - 86 Flash Magazine Deluxe Joomla - 87 AVreloaded - 88 Sobi - 89 fabrik - 90 xmap - 91 Atomic Gallery - 92 myApi - 93 mdigg - 94 Calc Builder - 95 Cool Debate - 96 - 97 
Scriptegrator Plugin 1.5.5 - 98 Joomnik Gallery - 99 JMS fileseller - 100 sh404SEF - 101 JE Story submit - 102 FCKeditor - 103 KeyCaptcha - 104 Ask A Question AddOn v1.1 - 105 Global Flash Gallery - 106 com_google - 107 docman - 108 Newsletter Subscriber - 109 Akeeba - 110 Facebook Graph Connect - 111 booklibrary - 112 semantic - 113 JOMSOCIAL 2.0.x 2.1.x - 114 flexicontent - 115 jLabs Google Analytics Counter - 116 xcloner - 117 smartformer - 118 xmap 1.2.10 - 119 Frontend-User-Access 3.4.1 - 120 com properties 7134 - 121 B2 Portfolio - 122 allcinevid - 123 People Component - 124 Jimtawl - 125 Maian Media SILVER - 126 alfurqan - 127 ccboard - 128 ProDesk v 1.5 - 129 sponsorwall - 130 Flip wall - 131 Freestyle FAQ 1.5.6 - 132 iJoomla Magazine 3.0.1 - 133 Clantools - 134 jphone - 135 PicSell - 136 Zoom Portfolio - 137 zina - 138 Team's - 139 Amblog - 140 - 141 - 142 wmtpic - 143 Jomtube - 144 Rapid Recipe - 145 Health & Fitness Stats - 146 staticxt - 147 quickfaq - 148 Minify4Joomla - 149 IXXO Cart - 150 PaymentsPlus - 151 ArtForms - 152 autartimonial - 153 eventcal 1.6.4 - 154 date converter - 155 real estate - 156 cinema - 157 Jreservation - 158 joomdocs - 159 Live Chat - 160 Turtushout 0.11 - 161 BF Survey Pro Free - 162 MisterEstate - 163 RSMonials - 164 Answers v2.3beta - 165 Gallery XML 1.1 - 166 JFaq 1.2 - 167 Listbingo 1.3 - 168 Alpha User Points - 169 recruitmentmanager - 170 Info Line (MT_ILine) - 171 Ads manager Annonce - 172 lead article - 173 djartgallery - 174 Gallery 2 Bridge - 175 jsjobs - 176 - 177 JE Poll - 178 MediQnA - 179 JE Job - 180 - 181 SectionEx - 182 ActiveHelper LiveHelp - 183 JE Quotation Form - 184 konsultasi - 185 Seber Cart - 186 Camp26 Visitor - 187 JE Property - 188 Noticeboard - 189 SmartSite - 190 htmlcoderhelper graphics - 191 Ultimate Portfolio - 192 Archery Scores - 193 ZiMB Manager - 194 Matamko - 195 Multiple Root - 196 Multiple Map - 197 Contact Us Draw Root Map - 198 iF surfALERT - 199 GBU FACEBOOK - 200 jnewspaper - 201 - 
202 MT Fire Eagle - 203 Sweetykeeper - 204 jvehicles - 205 worldrates - 206 cvmaker - 207 advertising - 208 horoscope - 209 webtv - 210 diary - 211 Memory Book - 212 JprojectMan - 213 econtentsite - 214 Jvehicles - 215 - 216 gigcalender - 217 heza content - 218 SqlReport - 219 Yelp - 220 - 221 Codes used - 222 Future Actions & WIP - 223
https://docs.joomla.org/index.php?title=Vulnerable_Extensions_List&curid=4104&diff=84521&oldid=84018
2015-10-04T13:37:39
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
CURRENT_TIMESTAMP โ€” Returns the current time as a timestamp value. CURRENT_TIMESTAMP The CURRENT_TIMESTAMP function returns the current time as a VoltDB timestamp. The value of the timestamp is determined when the query or stored procedure is invoked. Several important aspects of how the CURRENT_TIMESTAMP function operates are: The value returned is guaranteed to be identical for all partitions that execute the query. The value returned is measured in milliseconds then padded to create a timestamp value in microseconds. During command logging, the returned value is stored as part of the log, so when the command log is replayed, the same value is used during the replay of the query. Similarly, for database replication (DR) the value returned is passed and reused by the replica database when replaying the query. You can specify CURRENT_TIMESTAMP as a default value in the CREATE TABLE statement when defining the schema of a VoltDB database. The CURRENT_TIMESTAMP function cannot be used in the CREATE INDEX or CREATE VIEW statements. The NOW and CURRENT_TIMESTAMP functions are synonyms and perform an identical function.
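For example, CURRENT_TIMESTAMP can be set as a column default and selected directly; the table and column names below are illustrative only:

```sql
-- Hypothetical schema: record the insertion time automatically.
CREATE TABLE events (
    event_id   BIGINT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (event_id)
);

-- All partitions executing this query see the identical timestamp value.
SELECT CURRENT_TIMESTAMP FROM events;
```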
https://docs.voltdb.com/UsingVoltDB/sqlfunccurrenttimestamp.php
2015-10-04T12:42:38
CC-MAIN-2015-40
1443736673632.3
[]
docs.voltdb.com
Revision history of "JSessionStorageWincache::gc/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 03:03, 7 May 2013 Wilsonge (Talk | contribs) deleted page JSessionStorageWincache::gc/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JSessionStorageWincache::gc== ===Description=== Garbage collect stale sessions from the SessionHandler backend. {{Description:JSessionS..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JSessionStorageWincache::gc/1.6&action=history
2015-10-04T13:51:38
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
>>> from scipy.stats import triang
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> numargs = triang.numargs
>>> [ c ] = [0.9,] * numargs
>>> rv = triang(c)
>>> x = np.linspace(triang.ppf(0.01, c), triang.ppf(0.99, c), 100)
>>> prb = triang.cdf(x, c)
>>> h = plt.semilogy(np.abs(x - triang.ppf(prb, c)) + 1e-20)
Random number generation
>>> R = triang.rvs(c, size=100)
Methods
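A self-contained, plot-free version of the fragment above (the grid `x` and the shape value are filled in for illustration):

```python
import numpy as np
from scipy.stats import triang

c = 0.9                      # shape parameter: mode of the triangular dist on [0, 1]
rv = triang(c)               # frozen distribution

x = np.linspace(0.0, 1.0, 11)
prb = triang.cdf(x, c)       # cumulative probabilities
err = np.max(np.abs(x - triang.ppf(prb, c)))  # cdf/ppf round-trip error, near 0

# Random number generation
R = triang.rvs(c, size=100)
```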
http://docs.scipy.org/doc/scipy-0.11.0/reference/generated/scipy.stats.triang.html
2015-10-04T12:51:08
CC-MAIN-2015-40
1443736673632.3
[]
docs.scipy.org
JDatabaseQuerySQLAzure::innerJoin From Joomla! Documentation Description public function innerJoin ($conditions) - Returns this object to allow chaining. - Defined on line 156 of libraries/joomla/database/database/sqlazurequery.php See also JDatabaseQuerySQLAzure::innerJoin source code on BitBucket Class JDatabaseQuerySQLAzure Subpackage Database - Other versions of JDatabaseQuerySQLAzure::innerJoin User contributed notes
https://docs.joomla.org/index.php?title=API17:JDatabaseQuerySQLAzure::innerJoin&direction=next&oldid=56471
2015-10-04T13:31:54
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Difference between revisions of "Framework" From Joomla! Documentation Redirect page Revision as of 08:21, 15 June 2013 (view source)Tom Hutchison (Talk | contribs)m (merge or move(redirect), duplicate content)โ† Older edit Latest revision as of 13:05, 29 September 2013 (view source) Tom Hutchison (Talk | contribs) m (redirect) Line 1: Line 1: โˆ’{{merge|Framework:Home}}+#REDIRECT [[Framework:Home]] โˆ’{{move|redirect to [[Framework:Home]] or new page [[Framework:About]], although it really is a duplicate of the [[Framework:Home]] page.}}+ โˆ’{{Chunk:Framework}}+ โˆ’The home of the Joomla! Framework is [ the joomla/joomla-framework GitHub repository] that contains the source, documentation and means to contribute (pull requests).+ โˆ’ + โˆ’API documentation for the Framework can be found on โˆ’ + โˆ’Framework development can be discussed on [ the joomla-dev-framework Google Group].+ โˆ’ + โˆ’== References ==+ โˆ’<references/>+ Latest revision as of 13:05, 29 September 2013 Framework:Home Retrieved from โ€˜โ€™
https://docs.joomla.org/index.php?title=Framework&diff=cur&oldid=100359
2015-10-04T12:46:42
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Revision history of "JDocument::setDescription/11.1" Moved page JDocument::setDescription/11.1 to API17:JDocument::setDescription without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JDocument::setDescription/11.1&action=history
2015-10-04T13:00:07
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 15:48, 7 January 2013 Srihari.sahu (Talk | contribs) marked revision 79785 of page Joomla course for beginning developers patrolled - 10:35, 7 January 2013 User account Srihari.sahu (Talk | contribs) was created
https://docs.joomla.org/index.php?title=Special:Log&user=Srihari.sahu
2015-10-04T13:32:21
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Main talk file other From Joomla! Documentation other This template is used inside other templates that need to behave differently (usually look differently) depending on what type of page they are on. It detects and groups all the different namespaces used on Joomla! Documentation into four types: - main = Main/article space, as in normal Wikipedia articles. - talk = Any talk space, such as page names that start with "Talk:", ":", "Framework talk:" and so on. - file = Image/media space, such as viewing an uploaded file - other = All other spaces, such as page names that start with "User:", ":", "Framework:" and so on. Functions/Features If this template is used without any parameters it returns the type name that the page belongs to: main, talk, file or other. This template can also take four parameters and then returns one of them depending on which type a page belongs to. Usage: - Other pages text Parameters See also For a more specific alternative see {{thingamabob}} main talk other is based on: Wikipedia:Main talk other (edit|talk|history|links|watch|logs)
https://docs.joomla.org/index.php?title=Template:Main_talk_file_other&oldid=6356
2015-10-04T12:58:53
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Joomla! Doc Sprints From Joomla! Documentation Revision as of 17:19, 5 North America Vancouver, Canada 9AM to 6PM PST (GMT -8) on 19 January 2008. There is a full list of local wiki templates and extensions that can be used in your wiki pages. Please read the Joomla! Editorial Style Guide
https://docs.joomla.org/index.php?title=JDOC:Joomla!_Doc_Camp&oldid=107
2015-10-04T13:50:37
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Information for "Alias/tr" Basic information Display titleTakma ad Default sort keyAlias/tr Page length (in bytes)308 Page ID34544 Page content languageTurkish (tr)nes (Talk | contribs) Date of page creation08:55, 31 May 2014 Latest editorEnes (Talk | contribs) Date of latest edit08:58, 31 May 2014 Total number of edits4 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Chunk:Alias/tr (view source) Retrieved from โ€˜โ€™
https://docs.joomla.org/index.php?title=Alias/tr&action=info
2015-10-04T13:43:36
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Information for "Joomla info page/EN" Basic information Display titleJoomla info page/EN Default sort keyJoomla info page/EN Page length (in bytes)3,055 Page ID104Pe7er (Talk | contribs) Date of page creation18:45, 5 July 2010 Latest editorIsidrobaq (Talk | contribs) Date of latest edit10:39, 3 April 2013 Total number of edits11 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:RightTOC (view source) Retrieved from โ€˜โ€™
https://docs.joomla.org/index.php?title=Joomla_info_page/EN&action=info
2015-10-04T13:49:24
CC-MAIN-2015-40
1443736673632.3
[]
docs.joomla.org
Search and indexing Orchard provides the ability to index and search content items in the application. The indexing functionality is provided by enabling the Indexing feature, along with a specific implementation of indexing (Lucene-based is included by default). In addition to Indexing, the Search feature provides the ability to query the index (by keyword or using Lucene query syntax) to return a list of content items matching the query on the front end. You must enable all of the following modules: Search, Indexing, and Lucene. Because search depends on indexing, enabling search will automatically enable indexing as well. Note that you must also enable Lucene before search and indexing will work. When the indexing feature is enabled, a new Search and Indexes item becomes available under the Settings section of the dashboard. The indexer runs as a background task, once per minute by default, and you can optionally update or rebuild the index from this screen. The Indexes screen also displays the number of documents (content items) indexed, and the Search screen displays the indexed fields. After enabling the Search feature, go to the Content Definition section, click any content type that you want to index (for example, the Page content type), and then select the check box for the available index. When the search feature is enabled, the Settings screen in the dashboard displays the fields that will be queried from the index (listed on the Search screen). At this point the front end of the site does not yet have a search UI. To add it, you need to add a widget. Click Widgets in the admin menu. With the default layer selected, click Add to zone next to SearchForm in the list of available widgets. Keep "Header" selected as the zone and "Default" as the layer so that your search widget appears on top of all pages (the default layer applies to all pages in the site). Give it a title such as "Search" and click the Save button.
For more information about widgets, see Managing widgets. If you navigate now to any page in the front end of the site, you will see the search form. When you type a keyword or query into this input box, a list of matching content items is displayed. Change History - Updates for Orchard 1.8 - 9-05-14: Updated all screen shots for Search, Indexing and Lucene
http://docs.orchardproject.net/Documentation/Search-and-indexing
2015-10-04T12:42:03
CC-MAIN-2015-40
1443736673632.3
[array(['/Upload/screenshots_675/enable_lucene.png', None], dtype=object) array(['/Upload/screenshots_675/search2.png', None], dtype=object) array(['/Upload/screenshots_675/indexnsearch.png', None], dtype=object) array(['/Upload/screenshots_675/indexcreated.png', None], dtype=object) array(['/Upload/screenshots_675/indexupdated.png', None], dtype=object) array(['/Upload/screenshots_675/indexcontenttype.png', None], dtype=object) array(['/Upload/screenshots_675/searchfield.png', None], dtype=object) array(['/Upload/screenshots_675/searchformwidget.png', None], dtype=object) array(['/Upload/screenshots_675/searchwidgetfrontend.png', None], dtype=object) ]
docs.orchardproject.net
Suppressions - Suppressions Suppressions are recipient email addresses that are added to unsubscribe groups. Once a recipient's address is on the suppressions list for an unsubscribe group, they will not receive any emails that are tagged with that unsubscribe group. Delete a suppression from a suppression group DELETE /v3/asm/groups/{group_id}/suppressions/{email} Base url: This endpoint allows you to remove a suppressed email address from the given suppression group. group_id: The ID of the suppression group that you are removing an email address from. email: The email address that you want to remove from the suppression group.
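The endpoint can be called from any HTTP client. The sketch below only builds the request URL and headers without sending anything; the group ID, email address, and API key are placeholders:

```python
# Placeholders -- substitute a real group ID, address, and API key.
group_id = 123
email = "recipient@example.com"
api_key = "YOUR_SENDGRID_API_KEY"

# Path parameters are interpolated directly into the endpoint path.
url = f"https://api.sendgrid.com/v3/asm/groups/{group_id}/suppressions/{email}"
headers = {"Authorization": f"Bearer {api_key}"}

# With the third-party `requests` library, the call would be:
# requests.delete(url, headers=headers)
```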
https://docs.sendgrid.com/api-reference/suppressions-suppressions/delete-a-suppression-from-a-suppression-group?utm_source=docs&utm_medium=social&utm_campaign=guides_tags
2022-08-08T07:07:53
CC-MAIN-2022-33
1659882570767.11
[]
docs.sendgrid.com
Introduction Many stablecoin protocols have entirely embraced one spectrum of design (entirely collateralized) or the other extreme (entirely algorithmic with no backing). Collateralized stablecoins either have custodial risk or require on-chain over-collateralization. These designs provide a stablecoin with a fairly tight peg with higher confidence than purely algorithmic designs. Purely algorithmic designs such as Basis, Empty Set Dollar, and Seigniorage Shares provide a highly trustless and scalable model that captures the early Bitcoin vision of decentralized money but with useful stability. The issue with algorithmic designs is that they are difficult to bootstrap, slow to grow (as of Q4 2020 none have significant traction), and exhibit extreme periods of volatility which erodes confidence in their usefulness as actual stablecoins. They are mainly seen as a game/experiment rather than a serious alternative to collateralized stablecoins. XUSD attempts to be a stablecoin protocol that implements design principles of both to create a highly scalable, trustless, extremely stable, and ideologically pure on-chain money. The XUSD protocol is a two-token system encompassing a stablecoin, XUSD (XUSD), and a governance token, XUSD Shares (XUS). The protocol also has pool contracts that hold collateral (at genesis WETH and DAI). Pools can be added or removed with governance. Although there are no predetermined timeframes for how quickly the collateral ratio changes, we believe that as XUSD adoption increases, users will be more comfortable with a higher percentage of XUSD supply being stabilized algorithmically rather than with collateral.
The collateral ratio refresh function in the protocol can be called by any user once per hour. The function can change the collateral ratio in steps of .25% if the price of XUSD is above or below $1. When XUSD is above $1, the function lowers the collateral ratio by one step, and when the price of XUSD is below $1, the function increases the collateral ratio by one step. Both the refresh rate and step parameters can be adjusted through governance. In a future update of the protocol, they could even be adjusted dynamically using a PID controller design. The prices of XUSD, XUS, and collateral are all calculated with a time-weighted average of the Uniswap pair price and the ETH:USD Chainlink oracle. The Chainlink oracle allows the protocol to get the true price of USD instead of an average of stablecoin pools on Uniswap. This allows XUSD to stay stable against the dollar itself, which provides greater resiliency, instead of using a weighted average of existing stablecoins only. XUSD stablecoins can be minted by placing the appropriate amount of their constituent parts into the system. At genesis, XUSD is 100% collateralized, meaning that minting XUSD only requires placing collateral into the minting contract. During the fractional phase, minting XUSD requires placing the appropriate ratio of collateral and burning the appropriate ratio of XUSD Shares (XUS). While the protocol is designed to accept any type of cryptocurrency as collateral, this implementation of the XUSD Protocol will mainly accept on-chain stablecoins as collateral to smooth out volatility in the collateral, so that XUSD can transition to more algorithmic ratios smoothly.
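The refresh logic described above can be sketched as follows. This is an illustration of the step mechanism only, not the protocol's actual contract code; the function name, and the representation of the step size and peg as constants, are assumptions:

```python
STEP = 0.0025          # one refresh step: 0.25%
PRICE_TARGET = 1.00    # the USD peg

def refresh_collateral_ratio(ratio: float, xusd_price: float) -> float:
    """Move the collateral ratio by one step, bounded to [0, 1].

    Above the peg -> lower the ratio (more algorithmic);
    below the peg -> raise the ratio (more collateralized).
    """
    if xusd_price > PRICE_TARGET:
        ratio -= STEP
    elif xusd_price < PRICE_TARGET:
        ratio += STEP
    return min(max(ratio, 0.0), 1.0)
```

Because the function moves the ratio by at most one step per call and (per the text) can only be invoked once per hour, the collateral ratio can drift by at most 0.25% per hour under this sketch.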
https://docs.xusd.money/introduction
2022-08-08T07:51:54
CC-MAIN-2022-33
1659882570767.11
[]
docs.xusd.money
Source code for forte.data.readers.ms_marco_passage_reader

"""
Reader to read passages from the `MS MARCO` dataset, pertaining to the
Passage Ranking task. Uses the document text for indexing.

Official webpage -
Dataset download link -
Dataset Paper - Nguyen, Tri, et al. "MS MARCO: A Human-Generated MAchine
Reading COmprehension Dataset." (2016).
"""
import os
from typing import Iterator, Tuple

from forte.data.data_pack import DataPack
from forte.data.base_reader import PackReader
from ft.onto.base_ontology import Document

__all__ = ["MSMarcoPassageReader"]


class MSMarcoPassageReader(PackReader):
    def _collect(self, *args, **kwargs) -> Iterator[Tuple[str, str]]:
        # pylint: disable = unused-argument, undefined-variable
        dir_path: str = args[0]

        corpus_file_path = os.path.join(dir_path, "collection.tsv")
        with open(corpus_file_path, "r", encoding="utf-8") as file:
            for line in file:
                doc_id, doc_content = line.split("\t", 1)
                yield doc_id, doc_content

    def _parse_pack(self, doc_info: Tuple[str, str]) -> Iterator[DataPack]:
        r"""Takes the `doc_info` returned by the `_collect` method and
        returns a `data_pack` that either contains an entry of the type
        `Query`, or contains an entry of the type `Document`.

        Args:
            doc_info: document info to be populated in the data_pack.

        Returns:
            query or document data_pack.
        """
        data_pack: DataPack = DataPack()

        doc_id, doc_text = doc_info
        data_pack.pack_name = doc_id
        data_pack.set_text(doc_text)

        # add documents
        Document(data_pack, 0, len(doc_text))

        yield data_pack

    def _cache_key_function(self, data_pack: DataPack) -> str:
        if data_pack.pack_name is None:
            raise ValueError("Data pack does not have a document id.")
        return data_pack.pack_name
https://asyml-forte.readthedocs.io/en/latest/_modules/forte/data/readers/ms_marco_passage_reader.html
2022-08-08T06:51:39
CC-MAIN-2022-33
1659882570767.11
[]
asyml-forte.readthedocs.io
Record of Committee Proceedings Joint Committee on Finance Senate Bill 498 Relating to: grants to support peer-to-peer suicide prevention programs in high schools, granting rule-making authority, and making an appropriation. By Senators Darling, Carpenter, Olsen, Nass, Schachtner, L. Taylor, Ringhand and Bernier; cosponsored by Representatives Duchow, Doyle, Stafsholt, Bowen, Schraa, Felzkowski, Jagler, James, Edming, Petersen, Oldenburg, Ramthun, Tranel, Tittl, Mursau, Milroy, Kurtz, Rohrkaste, VanderMeer, Sanfelippo, Shankland, Loudenbeck, Magnafici, Zimmerman, Kulp, Petryk, Spiros, Cabrera, Anderson, Kitchens, Riemer, Vining, Rodriguez, Ballweg, C. Taylor, Neubauer and Dittrich. January 15, 2020 Referred to Joint Committee on Finance March 26, 2020 Failed to pass pursuant to Senate Joint Resolution 1 ______________________________ Joe Malkasian Committee Clerk
https://docs.legis.wisconsin.gov/2019/related/records/joint/finance/1557561
2022-08-08T07:09:54
CC-MAIN-2022-33
1659882570767.11
[]
docs.legis.wisconsin.gov
Changelog for package nao_path_follower

0.3.1 (2015-08-11)
- Fixing oscillations in path follower when next target is behind robot
- Add missing parameter for damping yaw velocity
- Small cleanup of nao_path_follower, fix potential problems with std::distance and angular distance (+-)
- Merge pull request #3 from sboettcher/groovy-devel
- path_follower now recognizes a straight line in a path and offsets the walk target further ahead
- Removing unneeded workaround for NaoQI API < 1.12
- Cleaning up unused code
- Cleaning up nao_path_follower, more parameters
- Fixing nao_path_follower for paths with a single pose
- Small fix in nao_path_follower package.xml
- Cleaning up nao_path_follower output
- nao_remote renamed to nao_path_follower, now only contains the path follower node
- Contributors: Armin Hornung

0.1.0 (2013-07-30)
http://docs.ros.org/en/hydro/changelogs/nao_path_follower/changelog.html
1.2 ๅŠ ๅฏ†่ดงๅธ็ฎ€ไป‹ Intro to Cryptocurrencies ๅฏไปฅๅฐ†ๅŠ ๅฏ†่ดงๅธ็ณป็ปŸ็†่งฃไธบไธๅ—ไปปไฝ•ๅ•ไธ€ๅฎžไฝ“๏ผˆไพ‹ๅฆ‚้“ถ่กŒใ€ๅ…ฌๅธๆˆ–ๆ”ฟๅบœ๏ผ‰ๆŽงๅˆถ็š„ๆ”ฏไป˜ใ€้‡‘่žๅŸบ็ก€่ฎพๆ–ฝใ€‚ๅœจๅผ•ๅ…ฅๅŠ ๅฏ†่ดงๅธไน‹ๅ‰๏ผŒไธ€็›ดๆœ‰ไธ€ไธช่ฟ่ฅๅ•†ๅฏไปฅๆŽงๅˆถไบคๆ˜“ๆ‰€ๅŒ…ๅซ็š„ไธ€ๅˆ‡ๅ’Œ่ดงๅธๆ”ฟ็ญ–ใ€‚่ฟ่ฅๅ•†ไปฃ่กจ็€ๆƒๅˆฉๅ’Œๅคฑๅˆฉ็š„ไธญๅฟƒ็‚นใ€‚ ้š็€ 2009 ๅนด 1 ๆœˆ 3 ๆ—ฅๆฏ”็‰นๅธ็š„ๆŽจๅ‡บ๏ผŒ้‡‘่žไธ–็•Œๅ‘็”Ÿไบ†ๆ นๆœฌๆ€ง็š„ๅ˜ๅŒ–ใ€‚ๅœจ้šๅŽ็š„ๅ‡ ๅนดไธญ๏ผŒไบบไปฌๅˆ›ๅปบไบ†่ฎธๅคšๅ…ถไป–ๅŠ ๅฏ†่ดงๅธๆฅ่งฃๅ†ณไผ ็ปŸ้‡‘่ž้ข†ๅŸŸ็š„ๅ„็ง้—ฎ้ข˜ใ€‚ ๅŠ ๅฏ†่ดงๅธไฝฟ็”จๅทงๅฆ™็š„ๅฏ†็ ๅญฆใ€ๆ•ฐๅญฆๅ’Œ่ดงๅธๆฟ€ๅŠฑๅˆ›ๅปบไธ€ไธช็ณป็ปŸ๏ผŒๅœจ่ฏฅ็ณป็ปŸไธญ๏ผŒ่ขซ็งฐไธบๅ†œๆฐ‘ๆˆ–็Ÿฟๅทฅ็š„ไบบๅฏไปฅ้€š่ฟ‡่Žทๅพ—ๆŠฅ้…ฌๆฅ่ฟ่กŒ่ฏฅ็ณป็ปŸ๏ผŒๅนถไธ”ๆฒกๆœ‰ๅฏไปฅ่ขซๆถๆ„่กŒไธบ่€…ๅ–ๆถˆ็š„ไธญๅคฎๆŽงๅˆถ็‚นใ€‚ ่ฟ™ๅธฆๆฅไบ†่ฎธๅคšๅฅฝๅค„๏ผŒๅ…ถไธญไธ€ไบ›ๆ˜ฏ๏ผš - ๆ— ๅŠ ๅ…ฅ่ฆๆฑ‚๏ผšไปปไฝ•ๆœ‰ไบ’่”็ฝ‘่ฟžๆŽฅ็š„ไบบ้ƒฝๅฏไปฅๅ‚ไธŽๆ–ฐ็š„ๅŠ ๅฏ†็ปๆตŽ๏ผŒๆ— ่ฎบๅ›ฝ็ฑใ€่ดขๅฏŒ็Šถๅ†ตใ€ๅฎ—ๆ•™ไฟกไปฐ็ญ‰ใ€‚ - ๆŠ—ๅฎกๆŸฅ๏ผšๅพˆ้šพๆˆ–ๅฎŒๅ…จไธๅฏ่ƒฝๅฎกๆŸฅใ€‚ไปปไฝ•ไบบ้ƒฝๅฏไปฅ้šๆ—ถ่ฟ›่กŒไบคๆ˜“ใ€ๅ‘้€ไปปไฝ•้‡‘้ขๆˆ–่ฟ่กŒไปปไฝ•็จ‹ๅบใ€‚ - ็‹ฌ็ซ‹็š„่ดงๅธๆ”ฟ็ญ–๏ผšๅฏไปฅๅˆ›ๅปบไธไพ่ต–ไบŽไปปไฝ•ไธ€ไธช็ป„็ป‡ๆˆ–ไธ€ไธชๅ›ฝๅฎถ็š„ๅ†ณ็ญ–๏ผŒ่€Œๆ˜ฏๅŸบไบŽ็ฎ—ๆณ•ๆˆ–ๅ…ทๆœ‰ๅ›บๅฎšไพ›ๅบ”้‡็š„ๆ–ฐ่ดงๅธใ€‚ - ไธๅฏ้˜ปๆŒก็š„ๅบ”็”จ็จ‹ๅบ๏ผšไธบๅฎ‰ๅ…จๅŒบๅ—้“พๅผ€ๅ‘ๅนถ่ฟ่กŒ็š„็จ‹ๅบๆฐธ่ฟœไธไผšๆ”นๅ˜ๆˆ–ๅœๆญขใ€‚่ฏฅ็จ‹ๅบๆœฌ่บซๅฏไปฅๆ‹ฅๆœ‰่ต„้‡‘ๅนถ่ฟ›่กŒ้‡‘่žไบคๆ˜“ใ€‚ไปฃ็ ๅฏไปฅ่‡ชไธป่ฟ่กŒ๏ผŒไธไพ่ต–ไบŽไบบๅทฅๆ“ไฝœใ€‚ไธ€ไบ›ๅŒบๅ—้“พๅบ”็”จๅŒ…ๆ‹ฌ๏ผšๅ…ถไป–่ต„ไบง็š„ไปฃๅธๅŒ–ใ€้žๅŒ่ดจๅŒ–ไปฃๅธ (NFT)ใ€่ดทๆฌพใ€ๆฑ‡ๆฌพใ€่บซไปฝ้’ฑๅŒ…็ญ‰ใ€‚ - ๅ…จ็ƒๆ ‡ๅ‡†๏ผš้€š่ฟ‡ๅŠ ๅฏ†๏ผŒไธๅŒๅ›ฝๅฎถๅ’ŒๅœฐๅŒบๅฏไปฅๆ นๆฎไธ€ไธชๆ˜Ž็กฎ่ฎฐๅฝ•ใ€ๅฎŒๅ…จๅผ€ๆบไธ”ๅ…่ดน็š„ๅ…ฑไบซๆ ‡ๅ‡†่ฟ›่กŒไบคไบ’ๅ’Œไบคๆ˜“ใ€‚ไธๅŒ็š„ๅ„ๆ–นๅฏไปฅ่š้›†ๅœจไธ€่ตทไฝฟ็”จไธ€ไธชไธญ็ซ‹็š„ๅนณๅฐ๏ผŒ่ฟ™้™ไฝŽไบ†ๅปบ็ซ‹ๅœจๅŠ ๅฏ†่ดงๅธไน‹ไธŠ็š„ไบบๅทฅๆˆๆœฌใ€‚ - 
ๅฎ‰ๅ…จ๏ผšๅฏนไปปไฝ•้‡‘่žๅŸบ็ก€่ฎพๆ–ฝ้ƒฝๅญ˜ๅœจๅคš็งๅฝขๅผ็š„ๆฝœๅœจๆ”ปๅ‡ป๏ผŒๅŒ…ๆ‹ฌ่™šๆ‹Ÿๆ”ปๅ‡ปๅ’Œ็‰ฉ็†ๆ”ปๅ‡ปใ€่ดฟ่ต‚ใ€็ฝ‘็ปœ้—ฎ้ข˜็ญ‰ใ€‚ๅ…ทๆœ‰ไธ€็™พไธ‡ไธช่Š‚็‚น็š„็ณป็ปŸๆฏ”ไธŠ่ฟฐๅ•็‚นๆ•…้šœๆ›ด้šพไปฅๆ”ปๅ‡ปใ€‚ ๅŽŸๆ–‡ๅ‚่€ƒ A cryptocurrency system can be thought of as a payments and financial infrastructure that is not controlled by any single entity, such as a bank, company, or government. Prior to the introduction of cryptocurrencies, there had always been an operator that had control of transaction inclusion and monetary policy. This operator represented a centralized point of both power and failure. The financial world was fundamentally changed with the introduction of Bitcoin on January 3, 2009. In the years that have followed, many other cryptocurrencies have been created to solve various problems in the legacy financial realm. Cryptocurrencies use clever cryptography, mathematics, and monetary incentives to create a system where people called farmers or miners get paid to run the system, and there is no central point of control that can be taken down by malicious actors. This brings many benefits, some of which are: - No requirements to participate: Anyone with an internet connection can participate in the new crypto economy, regardless of nationality, wealth status, religion, etc. - Censorship resistance: Censorship is difficult or impossible. Anyone is allowed to transact, and to send any amount or run any program at any time. - Independent monetary policy: New currencies can be created that do not depend on decisions made by one group or one country, and instead can be based on algorithms or have a fixed supply. - Unstoppable applications: A program developed for, and run on, a secure blockchain can never be changed or stopped. The program itself can own funds and perform financial transactions. Code can run autonomously, without depending on a human operator. 
Some blockchain applications include: tokenization of other assets, non-fungible tokens (NFTs), loans, remittances, identity wallets, etc. - Global standards: Through crypto, different countries and regions can interact and transact on one shared standard that is clearly documented, fully open source, and available for free. Different parties can come together to use a neutral platform, which reduces costs for those who build on top of the cryptocurrency. - Security: There are many forms of potential attacks on any financial infrastructure, including virtual and physical hacks, bribery, network issues, etc. A system with a million nodes is much more difficult to attack than the aforementioned single point of failure. #ๅŠ ๅฏ†่ดงๅธๅฆ‚ไฝ•่ฟไฝœ๏ผŸ ่ฆไบ†่งฃๅƒๆฏ”็‰นๅธๆˆ–ๅฅ‡ไบš่ฟ™ๆ ท็š„ๅŠ ๅฏ†่ดงๅธๅฆ‚ไฝ•ๅทฅไฝœ็š„ๅŸบ็ก€็Ÿฅ่ฏ†๏ผŒๆˆ‘ไปฌ้ฆ–ๅ…ˆ้œ€่ฆไบ†่งฃๅฆ‚ไฝ•ไปŽๅคดๅผ€ๅง‹่ฎพ่ฎกๅŠ ๅฏ†่ดงๅธใ€‚ๆœฌ่Š‚้’ˆๅฏนๅŒบๅ—้“พ่กŒไธš็š„ๆ–ฐๆ‰‹๏ผŒๅ…ถไป–ไบบๅฏไปฅ่ทณ่ฟ‡ๅฎƒใ€‚ ๆˆ‘ไปฌๅฏไปฅไพ้ ไธ€ไธชๅธฆๆœ‰ๅ…ฌๅ…ฑ API ็š„ไธญๅคฎๆœๅŠกๅ™จๆฅๅ‘้€ไบคๆ˜“๏ผˆ้œ€่ฆ็”จๆˆทๅๅ’Œๅฏ†็ ๏ผ‰๏ผŒไธ€ไธช็”จไบŽ่ฏปๅ–ๆ•ฐๆฎใ€‚็„ถ่€Œ๏ผŒ่ฟ™ไธๆ˜ฏๅŽปไธญๅฟƒๅŒ–็š„๏ผŒๅฎƒไธไผšๅธฆๆฅไธŠ่ฟฐๅคง้ƒจๅˆ†ๅฅฝๅค„๏ผŒไฝ†่ฟ™ๅดๆ˜ฏ่ฎธๅคš้‡‘่ž็ณป็ปŸๅœจๆฏ”็‰นๅธๅ‡บ็Žฐไน‹ๅ‰็š„่ฟไฝœๆ–นๅผใ€‚ ๆˆ‘ไปฌๅฆ‚ไฝ•่ฎพ่ฎกไธ€ไธชไธไพ่ต–ไปปไฝ•ไธ€ๆ–น็š„ไบคๆ˜“็ณป็ปŸๅ‘ข๏ผŸ ๅŽŸๆ–‡ๅ‚่€ƒ To understand the basics of how a cryptocurrency like Bitcoin or Chia works, we first need to look at how one would design a cryptocurrency from scratch. This section is targeted toward those new to the blockchain industry; others can skip it. We could rely on a central server with a public API to send transactions (which takes in a username and password) and a public API for reading data. However, this is not decentralized, and it does not bring most of the benefits above. This is the way in which many financial systems worked before Bitcoin. How would we design a transaction system which does not depend on any one party? 
#้ชŒ่ฏ ้ฆ–ๅ…ˆ๏ผŒๆˆ‘ไปฌ้œ€่ฆไธ€็งๅฎ‰ๅ…จ็š„ๆ–นๅผๅฐ†ไบคๆ˜“ๅ‘้€ๅˆฐ่ฎธๅคšๆœๅŠกๅ™จใ€‚ๅ‡่ฎพๅ…จ็ƒๆœ‰ 1000 ๅฐๆœๅŠกๅ™จ๏ผŒ่€Œไธๆ˜ฏๅชๆœ‰ไธ€ๅฐ๏ผŒๅนถไธ”่ฟ™ไบ›ๆœๅŠกๅ™จ็›ธไบ’ๅ‘้€็”จๆˆท็š„ไบคๆ˜“ไฟกๆฏใ€‚ ๅ‡่ฎพ่ฟ™ไบ›ๆœๅŠกๅ™จ็”ฑไธๅŒ็š„ๅฎžไฝ“๏ผˆๅ…ฌๅธใ€ไบบๅ‘˜็ญ‰๏ผ‰่ฟ่กŒใ€‚็”จๆˆทๅๅ’Œๅฏ†็ ๅœจ่ฟ™็งๅŽปไธญๅฟƒๅŒ–ๆจกๅž‹ไธญไธ่ตทไฝœ็”จ๏ผŒๅ› ไธบๆฏไธชๆœๅŠกๅ™จ้ƒฝ้œ€่ฆ็Ÿฅ้“ๅฏ†็ ๆ‰่ƒฝ้ชŒ่ฏไบคๆ˜“ๆ˜ฏๅฆๆœ‰ๆ•ˆใ€‚่ฟ™ๅฐ†ๆ˜ฏ้žๅธธไธๅฎ‰ๅ…จ็š„ใ€‚ ็›ธๅ๏ผŒๆˆ‘ไปฌๅฏไปฅไฝฟ็”จ็”ฑ่ตซๅฐ”ๆ›ผ๏ผˆHellman๏ผ‰ใ€้ป˜ๅ…‹ๅฐ”๏ผˆMerkle๏ผ‰ๅ’Œ่ฟช่ฒ๏ผˆDiffie๏ผ‰ๅ‘ๆ˜Ž็š„ๅ…ฌ้’ฅๅฏ†็ ๆœฏใ€‚ ไพ‹ๅฆ‚๏ผŒๅไธบ็ˆฑไธฝไธ๏ผˆAlice๏ผ‰็š„็”จๆˆท็ปดๆŠค็€ไธ€ไธช็ง˜ๅฏ†ๅฏ†้’ฅ๏ผˆไนŸ็งฐไธบ็ง้’ฅ๏ผ‰ sk_a ๅ’Œไธ€ไธชๅ…ฌ้’ฅ pk_aใ€‚ๅ…ฌ้’ฅๅ‘ๅธƒๅœจๅฅนไฝ™้ขๆ—่พน็š„ไบคๆ˜“ไธญ๏ผŒๅ‡่ฎพไธบ 1 BTCใ€‚ไธบไบ†่ŠฑๆŽ‰้‚ฃ 1 ไธชๆฏ”็‰นๅธ๏ผŒๅฅน้œ€่ฆ็”จๅฅน็š„็ง้’ฅๆไพ›ๆ•ฐๅญ—็ญพๅใ€‚็ญพๅๅช่ƒฝไฝฟ็”จๅ…ฌ้’ฅๅ’Œๆถˆๆฏ่ฟ›่กŒ้ชŒ่ฏ๏ผŒๅนถไธ”็‰นๅฎšไบŽๆญฃๅœจ็ญพๅ็š„ๆ•ฐๆฎใ€‚ ๅœจ่ฟ™ไธชๅŽปไธญๅฟƒๅŒ–็ณป็ปŸไธญ่ฟ่กŒ็š„ๆฏไธชๆœๅŠกๅ™จ้ƒฝๅฏไปฅๆŽฅๅ—ไธ€็ฌ”ไบคๆ˜“๏ผŒๅ…ถไธญๅŒ…ๆ‹ฌๆญฃๅœจๅ‘้€็š„็กฌๅธ IDใ€ๆ”ถไปถไบบไฟกๆฏๅ’Œ็ญพๅใ€‚ ๆ•ฐๅญ—็ญพๅๆ˜ฏๅŠ ๅฏ†่ดงๅธ็š„ๅŸบๆœฌๆž„ๅปบๅ—ใ€‚ ๅŽŸๆ–‡ๅ‚่€ƒ First, we need a secure way to send transactions to many servers. Let's assume that there are 1000 servers across the world, instead of just one, and that these servers send transaction information of users to each other. These servers are assumed to be run by different entities (companies, people, etc). Usernames and passwords would not work in this decentralized model, because every server would need to know the password in order to verify that a transaction is valid. This would be extremely insecure. Instead, we can use public key cryptography, invented by Hellman, Merkle, and Diffie. For example, a user named Alice maintains a secret key (also called a private key) sk_a, and a public key pk_a. The public key is posted in a transaction next to her balance, let's say 1 BTC. 
In order to spend that 1 BTC, she needs to provide a digital signature with her private key. The signature can be verified with the public key and message only, and is specific to the data that is being signed. Each server running in this decentralized system can accept a transaction, which includes the ID of the coin that is being sent, the recipient information, and the signature. Digital signatures are fundamental building blocks for cryptocurrencies.
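The private-key signing and public-key verification described above can be sketched with a toy RSA-style scheme. This is an illustration only: the primes are far too small to be secure, the transaction message is invented, and real cryptocurrencies use schemes such as ECDSA or BLS with much larger keys.

```python
import hashlib

# Toy RSA-style key pair (insecure, demo only).
p, q = 65537, 65539                 # two small primes
n = p * q                           # public modulus
e = 17                              # public exponent: (e, n) plays the role of pk_a
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: d plays the role of sk_a

def digest(message: bytes) -> int:
    """Hash the message down to an integer smaller than n."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the holder of the private key d can compute this value.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can check the signature using only the public key (e, n),
    # and the check is specific to the exact message that was signed.
    return pow(signature, e, n) == digest(message)

tx = b"Alice sends coin #42 to Bob"
sig = sign(tx)
print(verify(tx, sig))                                # True: signature matches
print(verify(b"Alice sends coin #42 to Eve", sig))    # False: altered message
```

Because verification needs only the public key, all 1000 servers can independently check Alice's transaction without ever learning her private key.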
ๅฏผ่‡ดๅŒ้‡ๆ”ฏไป˜้—ฎ้ข˜้šพไปฅ่งฃๅ†ณ็š„ๅ…ณ้”ฎ้—ฎ้ข˜ๆ˜ฏๅฅณๅทซๆ”ปๅ‡ปใ€‚ๅฅณๅทซๆ”ปๅ‡ปๆ˜ฏๆŒ‡ๆ”ปๅ‡ป่€…ไปฅไฝŽๆˆๆœฌๅˆ›ๅปบๅคง้‡่™šๅ‡่บซไปฝใ€‚ๅคงๅคšๆ•ฐโ€œX ่ฏๆ˜Žโ€ๅŒบๅ—้“พๅนถไธๅฎ‰ๅ…จ๏ผŒๅ› ไธบๅฆ‚ๆžœๆ”ปๅ‡ป่€…ๅˆ›ๅปบๅคšไธช่บซไปฝ๏ผŒ่ฟ™ไผš็ป™ๆ”ปๅ‡ป่€…ๅธฆๆฅไผ˜ๅŠฟใ€‚ ไธญๆœฌ่ช็š„ๅคฉๆ‰ไน‹ๅค„ๅœจไบŽ่งฃๅ†ณไบ†ๅŒ้‡ๆ”ฏไป˜้—ฎ้ข˜๏ผŒ้œ€่ฆ้€š่ฟ‡็Žฐๅฎžไธ–็•Œ็š„ๅทฅไฝœๆฅ่Žทๅพ—โ€œๆŠ•็ฅจโ€ๅนถๅ†ณๅฎšๅ…ฑ่ฏ†ใ€‚่ฟ™็งโ€œๅทฅไฝœ้‡่ฏๆ˜Žโ€ๆ˜ฏๅฏๅŠ ๅฏ†้ชŒ่ฏ็š„๏ผŒๅ‚ไธŽ็š„ๅ”ฏไธ€่ฆๆฑ‚ๆ˜ฏไธ€ๅฐ่ฎก็ฎ—ๆœบๅ’Œไบ’่”็ฝ‘่ฟžๆŽฅใ€‚ ๅœจๅทฅไฝœ้‡่ฏๆ˜Ž็ฝ‘็ปœไธญ๏ผŒๅ‚ไธŽ็š„ๆฏๅฐ่ฎก็ฎ—ๆœบไฝฟ็”จ้šๆœบ่พ“ๅ…ฅ้‡ๅค็”ŸๆˆๅŠ ๅฏ†ๅ“ˆๅธŒใ€‚่ฟ™่ตทๅˆฐไบ†ๅ…จ็ƒๅฝฉ็ฅจ็š„ไฝœ็”จ๏ผŒๅœจๅ…ถไธญ็”Ÿๆˆๅ“ˆๅธŒ๏ผŒ็›ดๅˆฐไธ€ๅฐ่ฎก็ฎ—ๆœบ็”Ÿๆˆ่ตขๅฎถโ€”โ€”ๅ…ทๆœ‰ไธ€ๅฎšๆ•ฐ้‡็š„ๅ‰ๅฏผ้›ถ็š„ๅ“ˆๅธŒใ€‚่ฟ™่ขซ็งฐไธบๅทฅไฝœ้‡่ฏๆ˜Ž๏ผŒๅ› ไธบๆฒกๆœ‰ๆทๅพ„๏ผŒๆ‰€ไปฅ่ฎก็ฎ—ๆœบๅฟ…้กป้€š่ฟ‡็”Ÿๆˆๅ“ˆๅธŒๆฅๆŠ•ๅ…ฅๆ‰€้œ€ๆ•ฐ้‡็š„่ฎก็ฎ—โ€œๅทฅไฝœโ€ใ€‚ ๅฝ“ๆ‰พๅˆฐ่Žท่ƒœ่ฏๆ˜Žๆ—ถ๏ผŒๅ‘็Žฐๅฎƒ็š„่ฎก็ฎ—ๆœบ่Žทๅพ—ๅœจๅŒบๅ—้“พไธญ็”Ÿๆˆๆ–ฐโ€œๅŒบๅ—โ€็š„ๆƒๅˆฉใ€‚่ฏฅๅŒบๅ—ๅŒ…ๅซๆŒ‡ๅ‘ๅ‰ไธ€ไธชๅŒบๅ—็š„ๆŒ‡้’ˆใ€ๆœ‰ๆ•ˆไบคๆ˜“ๅˆ—่กจๅ’Œ่Žท่ƒœๅ“ˆๅธŒใ€‚ๆ‰€ๆœ‰่Š‚็‚น้ƒฝ้œ€่ฆๆŽฅๅ—ๆœ€้‡็š„้“พ๏ผˆ้œ€่ฆๆœ€ๅคšๅทฅไฝœ็š„้“พ๏ผ‰ใ€‚ๅ› ๆญค๏ผŒๆ‰€ๆœ‰่Š‚็‚น้ƒฝไผšๆŽฅๅ—ๆ–ฐ็š„ๅŒบๅ—๏ผŒๅทฅไฝœ้‡่ฏๆ˜Ž็š„ๅฝฉ็ฅจ้‡ๆ–ฐๅผ€ๅง‹ใ€‚ ๅœจๆฏ”็‰นๅธ็š„ๅ…ฑ่ฏ†็ฎ—ๆณ•ไธญ๏ผŒๆฏไธช่ฏๆ˜Žๅนณๅ‡้œ€่ฆ 10 ๅˆ†้’Ÿๆฅ็”Ÿๆˆใ€‚้š็€่ถŠๆฅ่ถŠๅคš็š„่ฎก็ฎ—ๆœบๅŠ ๅ…ฅ็ฝ‘็ปœ๏ผŒ็”Ÿๆˆ่ฏๆ˜Ž็š„ๅนณๅ‡ๆ—ถ้—ด่‡ช็„ถไผšๅ‡ๅฐ‘ใ€‚่ฟ™็ป™ๆˆ‘ไปฌๅธฆๆฅไบ†ไธญๆœฌ่ช็š„ๅฆไธ€ไธช็ฎ€ๅ•่€Œไผ˜้›…็š„ๆƒณๆณ•๏ผš้šพๅบฆ่ฐƒๆ•ดใ€‚ๆฏ 2016 ไธชๅŒบๅ—๏ผˆๅนณๅ‡ไธคๅ‘จ๏ผ‰๏ผŒๅทฅไฝœ้‡่ฏๆ˜Ž็ฎ—ๆณ•ไผš่‡ชๅŠจ่ฐƒๆ•ดๆ‰พๅˆฐ่ฏๆ˜Ž็š„้šพๅบฆใ€‚ๅฎƒ้€š่ฟ‡ๅขžๅŠ ๆˆ–ๅ‡ๅฐ‘็”Ÿๆˆ็š„ๅ“ˆๅธŒไธญๆ‰€้œ€็š„ๅ‰ๅฏผ้›ถๆ•ฐ้‡ๆฅๅฎž็Žฐ่ฟ™ไธ€็‚นใ€‚็ป“ๆžœๆ˜ฏ๏ผŒๆ— ่ฎบๆœ‰ๅคšๅฐ‘่ฎก็ฎ—ๆœบๅผ€ๅง‹ๆˆ–ๅœๆญขๅ‚ไธŽๅทฅไฝœ้‡่ฏๆ˜ŽๆŠฝๅฅ–๏ผŒๆ‰พๅˆฐ่ฏๆ˜Žๆ‰€้œ€็š„ๅนณๅ‡ๆ—ถ้—ดๅง‹็ปˆไธบ 10 ๅˆ†้’Ÿใ€‚ 
ๆœ‰ไบ†่ฟ™็งๅ…ฑ่ฏ†ๆœบๅˆถ๏ผŒๆ”ปๅ‡ป็ฝ‘็ปœๅฐฑๅ˜ๅพ—้žๅธธๅ›ฐ้šพใ€‚ๅฆ‚ๆžœๆ”ปๅ‡ป่€…ๆƒณ้€š่ฟ‡ๅˆ›ๅปบๆ›ฟไปฃๅŒบๅ—้“พๆฅโ€œ้‡ๅ†™ๅŽ†ๅฒโ€๏ผŒไป–ไปฌ้œ€่ฆๆฏ”็ณป็ปŸไธญ็š„่ฏšๅฎžๅ‚ไธŽ่€…ๆ›ดๅฟซๅœฐๅˆ›ๅปบๆ–ฐๅŒบๅ—ใ€‚็”ฑไบŽๅˆ›ๅปบๆฏไธชๅŒบๅ—ๆ‰€้œ€็š„ๅทฅไฝœ้‡่ฏๆ˜Ž๏ผŒๆ”ปๅ‡ป่€…้œ€่ฆๆฏ”็ฝ‘็ปœไธญๆ‰€ๆœ‰ๅ…ถไป–่ฎก็ฎ—ๆœบ็š„ๆ€ปๅ’Œๆ›ดๅฟซๅœฐ็”Ÿๆˆๅ“ˆๅธŒใ€‚่ฟ™่ขซ็งฐไธบโ€œ51% ๆ”ปๅ‡ปโ€๏ผŒ็จๅŽๅฐ†ๅœจ็ฌฌ 3.14 ่Š‚ไธญ่ฟ›่กŒๆ›ด่ฏฆ็ป†็š„่ฎจ่ฎบใ€‚ ๅทฅไฝœ้‡่ฏๆ˜Ž่งฃๅ†ณไบ†ๅŒ้‡ๆ”ฏไป˜้—ฎ้ข˜โ€”โ€”ไปปไฝ•ๆ—ถๅ€™ๅชๆœ‰ไธ€ๅฐ่ฎก็ฎ—ๆœบๅฏไปฅๅˆ›ๅปบไธ€ไธชๅŒบๅ—ใ€‚ๅฎƒ่ฟ˜่งฃๅ†ณไบ†ๅฅณๅทซ้—ฎ้ข˜โ€”โ€”ๅˆ›ๅปบไธ€ไธชๅŒบๅ—ไธไป…้œ€่ฆๅฏน็กฌไปถ่ฟ›่กŒๅฎž้™…ๆŠ•่ต„๏ผŒ่€Œไธ”ๅฎƒไนŸไธไผš็ป™ๅˆ›ๅปบๅคšไธช่บซไปฝ็š„ไบบๅธฆๆฅไปปไฝ•ๅฅฝๅค„ใ€‚ๆฏไธชไบบๆœ‰็›ธๅŒ็š„่Žท่ƒœๆฆ‚็Ž‡๏ผŒๆ— ่ฎบไป–ไปฌไฝฟ็”จ็š„ๆ˜ฏไธ€ไธช่บซไปฝ่ฟ˜ๆ˜ฏไธ€็™พไธ‡ไธช่บซไปฝใ€‚ ๅŽŸๆ–‡ๅ‚่€ƒ However, signatures are not enough, because of an issue called the "double spend problem." Of the 1000 servers, let's say 500 are in Asia and 500 are in America. An attacker, Bob, sends two transactions that spend the same coin, to two servers, at the same time: one in Asia and one in America. Those transactions send the money to different recipients, which should not be allowed. In this case, the two servers need to come to agreement as to which transaction came first. Otherwise, they will have diverging state, and the system will not have global consensus. To solve this issue, we need a consensus algorithm, or a way for all computers in the system to quickly come to unambiguous agreement on the ordering and content of transactions. Since we are trying to create a globally decentralized and secure system, why not allow each person one vote, and add up votes for deciding transaction ordering? This would be great if it were possible, but it unfortunately requires some type of central party, first to decide who is a "person," and then to create these identities. This would make the system centralized. 
We could instead base the system on "one computer, one vote," counting each IP address as a "computer." However, it is trivial to buy new IP addresses, or to change addresses using a VPN or a proxy server. An attacker could even create millions of fake IP addresses. The attacker would gain control of the network once they own 51% of the addresses. At this point, they could decide transaction ordering and content. Again, the system becomes centralized, and possibly compromised.

The key issue that makes it difficult to solve the double-spend problem is the Sybil attack. A Sybil attack is when an attacker creates a large number of fake identities at a low cost. Most "Proof of X" blockchains are not secure, because if an attacker can create multiple identities cheaply, doing so gives the attacker an advantage.

The genius of Satoshi Nakamoto was to solve the double-spend problem by requiring real-world work in order to obtain "votes," and to decide consensus. This "Proof of Work" is cryptographically verifiable, and the only requirements for participation are a computer and an internet connection.

In Proof of Work networks, each computer that is participating repeatedly generates cryptographic hashes using random input. This functions as a global lottery, where hashes are generated until one computer generates a winner -- a hash with a certain number of leading zeros. This is known as a proof of work because there are no shortcuts: computers must put in the required amount of computational "work" by generating hashes.

When a winning proof is found, the computer that discovered it earns the right to generate a new "block" in the blockchain. This block contains a pointer to the previous block, a list of valid transactions, and the winning hash. All nodes are required to accept the heaviest chain (the one which required the most work). Therefore, all nodes will accept the new block, and the proof-of-work lottery begins anew.
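The proof-of-work lottery described above can be sketched in a few lines of Python. The block data and function name here are invented for the sketch, and real networks compare the hash against a numeric target over a full block header rather than counting hex zeros, but the "no shortcuts" property is the same.

```python
import hashlib
import os

def find_proof(block_data: bytes, difficulty: int):
    """Hash (block_data + random nonce) until the digest starts with
    `difficulty` leading zero hex digits -- a winning lottery ticket."""
    attempts = 0
    while True:
        nonce = os.urandom(8)
        attempts += 1
        digest = hashlib.sha256(block_data + nonce).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest, attempts

# Low difficulty (4 hex zeros, ~65,000 hashes on average) so the demo
# finishes quickly; Bitcoin's current target needs roughly 19.
nonce, winning_hash, attempts = find_proof(b"block with pending txs", 4)
print(winning_hash, "found after", attempts, "attempts")

# There are no shortcuts to *finding* a proof, but *checking* one is a
# single cheap hash:
assert hashlib.sha256(b"block with pending txs" + nonce).hexdigest() == winning_hash
```

The asymmetry shown in the last line is what makes the lottery verifiable: any node can confirm a winner with one hash, while producing a winner requires the full expected amount of work.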
In Bitcoin's consensus algorithm, each proof takes an average of 10 minutes to generate. As more computers join the network, the average amount of time to generate a proof will naturally decrease. This brings us to another of Satoshi's simple and elegant ideas: the difficulty adjustment. Every 2016 blocks (two weeks, on average), the proof-of-work algorithm automatically adjusts how difficult it is to find a proof. It accomplishes this by increasing or decreasing the required number of leading zeros in a generated hash. The result is that the average time required to find a proof will always be 10 minutes, no matter how many computers start or stop participating in the proof-of-work lottery.

With this consensus mechanism in place, attacking the network becomes very difficult. If an attacker wants to "rewrite history" by creating an alternative blockchain, they'll need to create new blocks faster than the honest actors in the system. Because of the proof of work that is required to create each block, the attacker will need to generate hashes faster than all other computers in the network, combined. This is known as a "51% attack" and is discussed in greater detail in Section 3.14.

Proof of Work solves the double-spend problem -- only one computer can create a block at any one time. It also solves the Sybil problem -- not only does creating a block require a real-world investment in hardware, but it also gives no advantage to someone who creates multiple identities. This person has the same probability of winning, whether they're using one identity or a million.
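The difficulty adjustment amounts to simple arithmetic: scale difficulty by the ratio of expected to actual elapsed time over the retarget window. The sketch below follows Bitcoin's published retarget rule, including its clamp to a factor of 4 in either direction; the function name is ours.

```python
TARGET_BLOCK_TIME = 10 * 60      # seconds per block that the network aims for
RETARGET_INTERVAL = 2016         # blocks between difficulty adjustments

def adjust_difficulty(old_difficulty: float, actual_seconds: float) -> float:
    """Scale difficulty so the next 2016 blocks average 10 minutes each.
    Bitcoin clamps each adjustment to a factor of 4 in either direction."""
    expected_seconds = TARGET_BLOCK_TIME * RETARGET_INTERVAL   # two weeks
    ratio = expected_seconds / actual_seconds
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Hashrate doubled, so the last 2016 blocks took one week instead of two;
# difficulty doubles to restore the 10-minute average.
print(adjust_difficulty(1.0, 7 * 24 * 3600))    # 2.0
# Hashrate halved: the window took four weeks, so difficulty halves.
print(adjust_difficulty(1.0, 28 * 24 * 3600))   # 0.5
```

Because the feedback loop only measures elapsed time, it works no matter how many machines join or leave the lottery.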
#ๅŒบๅ—้“พ ็ฝ‘็ปœไธญ็š„ๆฏไธช่Š‚็‚น้ƒฝไธŽๅ…ถไป–ไธ€ไบ›้šๆœบ่Š‚็‚นไฟๆŒๆดป่ทƒ็š„่ฟžๆŽฅใ€‚ๅฆ‚ๆžœ็”จๆˆทๆƒณ่ฆ่ฟ›่กŒไบคๆ˜“๏ผŒไป–ไปฌไผšๅฐ†ๅ…ถๅ‘้€ๅˆฐ็ฝ‘็ปœไธญ็š„ไปปไฝ•่Š‚็‚น๏ผŒ่ฏฅ่Š‚็‚นไผš่‡ชๅŠจๅฐ†ๅ…ถๅนฟๆ’ญ็ป™ไป–ไปฌ็š„ๅฏน็ญ‰ๆ–นใ€‚ๅ› ไธบๆฏไธช่Š‚็‚น้ƒฝ่ฟžๆŽฅๅˆฐไธ€็ป„ๅ”ฏไธ€็š„ๅฏน็ญ‰็‚น๏ผŒๆ‰€ไปฅไบคๆ˜“ๅพˆๅฟซๅฐฑไผšไผ ๆ’ญๅˆฐ็ฝ‘็ปœไธญ็š„ๆฏไธช่Š‚็‚นใ€‚็„ถๅŽ่Š‚็‚นๅฐ†ไบคๆ˜“๏ผŒๅŒ…ๆ‹ฌๆ‰€ๆœ‰ๅ…ถไป–ๆœชๅ†ณไบคๆ˜“๏ผŒไฟๅญ˜ๅœจๆœฌๅœฐๅ†…ๅญ˜ไธญ็š„ใ€‚่ฟ™็งฐไธบ ๅ†…ๅญ˜ๆฑ ใ€‚ ไธบไบ†่ฎฉๆฏไธช่Š‚็‚นๆœ็ดขไธ€ไธช่ฏๆ˜Ž๏ผŒๅฎƒๅฟ…้กป็ป„่ฃ…ไธ€ไธชๅŒบๅ—ๆฅ่ฟ›่กŒๆ•ฃๅˆ—ใ€‚ๅฎƒ้€š่ฟ‡ๅŒ…ๅซๆฅ่‡ชๅ†…ๅญ˜ๆฑ ็š„ไบคๆ˜“ๆฅๅšๅˆฐ่ฟ™ไธ€็‚น๏ผŒๅนถไธ”ๅฎƒๅพˆๅฏ่ƒฝไผš้€‰ๆ‹ฉๆ”ฏไป˜ๆœ€้ซ˜่ดน็”จ็š„ๅพ…ๅค„็†ไบคๆ˜“ใ€‚่ฟ™ๆ ทๅฐฑๅˆ›ๅปบไบ†ไธ€ไธชไบคๆ˜“่ดน็”จๅธ‚ๅœบ๏ผŒๅ…ถไธญไพ›ๅบ”ๆ˜ฏ็ณป็ปŸๆ”ฏๆŒ็š„ๆฏ็ง’ๆ€ปไบคๆ˜“้‡ (TPS)๏ผŒ้œ€ๆฑ‚ๅŸบไบŽๅ†…ๅญ˜ๆฑ ไธญ็š„ไบคๆ˜“ๆ•ฐ้‡ใ€‚ไธ€ๆ—ฆไบคๆ˜“่ขซๅŒ…ๅซๅœจๅ…ทๆœ‰ๆ‰€้œ€ๅทฅไฝœ่ฏๆ˜Ž็š„ๅ—ไธญ๏ผŒๅˆ™็งฐ่ฏฅไบคๆ˜“่ขซโ€œ็กฎ่ฎคโ€ใ€‚ ๅŒบๅ—้“พไบคๆ˜“่ฟ˜ๅฏไปฅๅŒ…ๆ‹ฌ่„šๆœฌๆˆ–็จ‹ๅบ๏ผŒๅ…่ฎธ็›ดๆŽฅ็”จไปฃ็ ๆŽงๅˆถ่ต„้‡‘ใ€‚ๆญคไปฃ็ ๅฏ่ƒฝ้œ€่ฆไธ€ๅฎšๆ•ฐ้‡็š„็ญพๅๆ‰่ƒฝ้‡Šๆ”พ่ต„้‡‘๏ผŒๆˆ–่€…ๅ…ทๆœ‰ไปปๆ„้€ป่พ‘ใ€‚ ่ฏท่ฎฐไฝ๏ผŒๅŒบๅ—้“พ็จ‹ๅบ็š„่ฟ่กŒๆˆๆœฌๅพˆ้ซ˜๏ผŒๅ› ไธบ็ณป็ปŸไธญ็š„ๆฏไธช่Š‚็‚น้ƒฝๅฟ…้กปไธ‹่ฝฝๅนถ่ฟ่กŒ่ฏฅ็จ‹ๅบใ€‚ไป…ไป…ๅ› ไธบๅฎƒๅฏไปฅๅœจๅŒบๅ—้“พไธŠ่ฟ่กŒ๏ผŒๅนถไธๆ„ๅ‘ณ็€ๅฎƒๅบ”่ฏฅๅœจไธ€ไธชๅŒบๅ—้“พไธŠ่ฟ่กŒใ€‚ ๆฏไธชๅ—่ฟ˜ๆœ‰ไธ€ไธชๆŒ‡ๅ‘ๅ‰ไธ€ไธชๅ—็š„ๅ“ˆๅธŒๆŒ‡้’ˆใ€‚่ฟ™ๆ„ๅ‘ณ็€ๅ‰ไธ€ไธชๅ—็š„ๅ†…ๅฎน็š„ๅ“ˆๅธŒๅ€ผๅŒ…ๅซๅœจๅฝ“ๅ‰ๅ—ไธญใ€‚ๅฆ‚ๆžœๆ”ปๅ‡ป่€…ๅฏไปฅๆ‰พๅˆฐๅŽ†ๅฒๅŒบๅ—็š„ๆ›ฟไปฃๆœ‰ๆ•ˆ่ฏๆ˜Ž๏ผŒ้‚ฃไนˆ่ฏฅ่ฏๆ˜Žๅฐ†ๆ›ดๆ”น่ฏฅๅŒบๅ—็š„ๅ“ˆๅธŒๅ€ผ๏ผŒ่ฟ™ๅฐ†ไฝฟไธ‹ไธ€ไธชๅŒบๅ—ๆ— ๆ•ˆใ€‚ๅฆ‚ๆžœๆ”ปๅ‡ป่€…ๆƒณ่ฆๆ›ดๆ”น่ฟ‡ๅŽปๅ‘็”Ÿ 10 ไธชๅŒบๅ—็š„ๅŒบๅ—๏ผŒไป–ไปฌๅ› ๆญค้œ€่ฆ้‡ๆ–ฐๅš่‡ณๅฐ‘ 10 ไธชๅŒบๅ—็š„ๅทฅไฝœ้‡่ฏๆ˜Žใ€‚็„ถ่€Œ๏ผŒ็ฝ‘็ปœ็š„ๅ…ถไฝ™้ƒจๅˆ†ๅฐ†็ปง็ปญๅˆ›ๅปบๅˆๆณ•ๅŒบๅ—๏ผŒๅ› ๆญคๅœจ็Žฐๅฎžไธญ๏ผŒๆ”ปๅ‡ป่€…ๅฏ่ƒฝ้œ€่ฆๅˆ›ๅปบ็š„ๅŒบๅ—่ฟœไธๆญข 10 ไธชใ€‚ไบ‹ๅฎžไธŠ๏ผŒๅช่ฆ็ฝ‘็ปœ็š„ๅ…ถไฝ™้ƒจๅˆ†็ป“ๅˆ่ตทๆฅ๏ผŒๅฏไปฅไปฅ็›ธๅŒๆˆ–ๆ›ดๅฟซ็š„้€Ÿๅบฆๅˆ›ๅปบๅŒบๅ—๏ผŒๆ”ปๅ‡ป่€…ๅฐฑๆฐธ่ฟœๆ— ๆณ•ๅˆ›ๅปบๆฏ”ๅˆๆณ•้“พๆ›ด้•ฟ็š„้“พใ€‚ 
ๆฏ”็‰นๅธ็ฝ‘็ปœๆฏ็ง’ๆ‰ง่กŒๅคง็บฆ 170 ๅƒไบฟ(170,000,000,000,000,000,000) ๆฌกๅ“ˆๅธŒ๏ผ›ๆ”ปๅ‡ป่€…ๅฟ…้กป่‡ณๅฐ‘ๆŽงๅˆถ้‚ฃไนˆๅคš็š„็ฎ—ๅŠ›ๆ‰่ƒฝ่ฟ›่กŒ 51% ็š„ๆ”ปๅ‡ปใ€‚ ๅŽŸๆ–‡ๅ‚่€ƒ Each node in the network maintains active connections with a few other random nodes. If a user wants to make a transaction, they send it to any node in the network, which automatically broadcasts it to their peers. Because each node is connected to a unique set of peers, the transaction quickly gets propagated to every node in the network. The nodes then save the transaction, including all other pending transactions, locally in memory. This is called the mempool. For more info on Chia's mempool, see Section 6. In order for each node to search for a proof, it must assemble a block to hash against. It does this by including transactions from the mempool, and it will most likely choose the pending transactions that pay the highest fee. A transaction fee market is thus created, where the supply is the total transactions per second (TPS) that the system supports, and the demand is based on the number of transactions in the mempool. A transaction is said to be "confirmed" once it is included inside a block which has the required proof of work. Blockchain transactions can also include scripts or programs, which allow controlling funds directly with code. This code can require a certain number of signatures to release the funds, or have any arbitrary logic. Keep in mind that blockchain programs are expensive to run, since every node in the system must download and run the program. Just because it can be run on a blockchain, doesn't mean that is should be run on one. Each block also has a hash pointer to the previous block. This means that the hash of the contents of the previous block are included in the current block. If an attacker could find an alternative valid proof for a historical block, the proof would then change that block's hash, which would invalidate the next block. 
If the attacker wanted to change a block that occurred 10 blocks in the past, they would therefore need to re-do the proof of work for at least 10 blocks. The rest of the network would continue to create legitimate blocks, however, so in reality, the attacker would likely have to create many more than just 10 blocks. In fact, as long as the rest of the network, combined, could create blocks at the same rate or faster, the attacker would never be able to create a chain longer than the legitimate chain.

The Bitcoin network performs around 170 quintillion (170,000,000,000,000,000,000) hashes per second; the attacker would have to control at least that much hashpower to make a 51% attack feasible.

# Beyond Proof of Work
Over a decade has passed since the creation of Bitcoin and Proof of Work blockchains. While Proof of Work is quite secure, that security comes at a cost: a tremendous expenditure of energy is required to generate those 170 quintillion hashes per second. On top of that, specialized hardware is required to run nodes on these systems, which has led to a high degree of centralization among the top miners.

Perhaps most troubling of all are the pools. On a given day, the hashrate of the top four or five Bitcoin pools constitutes over half of the overall hashrate. Arguably, the easiest attack against the Bitcoin network would be for the pool operators to collude (either willingly or under threat), putting a 51% attack well within reach.

These issues have prompted people to develop alternative Sybil-resistant consensus models. Proof of Stake (voting with blockchain assets) is one of the most popular approaches, and within this category there are many types of algorithms. These systems tend to compromise on decentralization (and thus, security) to varying degrees.

Chia takes an alternate approach called Proofs of Space and Time (PoST), which we think is likely to be more decentralized and accessible than Proof of Stake. In this model, full nodes store files full of millions of hashes (akin to lottery tickets, as described above) on hard drives. This model maintains the security properties of Nakamoto's Proof of Work, while remaining accessible to normal users without any special hardware.
https://chiadocs.chiabee.net/docs/01introduction/intro-to-cryptocurrencies/
Use grids to display tabular data in a structured, easy-to-scan layout. Keep grid data values concise and consistently-formatted to maximize readability.

DO
Apply the same formatting to all data in a given column. Keep row heights uniform, preferably keeping text short so that it can be displayed on one line.

DON'T
Don't use grids to display large blocks of text. Don't show different types of content or varied formatting within the same column. Don't display values that create variation in row height, as this makes the grid hard to scan.

Grids should be designed to allow users to quickly find the item(s) that they are looking for. The first priority should be to apply a logical sort order that would display the most important information at the top of the list.

The sort order of the grid is applied to the "Tenure (in Years)" column

Then, provide commonly used filter and search capabilities to let users narrow down the list. Use a Record Type as the grid's data source to take advantage of a search field and user filters that have already been defined in the Record Type object.

Record action buttons can be displayed in a grid column by using the Record Action component.

The pencil icon is a related action using the "Icon Only" style

DON'T
Don't display multiple related actions within a grid cell. Instead, try placing actions above the grid or in separate columns.

Record actions can be displayed in "Toolbar" style above a grid when the grid is backed by a Record Type. Selection is often necessary when related actions are configured. Refer to the Read-Only Grid documentation to learn more.

"New Item" is a record list action and "Edit Case" and "Reassign Case" are related actions

Establish an appropriate batch size that minimizes scrolling.
If the interface has several other components along with the grid, then use a smaller batch size, such as 5 to 10 items, so that the user can easily access the other components in the interface. If the grid is the only component on the interface, prioritize getting the user to the items they are looking for on the first page using sort order and filter controls. Use a large batch size, such as 50 items, so that users can scroll to their items rather than paging multiple times and waiting for items to load. If you are not sure whether the sort order and filter controls will get users to the items they are looking for on the first page, then use a smaller batch size, such as 25 items, and ensure that users are able to access paging controls without scrolling.

The first column should always be left aligned (in left-to-right languages) regardless of the value type. For all other columns, left align text columns and right align columns with numbers or dates. For editable grid columns, use left alignment for all field types (in left-to-right languages), including numbers and dates. Always align headers consistently with column content.

Use a hyphen (–) when displaying cells with no value/data.

DON'T
Avoid using "N/A" or "Not Applicable", as it tends to create clutter.

Set column widths to "Auto" (this is the default for new columns) to allow the grid to distribute space based on the amount of content in each column. When in doubt, try this setting first, as it often produces good results without additional effort. Note that column widths will fluctuate as data changes: a cell with a large amount of text will cause its column to be wider, all else being equal.

Use fixed column widths, such as "Narrow" or "Wide", for consistent behavior that mimics how spreadsheets work. The width of each column will remain constant even as the width of the grid changes (such as when resizing the browser window).
Horizontal scrolling will be automatically enabled when the total of column widths exceeds the grid's width. Note: when the total of configured column widths is less than the grid's overall width, columns will be expanded to fill the available space.

Use relative widths to set proportional column widths that distribute available space. For example, 3 columns with "2X", "3X", and "5X" widths, respectively, will take up 20%, 30%, and 50% of the width of the grid. As much as possible, the proportional relationship between columns will be preserved as the overall grid width changes. A particular set of relative widths that look appropriate on a wide monitor, for example, may not work well on a phone display, because each column will become much narrower.

A particular set of relative widths that work well on wide screens may not transfer well to narrow screens. The "Start Date" and "Department" column values on the phone are too narrow.

In general, relative column widths work well when the grid is typically viewed on similar screen sizes (i.e., mostly viewed on a phone or mostly viewed on a desktop monitor). Fixed column widths offer more predictability when grids are viewed across a wide variety of screen form factors.

Sometimes, the best results may come from using more than one type of grid column width configuration within the same grid. For example, one might set fixed widths for columns that always require the same amount of space: "Icon"-width for a column that shows a status icon, "Narrow"-width for a column that shows a percentage value. The remaining columns in the grid may show varying amounts of text and work best with "Auto" or a set proportion of relative widths.

While considerations for setting column widths are similar across the two types of grids, editable grids lack some of the configurations available for read-only grids. Editable grids do not support automatic column widths based on the amount of content in each cell.
The default column width of "Distribute" produces the same result as specifying "1" as the weight. A grid with all "Distribute"-width columns will evenly distribute space across all columns.

The editable grid (TOP) has "Distribute"-width columns, so the column spacing is evenly distributed. The read-only grid (BOTTOM) has "Auto"-width columns, allocating space based on the column content.

Editable grids only support "Icon" and "Narrow" fixed widths. Compared to read-only grids, less precision is possible when using fixed column widths, and proportional weights may have to be relied upon in more cases. Remember that the same column weight may translate into a different width when a grid is viewed on different screen sizes.
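The 2X/3X/5X behavior described above is simple proportional arithmetic. This generic sketch (not Appian's actual layout engine, and the function name is ours) shows why the same weights that work on a 1000px grid yield very narrow columns on a phone-width grid:

```python
def column_widths(weights, total_width):
    """Distribute the available grid width across columns in proportion
    to their relative weights (e.g. 2X, 3X, 5X)."""
    unit = total_width / sum(weights)
    return [round(w * unit) for w in weights]

# On a wide grid, 2X/3X/5X weights yield comfortable 20%/30%/50% columns.
print(column_widths([2, 3, 5], 1000))  # [200, 300, 500]

# The same proportions on a 320px phone screen produce much narrower
# absolute widths, which is why relative widths may break down there.
print(column_widths([2, 3, 5], 320))   # [64, 96, 160]
```

This is the trade-off noted above: relative widths preserve proportions, not minimum readable widths, so they suit grids viewed on similar screen sizes.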
When displaying multiple grids on the same page, use the same density and styling options for all of them to create a consistent experience.

Use the height option to maintain a fixed height for the grid regardless of the number of rows and to ensure that the header is always visible.

DON'T: Don't pick a fixed height when the grid is the only or main element on the page, as users may have to scroll the page and scroll the contents of the grid to find what they're looking for.

DON'T: Don't mix paging and fixed heights, as users would have to both scroll and page to find what they're looking for. At the same time, be aware of performance considerations when choosing to remove paging or to set a very large batch size and rely on scrolling.

When designing a grid, use the row header parameter to help screen reader users better understand the context of each cell they're traversing. The row header acts as the identifier for a given row, similar to how the column header is the identifier for each column. When users navigate to a cell within a grid, the screen reader announces both the corresponding column header and row header values. Note that the row header is only recognized as the header for columns to its right; because of this, the first column containing text is usually the correct choice for row header.
https://docs.appian.com/suite/help/22.1/ux_grids.html
IMPORTANT: Some components' configuration and log files can contain credentials or other sensitive information. Although some of these files would be of interest for debugging purposes, the Bitnami Diagnostic Tool skips them in order to guarantee the security and privacy of users.

IMPORTANT: For security reasons, never post or disclose your server's SSL private key in a public forum.

Run the Bitnami Diagnostic Tool. NOTE: Replace the placeholder APP_NAME with the name of the application you are using (for instance, the Bitnami WordPress Stack). If the tool is unable to upload the support bundle, refer to the next section for instructions on how to upload it manually.

Contact Bitnami support through the GitHub repository and provide the code generated by the tool. NOTE: It is not necessary to email log files or other data files to Bitnami support; support agents can retrieve the information collected about your stack using the generated code.

IMPORTANT: Do not share the link to the support bundle in any public forum, as it contains detailed diagnostic information about your system.
https://docs.bitnami.com/aws/how-to/understand-bndiagnostic/
Bounding Box Center

Reference: Mode: Object Mode and Edit Mode; Header; Shortcut: Period.

The image below shows how the object's bounding box size is determined by the size of the object (the relationship between an object and its bounding box).

In Object Mode

In Object Mode, transformation takes place relative to the location of the object's origin (indicated by the yellow circle), and the size of the object is not taken into account. The image below shows the results of using the bounding box as the pivot point in some situations. In this example, the orange rectangle has its origin located on the far left of the mesh, while the blue rectangle has its origin located in the center of the mesh. When a single object is selected, the rotation takes place around its origin.

A comparison image shows the location of the bounding box pivot point (right) compared to the median point (left). When multiple objects are selected, the pivot point is calculated based on the location of all the selected objects; more precisely, the centers of the objects are taken into account.

In Edit Mode

This time it is the geometry that is enclosed in the bounding box. The bounding box in Edit Mode takes no account of the object's origin, only the center of the selected vertices. The remaining images show the effects of rotation in different mesh selection modes when the bounding box is set as the pivot point (the pivot point is shown by a yellow circle), and the bounding box center compared to the median point.
https://docs.blender.org/manual/en/3.0/editors/3dview/controls/pivot_point/bounding_box_center.html
- File import options:
  - Celigo Cash Application Manager can be configured to automatically import files from an FTP site based on a specified schedule.
  - Clients may choose to manually import files that they have obtained directly from their bank. The manual import places the files into the file cabinet in NetSuite.
- Deposited or Undeposited Status: Celigo Cash Application Manager can be configured to import payments either with a status of deposited (into a bank account that you specify) or undeposited. If your payments are imported with a status of undeposited, you will need to run the NetSuite Make Deposit transaction later. Consider the format of your bank statement and your internal bank reconciliation process to determine whether to import payments as deposited or undeposited.
https://docs.celigo.com/hc/en-us/articles/115000817652-Options-Determined-Pre-Implementation
The Field Name and Contract Renewal Field Values are sourced directly from the Contract Renewal module in NetSuite and can't be set.

- Contract Renewal Field Name: the field from the NetSuite Contract Renewals module used to determine the Renewal Type. This field comes from the Contract Renewal module and cannot be changed; typically, it is Order Type.
- Contract Renewal Field Values: the NetSuite Contract Renewal Types to use for Opportunity creation in Salesforce.
- Contract Renewal Transaction Type: the Renewal Transaction Type selected in the NetSuite Contracts module. Refresh to update.
- Create Salesforce Opportunity in Stage: when creating renewals in Salesforce as an Opportunity, set the Opportunity Status to the selected value. Available values are: Prospecting, Valueprop, ...

Click Save Settings to complete the Contract Renewals Sync configuration.

If an Opportunity was synced previously, it is synced back with the total order amount for a given Contract. The Connector also shows the total amount calculations done by the NetSuite Contract Renewal module within the Salesforce Opportunity, so that the sales rep can be sure about their selections before syncing the Opportunity as a Sales Order.

Contract Renewal flow: in the NetSuite Contract Renewal module, the Sales Order has the following entities:
- End User
- Bill to Customer (the customer who pays the bill)

The 'Bill to Customer' field is not editable and is set up by NetSuite.
https://docs.celigo.com/hc/en-us/articles/228868848-Configure-Contract-Renewals-Sync
Run HDX channel system reports

In the User Details view, check the status of the HDX channels on the user's machine in the HDX panel. This panel is available only if the user machine is connected using HDX. If a message appears indicating that the information is not currently available, wait one minute for the page to refresh, or select the Refresh button. HDX data takes a little longer to update than other data.

Click an error or warning icon for more information.

Tip: You can view information about other channels in the same dialog box by clicking the left and right arrows in the left corner of the title bar.

HDX channel system reports are used mainly by Citrix Support for further troubleshooting.

- In the HDX panel, click Download System Report.
- You can view or save the .xml report file.
- To view the .xml file, click Open. The .xml file appears in the same window as the Director application.
- To save the .xml file, click Save. The Save As window appears, prompting you for a location on the Director machine to download the file to.
https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/2106/director/troubleshoot-deployments/user-issues/hdx-channel-reports.html
RPM Packaging

This chapter deals with security-related concerns around RPM packaging. It has to be read in conjunction with distribution-specific packaging guidelines.

Generating X.509 Self-signed Certificates during Installation

Some applications need X.509 certificates for authentication purposes. For example, a single private/public key pair could be used to define cluster membership, enabling authentication and encryption of all intra-cluster communication. (Lack of certification from a CA matters less in such a context.) For such use, generating the key pair at package installation time, when preparing system images for use in the cluster, is reasonable. For other use cases, it is necessary to generate the key pair before the service is started for the first time; see Generating X.509 Self-signed Certificates before Service Start, and Packaging:Initial Service Setup.

In the spec file, we define RPM variables which contain the names of the files used to store the private and public key, and the user name for the service:

    # Name of the user owning the file with the private key
    %define tlsuser %{name}
    # Name of the directory which contains the key and certificate files
    %define tlsdir %{_sysconfdir}/%{name}
    %define tlskey %{tlsdir}/%{name}.key
    %define tlscert %{tlsdir}/%{name}.crt

These variables likely need adjustment based on the needs of the package. Typically, the file with the private key needs to be owned by the system user which needs to read it, %{tlsuser} (not root). In order to avoid races, if the directory %{tlsdir} is owned by the service user, you should use the following code (creating a key pair in a user-owned directory). The invocation of su with the -s /bin/bash argument is necessary in case the login shell for the user has been disabled.

    %post
    if [ $1 -eq 1 ] ; then
      if ! test -e %{tlskey} ; then
        su -s /bin/bash \
          -c "umask 077 && openssl genrsa -out %{tlskey} 2048 2>/dev/null" \
          %{tlsuser}
      fi
      if ! test -e %{tlscert} ; then
        cn="Automatically generated certificate for the %{tlsuser} service"
        req_args="-key %{tlskey} -out %{tlscert} -days 7305 -subj \"/CN=$cn/\""
        su -s /bin/bash \
          -c "openssl req -new -x509 -extensions usr_cert $req_args" \
          %{tlsuser}
      fi
    fi

    %files
    %dir %attr(0755,%{tlsuser},%{tlsuser}) %{tlsdir}
    %ghost %attr(0600,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlskey}
    %ghost %attr(0644,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlscert}

The files containing the key material are marked as ghost configuration files. This ensures that they are tracked in the RPM database as associated with the package, but RPM will not create them when the package is installed, will not verify their contents (the %ghost part), and will not delete the files when the package is uninstalled (the %config(noreplace) part).

If the directory %{tlsdir} is owned by root, use the following code instead (creating a key pair in a root-owned directory):

    %post
    if [ $1 -eq 1 ] ; then
      if ! test -e %{tlskey} ; then
        (umask 077 && openssl genrsa -out %{tlskey} 2048 2>/dev/null)
        chown %{tlsuser} %{tlskey}
      fi
      if ! test -e %{tlscert} ; then
        cn="Automatically generated certificate for the %{tlsuser} service"
        openssl req -new -x509 -extensions usr_cert \
          -key %{tlskey} -out %{tlscert} -days 7305 -subj "/CN=$cn/"
      fi
    fi

    %files
    %dir %attr(0755,root,root) %{tlsdir}
    %ghost %attr(0600,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlskey}
    %ghost %attr(0644,root,root) %config(noreplace) %{tlscert}

In order for this to work, the package which generates the keys must require the openssl package. If the user which owns the key file is created by a different package, the package generating the certificate must specify Requires(pre): on the package which creates the user. This ensures that the user account will exist when it is needed for the su or chown invocation.
Generating X.509 Self-signed Certificates before Service Start

An alternative way to automatically provide an X.509 key pair is to create it just before the service is started for the first time. This ensures that installation images created from installed RPM packages receive different key material. Creating the key pair at package installation time (see Generating X.509 Self-signed Certificates during Installation) would put the key into the image, which may or may not make sense.

Generating key material before service start may happen very early during boot, when the kernel randomness pool has not yet been initialized. Currently, the only way to check for the initialization is to look for the kernel message "random: nonblocking pool is initialized", or to ensure that the application used for generating the keys uses the getrandom() system call. In theory, it is also possible to use an application which reads from /dev/random while generating the key material (instead of /dev/urandom), but this can block not just during the boot process but also much later at run time, and generally results in a poor user experience.

The requirements for generating such keys are documented at Packaging:Initial Service Setup.
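The %post scriptlets shown earlier rely on a simple idempotence guard: create the key only if the file does not already exist, under a restrictive umask. That guard can be exercised in isolation; the sketch below substitutes /dev/urandom for the openssl invocation (and a /tmp path for %{tlskey}) so it runs without the rest of the spec file:

```shell
#!/bin/sh
# Idempotent key creation, mirroring the %post logic: the key is only
# generated when absent, and umask 077 yields owner-only (0600) permissions.
KEYFILE=/tmp/demo-tls.key
rm -f "$KEYFILE"   # start clean for the demo

make_key() {
    if ! test -e "$KEYFILE"; then
        # stand-in for: openssl genrsa -out "$KEYFILE" 2048
        (umask 077 && head -c 32 /dev/urandom > "$KEYFILE")
    fi
}

make_key                       # first run creates the key
sum1=$(cksum < "$KEYFILE")
make_key                       # second run leaves it untouched
sum2=$(cksum < "$KEYFILE")

[ "$sum1" = "$sum2" ] && echo "key preserved across runs"
ls -ld "$KEYFILE" | cut -c1-10   # -rw-------
```

In the real scriptlet the same guard sits inside a check of $1 -eq 1, so key generation is only even considered on a fresh install, never on an upgrade.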
https://docs.fedoraproject.org/es_419/defensive-coding/tasks/Tasks-Packaging/
In GIMP, a layer is not always the same size as the others. This command changes the dimensions of a layer, but it does not scale its contents.

Figure 16.139. Example: the selected layer for resizing, and the frame representing the new layer size. The frame has been placed at the center of the layer using the Center button.

If the image has only one layer, it is better to use the Crop tool.
https://docs.gimp.org/2.10/nl/gimp-layer-resize.html
Creating a New Application

- Select the channel from the context selection box, then select Applications. This lists all applications deployed in this sales channel.
- Click New. This opens an empty detail view.
- Specify the required information for the new application. The tables below list the available fields.
- Click Apply to create the application. Otherwise, click Cancel to discard your settings.
https://docs.intershop.com/icm/latest/olh/icm/en/operation_maintenance/task_creating_new_application.html
Foreword

From "rsync Brief Description" we know that rsync is an incremental synchronization tool. Every time the rsync command is executed, data can be synchronized once, but it cannot be synchronized in real time. How to do that? With inotify-tools, one-way real-time synchronization can be achieved. Since this is real-time data synchronization, the prerequisite is logging in without password authentication. Both the rsync protocol and the SSH protocol can achieve password-free login.

SSH protocol password-free login

First, generate a public/private key pair on the client, pressing Enter at each prompt after typing the command. The key pair is saved in the /root/.ssh/ directory:

    [root@fedora ~]# ssh-keygen -t rsa -b 2048
    ...
    SHA256:TDA3tWeRhQIqzTORLaqy18nKnQOFNDhoAsNqRLo1TMg root@fedora
    The key's randomart image is:
    +---[RSA 2048]----+
    |O+. +o+o. .+.    |
    |BEo oo*....o.    |
    |*o+o..*.. ..o    |
    |.+..o. = o       |
    |o o S            |
    |. o              |
    | o +.            |
    |....=.           |
    | .o.o.           |
    +----[SHA256]-----+

Then, use the scp command to upload the public key file to the server. For example, upload this public key to the user testrsync:

    [root@fedora ~]# scp -P 22 /root/.ssh/id_rsa.pub [email protected]:/home/testrsync/
    [root@Rocky ~]# cat /home/testrsync/id_rsa.pub >> /home/testrsync/.ssh/authorized_keys

Try logging in with key-based authentication. Success!

    [root@fedora ~]# ssh -p 22 [email protected]
    Last login: Tue Nov 2 21:42:44 2021 from 192.168.100.5
    [testrsync@Rocky ~]$

Tip: PubkeyAuthentication yes must be enabled in the server configuration file /etc/ssh/sshd_config.

rsync protocol password-free login

On the client side, the rsync service reads an environment variable, RSYNC_PASSWORD, which is empty by default:

    [root@fedora ~]# echo "$RSYNC_PASSWORD"

    [root@fedora ~]#

To achieve password-free login, you only need to assign a value to this variable. The value assigned is the password previously set for the virtual user li. At the same time, export the variable so it is visible to rsync:

    [root@Rocky ~]# cat /etc/rsyncd_users.db
    li:13579
    [root@fedora ~]# export RSYNC_PASSWORD=13579

Try it. Success! No new files appear here, so the list of transferred files is not displayed:

    [root@fedora ~]# rsync -avz [email protected]::share /root/
    receiving incremental file list
    ./

    sent 30 bytes  received 193 bytes  148.67 bytes/sec
    total size is 883  speedup is 3.96

Tip: You can write this variable into /etc/profile to make it take effect permanently: export RSYNC_PASSWORD=13579

Author: tianci li. Contributors: Steven Spencer.
https://docs.rockylinux.org/pl/books/learning_rsync/05_rsync_authentication-free_login/
To accept file uploads, give your form the enctype=multipart/form-data attribute. Then add an <input type='file' name='My File'/> tag to your form. There are a number of options you can add to this tag to customize it. For example, if you only want to accept PNG and JPEG images, you can add accept="image/png, image/jpeg". You can learn more about all the available options for the file input tag here.
https://docs.sheetmonkey.io/guides/file-uploads
public interface RSocketRequester

A wrapper for an RSocket with a fluent API accepting and returning higher-level Objects for input and for output, along with methods to prepare routing and other metadata.

io.rsocket.RSocket rsocket()
Return the underlying RSocket.

MimeType dataMimeType()
Return the data MimeType selected for the underlying RSocket at connection time. On the client side this is configured via RSocketRequester.Builder.dataMimeType(MimeType), while on the server side it is obtained from the ConnectionSetupPayload.

MimeType metadataMimeType()
Return the metadata MimeType selected for the underlying RSocket at connection time. On the client side this is configured via RSocketRequester.Builder.metadataMimeType(MimeType), while on the server side it is obtained from the ConnectionSetupPayload.

RSocketRequester.RequestSpec route(String route, Object... routeVars)
The route can be a template with placeholders, e.g. "flight.{code}", in which case the supplied route variables are formatted via toString() and expanded into the template. If a formatted variable contains a ".", it is replaced with the escape sequence "%2E" to avoid treating it as a separator by the responder. If the connection is set to use composite metadata, the route is encoded as "message/x.rsocket.routing.v0". Otherwise the route is encoded according to the mime type for the connection.
Parameters: route - the route expressing a remote handler mapping; routeVars - variables to be expanded into the route template.

RSocketRequester.RequestSpec metadata(Object metadata, @Nullable MimeType mimeType)
Parameters: metadata - the metadata value to encode; mimeType - the mime type that describes the metadata. This is required for a connection using composite metadata. Otherwise the value is encoded according to the mime type for the connection and this argument may be left as null.

static RSocketRequester.Builder builder()
Obtain a builder for creating a client RSocketRequester by connecting to an RSocket server.

static RSocketRequester wrap(io.rsocket.RSocket rsocket, MimeType dataMimeType, MimeType metadataMimeType, RSocketStrategies strategies)
Wrap an existing RSocket. Typically used in client or server responders to wrap the RSocket for the remote side.
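The route-template rules described for route(String, Object...) (toString() formatting, plus escaping "." as "%2E" inside expanded variables) are concrete enough to sketch. This is not Spring's implementation, just an illustration of the documented behavior in Python:

```python
import re

def expand_route(template, *route_vars):
    """Expand "{placeholder}" segments of an RSocket route template.

    Each variable is converted with str() and any '.' inside it is
    escaped as '%2E' so the responder does not treat it as a separator.
    """
    vars_iter = iter(route_vars)

    def replace(match):
        return str(next(vars_iter)).replace(".", "%2E")

    return re.sub(r"\{[^{}]*\}", replace, template)

print(expand_route("flight.{code}", "LAX"))    # flight.LAX
print(expand_route("flight.{code}", "v1.2"))   # flight.v1%2E2
```

The second call shows why the escaping matters: without it, a variable containing "." would look like an extra route segment to the responder.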
https://docs.spring.io/spring-framework/docs/5.2.0.RELEASE/javadoc-api/org/springframework/messaging/rsocket/RSocketRequester.html
This section describes how to add Trend Micro Email Security as a new application and configure SSO settings on your Okta Admin Console. If you are in the Developer Console, click < > Developer Console in the upper-left corner and then click Classic UI to switch over to the Admin Console. The Create a New Application Integration screen appears.

The following step is required only if you want to configure a logoff URL on the Trend Micro Email Security administrator console. The logoff URL is used to log you off and also terminate the current identity provider logon session.

- Next to Enable Single Logout, select the Allow application to initiate Single Logout check box.
- Type <domain_name>/uiserver/subaccount/sloAssert?cmpID=<unique_identifier> in Single Logout URL.
- Type <domain_name>/uiserver/subaccount/ssoLogout in SP Issuer.
- Upload the logoff certificate in the Signature Certificate area. You need to download the logoff certificate from the Trend Micro Email Security administrator console in advance: go to Administration > Administrator Management > Logon Methods, click Add in the Single Sign-on section, and on the pop-up screen, locate the Identity Provider Configuration section, select Okta as the identity provider, and click Download Logoff Certificate to download the certificate file.

Keep the default values for other settings. When configuring the identity claim type for an SSO profile on Trend Micro Email Security, make sure you use the attribute name specified here. The Sign On tab of your newly created Trend Micro Email Security application appears.
https://docs.trendmicro.com/en-us/enterprise/trend-micro-email-security-online-help/configuring-administ/administrator-manage/logon-methods-for-ad/configuring-single-s_001/configuring-okta-for.aspx
Contents:
- Tutorial: instructions on how to get started with ProjectQ.
- Examples: example implementations of a few quantum algorithms.
- Code Documentation: the code documentation of ProjectQ.
https://projectq.readthedocs.io/en/v0.7.0/
๐Ÿ’พ Databaseยถ Overviewยถ What is a database? When to use NULL What RDBMS should you use? Avoid NULL (usually) camelCase or underscore_case? When to Index? Why โ€œIDโ€? Naming ID Columns Normalization Example: States Database Design: Three Areas of Optimization Enum: Just Say No What is a database?ยถ This may seem like a very simple question, but it can have different answers, depending on who you ask. In order to avoid confusion, it is important to be aware of how different people might perceive this question. If you were asked to provide a quote for creating a database, what kind of number would you provide? Because I can think of many different numbers. I could create a database for $5.00. I could do it for $100. Or $10,000. Or $1 million. In theory, you could launch a database management tool, create a new, empty database instance, and youโ€™re done! You created a database. That will be $5.00 please. That is usually not what people mean, though. Typically when I discuss โ€œdesigning a database,โ€ I am referring to the act of creating a database schema, which is primarily the tables and columns needed to store all data necessary for a specific project. But people who are not database designers, or may not have any kind of technical expertise at all, may have very different perceptions of what a โ€œdatabaseโ€ is. They may expect a database to have data in it. They may expect a database to have a comprehensive front-end. They may think of a โ€œdatabaseโ€ as being something like โ€œIMDb.com,โ€ which has a sophisticated front end, fancy graphics, and a list of all possible movies, TV shows, and casts. So before embarking on a database project, one needs to define what it is that will be included in the delivered โ€œdatabaseโ€? What are all the project specifications? What are all the available sample data files? Will the database use explicit foreign keys or just implicit linking names? Will data need to be loaded into the database? 
- Will there be human-friendly views for displaying database data?
- Will the system need automated data-loading tools?
- Will the system need tools for manually entering data?
- Will the system need data-editing tools?
- Will the system need a front end? What kind?

Anybody who is a project owner or project manager of an information system which includes a back-end database should be aware that the simple word "database" may signify different things to different people, including database designers. Likewise, database designers and programmers who create databases should be aware that when they refer to a "database," different people will have different perceptions of what this entails.

When to use NULL

I previously wrote that optimal database design will avoid using columns that allow NULL. A database schema will be faster, easier to maintain, and easier to program for if you mostly avoid allowing NULL values. But there ARE times when allowing a NULL value makes sense. Sometimes your column setting should be "allow NULL." The main reason for this is when you have a column whose value really does need to differentiate between null and a default value. For text string columns (varchar), this is rarely the case. But for many other datatypes, it can make a difference.

Let's consider numbers. If possible, I like to avoid using NULL for number columns as well. For example, if I have an inventory count column, I would prefer to use zero as a default and not allow null. This would mean that the system will display zero inventory for a product if the value is not known. Rather than allow null, the default for the column is set to zero. But what if the system demands a distinction between "zero inventory" and "we don't know the inventory"? Then there are two ways to handle this:
- We could set the inventory column to allow null
- Or we could use an additional boolean column to indicate that the inventory is unknown.
Both of these are valid options. But note that the second option actually requires an additional column, so it requires a more complex database schema and potentially more complex coding. The right way to handle this situation will depend on the needs of the project. But keep in mind that if you use zero to indicate BOTH meanings, "I don't know what the inventory is" and "I checked, and I verified that the inventory is zero," you are introducing ambiguity into the system unless you do something to differentiate these meanings.

Here is another example, this time of a numeric field where it IS a good idea to allow null. The latitude and longitude of a location can be stored as a pair of decimal values, for example:

    Latitude: 33.428606
    Longitude: -111.927360

But what if you want to store a location in the database before you know the latitude and longitude? You COULD disallow NULL for these columns and set their default to zero. The problem with that is: zero is a legitimate value for a latitude or longitude; it signifies the equator or the Prime Meridian. So if you set the default value to zero and disallow NULL, how do you know whether any given value in the database is unknown versus actually being at the zero position? You can't use a negative number as a placeholder, either (e.g. "-1" or "-99"), because negative values are legitimate values as well. (And that would be kind of deceptive anyway!) So I would set these columns to allow null, and I would record them as null in the database if the information was not provided, i.e. if the data was submitted without latitude and longitude info. There are other reasons to use NULL, but this is the main one.

What RDBMS should you use?

The abbreviation "RDBMS" is short for "relational database management system." Although we don't always think of it as such, an RDBMS is a software application, like Microsoft Word or Adobe Photoshop.
It needs to run on a computer somewhere. It is different in the sense that we typically do not launch it independently on our own computer. In a typical information system, an RDBMS runs on a remotely accessible server and allows many users to connect to it from various front-end applications, such as a mobile app, a web application, or a desktop application. An RDBMS may host multiple databases concurrently, and it may allow one, a few dozen, or many thousands of users to connect concurrently.

Which should you use? I don't know; it depends on your situation. Some RDBMS are free and open-source; some cost money to purchase or use. Some will work on your server of choice; some won't. Some will work with specific front-end software that you want to use; some won't. Many considerations go into choosing an RDBMS. But a key thing to understand is that all of them can utilize database designs which are optimized for these key criteria:
- data integrity
- speed
- maintainability

The same RDBMS can be hosted on a server which runs quickly or not so quickly. Some commercial hosts (such as AWS) intentionally throttle (limit) the speed of their databases for customers at lower-paying tiers. For most information systems of small to moderate size, the choice of a specific RDBMS is not going to be the key factor in the speed and efficiency of your system. More important factors are:
- where it is hosted
- how you use it

When a database-based information system has a query that runs slowly, a page that loads slowly, an update that takes a long time, a report that makes you wait, and so on, it is nearly always NOT because you are using MS SQL Server instead of Oracle, or MySQL instead of PostgreSQL. It isn't the choice of the RDBMS that is causing the problem.
The problem stems from one of these causes:
- the database design
- the SQL query design
- how the application source code was written
- the host server settings and configuration (such as memory, available hard drive space, etc.)

Avoid NULL (usually)

Columns in SQL database tables can have values; their value can also be set to NULL. Whether or not to allow an individual column to be set to NULL is an important decision in database design, and its importance is often overlooked. I often see inexperienced database designers get this wrong. Simply put: when optimizing a database for data integrity, speed, and maintainability, it is best to set all or most columns to NOT NULL. This may seem counter-intuitive, depending on when, where, and how a person first learned database design. NULL means that you don't know the value, right?

So let's take this table:

    Table: USERS (ID, first_name, last_name, middle_name)

Maybe we want to collect names of users. We block rows from being created without a first and last name, but we don't want to require users to enter a middle name or middle initial; in the front-end interface, we leave that field optional. So if a user doesn't provide a middle name, should we store that as NULL, because we don't know the value?

No. Using NULL is indeed a "semantically pure" decision. But we are not optimizing our database for semantic purity. We are optimizing it for data integrity, speed, and maintainability. Allowing the middle_name column to be NULL introduces an additional state or property to the column. This actually increases the memory requirements of the column; it may be a very slight amount, but with a large table, it is faster to deal with columns that do NOT allow null. Queries are also more complex if you allow NULL: they may need to use functions to check for null status instead of simply querying columns directly. WHERE clauses and joins are particularly tricky.
Many queries which use a column that allows NULL will fail if NULL is not handled properly. One might run a query that is supposed to return 100 rows of data, but it only returns 80 rows. Why? A programmer or user could spend a long time being baffled by the results until they realize that the query did not properly account for the NULL-able columns which were part of its WHERE clauses.

What are you going to do with that NULL-value middle name? You're going to display it as blank on the front end, or in the report. How is that result any different from simply storing the middle name as a blank (empty string)? It's not. The result is exactly the same. So by setting the column to NOT NULL, you reduce the complexity of the database table by eliminating one possible state, and also by eliminating one possible value which has no meaningful difference from another possible value. If "NULL" and "blank" mean the same thing, then they're redundant, and it is better to use only one of them.

There ARE legitimate reasons to set a column to "allow NULL." I do use NULL-able columns in my database designs. But I use them cautiously and sparingly. I try to avoid them whenever possible, but if there is a genuine need for them, I use them. When NULL-able columns should be used can be discussed another time.

camelCase or underscore_case?

The formatting of table names and column names within a database schema has been debated for decades. The same discussion occurs in relation to programming source code. The discussions (arguments) go round and round, and they will probably never end. I don't want to write anything lengthy about this, so let me point out a few general facts. This topic relates specifically to optimizing for maintainability; it doesn't make any difference when it comes to optimizing for data integrity or speed. The most important decision on this topic is to be consistent. Stick with one format throughout the database design.
Otherwise, people will type in the wrong column names and table names, and that will cause errors. Although some RDBMS are case-insensitive with regard to table names and column names, some are case-sensitive. Moreover, the programming languages you use to interact with a database may be case-sensitive. Don't ever be sloppy with regard to case. Always capitalize names consistently. Never use alternatively-capitalized variants to signify something different.

Before I discuss whether you should use camelCase or underscore_case for naming tables and columns, let me point out that there ARE other alternatives, which I find even less preferable than these two naming conventions:

- alllowercaseruntogether
- ALLCAPSRUNTOGETHER
- spaces between words
- dashes-between-words
- ALL_CAPS_WITH_UNDERSCORES

These alternative formats, which you may see in some databases, are all inferior. They are either difficult to read, or they are error-prone. Some of these formats might even seem impossible, such as naming tables and columns with spaces between words. In most RDBMS, something like this IS possible; it's just a very bad idea, because such a name only works by enclosing it so that the system is forced to recognize it as a string, such as by wrapping it in double quotes. That's just asking for trouble. Also, there may be places where it won't work at all, so you would end up with a table schema which is not very portable.

I do not use camelCase in table and column names. I use underscore_case, for two main reasons. First, I find that underscore_case is easier to read. Not everybody agrees, but I think most people do. The other main reason I use underscore_case: it is programmatically very easy to resolve into a human-readable word, phrase, or label. For example, if I want to automatically convert a table to a form that is presented on a user-interface screen to be seen by human users, I might want to convert the name of a table and its columns to titles and labels.
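A sketch of such a conversion, using SQLite's REPLACE() from Python (the column name is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Deriving a display label from an underscore_case name is a single
# REPLACE() call in pure SQL.
label = conn.execute(
    "SELECT REPLACE('original_product_name', '_', ' ')").fetchone()[0]
print(label)  # original product name
```

There is no comparably simple pure-SQL transformation for a camelCase name such as originalProductName.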
It is easy for any program (including a REPLACE function within SQL) to convert underscores to spaces, and thus make a human-friendly label. Like this:

original_product_name => original product name

But what if a column is named like this:

originalProductName

Not so easy, is it? A clever programmer using an application programming language can come up with an elegant function to handle this. But can it be done easily within a pure SQL query statement? Not really. And even the cleverest programmer is going to run into trouble working with something like this:

originalProductUPCCode

How would you easily convert that to something human-readable? It's not even very readable in camelCase. Compare it to this:

original_product_UPC_code

You can see how this is much easier to read as part of a schema, and it is much easier to utilize in automated name-conversion scripts and functions. My advice:

- Be consistent.
- Don't go to war over naming conventions if you're a guest in an already-established database.
- If you have a choice, use underscore_case.

When to Index?

Indexing is awesome. It really is like a magic bullet to turbo-charge the speed of your database system. But that doesn't mean you should index everything all the time. High-quality database design requires an awareness of what indexing means for your database, including when to use it and when NOT to use it. One of the most common "big mistakes" I have seen in database schemas created by programmers inexperienced in database design is that they don't add any indices to tables that really need them. They might be a programmer creating a new app or web application, and they create the database that their system needs. They add a few test rows, connect the database to the front end, and everything works fine. So they think they're done. And they are done, as long as the table is small. For a few rows or a few hundred rows, having an index is not going to matter. But then their system grows.
More people use it. Now they have 10,000+ rows in that table, or 100,000+ rows, and their whole system starts to get slower, and slower, and slower. Why? Because their system involves a search on the "name" column within that 100,000+ row table. That search is taking a long time, although it was previously very fast. Add an index to the "name" column and, like magic, the queries begin returning results instantaneously, just like when the table only had 100 rows in it.

Don't wait to add an index that the table will need when it grows. If a table might grow large enough that indexing will make a difference, add the index to the columns that need it right from the beginning. This will mean your design is really "finished," and you won't need to monitor the system for the point in time when it starts to slow down because the table is getting too big to operate quickly without additional indices.

So we know that adding an index to a table can speed it up tremendously. Does that mean we should simply index every column right from the start? No. Don't index every column. I only add an index to a column if it will be beneficial. I don't do it by default. There are many tables that may have a dozen or a few dozen columns, but which only need one or two of them indexed. Here is a very simple guide about when to add an index to a column:

- add an index to columns that are used as SEARCH criteria
- add an index to columns that are used in an "ORDER BY" clause
- add an index to columns that need to be referenced by FOREIGN KEY links
- add an index to columns that are used in any JOINs

Note that there are other index types aside from the standard index: unique constraints, primary key indices, and others. We will not discuss those index types today, except to point out that if a column has already been indexed using one of these other index types, it does NOT need to be indexed again. You don't need to index the same column twice.
A unique constraint index (for example) does double duty: it prevents the same value from being used twice in a column, AND it provides the speed optimization that a non-unique index provides.

Can't I just index everything and not think about it? No, you shouldn't do that. Although indexing is great when you can benefit from it, indexing isn't "free." Every index you add DOES increase overhead on the system. Indices take up file space. Indices require additional processing time when inserting rows. Indexing columns unnecessarily will create a net drain on your system rather than a boost. A truly interactive information system which connects many relational tables to each other (joins, foreign keys) and allows a user to conduct searches and to sort data using different criteria is likely to have MANY indexed columns. But it won't have EVERY column indexed.

Why "ID"?

In a previous entry I suggested that when designing a new database, the best name for an "ID column" ("primary identity column" or "primary key column") is simply: ID

Yet if you have seen a wide range of databases created by different people using different RDBMS, covering different periods of time, you may have seen many different naming conventions. Here are some examples:

ID
id
Id
state_ID
state_id
state_Id
STATEID
StateID
StateId
stateid
state_num
state_primary_key

Why is "ID" the best name to use? Let us put aside the interesting fact that some software and source-code libraries specifically look for this name and provide some benefits when using it. That is really a minor point, and it is NOT a "universal" thing. Not by a long shot. I like to use "ID" because it is very readable and instantly recognizable. It really stands out as something distinctive and different from the other column names.
Look at this example of column names for a single table:

ID
name
abbreviation
description
population
capital

I like the way that the capitalized form "ID" stands out. Also, if I reference this column name in a foreign key column (linking from another table), I like how it stands out from the table name: state_ID

I want to consistently use the same primary identity column name as part of foreign key column names. When I use "ID", I can pick that part of a column name out quickly and easily. One final question: why don't I use a naming convention like this for the ID columns:

STATES
state_ID
name
abbreviation

I have seen this in some database designs. In these designs, there are NO primary identity columns named "ID". These columns are all named something different. Don't do this. It introduces unnecessary complexity and duplication into your database design. Your goal should be to make the database design as simple and as non-duplicative as possible. If you embed a version of the table name in the table's primary identity column, then your database is NOT as simple. Instead of having ONE name for ID columns ("ID"), you end up having many. You would theoretically have as many different names for ID columns as you have tables. A database with 20 tables would have 20 different names for primary identity columns. 20 is NOT as simple as 1. And you have also introduced duplication. Because what you really have, if you reference the "full name" of these columns, is this:

states.state_ID
users.user_ID
products.product_ID

See how each column DUPLICATES information? If I have an ID column within the "states" table, I don't need to name the column "state_ID." I already know that the "ID" column within the states table is the ID column for states.
See how these column names avoid unnecessary duplication:

states.ID
users.ID
products.ID

So to summarize my advice on this topic:

- keep it simple
- make it easy to read
- avoid duplication
- use "ID"

Naming ID Columns

I want to discuss one of the most important aspects of database design: naming ID columns. What are "ID columns"? This is one simple and common way to refer to primary identity columns. For example, in a table like this:

STATES
ID    Name      Code
101   Alabama   AL

...the "ID column" is the first column: "ID". This is the column within the table that is intended to be unchanging, and it is intended to be the most reliable way to reference a row in the table. These columns are also known as the "primary key." There are some technical distinctions between the complete meanings of these terms, but that is a topic for another day.

Many techniques impact two or even three areas of database optimization (data integrity, speed, and maintainability). But naming ID columns is something that is really ONLY related to ONE of these areas of optimization: maintainability. This IS important, even though the database engine itself doesn't really care. Your database is not going to run faster because of the column names you use. The technical ability of your database to enforce data integrity will not change. But the ability of human users and developers to maintain the database will be significantly impacted. Your naming conventions can positively or negatively impact the human actions that have an effect on data integrity. So I suppose we could say that how you name ID columns impacts "one and a half" areas of optimization.

We have seen various databases which use different names:

ID
id
Id
state_ID
state_id
state_Id
STATEID
StateID
StateId
stateid
state_num
state_primary_key

How should you name your ID columns? The most important answer to this question is: CONSISTENTLY. Because if you are consistent, your database will be maintainable.
If you are not consistent, then mistakes will happen. Subtle bugs will creep in that are hard to track down. The ad hoc queries that users write won't work. BE CONSISTENT!

So many choices! Argh! Just tell me what to do! Okay, here is a short answer. If you are coming to an already-established database and you are being asked to use it, but not to recreate its naming conventions, then just BE CONSISTENT. Use whatever naming convention is already in place. If you are adding a new table to a database in which ID columns all look like this: "id", then that is what you should use. Simple. But if you are designing a database from scratch, you can use the convention you prefer. This may be a slightly "controversial" decision. The preferred format may vary depending on the specific RDBMS being used. I will tell you that in working with MySQL databases, the convention I see most often in "best practices" databases, and the convention I personally use when creating new databases, is to always use: ID

Two letters only. Capitalized. Always and consistently, when I create new databases, I use "ID." Some MySQL tools actually recognize this column name and provide you with a few extra benefits when you use "ID." For example, phpMyAdmin will automatically provide you with in-line editing functionality on query result screens in which the ID column is "ID", but doesn't provide this functionality when ID columns are spelled or capitalized differently. Maybe this will change in the future, but for now, it's a significant advantage. So when confronted with the question of how to name an ID column, my simple answer is:

- Use the naming convention already in place within your database
- Otherwise, use "ID"

(Tomorrow, I'll explain more reasons for using "ID".)

Normalization Example: States

I want to discuss one specific data concept: states. As with almost any specific data-concept example, this discussion serves a dual purpose.
It presents ideas about how a specific type of data can be handled, and it also serves as an example of root principles. So we will talk about how to handle states, which is a common, useful concept. But this example also illustrates principles for optimizing the database for data integrity (through normalization) and for speed.

Sometimes I see databases which store contact information, such as a user's address, and the table with contact information has a varchar (text string) column for "state." Often these are varchar columns that contain values such as "NY", "AZ", and "NSW." I never do this. I never store states as text strings. Even in a small, simple database, it is better to store states in a separate table, such as "states" or "locations", and then link an address to that table through an integer foreign key. So I may have a table like this:

ID  Name
1   Alabama
2   Alaska
3   Arizona

And in the table with contact information, we would store only the integer that references the states/locations table, like this: state_ID = 1 (this means "Alabama").

This will save all kinds of headaches. What we do NOT want is somebody else on the team coming to us and saying: "Why are there only 90 customers from New York in this report? I know there are 100." And then we look in the database and have to tell them: "There are 90 customers from 'New York', but there are also 5 customers from 'N.Y.', 4 customers from 'NY', and 1 customer from 'NewYork.'" If we store states as blind text strings in a varchar field, then we are asking for problems with data integrity. People don't always spell state names correctly. People don't always get the abbreviations right. So we want to constrain the possible values for states to a list that contains ONLY valid values. And it isn't that complicated to do so. There are only 50 states in the United States.
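A minimal sketch of this normalized design, using SQLite from Python (table contents and customer names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE states (ID INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE customers (
        ID INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        state_ID INTEGER NOT NULL REFERENCES states (ID)
    );
    INSERT INTO states (ID, name) VALUES (1, 'Alabama'), (2, 'Alaska'), (3, 'Arizona');
    INSERT INTO customers (name, state_ID) VALUES ('Ada', 1), ('Bob', 1), ('Cy', 3);
""")

# Each customer row stores only the integer; a join recovers the display name.
rows = conn.execute("""
    SELECT customers.name, states.name
    FROM customers JOIN states ON customers.state_ID = states.ID
    WHERE states.name = 'Alabama'
    ORDER BY customers.ID
""").fetchall()
print(rows)  # [('Ada', 'Alabama'), ('Bob', 'Alabama')]
```

Because every customer row can only hold an integer that exists in the states table, variants like "N.Y." versus "NY" simply cannot occur.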
Even if we add in all states and territories from Australia and the United States, and all provinces from Canada, and even the special military-service mailing codes, there are still fewer than 100 items in our "states" table. If our database encompasses other countries, then we can license a comprehensive location data source, build our own, or (most likely) either leave this blank or allow a general "province" field for other countries. In practice, most international addresses work fine with a city name, a postal code, and a country name, and don't need a "state" field.

Normalizing states (storing values as foreign-key-referencing integers such as 1, 4, and 40) instead of using varchars ("AL" or "Alabama") will also improve the speed of your database. If you have only a few hundred records, it won't matter. But if you have a large number of records, such as a hundred thousand or more, you will definitely notice a difference in how long it takes your database to search all records using a text string ("AL") versus an integer (1).

Database Design: Three Areas of Optimization

Real database design is more than just creating a database schema that "works." Real database design means creating a database schema that works well. It should work well now, and it should work well in the future, as a project continues to be used and (in all likelihood) changes as new features are added or new situations are encountered. Whenever I engage in the task of "database design", whether I'm designing a database from scratch or redesigning an existing database, I like to explain to project owners, project managers, and development teams that I'm focusing on three areas of optimization:

- data integrity
- speed
- maintainability

This helps everybody (including myself) stay focused on important goals and helps explain why I make the decisions that I make.
These are straightforward concepts that everybody, from non-technical people to highly experienced programmers, can understand. If you think about it, these same goals (or "areas of optimization") should seem very similar to key "best practices" used in non-database programming. In practice, it is rare that I find a database project which needs to be optimized for some other goal. Here are simple examples of these optimization concepts:

[optimization for: data integrity]
Question: Why do we need to add a foreign key constraint to this column?
Answer: This will help guarantee that valid data is always stored in this column which references a foreign key. Even if a programmer messes up integrity checking at the source-code level, or even if a back-end database user attempts to insert invalid data, this foreign key constraint will ensure that only valid data is input into the table.

[optimization for: speed]
Question: Why do we need to add an index to this column? The table already works without it.
Answer: This column is used by the front end as a key search field. The system will return results much faster if we index it.

[optimization for: maintainability]
Question: Why do we have to change these column names? The app is already written to use "location_ID" in most tables, and "location_Id" in a few tables.
Answer: We need to change the app's source code AND change the column names so that they all match. Things may work now. But as soon as anybody starts making changes, somebody will mis-type the column name, and case-sensitive source code will fail to work.

I find that nearly every decision in database design can be explained as something that is done in order to optimize the database for one of these three areas: data integrity, speed, and maintainability.
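The data-integrity example above can be demonstrated concretely with SQLite from Python (table names are illustrative; note that SQLite enforces foreign keys only when the pragma is switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE states (ID INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE addresses (
        ID INTEGER PRIMARY KEY,
        street TEXT NOT NULL,
        state_ID INTEGER NOT NULL REFERENCES states (ID)
    );
    INSERT INTO states (ID, name) VALUES (1, 'Alabama');
""")

# A valid reference is accepted...
conn.execute("INSERT INTO addresses (street, state_ID) VALUES ('1 Main St', 1)")

# ...but a reference to a nonexistent state is rejected by the database
# itself, no matter what the application code failed to check.
try:
    conn.execute("INSERT INTO addresses (street, state_ID) VALUES ('2 Elm St', 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

This is the point of the foreign key constraint: the guarantee lives in the database, not in any one application's source code.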
Rather than doing database design based on a large collection of "textbook rules" whose meanings and purposes are obscure, I like to use these key goals as a guiding philosophy.

Enum: Just Say No

There is a time and place for everything, so the saying goes. And I agree this is true for most database concepts and techniques. But not for enum. Don't use enum at all. It's a nasty habit. Many database designers might have blissfully worked their entire careers without running into enum. But then they help out with somebody else's database and see these little critters all over the place. So what is "enum"? It is an odd data construct masquerading as a "datatype" option within an SQL table. Instead of being a specific type of data (such as varchar, int, decimal, etc.), an enum is actually a LIST of items, such as a list of varchar values. If you're thinking, "A list? Isn't that what a table is for?" then my response is: exactly. A column which uses the "enum" datatype is essentially hiding a little table WITHIN a column. It may seem handy, but it is a complete violation of the SQL relational data model. Enum columns are NOT part of the ANSI SQL standard. They are NOT universally accepted among SQL databases. They are NOT handled the same way by different SQL databases. So when one uses an enum, one ends up with a table design which is not easily or consistently portable.

"But they're so convenient," some people might say. They can be used to store a discrete set of values, such as the options for a drop-down menu in a mobile app. An example might be a "status" field which contains only these possibilities: "pending", "in progress", and "complete." Create an enum field and you've got a varchar that only allows those values:

- pending
- in progress
- complete

But by doing that, you're basically hiding these values in a place that isn't where users of the database would look for them.
You can't simply type "select * from status_types". There are no standard SQL queries for obtaining the possible values of an enum-datatype column. And what if you want to ADD a NEW status type? Are you ABSOLUTELY certain your system will NEVER need a new status type? A few months into the project, it turns out you need a new status type: "under review." Can you just add that status to the enum set of values? It's not so simple. This change will actually require a change to the table schema. Which is fine if you've got a tiny database. But what if that column is used in a table with 5 million records? In many RDBMS, adding a single string ("under review") to that list will actually require the database software to create an entirely new table, with the new schema, and copy all of the records to it. You could wait an hour just to add "under review" to your list of possible values. It might seem far-fetched, but I've seen this. It's a pain. But if you had a separate table for "status_types," it would take a microsecond to add that new fourth value to a tiny 3-row table.

So EVEN if you don't care about standards, EVEN if you don't care that many standard database tools and techniques won't know what to make of enums, and EVEN if you don't think your database schema will ever migrate to another type of database, you could still run into major headaches if you use enum. There are better ways (standard ways) to handle discrete, defined lists. That's what relational databases are all about. Enum may seem convenient, but it's really a cheap shortcut that will come back to bite you in the end.
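A sketch of the standard alternative just described: a small lookup table plus a foreign key, using SQLite from Python (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE status_types (ID INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE);
    CREATE TABLE tasks (
        ID INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        status_ID INTEGER NOT NULL REFERENCES status_types (ID)
    );
    INSERT INTO status_types (name) VALUES ('pending'), ('in progress'), ('complete');
""")

# The allowed values are ordinary data, visible to any query...
names = [r[0] for r in conn.execute("SELECT name FROM status_types ORDER BY ID")]
print(names)  # ['pending', 'in progress', 'complete']

# ...and adding a fourth status is a one-row insert, not a schema change.
conn.execute("INSERT INTO status_types (name) VALUES ('under review')")
count = conn.execute("SELECT COUNT(*) FROM status_types").fetchone()[0]
print(count)  # 4
```

The "select * from status_types" query that enum makes impossible is exactly what this design gives you for free, and the foreign key still restricts tasks to the defined statuses.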
User guide

How to invite company users to the company Datahub account

First of all, you have to define which level of access you would like to provide. There are three levels of access:

- Company level - access to all company assets
- Application level - access to all assets connected to a single app
- Group level - access to all assets connected to a single user group (Instance)

Then, follow the next steps: Datahub -> Management -> Access management

How to create a community and add app users to the community

Our architecture allows you to use user groups (instances) as a community. To join a community or user group, obtain an invitation code for the desired group and use it either in Zenroad-based applications (our open-source app) or via the APIs.

Step 1 - Get an invitation code: Datahub -> Management -> Invitation code

Step 2 - Join a community:

Zenroad-based products: Zenroad -> Settings -> Join a company -> 'CODE'

'CODE' is a unique community/user group ID. All users who enter the 'CODE' will be automatically moved to the group. The changes will automatically appear in the Datahub.

API

How to enable Realtime data and point it to a 3rd-party TMS/fleet platform

Go to Datahub -> Management -> Realtime data, then enable the service for the selected group of users. Configure an IMEI for each of your users. Be aware that we generate a virtual IMEI for each user automatically, so you can use those in your TMS system, or you can set your own IMEI values. As the next step, configure your TMS/fleet management system server settings by providing an IP address and port.
Chapter 10: Automating Snapshots

Throughout this chapter you will need to be root or able to sudo to become root.

Automating the snapshot process makes things a whole lot easier.

Automating The Snapshot Copy Process

This process is performed on lxd-primary. The first thing we need to do is create a script in /usr/local/sbin called "refreshcontainers.sh" that will be run by cron:

sudo vi /usr/local/sbin/refreshcontainers.sh

The script is pretty simple:

#!/bin/bash
# This script runs an lxc copy --refresh against each container,
# copying and updating them to the snapshot server.

for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)
do
    echo "Refreshing $x"
    /var/lib/snapd/snap/bin/lxc copy --refresh "$x" lxd-snapshot:"$x"
done

Make it executable:

sudo chmod +x /usr/local/sbin/refreshcontainers.sh

Change the ownership of this script to your lxdadmin user and group:

sudo chown lxdadmin:lxdadmin /usr/local/sbin/refreshcontainers.sh

Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:

crontab -e

And your entry will look like this:

00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1

Save your changes and exit.

This will create a log in lxdadmin's home directory called "refreshlog", which will tell you whether your process worked or not.

Very important! The automated procedure will fail sometimes. This generally happens when a particular container fails to refresh. You can manually re-run the refresh with the following command (assuming rockylinux-test-9 here, as our container):

lxc copy --refresh rockylinux-test-9 lxd-snapshot:rockylinux-test-9

Author: Steven Spencer

Contributors: Ezequiel Bruni
# How to Change a Teamscale Project ID since 7.1 Teamscale supports a flexible project ID system that was introduced with its 7.1 release version. Setting and changing these IDs manually is an expert feature that is not necessary during regular operation. However, experienced project administrators may use it to deal with several complex use cases. # Use Case Example: Managing Project Copies Special circumstances may make it necessary to transparently create a copy of a Teamscale project on the same Teamscale instance as the original project, and to switch between these two projects in a way that is transparent to the end user. For example, sometimes a prolonged reanalysis of a project may not be feasible because of availability requirements. If a change to the analysis profile of such a project needs to be performed, instead a new project can be created that duplicates the settings of the original project (but with an adapted analysis profile). After analysis of the new project is completed, the switch from old to new project is performed, after which the user will access the new project rather than the old one. In this way, the necessary changes to the analysis profile can be made without any downtime. # Teamscale Project ID Paradigms Each Teamscale project is associated with a project ID. This is also referred to as the primary public project ID. In the Teamscale project configuration, it is displayed in the Advanced Settings as the Project ID option. This ID is the only one an end user will typically see, as it is the one that is used to define the URL where the project can be accessed. Additionally, this ID will be used in the UI to refer to this project whenever an ID needs to be displayed (for example, in the system logs view). In addition to the primary project ID, a project may also have alternative project IDs. These can be set in the field directly below the primary ID. 
A project may have any number of alternative IDs, although these are completely optional. The only difference between the primary ID and the alternative IDs is the UI behavior of Teamscale. End users will only see the primary ID of a project, but not the alternative IDs. In all other respects the IDs behave interchangeably. In particular, alternative IDs can be used in REST API calls in place of the primary project ID, and they may be used in URLs to link to the respective Teamscale project. Hence, alternative IDs can be used to preserve backwards compatibility of external systems that reference such URLs.

The primary and the alternative IDs of a project can be changed freely, as long as they don't conflict with any other project ID. In other words, project IDs are globally unique over the complete Teamscale instance. ID changes take effect immediately and do not require a reanalysis of the project.

# Implications of Project ID Changes

WARNING

Even though project IDs can be changed freely, it is not always advisable to do so. In particular, changing a project ID from original-id to new-id means that end users who have bookmarked the URL https://<teamscale-server>/findings.html#/original-id will no longer be able to access this endpoint until they have changed it to the new https://<teamscale-server>/findings.html#/new-id. In such cases, it may make sense to add original-id to the set of alternative project IDs when renaming it to new-id, allowing the user to keep the old bookmark. Indeed, preserving backwards compatibility for such URLs is the primary use case of the alternative project IDs.

Keep in mind that project ID references may crop up in unexpected places, such as the merge request comments Teamscale can perform in code collaboration platforms, or in automated CI pipeline scripts that upload external data to a specific Teamscale project. Hence, project ID changes should be kept to a minimum to avoid unintentional consequences.
Another aspect to keep in mind is the issue of project permissions. For example, if a specific user group is granted the view permission on a specific project, this will be done by project ID. If this project ID is changed, then the users will no longer be able to view the project. If the same project ID is entered as primary or alternative project ID of a second project, then the user group will be able to view the second project, but not the original one. This is the desired behavior for the example scenario mentioned above (migrating a user base transparently from one project to another), but it may not be appropriate for all use cases. Hence, permissions should be checked (and adapted as required) after changing any project ID.
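The interchangeable behavior of primary and alternative IDs can be sketched as a small resolver. This is a toy illustration of the behavior described above, not Teamscale's actual implementation, and the project and ID names (new-id, original-id) are hypothetical:

```python
# Toy model of project ID resolution: a request may arrive with either the
# primary ID or any alternative ID, and both must reach the same project.
projects = {
    "new-id": {"alternative_ids": {"original-id"}},  # hypothetical project
}

def resolve_project(requested_id):
    """Return the primary ID of the project that owns `requested_id`."""
    for primary, config in projects.items():
        if requested_id == primary or requested_id in config["alternative_ids"]:
            return primary
    raise KeyError(f"no project with ID {requested_id!r}")

# Old bookmarks keep working because original-id was kept as an alternative ID:
assert resolve_project("original-id") == resolve_project("new-id") == "new-id"
```

Because IDs must be globally unique, a real implementation would also reject any new primary or alternative ID that already resolves to a different project.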
https://docs.teamscale.com/reference/ui/project/changing-project-id/
2022-09-25T09:15:39
CC-MAIN-2022-40
1664030334515.14
[array(['/assets/img/project-advanced-settings.feec5f6e.png', 'Teamscale Project Configuration - Advanced Settings'], dtype=object) ]
docs.teamscale.com
Usage

Using the GUI portal to register hostnames and assign IP addresses

Open the link in your browser. Click on Log in to go to the login screen and choose eduTEAMS. After logging in via your eduTEAMS account, go to the Overview tab. Now you can manage your existing hostnames (if any) or register new ones. Click on Add Host to register a new hostname. Enter a hostname, choose a domain for your hostname, then click on Create. Your hostname has been created with a secret for updating its IP. For your convenience, the whole URL for updating the IP address (in form ) is also printed for copy and paste. Save the URL securely for later use. Go back to the Overview tab; your newly registered hostname is now listed. Click on the hostname to perform any action: update the IP address, show the configuration, or delete it.

Using the command line to update the IP address

For automation, it is useful to update the IP address via the command line or API, e.g., for assigning IPs in installation scripts. Simply sending a request to the URL, e.g., using the curl command, will update the IP address of the hostname to that of the actual host (where the command is executed):

$ curl
good 147.213.76.198

Support

For additional questions, requests and support, please contact [email protected].
https://edudns.docs.fedcloud.eu/quickstart.html
2022-09-25T07:46:47
CC-MAIN-2022-40
1664030334515.14
[array(['_images/edudns-home.png', 'Vault scheme'], dtype=object) array(['_images/edudns-login.png', 'Vault scheme'], dtype=object) array(['_images/edudns-overview.png', 'Vault scheme'], dtype=object) array(['_images/edudns-create-host.png', 'Vault scheme'], dtype=object) array(['_images/edudns-create-host-demo.png', 'Vault scheme'], dtype=object) array(['_images/edudns-url.png', 'Vault scheme'], dtype=object) array(['_images/edudns-edit-host.png', 'Vault scheme'], dtype=object)]
edudns.docs.fedcloud.eu
Testing

Testing is done with the tox automation tool, which runs a pytest-backed test suite in the tests/ directory. This FAQ contains some useful information about how to use tox on Windows.

Testing Requirements

In addition to the installation requirements for the package itself, running tests and building documentation requires additional packages specified by the tests and docs extras in setup.py, along with any other explicitly specified deps in tox.ini.

The full suite of tests also requires installation of the following software:

- Artleys Knitro version 10.3 or newer: testing optimization routines.
- MATLAB: comparing sparse grids with those created by the function nwspgr created by Florian Heiss and Viktor Winschel, which must be included in a directory on the MATLAB path.

If software is not installed, its associated tests will be skipped. Additionally, some tests that require support for extended precision will be skipped if, on the platform running the tests, numpy.longdouble has the same precision as numpy.float64. This tends to be the case on Windows.

Running Tests

Defined in tox.ini are environments that test the package under different Python versions, check types, enforce style guidelines, verify the integrity of the documentation, and release the package. First, tox should be installed on top of an Anaconda installation. The following command can be run in the top-level pyblp directory to run all testing environments:

tox

You can choose to run only one environment, such as the one that builds the documentation, with the -e flag:

tox -e docs

Test Organization

Fixtures, which are defined in tests.conftest, configure the testing environment and simulate problems according to a range of specifications. Most BLP-specific tests in tests.test_blp verify properties about results obtained by solving the simulated problems under various parameterizations.
Examples include:

- Reasonable formulations of problems should give rise to estimated parameters that are close to their true values.
- Cosmetic changes such as the number of processes should not change estimates.
- Post-estimation outputs should satisfy certain properties.
- Optimization routines should behave as expected.
- Derivatives computed with finite differences should approach analytic derivatives.

Tests of generic utilities in tests.test_formulation, tests.test_integration, tests.test_iteration, and tests.test_optimization verify that matrix formulation, integral approximation, fixed point iteration, and nonlinear optimization all work as expected. Examples include:

- Nonlinear formulas give rise to expected matrices and derivatives.
- Gauss-Hermite integrals are better approximated with quadrature based on Gauss-Hermite rules than with Monte Carlo integration.
- To solve a fixed point iteration problem for which it was developed, SQUAREM requires fewer fixed point evaluations than does simple iteration.
- All optimization routines manage to solve a well-known optimization problem under different parameterizations.
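The extended-precision skip described above can be illustrated with a small self-contained check. This is a sketch using the stdlib unittest module rather than pyblp's actual pytest/tox setup, and the test name is made up:

```python
import unittest
import numpy as np

# True on platforms (typically Windows) where numpy.longdouble adds no
# precision over numpy.float64; tests needing extended precision are
# skipped there.
SAME_PRECISION = np.finfo(np.longdouble).eps == np.finfo(np.float64).eps

class ExtendedPrecisionTest(unittest.TestCase):
    @unittest.skipIf(SAME_PRECISION, "longdouble == float64 on this platform")
    def test_longdouble_keeps_small_increment(self):
        # 1 + 2**-60 rounds to 1.0 in float64 but is representable in a
        # wider longdouble (e.g., x86 80-bit extended precision).
        one = np.longdouble(1)
        self.assertNotEqual(one + np.longdouble(2) ** -60, one)

# Run the single test programmatically so the sketch can be embedded anywhere.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExtendedPrecisionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

On a platform without extra longdouble precision, the test is reported as skipped rather than failed, which is the behavior the suite relies on.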
https://pyblp.readthedocs.io/en/latest/testing.html
2022-09-25T07:19:02
CC-MAIN-2022-40
1664030334515.14
[]
pyblp.readthedocs.io
Volunteer assignments

An "assignment" links a CiviCRM contact to a specific volunteering opportunity. After defining your opportunities, it's time to start assigning some volunteers to these opportunities!

Allowing volunteers to self-assign

Volunteers can use the sign-up form to assign themselves to specific opportunities.

Manually Assigning Volunteers

A user with the proper permissions (henceforth known as a "staff member") can sign anyone up to fill a volunteering opportunity.

- Go to Volunteers > Manage Volunteer Projects
- Find the project
- Choose Assign Volunteers

The Available Volunteers list

The left side shows a list of "Available Volunteers" which is populated by either of the following actions:

- A volunteer uses the sign-up form and selects "Any" as the shift (which is only possible if "Allow users to sign up without specifying a shift" is checked while defining opportunities)
- A staff member manually adds a contact to this list by clicking Add Volunteer... below it.

This Available Volunteers list will persist even after closing Assign Volunteers. Think of it as the people you have "on deck", waiting to be placed into a specific opportunity.

Making and editing assignments

Volunteers must be added to the Available Volunteers list before they can be assigned to any opportunities. After this list contains some contacts, make assignments using any of the following methods:

- Drag and drop volunteers from Available Volunteers to the red More Needed boxes below the opportunities.
- Click the triangle icon to the right of a volunteer and choose Move to or Copy to.

When an opportunity has reached the required number of volunteer assignments, CiviVolunteer won't allow any more.

Caution

When you assign a contact to an opportunity, CiviVolunteer does not check whether the contact is already assigned to a different opportunity overlapping in time. You will have to take this logic into account to avoid double-booking volunteers.
Removing assignments

To remove an assignment, use the arrow button and choose Move to or Delete.

Searching for volunteers based on skill level

If you have set up and collected custom data on volunteer skills and interests (using the "Volunteer Information" custom data set), you can quickly search for volunteers based on criteria within these fields as follows:

- Within Assign Volunteers, hover over the box for an assignment which is still in need of volunteers
- Notice a magnifying glass icon appear at the top right of this box
- Click the magnifying glass icon
- Search for, and select, volunteers

Confirmation emails

When a person fills out the sign-up form, CiviVolunteer sends them a confirmation email with the project managers BCC'd. (This email is not sent when using "Assign volunteers".)

Tip

To edit the text in the confirmation email:

- Go to Administer > CiviMail > Message Templates
- Select System Workflow Messages
- Find Volunteer - Registration (on-line) and click Edit.

How assignments are stored

Assignments are activities, and thus are viewable within the Activities tab for each contact. This also means that you can use the activities fields within the Advanced Search for contacts to filter based on volunteering assignments to some extent.

Do not add assignments by creating new activities

CiviCRM will let you add a new "Volunteer" activity to a contact through the Activities tab on the contact's record, but don't do this. You need to create new assignments using one of the methods described above to receive all the expected functionality within CiviVolunteer.

Viewing a roster of all assignments

To see a summary of all the volunteers signed up for opportunities within a given project, you can do any of the following:

- Use Assign Volunteers as a way to view the assignments
- Click on View Volunteer Roster (from the Manage Projects screen) to see a similar view
- Use a report to gain even more control over what data is displayed.
https://docs.civicrm.org/volunteer/en/latest/assignments/
2022-09-25T07:30:15
CC-MAIN-2022-40
1664030334515.14
[array(['../images/assign-volunteers.gif', 'Assign Volunteers screenshot'], dtype=object) ]
docs.civicrm.org
GET

This is a paginated endpoint that retrieves a list of your tasks. The tasks will be returned in descending order based on created_at time. All time filters expect an ISO 8601-formatted string, like '2021-04-25' or '2021-04-25T03:14:15-07:00'.

The pagination is based on the limit and next_token parameters, which determine the page size and the current page we are on. The value of next_token is a unique pagination token for each page (nerdy details if you were curious). Make the call again using the returned token to retrieve the next page.
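The token-based pagination described above can be sketched as a simple client-side loop. This is a hypothetical sketch: `fetch_page` is a stand-in for whatever function performs the authenticated HTTP request, and the `docs`/`next_token` field names are assumptions based on the description above:

```python
def list_all_tasks(fetch_page, limit=100):
    """Follow next_token cursors until the endpoint reports no further page.

    fetch_page(limit, next_token) must return a dict shaped like
    {"docs": [...], "next_token": <str or None>} (assumed field names).
    """
    tasks, token = [], None
    while True:
        page = fetch_page(limit=limit, next_token=token)
        tasks.extend(page["docs"])
        token = page.get("next_token")
        if not token:  # no token on the last page: stop
            return tasks

# Demo with a fake three-page endpoint standing in for the real API:
_pages = {
    None: {"docs": ["t1", "t2"], "next_token": "p2"},
    "p2": {"docs": ["t3", "t4"], "next_token": "p3"},
    "p3": {"docs": ["t5"], "next_token": None},
}

def fake_fetch(limit, next_token):
    return _pages[next_token]

all_tasks = list_all_tasks(fake_fetch)
```

Passing the previous page's token back unchanged, rather than computing page offsets, is what makes cursor pagination robust when tasks are created or deleted between calls.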
https://docs.scale.com/reference/list-multiple-tasks
2022-09-25T08:36:27
CC-MAIN-2022-40
1664030334515.14
[]
docs.scale.com
v2 API Python Code Example
https://docs.sendgrid.com/for-developers/sending-email/v2-python-code-example
2022-09-25T08:02:40
CC-MAIN-2022-40
1664030334515.14
[]
docs.sendgrid.com
Setting a Custom BT Address - Production Approach

Introduction

The BLE stack usually derives the BT address (sometimes referred to as MAC address) of a device from its Unique Identifier, a 64-bit value located in the Device Information Page; this value is factory programmed and not modifiable. Since the BT address is only 48 bits long, part of the Unique Identifier is removed while deriving the address. See your device's reference manual for more details on the Device Information page.

To extract the Unique Identifier from a radio board (e.g., BRD4182A - EFR32MG22), you can issue the following command using Simplicity Commander in a Windows command prompt.

$ commander device info

You should get an output similar to this; notice the Unique ID value:

Part Number    : EFR32MG22C224F512IM40
Die Revision   : A2
Production Ver : 2
Flash Size     : 512 kB
SRAM Size      : 32 kB
Unique ID      : 680ae2fffe2886c5
DONE

You can see the derived BT address after flashing the Bluetooth - SoC Empty example to the same device and scanning nearby advertisers using the EFR Connect mobile application. Figure 1 below shows the expected output:

Figure 1. BT address in EFR Connect application

Notice how the address is the same as the Unique ID except for the middle 16 bits (0xFFFE), which is why the BT address is a derived value. As mentioned before, this is the usual way for the BLE stack to acquire the BT address. Nonetheless, if there's a valid BT address entry in the non-volatile region of the device (NVM3 for Series 2 devices and PS Store or NVM3 for Series 1), this value is used instead. For more details regarding non-volatile regions in the BLE stack, see the following documentation:

- UG434: Bluetooth C application developer's guide - Chapter 7.1
- AN1135: Using Third Generation Non-Volatile Memory - Chapter 6

This document explains two methods to modify the non-volatile region to set a custom BT address.
Both could be suitable options in a production context:

- Custom Application method: The running application reads a token located in the User Data page, derives the BT address from it, and stores it in the non-volatile region. This token is easily programmable through Simplicity Commander.
- Simplicity Commander method: Create a custom non-volatile region (NVM3) containing the desired BT address with Simplicity Commander, then flash the output binary to the device. This option is viable only if you're using NVM3 as the persistent storage solution.

Note: The solutions below were tested on a BRD4182A (EFR32MG22) using the Bluetooth - SoC Empty example as a baseline.

Methods

Custom Application

This approach uses a custom function that leverages the sl_bt_system_set_identity_address and sl_bt_system_get_identity_address APIs. The steps are as follows:

First, create a new Bluetooth - SoC Empty example for your board. Open the app.c file of the project and copy the following code snippet. This is the custom function responsible for updating the BT address.
#define MFG_CUSTOM_EUI_64_OFFSET 0x0002

static void sli_set_custom_bt_address(void)
{
  uint8_t *mfg_token = (uint8_t*)USERDATA_BASE + MFG_CUSTOM_EUI_64_OFFSET;
  bd_addr myaddr, cur_addr;
  uint8_t address_type;
  sl_status_t status;

  // Adjust token byte endianness
  for (uint8_t i = 0; i < 6; i++) {
    myaddr.addr[i] = mfg_token[7 - i];
  }

  // Get current BT address:
  // Current address is derived from EUI64 in DEVINFO unless there's a valid
  // NVM3 entry
  status = sl_bt_system_get_identity_address(&cur_addr, &address_type);
  if (status != SL_STATUS_OK) {
    while (1); // Issue retrieving the status
  }

  // Compare current and desired BT address; IF NOT EQUAL, update and reset
  // (reset needed to apply BT address changes in stack)
  if ((memcmp(&cur_addr, &myaddr, 6)) != 0) {
    status = sl_bt_system_set_identity_address(myaddr, 0); // set new BT address
    if (status != SL_STATUS_OK) {
      while (1); // Issue setting the address
    }
    sl_bt_system_reset(0); // reset
  }
}

Then, still in the app.c file, call the function inside the system boot event sl_bt_evt_system_boot_id as follows:

switch (SL_BT_MSG_ID(evt->header)) {
  // -------------------------------
  // This event indicates the device has started and the radio is ready.
  // Do not call any stack command before receiving this boot event!
  case sl_bt_evt_system_boot_id:
    // Set custom BT address
    sli_set_custom_bt_address();

The following is a brief explanation of the function's operation:

- Retrieve the MFG_CUSTOM_EUI_64 token from the User Data page.
- Adjust the byte endianness of the token to form the new BT address.
- Retrieve the current BT address.
- Update the BT address if the new BT address is different from the current one.
- Reset the system. This step is needed because the BT address is determined during BLE stack initialization.

The BT address is ultimately derived from the MFG_CUSTOM_EUI_64 token in the User Data page in this example. Therefore, you should flash the token before executing the application.
To do this, use the following Simplicity Commander command:

$ commander flash --tokengroup znet --token "TOKEN_MFG_CUSTOM_EUI_64:0000AABBCCDDEE11"

Note that the two initial bytes are 0x0000. The reason is that the token is 8 bytes long, but we only need 6 of them. You should get an output similar to this:

Writing 1024 bytes starting at address 0x0fe00000
Comparing range 0x0FE00000 - 0x0FE003FF (1024 Bytes)
DONE

You can verify that the token was properly flashed using the following command:

$ commander tokendump --tokengroup znet --token TOKEN_MFG_CUSTOM_EUI_64

You should get an output similar to this:

#
# The token data can be in one of three main forms: byte-array, integer, or string.
# Byte-arrays are a series of hexadecimal numbers of the required length.
# Integers are big-endian hexadecimal numbers.
# String data is a quoted set of ASCII characters.
#
MFG_CUSTOM_EUI_64: 0000AABBCCDDEE11

Finally, compile the modified Bluetooth - SoC Empty project and flash the application to your device. You'll also need a bootloader. Open the EFR Connect application on your mobile phone and click on the Browser option to see a list of nearby advertisers. Figure 2 below shows the output of the application. It displays the new BT address after flashing the token and the custom Bluetooth - SoC Empty application.

Figure 2. Modified BT address (Custom application method)

Notice that the Device Name is Custom MAC. By default it should be Empty Example. You can modify this through the GATT configurator. For further details, see the Getting Started with Bluetooth in Simplicity Studio v5 lab manual.

Simplicity Commander (Custom NVM3)

This method is a more advanced approach, as it requires an understanding of NVM3 and the available keys. In this case, the application is not customized; instead, Simplicity Commander is used to create a blank NVM3 region in an image file. This image is modified to add the desired BT address and flashed to the device.
Note that this method will overwrite your whole non-volatile memory area; only use it in production or if you are sure that your NVM is empty. The steps are as follows:

First, generate a blank NVM3 image file using the following command:

$ commander nvm3 initfile --address 0x00074000 --size 0xA000 --device EFR32MG22 --outfile nvm3_custom_mac.s37

To determine the size of the NVM3, use the configuration of the "NVM3 Default Instance" software component as a reference. The Silicon Labs example projects set 5 flash pages by default, as seen in Figure 3 below. The page size depends on the device; for the EFR32MG22, each flash page is 8 kB. See your device's reference manual for details. To determine the starting address, subtract six flash pages from the end address of the main flash (5 pages for the NVM3 and 1 page for manufacturing tokens). See UG434 - Chapter 7.1 for flash distribution details in the BLE stack.

The output of the command is a blank NVM3 .s37 binary that can be customized and flashed to the device, useful for creating a default set of NVM3 data that can be written during production.

Figure 3. Default NVM3 page count

Next, add the desired BT address to the blank NVM3 image through a valid "key/value" pair. The "key" is specific to the BT address and defined in the BLE stack as 0x4002C. The "value" is the desired 48-bit address. You can add the entry to the blank NVM3 using two methods; the difference is the way the "key/value" pair is provided.

As an argument:

$ commander nvm3 set nvm3_custom_mac.s37 --object 0x4002c:1122334455AA --outfile nvm3_custom_mac.s37

Using a .txt file:

$ commander nvm3 set nvm3_custom_mac.s37 --nvm3file custom_eui48.txt --outfile nvm3_custom_mac.s37

The .txt file has a specific format. The block below shows the contents of the .txt file used for this example. For more information, see UG162 - Section 6.15.4 Write NVM3 Data Using a Text File.
0x4002c : OBJ : 1122334455AA

Optionally, you can verify that the entry was created in the custom NVM3 file through the following command:

$ commander nvm3 parse nvm3_custom_mac.s37

You should get an output similar to this:

Parsing file nvm3_custom_mac.s37...
Found NVM3 range: 0x00074000 - 0x0007E000
Using 4096 B as maximum object size, based on given size of NVM3 area.
All NVM3 objects:
KEY - TYPE - SIZE - DATA
0x4002c - Data - 6 B - 11 22 33 44 55 AA

Finally, flash the custom NVM3 image to the device along with the desired application (in this case, the Bluetooth - SoC Empty example) and a bootloader. Figure 4 below shows the output of the EFR Connect application with the new BT address.

Figure 4. Modified BT address (Simplicity Commander method)

Example

This guide has a related code example here: Setting a custom BT address.
https://docs.silabs.com/bluetooth/3.3/general/system-and-performance/setting-a-custom-bt-address--production-approach
2022-09-25T08:53:54
CC-MAIN-2022-40
1664030334515.14
[]
docs.silabs.com
Install Docker

Docker is available on multiple platforms. Check the following link to choose the best installation path for you. If the above does not fulfil your requirements, follow the link here.

Create & Execute the Dockerfile

Docker automatically builds images by reading the instructions you provide in the Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. The Dockerfile must start with the FROM instruction. Below is an example of a Dockerfile:

# base image
FROM ubuntu:latest

# clean and update sources
RUN apt-get clean && apt-get update

# install basic apps
RUN apt-get install -qy nano

# install Python and modules
RUN apt-get install -qy python3
RUN apt-get install -qy python3-psycopg2

Follow this link to know more about creating a Dockerfile.

To log in to your account:

$ docker login

Executing the Dockerfile commands

To directly execute the file command use:

$ docker build .

(With this command a <none> repo is created. To avoid this, use the next command.)

Sending build context to Docker daemon 5.51 MB
...

You can specify the name of the repository with:

$ docker build -t your_name .

View all the Images

To view all the top-level images run:

$ docker images

The output:

REPOSITORY TAG IMAGE ID CREATED SIZE
-----------------------------------------------------------
your_name latest d6e415a70abf 8 seconds ago 210MB
ubuntu latest 747cb2d60bbe 2 weeks ago 122MB

The above output states that the repository ubuntu is the base image, because of the FROM ubuntu:latest command in the Dockerfile. The your_name image is the combination of all the packages mentioned in the Dockerfile.

To delete an image:

$ docker rmi image_id

Containers

Docker containers ensure that the software will behave the same way regardless of where it is deployed, because its runtime environment is ruthlessly consistent.
To create a container from your image use:

$ docker run -ti image_name

Where:

- -t : allocates a terminal
- -i : allows us to interact with the terminal

Output:

root@container_id:/#

To exit from the container:

root@container_id:/# exit

To check all containers:

To list all the containers:

$ docker ps -a

To check the running containers:

$ docker ps

To delete a container:

$ docker rm container_id

Image to Docker Cloud

Search command to find a suitable image:

$ docker search image_name

Below is a screenshot for: docker search ubuntu

To pull an image:

$ docker pull username/repo_name:tag_Name

To commit an image:

$ docker tag IMAGE_ID username/repo_name:tag_Name

To push an image:

$ docker push username/repo_name:tag_Name
https://kontikilabs.readthedocs.io/en/latest/docker.html
2018-12-09T23:40:05
CC-MAIN-2018-51
1544376823228.36
[]
kontikilabs.readthedocs.io
Integrate data into Common Data Service for Apps

The Data Integrator (for Admins) is a point-to-point integration service used to integrate data into Common Data Service for Apps. It supports integrating data from multiple sources (for example, Dynamics 365 for Finance and Operations, Dynamics 365 for Sales, Salesforce (Preview), and SQL (Preview)) into Common Data Service for Apps. It also supports integrating data into Dynamics 365 for Finance and Operations and Dynamics 365 for Sales.

This service has been generally available since July 2017. We started with first-party apps, for example Dynamics 365 for Finance and Operations and Dynamics 365 for Sales. With the help of Power Query or M-based connectors, we are now able to support additional sources like Salesforce (Preview) and SQL (Preview) and will extend this to 20+ sources in the near future.

How can you use the Data Integrator for your business?

The Data Integrator (for Admins) also supports process-based integration scenarios like Prospect to Cash that provide direct synchronization between Dynamics 365 for Finance and Operations and Dynamics 365 for Sales. The Prospect to Cash templates that are available with the data integration feature enable the flow of data for accounts, contacts, products, sales quotations, sales orders, and sales invoices between Finance and Operations and Sales. While data is flowing between Finance and Operations and Sales, you can perform sales and marketing activities in Sales, and you can handle order fulfillment by using inventory management in Finance and Operations.

The Prospect to Cash integration enables sellers to handle and monitor their sales processes with the strengths of Dynamics 365 for Sales, while all aspects of fulfillment and invoicing happen using the rich functionality in Finance and Operations. With Microsoft Dynamics 365 Prospect to Cash integration, you get the combined power of both systems.
See the video: Prospect to cash integration

For more information about the Prospect to Cash integration, see the documentation on the Prospect to Cash solution. We also support Field Service integration and PSA (Project Service Automation) integration to Dynamics 365 for Finance and Operations.

Data Integrator Platform

The Data Integrator (for Admins) consists of the Data Integration platform, out-of-the-box templates provided by our application teams (for example, Dynamics 365 for Finance and Operations and Dynamics 365 for Sales), and custom templates created by our customers and partners. We have built an application-agnostic platform that can scale across various sources. At the very core of it, you create connections (to integration end points), choose one of the customizable templates with predefined mappings (that you can further customize), and create and execute the data integration project.

Integration templates serve as a blueprint with predefined entities and field mappings to enable the flow of data from source to destination. They also provide the ability to transform the data before importing it. Many times, the schema between the source and destination apps can be very different, and a template with predefined entities and field mappings serves as a great starting point for an integration project.

How to set up a data integration project

There are three primary steps:

- Create a connection (provide credentials to data sources).
- Create a connection set (identify environments for the connections you created in the previous step).
- Create a data integration project using a template (create or use predefined mappings for one or more entities).

Once you create an integration project, you get the option to run the project manually and also set up a schedule-based refresh for the future. The rest of this article expands on these three steps.
How to create a connection

Before you can create a data integration project, you must provision a connection for each system that you intend to work with in the Microsoft PowerApps portal. Think of these connections as your points of integration.

To create a connection:

- Under Data, select Connections and then select New connection.
- You can either select a connection from the list of connections or search for your connection.
- Once you select your connection, select Create. Then you will be prompted for credentials.
- After you provide your credentials, the connection will be listed under your connections.

Note: Please make sure that the account you specify for each connection has access to entities for the corresponding applications. Additionally, the account for each connection can be in a different tenant.

How to create a connection set

Connection sets are a collection of two connections, environments for the connections, organization mapping information, and integration keys that can be reused among projects. You can start using a connection set for development and then switch to a different one for production. One key piece of information that is stored with a connection set is organization unit mappings, for example, mappings between the Finance and Operations legal entity (or company) and the Dynamics 365 for Sales organization or business units. You can store multiple organization mappings in a connection set.

To create a connection set:

- Go to PowerApps Admin center.
- Select the Data Integration tab in the left-hand navigation pane.
- Select the Connection Sets tab and select New connection set.
- Provide a name for your connection set.
- Choose the connections you created earlier and select the appropriate environment.
- Repeat the steps by choosing your next connection (think of these as source and destination in no specific order).
- Specify the organization to business unit mapping (if you are integrating between Finance and Operations and Sales systems).
Note: You can specify multiple mappings for each connection set. Once you have completed all the fields, select Create. You will see the new connection set you just created under the Connection sets list page. Your connection set is ready to be used across various integration projects.

How to create a data integration project

Projects enable the flow of data between systems. A project contains mappings for one or more entities. Mappings indicate which fields map to which other fields.

To create a data integration project:

- Go to PowerApps Admin center.
- Select the Data Integration tab in the left navigation pane.
- While in the Projects tab, select New Project in the top right corner.
- Provide a name for your integration project.
- Select one of the available templates (or create your own template). In this case, we are moving the Products entity from Finance and Operations to Sales.
- Select Next and choose a connection set you created earlier (or create a new connection set).
- Make sure you have chosen the right one by confirming the connection and environment names.
- Select Next and then choose the legal entity to business unit mappings.
- Review and accept the privacy notice and consent on the next screen.
- Proceed to create the project and then run the project, which in turn executes the project.

On this screen, you will notice several tabs (Scheduling and Execution history) along with some buttons (Add task, Refresh entities, and Advanced Query) that will be described later in this article.

Execution history

Execution history shows the history of all project executions with the project name, a timestamp of when the project was executed, and the status of execution along with the number of upserts and/or errors. An example of a successful execution shows the status as completed with the number of upserts. (Upsert, short for update/insert, is logic that either updates the record if it already exists, or inserts a new record.)
For execution failures, you can drill down to see the root cause. Here is an example of a failure with project validation errors; in this case, the validation error is due to missing source fields in the entity mappings.

- If the project execution is in the ERROR state, it will retry execution at the next scheduled run.
- If the project execution is in the WARNING state, you will need to fix the issues on the source; it will then retry execution at the next scheduled run.

In either case, you can also choose to manually re-run the execution.

How to set up a schedule-based refresh

We support two types of executions/writes today:

- Manual writes (execute and refresh the project manually)
- Schedule-based writes (auto-refresh)

After you create an integration project, you get the option to run it manually or configure schedule-based writes, which let you set up automatic refresh for your projects.

To set up schedule-based writes:

- Go to the PowerApps Admin center.
- You can schedule projects in two different ways: either select the project and select the Scheduling tab, or launch the scheduler from the project list page by clicking the ellipsis next to the project name.
- Select Recur every and, once you have completed all the fields, select Save schedule.

You can set a frequency as often as 1 minute, or have it recur every so many hours, days, weeks, or months. Note that the next refresh won't start until the previous project task completes its run. Also note that under Notifications, you can opt in to email-based alert notifications, which alert you to job executions that either completed with warnings or failed due to errors. You can provide multiple recipients, including groups, separated by commas.

Note:

- Currently, we support scheduling 50 integration projects at any given time per paid tenant. However, you can create more projects and run them interactively.
- For trial tenants, there is an additional limitation: a scheduled project will only run for its first 50 executions.
- While we support scheduling projects to run every minute, bear in mind that this may put a lot of stress on your apps and, in turn, impact overall performance. We strongly encourage you to test project executions under true load conditions and to optimize for performance with less frequent refreshes. In production environments, we do not recommend running more than 5 projects per minute per tenant.

Customizing projects, templates, and mappings

You use a template to create a data integration project. A template commoditizes the movement of data, which helps a business user or administrator expedite integrating data from source to destination and reduces the overall burden and cost. A business user or administrator can start with an out-of-the-box template published by Microsoft or a partner and then customize it further before creating a project. You can then save the project as a template, share it with your organization, and/or create a new project from it. A template defines the source, the destination, and the direction of data flow; keep this in mind when customizing a template or creating your own.

You can customize projects and templates in these ways:

- Customize field mappings.
- Customize a template by adding an entity of your choice.

How to customize field mappings

To customize field mappings:

- Go to the PowerApps Admin center.
- Select the project for which you want to customize field mappings, and then select the arrow between the source and destination fields. This takes you to the mapping screen, where you can add a new mapping by selecting Add mapping at the top right corner, or customize existing mappings from the dropdown list.
- Once you have customized your field mappings, select Save.

How to customize or create your own template

To create your own template:

- Go to the PowerApps Admin center.
- Identify the source, the destination, and the direction of flow for your new template.
- Create a project by choosing an existing template that matches your choice of source, destination, and direction of flow. Create the project after choosing the appropriate connection.
- Before you save and/or run the project, select Add task in the top right corner. This launches the Add task dialog.
- Provide a meaningful task name and add the source and destination entities of your choice. The dropdown list shows all of your source and destination entities. In this example, a new task was created to sync the User entity from Salesforce to the Users entity in Common Data Service for Apps.
- Once you create the task, you will see your new task listed, and you can delete the original task.
- You have just created a new template, in this case a template to pull User entity data from Salesforce to Common Data Service. Select Save to save your customization.
- Follow the steps above to customize the field mappings for this new template.
- You can run this project and/or save it as a template from the Project list page. Provide a name and description, and/or share it with others in your organization.

Advanced data transformation and filtering

With Power Query support, we now provide advanced filtering and data transformation of source data. Power Query enables users to reshape data to fit their needs, with an easy-to-use, engaging, no-code user experience. You can enable this on a project-by-project basis.

How to enable advanced query and filtering

To set up advanced filtering and data transformation:

- Go to the PowerApps Admin center.
- Select the project where you want to enable advanced query, and then select Advanced Query.
- You will get a warning that enabling advanced query is a one-way operation and cannot be undone. Select OK to proceed, and then select the source-destination mapping arrow.
- You are now presented with the familiar entity mapping page, with a link to launch Advanced Query and Filtering.
- Select the link to launch the Advanced Query and Filtering user interface, which presents your source field data in Microsoft Excel-style columns.
- From the top menu, you get several options for transforming data, such as Add conditional column, Duplicate column, and Extract. You can also right-click any column for more options, such as Remove columns, Remove duplicates, and Split column. You can also filter by clicking each column and using Excel-style filters.
- Default-value transforms can be achieved using the conditional column. To do this, from the Add Column dropdown list, select Add Conditional Column and enter the name of the new column. Fill in both Then and Otherwise with what should be the default value, using any field and value for If and equal to.
- Notice the each clause in the fx editor at the top. Fix the each clause in the fx editor if needed, and select OK.
- Each time you make a change, you apply a step. You can see the applied steps in the right-hand pane (scroll to the bottom to see the latest step). You can undo a step in case you need to edit. Additionally, you can open the Advanced editor by right-clicking QrySourceData in the left pane at the top, to view the M language that is executed behind the scenes, with the same steps.
- Select OK to close the Advanced Query and Filtering interface and then, on the mapping task page, pick the newly created column as the source to create the mapping accordingly.

For more information on Power Query, see the Power Query documentation.
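The Add Conditional Column default-value transform described above behaves like a simple per-row conditional. Here is an illustrative Python sketch of that behavior (the column and field names are invented; Power Query expresses the same thing as an each if ... then ... else ... clause in M):

```python
def add_conditional_column(rows, new_col, source_col, match_value, then_value, default_value):
    """Add new_col to every row: if the row's source_col equals match_value,
    use then_value; otherwise fall back to default_value (the 'Otherwise' box)."""
    for row in rows:
        row[new_col] = then_value if row.get(source_col) == match_value else default_value
    return rows
```

Rows that do not match the If condition all receive the default value, which is exactly how the conditional column supplies defaults for missing or unmapped data.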
https://docs.microsoft.com/en-us/powerapps/administrator/data-integrator
Release Notes

- App Version: v1.5.40
- Release Date(s): 07/09/18
- Environment: Android & iOS

Description

- Changed - Animal names are now introduced on the activation screen
- Changed - Channel alert notifications now directly open the in-app Relay screen if the app was left on another screen
- Changed - The keyboard now remains open when selecting different fields in the app
- Changed - Location is now displayed as latitude/longitude for more precise locations in poorly mapped areas
- Changed - When a network connection isn't available, the "Retry" button now refreshes the app and looks for a connection
- New - The Relay app now receives notification alerts when a Relay's battery hits 25%, 10%, and 5% (Note: Relays must be updated to ROM 105 or later). Clicking battery alerts within the app shows additional information for the specific Relay
- New - The Relay app now allows the user to update a Relay immediately instead of waiting for the nighttime update (Note: Relays must be updated to ROM 105 or later)
- Fixed - Audio from other applications is now paused when switching to the Relay app
- Fixed - Channel alert / battery level notifications are properly cleared when logging out of the Relay app
- Fixed - Loading icons now show up when enabling location history, and are no longer persistent on the in-app Relay screen
- Fixed - Tapping the back button on the "Forgot Password" screen no longer closes the app
https://docs.relaygo.com/releases/apps-ios-and-android/relay-android-and-ios-app-update-v1540
Initialization is the process of making setting values "sane". To be considered sane, a setting value must have a correct checksum, and its members must conform to the constraints of the P1 and P2 parameters. Non-volatile settings are never initialized automatically; it is up to your application to initialize them when necessary after calling stg_start().

The following example restores all settings to their default values when a button is held down for more than four seconds before being released:

sub on_button_released()
   if button.time > 4 then
      'restore all settings to their default values
      if stg_restore_multiple(EN_STG_INIT_MODE_NORMAL) <> EN_STG_STATUS_OK then sys.halt
   end if
end sub

The next example verifies all settings at startup. (Here the damaged settings are simply restored to their defaults; your application may handle this case differently.)

sub on_sys_init()
   dim x as en_stg_status_codes
   dim stg_name as string(STG_MAX_SETTING_NAME_LEN)

   'start the settings library; halt on failure
   if stg_start() <> EN_STG_STATUS_OK then sys.halt

   'verify all settings
   x = stg_check_all(stg_name)
   select case x
   case EN_STG_STATUS_OK:
      '--- all good ---
   case EN_STG_STATUS_INVALID, EN_STG_STATUS_FAILURE:
      'one or more settings are not sane: restore defaults
      if stg_restore_multiple(EN_STG_INIT_MODE_NORMAL) <> EN_STG_STATUS_OK then sys.halt
   case else:
      'some other trouble
      sys.halt
   end select

   comms_init()
end sub

There is also a library procedure to restore a single member of a single setting: stg_restore_member(). The default value for any member of any setting can be obtained by calling stg_get_def().
http://docs.tibbo.com/taiko/lib_stg_verify_init.htm
The Grid dashboard item supports conditional formatting. To edit format rules for the current Grid dashboard item, use the following options; all of these actions invoke the Edit Rules dialog, which contains the existing format rules. To learn more, see Conditional Formatting.

Finally, add the created format rule to the GridDashboardItem.FormatRules collection. You can use the DashboardItemFormatRule.Enabled property to specify whether the current format rule is enabled.
https://docs.devexpress.com/Dashboard/114410/creating-dashboards/creating-dashboards-in-the-winforms-designer/designing-dashboard-items/grid/conditional-formatting
The โ€œBrushesโ€ dialog is used to select a brush, for use with painting tools: see the Brushes section for basic information on brushes and how they are used in GIMP. The dialog also gives you access to several functions for manipulating brushes. You Paragraaf 2.3, โ€œKoppelen van dialoogvenstersโ€ . from the Tool Options dialog for any of the paint tools, by clicking on the Brush icon button, you get a popup with similar functionality that permits you to quickly choose a brush from the list; if you click on the button present on the right bottom of the popup, you open the real brush dialog. The simplified โ€œBrushesโ€ dialog This window has five buttons, clearly explained by help pop-ups: Smaller previews Larger previews View as list View as Grid Open the brush selection dialog Note that, depending on your Preferences, a brush selected with the popup may only apply to the currently active tool, not to other paint tools. See the Tool Option Preferences section for more information. In the Tab menu, you can choose betweenand . In Grid mode, the brush shapes are laid out in a rectangular array, making it easy to see many at once and find the one you are looking for. In List mode, the shapes are lined up in a list, with the names beside them. In the Tab menu, the option Preview Size allows you to adapt the size of brush previews to your liking. At the top of the dialog appears the name of the currently selected brush, and its size in pixels. In the center a grid view of all available brushes appears, with the currently selected one outlined. For the most part, the dialog works the same way in List mode as in Grid mode, with one exception: If you double-click on the name of a brush, you will be able to edit it. Note, however, that you are only allowed to change the names of brushes that you have created or installed yourself, not the ones that come pre-installed with GIMP. If you try to rename a pre-installed brush,. 
When you click on a brush preview, it becomes the current brush and is selected in the brush area of the Toolbox and in the Brush option of painting tools. When you double-click on a brush preview, you activate the Brush Editor. You can also click the buttons at the bottom of the dialog to perform various actions.

The small symbols at the bottom right corner of every brush preview have the following meanings:

- A blue corner is for brushes in normal size. You can duplicate them.
- A small cross means that the brush preview is at a reduced size. You can see it at normal size by holding down the left mouse button on it.
- A red corner is for animated brushes. If you hold down the left mouse button on the thumbnail, the animation is played.

You can use tags to reorganize the brushes display. See Section 3.6, “Tagging”.

Delete Brush: This removes all traces of the brush, both from the dialog and from the folder where its file is stored, if you have permission to do so. It asks for confirmation before doing anything.

Refresh Brushes: If you add brushes to your personal brushes folder, or to any other folder in your brush search path, by some means other than the Brush Editor, this button causes the list to be reloaded, so that the new entries will be available in the dialog.

The Brush Editor lets you view and modify the parameters of a brush, and create a custom brush with a geometric shape: a circle, a square, or a diamond. This editor has several elements:

- The dialog bar: As with all dialog windows, a click on the small triangle opens a menu that lets you set the aspect of the Brush Editor.
- The title bar: Lets you give a name to your brush.
- The preview area: Brush changes appear in real time in this preview.
- Settings: A circle, a square, and a diamond are available. You modify them using the following options:

Radius: The distance between the brush center and edge, in the width direction. A square with a 10-pixel radius will have a 20-pixel side. A diamond with a 5-pixel radius will have a 10-pixel width.

Spikes: This parameter is useful only for the square and the diamond. With a square, increasing Spikes results in a polygon. With a diamond, you get a star.

Hardness: This parameter controls the feathering of the brush border.
A value of 1.00 gives a brush with a sharp border (range 0.00-1.00).

Aspect ratio: This parameter controls the brush width/height ratio. A diamond with a 5-pixel radius and an aspect ratio of 2 will be flattened, with a 10-pixel width and a 5-pixel height (range 1.0-20.0).

Angle: This is the angle between the brush width direction, which is normally horizontal, and the horizontal direction, measured counter-clockwise. When this value increases, the brush width turns counter-clockwise (0° to 180°).

Spacing: When the brush draws a line, it actually stamps the brush icon repeatedly. If the brush stamps are very close together, you get the impression of a solid line: you get that with Spacing = 1 (range 1.00 to 200.0).

When you use the Copy or Cut command on an image or a selection of it, a copy appears as a new brush in the upper left corner of the “Brushes” dialog. This brush will persist until you use the Copy command again, and it disappears when you close GIMP.
https://docs.gimp.org/2.8/nl/gimp-brush-dialog.html
The bar width is 100% when @currentField is greater than 95, and (@currentField * 100 / 95)% otherwise. To fit this example to your number column, adjust the boundary condition (95) to match the maximum anticipated value inside the field, and change the equation to specify how much the bar should grow depending on the value inside the field.

{
  "$schema": "",
  "elmType": "div",
  "txtContent": "@currentField",
  "attributes": {
    "class": "sp-field-dataBars"
  },
  "style": {
    "width": "=if(@currentField > 95, '100%', toString(@currentField * 100 / 95) + '%')"
  }
}

If the Flow is configured to gather data from the end user before running, the Flow Launch Panel will be displayed after choosing the button. Otherwise, the Flow will just run.

{
  "elmType": "span",
  "style": { "color": "#0078d7" },
  "children": [
    {
      "elmType": "span",
      "attributes": { "iconName": "Flow" }
    },
    {
      "elmType": "button",
      "style": {
        "border": "none",
        "background-color": "transparent",
        "color": "#0078d7",
        "cursor": "pointer"
      },
      "txtContent": "Send to Manager",
      "customRowAction": {
        "action": "executeFlow",
        "actionParams": "{\"id\": \"183bedd4-6f2b-4264-855c-9dc7617b4dbe\"}"
      }
    }
  ]
}

Supported column types

The following column types support column formatting:

- Single line of text
- Number
- Choice
- Person or Group
- Yes/No
- Hyperlink
- Picture
- Date/Time
- Lookup
- Title (in Lists)

The following are not currently supported:

- Managed Metadata
- Filename (in Document Libraries)
- Calculated
- Retention Label
- Currency

Expressions

Values for txtContent, style properties, and attribute properties can be expressed as expressions, so that they are evaluated at runtime based on the context of the current field (or row). Expression objects can be nested to contain other Expression objects.
Excel-style expressions

All Excel-style expressions begin with an equal sign (=).

Binary operators. The following binary operators expect two operands:

- +
- -
- /
- *
- <
- >
- <=
- >=

Unary operators. The following unary operators expect only one operand:

- toString()
- Number()
- Date()
- cos
- sin
- toLocaleString()
- toLocaleDateString()
- toLocaleTimeString()

Conditional operator. The conditional operator is:

- ?

This achieves an expression equivalent to a ? b : c, where if the expression a evaluates to true, then the result is b, else the result is c.

"@now" evaluates to the current date and time.

"@window.innerHeight" evaluates to a number equal to the height of the browser window (in pixels) when the list was rendered.

"@window.innerWidth" evaluates to a number equal to the width of the browser window (in pixels) when the list was rendered.
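To illustrate how the conditional operator is used inside a format object, here is a small hypothetical sketch (the threshold and colors are invented for the example) that shades the field when its value is below 70. The operator/operands tree below is equivalent to the Excel-style expression =if(@currentField < 70, '#fff4ce', 'transparent'):

```JSON
{
  "elmType": "div",
  "txtContent": "@currentField",
  "style": {
    "padding": "4px",
    "background-color": {
      "operator": "?",
      "operands": [
        { "operator": "<", "operands": ["@currentField", 70] },
        "#fff4ce",
        "transparent"
      ]
    }
  }
}
```

The first operand is the condition, the second is the result when it is true, and the third is the result when it is false, mirroring a ? b : c.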
https://docs.microsoft.com/en-us/sharepoint/dev/declarative-customization/column-formatting
Select the form you'd like to use. For this example, I've selected the "New Customer Form". This form is used to collect new customer data (e.g., name and email address). You will see any previously uploaded resources towards the top. Underneath these files, your Device Magic Database live resources will be listed. Please note that Device Magic Database resources and uploaded resources have the same functionality. The available columns to choose from in a Device Magic Database resource are the columns present in the database, excluding any questions inside a repeat group or subform. And that's it! If you have any questions or comments, feel free to send us a message at [email protected].
https://docs.devicemagic.com/utilizing-resources/device-magic-database-live-resource