Dataset schema: conversations (list of 2-3 messages per row); source (string, one value: "asciidoc" for every row); stack (string, one value: "chosen" for every row). One conversation record per line below.
[ { "from": "system", "value": "Document how to use the Bintray repo\n" }, { "from": "human", "value": "This is a small Java library for parsing the Cloud Foundry environment variables (VCAP_SERVICES and so on).\n\n// the first line of this file is used as a description in the POM, so keep it short and sweet!\n\nDownload from Bintray:\n\nimage::https://api.bintray.com/packages/pivotal-labs-london/maven/cf-env/images/download.svg[link=\"https://bintray.com/pivotal-labs-london/maven/cf-env/_latestVersion\"]\n\nBuild with Gradle:\n\n--------------------------------------\n./gradlew build\n--------------------------------------\n\nRelease with Gradle:\n\n--------------------------------------\n# you probably want to make these changes manually rather than like this\nsed -i -e \"s/^version = .*/version = 'x.y.z'/\" build.gradle\necho -e \"bintrayUser=pivotal-labs-london\\nbintrayKey=...\" >gradle.properties\n\n./gradlew bintrayUpload\n--------------------------------------\n" }, { "from": "gpt", "value": "This is a small Java library for parsing the Cloud Foundry environment variables (VCAP_SERVICES and so on).\n\n// the first line of this file is used as a description in the POM, so keep it short and sweet!\n\nDownload from Bintray:\n\nimage::https://api.bintray.com/packages/pivotal-labs-london/maven/cf-env/images/download.svg[link=\"https://bintray.com/pivotal-labs-london/maven/cf-env/_latestVersion\"]\n\nUse as a dependency in your build:\n\n--------------------------------------\nrepositories {\n jcenter() // or mavenCentral, for transitive dependencies\n maven {\n url = 'http://dl.bintray.com/pivotal-labs-london/maven/'\n }\n}\n\ndependencies {\n compile group: 'io.pivotal.labs', name: 'cf-env', version: '0.0.1'\n}\n--------------------------------------\n\nWe hope to have the artifact available via JCenter soon, but until then, please use the repository on Bintray.\n\nBuild with Gradle:\n\n--------------------------------------\n./gradlew build\n--------------------------------------\n\nRelease with Gradle:\n\n--------------------------------------\n# you probably want to make these changes manually rather than like this\nsed -i -e \"s/^version = .*/version = 'x.y.z'/\" build.gradle\necho -e \"bintrayUser=pivotal-labs-london\\nbintrayKey=...\" >gradle.properties\n\n./gradlew bintrayUpload\n--------------------------------------\n" } ]
[ { "from": "system", "value": "Undo include change in readme.\n" }, { "from": "human", "value": "= Auxly\n\nimage:http://img.shields.io/:license-mit-blue.svg[\"License\", link=\"https://github.com/jeffrimko/Qprompt/blob/master/LICENSE\"]\nimage:https://travis-ci.org/jeffrimko/Auxly.svg?branch=master[\"Build Status\"]\n\n== Introduction\nThis project provides a Python 2.7/3.x library for common tasks especially when writing shell-like scripts. Some of the functionality overlaps with the standard library but the API is slightly modified.\n\n== Status\nThe status of this project is **pre-alpha**. This project is not yet suitable for use other than testing.\n\n== Requirements\nAuxly should run on any Python 2.7/3.x interpreter without additional dependencies.\n\n== Usage\n===\nThe following are basic examples of Auxly (all examples can be found https://github.com/jeffrimko/Auxly/tree/master/examples[here]):\n\n - https://github.com/jeffrimko/Auxly/blob/master/examples/delete_1.py[examples/delete_1.py] - Deletes all PYC files in the project.\n\n----\ninclude::examples\\delete_1.py[]\n----\n" }, { "from": "gpt", "value": "= Auxly\n\nimage:http://img.shields.io/:license-mit-blue.svg[\"License\", link=\"https://github.com/jeffrimko/Qprompt/blob/master/LICENSE\"]\nimage:https://travis-ci.org/jeffrimko/Auxly.svg?branch=master[\"Build Status\"]\n\n== Introduction\nThis project provides a Python 2.7/3.x library for common tasks especially when writing shell-like scripts. Some of the functionality overlaps with the standard library but the API is slightly modified.\n\n== Status\nThe status of this project is **pre-alpha**. This project is not yet suitable for use other than testing.\n\n== Requirements\nAuxly should run on any Python 2.7/3.x interpreter without additional dependencies.\n\n== Usage\n\nThe following are basic examples of Auxly (all examples can be found https://github.com/jeffrimko/Auxly/tree/master/examples[here]):\n\n - https://github.com/jeffrimko/Auxly/blob/master/examples/delete_1.py[examples/delete_1.py] - Deletes all PYC files in the project.\n" } ]
[ { "from": "system", "value": "Fix spec.template.metadata.annotations for min and max scale example\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * serverless/configuring-knative-serving-autoscaling.adoc\n\n[id=\"configuring-scale-bounds-knative_{context}\"]\n= Configuring scale bounds Knative Serving autoscaling\n\nThe `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of Pods that can serve applications.\nThese annotations can be used to prevent cold starts or to help control computing costs.\n\nminScale:: If the `minScale` annotation is not set, Pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`).\n\nmaxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of Pods created.\n\n`minScale` and `maxScale` can be configured as follows in the revision template:\n\n[source,yaml]\n----\nspec:\n template:\n metadata:\n autoscaling.knative.dev/minScale: \"2\"\n autoscaling.knative.dev/maxScale: \"10\"\n----\n\nUsing these annotations in the revision template will propagate this confguration to `PodAutoscaler` objects.\n\n[NOTE]\n====\nThese annotations apply for the full lifetime of a revision. Even when a revision is not referenced by any route, the minimal Pod count specified by `minScale` will still be provided. Keep in mind that non-routeable revisions may be garbage collected, which enables Knative to reclaim the resources.\n====\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * serverless/configuring-knative-serving-autoscaling.adoc\n\n[id=\"configuring-scale-bounds-knative_{context}\"]\n= Configuring scale bounds Knative Serving autoscaling\n\nThe `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of Pods that can serve applications.\nThese annotations can be used to prevent cold starts or to help control computing costs.\n\nminScale:: If the `minScale` annotation is not set, Pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`).\n\nmaxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of Pods created.\n\n`minScale` and `maxScale` can be configured as follows in the revision template:\n\n[source,yaml]\n----\nspec:\n template:\n metadata:\n annotations:\n autoscaling.knative.dev/minScale: \"2\"\n autoscaling.knative.dev/maxScale: \"10\"\n----\n\nUsing these annotations in the revision template will propagate this confguration to `PodAutoscaler` objects.\n\n[NOTE]\n====\nThese annotations apply for the full lifetime of a revision. Even when a revision is not referenced by any route, the minimal Pod count specified by `minScale` will still be provided. Keep in mind that non-routeable revisions may be garbage collected, which enables Knative to reclaim the resources.\n====\n" } ]
[ { "from": "system", "value": "Use master version in docs\n" }, { "from": "human", "value": "= Packetbeat reference\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/1.0.0-rc1\n:version: 1.0.0-rc1\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./capturing.asciidoc[]\n\ninclude::./https.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\ninclude::./thrift.asciidoc[]\n\ninclude::./windows.asciidoc[]\n\ninclude::./kibana3.asciidoc[]\n\ninclude::./filtering.asciidoc[]\n\ninclude::./troubleshooting.asciidoc[]\n\ninclude::./new_protocol.asciidoc[]\n" }, { "from": "gpt", "value": "= Packetbeat reference\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/master\n:version: master\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./capturing.asciidoc[]\n\ninclude::./https.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\ninclude::./thrift.asciidoc[]\n\ninclude::./windows.asciidoc[]\n\ninclude::./kibana3.asciidoc[]\n\ninclude::./filtering.asciidoc[]\n\ninclude::./troubleshooting.asciidoc[]\n\ninclude::./new_protocol.asciidoc[]\n" } ]
[ { "from": "system", "value": "Update docs link to point to docs repo\n" }, { "from": "human", "value": "= AsciiBinder\n\nimage:https://badge.fury.io/rb/ascii_binder.svg[\"Gem Version\", link=\"https://badge.fury.io/rb/ascii_binder\"]\n\nAsciiBinder is an AsciiDoc-based system for authoring and publishing closely related documentation sets from a single source.\n\n== Learn More\n\n* See the http://www.asciibinder.org[homepage].\n* Have a gander at the http://www.asciibinder.org/latest/welcome/[AsciiBinder documentation].\n* Or just take the https://rubygems.org/gems/ascii_binder[ascii_binder Ruby Gem] for a spin.\n\nThe AsciiBinder system was initially developed for https://github.com/openshift/openshift-docs[OpenShift documentation], but has been revised to work for documenting a wide variety of complex, multi-versioned software projects.\n\n== Contributing\n\nWe are using the https://github.com/redhataccess/ascii_binder/issues[Issues] page to track bugs and feature ideas on the code, so have a look and feel free to ask questions there. You can also chat with us on IRC at FreeNode, http://webchat.freenode.net/?randomnick=1&channels=asciibinder&uio=d4[#asciibinder] channel, or on Twitter - https://twitter.com/AsciiBinder[@AsciiBinder].\n\n== License\n\nThe gem is available as open source under the terms of the http://opensource.org/licenses/MIT[MIT License].\n" }, { "from": "gpt", "value": "= AsciiBinder\n\nimage:https://badge.fury.io/rb/ascii_binder.svg[\"Gem Version\", link=\"https://badge.fury.io/rb/ascii_binder\"]\n\nAsciiBinder is an AsciiDoc-based system for authoring and publishing closely related documentation sets from a single source.\n\n== Learn More\n\n* Have a gander at the https://github.com/redhataccess/ascii_binder-docs/blob/master/welcome/index.adoc[AsciiBinder documentation].\n* Or just take the https://rubygems.org/gems/ascii_binder[ascii_binder Ruby Gem] for a spin.\n\nThe AsciiBinder system was initially developed for https://github.com/openshift/openshift-docs[OpenShift documentation], but has been revised to work for documenting a wide variety of complex, multi-versioned software projects.\n\n== Contributing\n\nWe are using the https://github.com/redhataccess/ascii_binder/issues[Issues] page to track bugs and feature ideas on the code, so have a look and feel free to ask questions there. You can also chat with us on IRC at FreeNode, http://webchat.freenode.net/?randomnick=1&channels=asciibinder&uio=d4[#asciibinder] channel, or on Twitter - https://twitter.com/AsciiBinder[@AsciiBinder].\n\n== License\n\nThe gem is available as open source under the terms of the http://opensource.org/licenses/MIT[MIT License].\n" } ]
[ { "from": "system", "value": "Update documentation for NPM support\n" }, { "from": "human", "value": ":figure-caption!:\n\nimage::https://travis-ci.org/mmjmanders/ng-iban.svg?branch=master[title=\"travis status\", alt=\"travis status\", link=\"https://travis-ci.org/mmjmanders/ng-iban\"]\n\nimage::https://app.wercker.com/status/eb4337041c62e162c5dd7af43122647c/m[title=\"wercker status\", alt=\"wercker status\", link=\"https://app.wercker.com/project/bykey/eb4337041c62e162c5dd7af43122647c\"]\n\n= ng-iban - validate input fields as IBAN\nThe goal is to provide an easy way to validate an input field as an IBAN number with https://angularjs.org/[AngularJS].\nFrom version `0.4.0` the module uses https://github.com/arhs/iban.js[iban.js] for validation.\n\n== Usage\nFirst add\n\n* `AngularJS`\n* `ng-iban`\n\nto your HTML file.\n\nMake sure you require `mm.iban` as a dependency of your AngularJS module.\n\n== Installation\n`bower install ng-iban`\n\n=== directive\n[source,html]\n----\n<input type=\"text\" ng-model=\"iban\" ng-iban/>\n----\n\nTo use this directive the `ngModel` directive must also be used because this directive depends on it.\n" }, { "from": "gpt", "value": ":figure-caption!:\n\nimage::https://travis-ci.org/mmjmanders/ng-iban.svg?branch=master[title=\"travis status\", alt=\"travis status\", link=\"https://travis-ci.org/mmjmanders/ng-iban\"]\n\nimage::https://app.wercker.com/status/eb4337041c62e162c5dd7af43122647c/m[title=\"wercker status\", alt=\"wercker status\", link=\"https://app.wercker.com/project/bykey/eb4337041c62e162c5dd7af43122647c\"]\n\n= ng-iban - validate input fields as IBAN\nThe goal is to provide an easy way to validate an input field as an IBAN number with https://angularjs.org/[AngularJS].\nFrom version `0.4.0` the module uses https://github.com/arhs/iban.js[iban.js] for validation.\n\n== Installation\n\n=== Bower\n\n`bower install ng-iban`\n\n=== NPM\n\n`npm install ng-iban`\n\n=== Other\n\nDownload file `dist/ng-iban.mni.js`.\n\n== Usage\n\nAdd `mm.iban` as a dependency of your AngularJS module.\n\n=== directive\n[source,html]\n----\n<input type=\"text\" ng-model=\"iban\" ng-iban/>\n----\n\nTo use this directive the `ngModel` directive must also be used because this directive depends on it.\n" } ]
[ { "from": "system", "value": "Revert \"Revert \"Revert \"Revert \"Test trigger on push\"\"\"\"\n\nThis reverts commit 2e835acb457e13f3fc5a49459604488388d80cae.\n" }, { "from": "human", "value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n" }, { "from": "gpt", "value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n\n-- will remove --\n" } ]
[ { "from": "system", "value": "Update the links to the forum\n" }, { "from": "human", "value": "[[validator-further-reading]]\n== Further reading\n\nLast but not least, a few pointers to further information.\n\nA great source for examples is the Bean Validation TCK which is available for anonymous access on\nhttps://github.com/beanvalidation/beanvalidation-tck/[GitHub]. In particular the TCK's\nhttps://github.com/beanvalidation/beanvalidation-tck/tree/master/tests[tests] might be\nof interest. {bvSpecUrl}[The JSR 380] specification itself\nis also a great way to deepen your understanding of Bean Validation and Hibernate Validator.\n\nIf you have any further questions about Hibernate Validator or want to share some of your use cases,\nhave a look at the http://community.jboss.org/en/hibernate/validator[Hibernate Validator\nWiki], the https://forum.hibernate.org/viewforum.php?f=9[Hibernate Validator Forum] and the\nhttps://stackoverflow.com/questions/tagged/hibernate-validator[Hibernate Validator tag on Stack Overflow].\n\nIn case you would like to report a bug use\nhttps://hibernate.atlassian.net/projects/HV/[Hibernate's Jira] instance.\nFeedback is always welcome!\n" }, { "from": "gpt", "value": "[[validator-further-reading]]\n== Further reading\n\nLast but not least, a few pointers to further information.\n\nA great source for examples is the Bean Validation TCK which is available for anonymous access on\nhttps://github.com/beanvalidation/beanvalidation-tck/[GitHub]. In particular the TCK's\nhttps://github.com/beanvalidation/beanvalidation-tck/tree/master/tests[tests] might be\nof interest. {bvSpecUrl}[The JSR 380] specification itself\nis also a great way to deepen your understanding of Bean Validation and Hibernate Validator.\n\nIf you have any further questions about Hibernate Validator or want to share some of your use cases,\nhave a look at the http://community.jboss.org/en/hibernate/validator[Hibernate Validator\nWiki], the https://discourse.hibernate.org/c/hibernate-validator[Hibernate Validator Forum] and the\nhttps://stackoverflow.com/questions/tagged/hibernate-validator[Hibernate Validator tag on Stack Overflow].\n\nIn case you would like to report a bug use\nhttps://hibernate.atlassian.net/projects/HV/[Hibernate's Jira] instance.\nFeedback is always welcome!\n" } ]
[ { "from": "system", "value": "Remove reference to deleted scripts." }, { "from": "human", "value": "== Introduction to Regression Test Mode\n\nBitcoin 0.9 and later include support for Regression Test Mode (aka RegTest mode). RegTest mode creates a single node Bitcoin \"network\" that can confirm blocks upon command. (RegTest mode can also be used to create small, multi-node networks and even to simulate blockchain reorganizations.)\n\nFor example the following command will generate 101 blocks\n\n ./bitcoin-cli -regtest setgenerate true 101\n\nAnd yes, you get the newly mined coins. They can't be spent anywhere, but they're great for testing.\n\nThe best documentation of RegTest mode that I've seen so far is https://bitcoinj.github.io/testing[How to test applications] on the new https://bitcoinj.github.io[Bitcoinj website].\n\n\nOther Links::\n* http://geraldkaszuba.com/creating-your-own-experimental-bitcoin-network/[Creating your own experimental Bitcoin network]\n* https://github.com/gak/docker-bitcoin-regtest[docker-bitcoin-regtest]\n\n== Simple Demo of RegTest mode with Bash Scripts\n\nThese are some really rough Bash scripts that can drive bitcoind in RegTest mode.\n\nProcedure::\n* Make sure Bitcoin Core 0.9 or later is installed and in your path.\n* Run the server script\n\n ./server.sh &\n\n* Run the client setup script to mine some coins to get started:\n\n ./setup-client.sh\n\n* Run the client script (repeat as desired)\n\n ./client.sh\n\n* A directory named +regtest-datadir+ is created in the current directory.\n" }, { "from": "gpt", "value": "== Introduction to Regression Test Mode\n\nBitcoin 0.9 and later include support for Regression Test Mode (aka RegTest mode). RegTest mode creates a single node Bitcoin \"network\" that can confirm blocks upon command. (RegTest mode can also be used to create small, multi-node networks and even to simulate blockchain reorganizations.)\n\nFor example the following command will generate 101 blocks\n\n ./bitcoin-cli -regtest setgenerate true 101\n\nAnd yes, you get the newly mined coins. They can't be spent anywhere, but they're great for testing.\n\nThe best documentation of RegTest mode that I've seen so far is https://bitcoinj.github.io/testing[How to test applications] on the new https://bitcoinj.github.io[Bitcoinj website].\n\n\nOther Links::\n* http://geraldkaszuba.com/creating-your-own-experimental-bitcoin-network/[Creating your own experimental Bitcoin network]\n* https://github.com/gak/docker-bitcoin-regtest[docker-bitcoin-regtest]\n" } ]
[ { "from": "system", "value": "Add license information to docs clarifying build scan plugin license\n" }, { "from": "human", "value": "// Copyright 2017 the original author or authors.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n= Documentation licenses\n\n\n[[sec:gradle_documentation]]\n== Gradle Documentation\n\n_Copyright © 2007-2018 Gradle, Inc._\n\nGradle build tool source code is open and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0].\nGradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].\n" }, { "from": "gpt", "value": "// Copyright 2017 the original author or authors.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n[[licenses]]\n= License Information\n\n\n[[sec:gradle_documentation]]\n== Gradle Documentation\n\n_Copyright © 2007-2018 Gradle, Inc._\n\nGradle build tool source code is open-source and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0].\n\n\nGradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].\n\n\n[[licenses:build_scan_plugin]]\n== Gradle Build Scan Plugin\n\nUse of the link:https://scans.gradle.com/plugin/[build scan plugin] is subject to link:https://gradle.com/legal/terms-of-service/[Gradle's Terms of Service].\n" } ]
[ { "from": "system", "value": "Remove the deprecated etcd section\n" }, { "from": "human", "value": "[id='monitoring']\n= Monitoring\ninclude::modules/common-attributes.adoc[]\n:context: monitoring\n\ntoc::[]\n\n{product-title} uses the Prometheus open source monitoring system. The stack built around Prometheus provides {product-title} cluster monitoring by default. It also provides custom-configured application monitoring as a technology preview. The cluster monitoring stack is only supported for monitoring {product-title} clusters.\n\n[id='cluster-monitoring']\n== Cluster monitoring\n\ninclude::modules/monitoring-monitoring-overview.adoc[leveloffset=+1]\ninclude::monitoring/configuring-monitoring-stack.adoc[leveloffset=+1]\ninclude::modules/monitoring-configuring-etcd-monitoring.adoc[leveloffset=+1]\ninclude::modules/monitoring-accessing-prometheus-alertmanager-grafana.adoc[leveloffset=+1]\n\n[id='application-monitoring']\n== Application monitoring\n\nYou can do custom metrics scraping for your applications. This is done using the Prometheus Operator and a custom Prometheus instance.\n\n[IMPORTANT]\n====\nApplication monitoring is a technology preview. Exposing custom metrics will change without the consent of the user of the cluster.\n====\n\ninclude::modules/monitoring-configuring-cluster-for-application-monitoring.adoc[leveloffset=+1]\ninclude::modules/monitoring-configuring-monitoring-for-application.adoc[leveloffset=+1]\ninclude::modules/monitoring-exposing-application-metrics-for-horizontal-pod-autoscaling.adoc[leveloffset=+1]\n" }, { "from": "gpt", "value": "[id='monitoring']\n= Monitoring\ninclude::modules/common-attributes.adoc[]\n:context: monitoring\n\ntoc::[]\n\n{product-title} uses the Prometheus open source monitoring system. The stack built around Prometheus provides {product-title} cluster monitoring by default. It also provides custom-configured application monitoring as a technology preview. The cluster monitoring stack is only supported for monitoring {product-title} clusters.\n\n[id='cluster-monitoring']\n== Cluster monitoring\n\ninclude::modules/monitoring-monitoring-overview.adoc[leveloffset=+1]\ninclude::monitoring/configuring-monitoring-stack.adoc[leveloffset=+1]\ninclude::modules/monitoring-accessing-prometheus-alertmanager-grafana.adoc[leveloffset=+1]\n\n[id='application-monitoring']\n== Application monitoring\n\nYou can do custom metrics scraping for your applications. This is done using the Prometheus Operator and a custom Prometheus instance.\n\n[IMPORTANT]\n====\nApplication monitoring is a technology preview. Exposing custom metrics will change without the consent of the user of the cluster.\n====\n\ninclude::modules/monitoring-configuring-cluster-for-application-monitoring.adoc[leveloffset=+1]\ninclude::modules/monitoring-configuring-monitoring-for-application.adoc[leveloffset=+1]\ninclude::modules/monitoring-exposing-application-metrics-for-horizontal-pod-autoscaling.adoc[leveloffset=+1]\n" } ]
[ { "from": "system", "value": "Fix url to odo linux binary\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * cli_reference/openshift_developer_cli/installing-odo.adoc \n\n[id=\"installing-odo-on-linux\"]\n\n= Installing {odo-title} on Linux\n\n== Binary installation\n\n----\n# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o /usr/local/bin/odo\n# chmod +x /usr/local/bin/odo\n----\n\n== Tarball installation\n\n----\n# sh -c 'curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz | gzip -d > /usr/local/bin/odo'\n# chmod +x /usr/local/bin/odo\n----" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * cli_reference/openshift_developer_cli/installing-odo.adoc \n\n[id=\"installing-odo-on-linux\"]\n\n= Installing {odo-title} on Linux\n\n== Binary installation\n\n----\n# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o /usr/local/bin/odo\n# chmod +x /usr/local/bin/odo\n----\n\n== Tarball installation\n\n----\n# sh -c 'curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz | gzip -d > /usr/local/bin/odo'\n# chmod +x /usr/local/bin/odo\n----" } ]
[ { "from": "system", "value": "Add arc 42 Architecture Constraints" }, { "from": "human", "value": "ifdef::env-github[]\n:imagesdir: https://github.com/Moose2Model/Moose2Model/blob/master/Documentation/images/\nendif::[]\n\n:toc:\n:toc-placement!:\ntoc::[]\n\nThis documentation follows the arc42 template for architecture documentation (https://arc42.org/).\n\n1 Introduction and Goals\n========================\n\n1.1 Requirements Overview\n-------------------------\n- Provide diagrams for developers that can be easily kept correct\n- Reduces the cognitive load of developers who work in complex software systems\n- Supports only diagram that show the dependencies between components of a software\n- Supports dependencies between entities of code that are more detailed then a class (method, attribute, ...)\n- Works in the moment best with models that are extracted by the SAP2Moose project\n- Shall support all models that are compatible with Moose (http://moosetechnology.org/)\n\n1.2 Quality Goals\n-----------------\n- Shows all dependencies between elements in a diagram\n- Shows all elements that should be in a diagram\n\n1.3 Stake Holders\n-----------------\n\n.Stake Holders\n|===\n| Role/Name |Expectations\n\n|Developer\n|Build diagrams where the customiziation is not lost when they are regenerated with new informations. Build diagrams that are sufficiently detailed to support the development.\n\n|Software Architect\n|Have a tool to compare the planned architecture with the realized architecture\n|===\n\n" }, { "from": "gpt", "value": "ifdef::env-github[]\n:imagesdir: https://github.com/Moose2Model/Moose2Model/blob/master/Documentation/images/\nendif::[]\n\n:toc:\n:toc-placement!:\ntoc::[]\n\nThis documentation follows the arc42 template for architecture documentation (https://arc42.org/).\n\n1 Introduction and Goals\n========================\n\n1.1 Requirements Overview\n-------------------------\n- Provide diagrams for developers that can be easily kept correct\n- Reduces the cognitive load of developers who work in complex software systems\n- Supports only diagram that show the dependencies between components of a software\n- Supports dependencies between entities of code that are more detailed then a class (method, attribute, ...)\n- Works in the moment best with models that are extracted by the SAP2Moose project\n- Shall support all models that are compatible with Moose (http://moosetechnology.org/)\n\n1.2 Quality Goals\n-----------------\n- Shows all dependencies between elements in a diagram\n- Shows all elements that should be in a diagram\n\n1.3 Stake Holders\n-----------------\n\n.Stake Holders\n|===\n| Role/Name |Expectations\n\n|Developer\n|Build diagrams where the customiziation is not lost when they are regenerated with new informations. Build diagrams that are sufficiently detailed to support the development.\n\n|Software Architect\n|Have a tool to compare the planned architecture with the realized architecture\n|===\n\n2 Architecture Constraints\n==========================\n- Easy to install\n\n\n" } ]
[ { "from": "system", "value": "Split optaplanner's dev list away from drools's mailing list\n" }, { "from": "human", "value": "= Forum (free support)\n:awestruct-layout: base\n:showtitle:\n\n== Usage questions\n\nIf you have a question about OptaPlanner, just ask our friendly community:\n\n* Ask on http://www.jboss.org/drools/lists[the Drools user mailing list] (recommended).\n\n* Or ask on http://stackoverflow.com/questions/tagged/optaplanner[StackOverflow].\n\nPlease follow these recommendations when posting a question:\n\n* Get to the point. Keep it as short as possible.\n* Include _relevant_ technical details (stacktrace, short code snippet, ...).\n* Be polite, friendly and clear. Reread your question before posting it.\n* Be patient. This isn't link:product.html[paid support].\n\n== Development discussions\n\nIf you've link:../code/sourceCode.html[build OptaPlanner from source] and you would like to improve it,\nthen come talk with us:\n\n* Join us on http://www.jboss.org/drools/lists[the Drools developer mailing list].\n\n* And link:chat.html[chat with us] (recommended).\n" }, { "from": "gpt", "value": "= Forum\n:awestruct-layout: base\n:showtitle:\n\n== Usage questions\n\nIf you have a question about OptaPlanner, just ask our friendly community:\n\n* *http://stackoverflow.com/questions/tagged/optaplanner[Ask a usage question on StackOverflow.]*\n\n* To start a discussion, use https://groups.google.com/forum/#!forum/optaplanner-dev[the OptaPlanner developer forum].\n\nPlease follow these recommendations when posting a question or a discussion:\n\n* Get to the point. Keep it as short as possible.\n* Include _relevant_ technical details (stacktrace, short code snippet, ...).\n* Be polite, friendly and clear. Reread your question before posting it.\n* Be patient. This isn't link:product.html[paid support].\n\n== Development discussions\n\nIf you've link:../code/sourceCode.html[build OptaPlanner from source] and want to improve something, come talk with us:\n\n* Join https://groups.google.com/forum/#!forum/optaplanner-dev[the OptaPlanner developer google group].\n\n** Mail directly to the group via mailto:[email protected][[email protected]].\n\n* And link:chat.html[chat with us] (recommended).\n\n** Or ping us on link:socialMedia.html[any social media].\n" } ]
[ { "from": "system", "value": "Fix APIM version to 3.13\n" }, { "from": "human", "value": "[[apim-kubernetes-overview]]\n= Kubernetes plugin\n:page-sidebar: apim_3_x_sidebar\n:page-permalink: apim/3.x/apim_kubernetes_overview.html\n:page-folder: apim/kubernetes\n:page-layout: apim3x\n:page-liquid:\n\n[label label-version]#New in version 3.7#\n\n== Overview\n\nAPIM 3.7.0 introduces a Kubernetes plugin for APIM Gateway allowing the deployment of APIs using https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resource Definitions (CRDs)^].\n\nAPIs deployed using CRDs are not visible through APIM Console.\n\nWARNING: This plugin is currently in Alpha version; the API key policy isn't available for APIs deployed using this plugin.\n\nYou can find more detailed information about the plugin in the following sections:\n\n* link:/apim/3.x/apim_kubernetes_quick_start.html[Quick Start]\n* link:/apim/3.x/apim_kubernetes_installation.html[How to install]\n* link:/apim/3.x/apim_kubernetes_custom_resources.html[Custom Resources]\n* link:/apim/3.x/apim_kubernetes_admission_hook.html[Admission hook]\n" }, { "from": "gpt", "value": "[[apim-kubernetes-overview]]\n= Kubernetes plugin\n:page-sidebar: apim_3_x_sidebar\n:page-permalink: apim/3.x/apim_kubernetes_overview.html\n:page-folder: apim/kubernetes\n:page-layout: apim3x\n:page-liquid:\n\n[label label-version]#New in version 3.13#\n\n== Overview\n\nAPIM 3.13 introduces a Kubernetes plugin for APIM Gateway allowing the deployment of APIs using https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Custom Resource Definitions (CRDs)^].\n\nAPIs deployed using CRDs are not visible through APIM Console.\n\nWARNING: This plugin is currently in Alpha version; the API key policy isn't available for APIs deployed using this plugin.\n\nYou can find more detailed information about the plugin in the following sections:\n\n* link:/apim/3.x/apim_kubernetes_quick_start.html[Quick Start]\n* link:/apim/3.x/apim_kubernetes_installation.html[How to install]\n* link:/apim/3.x/apim_kubernetes_custom_resources.html[Custom Resources]\n* link:/apim/3.x/apim_kubernetes_admission_hook.html[Admission hook]\n" } ]
[ { "from": "system", "value": "Add spring-boot link in doc\n" }, { "from": "human", "value": "= CLI Connector Component\n:doctitle: CLI Connector\n:shortname: cli-connector\n:artifactid: camel-cli-connector\n:description: Runtime adapter connecting with Camel CLI\n:since: 3.19\n:supportlevel: Preview\n\n*Since Camel {since}*\n\nThe camel-cli-connector allows the Camel CLI to be able to manage running Camel integrations.\n\nCurrently, only a local connector is provided, which means that the Camel CLI can only be\nmanaging local running Camel integrations.\n\nThese integrations can be using different runtimes such as Camel Main, Camel Spring Boot or Camel Quarkus etc.\n\n\n== Auto-detection from classpath\n\nTo use this implementation all you need to do is to add the `camel-cli-connector` dependency to the classpath,\nand Camel should auto-detect this on startup and log as follows:\n\n[source,text]\n----\nLocal CLI Connector started\n----\n" }, { "from": "gpt", "value": "= CLI Connector Component\n:doctitle: CLI Connector\n:shortname: cli-connector\n:artifactid: camel-cli-connector\n:description: Runtime adapter connecting with Camel CLI\n:since: 3.19\n:supportlevel: Preview\n//Manually maintained attributes\n:camel-spring-boot-name: cli-connector\n\n*Since Camel {since}*\n\nThe camel-cli-connector allows the Camel CLI to be able to manage running Camel integrations.\n\nCurrently, only a local connector is provided, which means that the Camel CLI can only be\nmanaging local running Camel integrations.\n\nThese integrations can be using different runtimes such as Camel Main, Camel Spring Boot or Camel Quarkus etc.\n\n\n== Auto-detection from classpath\n\nTo use this implementation all you need to do is to add the `camel-cli-connector` dependency to the classpath,\nand Camel should auto-detect this on startup and log as follows:\n\n[source,text]\n----\nLocal CLI Connector started\n----\n\n\ninclude::spring-boot:partial$starter.adoc[]\n" } ]
[ { "from": "system", "value": "Order release notes from newest to oldest\n\nIssue: #1066\n" }, { "from": "human", "value": "[[release-notes]]\n== Release Notes\n\n:numbered!:\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\n:numbered:\n" }, { "from": "gpt", "value": "[[release-notes]]\n== Release Notes\n\n:numbered!:\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n\n:numbered:\n" } ]
[ { "from": "system", "value": "Add travis badge + fix coverage badge\n" }, { "from": "human", "value": "== Fusioninventory Plugin\n\nhttps://coveralls.io/r/fusioninventory/fusioninventory-for-glpi[image:https://coveralls.io/repos/fusioninventory/fusioninventory-for-glpi/badge.svg]\n\nThis plugin makes GLPI to process various types of tasks for Fusioninventory agents:\n\n* Computer inventory\n* Network discovery\n* Network (SNMP) inventory\n* Software deployment\n* VMWare ESX host remote inventory\n\nFor further information and documentation, please check http://www.fusioninventory.org .\n\nIf you want to report bugs or check for development status, you can check http://forge.fusioninventory.org .\n\n== Third-party code\n\n* PluginFusioninventoryFindFiles() is copyright http://rosettacode.org/wiki/Walk_a_directory/Recursively#PHP[rosettacode.org] and\n made available under GNU Free Documentation License.\n\n== Third-party icons and images\n\nSome icons used in the project comes from the following set of\ngraphics licensed:\n\n* Dortmund is copyright by http://pc.de/icons/[PC.DE] and made available under a\n http://creativecommons.org/licenses/by/3.0/deed[Creative Commons Attribution 3.0 License].\n\n* Fugue Icons is copyright by http://p.yusukekamiyamane.com/[Yusuke Kamiyamame] and made\n available under a http://creativecommons.org/licenses/by/3.0/deed[Creative Commons Attribution 3.0 License].\n" }, { "from": "gpt", "value": "== Fusioninventory Plugin\n\nimage:https://travis-ci.org/fusioninventory/fusioninventory-for-glpi.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/fusioninventory/fusioninventory-for-glpi\"]\nimage:https://coveralls.io/repos/fusioninventory/fusioninventory-for-glpi/badge.svg[\"Coverage Status\", link=\"https://coveralls.io/r/fusioninventory/fusioninventory-for-glpi\"]\n\nThis plugin makes GLPI to process various types of tasks for Fusioninventory agents:\n\n* Computer inventory\n* Network discovery\n* Network (SNMP) inventory\n* Software deployment\n* VMWare ESX host remote inventory\n\nFor further information and documentation, please check http://www.fusioninventory.org .\n\nIf you want to report bugs or check for development status, you can check http://forge.fusioninventory.org .\n\n== Third-party code\n\n* PluginFusioninventoryFindFiles() is copyright http://rosettacode.org/wiki/Walk_a_directory/Recursively#PHP[rosettacode.org] and\n made available under GNU Free Documentation License.\n\n== Third-party icons and images\n\nSome icons used in the project comes from the following set of\ngraphics licensed:\n\n* Dortmund is copyright by http://pc.de/icons/[PC.DE] and made available under a\n http://creativecommons.org/licenses/by/3.0/deed[Creative Commons Attribution 3.0 License].\n\n* Fugue Icons is copyright by http://p.yusukekamiyamane.com/[Yusuke Kamiyamame] and made\n available under a http://creativecommons.org/licenses/by/3.0/deed[Creative Commons Attribution 3.0 License].\n" } ]
[ { "from": "system", "value": "Update readme with Wiki reference.\n" }, { "from": "human", "value": "= Hawkular Android Client\n\nThis repository contains the source code for the Hawkular Android application.\n\n== License\n\n* http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0]\n\n== Building\n\nifdef::env-github[]\n[link=https://travis-ci.org/hawkular/hawkular-android-client]\nimage:https://travis-ci.org/hawkular/hawkular-android-client.svg[\"Build Status\", link=\"https://travis-ci.org/hawkular/hawkular-android-client\"]\nendif::[]\n\nYou will need JDK 1.7+ installed.\nGradle, Android SDK and all dependencies will be downloaded automatically.\n\n----\n$ ./gradlew clean assembleDebug\n----\n" }, { "from": "gpt", "value": "= Hawkular Android Client\n\nThis repository contains the source code for the Hawkular Android application.\n\n== License\n\n* http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0]\n\n== Building\n\nifdef::env-github[]\n[link=https://travis-ci.org/hawkular/hawkular-android-client]\nimage:https://travis-ci.org/hawkular/hawkular-android-client.svg[\"Build Status\", link=\"https://travis-ci.org/hawkular/hawkular-android-client\"]\nendif::[]\n\nYou will need JDK 1.7+ installed.\nGradle, Android SDK and all dependencies will be downloaded automatically.\n\n----\n$ ./gradlew clean assembleDebug\n----\n\n== Reading\n\nThere are some documents on the link:../../wiki[Wiki], including API overview, UI mockups\nand instructions on running necessary servers for using the client in common and\npush notifications specifically.\n" } ]
[ { "from": "system", "value": "Adjust readme to Spring Boot 2.1\n" }, { "from": "human", "value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot 2.0.\n\nIt utilizes: \n\n* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA] / https://github.com/spring-projects/spring-data-rest[REST]\n* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource` defined in the application.yml\n* https://github.com/thymeleaf/thymeleaf[Thymeleaf] 3\n* https://github.com/DataTables/DataTablesSrc[DataTables]\n* Unit tests for components\n\n Note: It may take a few seconds for the app to start if no one has not accessed it recently" }, { "from": "gpt", "value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot 2.1.\n\nIt utilizes: \n\n* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA] / https://github.com/spring-projects/spring-data-rest[REST]\n* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource` defined in the application.yml\n* https://github.com/thymeleaf/thymeleaf[Thymeleaf] 3\n* https://github.com/DataTables/DataTablesSrc[DataTables]\n* Unit tests for components\n\n Note: It may take a few seconds for the app to start if no one has not accessed it recently" } ]
[ { "from": "system", "value": "Add reference to classloader issue and workaround\n" }, { "from": "human", "value": "= Snoop - A Discovery Service for Java EE\n\nSnoop is an experimental registration and discovery service for Java EE based microservices.\n\n== Getting Started\n\n. Start the link:snoop-service.adoc[Snoop Service]\n. link:service-registration.adoc[Service Registration]\n. link:service-discovery.adoc[Service Discovery]\n\n== Maven\n\n. Released artifacts are available in link:http://search.maven.org/#search%7Cga%7C1%7Csnoop[Maven Central]\n. Snapshots configuration:\n\n <repositories>\n <repository>\n <id>agilejava-snapshots</id>\n <url>http://nexus.agilejava.eu/content/groups/public</url>\n <snapshots>\n <enabled>true</enabled>\n </snapshots>\n </repository>\n </repositories>\n\n== Examples\n\n- link:https://github.com/ivargrimstad/snoop-samples[snoop-samples@GitHub] \n- link:https://github.com/arun-gupta/microservices[https://github.com/arun-gupta/microservices]\n\n== FAQ\n\n- link:FAQ.adoc[Frequently Asked Questions]\n" }, { "from": "gpt", "value": "= Snoop - A Discovery Service for Java EE\n\nSnoop is an experimental registration and discovery service for Java EE based microservices.\n\n== Getting Started\n\n. Start the link:snoop-service.adoc[Snoop Service]\n. link:service-registration.adoc[Service Registration]\n. link:service-discovery.adoc[Service Discovery]\n\n== Maven\n\n. Released artifacts are available in link:http://search.maven.org/#search%7Cga%7C1%7Csnoop[Maven Central]\n. Snapshots configuration:\n\n <repositories>\n <repository>\n <id>agilejava-snapshots</id>\n <url>http://nexus.agilejava.eu/content/groups/public</url>\n <snapshots>\n <enabled>true</enabled>\n </snapshots>\n </repository>\n </repositories>\n\n== Classloader Issue\n\n- link:classloader-issue.adoc[Description and Workaround]\n\n== Examples\n\n- link:https://github.com/ivargrimstad/snoop-samples[snoop-samples@GitHub]\n- link:https://github.com/arun-gupta/microservices[https://github.com/arun-gupta/microservices]\n\n== FAQ\n\n- link:FAQ.adoc[Frequently Asked Questions]\n" } ]
[ { "from": "system", "value": "Revert \"Add spring-boot link in doc\"\n\nThis reverts commit fbd8d2ed399230a8e3e1d469e9825eb0602fb981.\n" }, { "from": "human", "value": "= CLI Connector Component\n:doctitle: CLI Connector\n:shortname: cli-connector\n:artifactid: camel-cli-connector\n:description: Runtime adapter connecting with Camel CLI\n:since: 3.19\n:supportlevel: Preview\n//Manually maintained attributes\n:camel-spring-boot-name: cli-connector\n\n*Since Camel {since}*\n\nThe camel-cli-connector allows the Camel CLI to be able to manage running Camel integrations.\n\nCurrently, only a local connector is provided, which means that the Camel CLI can only be\nmanaging local running Camel integrations.\n\nThese integrations can be using different runtimes such as Camel Main, Camel Spring Boot or Camel Quarkus etc.\n\n\n== Auto-detection from classpath\n\nTo use this implementation all you need to do is to add the `camel-cli-connector` dependency to the classpath,\nand Camel should auto-detect this on startup and log as follows:\n\n[source,text]\n----\nLocal CLI Connector started\n----\n\n\ninclude::spring-boot:partial$starter.adoc[]\n" }, { "from": "gpt", "value": "= CLI Connector Component\n:doctitle: CLI Connector\n:shortname: cli-connector\n:artifactid: camel-cli-connector\n:description: Runtime adapter connecting with Camel CLI\n:since: 3.19\n:supportlevel: Preview\n\n*Since Camel {since}*\n\nThe camel-cli-connector allows the Camel CLI to be able to manage running Camel integrations.\n\nCurrently, only a local connector is provided, which means that the Camel CLI can only be\nmanaging local running Camel integrations.\n\nThese integrations can be using different runtimes such as Camel Main, Camel Spring Boot or Camel Quarkus etc.\n\n\n== Auto-detection from classpath\n\nTo use this implementation all you need to do is to add the `camel-cli-connector` dependency to the classpath,\nand Camel should auto-detect this on startup and log as follows:\n\n[source,text]\n----\nLocal CLI Connector started\n----\n" } ]
[ { "from": "system", "value": "Migrate to Apache v2.0 license\n" }, { "from": "human", "value": "\nCopyright 2016 higherfrequencytrading.com\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." }, { "from": "gpt", "value": "\n== Copyright 2016 higherfrequencytrading.com\n\nLicensed under the *Apache License, Version 2.0* (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." } ]
[ { "from": "system", "value": "Edit to OLM webhook workflow\n" }, { "from": "human", "value": "[id=\"olm-webhooks\"]\n= Managing admission webhooks in Operator Lifecycle Manager\ninclude::modules/common-attributes.adoc[]\n:context: olm-webhooks\n\ntoc::[]\n\nValidating and mutating admission webhooks allow Operator authors to intercept,\nmodify, and accept or reject resources before they are handled by the Operator\ncontroller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these\nwebhooks when they are shipped alongside your Operator.\n\ninclude::modules/olm-defining-csv-webhooks.adoc[leveloffset=+1]\ninclude::modules/olm-webhook-considerations.adoc[leveloffset=+1]\n\n[id=\"olm-webhooks-additional-resources\"]\n== Additional resources\n\n* xref:../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plug-ins]\n" }, { "from": "gpt", "value": "[id=\"olm-webhooks\"]\n= Managing admission webhooks in Operator Lifecycle Manager\ninclude::modules/common-attributes.adoc[]\n:context: olm-webhooks\n\ntoc::[]\n\nValidating and mutating admission webhooks allow Operator authors to intercept,\nmodify, and accept or reject resources before they are saved to the object store\nand handled by the Operator controller. Operator Lifecycle Manager (OLM) can\nmanage the lifecycle of these webhooks when they are shipped alongside your\nOperator.\n\ninclude::modules/olm-defining-csv-webhooks.adoc[leveloffset=+1]\ninclude::modules/olm-webhook-considerations.adoc[leveloffset=+1]\n\n[id=\"olm-webhooks-additional-resources\"]\n== Additional resources\n\n* xref:../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plug-ins]\n" } ]
[ { "from": "system", "value": "Fix version order for breaking changes docs\n" }, { "from": "human", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between major versions -- e.g. `1.x` to `2.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between minor versions -- e.g. `1.x` to `1.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\nSee <<setup-upgrade>> for more info.\n--\ninclude::migrate_3_0.asciidoc[]\n\ninclude::migrate_2_1.asciidoc[]\n\ninclude::migrate_2_2.asciidoc[]\n\ninclude::migrate_2_0.asciidoc[]\n\ninclude::migrate_1_6.asciidoc[]\n\ninclude::migrate_1_4.asciidoc[]\n\ninclude::migrate_1_0.asciidoc[]\n" }, { "from": "gpt", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between major versions -- e.g. `1.x` to `2.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between minor versions -- e.g. `1.x` to `1.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\nSee <<setup-upgrade>> for more info.\n--\ninclude::migrate_3_0.asciidoc[]\n\ninclude::migrate_2_2.asciidoc[]\n\ninclude::migrate_2_1.asciidoc[]\n\ninclude::migrate_2_0.asciidoc[]\n\ninclude::migrate_1_6.asciidoc[]\n\ninclude::migrate_1_4.asciidoc[]\n\ninclude::migrate_1_0.asciidoc[]\n" } ]
[ { "from": "system", "value": "Remove Why Edge? link from preamble\n\nIt was not useful.\n" }, { "from": "human", "value": "= Edge Documentation\n\nEdge is a starting point for creating Clojure projects.\nNot sure if Edge is for you? See <<why-edge.adoc#,Why Edge?>>.\n\n== Get Started\n\nAre you new to Edge?\nThis is the place to start!\n\n. link:https://clojure.org/guides/getting_started[Install clj] (<<windows.adoc#,Additional notes for installing on Windows>>)\n. <<editor.adoc#,Set up your editor for Clojure>>\n. <<setup.adoc#,Set up Edge for your project>>\n. <<dev-guide.adoc#,Developing on Edge>>\n\n== Using Edge\n\n//. Configuration\n//. Components\n* <<dev-guide.adoc#,Developing on Edge>>\n* <<uberjar.adoc#,Producing an Uberjar>>\n* <<elastic-beanstalk.adoc#,Using the Elastic Beanstalk Quickstart>>\n* <<socket-repl.adoc#,Setting up a socket REPL>>\n\n== The Edge Project\n\n* <<why-edge.adoc#,Why Edge?>>\n* <<guidelines.adoc#,Contributing Guidelines>>\n//* Getting help\n//* How to get involved\n//* License\n" }, { "from": "gpt", "value": "= Edge Documentation\n\nEdge is a starting point for creating Clojure projects of all sizes.\n\n== Get Started\n\nAre you new to Edge?\nThis is the place to start!\n\n. link:https://clojure.org/guides/getting_started[Install clj] (<<windows.adoc#,Additional notes for installing on Windows>>)\n. <<editor.adoc#,Set up your editor for Clojure>>\n. <<setup.adoc#,Set up Edge for your project>>\n. <<dev-guide.adoc#,Developing on Edge>>\n\n== Using Edge\n\n//. Configuration\n//. Components\n* <<dev-guide.adoc#,Developing on Edge>>\n* <<uberjar.adoc#,Producing an Uberjar>>\n* <<elastic-beanstalk.adoc#,Using the Elastic Beanstalk Quickstart>>\n* <<socket-repl.adoc#,Setting up a socket REPL>>\n\n== The Edge Project\n\n* <<why-edge.adoc#,Why Edge?>>\n* <<guidelines.adoc#,Contributing Guidelines>>\n//* Getting help\n//* How to get involved\n//* License\n" } ]
[ { "from": "system", "value": "Apply Spotless to User Guide\n\n------------------------------------------------------------------------\nOn behalf of the community, the JUnit Lambda Team thanks Klarna AB\n(http://www.klarna.com) for supporting the JUnit crowdfunding campaign!\n------------------------------------------------------------------------\n" }, { "from": "human", "value": "[[overview]]\n== Overview\n\nThe goal of this document is to provide comprehensive reference documentation for both\nprogrammers writing tests and extension authors.\n\nWARNING: Work in progress!\n\n\n=== Supported Java Versions\n\nJUnit 5 only supports Java 8 and above. However, you can still test classes compiled with\nlower versions.\n\n\n== Installation\n\nSnapshot artifacts are deployed to Sonatype's {snapshot-repo}[snapshots repository].\n\n[[dependency-metadata]]\n=== Dependency Metadata\n\n* *Group ID*: `org.junit`\n* *Version*: `{junit-version}` \n* *Artifact IDs*:\n** `junit-commons`\n** `junit-console`\n** `junit-engine-api`\n** `junit-gradle`\n** `junit-launcher`\n** `junit4-engine`\n** `junit4-runner`\n** `junit5-api`\n** `junit5-engine`\n** `surefire-junit5`\n\nSee also: {snapshot-repo}/org/junit/\n\n=== JUnit 5 Sample Projects\n\nThe {junit5-samples-repo}[`junit5-samples`] repository hosts a collection of sample\nprojects based on JUnit 5. You'll find the respective `build.gradle` and `pom.xml` in\nthe projects below.\n\n* For Gradle, check out the `{junit5-gradle-consumer}` project.\n* For Maven, check out the `{junit5-maven-consumer}` project.\n" }, { "from": "gpt", "value": "[[overview]]\n== Overview\n\nThe goal of this document is to provide comprehensive reference documentation for both\nprogrammers writing tests and extension authors.\n\nWARNING: Work in progress!\n\n\n=== Supported Java Versions\n\nJUnit 5 only supports Java 8 and above. However, you can still test classes compiled with\nlower versions.\n\n\n== Installation\n\nSnapshot artifacts are deployed to Sonatype's {snapshot-repo}[snapshots repository].\n\n[[dependency-metadata]]\n=== Dependency Metadata\n\n* *Group ID*: `org.junit`\n* *Version*: `{junit-version}`\n* *Artifact IDs*:\n** `junit-commons`\n** `junit-console`\n** `junit-engine-api`\n** `junit-gradle`\n** `junit-launcher`\n** `junit4-engine`\n** `junit4-runner`\n** `junit5-api`\n** `junit5-engine`\n** `surefire-junit5`\n\nSee also: {snapshot-repo}/org/junit/\n\n=== JUnit 5 Sample Projects\n\nThe {junit5-samples-repo}[`junit5-samples`] repository hosts a collection of sample\nprojects based on JUnit 5. You'll find the respective `build.gradle` and `pom.xml` in\nthe projects below.\n\n* For Gradle, check out the `{junit5-gradle-consumer}` project.\n* For Maven, check out the `{junit5-maven-consumer}` project.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use 'Main'-Label instead of type-attribute.\n" }, { "from": "human", "value": "[[structure:Default]]\n[role=group,includesConstraints=\"structure:packagesShouldConformToTheMainBuildingBlocks\"]\n\nAll the blackboxes above should correspond to Java packages. Those packages should have no dependencies to other packages outside themselves but for the support or shared package:\n\n[[structure:packagesShouldConformToTheMainBuildingBlocks]]\n[source,cypher,role=constraint,requiresConcepts=\"structure:configPackages,structure:supportingPackages\"]\n.Top level packages should conform to the main building blocks.\n----\nMATCH (a:Artifact {type: 'jar'})\nMATCH (a) -[:CONTAINS]-> (p1:Package) -[:DEPENDS_ON]-> (p2:Package) <-[:CONTAINS]- (a)\nWHERE not p1:Config\n and not (p1) -[:CONTAINS]-> (p2)\n and not p2:Support\n and not p1.fqn = 'ac.simons.biking2.summary'\nRETURN p1, p2\n----" }, { "from": "gpt", "value": "[[structure:Default]]\n[role=group,includesConstraints=\"structure:packagesShouldConformToTheMainBuildingBlocks\"]\n\nAll the blackboxes above should correspond to Java packages. Those packages should have no dependencies to other packages outside themselves but for the support or shared package:\n\n[[structure:packagesShouldConformToTheMainBuildingBlocks]]\n[source,cypher,role=constraint,requiresConcepts=\"structure:configPackages,structure:supportingPackages\"]\n.Top level packages should conform to the main building blocks.\n----\nMATCH (a:Main:Artifact)\nMATCH (a) -[:CONTAINS]-> (p1:Package) -[:DEPENDS_ON]-> (p2:Package) <-[:CONTAINS]- (a)\nWHERE not p1:Config\n and not (p1) -[:CONTAINS]-> (p2)\n and not p2:Support\n and not p1.fqn = 'ac.simons.biking2.summary'\nRETURN p1, p2\n----" } ]
asciidoc
chosen
[ { "from": "system", "value": "Clean up 5.5 M2 release notes\n\nThe following feature enhancement was already shipped with 5.5 M1:\n\n* `@ValueSource` now additionally supports literal values\n of type `boolean` for parameterized tests.\n" }, { "from": "human", "value": "[[release-notes-5.5.0-M2]]\n== 5.5.0-M2️\n\n*Date of Release:* ❓\n\n*Scope:* ❓\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult\nthe link:{junit5-repo}+/milestone/37?closed=1+[5.5 M2] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.5.0-M2-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n\n\n[[release-notes-5.5.0-M2-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* Parameterized tests no longer throw an `ArrayStoreException` when creating human-readable\n test names.\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* New `booleans` property in `ValueSource`.\n\n\n[[release-notes-5.5.0-M2-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n" }, { "from": "gpt", "value": "[[release-notes-5.5.0-M2]]\n== 5.5.0-M2️\n\n*Date of Release:* ❓\n\n*Scope:* ❓\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult\nthe link:{junit5-repo}+/milestone/37?closed=1+[5.5 M2] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.5.0-M2-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n\n\n[[release-notes-5.5.0-M2-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* Parameterized tests no longer throw an `ArrayStoreException` when creating human-readable\n test names.\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n\n\n[[release-notes-5.5.0-M2-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo in sample json\n\nFixes #9253\n" }, { "from": "human", "value": "[[analysis-keyword-marker-tokenfilter]]\n=== Keyword Marker Token Filter\n\nProtects words from being modified by stemmers. Must be placed before\nany stemming filters.\n\n[cols=\"<,<\",options=\"header\",]\n|=======================================================================\n|Setting |Description\n|`keywords` |A list of words to use.\n\n|`keywords_path` |A path (either relative to `config` location, or\nabsolute) to a list of words.\n\n|`ignore_case` |Set to `true` to lower case all words first. Defaults to\n`false`.\n|=======================================================================\n\nHere is an example:\n\n[source,js]\n--------------------------------------------------\nindex :\n analysis :\n analyzer :\n myAnalyzer :\n type : custom\n tokenizer : standard\n filter : [lowercase, protwods, porter_stem] \n filter :\n protwods :\n type : keyword_marker\n keywords_path : analysis/protwords.txt\n--------------------------------------------------\n" }, { "from": "gpt", "value": "[[analysis-keyword-marker-tokenfilter]]\n=== Keyword Marker Token Filter\n\nProtects words from being modified by stemmers. Must be placed before\nany stemming filters.\n\n[cols=\"<,<\",options=\"header\",]\n|=======================================================================\n|Setting |Description\n|`keywords` |A list of words to use.\n\n|`keywords_path` |A path (either relative to `config` location, or\nabsolute) to a list of words.\n\n|`ignore_case` |Set to `true` to lower case all words first. Defaults to\n`false`.\n|=======================================================================\n\nHere is an example:\n\n[source,js]\n--------------------------------------------------\nindex :\n analysis :\n analyzer :\n myAnalyzer :\n type : custom\n tokenizer : standard\n filter : [lowercase, protwords, porter_stem] \n filter :\n protwords :\n type : keyword_marker\n keywords_path : analysis/protwords.txt\n--------------------------------------------------\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add myself to the authors file\n" }, { "from": "human", "value": "= Authors and contributors\n\n- Simon Cruanes (`companion_cube`)\n- Drup (Gabriel Radanne)\n- Jacques-Pascal Deplaix\n- Nicolas Braud-Santoni\n- Whitequark (Peter Zotov)\n- hcarty (Hezekiah M. Carty)\n- struktured (Carmelo Piccione)\n- Bernardo da Costa\n- Vincent Bernardoff (vbmithr)\n- Emmanuel Surleau (emm)\n- Guillaume Bury (guigui)\n- JP Rodi\n- Florian Angeletti (@octachron)\n- Johannes Kloos\n- Geoff Gole (@gsg)\n- Roma Sokolov (@little-arhat)\n- Malcolm Matalka (`orbitz`)\n- David Sheets (@dsheets)\n- Glenn Slotte (glennsl)\n- @LemonBoy\n- Leonid Rozenberg (@rleonid)\n- Bikal Gurung (@bikalgurung)\n- Fabian Hemmer (copy)\n- Maciej Woś (@lostman)\n- Orbifx (Stavros Polymenis)\n- Rand (@rand00)\n- Dave Aitken (@actionshrimp)\n- Etienne Millon (@emillon)\n- Christopher Zimmermann (@madroach)\n- Jules Aguillon (@julow)\n" }, { "from": "gpt", "value": "= Authors and contributors\n\n- Simon Cruanes (`companion_cube`)\n- Drup (Gabriel Radanne)\n- Jacques-Pascal Deplaix\n- Nicolas Braud-Santoni\n- Whitequark (Peter Zotov)\n- hcarty (Hezekiah M. Carty)\n- struktured (Carmelo Piccione)\n- Bernardo da Costa\n- Vincent Bernardoff (vbmithr)\n- Emmanuel Surleau (emm)\n- Guillaume Bury (guigui)\n- JP Rodi\n- Florian Angeletti (@octachron)\n- Johannes Kloos\n- Geoff Gole (@gsg)\n- Roma Sokolov (@little-arhat)\n- Malcolm Matalka (`orbitz`)\n- David Sheets (@dsheets)\n- Glenn Slotte (glennsl)\n- @LemonBoy\n- Leonid Rozenberg (@rleonid)\n- Bikal Gurung (@bikalgurung)\n- Fabian Hemmer (copy)\n- Maciej Woś (@lostman)\n- Orbifx (Stavros Polymenis)\n- Rand (@rand00)\n- Dave Aitken (@actionshrimp)\n- Etienne Millon (@emillon)\n- Christopher Zimmermann (@madroach)\n- Jules Aguillon (@julow)\n- Metin Akat (@loxs)\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add images directory attribute to devguide" }, { "from": "human", "value": "= OmniJ Developer's Guide\nSean Gilligan\nv0.1, July 30, 2015: Early draft\n:numbered:\n:toc:\n:toclevels: 3\n:linkattrs:\n\nParagraph TBD.\n\n== Introduction to OmniJ\n\nThis section is TBD. For now the project http://github.com/OmniLayer/OmniJ/README.adoc[README] is the best place to get started.\n\n== JSON-RPC Clients\n\n[plantuml, diagram-classes, svg]\n....\nskinparam packageStyle Rect\nskinparam shadowing false\nhide empty members\n\nnamespace com.msgilligan.bitcoin.rpc {\n class RPCClient\n RPCClient <|-- class DynamicRPCClient << Groovy >>\n RPCClient <|-- BitcoinClient\n BitcoinClient <|-- class BitcoinCLIClient << Groovy >>\n}\n\nnamespace foundation.omni.rpc {\n com.msgilligan.bitcoin.rpc.BitcoinClient <|-- OmniClient\n OmniClient <|-- OmniExtendedClient\n OmniExtendedClient <|-- class OmniCLIClient << Groovy >>\n}\n....\n\n\n\n" }, { "from": "gpt", "value": "= OmniJ Developer's Guide\nSean Gilligan\nv0.1, July 30, 2015: Early draft\n:numbered:\n:toc:\n:toclevels: 3\n:linkattrs:\n:imagesdir: images\n\nParagraph TBD.\n\n== Introduction to OmniJ\n\nThis section is TBD. For now the project http://github.com/OmniLayer/OmniJ/README.adoc[README] is the best place to get started.\n\n== JSON-RPC Clients\n\n[plantuml, diagram-classes, svg]\n....\nskinparam packageStyle Rect\nskinparam shadowing false\nhide empty members\n\nnamespace com.msgilligan.bitcoin.rpc {\n class RPCClient\n RPCClient <|-- class DynamicRPCClient << Groovy >>\n RPCClient <|-- BitcoinClient\n BitcoinClient <|-- class BitcoinCLIClient << Groovy >>\n}\n\nnamespace foundation.omni.rpc {\n com.msgilligan.bitcoin.rpc.BitcoinClient <|-- OmniClient\n OmniClient <|-- OmniExtendedClient\n OmniExtendedClient <|-- class OmniCLIClient << Groovy >>\n}\n....\n\n\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Put Best practices in a separate section.\n" }, { "from": "human", "value": "= Groundwork\n:toc:\n:source-highlighter: pygments\n\nlink:index.html[back to index page]\n\n== Laying the Groundwork\n\nTo redeploy RecordTrac, you need support from key stakeholders _within_ government. The administrator or elected official in charge of overseeing public records request must agree to use this system, and instruct their colleagues to do so. \n\nRecordTrac assumes there is a contact for a given municipality or department within the municipality to handle public records requests. If a government agency has no process at all in place, but is interested in using the system, they could start with one ‘champion’ that is knowledgeable about who has access to different records. The champion can then route requests to the proper parties within government who may have the documents or information a requester needs. \n\n\n== Best Practices to Consider\n\nRecordTrac is flexible and could complement almost any governmental agency's process for fulfilling records requests. There are however, best practices a governmental agency should adopt to really leverage the power of RecordTrac. Below is an example lifted from the City of Oakland:\n\n* Track all public records requests through RecordTrac, even if you originally received it over the phone, by email, fax, or mail. \n\n* Don't reveal sensitive information in your message or upload documents that haven't been thoroughly redacted. Everything you do on the site is immediately viewable to the public.\n\n* Upload scanned copies of the [redacted] records online instead of sending the document only to the requester. This prevents you from answering the same public records request multiple times. It also provides proof you responded to the request and exactly what you provided.\n\n* Communicate with everyone through RecordTrac. Only take conversations offline if it involves confidential or sensitive information.\n\n* Review requests no later than two business days after you receive them. This ensures the person responsible for fulfilling a records request gets it in time if it needs to be re-routed to him or her.\n" }, { "from": "gpt", "value": "= Groundwork\n:toc:\n:source-highlighter: pygments\n\nlink:index.html[back to index page]\n\n== Laying the Groundwork\n\nTo redeploy RecordTrac, you need support from key stakeholders _within_ government. The administrator or elected official in charge of overseeing public records request must agree to use this system, and instruct their colleagues to do so. \n\nRecordTrac assumes there is a contact for a given municipality or department within the municipality to handle public records requests. If a government agency has no process at all in place, but is interested in using the system, they could start with one ‘champion’ that is knowledgeable about who has access to different records. The champion can then route requests to the proper parties within government who may have the documents or information a requester needs. \n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update to the example section.\n" }, { "from": "human", "value": "= Qprompt\n\n== Introduction\nThis project provides a Python 2.x library that allows the user to quickly create CLI prompts for user input. The main features are the following:\n\n - Simple multi-entry menus.\n - Prompt for yes/no response.\n - Prompt for integer response.\n - Prompt for float response.\n - Optional default value.\n - Optional validity check.\n - Should work on any platform without additional dependencies.\n\n== Status\nCurrently, this project is **under active development**. The contents of the repository should be considered unstable during active development.\n\n== Requirements\nQprompt should run on any Python 2.x interpreter without additional dependencies.\n\n== Installation\nQprompt can be installed with pip using the following command: `pip install qprompt`\n\nAdditional, Qprompt can be installed from source by running: `python setup.py install`\n\n== Examples\nExample of basic information prompting:\n\n--------\ninclude::examples/ask_1.py[]\n--------\n\nExample of menu usage:\n\n--------\ninclude::examples/menu_1.py[]\n--------\n\n== Similar\nThe following projects are similar and may be worth checking out:\n\n - https://github.com/Sleft/cliask[cliask]\n - https://github.com/aventurella/promptly[Promptly]\n - https://github.com/tylerdave/prompter[prompter]\n - https://github.com/magmax/python-inquirer[python-inquirer]\n" }, { "from": "gpt", "value": "= Qprompt\n\n== Introduction\nThis project provides a Python 2.x library that allows the user to quickly create CLI prompts for user input. The main features are the following:\n\n - Simple multi-entry menus.\n - Prompt for yes/no response.\n - Prompt for integer response.\n - Prompt for float response.\n - Optional default value.\n - Optional validity check.\n - Should work on any platform without additional dependencies.\n\n== Status\nCurrently, this project is **under active development**. The contents of the repository should be considered unstable during active development.\n\n== Requirements\nQprompt should run on any Python 2.x interpreter without additional dependencies.\n\n== Installation\nQprompt can be installed with pip using the following command: `pip install qprompt`\n\nAdditional, Qprompt can be installed from source by running: `python setup.py install`\n\n== Examples\nThe following are basic examples of Qprompt (all examples can be found https://github.com/jeffrimko/Qprompt/tree/master/examples[here]):\n\n - https://github.com/jeffrimko/Qprompt/blob/master/examples/ask_1.py[`examples/ask_1.py`] - Basic info prompting.\n - https://github.com/jeffrimko/Qprompt/blob/master/examples/menu_1.py[`examples/menu_1.py`] - Basic menu usage.\n\n== Similar\nThe following projects are similar and may be worth checking out:\n\n - https://github.com/Sleft/cliask[cliask]\n - https://github.com/aventurella/promptly[Promptly]\n - https://github.com/tylerdave/prompter[prompter]\n - https://github.com/magmax/python-inquirer[python-inquirer]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove nested includes from adoc\n" }, { "from": "human", "value": "*--map-http-status* 'TEXT'::\n\nMap non success HTTP response codes to exit codes other than 1.\ne.g. \"--map-http-satus 403=0,404=0\" would exit with 0 even if a 403\nor 404 http error code was received. Valid exit codes are 0,1,50-99.\n\ninclude::format_option.adoc[]\n\ninclude::jmespath_option.adoc[]\n\ninclude::help_option.adoc[]\n\ninclude::verbose_option.adoc[]\n" }, { "from": "gpt", "value": "*--map-http-status* 'TEXT'::\n\nMap non success HTTP response codes to exit codes other than 1.\ne.g. \"--map-http-satus 403=0,404=0\" would exit with 0 even if a 403\nor 404 http error code was received. Valid exit codes are 0,1,50-99.\n\n*-F, --format* '[json|text]'::\n\nSet the output format for stdout. Defaults to \"text\".\n\n*--jq, --jmespath* 'EXPR'::\n\nSupply a JMESPath expression to apply to json output.\nTakes precedence over any specified '--format' and forces the format to be json\nprocessed by this expression.\n+\nA full specification of the JMESPath language for querying JSON structures may\nbe found at https://jmespath.org/\n\n*-h, --help*::\n\nShow help text for this command.\n\n*-v, --verbose*::\n\nControl the level of output. \n+\nUse -v or --verbose to show warnings and any additional text output.\n+ \nUse -vv to add informative logging.\n+\nUse -vvv to add debug logging and full stack on any errors. (equivalent to\n-v --debug)\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update documentation to current interface on cloud.rh.c" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// administering_a_cluster/dedicated-admin-role.adoc\n\n[id=\"dedicated-managing-dedicated-administrators_{context}\"]\n= Managing {product-title} administrators\n\nAdministrator roles are managed using a `dedicated-admins` group on the cluster.\nExisting members of this group can edit membership via the\nlink:https://cloud.redhat.com/openshift[{cloud-redhat-com}] site.\n\n[id=\"dedicated-administrators-adding-user_{context}\"]\n== Adding a user\n. Navigate to the *Cluster Details* page and *Users* tab.\n. Click the *Add user* button. (first user only)\n. Enter the user name and select the group (*dedicated-admins*)\n. Click the *Add* button.\n\n[id=\"dedicated-administrators-removing-user_{context}\"]\n== Removing a user\n. Navigate to the *Cluster Details* page and *Users* tab.\n. Click the *X* to the right of the user / group combination to be deleted..\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// administering_a_cluster/dedicated-admin-role.adoc\n\n[id=\"dedicated-managing-dedicated-administrators_{context}\"]\n= Managing {product-title} administrators\n\nAdministrator roles are managed using a `dedicated-admins` group on the cluster.\nExisting members of this group can edit membership via the\nlink:https://cloud.redhat.com/openshift[{cloud-redhat-com}] site.\n\n[id=\"dedicated-administrators-adding-user_{context}\"]\n== Adding a user\n. Navigate to the *Cluster Details* page and *Access Control* tab.\n. Click the *Add user* button. (first user only)\n. Enter the user name and select the group (*dedicated-admins*)\n. Click the *Add* button.\n\n[id=\"dedicated-administrators-removing-user_{context}\"]\n== Removing a user\n. Navigate to the *Cluster Details* page and *Access Control* tab.\n. Click the 3 vertical dots to the right of the user / group combination to show a menu, then click on *Delete*.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Set date for breaking Relx 4 change\n" }, { "from": "human", "value": "2020/06/18: Concuerror integration has been added. It is\n currently minimal but usable. Experimentation\n and feedback is welcome.\n\n2020/11/30: Support for publishing Hex releases and docs\n has been added. It is currently experimental.\n Feedback is more than welcome.\n\n2022/03/25: The -Wrace_conditions Dialyzer flag was removed\n as it is no longer available starting from OTP 25.\n\n2022/??/??: Relx has been updated to v4. Relx v4 is no longer\n an escript, therefore breaking changes were\n introduced. The `RELX`, `RELX_URL` and `RELX_OPTS`\n variables were removed. The `relx` project must\n be added as a `DEPS`, `BUILD_DEPS` or `REL_DEPS`\n dependency to enable building releases. For example:\n `REL_DEPS = relx`. Relx itself has had some\n additional changes: the `start` command has\n been replaced by `daemon`, and configuration\n defaults have changed so that you may need\n to add the following to your relx.config file:\n\n ``` erlang\n {dev_mode, false}.\n {include_erts, true}.\n ```\n" }, { "from": "gpt", "value": "2020/06/18: Concuerror integration has been added. It is\n currently minimal but usable. Experimentation\n and feedback is welcome.\n\n2020/11/30: Support for publishing Hex releases and docs\n has been added. It is currently experimental.\n Feedback is more than welcome.\n\n2022/03/25: The -Wrace_conditions Dialyzer flag was removed\n as it is no longer available starting from OTP 25.\n\n2022/05/20: Relx has been updated to v4. Relx v4 is no longer\n an escript, therefore breaking changes were\n introduced. The `RELX`, `RELX_URL` and `RELX_OPTS`\n variables were removed. The `relx` project must\n be added as a `DEPS`, `BUILD_DEPS` or `REL_DEPS`\n dependency to enable building releases. For example:\n `REL_DEPS = relx`. Relx itself has had some\n additional changes: the `start` command has\n been replaced by `daemon`, and configuration\n defaults have changed so that you may need\n to add the following to your relx.config file:\n\n ``` erlang\n {dev_mode, false}.\n {include_erts, true}.\n ```\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add offset examples to Zones\n" }, { "from": "human", "value": "== Time Zones & Offset\n\nExtract a zone from a `java.time.ZonedDateTime`:\n====\n[source.code,clojure]\n----\n(t/zone (t/zoned-date-time \"2000-01-01T00:00:00Z[Europe/Paris]\"))\n----\n\n[source.code,clojure]\n----\n(t/zone)\n----\n\n====\n\nCreate a `java.time.ZonedDateTime` in a particular time zone:\n\n====\n[source.code,clojure]\n----\n(t/in (t/instant \"2000-01-01T00:00\") \"Australia/Darwin\")\n----\n====\n\n\n=== TBD : offsets" }, { "from": "gpt", "value": "== Time Zones & Offset\n\nExtract a zone from a `java.time.ZonedDateTime`:\n====\n[source.code,clojure]\n----\n(t/zone (t/zoned-date-time \"2000-01-01T00:00:00Z[Europe/Paris]\"))\n----\n\n[source.code,clojure]\n----\n(t/zone)\n----\n\n====\n\nCreate a `java.time.ZonedDateTime` in a particular time zone:\n\n====\n[source.code,clojure]\n----\n(t/in (t/instant \"2000-01-01T00:00\") \"Australia/Darwin\")\n----\n====\n\nGive the `OffsetDateTime` instead of `ZonedDateTime`:\n\n====\n[source.code,clojure]\n----\n(t/offset-date-time (t/zoned-date-time \"2000-01-01T00:00:00Z[Australia/Darwin]\"))\n----\n====\n\nSpecify the offset for a `LocalDateTime`:\n====\n[source.code,clojure]\n----\n(t/offset-by (t/date-time \"2018-01-01T00:00\") 9)\n----\n====\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add title image to pdf\n" }, { "from": "human", "value": "= geo-shell\nJared Erickson\nv0.7-SNAPSHOT\nifndef::imagesdir[:imagesdir: images]\n\ninclude::intro.adoc[]\n\ninclude::workspace.adoc[]\n\ninclude::layer.adoc[]\n\ninclude::format.adoc[]\n\ninclude::raster.adoc[]\n\ninclude::tile.adoc[]\n\ninclude::style.adoc[]\n\ninclude::map.adoc[]\n\ninclude::builtin.adoc[]" }, { "from": "gpt", "value": "= Geo Shell\nJared Erickson\nv0.7-SNAPSHOT\n:title-logo-image: image:geoshell.png[pdfwidth=5.5in,align=center]\nifndef::imagesdir[:imagesdir: images]\n\ninclude::intro.adoc[]\n\ninclude::workspace.adoc[]\n\ninclude::layer.adoc[]\n\ninclude::format.adoc[]\n\ninclude::raster.adoc[]\n\ninclude::tile.adoc[]\n\ninclude::style.adoc[]\n\ninclude::map.adoc[]\n\ninclude::builtin.adoc[]" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add generated data type definitions to Swagger documentation\n" }, { "from": "human", "value": ":generated: ../../../target/generated-docs/asciidoc\n\ninclude::{generated}/overview.adoc[]\ninclude::manual_rest_doc.adoc[]\ninclude::{generated}/paths.adoc[]\n" }, { "from": "gpt", "value": ":generated: ../../../target/generated-docs/asciidoc\n\ninclude::{generated}/overview.adoc[]\ninclude::manual_rest_doc.adoc[]\ninclude::{generated}/paths.adoc[]\ninclude::{generated}/definitions.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove note about app starting up\n\nIt's no longer applicable to the branch that is purely backend components without any frontend.\n" }, { "from": "human", "value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot 2.1.\n\nIt utilizes: \n\n* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA]\n* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource` defined in the application.yml\n* Tests for components\n\n Note: It may take a few seconds for the app to start if no one has not accessed it recently" }, { "from": "gpt", "value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot 2.1.\n\nIt utilizes: \n\n* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA]\n* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource` defined in the application.yml\n* Tests for components\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add how to use asciidoctor plugin in dev\n" }, { "from": "human", "value": "[[development]]\n== Development\n\nGithub repository: {datasource-proxy}\n\n=== Build Documentation\n\n```sh\n> ./mvnw asciidoctor:process-asciidoc@output-html\n```\n" }, { "from": "gpt", "value": "[[development]]\n== Development\n\nGithub repository: {datasource-proxy}\n\n=== Build Documentation\n\nGenerate `index.html`\n\n```sh\n> ./mvnw asciidoctor:process-asciidoc@output-html\n```\n\nHttp preview\n\n```sh\n> ./mvnw asciidoctor:http@output-html\n```" } ]
asciidoc
chosen
[ { "from": "system", "value": "Revert \"Revert \"Revert \"Test trigger on push\"\"\"\n\nThis reverts commit f235543226884c6293a70b2540441e5b1528ff6c.\n" }, { "from": "human", "value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n\n-- will remove --\n" }, { "from": "gpt", "value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Replace \"the next\" with \"the following\"" }, { "from": "human", "value": "The next transformations can be applied to any class to simplify greatly the development of Multi-Tenant applications. These include:\n\n- `@CurrentTenant` - Resolve the current tenant for the context of a class or method\n- `@Tenant` - Use a specific tenant for the context of a class or method\n- `@WithoutTenant` - Execute logic without a specific tenant (using the default connection)\n\nFor example:\n\n[source,groovy]\n----\nimport grails.gorm.multitenancy.*\n\n// resolve the current tenant for every method\n@CurrentTenant\nclass TeamService {\n\n // execute the countPlayers method without a tenant id\n @WithoutTenant\n int countPlayers() {\n Player.count()\n }\n\n // use the tenant id \"another\" for all GORM logic within the method\n @Tenant({\"another\"})\n List<Team> allTwoTeams() {\n Team.list()\n }\n\n List<Team> listTeams() {\n Team.list(max:10)\n }\n\n @Transactional\n void addTeam(String name) {\n new Team(name:name).save(flush:true)\n }\n}\n----" }, { "from": "gpt", "value": "The following transformations can be applied to any class to simplify greatly the development of Multi-Tenant applications. These include:\n\n- `@CurrentTenant` - Resolve the current tenant for the context of a class or method\n- `@Tenant` - Use a specific tenant for the context of a class or method\n- `@WithoutTenant` - Execute logic without a specific tenant (using the default connection)\n\nFor example:\n\n[source,groovy]\n----\nimport grails.gorm.multitenancy.*\n\n// resolve the current tenant for every method\n@CurrentTenant\nclass TeamService {\n\n // execute the countPlayers method without a tenant id\n @WithoutTenant\n int countPlayers() {\n Player.count()\n }\n\n // use the tenant id \"another\" for all GORM logic within the method\n @Tenant({\"another\"})\n List<Team> allTwoTeams() {\n Team.list()\n }\n\n List<Team> listTeams() {\n Team.list(max:10)\n }\n\n @Transactional\n void addTeam(String name) {\n new Team(name:name).save(flush:true)\n }\n}\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add apachebeat to the list of beats from opensource" }, { "from": "human", "value": "[[community-beats]]\n== Community Beats\n\nThe open source community has been hard at work developing new Beats. You can check\nout a few of them here:\n\n[horizontal]\nhttps://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container\nstatistics and indexes them in Elasticsearch\nhttps://github.com/christiangalsterer/httpbeat[httpbeat]:: Polls multiple HTTP(S) endpoints and sends the data to\nLogstash, Elasticsearch. Supports all HTTP methods and proxies.\nhttps://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx\nhttps://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list\nof targets and stores the round trip time (RTT) in Elasticsearch\nhttps://github.com/mrkschan/uwsgibeat[uwsgibeat]:: Reads stats from uWSGI\nhttps://github.com/kozlice/phpfpmbeat[phpfpmbeat]:: Reads status from PHP-FPM\n\nHave you created a Beat that's not listed? Open a pull request to add your link\nhere: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc\n\nNOTE: Elastic provides no warranty or support for community-sourced Beats.\n\n[[contributing-beats]]\n=== Contributing to Beats\n\nRemember, you can be a Beats developer, too. <<new-beat, Learn how>>\n\n" }, { "from": "gpt", "value": "[[community-beats]]\n== Community Beats\n\nThe open source community has been hard at work developing new Beats. You can check\nout a few of them here:\n\n[horizontal]\nhttps://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container\nstatistics and indexes them in Elasticsearch\nhttps://github.com/christiangalsterer/httpbeat[httpbeat]:: Polls multiple HTTP(S) endpoints and sends the data to\nLogstash, Elasticsearch. Supports all HTTP methods and proxies.\nhttps://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx\nhttps://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list\nof targets and stores the round trip time (RTT) in Elasticsearch\nhttps://github.com/mrkschan/uwsgibeat[uwsgibeat]:: Reads stats from uWSGI\nhttps://github.com/kozlice/phpfpmbeat[phpfpmbeat]:: Reads status from PHP-FPM\nhttps://github.com/radoondas/apachebeat[apachebeat]:: Reads status from Apache HTTPD server-status\n\nHave you created a Beat that's not listed? Open a pull request to add your link\nhere: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc\n\nNOTE: Elastic provides no warranty or support for community-sourced Beats.\n\n[[contributing-beats]]\n=== Contributing to Beats\n\nRemember, you can be a Beats developer, too. <<new-beat, Learn how>>\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add section on breadcrumbs endpoint" }, { "from": "human", "value": "= RESTful API Endpoint specification\n\n== Nodes\n\n=== Idea for accessing fields directly\n\n* RUD: /nodes/:uuid/relatedProducts/:uuid -> Pageable list of nodes\n* R: /nodes/:uuid/name\n\nTODO: Do we want to restrict the primitiv types to read only? The user can update the node via PUT /nodes/:uuid anyway.\n\n== Webroot\n\n\n== Tags\n\nTODO: Define how tag familes are setup and tags are assigned to those families.\n\n== Users / Groups / Roles\n\n\n== Projects\n\nTODO: Define how languages are assigned to projects\n" }, { "from": "gpt", "value": "= RESTful API Endpoint specification\n\n== Nodes\n\n=== Idea for accessing fields directly\n\n* RUD: /nodes/:uuid/relatedProducts/:uuid -> Pageable list of nodes\n* R: /nodes/:uuid/name\n\nTODO: Do we want to restrict the primitiv types to read only? The user can update the node via PUT /nodes/:uuid anyway.\n\n== Breadcrumbs\n\n`/breadcrumb/:uuid` -> object containing breadcrumb to the node specified by uuid in the following format:\n\n[source,json]\n----\n[ \n { \"uuid\": \"e0c64dsgasdgasdgdgasdgasdgasdg33\", \"name\": \"products\" }, \n { \"uuid\": \"e0c64ad00a9343cc864ad00a9373cc23\", \"name\": \"aeroplane.en.html\" }\n]\n----\n\nTODO: Where does the \"name\" property come from? It should be the field specified by \"segmentField\" in the schema. \n\nTwo issues:\n\n1. Should we normalize the property name to \"name\", or retain the name of the field specified by \"segmentField\"?\n2. If the schema of a node in the breadcrumb path does not specify \"segmentField\", what do we do? Just display the uuid?\n\n== Webroot\n\n\n== Tags\n\nTODO: Define how tag familes are setup and tags are assigned to those families.\n\n== Users / Groups / Roles\n\n\n== Projects\n\nTODO: Define how languages are assigned to projects\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add note to add keys for dokku\n" }, { "from": "human", "value": "= conoha/dokku-apps\n\n\n.Add pytohn-getting-started app\n----\nalias dokku=\"ssh -t dokku@conoha\"\n\ncd python-getting-started\n\ndokku apps:create python-getting-started\ngit remote add dokku dokku@conoha:python-getting-started\ngit push dokku master\n----\n\nAnd this app can be available at http://python-getting-started.d.10sr.f5.si .\n" }, { "from": "gpt", "value": "= conoha/dokku-apps\n\nFirst you have to run:\n\n----\ncat .ssh/id_rsa.pub | ssh conoha 'sudo sshcommand acl-add dokku dokkudeploy'\n----\n\n\n\n.Add pytohn-getting-started app\n----\nalias dokku=\"ssh -t dokku@conoha\"\n\ncd python-getting-started\n\ndokku apps:create python-getting-started\ngit remote add dokku dokku@conoha:python-getting-started\ngit push dokku master\n----\n\nAnd this app can be available at http://python-getting-started.d.10sr.f5.si .\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update public Jenkins job URL" }, { "from": "human", "value": "= Continuous integration\n:awestruct-description: Check if the latest nightly build passes all automated tests.\n:awestruct-layout: normalBase\n:showtitle:\n\n== OptaPlanner\n\nWe use Jenkins for continuous integration.\n\n*Show https://hudson.jboss.org/hudson/job/optaplanner/[the public Jenkins job].* This is a mirror of a Red Hat internal Jenkins job.\n\nKeep the build blue!\n\n== Project website (optaplanner.org)\n\nWe use Travis to build this project website, see https://travis-ci.org/droolsjbpm/optaplanner-website[the travis job].\n" }, { "from": "gpt", "value": "= Continuous integration\n:awestruct-description: Check if the latest nightly build passes all automated tests.\n:awestruct-layout: normalBase\n:showtitle:\n\n== OptaPlanner\n\nWe use Jenkins for continuous integration.\n\n*Show https://jenkins-kieci.rhcloud.com/job/optaplanner/[the public Jenkins job].* This is a mirror of a Red Hat internal Jenkins job.\n\nKeep the build green!\n\n== Project website (optaplanner.org)\n\nWe use Travis to build this project website, see https://travis-ci.org/droolsjbpm/optaplanner-website[the travis job].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix command to get DNS logs\n\n- https://bugzilla.redhat.com/show_bug.cgi?id=1834702\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * dns/dns-operator.adoc\n\n[id=\"nw-dns-operator-logs_{context}\"]\n= DNS Operator logs\n\nYou can view DNS Operator logs by using the `oc logs` command.\n\n.Procedure\n\nView the logs of the DNS Operator:\n----\n$ oc logs --namespace=openshift-dns-operator deployment/dns-operator\n----\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * dns/dns-operator.adoc\n\n[id=\"nw-dns-operator-logs_{context}\"]\n= DNS Operator logs\n\nYou can view DNS Operator logs by using the `oc logs` command.\n\n.Procedure\n\nView the logs of the DNS Operator:\n----\n$ oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add documentation for the parameters offered by the exportEA configuration.\n" }, { "from": "human", "value": ":filename: manual/03_task_exportEA.adoc\nifndef::imagesdir[:imagesdir: ../images]\n\n= exportEA\n\nIMPORTANT: Currently this feature is WINDOWS-only. https://github.com/docToolchain/docToolchain/issues/231[See related issue]\n\ninclude::feedback.adoc[]\n\nimage::ea/Manual/exportEA.png[]\n\nTIP: Blog-Posts: https://rdmueller.github.io/jria2eac/[JIRA to Sparx EA], https://rdmueller.github.io/sparx-ea/[Did you ever wish you had better Diagrams?]\n\n== Source\n\n.build.gradle\n[source,groovy]\n----\ninclude::../../../scripts/exportEA.gradle[tags=exportEA]\n----\n\n.scripts/exportEAP.vbs\n[source]\n----\ninclude::../../../scripts/exportEAP.vbs[]\n----" }, { "from": "gpt", "value": ":filename: manual/03_task_exportEA.adoc\nifndef::imagesdir[:imagesdir: ../images]\n\n= exportEA\n\nIMPORTANT: Currently this feature is WINDOWS-only. https://github.com/docToolchain/docToolchain/issues/231[See related issue]\n\ninclude::feedback.adoc[]\n\nimage::ea/Manual/exportEA.png[]\n\nTIP: Blog-Posts: https://rdmueller.github.io/jria2eac/[JIRA to Sparx EA], https://rdmueller.github.io/sparx-ea/[Did you ever wish you had better Diagrams?]\n\n== Configuration\nBy default no special configuration is necessary.\nBut, to be more specific on the project and its packages to be used for export,\ntwo optional parameter configurations are available.\nThe parameters can be used independently from each other.\nA sample how to edit your projects Config.groovy is given in the 'Config.groovy'\nof the docToolchain project itself.\n\nconnection::\nSet the connection to a certain project or comment it out to use all project files inside the src folder or its child folder.\n\npackageFilter::\nAdd one or multiple packageGUIDs to be used for export. All packages are analysed, if no packageFilter is set.\n\n== Source\n\n.build.gradle\n[source,groovy]\n----\ninclude::../../../scripts/exportEA.gradle[tags=exportEA]\n----\n\n.scripts/exportEAP.vbs\n[source]\n----\ninclude::../../../scripts/exportEAP.vbs[]\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add Improving the Problem section.\n" }, { "from": "human", "value": ":doctitle:\r\n:author: Jerod Gawne\r\n:email: [email protected]\r\n:docdate: June 07, 2018\r\n:revdate: {docdatetime}\r\n:src-uri: https://github.com/jerodg/hackerrank\r\n\r\n:difficulty:\r\n:time-complexity:\r\n:required-knowledge:\r\n:advanced-knowledge:\r\n:solution-variability:\r\n:score:\r\n:keywords: python, {required-knowledge}, {advanced-knowledge}\r\n:summary:\r\n\r\n:doctype: article\r\n:sectanchors:\r\n:sectlinks:\r\n:sectnums:\r\n:toc:\r\n\r\n{summary}\r\n\r\n== Learning\r\n\r\n== Tutorial\r\n\r\n== Improving the Template\r\n=== Convention\r\n.Missing\r\n* shebang\r\n* encoding\r\n* doc-comments\r\n\r\n=== Extraneous\r\nN/A\r\n\r\n=== Pep8\r\n* No new-line at end of file\r\n\r\n=== Syntax\r\nN/A\r\n\r\n== Reference\r\n" }, { "from": "gpt", "value": ":doctitle:\r\n:author: Jerod Gawne\r\n:email: [email protected]\r\n:docdate: June 07, 2018\r\n:revdate: {docdatetime}\r\n:src-uri: https://github.com/jerodg/hackerrank\r\n\r\n:difficulty:\r\n:time-complexity:\r\n:required-knowledge:\r\n:advanced-knowledge:\r\n:solution-variability:\r\n:score:\r\n:keywords: python, {required-knowledge}, {advanced-knowledge}\r\n:summary:\r\n\r\n:doctype: article\r\n:sectanchors:\r\n:sectlinks:\r\n:sectnums:\r\n:toc:\r\n\r\n{summary}\r\n\r\n== Learning\r\n\r\n== Tutorial\r\n\r\n== Improving the Problem\r\n\r\n== Improving the Template\r\n=== Convention\r\n.Missing\r\n* shebang\r\n* encoding\r\n* doc-comments\r\n\r\n=== Extraneous\r\nN/A\r\n\r\n=== Pep8\r\n* No new-line at end of file\r\n\r\n=== Syntax\r\nN/A\r\n\r\n== Reference\r\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix the example document error \n\nFixed the document error of camel-example-spring-boot-supervising-route-controller" }, { "from": "human", "value": "# Camel Supervising Route Controller Example Spring Boot\n\nThis example shows how to work with a simple Apache Camel application using Spring Boot and a Supervising Route Controller.\n\n## How to run\n\nYou can run this example using\n\n mvn spring-boot:run\n\nBeside JMX you can use Spring Boot Endpoints to interact with the routes:\n\n* To get info about the routes\n+\n[source]\n----\ncurl -XGET -s http://localhost:8080/actuator/camelroutes\n----\n+\n+* To get details about a route\n++\n+[source]\n+----\n+curl -XGET -s http://localhost:8080/actuator/camelroutes/{id}/detail\n+----\n\n* To get info about a route\n+\n[source]\n----\ncurl -XGET -s http://localhost:8080/actuator/camelroutes/{id}/info\n----\n\n* To stop a route\n+\n[source]\n----\ncurl -XPOST -H \"Content-Type: application/json\" -s http://localhost:8080/actuator/camelroutes/{id}/stop\n----\n\n* To start a route\n+\n[source]\n----\ncurl -XPOST -H \"Content-Type: application/json\" -s http://localhost:8080/actuator/camelroutes/{id}/start\n----\n\n\n## More information\n\nYou can find more information about Apache Camel at the website: http://camel.apache.org/\n" }, { "from": "gpt", "value": "# Camel Supervising Route Controller Example Spring Boot\n\nThis example shows how to work with a simple Apache Camel application using Spring Boot and a Supervising Route Controller.\n\n## How to run\n\nYou can run this example using\n\n mvn spring-boot:run\n\nBeside JMX you can use Spring Boot Endpoints to interact with the routes:\n\n* To get info about the routes\n+\n[source]\n----\ncurl -XGET -s http://localhost:8080/actuator/camelroutes\n----\n\n* To get details about a route\n+\n[source]\n----\ncurl -XGET -s http://localhost:8080/actuator/camelroutes/{id}/detail\n----\n\n* To get info about a route\n+\n[source]\n----\ncurl -XGET -s http://localhost:8080/actuator/camelroutes/{id}/info\n----\n\n* To stop a route\n+\n[source]\n----\ncurl -XPOST -H \"Content-Type: application/json\" -s http://localhost:8080/actuator/camelroutes/{id}/stop\n----\n\n* To start a route\n+\n[source]\n----\ncurl -XPOST -H \"Content-Type: application/json\" -s http://localhost:8080/actuator/camelroutes/{id}/start\n----\n\n\n## More information\n\nYou can find more information about Apache Camel at the website: http://camel.apache.org/\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix image tags in updated readme\n" }, { "from": "human", "value": "= The Ehcache 3.x line is currently the development line.\n\nStatus of the build: image:https://ehcache.ci.cloudbees.com/buildStatus/icon?job=ehcache3\n\nFor more information, you might want to go check the https://github.com/ehcache/ehcache3/wiki[wiki].\n\nimage:http://cloudbees.prod.acquia-sites.com/sites/default/files/styles/large/public/Button-Built-on-CB-1.png?itok=3Tnkun-C" }, { "from": "gpt", "value": "= The Ehcache 3.x line is currently the development line.\n\nStatus of the build: image:https://ehcache.ci.cloudbees.com/buildStatus/icon?job=ehcache3[Ehcache@Cloudbees, link=\"https://ehcache.ci.cloudbees.com/job/ehcache3/\"]\n\nFor more information, you might want to go check the https://github.com/ehcache/ehcache3/wiki[wiki].\n\nimage:http://cloudbees.prod.acquia-sites.com/sites/default/files/styles/large/public/Button-Powered-by-CB.png?itok=uMDWINfY[Cloudbees, link=\"http://www.cloudbees.com/resources/foss\"]" } ]
asciidoc
chosen
[ { "from": "system", "value": "Format ToDo as a list\n" }, { "from": "human", "value": "= clublist - Club Membership List\n\nTrack members for a small non-profit club.\n\nThis shows off some basic functionality of JPA and DeltaSpike Data in a JSF environment.\n\n== Deployment\n\n. Copy config-sample.properties to config.properties, change the name in the orgName property\nin this file from 'Sample Club' to your organization's name (keep it short).\n. Setup the datasource in your app server\n. Setup the deployment as needed (e.g., edit jboss-deployment.xml)\n. Create a user in the role of club_exec in your app server.\n. Deploy (e.g., mvn wildfly:deploy if you use JBoss WildFly).\n. Enjoy!\n\n== ToDo\n\nSecurity: Maybe allow members to update their own record (only!)\n\nUse redirect after editing to really go back to the List page.\n\n\"Position\" should be a relationship to another Entity, with a dropdown chooser.\n\n\"Membership Type\" should be a relationship to another Entity, with a dropdown.\n\nSearch with 'Like' method in DS Data\n\nA confirmation (p:dialog?) on the Edit->Delete button would be a good idea.\n\nImplement the Mailing List page.\n\nImplement the Print Member Badge/Label page.\n\nEven though people should not use spreadsheets for database work, you will probably\nbe pressured to impelement the \"Export\" capability. You will need Apache POI for this.\n\nRefactor Home object to merge w/ darwinsys-ee EntityHome\n" }, { "from": "gpt", "value": "= clublist - Club Membership List\n\nTrack members for a small non-profit club.\n\nThis shows off some basic functionality of JPA and DeltaSpike Data in a JSF environment.\n\n== Deployment\n\n. Copy config-sample.properties to config.properties, change the name in the orgName property\nin this file from 'Sample Club' to your organization's name (keep it short).\n. Setup the datasource in your app server\n. Setup the deployment as needed (e.g., edit jboss-deployment.xml)\n. Create a user in the role of club_exec in your app server.\n. Deploy (e.g., mvn wildfly:deploy if you use JBoss WildFly).\n. Enjoy!\n\n== ToDo\n\nHere are some things that should be added. https://github.com/IanDarwin/clublist[Fork this project on GitHub] and send pull requests when you get one working!\n\n. Search with 'Like' method in DS Data\n. Use redirect after editing to really go back to the List page.\n. Maybe allow members to update their own record (only!)\n\tProbably requires moving to app-managed security since you already have a record for each person.\n. \"Position\" should be a relationship to another Entity, with a dropdown chooser.\n. \"Membership Type\" should be a relationship to another Entity, with a dropdown.\n. A confirmation (p:dialog?) on the Edit->Delete button would be a good idea.\n. Implement the Mailing List page.\n. Implement the Print Member Badge/Label page.\n. Even though people should not use spreadsheets for database work, you will probably\nbe pressured to impelement the \"Export\" capability. You will need Apache POI for this.\n. Refactor Home object to merge w/ darwinsys-ee EntityHome\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add prebuilt binary installation instructions\n" }, { "from": "human", "value": "proxy\n=====\n\n[quote]\nA development proxy with logging and redirect-rewriting\n\nInstallation\n------------\n\n[source,bash]\n----\ngo get -u github.com/ciarand/proxy\n----\n\nUsage\n-----\n\n[source,bash]\n----\n# start the proxy in one shell:\nproxy -from=https://www.google.com -to=http://0.0.0.0:8080\n\n# and in another, run curl:\ncurl -s http://localhost:8080/ | head -c 15\n<!doctype html>\n\n# result from proxy shell (shortened for width):\nINFO[0022] request started client_address=[::1]:58988 method=GET uri=/\nINFO[0023] request complete elapsed=624.644152ms status=200\n----\n\nLicense\n-------\nSee the link:LICENSE[LICENSE] file.\n" }, { "from": "gpt", "value": "proxy\n=====\n\n[quote]\nA development proxy with logging and redirect-rewriting\n\nInstallation\n------------\n\nDownload a prebuilt binary for your platform and architecture from the\nlink:https://github.com/ciarand/proxy/releases[release page].\n\nOr, build from source:\n\n[source,bash]\n----\n# from source\ngo get -u github.com/ciarand/proxy\n----\n\nUsage\n-----\n\n[source,bash]\n----\n# start the proxy in one shell:\nproxy -from=https://www.google.com -to=http://0.0.0.0:8080\n\n# and in another, run curl:\ncurl -s http://localhost:8080/ | head -c 15\n<!doctype html>\n\n# result from proxy shell (shortened for width):\nINFO[0022] request started client_address=[::1]:58988 method=GET uri=/\nINFO[0023] request complete elapsed=624.644152ms status=200\n----\n\nLicense\n-------\nSee the link:LICENSE[LICENSE] file.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo in plantuml definition" }, { "from": "human", "value": "# android-images\nYet another repo with docker images for Android developers\n\n[source,planzuml]\n------\nnode \"java jdk-8\" as jdk8\nnode \"java jdk-7\" as jdk7\nartifact \"Android\" {\nnode \"gradle\" as gradle\nnode \"sdk\" as sdk\nnode \"ndk 11\" as ndk11\nnode \"ndk 13\" as ndk13\nnode \"vlc\" as vlc\n}\nartifact \"Tools\" {\nnode \"ruby\" as ruby\nnode \"Asciidoctor\" as asciidoctor\n}\n\ngradle -up-> jdk8 \nsdk -up-> gradle\nndk11 -up-> sdk\nndk13 -up-> sdk\nvlc -up-> ndk11\n\n\nruby -up-> jdk8 \nasciidoctor -up-> ruby\n------" }, { "from": "gpt", "value": "# android-images\nYet another repo with docker images for Android developers\n\n[source,plantuml]\n------\nnode \"java jdk-8\" as jdk8\nnode \"java jdk-7\" as jdk7\nartifact \"Android\" {\nnode \"gradle\" as gradle\nnode \"sdk\" as sdk\nnode \"ndk 11\" as ndk11\nnode \"ndk 13\" as ndk13\nnode \"vlc\" as vlc\n}\nartifact \"Tools\" {\nnode \"ruby\" as ruby\nnode \"Asciidoctor\" as asciidoctor\n}\n\ngradle -up-> jdk8 \nsdk -up-> gradle\nndk11 -up-> sdk\nndk13 -up-> sdk\nvlc -up-> ndk11\n\n\nruby -up-> jdk8 \nasciidoctor -up-> ruby\n------\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add link to GitHup page" }, { "from": "human", "value": "= Blueprint\n:author: Hafid Haddouti\n\nimage:https://travis-ci.org/haf-tech/blueprint.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/haf-tech/blueprint\"] \nimage:https://img.shields.io/badge/License-Apache%202.0-blue.svg[\"License\", link=\"https://opensource.org/licenses/Apache-2.0\"]\n\n....\nBlueprint is a playground for illustrating different paradigms.\n\nIn the meantime the following concepts are integrated or planned:\n- Spring Boot\n- AsciiDoctor integration, with UML and different outputs\n- Onion architecture\n- Docker build/push\n\nNext:\n- reactive programming\n\n" }, { "from": "gpt", "value": "= Blueprint\n:author: Hafid Haddouti\n\nimage:https://travis-ci.org/haf-tech/blueprint.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/haf-tech/blueprint\"] \nimage:https://img.shields.io/badge/License-Apache%202.0-blue.svg[\"License\", link=\"https://opensource.org/licenses/Apache-2.0\"]\n\n\nBlueprint is a playground for illustrating different paradigms.\n\nIn the meantime the following concepts are integrated or planned:\n- Spring Boot\n- AsciiDoctor integration, with UML and different outputs\n- Onion architecture\n- Docker build/push\n\nNext:\n- reactive programming\n\nAn up to date documentation is https://haf-tech.github.io/blueprint/[online] available.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix broken link to source of page" }, { "from": "human", "value": "= Who's Using Debezium?\n:awestruct-layout: doc\n:linkattrs:\n:icons: font\n:source-highlighter: highlight.js\n\nDebezium is used in production by a wide range of companies and organizations.\nThis list contains users of Debezium who agreed to serve as public reference;\nwhere available, further resources with more details are linked.\n\nIf your organization would like to be added to (or removed from) this list,\nplease send a pull request for updating the https://github.com/debezium/debezium.github.io/blob/develop/docs/users.asciidoc[source of this page].\n\n* Convoy (https://medium.com/convoy-tech/logs-offsets-near-real-time-elt-with-apache-kafka-snowflake-473da1e4d776[details])\n* JW Player (https://www.slideshare.net/jwplayer/polylog-a-logbased-architecture-for-distributed-systems-124997666[details])\n* OYO\n* Usabilla by Surveymonkey\n* WePay, Inc. (https://wecode.wepay.com/posts/streaming-databases-in-realtime-with-mysql-debezium-kafka[details], https://wecode.wepay.com/posts/streaming-cassandra-at-wepay-part-1[more details])\n* ... and you? Then let us know and get added to the list, too. Thanks!\n" }, { "from": "gpt", "value": "= Who's Using Debezium?\n:awestruct-layout: doc\n:linkattrs:\n:icons: font\n:source-highlighter: highlight.js\n\nDebezium is used in production by a wide range of companies and organizations.\nThis list contains users of Debezium who agreed to serve as public reference;\nwhere available, further resources with more details are linked.\n\nIf your organization would like to be added to (or removed from) this list,\nplease send a pull request for updating the https://github.com/debezium/debezium.github.io/blob/develop/community/users.asciidoc[source of this page].\n\n* Convoy (https://medium.com/convoy-tech/logs-offsets-near-real-time-elt-with-apache-kafka-snowflake-473da1e4d776[details])\n* JW Player (https://www.slideshare.net/jwplayer/polylog-a-logbased-architecture-for-distributed-systems-124997666[details])\n* OYO\n* Usabilla by Surveymonkey\n* WePay, Inc. (https://wecode.wepay.com/posts/streaming-databases-in-realtime-with-mysql-debezium-kafka[details], https://wecode.wepay.com/posts/streaming-cassandra-at-wepay-part-1[more details])\n* ... and you? Then let us know and get added to the list, too. Thanks!\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add extra line between include.\n" }, { "from": "human", "value": "= Sample Book\nAuthor Name <[email protected]>\nv1.0, October 4, 2015: First Draft\n:doctype: book\n:docinfo:\n:toc: left\n:toclevels: 2\n:sectnums:\n:linkcss:\n\nAn sample book to show case AsciiDoctor folder structure.\n\ninclude::book/chapter-1/chapter-1.adoc[leveloffset=+1]\ninclude::book/chapter-2/chapter-2.adoc[leveloffset=+1]\n" }, { "from": "gpt", "value": "= Sample Book\nAuthor Name <[email protected]>\nv1.0, October 4, 2015: First Draft\n:doctype: book\n:docinfo:\n:toc: left\n:toclevels: 2\n:sectnums:\n:linkcss:\n\nAn sample book to show case AsciiDoctor folder structure.\n\ninclude::book/chapter-1/chapter-1.adoc[leveloffset=+1]\n\ninclude::book/chapter-2/chapter-2.adoc[leveloffset=+1]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix minor error in release process doc." }, { "from": "human", "value": "= bitcoinj-addons Release Process\n\n== Main Release Process\n\n. Update `CHANGELOG.adoc`\n. Set versions\n.. `README.adoc`\n.. bitcoinj-groovy `ExtensionModule`\n.. `gradle.properties`\n. Commit version bump and changelog.\n. Tag: `git tag -a v0.x.y -m \"Release 0.x.y\"`\n. Push: `git push --tags origin master`\n. Full build, test\n.. `./gradlew clean jenkinsBuild regTest`\n.. Recommended: test with *OmniJ* regTests.\n. Publish to Bintray: \n.. `./gradlew bintrayUpload`\n.. Confirm publish of artifacts in Bintray Web UI.\n. Update github-pages site (including JavaDoc): `./gradlew publishSite`\n\n== Announcements\n\n. Not yet.\n\n== After release\n\n. Set versions back to -SNAPSHOT\n.. `gradle.properties`\n.. bitcoinj-groovy `ExtensionModule`\n.. *Not* `README.adoc` -- it should match release version\n. Commit and push to master\n\n\n\n" }, { "from": "gpt", "value": "= bitcoinj-addons Release Process\n\n== Main Release Process\n\n. Update `CHANGELOG.adoc`\n. Set versions\n.. `README.adoc`\n.. bitcoinj-groovy `ExtensionModule`\n.. `build.gradle` (should move to `gradle.properties`)\n. Commit version bump and changelog.\n. Tag: `git tag -a v0.x.y -m \"Release 0.x.y\"`\n. Push: `git push --tags origin master`\n. Full build, test\n.. `./gradlew clean jenkinsBuild regTest`\n.. Recommended: test with *OmniJ* regTests.\n. Publish to Bintray: \n.. `./gradlew bintrayUpload`\n.. Confirm publish of artifacts in Bintray Web UI.\n. Update github-pages site (including JavaDoc): `./gradlew publishSite`\n\n== Announcements\n\n. Not yet.\n\n== After release\n\n. Set versions back to -SNAPSHOT\n.. `gradle.properties`\n.. bitcoinj-groovy `ExtensionModule`\n.. *Not* `README.adoc` -- it should match release version\n. Commit and push to master\n\n\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add travis badge to readme\n" }, { "from": "human", "value": "|===\n|image:http://goreportcard.com/badge/spohnan/ci-bot-01[\"Go Report Card\",link=\"http://goreportcard.com/report/spohnan/ci-bot-01\", window=\"_blank\"]|image:https://travis-ci.org/spohnan/ci-bot-01.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/spohnan/ci-bot-01\", window=\"_blank\"]\n|===\n\n=== Automated GitHub Interactions\n" }, { "from": "gpt", "value": "[options=\"header\"]\n|===\n|CI Build and Tests|Static Analysis\n|image:https://travis-ci.org/spohnan/ci-bot-01.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/spohnan/ci-bot-01\", window=\"_blank\"]|image:http://goreportcard.com/badge/spohnan/ci-bot-01[\"Go Report Card\",link=\"http://goreportcard.com/report/spohnan/ci-bot-01\", window=\"_blank\"]\n|===\n\n=== Automated GitHub Interactions\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix copy and paste doc\n" }, { "from": "human", "value": "[[SpringCloud-SpringCloud]]\nSpring Cloud\n~~~~~~~~~~~\n\n*Available as of Camel 2.19*\n\nSpring Cloud component\n\nMaven users will need to add the following dependency to their `pom.xml`\nin order to use this component:\n\n[source,xml]\n------------------------------------------------------------------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-spring-cloud</artifactId>\n <version>${camel.version}</version> <!-- use the same version as your Camel core version -->\n</dependency>\n------------------------------------------------------------------------------------------------\n\n`camel-spring-cloud` jar comes with the `spring.factories` file, so as\nsoon as you add that dependency into your classpath, Spring Boot will\nautomatically auto-configure Camel for you.\n\n[[SpringCloud-CamelSpringCloudStarter]]\nCamel Spring Cloud Starter\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\n*Available as of Camel 2.19*\n\nTo use the starter, add the following to your spring boot pom.xml file:\n\n[source,xml]\n------------------------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-spring-cloud-starter</artifactId>\n <version>${camel.version}</version> <!-- use the same version as your Camel core version -->\n</dependency>\n------------------------------------------------------\n" }, { "from": "gpt", "value": "=== Spring Cloud Netflix\n\n*Available as of Camel 2.19*\n\nThe Spring Cloud Netflix component bridges Camel Cloud and Spring Cloud Netflix so you can leverage Spring Cloud Netflix service discovery and load balance features in Camel and/or you can use Camel Service Discovery implementations as ServerList source for Spring Cloud Netflix's Ribbon load balancer.\n\nMaven users will need to add the following dependency to their `pom.xml` in order to use this component:\n\n[source,xml]\n----\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-spring-cloud-netflix</artifactId>\n <version>${camel.version}</version>\n <!-- use the same version as your Camel core version -->\n</dependency>\n----\n\n`camel-spring-cloud-netflix` jar comes with the `spring.factories` file, so as soon as you add that dependency into your classpath, Spring Boot will automatically auto-configure Camel for you.\n\nYou can disable Camel Spring Cloud Netflix with the following properties:\n\n[source,properties]\n----\n# Enable/Disable the whole integration, default true\ncamel.cloud.netflix = true\n\n# Enable/Disable the integration with Ribbon, default true\ncamel.cloud.netflix.ribbon = true\n----\n\n=== Spring Cloud Netflix Starter\n\n*Available as of Camel 2.19*\n\nTo use the starter, add the following to your spring boot pom.xml file:\n\n[source,xml]\n----\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-spring-cloud-netflix-starter</artifactId>\n <version>${camel.version}</version>\n <!-- use the same version as your Camel core version -->\n</dependency>\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use master version in docs\n" }, { "from": "human", "value": "= Filebeat\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/1.0.0-rc1\n:version: 1.0.0-rc1\n\ninclude::./overview.asciidoc[]\n\ninclude::./getting-started.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./migration.asciidoc[]\n\ninclude::./support.asciidoc[]\n" }, { "from": "gpt", "value": "= Filebeat\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/master\n:version: master\n\ninclude::./overview.asciidoc[]\n\ninclude::./getting-started.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./migration.asciidoc[]\n\ninclude::./support.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add migration info for missing values in script\n\nRelates to #30975\n" }, { "from": "human", "value": "[float]\n[[breaking_70_scripting_changes]]\n=== Scripting changes\n\n[float]\n==== getDate() and getDates() removed\n\nFields of type `long` and `date` had `getDate()` and `getDates()` methods\n(for multi valued fields) to get an object with date specific helper methods\nfor the current doc value. In 5.3.0, `date` fields were changed to expose\nthis same date object directly when calling `doc[\"myfield\"].value`, and\nthe getter methods for date objects were deprecated. These methods have\nnow been removed. Instead, use `.value` on `date` fields, or explicitly\nparse `long` fields into a date object using\n`Instance.ofEpochMillis(doc[\"myfield\"].value)`.\n\n[float]\n==== Script errors will return as `400` error codes\n\nMalformed scripts, either in search templates, ingest pipelines or search \nrequests, return `400 - Bad request` while they would previously return\n`500 - Internal Server Error`. This also applies for stored scripts.\n" }, { "from": "gpt", "value": "[float]\n[[breaking_70_scripting_changes]]\n=== Scripting changes\n\n[float]\n==== getDate() and getDates() removed\n\nFields of type `long` and `date` had `getDate()` and `getDates()` methods\n(for multi valued fields) to get an object with date specific helper methods\nfor the current doc value. In 5.3.0, `date` fields were changed to expose\nthis same date object directly when calling `doc[\"myfield\"].value`, and\nthe getter methods for date objects were deprecated. These methods have\nnow been removed. Instead, use `.value` on `date` fields, or explicitly\nparse `long` fields into a date object using\n`Instance.ofEpochMillis(doc[\"myfield\"].value)`.\n\n[float]\n==== Accessing missing document values will throw an error\n`doc['field'].value` will throw an exception if\nthe document is missing a value for the field `field`.\n\nTo check if a document is missing a value, you can use\n`doc['field'].size() == 0`.\n\n\n[float]\n==== Script errors will return as `400` error codes\n\nMalformed scripts, either in search templates, ingest pipelines or search \nrequests, return `400 - Bad request` while they would previously return\n`500 - Internal Server Error`. This also applies for stored scripts.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add default Maven lifecycle bindings to the manual" }, { "from": "human", "value": "= Maven Goals\n\nThis plugin supports the following goals which are explained in detail\nin the next sections.\n\n.Plugin Goals\n[cols=\"1,3\"]\n|===\n|Goal | Description\n\n|**<<{plugin}:build>>**\n|Build images\n\n|**<<{plugin}:start>>** or **<<{plugin}:start,{plugin}:run>>**\n|Create and start containers\n\n|**<<{plugin}:stop>>**\n|Stop and destroy containers\n\n|**<<{plugin}:push>>**\n|Push images to a registry\n\n|**<<{plugin}:watch>>**\n|Watch for doing rebuilds and restarts\n\n|**<<{plugin}:remove>>**\n|Remove images from local docker host\n\n|**<<{plugin}:logs>>**\n|Show container logs\n\n|**<<{plugin}:source>>**\n|Attach docker build archive to Maven project\n\n|**<<{plugin}:save>>**\n|Save images to a file\n\n|**<<{plugin}:volume-create>>**\n|Create a volume for containers to share data\n\n|**<<{plugin}:volume-remove>>**\n|Remove a volume\n|===\n\nNote that all goals are orthogonal to each other. For example in order\nto start a container for your application you typically have to build\nits image before. `{plugin}:start` does *not* imply building the image\nso you should use it then in combination with `{plugin}:build`.\n" }, { "from": "gpt", "value": "= Maven Goals\n\nThis plugin supports the following goals which are explained in detail\nin the next sections.\n\n.Plugin Goals\n[cols=\"1,1,2\"]\n|===\n|Goal | Default Lifecycle Phase | Description\n\n|**<<{plugin}:build>>**\n|install\n|Build images\n\n|**<<{plugin}:start>>** or **<<{plugin}:start,{plugin}:run>>**\n|pre-integration-test\n|Create and start containers\n\n|**<<{plugin}:stop>>**\n|post-integration-test\n|Stop and destroy containers\n\n|**<<{plugin}:push>>**\n|deploy\n|Push images to a registry\n\n|**<<{plugin}:watch>>**\n|\n|Watch for doing rebuilds and restarts\n\n|**<<{plugin}:remove>>**\n|post-integration-test\n|Remove images from local docker host\n\n|**<<{plugin}:logs>>**\n|\n|Show container logs\n\n|**<<{plugin}:source>>**\n|\n|Attach docker build archive to Maven project\n\n|**<<{plugin}:save>>**\n|\n|Save images to a file\n\n|**<<{plugin}:volume-create>>**\n|pre-integration-test\n|Create a volume for containers to share data\n\n|**<<{plugin}:volume-remove>>**\n|post-integration-test\n|Remove a volume\n|===\n\nNote that all goals are orthogonal to each other. For example in order\nto start a container for your application you typically have to build\nits image before. `{plugin}:start` does *not* imply building the image\nso you should use it then in combination with `{plugin}:build`.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use latest version of aktualizr. Add known issue.\n" }, { "from": "human", "value": "# Hacky Implicit Provisioning version 2\n\nThis document describes a quick hack to test the implicit provisioning flow as it currently works in aktualizr before full support is available in meta-updater and on the server.\n\n## Goals\n\n* end-to-end installation of updates using OSTree\n\n## Steps\n\n1. Edit `recipes-sota/aktualizr/files/sota_autoprov.toml` in meta-updater:\n* Remove the `provision_path` line.\n* Add a line in the `[tls]` section with `server = \"https://...\"`. Use the URL from your account's credentials.zip file.\n1. Edit `recipes-sota/aktualizr/aktualizr_git.bb` in meta-updater:\n* Change aktualizr `SRC_URI` to `git://github.com/advancedtelematic/aktualizr;branch=bugfix/cert_prov_install`.\n* Change `SRCREV` to `13828774baa5a7369e6f5ca552b879ad1ce773d5`.\n* Increment `PR`.\n1. Build a standard image using bitbake. Make sure to set `SOTA_PACKED_CREDENTIALS` like normal.\n1. Boot the image.\n1. Optionally, verify that aktualizr is not provisioning. Make sure the device is not visible in the Garage.\n1. Run `cert_provider` from the aktualizr repo: `cert_provider -c credentials.zip -t <device>`\n1. Verify that aktualizr provisions correctly with the server using the device_id generated by `cert_provider`.\n" }, { "from": "gpt", "value": "# Hacky Implicit Provisioning version 2\n\nThis document describes a quick hack to test the implicit provisioning flow as it currently works in aktualizr before full support is available in meta-updater and on the server.\n\n## Goals\n\n* end-to-end installation of updates using OSTree\n\n## Steps\n\n1. Edit `recipes-sota/aktualizr/files/sota_autoprov.toml` in meta-updater:\n* Remove the `provision_path` line.\n* Add a line in the `[tls]` section with `server = \"https://...\"`. Use the URL from your account's credentials.zip file.\n1. Edit `recipes-sota/aktualizr/aktualizr_git.bb` in meta-updater:\n* Change `SRCREV` to `1c635c67d70cf38d2c841d449e750f08855e41a4`.\n* Increment `PR`.\n1. Build a standard image using bitbake. Make sure to set `SOTA_PACKED_CREDENTIALS` like normal.\n1. Boot the image.\n1. Optionally, verify that aktualizr is not provisioning. Make sure the device is not visible in the Garage.\n1. Run `cert_provider` from the aktualizr repo: `cert_provider -c credentials.zip -t <device>`\n1. Verify that aktualizr provisions correctly with the server using the device_id generated by `cert_provider`.\n\n## Known Issues\n\nAt this time, although the device will be shown in the Garage, its installed package will not be.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Work around Github image float issue.\n" }, { "from": "human", "value": "= Videos\nJames Elliott <[email protected]>\n:icons: font\n\n// Set up support for relative links on GitHub; add more conditions\n// if you need to support other environments and extensions.\nifdef::env-github[:outfilesuffix: .adoc]\n\nThis page collects performance videos that highlight Afterglow in\naction. If you have any to share, please post them to the\nhttps://gitter.im/brunchboy/afterglow[Afterglow room on Gitter]!\n\nhttps://vimeo.com/153492480[image:assets/Deepower-2015.png[Deepower Live Report,align=\"right\",float=\"right\"]]\nSince I have been too busy working on the software to get out and\nperform with it, I am deeply grateful to\nhttps://github.com/dandaka[Vlad Rafeev] for getting this page started\nby sharing a video of a\nhttps://www.facebook.com/deepowerband/[Deepower audiovisual] show he\nlit using Afterglow in December, 2015, in Kaliningrad, Russia last\nmonth. He modestly says, “My first experience with lights, still a lot\nto accomplish.” And I say, there is a ton left to implement in the\nsoftware too. But it is already fun to see\nhttps://vimeo.com/153492480[great video like this]!\n" }, { "from": "gpt", "value": "= Videos\nJames Elliott <[email protected]>\n:icons: font\n\n// Set up support for relative links on GitHub; add more conditions\n// if you need to support other environments and extensions.\nifdef::env-github[:outfilesuffix: .adoc]\n\nThis page collects performance videos that highlight Afterglow in\naction. If you have any to share, please post them to the\nhttps://gitter.im/brunchboy/afterglow[Afterglow room on Gitter]!\n\n+++<a href=\"https://vimeo.com/153492480\"><img src=\"assets/Deepower-2015.png\" align=\"right\" alt=\"Deepower Live Report\"></a>+++\nSince I have been too busy working on the software to get out and\nperform with it, I am deeply grateful to\nhttps://github.com/dandaka[Vlad Rafeev] for getting this page started\nby sharing a video of a\nhttps://www.facebook.com/deepowerband/[Deepower audiovisual] show he\nlit using Afterglow in December, 2015, in Kaliningrad, Russia last\nmonth. He modestly says, “My first experience with lights, still a lot\nto accomplish.” And I say, there is a ton left to implement in the\nsoftware too. But it is already fun to see\nhttps://vimeo.com/153492480[great video like this]!\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update tools sentence after removing CPMA\n" }, { "from": "human", "value": "[id=\"about-migration\"]\n= About migrating {product-title} 3 to 4\ninclude::modules/common-attributes.adoc[]\n:context: about-migration\n\ntoc::[]\n\n{product-title} 4 includes new technologies and functionality that results in a cluster that is self-managing, flexible, and automated. The way that {product-title} 4 clusters are deployed and managed drastically differs from {product-title} 3.\n\nTo successfully transition from {product-title} 3 to {product-title} 4, it is important that you review the following information:\n\nxref:../../migration/migrating_3_4/planning-migration-3-to-4.adoc#planning-migration-3-to-4[Planning your transition]::\nLearn about the differences between {product-title} versions 3 and 4. Prior to transitioning, be sure that you have reviewed and prepared for storage, networking, logging, security, and monitoring considerations.\n\nxref:../../migration/migrating_3_4/migrating-application-workloads-3-4.adoc#migrating-application-workloads-3-4[Performing your migration]::\nLearn about and use the tools to perform your migration:\n\n* {mtc-full} ({mtc-short}) to migrate your application workloads\n" }, { "from": "gpt", "value": "[id=\"about-migration\"]\n= About migrating {product-title} 3 to 4\ninclude::modules/common-attributes.adoc[]\n:context: about-migration\n\ntoc::[]\n\n{product-title} 4 includes new technologies and functionality that results in a cluster that is self-managing, flexible, and automated. The way that {product-title} 4 clusters are deployed and managed drastically differs from {product-title} 3.\n\nTo successfully transition from {product-title} 3 to {product-title} 4, it is important that you review the following information:\n\nxref:../../migration/migrating_3_4/planning-migration-3-to-4.adoc#planning-migration-3-to-4[Planning your transition]::\nLearn about the differences between {product-title} versions 3 and 4. Prior to transitioning, be sure that you have reviewed and prepared for storage, networking, logging, security, and monitoring considerations.\n\nxref:../../migration/migrating_3_4/migrating-application-workloads-3-4.adoc#migrating-application-workloads-3-4[Performing your migration]::\nLearn about and use the {mtc-full} ({mtc-short}) to migrate your application workloads.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update docs for running at start.\n" }, { "from": "human", "value": "= Jira Release Notes Generator (JiraRnGen)\n:Author: David Thompson \n:Email: <[email protected]>\n:Revision: 0.1.0 2016-08-03\n\n== Description\nI was looking for a way with hosted Jira to be able to send out release\nnotes with our own template. Since the hosted version doesn't allow you to\nedit the templates, I decided to build an application that worked with the\nJira api to build an html page that could be emailed to our team. This NodeJS\napp is the result of this work.\n\n== ReleaseNotes\n0.1.0 - First iteration at being able to send Jira Release Notes\n\n== Usage\nFirst time to use, run within the JiraRnGen directory.\n----\n$ npm install\n----\nThen to run each time\n----\n$ node release-notes.js -u username -p password --email -n jira-hostname -f release-name -s email-address\n----\n" }, { "from": "gpt", "value": "= Jira Release Notes Generator (JiraRnGen)\n:Author: David Thompson \n:Email: <[email protected]>\n:Revision: 0.1.0 2016-08-03\n\n== Description\nI was looking for a way with hosted Jira to be able to send out release\nnotes with our own template. Since the hosted version doesn't allow you to\nedit the templates, I decided to build an application that worked with the\nJira api to build an html page that could be emailed to our team. This NodeJS\napp is the result of this work.\n\n== ReleaseNotes\n0.1.0 - First iteration at being able to send Jira Release Notes\n\n== Usage\nFirst time to use, run within the JiraRnGen directory.\n----\n$ npm install\n----\nYou will need to create an image/logo.json file; you can copy the example to logo.json for testing.\nYou will need to set up a sendgrid account and set up an API key to use with sendgrid. Copy conf/config-example.json to conf/config.json.\n\nThen to run each time\n----\n$ node release-notes.js -u username -p password --email -n jira-hostname -f release-name -s email-address\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo in Busines -> Business" }, { "from": "human", "value": "// Global settings\n:ascii-ids:\n:encoding: UTF-8\n:lang: en\n:icons: font\n:toc: left\n:toclevels: 3\n:numbered:\n\n[[gu]]\n= Users Guide\n:author: Copyright (c) 2014-2016 The OpenNMS Group, Inc.\n:revnumber: {opennms-product-name} {opennms-version}\n:revdate: {last-update-label} {docdatetime}\n:version-label!:\n\n[[gu-service-assurance]]\n== Service Assurance\ninclude::text/service-assurance/introduction.adoc[]\ninclude::text/service-assurance/critical-service.adoc[]\ninclude::text/service-assurance/path-outage.adoc[]\n\ninclude::text/surveillance-view.adoc[]\ninclude::text/dashboard.adoc[]\n\n[[gu-bsm]]\n== Busines Service Monitoring\ninclude::text/bsm/introduction.adoc[]\ninclude::text/bsm/business-service-hierarchy.adoc[]\ninclude::text/bsm/operational-status.adoc[]\ninclude::text/bsm/root-cause-impact-analysis.adoc[]\ninclude::text/bsm/simulation-mode.adoc[]\ninclude::text/bsm/share-bsm-view.adoc[]\ninclude::text/bsm/change-icons.adoc[]\n\n[[gu-alarms]]\n== Alarms\ninclude::text/alarms/alarm-notes.adoc[]\n" }, { "from": "gpt", "value": "// Global settings\n:ascii-ids:\n:encoding: UTF-8\n:lang: en\n:icons: font\n:toc: left\n:toclevels: 3\n:numbered:\n\n[[gu]]\n= Users Guide\n:author: Copyright (c) 2014-2016 The OpenNMS Group, Inc.\n:revnumber: {opennms-product-name} {opennms-version}\n:revdate: {last-update-label} {docdatetime}\n:version-label!:\n\n[[gu-service-assurance]]\n== Service Assurance\ninclude::text/service-assurance/introduction.adoc[]\ninclude::text/service-assurance/critical-service.adoc[]\ninclude::text/service-assurance/path-outage.adoc[]\n\ninclude::text/surveillance-view.adoc[]\ninclude::text/dashboard.adoc[]\n\n[[gu-bsm]]\n== Business Service Monitoring\ninclude::text/bsm/introduction.adoc[]\ninclude::text/bsm/business-service-hierarchy.adoc[]\ninclude::text/bsm/operational-status.adoc[]\ninclude::text/bsm/root-cause-impact-analysis.adoc[]\ninclude::text/bsm/simulation-mode.adoc[]\ninclude::text/bsm/share-bsm-view.adoc[]\ninclude::text/bsm/change-icons.adoc[]\n\n[[gu-alarms]]\n== Alarms\ninclude::text/alarms/alarm-notes.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add build instructions to readme\n" }, { "from": "human", "value": "image:https://jenkins-kieci.rhcloud.com/buildStatus/icon?job=optaplanner[\"Build Status\", link=\"https://jenkins-kieci.rhcloud.com/job/optaplanner\"]\n\n== Developing Drools, OptaPlanner and jBPM\n\n*If you want to build or contribute to a droolsjbpm project, https://github.com/droolsjbpm/droolsjbpm-build-bootstrap/blob/master/README.md[read this document].*\n\n*It will save you and us a lot of time by setting up your development environment correctly.*\nIt solves all known pitfalls that can disrupt your development.\nIt also describes all guidelines, tips and tricks.\nIf you want your pull requests (or patches) to be merged into master, please respect those guidelines.\n" }, { "from": "gpt", "value": "image:https://jenkins-kieci.rhcloud.com/buildStatus/icon?job=optaplanner[\"Build Status\", link=\"https://jenkins-kieci.rhcloud.com/job/optaplanner\"]\n\n== Quick development start\n\nTo build and run from source:\n\n[source,sh]\n----\n $ mvn clean install\n $ cd optaplanner-examples\n $ mvn exec:java\n----\n\nTo develop with IntelliJ IDEA, Eclipse or NetBeans, open the root `pom.xml` as a new project\nand configure a _Run/Debug configuration_ like this:\n\n* Main class: `org.optaplanner.examples.app.OptaPlannerExamplesApp`\n* VM options: `-Xmx2G -server` (Memory only needed when using the big datasets in the examples)\n* Program arguments: ``\n* Working directory: `$MODULE_DIR$` (must resolve to optaplanner-examples directory)\n* Use classpath of module: `optaplanner-examples`\n\n== Developing Drools, OptaPlanner and jBPM\n\n*If you want to build or contribute to a droolsjbpm project, https://github.com/droolsjbpm/droolsjbpm-build-bootstrap/blob/master/README.md[read this document].*\n\n*It will save you and us a lot of time by setting up your development environment correctly.*\nIt solves all known pitfalls that can disrupt your development.\nIt also describes all guidelines, tips and tricks.\nIf you want your pull requests (or patches) to be merged into master, please respect those guidelines.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Make build step more clear.\n" }, { "from": "human", "value": "= Explorer for Hawkular\n\nhttp://hawkular.org/[Hawkular] is a set of components for Monitoring.\nThis explorer connects to a Hawkular server and allows you to browse through\ninventory and view entities, graph metrics.\n\n\nifndef::env-github[]\nimage::docs/screenshot.png[]\nendif::[]\nifdef::env-github[]\nimage::https://github.com/pilhuhn/hawkfx/blob/master/docs/screenshot.png[]\nendif::[]\n\n== Running\n\nThe explorer requires JRuby in version 9+\n\nIf you are using `rvm` you can select it via\n\n`rvm use jruby-9.0.5.0`\n\nthen use `bundler` to install the required gems\n\n`bundle install`\n\nthen run\n\n`jruby hawkfx.rb`" }, { "from": "gpt", "value": "= Explorer for Hawkular\n\nhttp://hawkular.org/[Hawkular] is a set of components for Monitoring.\nThis explorer connects to a Hawkular server and allows you to browse through\ninventory and view entities, graph metrics.\n\n\nifndef::env-github[]\nimage::docs/screenshot.png[]\nendif::[]\nifdef::env-github[]\nimage::https://github.com/pilhuhn/hawkfx/blob/master/docs/screenshot.png[]\nendif::[]\n\n== Running\n\nThe explorer requires JRuby in version 9+\n\nIf you are using `rvm` you can select it via\n\n`rvm use jruby-9.0.5.0`\n\ninstall and use `bundler` to install the required gems\n\n`gem install bundler`\n\n`bundle install`\n\nthen run\n\n`jruby hawkfx.rb`" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix docs example for the _id field, the field is not accessible in scripts\n\nCloses #19274\n" }, { "from": "human", "value": "[[mapping-id-field]]\n=== `_id` field\n\nEach document indexed is associated with a <<mapping-type-field,`_type`>> (see\n<<mapping-type>>) and an <<mapping-id-field,`_id`>>. The `_id` field is not\nindexed as its value can be derived automatically from the\n<<mapping-uid-field,`_uid`>> field.\n\nThe value of the `_id` field is accessible in certain queries (`term`,\n`terms`, `match`, `query_string`, `simple_query_string`) and scripts, but\n_not_ in aggregations or when sorting, where the <<mapping-uid-field,`_uid`>>\nfield should be used instead:\n\n[source,js]\n--------------------------\n# Example documents\nPUT my_index/my_type/1\n{\n \"text\": \"Document with ID 1\"\n}\n\nPUT my_index/my_type/2\n{\n \"text\": \"Document with ID 2\"\n}\n\nGET my_index/_search\n{\n \"query\": {\n \"terms\": {\n \"_id\": [ \"1\", \"2\" ] <1>\n }\n },\n \"script_fields\": {\n \"UID\": {\n \"script\": \"doc['_id']\" <2>\n }\n }\n}\n--------------------------\n// AUTOSENSE\n\n<1> Querying on the `_id` field (also see the <<query-dsl-ids-query,`ids` query>>)\n<2> Accessing the `_id` field in scripts (inline scripts must be <<enable-dynamic-scripting,enabled>> for this example to work)\n\n" }, { "from": "gpt", "value": "[[mapping-id-field]]\n=== `_id` field\n\nEach document indexed is associated with a <<mapping-type-field,`_type`>> (see\n<<mapping-type>>) and an <<mapping-id-field,`_id`>>. The `_id` field is not\nindexed as its value can be derived automatically from the\n<<mapping-uid-field,`_uid`>> field.\n\nThe value of the `_id` field is accessible in certain queries (`term`,\n`terms`, `match`, `query_string`, `simple_query_string`), but\n_not_ in aggregations, scripts or when sorting, where the <<mapping-uid-field,`_uid`>>\nfield should be used instead:\n\n[source,js]\n--------------------------\n# Example documents\nPUT my_index/my_type/1\n{\n \"text\": \"Document with ID 1\"\n}\n\nPUT my_index/my_type/2\n{\n \"text\": \"Document with ID 2\"\n}\n\nGET my_index/_search\n{\n \"query\": {\n \"terms\": {\n \"_id\": [ \"1\", \"2\" ] <1>\n }\n }\n}\n--------------------------\n// AUTOSENSE\n\n<1> Querying on the `_id` field (also see the <<query-dsl-ids-query,`ids` query>>)\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add elixir.mk and lfe.mk to the plugins list\n" }, { "from": "human", "value": "[[plugins_list]]\n== List of plugins\n\nThis is a non-exhaustive list of Erlang.mk plugins, sorted\nalphabetically.\n\n=== elvis.mk\n\nAn https://github.com/inaka/elvis.mk[Elvis plugin] for Erlang.mk.\nElvis is an https://github.com/inaka/elvis[Erlang style reviewer].\n\n=== geas\n\nhttps://github.com/crownedgrouse/geas[Geas] gives aggregated\ninformation on a project and its dependencies, and is available\nas an Erlang.mk plugin.\n\n=== hexer.mk\n\nAn https://github.com/inaka/hexer.mk[Hex plugin] for Erlang.mk.\nHex is a https://hex.pm/[package manager for the Elixir ecosystem].\n\n=== reload.mk\n\nA https://github.com/bullno1/reload.mk[live reload plugin] for Erlang.mk.\n" }, { "from": "gpt", "value": "[[plugins_list]]\n== List of plugins\n\nThis is a non-exhaustive list of Erlang.mk plugins, sorted\nalphabetically.\n\n=== elixir.mk\n\nAn https://github.com/botsunit/elixir.mk[Elixir plugin] for\nErlang.mk. http://elixir-lang.org/[Elixir] is an alternative\nlanguage for the BEAM.\n\n=== elvis.mk\n\nAn https://github.com/inaka/elvis.mk[Elvis plugin] for Erlang.mk.\nElvis is an https://github.com/inaka/elvis[Erlang style reviewer].\n\n=== geas\n\nhttps://github.com/crownedgrouse/geas[Geas] gives aggregated\ninformation on a project and its dependencies, and is available\nas an Erlang.mk plugin.\n\n=== hexer.mk\n\nAn https://github.com/inaka/hexer.mk[Hex plugin] for Erlang.mk.\nHex is a https://hex.pm/[package manager for the Elixir ecosystem].\n\n=== lfe.mk\n\nAn https://github.com/ninenines/lfe.mk[LFE plugin] for Erlang.mk.\nLFE, or http://lfe.io/[Lisp Flavoured Erlang], is an alternative\nlanguage for the BEAM.\n\n=== reload.mk\n\nA https://github.com/bullno1/reload.mk[live reload plugin] for Erlang.mk.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix project name in readme [skip ci]\n" }, { "from": "human", "value": "= Json-lib\n:version: 3.0.0.SNAPSHOT\n:linkattrs:\n:project-name: json-lib\n\nimage:http://img.shields.io/travis/aalmiray/{project-name}/development.svg[\"Build Status\", link=\"https://travis-ci.org/aalmiray/{project-name}\"]\nimage:http://img.shields.io/coveralls/aalmiray/{project-name}/development.svg[\"Coverage Status\", link=\"https://coveralls.io/r/aalmiray/{project-name}\"]\nimage:http://img.shields.io/:semver-{version}-blue.svg[\"Semantic Versioning\", link=\"http://semver.org\"]\nimage:http://img.shields.io/badge/license-ASF2-blue.svg[\"Apache License 2\", link=\"http://www.apache.org/licenses/LICENSE-2.0.txt\"]\nimage:http://img.shields.io/badge/download-latest-bb00bb.svg[link=\"https://bintray.com/aalmiray/kordamp/{project-name}/_latestVersion\"]\n\n---\n\nJSON-lib is a java library for transforming beans, maps, collections, java arrays and XML to JSON and back again to beans and DynaBeans.\n\nRefer to the link:http://aalmiray.github.io/{project-name}/[project guide, window=\"_blank\"] for\nfurther information on configuration and usage.\n" }, { "from": "gpt", "value": "= Json-lib\n:version: 3.0.0.SNAPSHOT\n:linkattrs:\n\nimage:http://img.shields.io/travis/aalmiray/Json-lib/development.svg[\"Build Status\", link=\"https://travis-ci.org/aalmiray/Json-lib\"]\nimage:http://img.shields.io/coveralls/aalmiray/Json-lib/development.svg[\"Coverage Status\", link=\"https://coveralls.io/r/aalmiray/Json-lib\"]\nimage:http://img.shields.io/:semver-{version}-blue.svg[\"Semantic Versioning\", link=\"http://semver.org\"]\nimage:http://img.shields.io/badge/license-ASF2-blue.svg[\"Apache License 2\", link=\"http://www.apache.org/licenses/LICENSE-2.0.txt\"]\nimage:http://img.shields.io/badge/download-latest-bb00bb.svg[link=\"https://bintray.com/aalmiray/kordamp/json-lib/_latestVersion\"]\n\n---\n\nJSON-lib is a java library for transforming beans, maps, collections, java arrays and XML to JSON and back again to beans and DynaBeans.\n\nRefer to the link:http://aalmiray.github.io/json-lib/[project guide, window=\"_blank\"] for\nfurther information on configuration and usage.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Change link to CI server.\n" }, { "from": "human", "value": "\n== ANDROID MAVEN PLUGIN\n\nA plugin for Android application development with http://maven.apache.org[Apache Maven 3.0.3+] and \nthe http://tools.android.com[Android SDK].\n\n=== Links\n\n* http://code.google.com/p/maven-android-plugin[Project site] with wiki and more\n* http://code.google.com/p/maven-android-plugin/issues/list[Issue tracker]\n* http://maven-android-plugin-m2site.googlecode.com/svn/index.html[Maven generated plugin documentation site]\n* http://www.sonatype.com/books/mvnref-book/reference/android-dev.html[Maven: Complete Reference - Chapter - Android Application Development with Maven]\n* https://groups.google.com/forum/?fromgroups#!forum/maven-android-developers[Mailinglist]\n* http://code.google.com/p/maven-android-plugin/wiki/Changelog[Changelog]\n* http://jenkins.josefson.org/[Continuous Integration Server Builds]\n\n=== Contributions\n\nWe welcome your feature enhancements and bug fixes in pull requests!\n" }, { "from": "gpt", "value": "\n== ANDROID MAVEN PLUGIN\n\nA plugin for Android application development with http://maven.apache.org[Apache Maven 3.0.3+] and \nthe http://tools.android.com[Android SDK].\n\n=== Links\n\n* http://code.google.com/p/maven-android-plugin[Project site] with wiki and more\n* http://code.google.com/p/maven-android-plugin/issues/list[Issue tracker]\n* http://maven-android-plugin-m2site.googlecode.com/svn/index.html[Maven generated plugin documentation site]\n* http://www.sonatype.com/books/mvnref-book/reference/android-dev.html[Maven: Complete Reference - Chapter - Android Application Development with Maven]\n* https://groups.google.com/forum/?fromgroups#!forum/maven-android-developers[Mailinglist]\n* http://code.google.com/p/maven-android-plugin/wiki/Changelog[Changelog]\n* image:https://travis-ci.org/jayway/maven-android-plugin.png[\"Build Status\", link=\"https://travis-ci.org/jayway/maven-android-plugin\"]\n\n=== Contributions\n\nWe welcome your feature enhancements and bug fixes in pull requests!\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix usage command with install-gogs.sh\n" }, { "from": "human", "value": "= Install a Git Server\n\n== Start Services\n\n. All services can be started, in detached mode, by giving the command:\n\n+\n docker-compose up -d\n+\nAnd this shows the output as:\n+\n Creating git_serverdata...\n Creating gitserver_git_dbdata_1...\n Creating gitserver_git_db_1...\n Creating git...\n+\n\n. Configure the installation\n.. execute the following script\n+\n ./install-gogs.sh <DOCKER_HOST_IP> <git_server_PORT>\n+\n\n_example: ./install.sh 192.168.99.100 3000_\n\n== Sign Up\n\n. Access to this URL: http://dockerhost:3000/user/sign_up\n. Create an account and enjoy!\n. Enjoy! :)\n" }, { "from": "gpt", "value": "= Install a Git Server\n\n== Start Services\n\n. All services can be started, in detached mode, by giving the command:\n\n+\n docker-compose up -d\n+\nAnd this shows the output as:\n+\n Creating git_serverdata...\n Creating gitserver_git_dbdata_1...\n Creating gitserver_git_db_1...\n Creating git...\n+\n\n. Configure the installation\n.. execute the following script\n+\n ./install-gogs.sh <DOCKER_HOST_IP> <git_server_PORT>\n+\n\n_example: ./install-gogs.sh 192.168.99.100 3000_\n\n== Sign Up\n\n. Access to this URL: http://dockerhost:3000/user/sign_up\n. Create an account and enjoy!\n. Enjoy! :)\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix an example missing a \" in the manual\n" }, { "from": "human", "value": "= cowboy_router:compile(3)\n\n== Name\n\ncowboy_router:compile - Compile routes to the resources\n\n== Description\n\n[source,erlang]\n----\ncompile(cowboy_router:routes()) -> cowboy_router:dispatch_rules()\n----\n\nCompile routes to the resources.\n\nTakes a human readable list of routes and transforms it\ninto a form more efficient to process.\n\n== Arguments\n\nRoutes::\n\nHuman readable list of routes.\n\n== Return value\n\nAn opaque dispatch rules value is returned. This value\nmust be given to Cowboy as a middleware environment value.\n\n== Changelog\n\n* *1.0*: Function introduced.\n\n== Examples\n\n.Compile routes and start a listener\n[source,erlang]\n----\nDispatch = cowboy_router:compile([\n {'_', [\n {\"/\", toppage_h, []},\n {\"/[...], cowboy_static, {priv_dir, my_example_app, \"\"}}\n ]}\n]),\n\n{ok, _} = cowboy:start_clear(example, [{port, 8080}], #{\n env => #{dispatch => Dispatch}\n}).\n----\n\n== See also\n\nlink:man:cowboy_router(3)[cowboy_router(3)]\n" }, { "from": "gpt", "value": "= cowboy_router:compile(3)\n\n== Name\n\ncowboy_router:compile - Compile routes to the resources\n\n== Description\n\n[source,erlang]\n----\ncompile(cowboy_router:routes()) -> cowboy_router:dispatch_rules()\n----\n\nCompile routes to the resources.\n\nTakes a human readable list of routes and transforms it\ninto a form more efficient to process.\n\n== Arguments\n\nRoutes::\n\nHuman readable list of routes.\n\n== Return value\n\nAn opaque dispatch rules value is returned. This value\nmust be given to Cowboy as a middleware environment value.\n\n== Changelog\n\n* *1.0*: Function introduced.\n\n== Examples\n\n.Compile routes and start a listener\n[source,erlang]\n----\nDispatch = cowboy_router:compile([\n {'_', [\n {\"/\", toppage_h, []},\n {\"/[...]\", cowboy_static, {priv_dir, my_example_app, \"\"}}\n ]}\n]),\n\n{ok, _} = cowboy:start_clear(example, [{port, 8080}], #{\n env => #{dispatch => Dispatch}\n}).\n----\n\n== See also\n\nlink:man:cowboy_router(3)[cowboy_router(3)]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add documentation explaining the format of the searchContent response payload\n" }, { "from": "human", "value": "[[search]]\n= Search\n\n== The SearchContent Resource\n\nWhen a Store extending `Searchable` is exported, a `searchContent` endpoint will be\navailable at the `/{store}/searchContent` URI.\n\n====\n[source, sh]\n----\n curl -H 'Accept: application/hal+json' http://localhost:8080/searchContent?queryString=foo\n----\n====\n\n=== Supported HTTP Methods\n\nAs the SearchContent resource is read-only it supports `GET` only. All other HTTP methods will\ncause a `405 Method Not Allowed`.\n\n==== Supported media types\n\n- `application/hal+json`\n- `application/json`.\n" }, { "from": "gpt", "value": "[[search]]\n= Search\n\n== The SearchContent Resource\n\nWhen a Store extending `Searchable` is exported, a `searchContent` endpoint will be\navailable at the `/{store}/searchContent` URI.\n\n====\n[source, sh]\n----\n curl -H 'Accept: application/hal+json' http://localhost:8080/searchContent?queryString=foo\n----\n====\n\n=== Supported HTTP Methods\n\nAs the SearchContent resource is read-only it supports `GET` only. All other HTTP methods will\ncause a `405 Method Not Allowed`.\n\n==== Supported media types\n\n- `application/hal+json`\n- `application/json`.\n\n=== Format of the response payload\n\nThis resource can return entities, or a custom search result type, depending on how the Searchable interface is \nspecified and the type of Store it decorates. \n\nWhen Searchable decorates an AssociativeStore this resource will lookup and return representations of the content's\nassociated entities. These lookups can be made more efficient by specifying a `@FulltextEntityLookupQuery` query \nmethod. This is a custom `findAll` method that accepts a single `Collection` parameter annotated with the name \n`contentIds`, as follows:\n\n```\n public interface MyRepository extends CrudRepository<MyEntity, Long> {\n\n @FulltextEntityLookupQuery\n List<MyEntity> findAllByContentIdIn(@Param(\"contentIds\") List<UUID> contentIds);\n }\n \n public interface MyStore extends AssociativeStore<MyEntity, UUID>, Searchable<UUID> {}\n```\n\nWhen Searchable is typed to your own search return type the resource will return a representation of this type instead. \nSee `Search Return Types` in the respective documentation for your chosen Spring Content fulltext module; solr or \nelasticsearch, for more information on specifying a custom search return type." } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix navigation path and CRD format in creating-custom-notification-banners\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * web_console/customizing-the-web-console.adoc\n\n[id=\"creating-custom-notification-banners_{context}\"]\n= Creating custom notification banners\n\n.Prerequisites\n\n* You must have administrator privileges.\n\n.Procedure\n\n. From *Administration* -> *Custom Resource Definitions*, click on\n*ConsoleNotification*.\n\n. Click *YAML* and edit the file:\n+\n----\napiVersion: console.openshift.io/v1\nkind: ConsoleNotification\nmetadata:\n name: example\nspec:\n backgroundColor: '#0088ce'\n color: '#fff'\n link:\n href: 'https://www.example.com'\n text: Optional link text\n location: BannerTop <1>\n text: This is an example notification message with an optional link.\n----\n<1> Valid location settings are `BannerTop`, `BannerBottom`, and `BannerTopBottom`.\n\n. Click the *Save* button to apply your changes.\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * web_console/customizing-the-web-console.adoc\n\n[id=\"creating-custom-notification-banners_{context}\"]\n= Creating custom notification banners\n\n.Prerequisites\n\n* You must have administrator privileges.\n\n.Procedure\n\n. From *Administration* -> *Custom Resource Definitions*, click on\n*ConsoleNotification*.\n. Select the *Instances* tab.\n. Click *Create Console Notification* and edit the file:\n+\n----\napiVersion: console.openshift.io/v1\nkind: ConsoleNotification\nmetadata:\n name: example\nspec:\n text: This is an example notification message with an optional link.\n location: BannerTop <1>\n link:\n href: 'https://www.example.com'\n text: Optional link text\n color: '#fff'\n backgroundColor: '#0088ce'\n----\n<1> Valid location settings are `BannerTop`, `BannerBottom`, and `BannerTopBottom`.\n\n. Click the *Create* button to apply your changes.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update FC link to RHEL8 doc\n" }, { "from": "human", "value": "[id=\"persistent-storage-using-fibre\"]\n= Persistent storage using Fibre Channel\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-fibre\n\ntoc::[]\n\n{product-title} supports Fibre Channel, allowing you to provision your\n{product-title} cluster with persistent storage using Fibre channel volumes.\nSome familiarity with Kubernetes and Fibre Channel is assumed.\n\nThe Kubernetes persistent volume framework allows administrators to provision a\ncluster with persistent storage and gives users a way to request those\nresources without having any knowledge of the underlying infrastructure.\nPersistent volumes are not bound to a single project or namespace; they can be\nshared across the {product-title} cluster.\nPersistent volume claims are specific to a project or namespace and can be\nrequested by users.\n\n[IMPORTANT]\n====\nHigh availability of storage in the infrastructure is left to the underlying\nstorage provider.\n====\n\n.Additional resources\n\n* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-fibrechanel[Fibre Channel]\n\ninclude::modules/persistent-storage-fibre-provisioning.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-fibre-disk-quotas.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-fibre-volume-security.adoc[leveloffset=+2]\n" }, { "from": "gpt", "value": "[id=\"persistent-storage-using-fibre\"]\n= Persistent storage using Fibre Channel\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-fibre\n\ntoc::[]\n\n{product-title} supports Fibre Channel, allowing you to provision your\n{product-title} cluster with persistent storage using Fibre channel volumes.\nSome familiarity with Kubernetes and Fibre Channel is assumed.\n\nThe Kubernetes persistent volume framework allows administrators to provision a\ncluster with persistent storage and gives users a way to request those\nresources without having any knowledge of the underlying infrastructure.\nPersistent volumes are not bound to a single project or namespace; they can be\nshared across the {product-title} cluster.\nPersistent volume claims are specific to a project or namespace and can be\nrequested by users.\n\n[IMPORTANT]\n====\nHigh availability of storage in the infrastructure is left to the underlying\nstorage provider.\n====\n\n.Additional resources\n\n* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/using-fibre-channel-devices_managing-storage-devices[Using Fibre Channel devices]\n\ninclude::modules/persistent-storage-fibre-provisioning.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-fibre-disk-quotas.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-fibre-volume-security.adoc[leveloffset=+2]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add Kerberos/SPNEGO Shield custom realm\n\nCloses #14282\n" }, { "from": "human", "value": "[[security]]\n== Security Plugins\n\nSecurity plugins add a security layer to Elasticsearch.\n\n[float]\n=== Core security plugins\n\nThe core security plugins are:\n\nlink:/products/shield[Shield]::\n\nShield is the Elastic product that makes it easy for anyone to add\nenterprise-grade security to their ELK stack. Designed to address the growing security\nneeds of thousands of enterprises using ELK today, Shield provides peace of\nmind when it comes to protecting your data.\n\n[float]\n=== Community contributed security plugins\n\nThe following plugin has been contributed by our community:\n\n* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:\n High performance access control for Elasticsearch native REST API (by Simone Scarduzio)\n\nThis community plugin appears to have been abandoned:\n\n* https://github.com/sonian/elasticsearch-jetty[Jetty HTTP transport plugin]:\n Uses Jetty to provide SSL connections, basic authentication, and request logging (by Sonian Inc.)" }, { "from": "gpt", "value": "[[security]]\n== Security Plugins\n\nSecurity plugins add a security layer to Elasticsearch.\n\n[float]\n=== Core security plugins\n\nThe core security plugins are:\n\nlink:/products/shield[Shield]::\n\nShield is the Elastic product that makes it easy for anyone to add\nenterprise-grade security to their ELK stack. Designed to address the growing security\nneeds of thousands of enterprises using ELK today, Shield provides peace of\nmind when it comes to protecting your data.\n\n[float]\n=== Community contributed security plugins\n\nThe following plugins have been contributed by our community:\n\n* https://github.com/codecentric/elasticsearch-shield-kerberos-realm[Kerberos/SPNEGO Realm]:\n Custom Shield realm to Authenticate HTTP and Transport requests via Kerberos/SPNEGO (by codecentric AG)\n\n* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:\n High performance access control for Elasticsearch native REST API (by Simone Scarduzio)\n\nThis community plugin appears to have been abandoned:\n\n* https://github.com/sonian/elasticsearch-jetty[Jetty HTTP transport plugin]:\n Uses Jetty to provide SSL connections, basic authentication, and request logging (by Sonian Inc.)\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove dead link to javadoc of ExchangePattern\n\nuse the fully qualified name so that users can find the enumeration easily as the information is no more available from the link that is removed." }, { "from": "human", "value": "[[ExchangePattern-ExchangePattern]]\n= Exchange Pattern\n\nThere are many different _Message Exchange Patterns_ you can use in\nmessaging. This concept is also demonstrated in WSDL and JBI's MEPs.\n\nFrom there xref:enterprise-integration-patterns.adoc[Enterprise\nIntegration Patterns] the common examples are\n\n* Request Reply\n* Event Message (or one way)\n\nIn Camel we have an\nhttp://camel.apache.org/maven/current/camel-core/apidocs/org/apache/camel/ExchangePattern.html[ExchangePattern]\nenumeration which can be configured on the *exchangePattern* property on\nthe Message Exchange indicating if a message\nexchange is a one way Event Message (InOnly) or\na Request Reply message exchange (InOut).\n\nFor example to override the default pattern on a xref:components::jms-component.adoc[JMS]\nendpoint you could use this URI\n\n[source,java]\n---------------------------------\njms:MyQueue?exchangePattern=InOut\n---------------------------------\n" }, { "from": "gpt", "value": "[[ExchangePattern-ExchangePattern]]\n= Exchange Pattern\n\nThere are many different _Message Exchange Patterns_ you can use in\nmessaging. This concept is also demonstrated in WSDL and JBI's MEPs.\n\nFrom there xref:enterprise-integration-patterns.adoc[Enterprise\nIntegration Patterns] the common examples are\n\n* Request Reply\n* Event Message (or one way)\n\nIn Camel we have an\n`org.apache.camel.ExchangePattern`\nenumeration which can be configured on the *exchangePattern* property on\nthe Message Exchange indicating if a message\nexchange is a one way Event Message (InOnly) or\na Request Reply message exchange (InOut).\n\nFor example to override the default pattern on a xref:components::jms-component.adoc[JMS]\nendpoint you could use this URI\n\n[source,java]\n---------------------------------\njms:MyQueue?exchangePattern=InOut\n---------------------------------\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "[DOCS] Fix formatting in change password API\n\nOriginal commit: elastic/x-pack-elasticsearch@acbe051abb686fb46f20b22a29b3d4330dd46de9\n" }, { "from": "human", "value": "[role=\"xpack\"]\n[[security-api-change-password]]\n=== Change Password API\n\nThe Change Password API enables you to submit a request to change the password\nof a user.\n\n==== Request\n\n`POST _xpack/security/user/_password` +\n`POST _xpack/security/user/<username>/_password` \n\n\n==== Path Parameters\n\n`username`::\n (string) The user whose password you want to change. If you do not specify\n this parameter, the password is changed for the current user.\n\n\n==== Request Body\n\n`password` (required)::\n (string) The new password value.\n\n\n==== Authorization\n\nEvery user can change their own password. Users with the `manage_security`\nprivilege can change passwords of other users.\n\n\n==== Examples\n\nThe following example updates the password for the `elastic` user:\n\n[source,js]\n--------------------------------------------------\nPOST _xpack/security/user/elastic/_password\n{\n \"password\": \"x-pack-test-password\"\n}\n--------------------------------------------------\n// CONSOLE\n\nA successful call returns an empty JSON structure.\n\n[source,js]\n--------------------------------------------------\n{}\n--------------------------------------------------\n// TESTRESPONSE\n" }, { "from": "gpt", "value": "[role=\"xpack\"]\n[[security-api-change-password]]\n=== Change Password API\n\nThe Change Password API enables you to submit a request to change the password\nof a user.\n\n==== Request\n\n`POST _xpack/security/user/_password` +\n\n`POST _xpack/security/user/<username>/_password`\n\n\n==== Path Parameters\n\n`username`::\n (string) The user whose password you want to change. If you do not specify\n this parameter, the password is changed for the current user.\n\n\n==== Request Body\n\n`password` (required)::\n (string) The new password value.\n\n\n==== Authorization\n\nEvery user can change their own password. Users with the `manage_security`\nprivilege can change passwords of other users.\n\n\n==== Examples\n\nThe following example updates the password for the `elastic` user:\n\n[source,js]\n--------------------------------------------------\nPOST _xpack/security/user/elastic/_password\n{\n \"password\": \"x-pack-test-password\"\n}\n--------------------------------------------------\n// CONSOLE\n\nA successful call returns an empty JSON structure.\n\n[source,js]\n--------------------------------------------------\n{}\n--------------------------------------------------\n// TESTRESPONSE\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Reword note about windows and FDs\n\nMake it clear why this isn't a thing in windows and remove value\njudgement about file descriptors.\n\nRelates to #20737\n" }, { "from": "human", "value": "[[file-descriptors]]\n=== File Descriptors\n\n[NOTE]\nThis is only a problem for Linux and macOS and can be safely ignored if running\nElasticsearch on Windows.\n\nElasticsearch uses a lot of file descriptors or file handles. Running out of\nfile descriptors can be disastrous and will most probably lead to data loss.\nMake sure to increase the limit on the number of open files descriptors for\nthe user running Elasticsearch to 65,536 or higher.\n\nFor the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as\nroot before starting Elasticsearch, or set `nofile` to `65536` in\n<<limits.conf,`/etc/security/limits.conf`>>.\n\nRPM and Debian packages already default the maximum number of file\ndescriptors to 65536 and do not require further configuration.\n\nYou can check the `max_file_descriptors` configured for each node\nusing the <<cluster-nodes-stats>> API, with:\n\n[source,js]\n--------------------------------------------------\nGET _nodes/stats/process?filter_path=**.max_file_descriptors\n--------------------------------------------------\n// CONSOLE\n" }, { "from": "gpt", "value": "[[file-descriptors]]\n=== File Descriptors\n\n[NOTE]\nThis is only relevant for Linux and macOS and can be safely ignored if running\nElasticsearch on Windows. On Windows the JVM uses an\nhttps://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx[API]\nlimited only by available resources.\n\nElasticsearch uses a lot of file descriptors or file handles. Running out of\nfile descriptors can be disastrous and will most probably lead to data loss.\nMake sure to increase the limit on the number of open files descriptors for\nthe user running Elasticsearch to 65,536 or higher.\n\nFor the `.zip` and `.tar.gz` packages, set <<ulimit,`ulimit -n 65536`>> as\nroot before starting Elasticsearch, or set `nofile` to `65536` in\n<<limits.conf,`/etc/security/limits.conf`>>.\n\nRPM and Debian packages already default the maximum number of file\ndescriptors to 65536 and do not require further configuration.\n\nYou can check the `max_file_descriptors` configured for each node\nusing the <<cluster-nodes-stats>> API, with:\n\n[source,js]\n--------------------------------------------------\nGET _nodes/stats/process?filter_path=**.max_file_descriptors\n--------------------------------------------------\n// CONSOLE\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add steps to delete EFK\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * logging/efk-logging-uninstall.adoc\n\n[id='efk-logging-uninstall-efk_{context}']\n= Uninstalling EFK from {product-title}\n\nUse the following command to remove everything generated during the deployment.\n\n\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * logging/efk-logging-uninstall.adoc\n\n[id='efk-logging-uninstall-efk_{context}']\n= Uninstalling EFK from {product-title}\n\nUse the following command to remove everything generated during the deployment.\n\n----\n# oc delete clusterlogging $clusterlogging_names -n openshift-logging\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo in Camel 2.18.3 release notes\n" }, { "from": "human", "value": "[[Camel2.18.3Release-Camel2.18.3Release]]\n== Camel 2.18.3 Release\n\n### New and Noteworthy\n\nWelcome to the 2.18.3 release which is mainly a maintenance release with\n38 fixes resolved.\n\nFor further details,\nsee https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12339161&projectId=12311211[the\nlist of JIRA issues].\n\n### Known Issues\n\n* See <<Camel2.18.0Release,Camel 2.18.0 Release>>\n\n### Important changes to consider when upgrading\n\n* See <<Camel2.18.0Release,Camel 2.18.0 Release>>\n\n### Getting the Binaries using Maven 2\n\nTo use this release in your maven project, the proper dependency\nconfiguration that you should use in your\nhttp://maven.apache.org/guides/introduction/introduction-to-the-pom.html[Maven\nPOM] is:\n\n[source,java]\n-------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-core</artifactId>\n <version>2.18.3</version>\n</dependency>\n-------------------------------------\n\n### Changelog\n\nFor a more detailed view of new features and bug fixes, see the:\n\n*\nhttps://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12339161&projectId=12311211[Release\nnotes for 2.18.3]\n\n" }, { "from": "gpt", "value": "[[Camel2.18.3Release-Camel2.18.3Release]]\n== Camel 2.18.3 Release\n\n### New and Noteworthy\n\nWelcome to the 2.18.3 release which is mainly a maintenance release with\n38 fixes resolved.\n\nFor further details,\nsee https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12339161&projectId=12311211[the\nlist of JIRA issues].\n\n### Known Issues\n\n* See <<Camel2.18.0Release,Camel 2.18.0 Release>>\n\n### Important changes to consider when upgrading\n\n* See <<Camel2.18.0Release,Camel 2.18.0 Release>>\n\n### Getting the Binaries using Maven 2\n\nTo use this release in your maven project, the proper dependency\nconfiguration that you should use in your\nhttp://maven.apache.org/guides/introduction/introduction-to-the-pom.html[Maven\nPOM] is:\n\n[source,java]\n-------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-core</artifactId>\n <version>2.18.3</version>\n</dependency>\n-------------------------------------\n\n### Changelog\n\nFor a more detailed view of new features and bug fixes, see the:\n\nhttps://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12339161&projectId=12311211[Release\nnotes for 2.18.3]\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Enable IDE rendering of generated Swagger documentation\n" }, { "from": "human", "value": "include::{generated}/overview.adoc[]\ninclude::manual_rest_doc.adoc[]\ninclude::{generated}/paths.adoc[]\n" }, { "from": "gpt", "value": ":generated: ../../../target/generated-docs/asciidoc\n\ninclude::{generated}/overview.adoc[]\ninclude::manual_rest_doc.adoc[]\ninclude::{generated}/paths.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix minor issues in delimited payload token filter docs\n\nThis commit backports commit b4fbe1e1b80dc47120de5111e31c12db5d514af2\nfrom master to 2.x.\n\nRelates #15486\n" }, { "from": "human", "value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split per default int to tokens `fox`, `quick` and `the` with payloads `1`, `2` and `3` respectively.\n\n\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`." }, { "from": "gpt", "value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`." } ]
asciidoc
chosen
[ { "from": "system", "value": "HZN-1194: Make it more clear which collector/detector/monitor to use instead\n" }, { "from": "human", "value": "[[releasenotes-22]]\n== What's New in OpenNMS Horizon 22\n\n=== System Requirements\n\nTODO\n\n=== Important Upgrade Notes\n\n* All collectors using the JMXSecureCollector, MX4JCollector or JBossCollector must be migrated to use the JMXCollector instead.\n* All monitors using the JBossMonitor or JMXSecureMonitor must be migrated to use the JMXMonitor instead.\n* All detectors using the JBossDetector or MX4JDetector must be migrated to use the JMXDetector instead.\n\n=== Breaking Changes\n\n* JMXSecureCollector, MX4JCollector and JBossCollector have been removed. Please use the JMXCollector or Jsr160Collector instead.\n* JBossMonitor and JMXSecureMonitor have been. Removed. Use the JMXMonitor instead.\n* JBossDetector and MX4JDetector have been removed. Use the JMXDetector instead.\n\n=== New Features\n\n\n=== Dependency Updates\n" }, { "from": "gpt", "value": "[[releasenotes-22]]\n== What's New in OpenNMS Horizon 22\n\n=== System Requirements\n\nTODO\n\n=== Important Upgrade Notes\n\n* All collectors using the `JMXSecureCollector`, `MX4JCollector` or `JBossCollector` must be migrated to use the `Jsr160Collector` instead.\n* All monitors using the `JBossMonitor` or `JMXSecureMonitor` must be migrated to use the `Jsr160Monitor` instead.\n* All detectors using the `JBossDetector` or `MX4JDetector` must be migrated to use the `Jsr160Detector` instead.\n\n=== Breaking Changes\n\n* `JMXSecureCollector`, `MX4JCollector` and `JBossCollector` have been removed. Please use the `Jsr160Collector` instead.\n* `JBossMonitor` and `JMXSecureMonitor` have been. Removed. Use the `Jsr160Monitor` instead.\n* `JBossDetector` and `MX4JDetector` have been removed. Use the `Jsr160Detector` instead.\n\n=== New Features\n\n\n=== Dependency Updates\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add paragraph about destination parameter in Bus endpoints\n\nFixes gh-16\n" }, { "from": "human", "value": "= Spring Cloud Bus\n:github: https://github.com/spring-cloud/spring-cloud-config\n:githubmaster: {github}/tree/master\n:docslink: {githubmaster}/docs/src/main/asciidoc\n:toc:\n\ninclude::intro.adoc[]\n\ninclude::https://raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/docs/src/main/asciidoc/contributing-docs.adoc[]\n\n== Quick Start\n\ninclude::quickstart.adoc[]\n\n== Customizing the AMQP ConnectionFactory\n\nIf you are using AMQP there needs to be a `ConnectionFactory` (from\nSpring Rabbit) in the application context. If there is a single\n`ConnectionFactory` it will be used, or if there is a one qualified as\n`@BusConnectionFactory` it will be preferred over others, otherwise\nthe `@Primary` one will be used. If there are multiple unqualified\nconnection factories there will be an error.\n\nNote that Spring Boot (as of 1.2.2) creates a `ConnectionFactory` that\nis _not_ `@Primary`, so if you want to use one connection factory for\nthe bus and another for business messages, you need to create both,\nand annotate them `@BusConnectionFactory` and `@Primary` respectively." }, { "from": "gpt", "value": "= Spring Cloud Bus\n:github: https://github.com/spring-cloud/spring-cloud-config\n:githubmaster: {github}/tree/master\n:docslink: {githubmaster}/docs/src/main/asciidoc\n:toc:\n\ninclude::intro.adoc[]\n\ninclude::https://raw.githubusercontent.com/spring-cloud/spring-cloud-build/master/docs/src/main/asciidoc/contributing-docs.adoc[]\n\n== Quick Start\n\ninclude::quickstart.adoc[]\n\n== Addressing an Instance\n\nThe HTTP endpoints accept a \"destination\" parameter, e.g. \"/bus/refresh?destination=customers:9000\", where the destination is an `ApplicationContext` ID. If the ID is owned by an instance on the Bus then it will process the message and all other instances will ignore it. Spring Boot sets the ID for you in the `ContextIdApplicationContextInitializer` to a combination of the `spring.application.name`, active profiles and `server.port` by default.\n\n== Customizing the AMQP ConnectionFactory\n\nIf you are using AMQP there needs to be a `ConnectionFactory` (from\nSpring Rabbit) in the application context. If there is a single\n`ConnectionFactory` it will be used, or if there is a one qualified as\n`@BusConnectionFactory` it will be preferred over others, otherwise\nthe `@Primary` one will be used. If there are multiple unqualified\nconnection factories there will be an error.\n\nNote that Spring Boot (as of 1.2.2) creates a `ConnectionFactory` that\nis _not_ `@Primary`, so if you want to use one connection factory for\nthe bus and another for business messages, you need to create both,\nand annotate them `@BusConnectionFactory` and `@Primary` respectively." } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix link to Log4j 2 example\n" }, { "from": "human", "value": "Ctrlflow Automated Error Reporting Client Examples\n==================================================\n\nExamples of how to use various clients to communicate with the https://www.ctrlflow.com/automated-error-reporting/[Ctrlflow Automated Error Reporting (AER) server].\n\nAt the moment, the following examples are available:\n\nlink:logback-java-example[Log4j 2 Java Example]::\nIf you are developing a stand-alone *Java application* that uses the popular https://logging.apache.org/log4j/2.x/[*Log4j 2*] logging framework, have a look at this example.\nIt shows you how to configure Log4j 2 to send error reports to a Ctrlflow AER server.\n\nlink:logback-java-example[Logback Java Example]::\nIf you are developing a stand-alone *Java application* that uses the popular http://logback.qos.ch/[*Logback*] logging framework, have a look at this example.\nIt shows you how to configure Logback to send error reports to a Ctrlflow AER server.\n\nlink:simple-java-example[Simple Java Example]::\nIf you are developing a stand-alone *Java application* that uses none of the logging frameworks supported out-of-the-box, have a look at this example.\nIt shows you how to use the Ctrlflow AER Simple Client to send caught exceptions to your server.\n" }, { "from": "gpt", "value": "Ctrlflow Automated Error Reporting Client Examples\n==================================================\n\nExamples of how to use various clients to communicate with the https://www.ctrlflow.com/automated-error-reporting/[Ctrlflow Automated Error Reporting (AER) server].\n\nAt the moment, the following examples are available:\n\nlink:log4j2-java-example[Log4j 2 Java Example]::\nIf you are developing a stand-alone *Java application* that uses the popular https://logging.apache.org/log4j/2.x/[*Log4j 2*] logging framework, have a look at this example.\nIt shows you how to configure Log4j 2 to send error reports to a Ctrlflow AER server.\n\nlink:logback-java-example[Logback Java Example]::\nIf you are developing a stand-alone *Java application* that uses the popular http://logback.qos.ch/[*Logback*] logging framework, have a look at this example.\nIt shows you how to configure Logback to send error reports to a Ctrlflow AER server.\n\nlink:simple-java-example[Simple Java Example]::\nIf you are developing a stand-alone *Java application* that uses none of the logging frameworks supported out-of-the-box, have a look at this example.\nIt shows you how to use the Ctrlflow AER Simple Client to send caught exceptions to your server.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove reference to \"circuit breaker\"\n" }, { "from": "human", "value": "This project provides Etcd integrations for Spring Boot apps through autoconfiguration\nand binding to the Spring Environment and other Spring programming model idioms. With a few\nsimple annotations you can quickly enable and configure the common patterns inside your\napplication and build large distributed systems with Etcd based components. The\npatterns provided include Service Discovery, Circuit Breaker and Configuration.\nIntelligent Routing (Zuul) and Client Side Load Balancing (Ribbon) are provided by\nintegration with Spring Cloud Netflix.\n\n" }, { "from": "gpt", "value": "This project provides Etcd integrations for Spring Boot apps through autoconfiguration\nand binding to the Spring Environment and other Spring programming model idioms. With a few\nsimple annotations you can quickly enable and configure the common patterns inside your\napplication and build large distributed systems with Etcd based components. The\npatterns provided include Service Discovery and Configuration.\nIntelligent Routing (Zuul) and Client Side Load Balancing (Ribbon) are provided by\nintegration with Spring Cloud Netflix.\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix the news article to start with a top-level header.\n" }, { "from": "human", "value": "== Bugfix Release\n:docname: 20200614-bugfix-release\n:page-publish_date: 2020-06-14\n:page-layout: news-article\n\nThe `revapi-java` extension released a couple of weeks ago contained a rather ugly regression that caused an index out\nof bounds exception on interfaces that override methods from `java.lang.Object`.\n\nThis has now been fixed in the new `0.21.1` version of `revapi-java`. Please upgrade if you have been hit by this bug.\n\n" }, { "from": "gpt", "value": "= Bugfix Release\n:docname: 20200614-bugfix-release\n:page-publish_date: 2020-06-14\n:page-layout: news-article\n\nThe `revapi-java` extension released a couple of weeks ago contained a rather ugly regression that caused an index out\nof bounds exception on interfaces that override methods from `java.lang.Object`.\n\nThis has now been fixed in the new `0.21.1` version of `revapi-java`. Please upgrade if you have been hit by this bug.\n\n" } ]
asciidoc
chosen
[ { "from": "human", "value": "Write a trouble shooting guide\n" }, { "from": "gpt", "value": "== Troubleshooting\n\n[qanda]\nWhy does `Config.setSize(...)` not take effect?::\n The configuration state of a `Config` object is only read in the constructor of the pool, so changing the configuration after the pool has been created has no effect.\n Look at the `ResizablePool` interface for changing the size of an existing pool.\n\nWhy am I getting an `stormpot.PoolException: Slot release from bad state: 1` exception?::\n The slot has already been released.\n Either you are calling the `release` method twice, or you are sharing the `Poolable` instances without the consent of the pool.\n\nMy program won't stop running, or Stormpot leaves a thread running!::\n Stormpot creates a background thread that is non-daemon by default.\n This thread will run until the pool has been shut down with `LifecycledPool#shutdown`, and all the `Poolable` instances in it has been deallocated.\n If you have started the shut down process but it never finishes, then you have probably leaked an object.\n All objects in the pool must be released back to the pool before the shut down can complete.\n\nHow do I find out if I have leaked an object?::\n There are two kinds of leaks: one where the program forgets to `release` an object back to the pool, and one where the program both forgets to `release`, and also forgets the reference to the object, so it becomes garbage collected.\n You can diagnose the first case by taking a heap dump of the running program – possibly after you have initiated the shut down process and waited for it to complete for a little while – and look for any lingering `stormpot.BSlot`, `stormpot.QSlot` objects, or instances of your `Poolable` implementation, and see where they are referenced.\n The precise leak detector will notice the last case, but cannot tell you where in the code the problem is.\n\nWhy are there so many \"Stormpot\" threads in my program?::\n Stormpot starts a thread for every pool instance you create.\n It does not share threads across pool instances.\n\nI'm not allowed to start threads in my environment, or I get an exception about not being able to start a thread when I try to create an pool instance!::\n Some environments restrict the ability to create threads.\n These environments often have a way to get around that restriction, for instance by creating threads with a particular `ThreadGroup` or creating the thread in a particular privileged part of the code.\n Stormpot can be configured to use a specific `ThreadFactory` for creating its threads.\n So giving it a `ThreadFactory` that knows how to create threads in your particular environment, is the way to get around these restrictions.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Revert \"Remove travis badge until false positives are fixed\"\n\nThis reverts commit 9d4ef4e9570c0cab394cf7f52251903aa1679568.\n" }, { "from": "human", "value": "# SAAV\n\nSoftware application to analyse and visualise research output in the humanities and social sciences\n\n# Documentation\n\n* link:CONTRIBUTING.adoc[Development Setup]\n* link:docs/architecture-and-design.adoc[Architecture & Design]\n* link:docs/deployment.adoc[Deployment]\n\n# Live\n\nThe latest milestone is deployed via GitHub Pages:\n\n* http://fhnw-saav.github.io/saav/[Live Site] (http://fhnw-saav.github.io/saav/version.txt[inspect current version])\n" }, { "from": "gpt", "value": "# SAAV image:https://travis-ci.org/fhnw-saav/saav.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/fhnw-saav/saav\"]\n\nSoftware application to analyse and visualise research output in the humanities and social sciences\n\n# Documentation\n\n* link:CONTRIBUTING.adoc[Development Setup]\n* link:docs/architecture-and-design.adoc[Architecture & Design]\n* link:docs/deployment.adoc[Deployment]\n\n# Live\n\nThe latest milestone is deployed via GitHub Pages:\n\n* http://fhnw-saav.github.io/saav/[Live Site] (http://fhnw-saav.github.io/saav/version.txt[inspect current version])\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add link to field data docs from field data fields\n\nThis commit adds a link to the field data docs from the field data\nfields docs for completeness.\n\nCloses #17672\n" }, { "from": "human", "value": "[[search-request-fielddata-fields]]\n=== Field Data Fields\n\nAllows to return the field data representation of a field for each hit, for\nexample:\n\n[source,js]\n--------------------------------------------------\n{\n \"query\" : {\n ...\n },\n \"fielddata_fields\" : [\"test1\", \"test2\"]\n}\n--------------------------------------------------\n\nField data fields can work on fields that are not stored.\n\nIt's important to understand that using the `fielddata_fields` parameter will\ncause the terms for that field to be loaded to memory (cached), which will\nresult in more memory consumption.\n" }, { "from": "gpt", "value": "[[search-request-fielddata-fields]]\n=== Field Data Fields\n\nAllows to return the <<fielddata,field data>> representation of a field for each hit, for\nexample:\n\n[source,js]\n--------------------------------------------------\n{\n \"query\" : {\n ...\n },\n \"fielddata_fields\" : [\"test1\", \"test2\"]\n}\n--------------------------------------------------\n\nField data fields can work on fields that are not stored.\n\nIt's important to understand that using the `fielddata_fields` parameter will\ncause the terms for that field to be loaded to memory (cached), which will\nresult in more memory consumption.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update explanation about adding width and height to the root svg element\n" }, { "from": "human", "value": "= A test for embedding a SVG image\n\n.Caption for a SVG diagram\nimage::Vsw9cGkj8bJjEBr6.svg[width=417]\n\n* I'd like to remove a redundant vertical space between the diagram above and the caption.\n* I suppose Height 275 below is too much.\n\n <<\n /Name /Im7\n /Type /XObject\n /Length 29 0 R\n /Filter /FlateDecode\n /Subtype /Image\n /Width 417\n /Height 275\n /BitsPerComponent 8\n /ColorSpace /DeviceGray\n >>\n\nimage::Vsw9cGkj8bJjEBr6.svg[width=417]\n\n* There is still a redundant vertical space even when I do not have a caption, so the redundant space seems to be inside the image.\n\nimage::Vsw9cGkj8bJjEBr6.svg[width=417,height=242]\n\n* However, the diagram becomes too small if I set the height as well as the width.\n* And there still exists a redundant vertical space. I'd like to remove it!\n" }, { "from": "gpt", "value": "= A test for embedding a SVG image\n\n.Caption for a SVG diagram\nimage::Vsw9cGkj8bJjEBr6.svg[]\n\nYou must be add `width` and `height` attributes to the root `<svg>` element to avoid\nredundant spaces.\n\nIn the above example, I added `width=\"441.0\" height=\"265.5\"`.\n" } ]
asciidoc
chosen