conversations: list (lengths 2-3)
source: string (1 value)
stack: string (1 value)
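Each row below is one record: a conversations JSON array (the commit-style edit instruction as a system turn, the original AsciiDoc as a human turn, and the edited AsciiDoc as a gpt turn), followed by its source and stack values. As a minimal sketch for splitting a dumped row back into those columns, assuming the layout shown above and using only the Python standard library (the function name and the returned keys are illustrative, not part of any official loader):

[source,python]
----
import json

def parse_record(conversations_json: str, source: str, stack: str) -> dict:
    """Split one dumped row back into its columns.

    `conversations_json` is the JSON array on the row's first line;
    `source` and `stack` are the two short string values that follow it.
    """
    turns = json.loads(conversations_json)
    # Each turn is a dict like {"from": "system" | "human" | "gpt", "value": "..."}.
    by_role = {turn["from"]: turn["value"] for turn in turns}
    return {
        "instruction": by_role.get("system"),  # commit-style edit instruction
        "before": by_role.get("human"),        # original AsciiDoc source
        "after": by_role.get("gpt"),           # edited AsciiDoc source
        "source": source,                      # e.g. "asciidoc"
        "stack": stack,                        # e.g. "chosen"
    }
----

For example, `parse_record(row_json, "asciidoc", "chosen")` applied to the first row below yields the instruction, the original README, and the edited README as separate fields.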
[ { "from": "system", "value": "Add some build/test commands to readme.\n" }, { "from": "human", "value": "TinkerPop3\n----------\n\nimage:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]\n\nDocumentation\n~~~~~~~~~~~~~\n\n* link:http://tinkerpop.com/docs/current/[user documentation]\n* link:http://tinkerpop.com/javadocs/current/[javadoc]\n\nBuilding\n~~~~~~~~\n\n* Build Project: `mvn clean install`\n* Build AsciiDocs: `mvn process-resources -Dasciidoc`\n* Deploy AsciiDocs: `mvn deploy -Dasciidoc`\n* Deploy JavaDocs: `mvn deploy -Djavadoc`\n\nGet Started\n~~~~~~~~~~~\n\n[source,bash]\n----\n$ bin/gremlin.sh\n\n \\,,,/\n (o o)\n-----oOOo-(3)-oOOo-----\ngremlin>\n----\n" }, { "from": "gpt", "value": "TinkerPop3\n----------\n\nimage:https://raw.githubusercontent.com/tinkerpop/tinkerpop3/master/docs/static/images/tinkerpop3-splash.png[TinkerPop3]\n\nDocumentation\n~~~~~~~~~~~~~\n\n* link:http://tinkerpop.com/docs/current/[user documentation]\n* link:http://tinkerpop.com/javadocs/current/[javadoc]\n\nBuilding and Testing\n~~~~~~~~~~~~~~~~~~~~\n\n* Build Project: `mvn clean install`\n* Build AsciiDocs: `mvn process-resources -Dasciidoc`\n* Deploy AsciiDocs: `mvn deploy -Dasciidoc`\n* Deploy JavaDocs: `mvn deploy -Djavadoc`\n* Travis Deploy Docs: merge/push to `publish-docs` branch\n* Integration Tests: `mvn clean verify -DskipIntegrationTests=false`\n* Performance Tests: `mvn clean verify -DskipPerformanceTests=false`\n* Gatling Load Tests: `mvn clean test -Pgatling -Dhost={hostname}`\n\nGet Started\n~~~~~~~~~~~\n\n[source,bash]\n----\n$ bin/gremlin.sh\n\n \\,,,/\n (o o)\n-----oOOo-(3)-oOOo-----\ngremlin>\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update walk through link text\n\nChanged \"walk through\" link text to be grammatically correct." }, { "from": "human", "value": "= Getting Started\n\nFirst you need to Download the Camel distribution;\nor you could grab the Source and try\nBuilding it yourself.\n\nThen come back here, and you might want to read the following\ndocumentation before continuing:\n\n* Longer xref:book-getting-started.adoc[Getting Started Guide]\n* Find out about xref:components:eips:enterprise-integration-patterns.adoc[Enterprise Integration Patterns]\n and how to implement them with Camel\n* Review the xref:architecture.adoc[Architecture guide] to see how to build Routes using the xref:dsl.adoc[DSL].\n\n== Working with CamelContexts and RouteBuilders\n\nTo get started with Camel:\n\n1. Create a xref:camelcontext.adoc[CamelContext].\n2. Add whatever routing rules you wish using the DSL and `RouteBuilder` or using XML DSL.\n3. Start the Camel context.\n\nWhen your application is closing you may wish to stop the context\n\nWhen you are ready, why not xref:walk-through-an-example.adoc[Walk through an Example]?\nAnd then continue the walk xref:walk-through-another-example.adoc[Walk through another example].\n\n" }, { "from": "gpt", "value": "= Getting Started\n\nFirst you need to Download the Camel distribution;\nor you could grab the Source and try\nBuilding it yourself.\n\nThen come back here, and you might want to read the following\ndocumentation before continuing:\n\n* Longer xref:book-getting-started.adoc[Getting Started Guide]\n* Find out about xref:components:eips:enterprise-integration-patterns.adoc[Enterprise Integration Patterns]\n and how to implement them with Camel\n* Review the xref:architecture.adoc[Architecture guide] to see how to build Routes using the xref:dsl.adoc[DSL].\n\n== Working with CamelContexts and RouteBuilders\n\nTo get started with Camel:\n\n1. Create a xref:camelcontext.adoc[CamelContext].\n2. Add whatever routing rules you wish using the DSL and `RouteBuilder` or using XML DSL.\n3. Start the Camel context.\n\nWhen your application is closing you may wish to stop the context\n\nWhen you are ready, why not xref:walk-through-an-example.adoc[walk through an example]?\nAnd then xref:walk-through-another-example.adoc[walk through another example].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add additional steps for creating address space\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// assembly-configure-address-spaces-addresses-cli-kube.adoc\n// assembly-configure-address-spaces-addresses-cli-oc.adoc\n\n[id='create-address-space-cli-{context}']\n= Creating an address space\n\n.Procedure\n\n. Create an address space definition:\n+\n[source,yaml,options=\"nowrap\"]\n.link:resources/standard-address-space.yaml[standard-address-space.yaml]\n----\ninclude::../common/standard-address-space.yaml[]\n----\n\n. Create the address space:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} create -f standard-address-space.yaml\n----\n+\nThe address space is ready for use when `.status.isReady` field is set to `true`.\n\n. Check the status of the address space:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} get addressspace myspace -o jsonpath={.status.isReady}\n----\n\n. Retrieve console URL:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} get addressspace myspace -o jsonpath={.status.endpointStatuses[?(@.name==\\'console\\')].host}\n----\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// assembly-configure-address-spaces-addresses-cli-kube.adoc\n// assembly-configure-address-spaces-addresses-cli-oc.adoc\n\n[id='create-address-space-cli-{context}']\n= Creating an address space\n\n.Procedure\n\nifeval::[\"{cmdcli}\" == \"oc\"]\n. Log in as a messaging tenant:\n+\n[source,yaml,options=\"nowrap\"]\n----\n{cmdcli} login -u developer\n----\n\n. Create the project for the messaging application:\n+\n[source,yaml,options=\"nowrap\"]\n----\n{cmdcli} new-project myapp\n----\nendif::[]\n\n. Create an address space definition:\n+\n[source,yaml,options=\"nowrap\"]\n.link:resources/standard-address-space.yaml[standard-address-space.yaml]\n----\ninclude::../common/standard-address-space.yaml[]\n----\n\n. Create the address space:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} create -f standard-address-space.yaml\n----\n+\nThe address space is ready for use when `.status.isReady` field is set to `true`.\n\n. Check the status of the address space:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} get addressspace myspace -o jsonpath={.status.isReady}\n----\n\n. Retrieve console URL:\n+\n[options=\"nowrap\",subs=\"attributes\"]\n----\n{cmdcli} get addressspace myspace -o jsonpath={.status.endpointStatuses[?(@.name==\\'console\\')].host}\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix bz1729334: remove GCE storage reference\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1729334\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * storage/dynamic-provisioning.adoc\n\n[id=\"change-default-storage-class_{context}\"]\n= Changing the default StorageClass\n\nIf you are using GCE and AWS, use the following process to change the \ndefault StorageClass. This process assumes you have two StorageClasses\ndefined, `gp2` and `standard`, and you want to change the default\nStorageClass from `gp2` to `standard`.\n\n. List the StorageClass:\n+\n----\n$ oc get storageclass\n\nNAME TYPE\ngp2 (default) kubernetes.io/aws-ebs <1>\nstandard kubernetes.io/gce-pd\n----\n<1> `(default)` denotes the default StorageClass.\n\n. Change the value of the annotation \n`storageclass.kubernetes.io/is-default-class` to `false` for the default \nStorageClass:\n+ \n----\n$ oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'\n----\n\n. Make another StorageClass the default by adding or modifying the \nannotation as `storageclass.kubernetes.io/is-default-class=true`.\n+ \n----\n$ oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'\n----\n\n. Verify the changes:\n+\n----\n$ oc get storageclass\n\nNAME TYPE\ngp2 kubernetes.io/aws-ebs\nstandard (default) kubernetes.io/gce-pd\n----\n\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * storage/dynamic-provisioning.adoc\n\n[id=\"change-default-storage-class_{context}\"]\n= Changing the default StorageClass\n\nIf you are using AWS, use the following process to change the default\nStorageClass. This process assumes you have two StorageClasses\ndefined, `gp2` and `standard`, and you want to change the default\nStorageClass from `gp2` to `standard`.\n\n. List the StorageClass:\n+\n----\n$ oc get storageclass\n\nNAME TYPE\ngp2 (default) kubernetes.io/aws-ebs <1>\nstandard kubernetes.io/aws-ebs\n----\n<1> `(default)` denotes the default StorageClass.\n\n. Change the value of the annotation \n`storageclass.kubernetes.io/is-default-class` to `false` for the default \nStorageClass:\n+ \n----\n$ oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'\n----\n\n. Make another StorageClass the default by adding or modifying the \nannotation as `storageclass.kubernetes.io/is-default-class=true`.\n+ \n----\n$ oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'\n----\n\n. Verify the changes:\n+\n----\n$ oc get storageclass\n\nNAME TYPE\ngp2 kubernetes.io/aws-ebs\nstandard (default) kubernetes.io/aws-ebs\n----\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix header in test source\n" }, { "from": "human", "value": ":title: Natural habitat\n:parent: Traits and environment\n\n== Natural habitatz\n\nC. porcellus is not found naturally in the wild; it is likely descended from some closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii, which are still commonly found in various regions of South America.\nSome species of cavy identified in the 20th century, such as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild.\nWild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle.\nThey are social, living in the wild in small groups which consist of several females (sows), a male (boar), and the young (which in a break with the preceding porcine nomenclature are called pups).\nThey move together in groups (herds) eating grass or other vegetation, and do not store food.\nWhile they do not burrow or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation.\nThey tend to be most active during dawn and dusk, when it is harder for predators to spot them.\n" }, { "from": "gpt", "value": ":title: Natural habitat\n:parent: Traits and environment\n\n== Natural habitat\n\nC. porcellus is not found naturally in the wild; it is likely descended from some closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii, which are still commonly found in various regions of South America.\nSome species of cavy identified in the 20th century, such as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild.\nWild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle.\nThey are social, living in the wild in small groups which consist of several females (sows), a male (boar), and the young (which in a break with the preceding porcine nomenclature are called pups).\nThey move together in groups (herds) eating grass or other vegetation, and do not store food.\nWhile they do not burrow or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation.\nThey tend to be most active during dawn and dusk, when it is harder for predators to spot them.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Clarify that BlockMacro- and BlockProcessors are called in the same stage\n" }, { "from": "human", "value": "= AsciidoctorJ Conversion Process Overview\n\nBefore starting to write your first extension, some basic understanding of how Asciidoctor treats the document is helpful.\nAs any language processing tool, the process can be roughly split into three steps:\n\n. Parsing: the raw sources content is read and analyzed to generate the internal representation, the AST (_abstract syntax tree_).\n. Processing: the AST is processed. For example to detect possible errors, add automatically generated content (toc), etc.\n. Output generation: once the final AST is set, it's again processed to generate the desired output.\nFor example, a sub-section of the AST representing a title with a paragraph will be converted into its correspondent HTML or PDF output.\n\nNOTE: Some liberty is taken to make the process easier to understand.\nIn reality, Asciidoctor has implementation details that divert from the 3 steps above.\n\nThe different extension types are called in different steps of the conversion process in the following order:\n\n. IncludeProcessors are called in an arbitrary and changeable order during processing and are called whenever an `include::` is found.\n. Preprocessors are called just before parsing.\n. BlockMacroProcessors are called during processing in the order that they appear in the document.\n. BlockProcessors are called during processing in the order that they appear in the document.\n. Treeprocessors are called right before processing.\n. InlineMacroProcessors are called during processing in the order that they appear in the document.\n. Postprocessors are called before output generation.\n. DocinfoProcessors are called in an arbitrary and changeable order during processing.\n" }, { "from": "gpt", "value": "= AsciidoctorJ Conversion Process Overview\n\nBefore starting to write your first extension, some basic understanding of how Asciidoctor treats the document is helpful.\nAs any language processing tool, the process can be roughly split into three steps:\n\n. Parsing: the raw sources content is read and analyzed to generate the internal representation, the AST (_abstract syntax tree_).\n. Processing: the AST is processed. For example to detect possible errors, add automatically generated content (toc), etc.\n. Output generation: once the final AST is set, it's again processed to generate the desired output.\nFor example, a sub-section of the AST representing a title with a paragraph will be converted into its correspondent HTML or PDF output.\n\nNOTE: Some liberty is taken to make the process easier to understand.\nIn reality, Asciidoctor has implementation details that divert from the 3 steps above.\n\nThe different extension types are called in different steps of the conversion process in the following order:\n\n. IncludeProcessors are called in an arbitrary and changeable order during processing and are called whenever an `include::` is found.\n. Preprocessors are called just before parsing.\n. BlockMacroProcessors and BlockProcessors are called during processing in the order that they appear in the document.\n. Treeprocessors are called right before processing.\n. InlineMacroProcessors are called during processing in the order that they appear in the document.\n. Postprocessors are called before output generation.\n. DocinfoProcessors are called in an arbitrary and changeable order during processing.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix missing part of sentence" }, { "from": "human", "value": "ifndef::imagesdir[:imagesdir: ../images]\n= generateDeck\n\nimage::ea/Manual/generateDeck.png[]\n\nThis task makes use of the https://github.com/asciidoctor/asciidoctor-reveal.js/[asciidoctor-reveal.js] backend to render your documents HTML based presentation.\n\nThis task is best used together with the <<exportPPT>> task.\nCreate a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.\n\n== Source\n\n.build.gradle\n[source,groovy]\n----\ninclude::../../../build.gradle[tags=generateDeck]\n----\n" }, { "from": "gpt", "value": "ifndef::imagesdir[:imagesdir: ../images]\n= generateDeck\n\nimage::ea/Manual/generateDeck.png[]\n\nThis task makes use of the https://github.com/asciidoctor/asciidoctor-reveal.js/[asciidoctor-reveal.js] backend to render your documents into a HTML based presentation.\n\nThis task is best used together with the <<exportPPT>> task.\nCreate a PowerPoint presentation and enrich it with reveal.js slide definitions in AsciiDoc within the speaker notes.\n\n== Source\n\n.build.gradle\n[source,groovy]\n----\ninclude::../../../build.gradle[tags=generateDeck]\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add how-to section and documentation site link\n" }, { "from": "human", "value": "# byteslice\n\n## Status\nimage:https://travis-ci.org/rlespinasse/byteslice.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/rlespinasse/byteslice\"]\nimage:https://coveralls.io/repos/github/rlespinasse/byteslice/badge.svg?branch=master[\"Coverage Status\", link=\"https://coveralls.io/github/rlespinasse/byteslice?branch=master\"]\n" }, { "from": "gpt", "value": "= byteslice\n\nimage:https://travis-ci.org/rlespinasse/byteslice.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/rlespinasse/byteslice\"]\nimage:https://coveralls.io/repos/github/rlespinasse/byteslice/badge.svg?branch=master[\"Coverage Status\", link=\"https://coveralls.io/github/rlespinasse/byteslice?branch=master\"]\n\n== How to\n\n=== test it with coverage\n\n[source,shell]\n-----\n$ go test -coverprofile=/tmp/byteslice-cover .\nok \tgithub.com/rlespinasse/byteslice\t0.007s\tcoverage: 100.0% of statements\n-----\n\n=== launch the benchmark\n\n[source,shell]\n-----\n$ go test -bench=. .\ngoos: ...\ngoarch: ...\npkg: github.com/rlespinasse/byteslice\nBenchmarkRBitState/get_low_bit_of_high_nibble-8 \t2000000000\t 1.99 ns/op\nBenchmarkRBitState/get_low_bit-8 \t2000000000\t 1.96 ns/op\n...\nPASS\nok \tgithub.com/rlespinasse/byteslice\t158.442s\n-----\n\n== Documentation Site\n\nSee https://godoc.org/github.com/rlespinasse/byteslice\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "[Docs] Mark Rollup as Experimental instead of Beta\n\nOriginal commit: elastic/x-pack-elasticsearch@bce120daae6b3e7603f737337e746fdf8bdedd4e\n" }, { "from": "human", "value": "[[xpack-rollup]]\n= Rolling up historical data\n\n[partintro]\n--\n\nbeta[]\n\nKeeping historical data around for analysis is extremely useful but often avoided due to the financial cost of\narchiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the\nusefulness of extensive historical data.\n\nThe Rollup feature in {xpack} provides a means to summarize and store historical data so that it can still be used\nfor analysis, but at a fraction of the storage cost of raw data.\n\n\n* <<rollup-overview, Overview>>\n* <<rollup-getting-started,Getting Started>>\n* <<rollup-api-quickref, API Quick Reference>>\n* <<rollup-understanding-groups,Understanding Rollup Grouping>>\n* <<rollup-search-limitations,Limitations of Rollup Search>>\n\n\n--\n\ninclude::overview.asciidoc[]\ninclude::api-quickref.asciidoc[]\ninclude::rollup-getting-started.asciidoc[]\ninclude::understanding-groups.asciidoc[]\ninclude::rollup-search-limitations.asciidoc[]" }, { "from": "gpt", "value": "[[xpack-rollup]]\n= Rolling up historical data\n\n[partintro]\n--\n\nexperimental[]\n\nKeeping historical data around for analysis is extremely useful but often avoided due to the financial cost of\narchiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the\nusefulness of extensive historical data.\n\nThe Rollup feature in {xpack} provides a means to summarize and store historical data so that it can still be used\nfor analysis, but at a fraction of the storage cost of raw data.\n\n\n* <<rollup-overview, Overview>>\n* <<rollup-getting-started,Getting Started>>\n* <<rollup-api-quickref, API Quick Reference>>\n* <<rollup-understanding-groups,Understanding Rollup Grouping>>\n* <<rollup-search-limitations,Limitations of Rollup Search>>\n\n\n--\n\ninclude::overview.asciidoc[]\ninclude::api-quickref.asciidoc[]\ninclude::rollup-getting-started.asciidoc[]\ninclude::understanding-groups.asciidoc[]\ninclude::rollup-search-limitations.asciidoc[]" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update OLM metrics for 4.3\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * operators/understanding-olm/olm-understanding-olm.adoc\n\n[id=\"olm-metrics_{context}\"]\n= Exposed metrics\n\nThe Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for\nuse by the Prometheus-based {product-title} cluster monitoring stack.\n\n.Metrics exposed by OLM\n[cols=\"2a,8a\",options=\"header\"]\n|===\n|Name |Description\n\n|`csv_count`\n|Number of CSVs successfully registered.\n\n|`install_plan_count`\n|Number of InstallPlans.\n\n|`subscription_count`\n|Number of Subscriptions.\n\n|`csv_upgrade_count`\n|Monotonic count of CatalogSources.\n|===\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * operators/understanding-olm/olm-understanding-olm.adoc\n\n[id=\"olm-metrics_{context}\"]\n= Exposed metrics\n\nThe Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for\nuse by the Prometheus-based {product-title} cluster monitoring stack.\n\n.Metrics exposed by OLM\n[cols=\"2a,8a\",options=\"header\"]\n|===\n|Name |Description\n\n|`catalog_source_count`\n|Number of CatalogSources.\n\n|`csv_abnormal`\n|When reconciling a CSV, present whenever a CSV version is in any state other\nthan `Succeeded`. Includes the `name`, `namespace`, `phase`, `reason`, and\n`version` labels. A Prometheus alert is created when this metric is present.\n\n|`csv_count`\n|Number of CSVs successfully registered.\n\n|`csv_succeeded`\n|When reconciling a CSV, represents whether a CSV version is in a `Succeeded`\nstate (value `1`) or not (value `0`). Includes the `name`, `namespace`, and\n`version` labels.\n\n|`csv_upgrade_count`\n|Monotonic count of CatalogSources.\n\n|`install_plan_count`\n|Number of InstallPlans.\n\n|`subscription_count`\n|Number of Subscriptions.\n\n|`subscription_sync_total`\n|Monotonic count of Subscription syncs. Includes the `channel`, `installed` CSV,\nand Subscription `name` labels.\n\n|===\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add url to access apps\n" }, { "from": "human", "value": "= conoha/roles/dokku\n\nRole to install dokku 0.5.7 .\n\nAfter finish installation, access to `http://d.10sr.mydns.jp` and finish setup.\n\n\n\n== Deploying Apps\n\n\nデプロむするには、\n\n----\ncat .ssh/id_rsa.pub | ssh conoha 'sudo sshcommand acl-add dokku dokkudeploy'\n----\n\nγΏγŸγ„γͺγ“γ¨γ‚’γƒžγ‚·γƒ³ζ―Žγ«γ—γͺきゃいけません。どうにかしてθ‡ͺε‹•εŒ–γ—γŸγ„γ€‚\n\n\n\n\n=== Add Apps\n\n.Add pytohn-getting-started app\n----\nalias dokku=\"ssh -t dokku@conoha\"\n\ncd ../../dokku-apps/python-getting-started\n\ndokku apps:create python-getting-started\ngit remote add dokku dokku@conoha:python-getting-started\ngit push dokku master\n----\n\n\n\n== Refs\n\n* http://dokku.viewdocs.io/dokku/application-deployment/\n* https://gist.github.com/10sr/cf8b84cf16f2e67f5dac\n" }, { "from": "gpt", "value": "= conoha/roles/dokku\n\nRole to install dokku 0.5.7 .\n\nAfter finish installation, access to `http://d.10sr.mydns.jp` and finish setup.\n\n\n\n== Deploying Apps\n\n\nデプロむするには、\n\n----\ncat .ssh/id_rsa.pub | ssh conoha 'sudo sshcommand acl-add dokku dokkudeploy'\n----\n\nγΏγŸγ„γͺγ“γ¨γ‚’γƒžγ‚·γƒ³ζ―Žγ«γ—γͺきゃいけません。どうにかしてθ‡ͺε‹•εŒ–γ—γŸγ„γ€‚\n\n\n\n\n=== Add Apps\n\n.Add pytohn-getting-started app\n----\nalias dokku=\"ssh -t dokku@conoha\"\n\ncd ../../dokku-apps/python-getting-started\n\ndokku apps:create python-getting-started\ngit remote add dokku dokku@conoha:python-getting-started\ngit push dokku master\n----\n\nAnd this app can be available at http://python-getting-started.d.10sr.mydns.jp\n\n\n\n== Refs\n\n* http://dokku.viewdocs.io/dokku/application-deployment/\n* https://gist.github.com/10sr/cf8b84cf16f2e67f5dac\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo: change 'and' to 'of'\n\nA couple of people have been confused by the current wording on Slack: I believe the intent was to say that `destr` \"takes a sequence _of_ optional keyword arguments\" (not _and_)." }, { "from": "human", "value": "= Keyword argument functions now also accept maps\nFogus\n2021-03-18\n:jbake-type: post\n\nifdef::env-github,env-browser[:outfilesuffix: .adoc]\n\nTo date, Clojure’s support for keyword arguments forces programmers to choose between creating APIs that better support people (accepting keyword args) or APIs that better support programs (by taking a map of those args).\n\nIntroduced in Clojure 1.11, a function specified to take keyword arguments may be passed a single map instead of or in addition to (and following) the key/value pairs. When a lone map is passed, it is used outright for destructuring, else a trailing map is added to the map built from the preceding key/values via `conj`. For example, a function that takes a sequence and optional keyword arguments and returns a vector containing the values is defined as:\n\n[source,clojure]\n----\n(defn destr [& {:keys [a b] :as opts}]\n [a b opts])\n\n(destr :a 1)\n->[1 nil {:a 1}]\n\n(destr {:a 1 :b 2})\n->[1 2 {:a 1 :b 2}]\n----\n\nIn Clojure 1.11 the call to `destr` accepts a mixture of key/value pairs and/or a lone (or trailing) map benefitting both programmer and program.\n\nThis enhancement is available now in `org.clojure/clojure \"1.11.0-alpha1\"`.\n" }, { "from": "gpt", "value": "= Keyword argument functions now also accept maps\nFogus\n2021-03-18\n:jbake-type: post\n\nifdef::env-github,env-browser[:outfilesuffix: .adoc]\n\nTo date, Clojure’s support for keyword arguments forces programmers to choose between creating APIs that better support people (accepting keyword args) or APIs that better support programs (by taking a map of those args).\n\nIntroduced in Clojure 1.11, a function specified to take keyword arguments may be passed a single map instead of or in addition to (and following) the key/value pairs. When a lone map is passed, it is used outright for destructuring, else a trailing map is added to the map built from the preceding key/values via `conj`. For example, a function that takes a sequence of optional keyword arguments and returns a vector containing the values is defined as:\n\n[source,clojure]\n----\n(defn destr [& {:keys [a b] :as opts}]\n [a b opts])\n\n(destr :a 1)\n->[1 nil {:a 1}]\n\n(destr {:a 1 :b 2})\n->[1 2 {:a 1 :b 2}]\n----\n\nIn Clojure 1.11 the call to `destr` accepts a mixture of key/value pairs and/or a lone (or trailing) map benefitting both programmer and program.\n\nThis enhancement is available now in `org.clojure/clojure \"1.11.0-alpha1\"`.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Adjust links to lighttable and full tutorial\n\nMove the [link title] behind the link to display it correctly" }, { "from": "human", "value": "= Light Table\nDavid Nolen\n2016-08-02\n:type: tools\n:toc: macro\n:icons: font\n\n[Light Table]http://www.lighttable.com is an extensible IDE that offers\ninstant evaluation of your code, realtime feedback, and a ClojureScript\nplugin ecosystem.\n\nTo get started, check out this concise\nhttp://docs.lighttable.com/#start[introduction] or the\n[full tutorial]http://docs.lighttable.com/tutorials/full/.\n\nOnce you feel comfortable navigating Light Table's interface and using\nbasic commands, it is advisable to install the official Paredit plugin.\nYou can do so via the integrated plugin manager: open the command bar\nand look for `Plugins: Show plugin manager`.\n" }, { "from": "gpt", "value": "= Light Table\nDavid Nolen\n2016-08-02\n:type: tools\n:toc: macro\n:icons: font\n\nhttp://www.lighttable.com[Light Table] is an extensible IDE that offers\ninstant evaluation of your code, realtime feedback, and a ClojureScript\nplugin ecosystem.\n\nTo get started, check out this concise\nhttp://docs.lighttable.com/#start[introduction] or the\nhttp://docs.lighttable.com/tutorials/full/[full tutorial].\n\nOnce you feel comfortable navigating Light Table's interface and using\nbasic commands, it is advisable to install the official Paredit plugin.\nYou can do so via the integrated plugin manager: open the command bar\nand look for `Plugins: Show plugin manager`.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix list formatting in 5.3 migration docs\n\nThis commit fixes an issue with the list formatting in the 5.3 migration\ndocs.\n" }, { "from": "human", "value": "[[breaking-changes-5.3]]\n== Breaking changes in 5.3\n\n[[breaking_53_packaging_changes]]\n[float]\n=== Packaging changes\n\n[float]\n==== Logging configuration\n\nPreviously Elasticsearch exposed a single system property (`es.logs`) that\nincluded the absolute path to the configured logs directory, and the prefix of\nthe filenames used for the various logging files (the main log file, the\ndeprecation log, and the slow logs). This property has been replaced in favor of\nthree properties:\n - `es.logs.base_path`: the absolute path to the configured logs directory\n - `es.logs.cluster_name`: the default prefix of the filenames used for the\n various logging files\n - `es.logs.node_name`: exposed if `node.name` is configured for inclusion in\n the filenames of the various logging files (if you prefer)\n\nThe property `es.logs` is deprecated and will be removed in Elasticsearch 6.0.0.\n" }, { "from": "gpt", "value": "[[breaking-changes-5.3]]\n== Breaking changes in 5.3\n\n[[breaking_53_packaging_changes]]\n[float]\n=== Packaging changes\n\n[float]\n==== Logging configuration\n\nPreviously Elasticsearch exposed a single system property (`es.logs`) that\nincluded the absolute path to the configured logs directory, and the prefix of\nthe filenames used for the various logging files (the main log file, the\ndeprecation log, and the slow logs). This property has been replaced in favor of\nthree properties:\n\n * `es.logs.base_path`: the absolute path to the configured logs directory\n * `es.logs.cluster_name`: the default prefix of the filenames used for the\n various logging files\n * `es.logs.node_name`: exposed if `node.name` is configured for inclusion in\n the filenames of the various logging files (if you prefer)\n\nThe property `es.logs` is deprecated and will be removed in Elasticsearch 6.0.0.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add apidoc links to guide\n" }, { "from": "human", "value": "= {project-name}\n:author: {project-author}\n:revnumber: {project-version}\n:toclevels: 10\n\ninclude::_links.adoc[]\n\n:leveloffset: 1\ninclude::introduction.adoc[]\ninclude::configuration.adoc[]\ninclude::modules.adoc[]\n" }, { "from": "gpt", "value": "= {project-name}\n:author: {project-author}\n:revnumber: {project-version}\n:toclevels: 10\n\ninclude::_links.adoc[]\n\n:leveloffset: 1\ninclude::introduction.adoc[]\ninclude::configuration.adoc[]\ninclude::modules.adoc[]\n\n= Links\n\nlink:api/index.html[Javadoc, window=\"_blank\"]\n\nlink:api-src/index.html[API Sources, window=\"_blank\"]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update truncate token filter to not mention the keyword tokenizer\n\nThe advice predates the existence of the keyword field\n\nCloses #22650\n" }, { "from": "human", "value": "[[analysis-truncate-tokenfilter]]\n=== Truncate Token Filter\n\nThe `truncate` token filter can be used to truncate tokens into a\nspecific length. This can come in handy with keyword (single token)\nbased mapped fields that are used for sorting in order to reduce memory\nusage.\n\nIt accepts a `length` parameter which control the number of characters\nto truncate to, defaults to `10`.\n" }, { "from": "gpt", "value": "[[analysis-truncate-tokenfilter]]\n=== Truncate Token Filter\n\nThe `truncate` token filter can be used to truncate tokens into a\nspecific length.\n\nIt accepts a `length` parameter which control the number of characters\nto truncate to, defaults to `10`.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix source code block formatting\n" }, { "from": "human", "value": "James Glasbrenner's dotfiles\n============================\nJames Glasbrenner <[email protected]>\nJuly 27, 2017\n\nMy dotfiles repo.\n\nBootstrapping\n-------------\n\nStow 2.3.0 is included in this repo for bootstrapping purposes.\nTo stow stow after cloning this repository to `$HOME/.dotfiles`, run\n\n.bash\n----------------------------------------------\nPERL5LIB=$HOME/.dotfiles/stow/stow/lib/perl5 \\\n $HOME/.dotfiles/stow/stow/bin/stow \\\n -d $HOME/.dotfiles/stow \\\n -t /home/glasbren/.local \\\n stow\n----------------------------------------------\n\nLicense\n-------\n\nAll content is licensed under the terms of link:LICENSE[The Unlicense License].\n" }, { "from": "gpt", "value": "James Glasbrenner's dotfiles\n============================\nJames Glasbrenner <[email protected]>\nJuly 27, 2017\n\nMy dotfiles repo.\n\nBootstrapping\n-------------\n\nStow 2.3.0 is included in this repo for bootstrapping purposes.\nTo stow stow after cloning this repository to `$HOME/.dotfiles`, run\n\n[source,bash]\n----------------------------------------------\nPERL5LIB=$HOME/.dotfiles/stow/stow/lib/perl5 \\\n $HOME/.dotfiles/stow/stow/bin/stow \\\n -d $HOME/.dotfiles/stow \\\n -t /home/glasbren/.local \\\n stow\n----------------------------------------------\n\nLicense\n-------\n\nAll content is licensed under the terms of link:LICENSE[The Unlicense License].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove links to the periodic commit reference\n" }, { "from": "human", "value": "[[query-periodic-commit]]\nUsing Periodic Commit\n=====================\n\nNOTE: See <<cypherdoc-importing-csv-files-with-cypher>> on how to import data from CSV files.\n\nUpdating very large amounts of data (e.g. when importing) with a single Cypher query may fail due to memory constraints.\nFor these situations *only*, Cypher provides the global +USING PERIODIC COMMIT+ query hint for updating queries.\n\nPeriodic Commit tracks the number of updates performed by a query (creating a node, setting a property etc.).\nWhenever the number of updates reaches a limit, the current transaction is committed and replaced with a newly opened transaction.\n\nUsing periodic commit will prevent running out of memory when updating large amounts of data.\nHowever it will also break transactional isolation thus it should only be used where needed.\n\n:leveloffset: 2\n\ninclude::periodic-commit-without-update-limit.asciidoc[]\n\ninclude::periodic-commit-with-update-limit.asciidoc[]\n\ninclude::import-using-periodic-commit.asciidoc[]\n\n\n\n\n" }, { "from": "gpt", "value": "[[query-periodic-commit]]\nUsing Periodic Commit\n=====================\n\nNOTE: See <<cypherdoc-importing-csv-files-with-cypher>> on how to import data from CSV files.\n\nUpdating very large amounts of data (e.g. when importing) with a single Cypher query may fail due to memory constraints.\nFor these situations *only*, Cypher provides the global +USING PERIODIC COMMIT+ query hint for updating queries.\n\nPeriodic Commit tracks the number of updates performed by a query (creating a node, setting a property etc.).\nWhenever the number of updates reaches a limit, the current transaction is committed and replaced with a newly opened transaction.\n\nUsing periodic commit will prevent running out of memory when updating large amounts of data.\nHowever it will also break transactional isolation thus it should only be used where needed.\n\n:leveloffset: 2\n\ninclude::import-using-periodic-commit.asciidoc[]\n\n\n\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update latest news on site project\n" }, { "from": "human", "value": "= Griffon 2.0.0.RC1 Released\nAndres Almiray\n2014-07-29\n:jbake-type: post\n:jbake-status: published\n:category: news\n:idprefix:\n\n== Griffon 2.0.0.RC1 Released\n\nThe Griffon team is happy to announce the release of Griffon 2.0.0.RC1!\n\nThis is the first release candidate of Griffon 2.0.0. If all goes according to planf the next release\nwill be 2.0.0 final.\n\nThe following list summarizes the changes brought by this release:\n\n * Groovy support upgraded to Groovy 2.3.5.\n * Several build plugins updated to their latest versions.\n * Addition of a master pom file for application projects.\n * Enable +mavenLocal()+ on gradle-griffon plugins by default.\n * more content added to the link:../guide/2.0.0.RC1/index.html[Griffon Guide].\n\nWe look forward to your feedback. Please report any problems you find to the Griffon User list,\nor better yet, file a bug at http://jira.codehaus.org/browse/griffon\n\nRemember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary].\n\nMany thanks to all who contributed to this release!\n\nThe Griffon Team\n" }, { "from": "gpt", "value": "= Griffon 2.0.0.RC1 Released\nAndres Almiray\n2014-07-29\n:jbake-type: post\n:jbake-status: published\n:category: news\n:idprefix:\n\n== Griffon 2.0.0.RC1 Released\n\nThe Griffon team is happy to announce the release of Griffon 2.0.0.RC1!\n\nThis is the first release candidate of Griffon 2.0.0. If all goes according to plan the next release\nwill be 2.0.0 final in a few weeks time.\n\nThe following list summarizes the changes brought by this release:\n\n * Groovy support upgraded to Groovy 2.3.5.\n * Several build plugins updated to their latest versions.\n * Addition of a master pom file for application projects.\n * Enable +mavenLocal()+ on gradle-griffon plugins by default.\n * Lots of updates applied to Lazybones application templates.\n * More content added to the link:../guide/2.0.0.RC1/index.html[Griffon Guide].\n\nWe look forward to your feedback. Please report any problems you find to the Griffon User list,\nor better yet, file a bug at http://jira.codehaus.org/browse/griffon\n\nRemember you can also contact the team on Twitter: http://twitter.com/theaviary[@theaviary].\n\nMany thanks to all who contributed to this release!\n\nThe Griffon Team\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use correct command to add cluster-reader rolebinding\n\nThis fixes #982\n" }, { "from": "human", "value": "[[monitoring]]\n\n== Monitoring {ProductName} on OpenShift\n\n{ProductName} comes with addons for Prometheus and Grafana for monitoring the service. Cluster-admin\nprivileges is required for Prometheus to monitor pods in the cluster.\n\n=== Deploying Prometheus\n\n.Procedure\n\n. Create Prometheus deployment\n+\n[options=\"nowrap\"]\n----\noc create -f ./openshift/addons/prometheus.yaml -n enmasse\n----\n\n. Grant cluster-reader privileges to Prometheus service account\n+\n[options=\"nowrap\"]\n----\noc policy add-role-to-user cluster-reader system:serviceaccount:enmasse:prometheus-server\n----\n\n=== Deploying Grafana\n\n.Procedure\n\n. Create Grafana deployment\n+\n[options=\"nowrap\"]\n----\noc create -f ./openshift/addons/grafana.yaml -n enmasse\n----\n\n. Expose Grafana service\n+\n[options=\"nowrap\"]\n----\noc expose service grafana\n----\n\nGrafana accepts the username 'admin' and password 'admin' by default. See the link:https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source[Prometheus Documentation] on how to connect Grafana to Prometheus. Use `prometheus.enmasse.svc.cluster.local` as the prometheus hostname.\n" }, { "from": "gpt", "value": "[[monitoring]]\n\n== Monitoring {ProductName} on OpenShift\n\n{ProductName} comes with addons for Prometheus and Grafana for monitoring the service. Cluster-admin\nprivileges is required for Prometheus to monitor pods in the cluster.\n\n=== Deploying Prometheus\n\n.Procedure\n\n. Create Prometheus deployment\n+\n[options=\"nowrap\"]\n----\noc create -f ./openshift/addons/prometheus.yaml -n enmasse\n----\n\n. Grant cluster-reader privileges to Prometheus service account\n+\n[options=\"nowrap\"]\n----\noc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:enmasse:prometheus-server\n----\n\n=== Deploying Grafana\n\n.Procedure\n\n. Create Grafana deployment\n+\n[options=\"nowrap\"]\n----\noc create -f ./openshift/addons/grafana.yaml -n enmasse\n----\n\n. Expose Grafana service\n+\n[options=\"nowrap\"]\n----\noc expose service grafana\n----\n\nGrafana accepts the username 'admin' and password 'admin' by default. See the link:https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source[Prometheus Documentation] on how to connect Grafana to Prometheus. Use `prometheus.enmasse.svc.cluster.local` as the prometheus hostname.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Docs: Add section on how to install MIM files\n" }, { "from": "human", "value": "= M17n\n\nTODO\n" }, { "from": "gpt", "value": "= M17n\n\nhttps://www.nongnu.org/m17n/[m17n]\nrefers to project for enhanced multilingualization on Linux.\nThe output target generates `.mim` files,\nwhich are used as configuration files for\nhttps://en.wikipedia.org/wiki/Intelligent_Input_Bus[IBUS].\n\n== Project-level configuration and properties\n\nNOTE: For a list of all supported properties of the Android target files, see <<TargetX11>> in the reference section below.\n\n== Layout-level configuration and properties\n\nCurrently none.\n\n== How to install a new locale on X11\n\nAssumes Ubuntu 18.04.\n\n. Make sure Gnome settings app is not open\n\n. Copy the new `.mim` file(s) to either `~/.m17n.d/` (user local) or `/usr/share/m17n/` (global)\n\n. Make sure they have the correct permissions set,\ni.e. `644` like the other files\n\n. restart all IBus daemons (according to https://askubuntu.com/a/656243[this question]),\ne.g. using `pgrep ibus | xargs kill`\n\n. Open Gnome settings\n.. Go to \"Region & Language\"\n.. Under \"Input Sources\", press the \"+\" button\n.. In the modal window, select your newly added language variant.\nNote that it might be grouped by the main language,\ne.g., you might need to click on \"Swedish (Sweden)\" first,\nand then select the specific variant like \"Swedish (test (m17n))\".\n.. Confirm with \"Add\"\n.. You can test the keyboard layout by clicking on the \"eye\" icon\nnext to the language in the list\n\n. The language should show up global menu bar's language selection.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add error file docs to important settings\n\nThis commit adds the error file documentation to the important settings\ndocs so that the page is actually visible.\n" }, { "from": "human", "value": "[[important-settings]]\n== Important Elasticsearch configuration\n\nWhile Elasticsearch requires very little configuration, there are a number of\nsettings which need to be considered before going into production.\n\nThe following settings *must* be considered before going to production:\n\n* <<path-settings,Path settings>>\n* <<cluster.name,Cluster name>>\n* <<node.name,Node name>>\n* <<network.host,Network host>>\n* <<discovery-settings,Discovery settings>>\n* <<heap-size,Heap size>>\n* <<heap-dump-path,Heap dump path>>\n* <<gc-logging,GC logging>>\n\ninclude::important-settings/path-settings.asciidoc[]\n\ninclude::important-settings/cluster-name.asciidoc[]\n\ninclude::important-settings/node-name.asciidoc[]\n\ninclude::important-settings/network-host.asciidoc[]\n\ninclude::important-settings/discovery-settings.asciidoc[]\n\ninclude::important-settings/heap-size.asciidoc[]\n\ninclude::important-settings/heap-dump-path.asciidoc[]\n\ninclude::important-settings/gc-logging.asciidoc[]\n" }, { "from": "gpt", "value": "[[important-settings]]\n== Important Elasticsearch configuration\n\nWhile Elasticsearch requires very little configuration, there are a number of\nsettings which need to be considered before going into production.\n\nThe following settings *must* be considered before going to production:\n\n* <<path-settings,Path settings>>\n* <<cluster.name,Cluster name>>\n* <<node.name,Node name>>\n* <<network.host,Network host>>\n* <<discovery-settings,Discovery settings>>\n* <<heap-size,Heap size>>\n* <<heap-dump-path,Heap dump path>>\n* <<gc-logging,GC logging>>\n\ninclude::important-settings/path-settings.asciidoc[]\n\ninclude::important-settings/cluster-name.asciidoc[]\n\ninclude::important-settings/node-name.asciidoc[]\n\ninclude::important-settings/network-host.asciidoc[]\n\ninclude::important-settings/discovery-settings.asciidoc[]\n\ninclude::important-settings/heap-size.asciidoc[]\n\ninclude::important-settings/heap-dump-path.asciidoc[]\n\ninclude::important-settings/gc-logging.asciidoc[]\n\ninclude::important-settings/error-file.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update step for deploying an odo single application\n\nMissing important step to change to the Node.JS directory before running\nthe next command...\n\nPing @boczkowska @bergerhoffer\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * cli_reference/openshift_developer_cli/creating-a-single-component-application-with-odo \n\n[id=\"creating-a-nodejs-application-with-odo_{context}\"]\n= Creating a Node.js application with {odo-title} \n\n.Procedure \n\n. Add a component of the type Node.js to your application:\n+\n----\n$ odo create nodejs\n----\n+\nNOTE: By default, the latest image is used. You can also explicitly supply an image version by using `odo create openshift/nodejs:8`.\n\n. Push the initial source code to the component:\n+\n----\n$ odo push\n----\n+\nYour component is now deployed to {product-title}.\n\n. Create a URL and add an entry in the local configuration file as follows:\n+\n----\n$ odo url create --port 8080\n----\n+\n. Push the changes. This creates a URL on the cluster.\n+\n----\n$ odo push\n----\n+\n. List the URLs to check the desired URL for the component.\n+\n----\n$ odo url list\n----\n+\n. View your deployed application using the generated URL.\n+\n----\n$ curl <URL>\n----" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * cli_reference/openshift_developer_cli/creating-a-single-component-application-with-odo \n\n[id=\"creating-a-nodejs-application-with-odo_{context}\"]\n= Creating a Node.js application with {odo-title} \n\n.Procedure \n\n. Change the current directory to the front-end directory:\n+\n----\n$ cd <directory-name>\n---- \n\n. Add a component of the type Node.js to your application:\n+\n----\n$ odo create nodejs\n----\n+\nNOTE: By default, the latest image is used. You can also explicitly supply an image version by using `odo create openshift/nodejs:8`.\n\n. Push the initial source code to the component:\n+\n----\n$ odo push\n----\n+\nYour component is now deployed to {product-title}.\n\n. Create a URL and add an entry in the local configuration file as follows:\n+\n----\n$ odo url create --port 8080\n----\n+\n. Push the changes. This creates a URL on the cluster.\n+\n----\n$ odo push\n----\n+\n. List the URLs to check the desired URL for the component.\n+\n----\n$ odo url list\n----\n+\n. View your deployed application using the generated URL.\n+\n----\n$ curl <URL>\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add more goals for metrics\n" }, { "from": "human", "value": "= flexy-pool\nAuthor <[email protected]>\nv1.0.0, 2014-02-25\n\n:toc:\n:imagesdir: images\n:homepage: http://vladmihalcea.com/\n\n== Introduction\n\nThe flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.\nThis is very handy since most connection pools offer a limited set of dynamic configuration strategies.\n\n== Features (Goals)\n\n* extensive connection pool support(Bitronix TM, C3PO, DBCP)\n* statistics support\n** connection acquiring time histogram\n** retries attempts histogram\n** minimum CP size histogram\n** maximum CP size histogram\n** average CP size histogram" }, { "from": "gpt", "value": "= flexy-pool\nAuthor <[email protected]>\nv1.0.0, 2014-02-25\n\n:toc:\n:imagesdir: images\n:homepage: http://vladmihalcea.com/\n\n== Introduction\n\nThe flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.\nThis is very handy since most connection pools offer a limited set of dynamic configuration strategies.\n\n== Features (Goals)\n\n* extensive connection pool support(Bitronix TM, C3PO, DBCP)\n* statistics support\n** source connection acquiring time histogram\n** total connection acquiring time histogram\n** retries attempts histogram\n** minimum CP size histogram\n** maximum CP size histogram\n** average connection request count histogram" } ]
asciidoc
chosen
[ { "from": "system", "value": "[DOCS] Mark SQL feature as experimental\n" }, { "from": "human", "value": "[role=\"xpack\"]\n[[xpack-sql]]\n= SQL Access\n\n:sql-tests: {xes-repo-dir}/../../qa/sql\n:sql-specs: {sql-tests}/src/main/resources\n:jdbc-tests: {sql-tests}/src/main/java/org/elasticsearch/xpack/qa/sql/jdbc\n:security-tests: {sql-tests}/security/src/test/java/org/elasticsearch/xpack/qa/sql/security\n\n[partintro]\n--\nX-Pack includes a SQL feature to execute SQL against Elasticsearch\nindices and return tabular results. There are four main components:\n\n<<sql-rest,REST API>>::\n Accepts SQL in a JSON document, executes it, and returns the\n results.\n<<sql-translate,Translate API>>::\n Accepts SQL in a JSON document and translates it into a native\n Elasticsearch query and returns that.\n<<sql-cli,CLI>>::\n Command line application that connects to Elasticsearch to execute\n SQL and print tabular results.\n<<sql-jdbc,JDBC>>::\n A JDBC driver for Elasticsearch.\n--\n\ninclude::getting-started.asciidoc[]\ninclude::endpoints/index.asciidoc[]\ninclude::functions/index.asciidoc[]\ninclude::language/index.asciidoc[]\n\n:jdbc-tests!:\n" }, { "from": "gpt", "value": "[role=\"xpack\"]\n[[xpack-sql]]\n= SQL Access\n\n:sql-tests: {xes-repo-dir}/../../qa/sql\n:sql-specs: {sql-tests}/src/main/resources\n:jdbc-tests: {sql-tests}/src/main/java/org/elasticsearch/xpack/qa/sql/jdbc\n:security-tests: {sql-tests}/security/src/test/java/org/elasticsearch/xpack/qa/sql/security\n\n[partintro]\n--\n\nexperimental[]\n\nX-Pack includes a SQL feature to execute SQL against Elasticsearch\nindices and return tabular results. There are four main components:\n\n<<sql-rest,REST API>>::\n Accepts SQL in a JSON document, executes it, and returns the\n results.\n<<sql-translate,Translate API>>::\n Accepts SQL in a JSON document and translates it into a native\n Elasticsearch query and returns that.\n<<sql-cli,CLI>>::\n Command line application that connects to Elasticsearch to execute\n SQL and print tabular results.\n<<sql-jdbc,JDBC>>::\n A JDBC driver for Elasticsearch.\n--\n\ninclude::getting-started.asciidoc[]\ninclude::endpoints/index.asciidoc[]\ninclude::functions/index.asciidoc[]\ninclude::language/index.asciidoc[]\n\n:jdbc-tests!:\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add a mini-faq and see-also URIs" }, { "from": "human", "value": "= Cloud Enablement CCT module\n\nThis repository serves as a temporary public link:https://github.com/concrt/concreate[Concreate] module for building JBoss Openshift container images. This repository is an enhanced copy of shell scripts used to build JBoss images for OpenShift.\n" }, { "from": "gpt", "value": "= Cloud Enablement CCT module\n\nThis repository serves as a temporary public link:https://github.com/concrt/concreate[Concreate] module for building JBoss Openshift container images. This repository is an enhanced copy of shell scripts used to build JBoss images for OpenShift.\n\n== FAQ\n\n=== Why the name?\n\n*cct* (container configuration tool) was the name of a precursor tool to *concreate*. This module bundle was originally created for use with *cct* but has outlived it.\n\n=== Temporary?\n\nWe are in the process of refactoring the modules. The end-goal is likely to be splitting this repository up into separate modules aligned by product.\n\n== See also\n\n * https://github.com/jboss-container-images[Red Hat Middleware Container Images]\n * http://registry.access.redhat.com/[Red Hat Container Catalog] \n * https://www.openshift.com/[Red Hat OpenShift]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update public Jenkins job URL" }, { "from": "human", "value": "= Continuous integration\n:awestruct-layout: normalBase\n:showtitle:\n\nWe use Jenkins for continuous integration.\n\n*Show https://hudson.jboss.org/hudson/job/drools/[the Jenkins job].*\n\nKeep the build blue!\n" }, { "from": "gpt", "value": "= Continuous integration\n:awestruct-layout: normalBase\n:showtitle:\n\nWe use Jenkins for continuous integration.\n\n*Show https://jenkins-kieci.rhcloud.com/[the Jenkins jobs].* These are mirrors of a Red Hat internal Jenkins jobs.\n\nKeep the builds green!\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Include Hello World Youtube video" }, { "from": "human", "value": "= Minimal-J\n\nJava - but small.\n\nimage::doc/frontends.png[]\n\nMinimal-J applications are\n\n* Responsive to use on every device\n* Straight forward to specify and implement and therefore\n* Easy to plan and manage\n\n=== Idea\n\nBusiness applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.\n\nMinimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.\n\n== Technical Features\n\n* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.\n* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.\n* Small: The minimalj.jar is still < 1MB\n* Very few dependencies\n* Applications can run standalone (like SpringBoot)\n\n== Documentation\n\n* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.\n* link:doc/topics.adoc[Tutorial and examples] Informations for developers.\n* link:doc/release_notes.adoc[Release Notes]\n\n\n=== Contact\n\n* Bruno Eberhard, mailto:[email protected][[email protected]] " }, { "from": "gpt", "value": "= Minimal-J\n\nJava - but small.\n\nimage::doc/frontends.png[]\n\nMinimal-J applications are\n\n* Responsive to use on every device\n* Straight forward to specify and implement and therefore\n* Easy to plan and manage\n\n=== Idea\n\nBusiness applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.\n\nMinimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.\n\n== Technical Features\n\n* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.\n* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.\n* Small: The minimalj.jar is still < 1MB\n* Very few dependencies\n* Applications can run standalone (like SpringBoot)\n\n== Documentation\n\n* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.\n* link:doc/topics.adoc[Tutorial and examples] Informations for developers.\n* link:doc/release_notes.adoc[Release Notes]\n\n== Hello World\n\nHow to implement Hello World in Minimal-J:\n\n[![Minimal-J - Hello Wolrd](http://img.youtube.com/vi/0VHz7gv6TpA/0.jpg)](http://www.youtube.com/watch?v=0VHz7gv6TpA)\n\n=== Contact\n\n* Bruno Eberhard, mailto:[email protected][[email protected]] " } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove CSI spec ref in note\n" }, { "from": "human", "value": "[id=\"persistent-storage-using-csi\"]\n= Configuring CSI volumes\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-csi\n\ntoc::[]\n\nThe Container Storage Interface (CSI) allows {product-title} to consume\nstorage from storage backends that implement the\nlink:https://github.com/container-storage-interface/spec[CSI interface]\nas persistent storage.\n\n[IMPORTANT]\n====\n{product-title} does not ship with any CSI drivers. It is recommended\nto use the CSI drivers provided by\nlink:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors].\n\nInstallation instructions differ by driver, and are found in each driver's\ndocumentation. Follow the instructions provided by the CSI driver.\n\n{product-title} {product-version} supports version 1.1.0 of the\nlink:https://github.com/container-storage-interface/spec[CSI specification].\n====\n\ninclude::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1]\n" }, { "from": "gpt", "value": "[id=\"persistent-storage-using-csi\"]\n= Configuring CSI volumes\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-csi\n\ntoc::[]\n\nThe Container Storage Interface (CSI) allows {product-title} to consume\nstorage from storage back ends that implement the\nlink:https://github.com/container-storage-interface/spec[CSI interface]\nas persistent storage.\n\n[IMPORTANT]\n====\n{product-title} does not ship with any CSI drivers. It is recommended\nto use the CSI drivers provided by\nlink:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors].\n\nInstallation instructions differ by driver, and are found in each driver's\ndocumentation. Follow the instructions provided by the CSI driver.\n====\n\ninclude::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2]\n\ninclude::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add how to build documentation\n" }, { "from": "human", "value": "[[development]]\n== Development\n\nGithub repository: {datasource-proxy}" }, { "from": "gpt", "value": "[[development]]\n== Development\n\nGithub repository: {datasource-proxy}\n\n=== Build Documentation\n\n```sh\n> ./mvnw asciidoctor:process-asciidoc@output-html\n```\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix order of includes for Release Notes\n" }, { "from": "human", "value": "[[release-notes]]\n= JUnit 5 Release Notes\nStefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein\n//\n:docinfodir: ../docinfos\n:docinfo: private-head\n:numbered!:\n//\n\nThis document contains the _change log_ for all JUnit 5 releases since 5.0 GA.\n\nPlease refer to the <<../user-guide/index.adoc#user-guide,User Guide>> for\ncomprehensive reference documentation for programmers writing tests,\nextension authors, and engine authors as well as build tool and IDE\nvendors.\n\ninclude::../link-attributes.adoc[]\n\ninclude::release-notes-5.1.0-M2.adoc[]\n\ninclude::release-notes-5.1.0-RC1.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.3.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n" }, { "from": "gpt", "value": "[[release-notes]]\n= JUnit 5 Release Notes\nStefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein\n//\n:docinfodir: ../docinfos\n:docinfo: private-head\n:numbered!:\n//\n\nThis document contains the _change log_ for all JUnit 5 releases since 5.0 GA.\n\nPlease refer to the <<../user-guide/index.adoc#user-guide,User Guide>> for\ncomprehensive reference documentation for programmers writing tests,\nextension authors, and engine authors as well as build tool and IDE\nvendors.\n\ninclude::../link-attributes.adoc[]\n\ninclude::release-notes-5.1.0-RC1.adoc[]\n\ninclude::release-notes-5.1.0-M2.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.3.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add documentation about running tests\n\nAs discussed in #8.\n" }, { "from": "human", "value": "== Setting up a development environment on Ubuntu 16.04 (Xenial)\n\nInstall development dependencies:\n\n $ sudo apt-get install python-pip python-pyscard libykpers-1-1 libu2f-host0 \n\nSetup the repository:\n\n $ git clone --recursive https://github.com/Yubico/yubikey-manager.git\n $ cd yubikey-manager\n\nInstall in editable mode with pip (from root of repository):\n\n $ sudo pip install -e .\n\nRun the app:\n\n $ ykman --help\n\nTo update once installed, just make sure the repo is up to date:\n\n $ git pull\n $ git submodule update\n\nTo uninstall, run:\n\n $ sudo pip uninstall yubikey-manager\n" }, { "from": "gpt", "value": "== Setting up a development environment on Ubuntu 16.04 (Xenial)\n\nInstall development dependencies:\n\n $ sudo apt-get install python-pip python-pyscard libykpers-1-1 libu2f-host0 \n\nSetup the repository:\n\n $ git clone --recursive https://github.com/Yubico/yubikey-manager.git\n $ cd yubikey-manager\n\nInstall in editable mode with pip (from root of repository):\n\n $ sudo pip install -e .\n\nRun the app:\n\n $ ykman --help\n\nTo update once installed, just make sure the repo is up to date:\n\n $ git pull\n $ git submodule update\n\nTo uninstall, run:\n\n $ sudo pip uninstall yubikey-manager\n\n=== Unit tests\n\nTo run unit tests:\n\n $ python setup.py test\n\n=== Integration tests\n\nWARNING: ONLY run these on a dedicated developer key, as it will permanently delete data on the device!\n\nTo run integration tests:\n\n $ INTEGRATION_TESTS=TRUE python setup.py test\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Introduce chapter trx_if.adoc and add it to OsmoTRX and OsmoBTS\n\nThis chapter defines the protocol used between osmo-trx and\nosmo-bts-trx.\n\nMost of the text comes originally from osmo-trx.git/README, as it's the\nonly known documentation of the protocol other than the code itself.\n\nChange-Id: I56c418eef0f826ae1aadbed5b151fbed241c7885\n" }, { "from": "human", "value": ":gfdl-enabled:\n\nOsmoTRX User Manual\n====================\nPau Espin Pedrol <[email protected]>\n\n\ninclude::../common/chapters/preface.adoc[]\n\ninclude::chapters/overview.adoc[]\n\ninclude::chapters/running.adoc[]\n\ninclude::../common/chapters/control_if.adoc[]\n\ninclude::chapters/control.adoc[]\n\ninclude::../common/chapters/vty.adoc[]\n\ninclude::../common/chapters/logging.adoc[]\n\ninclude::chapters/counters.adoc[]\n\ninclude::chapters/configuration.adoc[]\n\ninclude::../common/chapters/port_numbers.adoc[]\n\ninclude::../common/chapters/bibliography.adoc[]\n\ninclude::../common/chapters/glossary.adoc[]\n\ninclude::../common/chapters/gfdl.adoc[]\n" }, { "from": "gpt", "value": ":gfdl-enabled:\n\nOsmoTRX User Manual\n====================\nPau Espin Pedrol <[email protected]>\n\n\ninclude::../common/chapters/preface.adoc[]\n\ninclude::chapters/overview.adoc[]\n\ninclude::chapters/running.adoc[]\n\ninclude::../common/chapters/control_if.adoc[]\n\ninclude::chapters/control.adoc[]\n\ninclude::../common/chapters/vty.adoc[]\n\ninclude::../common/chapters/logging.adoc[]\n\ninclude::chapters/counters.adoc[]\n\ninclude::chapters/configuration.adoc[]\n\ninclude::../common/chapters/trx_if.adoc[]\n\ninclude::../common/chapters/port_numbers.adoc[]\n\ninclude::../common/chapters/bibliography.adoc[]\n\ninclude::../common/chapters/glossary.adoc[]\n\ninclude::../common/chapters/gfdl.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add Pes Plugin to the plugin page\n\nI do not know normally whether this section is the right place or not, please let me know.\n\nHere is the pes plugin's GitHub site :https://github.com/kodcu/pes\n\nCloses #10398\n" }, { "from": "human", "value": "[[misc]]\n== Misc\n\n\n* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:\n Elasticsearch puppet module.\n\n* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:\n Chef cookbook for Elasticsearch\n\n* https://github.com/medcl/salt-elasticsearch[SaltStack]:\n SaltStack Module for Elasticsearch\n\n* http://www.github.com/neogenix/daikon[daikon]:\n Daikon Elasticsearch CLI\n\n* https://github.com/Aconex/scrutineer[Scrutineer]:\n A high performance consistency checker to compare what you've indexed\n with your source of truth content (e.g. DB)\n\n* https://www.wireshark.org/[Wireshark]:\n Protocol dissection for Zen discovery, HTTP and the binary protocol\n\n* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:\n High performance access control for Elasticsearch native REST API.\n\n" }, { "from": "gpt", "value": "[[misc]]\n== Misc\n\n\n* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:\n Elasticsearch puppet module.\n\n* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:\n Chef cookbook for Elasticsearch\n\n* https://github.com/medcl/salt-elasticsearch[SaltStack]:\n SaltStack Module for Elasticsearch\n\n* http://www.github.com/neogenix/daikon[daikon]:\n Daikon Elasticsearch CLI\n\n* https://github.com/Aconex/scrutineer[Scrutineer]:\n A high performance consistency checker to compare what you've indexed\n with your source of truth content (e.g. DB)\n\n* https://www.wireshark.org/[Wireshark]:\n Protocol dissection for Zen discovery, HTTP and the binary protocol\n\n* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:\n High performance access control for Elasticsearch native REST API.\n\n* https://github.com/kodcu/pes[Pes]:\n A pluggable elastic query DSL builder for Elasticsearch\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add wp_cli update in progress scenario to logbook\n" }, { "from": "human", "value": "= Configuration notes: bryn.justinwflory.com\n\nConfiguration or changelog notes for Bryn.\n\n\n== Firewall\n\n* *Services*:\n** SSH (22)\n** HTTP (80)\n** DNS (52)\n** HTTPS (443)\n** MySQL\n* *Ports*:\n** `8086/tcp`: InfluxDB\n** `60597/udp`: Mosh\n\n\n== XenForo\n\n* `sitemaps/` and `temp/` must have 0777 permissions…\n** `temp/` is for the image proxy\n" }, { "from": "gpt", "value": "= Configuration notes: bryn.justinwflory.com\n\nConfiguration or changelog notes for Bryn.\n\n\n== Firewall\n\n* *Services*:\n** SSH (22)\n** HTTP (80)\n** DNS (52)\n** HTTPS (443)\n** MySQL\n* *Ports*:\n** `8086/tcp`: InfluxDB\n** `60597/udp`: Mosh\n\n\n== WordPress\n\n=== wp_cli cannot upgrade WordPress because another update is in progress\n\n`wp option get core_updater.lock`::\nCheck if WordPress is holding a version lock.\nIf a number is returned, the version lock is on.\n\n`wp option delete core_updater.lock`::\nBreak a version lock on WordPress.\nVerify an update is NOT in progress before deleting (check processes, visit admin panel).\n\nhttps://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference]\n\n\n== XenForo\n\n* `sitemaps/` and `temp/` must have 0777 permissions…\n** `temp/` is for the image proxy\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add how to run the browser via Gradle." }, { "from": "human", "value": "= beeLΔ«n Java / JVM\n\nThis is the start of a DNSChain client for the JVM.\n\n== Install Prerequisites\n\nMake sure you have JDK 7 or later installed.\n\nWARNING: **beeLin-java** is using HTTPS to query the DNSChain server at https://api.dnschain.net[api.dnschain.net] which uses a https://cert.startcom.org[StartCom] SSL certificate. By default, Java installations do not trust the StartCom certificate authority and your HTTPS connection to `api.dnschain.net` may fail. For more detail and instructions on how to install the StartCom certificate in your JDK/JRE see http://wernerstrydom.com/2014/01/14/installing-startcom-certificate-authority-certificates-java/[Installing StartCom Certificate Authority Certificates in Java].\n\n== Build and run tests\n\n ./gradlew build\n\nYou can view the test results in `beelinj-lib/build/reports/tests/index.html`\n\nOf course you probably want to install https://www.jetbrains.com/idea/download/[IntelliJ IDEA] to do this.\n\n" }, { "from": "gpt", "value": "= beeLΔ«n Java / JVM\n\nThis is the start of a DNSChain client for the JVM.\n\n== Install Prerequisites\n\nMake sure you have JDK 7 or later installed.\n\nWARNING: **beeLin-java** is using HTTPS to query the DNSChain server at https://api.dnschain.net[api.dnschain.net] which uses a https://cert.startcom.org[StartCom] SSL certificate. By default, Java installations do not trust the StartCom certificate authority and your HTTPS connection to `api.dnschain.net` may fail. For more detail and instructions on how to install the StartCom certificate in your JDK/JRE see http://wernerstrydom.com/2014/01/14/installing-startcom-certificate-authority-certificates-java/[Installing StartCom Certificate Authority Certificates in Java].\n\n== Build and run tests\n\n ./gradlew build\n\nYou can view the test results in `beelinj-lib/build/reports/tests/index.html`\n\nOf course you probably want to install https://www.jetbrains.com/idea/download/[IntelliJ IDEA] to do this.\n\n== Build and run the bare-bones browser\n\n ./gradlew :beelinj-browser-fx:run\n\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix typo. teh -> the" }, { "from": "human", "value": "= Installers\n\n:toc:\n:toc-placement!:\n\ntoc::[]\n\n[[intro]]\nIntroduction\n------------\n\nCMake has the ability to create installers for multiple platforms using a program\ncalled link:https://cmake.org/Wiki/CMake:CPackPackageGenerators[CPack].\nCPack includes the ability to create Linux RPM, deb and gzip distributions\n of both binaries and source code. It also includes teh ability to create NSIS files for Microsoft Windows.\n" }, { "from": "gpt", "value": "= Installers\n\n:toc:\n:toc-placement!:\n\ntoc::[]\n\n[[intro]]\nIntroduction\n------------\n\nCMake has the ability to create installers for multiple platforms using a program\ncalled link:https://cmake.org/Wiki/CMake:CPackPackageGenerators[CPack].\nCPack includes the ability to create Linux RPM, deb and gzip distributions\n of both binaries and source code. It also includes the ability to create NSIS files for Microsoft Windows.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update readme for mavn deploy\n" }, { "from": "human", "value": "// Do not edit this file (e.g. go instead to src/main/asciidoc)\n\nSpring Cloud Build is a common utility project for Spring Cloud\nto use for plugin and dependency management.\n\n== Building and Deploying\n\nTo install locally:\n\n----\n\n$ mvn install\n----\n\nand to deploy snapshots to repo.spring.io:\n\n----\n$ mvn install -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-snapshot-local\n----\n\nfor a RELEASE build use\n\n----\n$ mvn install -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-release-local\n----\n\nand for Maven Central use\n\n----\n$ mvn install -P central -DaltReleaseDeploymentRepository=sonatype-nexus-staging::default::https://oss.sonatype.org/service/local/staging/deploy/maven2\n----\n\n(the \"central\" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing, and the repository has to be specified separately for this project because it is a parent of the starter parent which users inturn have as their own parent).\n" }, { "from": "gpt", "value": "// Do not edit this file (e.g. go instead to src/main/asciidoc)\n\nSpring Cloud Build is a common utility project for Spring Cloud\nto use for plugin and dependency management.\n\n== Building and Deploying\n\nTo install locally:\n\n----\n\n$ mvn install\n----\n\nand to deploy snapshots to repo.spring.io:\n\n----\n$ mvn deploy -DaltSnapshotDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-snapshot-local\n----\n\nfor a RELEASE build use\n\n----\n$ mvn deploy -DaltReleaseDeploymentRepository=repo.spring.io::default::https://repo.spring.io/libs-release-local\n----\n\nand for Maven Central use\n\n----\n$ mvn deploy -P central -DaltReleaseDeploymentRepository=sonatype-nexus-staging::default::https://oss.sonatype.org/service/local/staging/deploy/maven2\n----\n\n(the \"central\" profile is available for all projects in Spring Cloud and it sets up the gpg jar signing, and the repository has to be specified separately for this project because it is a parent of the starter parent which users in turn have as their own parent).\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add projects subdomain in project page URL\n\nWhile the projects subdomain is not necessary due to a redirect\nit does provide a faster experience for users.\n\nThis commit adds the projects subdomain to the project URL to\navoid unnecessary redirects.\n\nFixes gh-220\n" }, { "from": "human", "value": "= Spring Session\nRob Winch\n\nSpring Session aims to provide a common infrastructure for managing sessions. This provides many benefits including:\n\n* Accessing a session from any environment (i.e. web, messaging infrastructure, etc)\n* In a web environment\n** Support for clustering in a vendor neutral way\n** Pluggable strategy for determining the session id\n** Easily keep the HttpSession alive when a WebSocket is active\n\n= Spring Session Project Site\n\nYou can find the documentation, issue management, support, samples, and guides for using Spring Session at http://spring.io/spring-session/\n\n= License\n\nSpring Session is Open Source software released under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license].\n" }, { "from": "gpt", "value": "= Spring Session\nRob Winch\n\nSpring Session aims to provide a common infrastructure for managing sessions. This provides many benefits including:\n\n* Accessing a session from any environment (i.e. web, messaging infrastructure, etc)\n* In a web environment\n** Support for clustering in a vendor neutral way\n** Pluggable strategy for determining the session id\n** Easily keep the HttpSession alive when a WebSocket is active\n\n= Spring Session Project Site\n\nYou can find the documentation, issue management, support, samples, and guides for using Spring Session at http://projects.spring.io/spring-session/\n\n= License\n\nSpring Session is Open Source software released under the http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 license].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update release notes for Testable annotation.\n\n#2168\n" }, { "from": "human", "value": "[[release-notes-5.7.0-M1]]\n== 5.7.0-M1\n\n*Date of Release:* ❓\n\n*Scope:* ❓\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult the\nlink:{junit5-repo}+/milestone/44?closed=1+[5.7 M1] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.7.0-M1-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* The number of containers and tests excluded by post discovery filters based on their tags\n is now logged, along with the exclusion reasons.\n\n\n[[release-notes-5.7.0-M1-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* The Javadoc for the `provideTestTemplateInvocationContexts()` method in\n `TestTemplateInvocationContextProvider` has been aligned with the actual implementation.\n Providers are now officially allowed to return an empty stream, and the error message\n when all provided streams are empty is now more helpful.\n\n\n[[release-notes-5.7.0-M1-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n" }, { "from": "gpt", "value": "[[release-notes-5.7.0-M1]]\n== 5.7.0-M1\n\n*Date of Release:* ❓\n\n*Scope:* ❓\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult the\nlink:{junit5-repo}+/milestone/44?closed=1+[5.7 M1] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.7.0-M1-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* The number of containers and tests excluded by post discovery filters based on their tags\n is now logged, along with the exclusion reasons.\n\n* The annotation `@Testable` may now be applied _directly_ to fields.\n\n[[release-notes-5.7.0-M1-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* The Javadoc for the `provideTestTemplateInvocationContexts()` method in\n `TestTemplateInvocationContextProvider` has been aligned with the actual implementation.\n Providers are now officially allowed to return an empty stream, and the error message\n when all provided streams are empty is now more helpful.\n\n\n[[release-notes-5.7.0-M1-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* ❓\n\n==== Deprecations and Breaking Changes\n\n* ❓\n\n==== New Features and Improvements\n\n* ❓\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Docs: Add missing link to the 2.0 migration guide.\n" }, { "from": "human", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between major versions -- e.g. `1.x` to `2.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between minor versions -- e.g. `1.x` to `1.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\nSee <<setup-upgrade>> for more info.\n--\n\ninclude::migrate_1_x.asciidoc[]\n\ninclude::migrate_1_0.asciidoc[]\n\n\n\n" }, { "from": "gpt", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between major versions -- e.g. `1.x` to `2.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between minor versions -- e.g. `1.x` to `1.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\nSee <<setup-upgrade>> for more info.\n--\n\ninclude::migrate_2_0.asciidoc[]\n\ninclude::migrate_1_x.asciidoc[]\n\ninclude::migrate_1_0.asciidoc[]\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add a current thought dump on the architecture\n" }, { "from": "human", "value": "= Code Valet\n\n== Problem\n\nThe link:https://jenkins.io[Jenkins] project faces a challenge unusual to other\ncontemporary CI/CD tools in that it is downloaded and executed on a user's\nmachine(s) rather than as a service. While this offers quite a lot of\nflexibility to the end-user, it puts Jenkins developers at a disadvantage for a\nnumber of reasons:\n\n. There is no direct return line of feedback from where their code is executed.\n No error reports, etc.\n. There is a significant lag between developers releasing code and it being\n adopted/used.\n\n\n== Solution\n\nA free service which provides basic CI/CD functions with Jenkins as the core\nplatform. With a regularly updated \"Jenkins distribution\" consisting of many of\nthe key plugins which people are used, built from `master`, and rolled through\nthe Code Valet cluster:\n\n. Blue Ocean\n. GitHub Branch Source\n. Pipeline\n. ???\n" }, { "from": "gpt", "value": "= Code Valet\n\n== Problem\n\nThe link:https://jenkins.io[Jenkins] project faces a challenge unusual to other\ncontemporary CI/CD tools in that it is downloaded and executed on a user's\nmachine(s) rather than as a service. While this offers quite a lot of\nflexibility to the end-user, it puts Jenkins developers at a disadvantage for a\nnumber of reasons:\n\n. There is no direct return line of feedback from where their code is executed.\n No error reports, etc.\n. There is a significant lag between developers releasing code and it being\n adopted/used.\n\n\n== Solution\n\nA free service which provides basic CI/CD functions with Jenkins as the core\nplatform. With a regularly updated \"Jenkins distribution\" consisting of many of\nthe key plugins which people are used, built from `master`, and rolled through\nthe Code Valet cluster:\n\n. Blue Ocean\n. GitHub Branch Source\n. Pipeline\n. ???\n\n\n\n== Architecture\n\nWARNING: This is most definitely not set in stone\n\n\n=== Control Plane\n\n\nA Kubernetes cluster which will act as the control plane\n\n\n=== Management\n\nA webapp which acts as the primary controller, user interface for non-Jenkins\nspecific tasks:\n\n. Surfacing logs from the Jenkins masters\n.. Searchable logs for exceptions, etc\n. Listing statistics from the other running Jenkins masters\n.. Number of Pipelines active\n\n=== Jenkins\n\nEach GitHub user will have a per-user allocated Jenkins master which executes\nas a container in this control plane. For example: `rtyler.codevalet.io` would\ncontain all the Pipelines which can be found inside of\n`https://github.com/rtyler`\n\nTo start, new Jenkins masters will be configured manual (beta testing!) and\nusers will have access primarily through to\nlink:https://jenkins.io/projects/blueocean[Blue Ocean], but will **not** be\ngiven administrative access to the Jenkins master.\n\n\nGroovy scripting will set up some VM templates which will provide dynamically\nprovisioned agents in a cloud provider such as Azure, with a fixed quota, i.e.\n2 active machines per.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add example setting environment variable.\n" }, { "from": "human", "value": "MrTweety Analytic\n=================\n\nimage:https://travis-ci.org/kubahorak/mrtweety-analytic.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/kubahorak/mrtweety-analytic\"]\n\nAnalyses tweets in realtime to figure out the most popular hashtags.\n\nUses https://spark.apache.org[Apache Spark] and a https://github.com/spark-packages/dstream-twitter[Twitter DStream] to\nretrieve top 5 tweets in a sliding window of 5 minutes. This result is stored in a JSON file and a small JS app is\nproducing a small website of this data.\n\nEverything is included into one Docker container together with Nginx. Just run\n\n ./gradlew buildImage\n\nTo use the Spark module alone, build it like so:\n\n ./gradlew :spark:shadowJar\n\nOptionally you can change the default path of the result JSON file by setting the environment variable\n`RESULT_FILENAME`.\nFinally submit the \"fat jar\" to your Spark for execution:\n\n spark-submit --class \"cz.zee.mrtweety.analytic.spark.Application\" --master local[4] build/libs/spark-all.jar \n\n\n(c) 2016 Jakub HorΓ‘k\n" }, { "from": "gpt", "value": "MrTweety Analytic\n=================\n\nimage:https://travis-ci.org/kubahorak/mrtweety-analytic.svg?branch=master[\"Build Status\", link=\"https://travis-ci.org/kubahorak/mrtweety-analytic\"]\n\nAnalyses tweets in realtime to figure out the most popular hashtags.\n\nUses https://spark.apache.org[Apache Spark] and a https://github.com/spark-packages/dstream-twitter[Twitter DStream] to\nretrieve top 5 tweets in a sliding window of 5 minutes. This result is stored in a JSON file and a small JS app is\nproducing a small website of this data.\n\nEverything is included into one Docker container together with Nginx. Just run\n\n ./gradlew buildImage\n\nTo use the Spark module alone, build it like so:\n\n ./gradlew :spark:shadowJar\n\nOptionally you can change the default path of the result JSON file by setting the environment variable\n`RESULT_FILENAME`.\n\n export RESULT_FILENAME=/tmp/analytic.json\n\nFinally submit the \"fat jar\" to your Spark for execution:\n\n spark-submit --class \"cz.zee.mrtweety.analytic.spark.Application\" --master local[4] build/libs/spark-all.jar \n\n\n(c) 2016 Jakub HorΓ‘k\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add about updating mydns entry\n" }, { "from": "human", "value": "= conoha/\n\nAnsible playbook for my conoha instance.\n\n\n== Current setups\n\n\n|====\n|Memory |1GB\n|Disk |SSD 50GB\n|OS |Ubuntu 14.4 trusty\n|====\n\n\n== Pre configurations\n\n* Add normal user: `adduser NAME`\n** Add `.ssh/authorized_keys` file for the user\n** Add the user to `sudo` group: `gpasswd -a NAME sudo`\n\n\n== Usage\n\nIssue\n\n----\nansible-playbook ansible.yml --ask-become-pass\n----\n" }, { "from": "gpt", "value": "= conoha/\n\nAnsible playbook for my conoha instance.\n\n\n== Current setups\n\n\n|====\n|Memory |1GB\n|Disk |SSD 50GB\n|OS |Ubuntu 14.4 trusty\n|====\n\n\n== Pre configurations\n\n* Add normal user: `adduser NAME`\n** Add `.ssh/authorized_keys` file for the user\n** Add the user to `sudo` group: `gpasswd -a NAME sudo`\n\n\n== Usage\n\nIssue\n\n----\nansible-playbook ansible.yml --ask-become-pass\n----\n\nOptionally, when you want to update mydns entry:\n\n----\nansible-playbook ansible.yml --extra-vars=mydns_password=xxxx\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Make image support blurb appear for Dedicated.\n" }, { "from": "human", "value": "= Overview\n{product-author}\n{product-version}\n:data-uri:\n:icons:\n:experimental:\n\nUse these topics to discover the different\nlink:../architecture/core_concepts/builds_and_image_streams.html#source-build[S2I\n(Source-to-Image)], database, and other Docker images that are available for\nOpenShift users.\n\nifdef::openshift-enterprise[]\nRed Hat's official container images are provided in the Red Hat Registry at\nhttps://registry.access.redhat.com[registry.access.redhat.com]. OpenShift's\nsupported S2I, database, and Jenkins images are provided in the\nhttps://access.redhat.com/search/#/container-images?q=openshift3&p=1&sort=relevant&rows=12&srch=any&documentKind=ImageRepository[*openshift3*\nrepository] in the Red Hat Registry. For example,\n`registry.access.redhat.com/openshift3/nodejs-010-rhel7` for the Node.js image.\n\nThe xPaaS middleware images are provided in their respective product\nrepositories on the Red Hat Registry, but suffixed with a *-openshift*. For\nexample, `registry.access.redhat.com/jboss-eap-6/eap64-openshift` for\nthe JBoss EAP image.\nendif::[]\n" }, { "from": "gpt", "value": "= Overview\n{product-author}\n{product-version}\n:data-uri:\n:icons:\n:experimental:\n\nUse these topics to discover the different\nlink:../architecture/core_concepts/builds_and_image_streams.html#source-build[S2I\n(Source-to-Image)], database, and other Docker images that are available for\nOpenShift users.\n\nifdef::openshift-enterprise,openshift-dedicated[]\nRed Hat's official container images are provided in the Red Hat Registry at\nhttps://registry.access.redhat.com[registry.access.redhat.com]. OpenShift's\nsupported S2I, database, and Jenkins images are provided in the\nhttps://access.redhat.com/search/#/container-images?q=openshift3&p=1&sort=relevant&rows=12&srch=any&documentKind=ImageRepository[*openshift3*\nrepository] in the Red Hat Registry. For example,\n`registry.access.redhat.com/openshift3/nodejs-010-rhel7` for the Node.js image.\n\nThe xPaaS middleware images are provided in their respective product\nrepositories on the Red Hat Registry, but suffixed with a *-openshift*. For\nexample, `registry.access.redhat.com/jboss-eap-6/eap64-openshift` for\nthe JBoss EAP image.\nendif::[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Change Travis badge regarding changed repository path.\n" }, { "from": "human", "value": "= Hawkular Android Client\n\nThis repository contains the source code for the Hawkular Android application.\n\n== License\n\n* http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0]\n\n== Building\n\nifdef::env-github[]\n[link=https://travis-ci.org/hawkular/android-client]\nimage:https://travis-ci.org/hawkular/android-client.svg[\"Build Status\", link=\"https://travis-ci.org/hawkular/android-client\"]\nendif::[]\n\nYou will need JDK 1.7+ installed.\nGradle, Android SDK and all dependencies will be downloaded automatically.\n\n----\n$ ./gradlew clean assembleDebug\n----\n" }, { "from": "gpt", "value": "= Hawkular Android Client\n\nThis repository contains the source code for the Hawkular Android application.\n\n== License\n\n* http://www.apache.org/licenses/LICENSE-2.0.html[Apache Version 2.0]\n\n== Building\n\nifdef::env-github[]\n[link=https://travis-ci.org/hawkular/hawkular-android-client]\nimage:https://travis-ci.org/hawkular/hawkular-android-client.svg[\"Build Status\", link=\"https://travis-ci.org/hawkular/hawkular-android-client\"]\nendif::[]\n\nYou will need JDK 1.7+ installed.\nGradle, Android SDK and all dependencies will be downloaded automatically.\n\n----\n$ ./gradlew clean assembleDebug\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Document that we require python-rrdtool\n" }, { "from": "human", "value": "Domotica\n========\n\nA simple webapp for a home automation system based around a Siemens S7-300 PLC,\nusing libs7comm-python. It's intended to be used from smartphones.\n\nNote that this is not a generic solution. It's an implementation for one\nspecific house.\n\nRequirements\n============\n * Django\n * python-daemon\n" }, { "from": "gpt", "value": "Domotica\n========\n\nA simple webapp for a home automation system based around a Siemens S7-300 PLC,\nusing libs7comm-python. It's intended to be used from smartphones.\n\nNote that this is not a generic solution. It's an implementation for one\nspecific house.\n\nRequirements\n============\n * Django\n * python-daemon\n * python-rrdtool\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Include 5.1.0-M2 release notes in main document\n" }, { "from": "human", "value": "[[release-notes]]\n== Release Notes\n\n:numbered!:\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n\n:numbered:\n" }, { "from": "gpt", "value": "[[release-notes]]\n== Release Notes\n\n:numbered!:\n\ninclude::release-notes-5.1.0-M2.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n\n:numbered:\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add label class hierarchy information to FAQ\n" }, { "from": "human", "value": "[appendix]\n[[faq]]\n= Frequently Asked Questions (FAQ)\n\n[qanda]\nWhat is the difference between Neo4j OGM and Spring Data Neo4j (SDN)?::\n Spring Data Neo4j (SDN) uses the OGM under the covers. It's like Spring Data JPA, where JPA/Hibernate underly it. Most\n of the power of SDN actually comes from the OGM.\n" }, { "from": "gpt", "value": "[appendix]\n[[faq]]\n= Frequently Asked Questions (FAQ)\n\n[qanda]\nWhat is the difference between Neo4j OGM and Spring Data Neo4j (SDN)?::\n Spring Data Neo4j (SDN) uses the OGM under the covers. It's like Spring Data JPA, where JPA/Hibernate underly it. Most\n of the power of SDN actually comes from the OGM.\n\nHow are labels generated when using inheritance?::\n All concrete classes generate a label, abstract classes and interfaces not.\n If any kind of class or interface gets annotated with @NodeEntity or @NodeEntity(label=\"customLabel\") it will generate a label.\n Any class annotated with @Transient will not generate a label.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add note about using mountain lion as base version" }, { "from": "human", "value": "= Building Mac OS X client =\n\nOS X licensing means we can't just simply provide a build vm via vagrant as\nwe've done for linux. Creating a vagrant-ready VM is fairly simple though.\n\n== Client Build VM ==\n\nInstall link:https://www.vagrantup.com/[Vagrant], buy VMware Fusion, and buy the\nlink:https://www.vagrantup.com/vmware[Vagrant VMWare Fusion plugin]. It's\npossible to get this to work with VirtualBox, which is free, but it's missing\nsome useful features such as shared folders.\n\nCreate a new\nlink:http://ilostmynotes.blogspot.com/2015/02/building-os-x-vagrant-vmware-fusion-vm.html[vagrant-ready\nOS X VM in VMWare Fusion]. The essentials are:\n\n * Commandline Tools for XCode\n * PackageMaker.app for building .pkg's\n * VMWare Tools\n * Vagrant ssh and sudo access\n" }, { "from": "gpt", "value": "= Building Mac OS X client =\n\nOS X licensing means we can't just simply provide a build vm via vagrant as\nwe've done for linux. Creating a vagrant-ready VM is fairly simple though.\n\n== Client Build VM ==\n\nInstall link:https://www.vagrantup.com/[Vagrant], buy VMware Fusion, and buy the\nlink:https://www.vagrantup.com/vmware[Vagrant VMWare Fusion plugin]. It's\npossible to get this to work with VirtualBox, which is free, but it's missing\nsome useful features such as shared folders.\n\nCreate a new\nlink:http://ilostmynotes.blogspot.com/2015/02/building-os-x-vagrant-vmware-fusion-vm.html[vagrant-ready\nOS X VM in VMWare Fusion]. We use OS X 10.8 (Mountain Lion) as a base so we can install on systems at least that old, the build scripts are tested and work on that version. The essentials are:\n\n * Commandline Tools for XCode\n * PackageMaker.app for building .pkg's\n * VMWare Tools\n * Vagrant ssh and sudo access\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "[DOCS] Remove ifdefs for rollup refactor\n\nRemoves some ifdef conditionals overlooked in #70885. That PR removed\ndocs for the rollup refactor.\n" }, { "from": "human", "value": "[[data-rollup-transform]]\n= Roll up or transform your data\n\n[partintro]\n--\n\n{es} offers the following methods for manipulating your data:\n\nifdef::permanently-unreleased-branch[]\n\n* <<xpack-rollup>>\n+\nA rollup aggregates an index's time series data and stores the results in new\nread-only index. For example, you can roll up hourly data into daily or weekly\nsummaries.\n\nendif::[]\nifndef::permanently-unreleased-branch[]\n\n* <<xpack-rollup,Rolling up your historical data>>\n+\ninclude::rollup/index.asciidoc[tag=rollup-intro]\n\nendif::[]\n\n* <<transforms,Transforming your data>>\n+\ninclude::transform/transforms.asciidoc[tag=transform-intro]\n\n--\n\ninclude::rollup/index.asciidoc[]\n\ninclude::transform/index.asciidoc[]\n" }, { "from": "gpt", "value": "[[data-rollup-transform]]\n= Roll up or transform your data\n\n[partintro]\n--\n\n{es} offers the following methods for manipulating your data:\n\n* <<xpack-rollup,Rolling up your historical data>>\n+\ninclude::rollup/index.asciidoc[tag=rollup-intro]\n\n* <<transforms,Transforming your data>>\n+\ninclude::transform/transforms.asciidoc[tag=transform-intro]\n\n--\n\ninclude::rollup/index.asciidoc[]\n\ninclude::transform/index.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove link to in-source docs\n" }, { "from": "human", "value": "= Vert.x-Web\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-web[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-web/\"]\nimage:https://img.shields.io/maven-central/v/io.vertx/vertx-web.svg[\"Maven Central\"]\nimage:https://bestpractices.coreinfrastructure.org/projects/540/badge[\"CII Best Practices\",link=\"https://bestpractices.coreinfrastructure.org/projects/540\"]\n\nVert.x-Web is a set of building blocks for building web applications with Vert.x. Think of it as a Swiss Army Knife for building\nmodern, scalable, web apps.\n\nPlease see the in source asciidoc link:vertx-web/src/main/asciidoc/index.adoc[documentation] or the main http://vertx.io/docs/#web[documentation] on the web-site for a full description\nof Vert.x-Web:\n\n== Template engines\n\nTemplate engine implementations are in the template engine sub-project.\n" }, { "from": "gpt", "value": "= Vert.x-Web\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-web[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-web/\"]\nimage:https://img.shields.io/maven-central/v/io.vertx/vertx-web.svg[\"Maven Central\"]\nimage:https://bestpractices.coreinfrastructure.org/projects/540/badge[\"CII Best Practices\",link=\"https://bestpractices.coreinfrastructure.org/projects/540\"]\n\nVert.x-Web is a set of building blocks for building web applications with Vert.x. Think of it as a Swiss Army Knife for building\nmodern, scalable, web apps.\n\nPlease see the main documentation on the web-site for a full description:\n\n* https://vertx.io/docs/#web[Web-site documentation]\n\n== Template engines\n\nTemplate engine implementations are in the template engine sub-project.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add notes for Mac OS X" }, { "from": "human", "value": "= java-libbitcoinconsensus\n\nimage:https://travis-ci.org/dexX7/java-libbitcoinconsensus.svg[\"Build Status\", link=\"https://travis-ci.org/dexX7/java-libbitcoinconsensus\"]\n\nA https://github.com/java-native-access/jna[JNA] binding and Java wrapper for https://github.com/bitcoin/bitcoin/blob/master/doc/shared-libraries.md#bitcoinconsensus[libbitcoinconsensus].\n\nSee the http://dexx7.github.io/java-libbitcoinconsensus/[Javadoc] for an overview of this project.\n\nWARNING: This software is EXPERIMENTAL software. USE AT YOUR OWN RISK.\n\n== Setup\n\n git clone https://github.com/dexX7/java-libbitcoinconsensus.git\n cd java-libbitcoinconsensus/\n\n== Building\n\n ./gradlew build\n\n\n== Testing\n\n ./gradlew test\n\nNOTE: `libbitcoinconsensus` must be available in the class path or as system library.\n" }, { "from": "gpt", "value": "= java-libbitcoinconsensus\n\nimage:https://travis-ci.org/dexX7/java-libbitcoinconsensus.svg[\"Build Status\", link=\"https://travis-ci.org/dexX7/java-libbitcoinconsensus\"]\n\nA https://github.com/java-native-access/jna[JNA] binding and Java wrapper for https://github.com/bitcoin/bitcoin/blob/master/doc/shared-libraries.md#bitcoinconsensus[libbitcoinconsensus].\n\nSee the http://dexx7.github.io/java-libbitcoinconsensus/[Javadoc] for an overview of this project.\n\nWARNING: This software is EXPERIMENTAL software. USE AT YOUR OWN RISK.\n\n== Setup\n\n git clone https://github.com/dexX7/java-libbitcoinconsensus.git\n cd java-libbitcoinconsensus/\n\n== Building\n\n ./gradlew build\n\n\n== Testing\n\n ./gradlew test\n\nNOTE: `libbitcoinconsensus` must be available in the class path or as system library.\n\n== Platform Notes\n\n=== Mac OS X\n\nOn Mac OS X you can put `libbitcoinconsensus.dylib` in `src/main/resources/`.\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update AsciiDoc.js to Asciidoctor in the title and the repository URL (asciidoctor org)\n" }, { "from": "human", "value": "= asciidoctor-chrome-extension, AsciiDoc.js Chrome Extension\nGuillaume Grossetie\n:sources: https://github.com/Mogztter/asciidoctor-chrome-extension\n:license: https://github.com/Mogztter/asciidoctor-chrome-extension/blob/master/LICENSE\n\nThis project uses https://github.com/asciidoctor/asciidoctor.js[Asciidoctor.js] to render AsciiDoc as HTML inside Chrome!\n\n== Copyright\n\nCopyright (C) 2013 Guillaume Grossetie.\nFree use of this software is granted under the terms of the MIT License.\n\nSee the {license}[LICENSE] file for details." }, { "from": "gpt", "value": "= asciidoctor-chrome-extension, Asciidoctor Chrome Extension\nGuillaume Grossetie\n:sources: https://github.com/asciidoctor/asciidoctor-chrome-extension\n:license: https://github.com/asciidoctor/asciidoctor-chrome-extension/blob/master/LICENSE\n\nThis project uses https://github.com/asciidoctor/asciidoctor.js[Asciidoctor.js] to render AsciiDoc as HTML inside Chrome!\n\n== Copyright\n\nCopyright (C) 2013 Guillaume Grossetie.\nFree use of this software is granted under the terms of the MIT License.\n\nSee the {license}[LICENSE] file for details." } ]
asciidoc
chosen
[ { "from": "system", "value": "Add codecentric Spring Batch Starter Info\n\nFixes gh-870\n" }, { "from": "human", "value": "= Starter POMs\n\nSpring Boot Starters are a set of convenient dependency descriptors that you can include\nin your application. You get a one-stop-shop for all the Spring and related technology\nthat you need without having to hunt through sample code and copy paste loads of\ndependency descriptors. For example, if you want to get started using Spring and\nJPA for database access just include the `spring-boot-starter-data-jpa` dependency in\nyour project, and you are good to go.\n\nFor complete details see the\nhttp://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-starter-poms[reference documentation]\n\n== Community Contributions\nIf you create a starter for a technology that is not already in the standard list we can\nlist it here. Just send a pull request for this page.\n\n|===\n| Name | Location\n\n| https://code.google.com/p/wro4j/[WRO4J]\n| https://github.com/sbuettner/spring-boot-autoconfigure-wro4j\n|===\n" }, { "from": "gpt", "value": "= Starter POMs\n\nSpring Boot Starters are a set of convenient dependency descriptors that you can include\nin your application. You get a one-stop-shop for all the Spring and related technology\nthat you need without having to hunt through sample code and copy paste loads of\ndependency descriptors. For example, if you want to get started using Spring and\nJPA for database access just include the `spring-boot-starter-data-jpa` dependency in\nyour project, and you are good to go.\n\nFor complete details see the\nhttp://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#using-boot-starter-poms[reference documentation]\n\n== Community Contributions\nIf you create a starter for a technology that is not already in the standard list we can\nlist it here. Just send a pull request for this page.\n\n|===\n| Name | Location\n\n| https://code.google.com/p/wro4j/[WRO4J]\n| https://github.com/sbuettner/spring-boot-autoconfigure-wro4j\n\n| http://projects.spring.io/spring-batch/[Spring Batch] (Advanced usage)\n| https://github.com/codecentric/spring-boot-starter-batch-web\n|===\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix warning in asciidoc rendering\n" }, { "from": "human", "value": "= Reactor Netty\nStephane Maldini;\n:appversion: 0.5.0 (wiki)\nifndef::host-github[:ext-relative: {outfilesuffix}]\n{appversion}\n:doctype: book\n:icons: font\n\ninclude::net.adoc[]\ninclude::net-tcp.adoc[]\ninclude::net-http.adoc[]\ninclude::net-e2e.adoc[]\n\n" }, { "from": "gpt", "value": "= Reactor Netty\nStephane Maldini;\n:appversion: 0.5.0 (wiki)\nifndef::host-github[:ext-relative: {outfilesuffix}]\n{appversion}\n:doctype: book\n:icons: font\n\ninclude::net.adoc[]\ninclude::net-tcp.adoc[]\ninclude::net-http.adoc[]\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update link to the Slack invite" }, { "from": "human", "value": "= Getting support for Micrometer\n\n== GitHub issues\nWe choose not to use GitHub issues for general usage questions and support, preferring to\nuse issues solely for the tracking of bugs and enhancements. If you have a general\nusage question please do not open a GitHub issue, but use one of the other channels\ndescribed below.\n\nIf you are reporting a bug, please help to speed up problem diagnosis by providing as\nmuch information as possible. Ideally, that would include a small sample project that\nreproduces the problem.\n\n== Stack Overflow\nThe Micrometer community monitors the\nhttps://stackoverflow.com/tags/micrometer[`micrometer`] tag on Stack Overflow. Before\nasking a question, please familiarize yourself with Stack Overflow's\nhttps://stackoverflow.com/help/how-to-ask[advice on how to ask a good question].\n\n== Slack\nIf you want to discuss something or have a question that isn't suited to Stack Overflow,\nthe Micrometer community chat in the\nhttp://slack.micrometer.io/[micrometer-metrics channel on Slack].\n" }, { "from": "gpt", "value": "= Getting support for Micrometer\n\n== GitHub issues\nWe choose not to use GitHub issues for general usage questions and support, preferring to\nuse issues solely for the tracking of bugs and enhancements. If you have a general\nusage question please do not open a GitHub issue, but use one of the other channels\ndescribed below.\n\nIf you are reporting a bug, please help to speed up problem diagnosis by providing as\nmuch information as possible. Ideally, that would include a small sample project that\nreproduces the problem.\n\n== Stack Overflow\nThe Micrometer community monitors the\nhttps://stackoverflow.com/tags/micrometer[`micrometer`] tag on Stack Overflow. Before\nasking a question, please familiarize yourself with Stack Overflow's\nhttps://stackoverflow.com/help/how-to-ask[advice on how to ask a good question].\n\n== Slack\nIf you want to discuss something or have a question that isn't suited to Stack Overflow,\nthe Micrometer community chat in the\nhttps://join.slack.com/t/micrometer-metrics/shared_invite/zt-ewo3kcs0-Ji3aOAqTxnjYPEFBBI5HqQ[micrometer-metrics Slack].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update What's New for 5.8\n" }, { "from": "human", "value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n" }, { "from": "gpt", "value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "[docs] Add 'failed to create directory' WordPress scenario\n" }, { "from": "human", "value": "= Logbook: horizon.justinwflory.com\n\nRunning logbook of documentation notes for Horizon.\n\n\n== Firewall\n\n_See https://github.com/jwflory/infrastructure/blob/master/roles/firewalld/tasks/main.yml[jwflory/infrastructure firewalld Ansible role]._\n\n\n== WordPress\n\n=== wp_cli cannot upgrade WordPress because another update is in progress\n\n`wp option get core_updater.lock`::\nCheck if WordPress is holding a version lock.\nIf a number is returned, the version lock is on.\n\n`wp option delete core_updater.lock`::\nBreak a version lock on WordPress.\nVerify an update is NOT in progress before deleting (check processes, visit admin panel).\n\nhttps://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference]\n\n\n== XenForo\n\n* `sitemaps/` and `temp/` must have 0777 permissions…\n** `temp/` is for the image proxy\n\n" }, { "from": "gpt", "value": "= Logbook: horizon.justinwflory.com\n\nRunning logbook of documentation notes for Horizon.\n\n\n== Firewall\n\n_See https://github.com/jwflory/infrastructure/blob/master/roles/firewalld/tasks/main.yml[jwflory/infrastructure firewalld Ansible role]._\n\n\n== WordPress\n\n=== Updating from admin dashboard does not work: \"failed to create directory\"\n\nThe web server cannot read or write to files in the web directory.\nMake sure the `php-fpm` user is a member of the `www-admin` group and that the folder is group-writable.\n\nWhy does this happen?\nNormally, your web server needs access, like `nginx` or `httpd`.\nBut in my case, `nginx` is spinning off `php-fpm` processes to run PHP web applications.\nThis user is trying to read and write from files instead of the web server.\nLook closer at https://github.com/jwflory/infrastructure[jwflory/infrastructure] for additional context.\n\n=== wp_cli cannot upgrade WordPress because another update is in progress\n\n`wp option get core_updater.lock`::\nCheck if WordPress is holding a version lock.\nIf a number is returned, the version lock is on.\n\n`wp option delete core_updater.lock`::\nBreak a version lock on WordPress.\nVerify an update is NOT in progress before deleting (check processes, visit admin panel).\n\nhttps://wordpress.stackexchange.com/questions/224989/get-rid-of-another-update-is-currently-in-progress[Reference]\n\n\n== XenForo\n\n* `sitemaps/` and `temp/` must have 0777 permissions…\n** `temp/` is for the image proxy\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Include header and footer in Release Notes\n\nIssue: #1068\n" }, { "from": "human", "value": "[[release-notes]]\n== JUnit 5 Release Notes\n\ninclude::link-attributes.adoc[]\n\n:numbered!:\n\ninclude::release-notes-5.1.0-M2.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n\n:numbered:\n" }, { "from": "gpt", "value": "[[release-notes]]\n= JUnit 5 Release Notes\nStefan Bechtold; Sam Brannen; Johannes Link; Matthias Merdes; Marc Philipp; Christian Stein\n//\n:docinfodir: docinfos\n:docinfo: private-head\n:numbered!:\n//\n\ninclude::link-attributes.adoc[]\n\ninclude::release-notes-5.1.0-M2.adoc[]\n\ninclude::release-notes-5.1.0-M1.adoc[]\n\ninclude::release-notes-5.0.2.adoc[]\n\ninclude::release-notes-5.0.1.adoc[]\n\ninclude::release-notes-5.0.0.adoc[]\n\ninclude::release-notes-5.0.0-RC3.adoc[]\n\ninclude::release-notes-5.0.0-RC2.adoc[]\n\ninclude::release-notes-5.0.0-RC1.adoc[]\n\ninclude::release-notes-5.0.0-M6.adoc[]\n\ninclude::release-notes-5.0.0-M5.adoc[]\n\ninclude::release-notes-5.0.0-M4.adoc[]\n\ninclude::release-notes-5.0.0-M3.adoc[]\n\ninclude::release-notes-5.0.0-M2.adoc[]\n\ninclude::release-notes-5.0.0-M1.adoc[]\n\ninclude::release-notes-5.0.0-ALPHA.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update link to puppet module and remove link to other RPM repo as we have our own.\n" }, { "from": "human", "value": "[[misc]]\n== Misc\n\n* https://github.com/electrical/puppet-elasticsearch[Puppet]:\n Elasticsearch puppet module.\n\n* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:\n Chef cookbook for Elasticsearch\n\n* https://github.com/tavisto/elasticsearch-rpms[elasticsearch-rpms]:\n RPMs for elasticsearch.\n\n* http://www.github.com/neogenix/daikon[daikon]:\n Daikon Elasticsearch CLI\n\n* https://github.com/Aconex/scrutineer[Scrutineer]:\n A high performance consistency checker to compare what you've indexed \n with your source of truth content (e.g. DB)\n" }, { "from": "gpt", "value": "[[misc]]\n== Misc\n\n* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:\n Elasticsearch puppet module.\n\n* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:\n Chef cookbook for Elasticsearch\n\n* http://www.github.com/neogenix/daikon[daikon]:\n Daikon Elasticsearch CLI\n\n* https://github.com/Aconex/scrutineer[Scrutineer]:\n A high performance consistency checker to compare what you've indexed \n with your source of truth content (e.g. DB)\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add Example projects and compatibility matrix\n" }, { "from": "human", "value": "= JoinFaces Documentation\nMarcelo Fernandes <[email protected]>; Lars Grefer <[email protected]>\n:toc: left\n\n\n== Using JoinFaces\n\n=== System Requirements\n\n=== Maven Integration\n\n=== Gradle Integration" }, { "from": "gpt", "value": "= JoinFaces Documentation\nMarcelo Fernandes <[email protected]>; Lars Grefer <[email protected]>\n:toc: left\n:sectnums:\n:sectanchors:\n\n== Using JoinFaces\n\n.Official example projects\n|===\n|Build tool \\ packaing |`jar` |`war`\n\n|https://maven.apache.org/[Maven]\n|https://github.com/joinfaces/joinfaces-maven-jar-example[maven-jar-example]\n|https://github.com/joinfaces/joinfaces-maven-war-example[maven-war-example]\n\n|https://gradle.org/[Gradle]\n|https://github.com/joinfaces/joinfaces-gradle-jar-example[gradle-jar-example]\n|https://github.com/joinfaces/joinfaces-gradle-war-example[gradle-war-example]\n\n|===\n\n=== System Requirements\n\n.System Requirements\n|===\n|Joinfaces |Java |Spring Boot |JSF\n\n|3.3 |1.8 to 11 |2.1 |2.0 to 2.3\n|3.0 to 3.2 |1.8 |2.0 |2.0 to 2.3\n|2.x |1.6 to 1.8 |1.x |2.0 to 2.2\n\n|===\n\nThis are the combinations we have tested and expect to work, but depending on the features you are using, other combinations might work, too.\n\n\n=== Maven Integration\n\n=== Gradle Integration\n\n== Features" } ]
asciidoc
chosen
[ { "from": "system", "value": "Make windows caveats a note\n" }, { "from": "human", "value": "= Instructions\n\nFollow these steps to setup Docker on your machine.\n\n* Go to https://www.docker.com/products/overview#/install_the_platform\n* Download the distributable for your system type\n* Follow the installation instructions\n\nAfter installation continue with Exercise 1 to verify that your setup is running correctly.\n\n= Troubleshooting during installation\n*Windows Home Edition*\n\nIf you run Windows Home Edition and/or get a message about Hyper-V being disabled, you have to install https://www.docker.com/products/docker-toolbox[Docker Toolbox] instead because your version of Windows does not support the \"Docker for Windows\"-product.\n\n*Hardware Virtualization disabled*\n\nIf you encounter the message \"VT-x/AMD-v has to be enabled\", please enable \"Intel Virtualization\" in the BIOS of your system. This is required for running virtual machines. Detailed instructions can be found http://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/[here] under \"Access the BIOS or UEFI Firmware\".\n\n" }, { "from": "gpt", "value": "= Instructions\n\nFollow these steps to setup Docker on your machine.\n\n* Go to https://www.docker.com/products/overview#/install_the_platform\n* Download the distributable for your system type\n* Follow the installation instructions\n\nAfter installation continue with Exercise 1 to verify that your setup is running correctly.\n\n[NOTE]\n*Windows Home Edition* +\nIf you run Windows Home Edition and/or get a message about Hyper-V being disabled, you have to install https://www.docker.com/products/docker-toolbox[Docker Toolbox] instead because your version of Windows does not support the \"Docker for Windows\"-product. +\n*Hardware Virtualization disabled* +\nIf you encounter the message \"VT-x/AMD-v has to be enabled\", please enable \"Intel Virtualization\" in the BIOS of your system. This is required for running virtual machines. Detailed instructions can be found http://www.howtogeek.com/213795/how-to-enable-intel-vt-x-in-your-computers-bios-or-uefi-firmware/[here] under \"Access the BIOS or UEFI Firmware\".\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix missing yum install when installing the repo" }, { "from": "human", "value": "\n==== Step 2: Install the repository and Minion package\n\nConnect with _SSH_ to your remote _RHEL_ system where you need a _Minion_ to be installed.\n\n.Install the Yum repository\n[source, shell]\n----\nyum -y https://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm\nrpm --import https://yum.opennms.org/OPENNMS-GPG-KEY\n----\n\n.Install the Minion package\n[source, bash]\n----\nyum -y install opennms-minion\n----\n\nThe following packages will be automatically installed:\n\n* _opennms-minion_: The Minion meta package\n* _opennms-minion-container_: The _Karaf_ OSGi container with _Minion_ branding and additional management extensions\n* _opennms-minion-features-core_: Core utilities and services required by the _Minion_ features\n* _opennms-minion-features-default_: Service-specific features\n\nWith the successful installed packages the _Minion_ is installed in the following directory structure:\n\n[source, shell]\n----\n[root@localhost /opt/minion]# $ tree -L 1\n.\nβ”œβ”€β”€ bin\nβ”œβ”€β”€ deploy\nβ”œβ”€β”€ etc\nβ”œβ”€β”€ lib\nβ”œβ”€β”€ repositories\n└── system\n----\n\nThe Minion's startup configuration can be changed by editing the `/etc/sysconfig/minion` file.\nIt allows to override the defaults used at startup including:\n\n* Location of the JDK\n* Memory usage\n* User to run as" }, { "from": "gpt", "value": "\n==== Step 2: Install the repository and Minion package\n\nConnect with _SSH_ to your remote _RHEL_ system where you need a _Minion_ to be installed.\n\n.Install the Yum repository\n[source, shell]\n----\nyum -y install https://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm\nrpm --import https://yum.opennms.org/OPENNMS-GPG-KEY\n----\n\n.Install the Minion package\n[source, bash]\n----\nyum -y install opennms-minion\n----\n\nThe following packages will be automatically installed:\n\n* _opennms-minion_: The Minion meta package\n* _opennms-minion-container_: The _Karaf_ OSGi container with _Minion_ branding and additional management extensions\n* _opennms-minion-features-core_: Core utilities and services required by the _Minion_ features\n* _opennms-minion-features-default_: Service-specific features\n\nWith the successful installed packages the _Minion_ is installed in the following directory structure:\n\n[source, shell]\n----\n[root@localhost /opt/minion]# $ tree -L 1\n.\nβ”œβ”€β”€ bin\nβ”œβ”€β”€ deploy\nβ”œβ”€β”€ etc\nβ”œβ”€β”€ lib\nβ”œβ”€β”€ repositories\n└── system\n----\n\nThe Minion's startup configuration can be changed by editing the `/etc/sysconfig/minion` file.\nIt allows to override the defaults used at startup including:\n\n* Location of the JDK\n* Memory usage\n* User to run as\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use Beats Platform Reference as title\n" }, { "from": "human", "value": "= Beats Platform\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./https.asciidoc[]\n\ninclude::./newbeat.asciidoc[]\n" }, { "from": "gpt", "value": "= Beats Platform Reference\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./configuration.asciidoc[]\n\ninclude::./https.asciidoc[]\n\ninclude::./newbeat.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Document support for Java 18\n\nCloses gh-30548\n" }, { "from": "human", "value": "[[getting-started.system-requirements]]\n== System Requirements\nSpring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 17.\n{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.\n\nExplicit build support is provided for the following build tools:\n\n|===\n| Build Tool | Version\n\n| Maven\n| 3.5+\n\n| Gradle\n| 6.8.x, 6.9.x, and 7.x\n|===\n\n\n\n[[getting-started.system-requirements.servlet-containers]]\n=== Servlet Containers\nSpring Boot supports the following embedded servlet containers:\n\n|===\n| Name | Servlet Version\n\n| Tomcat 9.0\n| 4.0\n\n| Jetty 9.4\n| 3.1\n\n| Jetty 10.0\n| 4.0\n\n| Undertow 2.0\n| 4.0\n|===\n\nYou can also deploy Spring Boot applications to any Servlet 3.1+ compatible container.\n" }, { "from": "gpt", "value": "[[getting-started.system-requirements]]\n== System Requirements\nSpring Boot {spring-boot-version} requires https://www.java.com[Java 8] and is compatible up to and including Java 18.\n{spring-framework-docs}/[Spring Framework {spring-framework-version}] or above is also required.\n\nExplicit build support is provided for the following build tools:\n\n|===\n| Build Tool | Version\n\n| Maven\n| 3.5+\n\n| Gradle\n| 6.8.x, 6.9.x, and 7.x\n|===\n\n\n\n[[getting-started.system-requirements.servlet-containers]]\n=== Servlet Containers\nSpring Boot supports the following embedded servlet containers:\n\n|===\n| Name | Servlet Version\n\n| Tomcat 9.0\n| 4.0\n\n| Jetty 9.4\n| 3.1\n\n| Jetty 10.0\n| 4.0\n\n| Undertow 2.0\n| 4.0\n|===\n\nYou can also deploy Spring Boot applications to any Servlet 3.1+ compatible container.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Document snapshot download from JitPack\n" }, { "from": "human", "value": "[[installation]]\n== Installation\n\n=== Dependency\n\n```xml\n<dependency>\n <groupId>net.ttddyy</groupId>\n <artifactId>datasource-assert</artifactId>\n <version>[LATEST_VERSION]</version>\n</dependency>\n```\n\n{datasource-proxy}[datasource-proxy] is the only transitive dependency library. +\n{assertj}[AssertJ] and {hamcrest}[Hamcrest] are optional. If you need to use their assertions, you need to explicitly specify dependency\nin your project.\n\n\n==== Snapshot\n\nSnapshot is available via {oss-snapshot-repository}[oss sonatype snapshot repository].\n\nTo download snapshot jars, enable sonatype snapshot repository:\n\n```xml\n<repositories>\n <repository>\n <id>sonatype-snapshots-repo</id>\n <url>https://oss.sonatype.org/content/repositories/snapshots</url>\n <releases>\n <enabled>false</enabled>\n </releases>\n <snapshots>\n <enabled>true</enabled>\n </snapshots>\n </repository>\n</repositories>\n```\n\n=== Supported Java Versions\n\n`datasource-assert` works with java 1.7+.\n\n" }, { "from": "gpt", "value": "[[installation]]\n== Installation\n\n=== Dependency\n\n```xml\n<dependency>\n <groupId>net.ttddyy</groupId>\n <artifactId>datasource-assert</artifactId>\n <version>[LATEST_VERSION]</version>\n</dependency>\n```\n\n{datasource-proxy}[datasource-proxy] is the only transitive dependency library. +\n{assertj}[AssertJ] and {hamcrest}[Hamcrest] are optional. If you need to use their assertions, you need to explicitly specify dependency\nin your project.\n\n\n==== Snapshot (via Sonatype OSSRH)\n\nSnapshot is available via {oss-snapshot-repository}[oss sonatype snapshot repository].\n\nTo download snapshot jars, enable sonatype snapshot repository:\n\n```xml\n<repositories>\n <repository>\n <id>sonatype-snapshots-repo</id>\n <url>https://oss.sonatype.org/content/repositories/snapshots</url>\n <releases>\n <enabled>false</enabled>\n </releases>\n <snapshots>\n <enabled>true</enabled>\n </snapshots>\n </repository>\n</repositories>\n```\n\n==== Snapshot (via JitPack)\n\n```xml\n<repositories>\n <repository>\n <id>jitpack.io</id>\n <url>https://jitpack.io</url>\n </repository>\n</repositories>\n```\n\n```xml\n<dependency>\n <groupId>om.github.ttddyy</groupId>\n <artifactId>datasource-assert</artifactId>\n <version>master-SNAPSHOT</version>\n</dependency>\n```\n\n=== Supported Java Versions\n\n`datasource-assert` works with java 1.7+.\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add WebFlux section to What's New\n\nCloses gh-9590\n" }, { "from": "human", "value": "[[new]]\n== What's New in Spring Security 5.5\n\nSpring Security 5.5 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n=== Servlet\n* OAuth 2.0 Client\n\n** Added support for https://github.com/spring-projects/spring-security/pull/9520[Jwt Client Authentication] `private_key_jwt` and `client_secret_jwt`\n** Added https://github.com/spring-projects/spring-security/pull/9535[Jwt Bearer Authorization Grant] support\n** Added https://github.com/spring-projects/spring-security/pull/8765[R2DBC implementation] of `ReactiveOAuth2AuthorizedClientService`\n\n* Configuration\n\n** Introduced https://github.com/spring-projects/spring-security/issues/9205[DispatcherType request matcher]\n\n* Kotlin DSL\n\n** Added https://github.com/spring-projects/spring-security/issues/9319[rememberMe support]\n" }, { "from": "gpt", "value": "[[new]]\n== What's New in Spring Security 5.5\n\nSpring Security 5.5 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n=== Servlet\n* OAuth 2.0 Client\n\n** Added support for https://github.com/spring-projects/spring-security/pull/9520[Jwt Client Authentication] `private_key_jwt` and `client_secret_jwt`\n** Added https://github.com/spring-projects/spring-security/pull/9535[Jwt Bearer Authorization Grant] support\n** Added https://github.com/spring-projects/spring-security/pull/8765[R2DBC implementation] of `ReactiveOAuth2AuthorizedClientService`\n\n* Configuration\n\n** Introduced https://github.com/spring-projects/spring-security/issues/9205[DispatcherType request matcher]\n\n* Kotlin DSL\n\n** Added https://github.com/spring-projects/spring-security/issues/9319[rememberMe support]\n\n[[whats-new-webflux]]\n=== WebFlux\n\n** Added https://github.com/spring-projects/spring-security/issues/8143[Kotlin coroutine support] for `EnableReactiveMethodSecurity`\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Change Charting as a Service from CaaS to ChaaS. Pronounced Chaz. More unique chan CaaS and sounds better.\n" }, { "from": "human", "value": "= rhq-metrics\n\nProject to implement a MaaS and CaaS\n\n* MaaS: Metrics as a Service\n* CaaS: Charts as a Service\n" }, { "from": "gpt", "value": "= rhq-metrics\n\nProject to implement a MaaS and CaaS\n\n* MaaS: Metrics as a Service\n* ChaaS: Charts as a Service\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use project version number in docs\n" }, { "from": "human", "value": "To activate JSON views, add the following dependency to the `dependencies` block of your `build.gradle` where `VERSION` is the version of the plugin you plan to use:\n\n[source,groovy]\ncompile \"org.grails.plugins:views-json:VERSION\"\n\n\nTo enable Gradle compilation of JSON views for production environment add the following to the `buildscript` `dependencies` block:\n\n[source,groovy]\nbuildscript {\n ...\n dependencies {\n ...\n classpath \"org.grails.plugins:views-gradle:VERSION\"\n }\n}\n\nThen apply the `org.grails.plugins.views-json` Gradle plugin after any Grails core gradle plugins:\n\n[source,groovy]\n...\napply plugin: \"org.grails.grails-web\"\napply plugin: \"org.grails.plugins.views-json\"\n\nThis will add a `compileGsonViews` task to Gradle that is executed when producing a JAR or WAR file." }, { "from": "gpt", "value": "To activate JSON views, add the following dependency to the `dependencies` block of your `build.gradle`:\n\n[source,groovy,subs=\"attributes\"]\ncompile \"org.grails.plugins:views-json:{version}\"\n\n\nTo enable Gradle compilation of JSON views for production environment add the following to the `buildscript` `dependencies` block:\n\n[source,groovy,subs=\"attributes\"]\nbuildscript {\n ...\n dependencies {\n ...\n classpath \"org.grails.plugins:views-gradle:{version}\"\n }\n}\n\nThen apply the `org.grails.plugins.views-json` Gradle plugin after any Grails core gradle plugins:\n\n[source,groovy]\n...\napply plugin: \"org.grails.grails-web\"\napply plugin: \"org.grails.plugins.views-json\"\n\nThis will add a `compileGsonViews` task to Gradle that is executed when producing a JAR or WAR file.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Adjust year in copyright notice\n" }, { "from": "human", "value": "// Copyright 2017 the original author or authors.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n[[licenses]]\n= License Information\n\n\n[[sec:gradle_documentation]]\n== Gradle Documentation\n\n_Copyright Β© 2007-2018 Gradle, Inc._\n\nGradle build tool source code is open-source and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0].\n\n\nGradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].\n\n\n[[licenses:build_scan_plugin]]\n== Gradle Build Scan Plugin\n\nUse of the link:https://scans.gradle.com/plugin/[build scan plugin] is subject to link:https://gradle.com/legal/terms-of-service/[Gradle's Terms of Service].\n" }, { "from": "gpt", "value": "// Copyright 2017 the original author or authors.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n[[licenses]]\n= License Information\n\n\n[[sec:gradle_documentation]]\n== Gradle Documentation\n\n_Copyright Β© 2007-2019 Gradle, Inc._\n\nGradle build tool source code is open-source and licensed under the link:https://github.com/gradle/gradle/blob/master/LICENSE[Apache License 2.0].\n\n\nGradle user manual and DSL references are licensed under link:http://creativecommons.org/licenses/by-nc-sa/4.0/[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].\n\n\n[[licenses:build_scan_plugin]]\n== Gradle Build Scan Plugin\n\nUse of the link:https://scans.gradle.com/plugin/[build scan plugin] is subject to link:https://gradle.com/legal/terms-of-service/[Gradle's Terms of Service].\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Increment docs version to align with current code tag\n" }, { "from": "human", "value": "= tick\nMalcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]>\n0.4.10-alpha, 2019-03-27\n:toc: left\n:toclevels: 4\n:docinfo: shared\n:sectnums: true\n:sectnumlevels: 2\n:xrefstyle: short\n:nofooter:\n\n:leveloffset: +1\n\ninclude::intro.adoc[]\n\ninclude::setup.adoc[]\n\ninclude::api.adoc[]\n\ninclude::dates.adoc[]\n\ninclude::durations.adoc[]\n\ninclude::clocks.adoc[]\n\ninclude::intervals.adoc[]\n\ninclude::calendars.adoc[]\n\ninclude::schedules.adoc[]\n\ninclude::formatting.adoc[]\n\ninclude::cookbook/index.adoc[]\n\ninclude::bibliography.adoc[]\n" }, { "from": "gpt", "value": "= tick\nMalcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]>\n0.4.25-alpha, 2020-10-01\n:toc: left\n:toclevels: 4\n:docinfo: shared\n:sectnums: true\n:sectnumlevels: 2\n:xrefstyle: short\n:nofooter:\n\n:leveloffset: +1\n\ninclude::intro.adoc[]\n\ninclude::setup.adoc[]\n\ninclude::api.adoc[]\n\ninclude::dates.adoc[]\n\ninclude::durations.adoc[]\n\ninclude::clocks.adoc[]\n\ninclude::intervals.adoc[]\n\ninclude::calendars.adoc[]\n\ninclude::schedules.adoc[]\n\ninclude::formatting.adoc[]\n\ninclude::cookbook/index.adoc[]\n\ninclude::bibliography.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Revert indent of link, formatting failure\n" }, { "from": "human", "value": "= SpringBoot WebApp Demo\n\nhttps://projects.spring.io/spring-boot[Spring Boot]\nlooks like a nice way to get started.\nThis is a trivial webapp created using SpringBoot.\n\n== HowTo\n\n\tmvn spring-boot:run\n\nOr, in Eclipse with Spring plugin (or \"Spring Tool Suite\"), \nright-click Application.java, then RunAs->Spring Boot Application\n\nthen connect to \n\n\thttp://localhost:8000/\n\nNote: that is port 8000, _not_ the usual 8080 to avoid conflicts.\nChange this in application.properties if you don't like it.\n\nFill in the form and click Submit.\n\n== War Deployment\n\nIt seems that you can't have both the instant-deployment convenience of\nSpring Boot AND the security of a full WAR deployment in the same POM file.\nYou will need to make several changes to deploy as a WAR file. See the\nsection entitled \"Traditional Deployment\"--\"Create a deployable war file\" in\nthe spring-boot reference manual (Section 73.1 in the current snapshot as of\nthis writing).\n" }, { "from": "gpt", "value": "= SpringBoot WebApp Demo\n\nhttps://projects.spring.io/spring-boot[Spring Boot]\nlooks like a nice way to get started.\nThis is a trivial webapp created using SpringBoot.\n\n== HowTo\n\n\tmvn spring-boot:run\n\nOr, in Eclipse with Spring plugin (or \"Spring Tool Suite\"), \nright-click Application.java, then RunAs->Spring Boot Application\n\nthen connect to \n\nhttp://localhost:8000/\n\nNote: that is port 8000, _not_ the usual 8080 to avoid conflicts.\nChange this in application.properties if you don't like it.\n\nFill in the form and click Submit.\n\n== War Deployment\n\nIt seems that you can't have both the instant-deployment convenience of\nSpring Boot AND the security of a full WAR deployment in the same POM file.\nYou will need to make several changes to deploy as a WAR file. See the\nsection entitled \"Traditional Deployment\"--\"Create a deployable war file\" in\nthe spring-boot reference manual (Section 73.1 in the current snapshot as of\nthis writing).\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Include 5.1 migration docs in migration docs\n\nThe migration docs need to include a copy of the migration docs for all\nversions.\n" }, { "from": "human", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between minor versions -- e.g. `5.x` to `5.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\n* Migration between consecutive major versions -- e.g. `2.x` to `5.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between non-consecutive major versions -- e.g. `1.x` to `5.x` --\n is not supported.\n\nSee <<setup-upgrade>> for more info.\n--\ninclude::migrate_5_0.asciidoc[]\n" }, { "from": "gpt", "value": "[[breaking-changes]]\n= Breaking changes\n\n[partintro]\n--\nThis section discusses the changes that you need to be aware of when migrating\nyour application from one version of Elasticsearch to another.\n\nAs a general rule:\n\n* Migration between minor versions -- e.g. `5.x` to `5.y` -- can be\n performed by <<rolling-upgrades,upgrading one node at a time>>.\n\n* Migration between consecutive major versions -- e.g. `2.x` to `5.x` --\n requires a <<restart-upgrade,full cluster restart>>.\n\n* Migration between non-consecutive major versions -- e.g. `1.x` to `5.x` --\n is not supported.\n\nSee <<setup-upgrade>> for more info.\n--\n\ninclude::migrate_5_1.asciidoc[]\n\ninclude::migrate_5_0.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update script query doc for 5.1\n\nFrom 5.1, we changed the order of Script class ctor.\n\nRelated to https://github.com/elastic/elasticsearch/pull/21321#issuecomment-266432519\n\nBackport of #22161 in 5.x branch\n" }, { "from": "human", "value": "[[java-query-dsl-script-query]]\n==== Script Query\n\nSee {ref}/query-dsl-script-query.html[Script Query]\n\n[source,java]\n--------------------------------------------------\nQueryBuilder qb = scriptQuery(\n new Script(\"doc['num1'].value > 1\") <1>\n);\n--------------------------------------------------\n<1> inlined script\n\n\nIf you have stored on each data node a script named `myscript.painless` with:\n\n[source,painless]\n--------------------------------------------------\ndoc['num1'].value > params.param1\n--------------------------------------------------\n\nYou can use it then with:\n\n[source,java]\n--------------------------------------------------\nQueryBuilder qb = scriptQuery(\n new Script(\n \"myscript\", <1>\n ScriptType.FILE, <2>\n \"painless\", <3>\n Collections.singletonMap(\"param1\", 5)) <4>\n);\n--------------------------------------------------\n<1> Script name\n<2> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED`\n<3> Scripting engine\n<4> Parameters as a `Map` of `<String, Object>`\n" }, { "from": "gpt", "value": "[[java-query-dsl-script-query]]\n==== Script Query\n\nSee {ref}/query-dsl-script-query.html[Script Query]\n\n[source,java]\n--------------------------------------------------\nQueryBuilder qb = scriptQuery(\n new Script(\"doc['num1'].value > 1\") <1>\n);\n--------------------------------------------------\n<1> inlined script\n\n\nIf you have stored on each data node a script named `myscript.painless` with:\n\n[source,painless]\n--------------------------------------------------\ndoc['num1'].value > params.param1\n--------------------------------------------------\n\nYou can use it then with:\n\n[source,java]\n--------------------------------------------------\nQueryBuilder qb = scriptQuery(\n new Script(\n ScriptType.FILE, <1>\n \"painless\", <2>\n \"myscript\", <3>\n Collections.singletonMap(\"param1\", 5)) <4>\n);\n--------------------------------------------------\n<1> Script type: either `ScriptType.FILE`, `ScriptType.INLINE` or `ScriptType.INDEXED`\n<2> Scripting engine\n<3> Script name\n<4> Parameters as a `Map` of `<String, Object>`\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Document more steps to do when bumbing version\n" }, { "from": "human", "value": "HACKING\n=======\n\nHOMEPAGE\n--------\n\nThe master source code repository, bug tracking, etc for _pvpn_ is\nhttps://www.github.com/halhen/pvpn. Please report issues here, or to\[email protected].\n\n\nCONTRIBUTING\n------------\n* To contribute, send a patch as a github pull request or using\n +git format-patch+ to [email protected].\n* Keep your patches atomic and complete. Remember documentation.\n* Add yourself to _AUTHORS_.\n* Unless you have a good reason to, rebase your changes to the latest\n master before submitting.\n* Spend an extra minute to write a proper commit message\n** Describe the effect of the patch, rather than the contents.\n** Use imperative form (e.g. +Change foo+ rather than +Changed foo+)\n** Reference the issue (+#NNN+ or, if fixed, +fixes #NNN+) if\n applicable.\n** Max 50 characters in the title.\n** Max 72 characters wide lines.\n\nDon't be discouraged or annoyed if I ask you to change your patch before\nI accept it. The reason is probably that I have not been specific enough\nabout how I would like patches to be.\n\n\nCHECKLISTS\n----------\n\nCreate a new version\n~~~~~~~~~~~~~~~~~~~~\n\n* Update +VERSION+ in +Makefile+\n* Describe changes +NEWS.asciidoc+\n* Commit with version bump commit message\n* +git tag <VERSION>+\n" }, { "from": "gpt", "value": "HACKING\n=======\n\nHOMEPAGE\n--------\n\nThe master source code repository, bug tracking, etc for _pvpn_ is\nhttps://www.github.com/halhen/pvpn. Please report issues here, or to\[email protected].\n\n\nCONTRIBUTING\n------------\n* To contribute, send a patch as a github pull request or using\n +git format-patch+ to [email protected].\n* Keep your patches atomic and complete. Remember documentation.\n* Add yourself to _AUTHORS_.\n* Unless you have a good reason to, rebase your changes to the latest\n master before submitting.\n* Spend an extra minute to write a proper commit message\n** Describe the effect of the patch, rather than the contents.\n** Use imperative form (e.g. +Change foo+ rather than +Changed foo+)\n** Reference the issue (+#NNN+ or, if fixed, +fixes #NNN+) if\n applicable.\n** Max 50 characters in the title.\n** Max 72 characters wide lines.\n\nDon't be discouraged or annoyed if I ask you to change your patch before\nI accept it. The reason is probably that I have not been specific enough\nabout how I would like patches to be.\n\n\nCHECKLISTS\n----------\n\nCreate a new version\n~~~~~~~~~~~~~~~~~~~~\n\n* Update +VERSION+ in +Makefile+\n* Describe changes +NEWS.asciidoc+\n* Commit with version bump commit message\n* +git tag <VERSION>+\n* +git push+\n* +git push --tags+\n* make and upload new PKGBUILD to AUR\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add draft of i18n docs\n" }, { "from": "human", "value": "\n\n== Internationalization (i18n)\n\n" }, { "from": "gpt", "value": "\n= Internationalization (i18n)\n\n== Intro\n\nTranslation messages for Beavy core, as well as Beavy apps and modules, use the\n[ICU syntax](http://userguide.icu-project.org/formatparse/messages).\n\nThe translation messages themselves are managed with the\n[transifex](http://docs.transifex.com/introduction/)\nlocalization platform, using English as the default language. Please see\nBeavyHQ's transifex dashboard here:\n\n[www.transifex.com/beavy/beavy/](https://www.transifex.com/beavy/beavy/)\n\nIf you would like to help us translate Beavy, please let us know or click the\n\"Help Translate Beavy\" button on the link above.\n\n== Python Translations\n\nBackend translations in python are accomplished using the following functions\n(supported by [Flask-ICU](https://github.com/beavyHQ/flask-icu)):\n\n * `format()`\n * `format_date()`\n * `format_time()`\n * `format_datetime()`\n * `format_number()`\n * `format_decimal()`\n * `format_currency()`\n * `format_scientific()`\n * `format_percent()`\n\n== How to use it in Javascript\n\n== Working with translations files\n\n== Extracting & pushing to transifex\n\nTo extract messages \n\n== Updating from transifex\n\n== Shipping your own translations (for your app)\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update link to CI build\n" }, { "from": "human", "value": "## Developer Info\n\n[discrete]\n### Source\n\nhttps://github.com/wildfly-extras/wildfly-camel\n\n[discrete]\n### Issues\n\nhttps://github.com/wildfly-extras/wildfly-camel/issues\n\n[discrete]\n### Jenkins\n\nhttps://ci.fabric8.io/job/wildfly-camel\n\n[discrete]\n### Downloads\n\nhttps://github.com/wildfly-extras/wildfly-camel/releases\n\n[discrete]\n### Forums, Lists, IRC\n\n#wildfly-camel channel on irc.freenode.net\n" }, { "from": "gpt", "value": "## Developer Info\n\n[discrete]\n### Source\n\nhttps://github.com/wildfly-extras/wildfly-camel\n\n[discrete]\n### Issues\n\nhttps://github.com/wildfly-extras/wildfly-camel/issues\n\n[discrete]\n### CI Build\n\nhttps://github.com/wildfly-extras/wildfly-camel/actions\n\n[discrete]\n### Downloads\n\nhttps://github.com/wildfly-extras/wildfly-camel/releases\n\n[discrete]\n### Forums, Lists, IRC\n\n#wildfly-camel channel on irc.freenode.net\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add information about failure to load waypoints" }, { "from": "human", "value": "= How to Run a Mock Mission\n\n1. Start MavProxy at the location of the first sample waypoint.\n\n $ cd <path-to-ardupilot>/APMrover2\n $ sim_vehicle.py -l 37.659984,-121.8874624,0,45\n\n2. Start the simulation. Zoom and center the waypoints, if you like.\nMouse wheel or 2-finger scroll zooms, control-shift-drag moves the\nmap around.\n\n $ roslaunch robo_magellan mock_mission.launch\n\n3. In MavProxy, load the waypoint sample.\n\n rosrun mavros mavwp load data/sample.qgc-wpl.txt\n\n4. Start the mission by arming the throttle and going into AUTO mode.\n\n $ rosservice call /mavros/cmd/arming \"value: true\"\n success: True\n result: 0\n $ rosservice call /mavros/set_mode \"{custom_mode: 'AUTO'}\"\n mode_sent: True\n\nIn rviz, you should see the rover follow the waypoints. When a cone is visible,\nthe heading to the cone is shown with a red arrow.\n\nYou can move the robot back to the starting point (approximately) using the command:\n\n $ rosservice call /mavros/set_mode \"{custom_mode: 'RTL'}\"\n \nThen you can restart the mock mission by setting the mode to AUTO again, in step 4.\n" }, { "from": "gpt", "value": "= How to Run a Mock Mission\n\n1. Start MavProxy at the location of the first sample waypoint.\n\n $ cd <path-to-ardupilot>/APMrover2\n $ sim_vehicle.py -l 37.659984,-121.8874624,0,45\n\n2. Start the simulation. Zoom and center the waypoints, if you like.\nMouse wheel or 2-finger scroll zooms, control-shift-drag moves the\nmap around.\n\n $ roslaunch robo_magellan mock_mission.launch\n\n3. In MavProxy, load the waypoint sample. You should then see the robot and the waypoints\nin rviz. If you don't, try it again after a few seconds. (MavProxy can still be waiting\nfor SITL to start up.)\n\n rosrun mavros mavwp load data/sample.qgc-wpl.txt\n\n4. Start the mission by arming the throttle and going into AUTO mode.\n\n $ rosservice call /mavros/cmd/arming \"value: true\"\n success: True\n result: 0\n $ rosservice call /mavros/set_mode \"{custom_mode: 'AUTO'}\"\n mode_sent: True\n\nIn rviz, you should see the rover follow the waypoints. When a cone is visible,\nthe heading to the cone is shown with a red arrow.\n\nYou can move the robot back to the starting point (approximately) using the command:\n\n $ rosservice call /mavros/set_mode \"{custom_mode: 'RTL'}\"\n \nThen you can restart the mock mission by setting the mode to AUTO again, in step 4.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix copy & paste error" }, { "from": "human", "value": "\n// Allow GitHub image rendering\n:imagesdir: ../../../images\n\n[[ga-opennms-operation-daemon-config-files-pollerd]]\n==== Notifd\n\n[options=\"header, autowidth\"]\n|===\n| Internal Daemon Name | Reload Event\n| _Notifd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Notifd'`\n|===\n\n.Notifd configuration file overview\n[options=\"header, autowidth\"]\n|===\n| File | Restart Required | Reload Event | Description\n| `notifd-configuration.xml` | no | yes | Describes auto-acknowledge prefix, e.g. prefix \"RESOLVED: \" for nodeUp/nodeDown events.\n| `notificationCommands.xml` | no | no | Configuration for notification media, e.g. scripts, XMPP or HTTP Post, immediately applied.\n| `notifications.xml` | no | no | Event notification definitions and changes are immediately applied.\n| `destinationPaths.xml` | no | no | Contains paths for notification targets, e.g. JavaMail, XMPP or external scripts.\n| `users.xml` | no | no | Contain pager and address information for notification destination paths.\n| `groups.xml` | no | no | Groups can be used as target for notifications.\n| `javamail-configuration.properties` | no | no | Configuration to send notification mails via specific mail servers.\n|===\n" }, { "from": "gpt", "value": "\n// Allow GitHub image rendering\n:imagesdir: ../../../images\n\n[[ga-opennms-operation-daemon-config-files-notifd]]\n==== Notifd\n\n[options=\"header, autowidth\"]\n|===\n| Internal Daemon Name | Reload Event\n| _Notifd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Notifd'`\n|===\n\n.Notifd configuration file overview\n[options=\"header, autowidth\"]\n|===\n| File | Restart Required | Reload Event | Description\n| `notifd-configuration.xml` | no | yes | Describes auto-acknowledge prefix, e.g. prefix \"RESOLVED: \" for nodeUp/nodeDown events.\n| `notificationCommands.xml` | no | no | Configuration for notification media, e.g. scripts, XMPP or HTTP Post, immediately applied.\n| `notifications.xml` | no | no | Event notification definitions and changes are immediately applied.\n| `destinationPaths.xml` | no | no | Contains paths for notification targets, e.g. JavaMail, XMPP or external scripts.\n| `users.xml` | no | no | Contain pager and address information for notification destination paths.\n| `groups.xml` | no | no | Groups can be used as target for notifications.\n| `javamail-configuration.properties` | no | no | Configuration to send notification mails via specific mail servers.\n|===\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "[DOCS] Fix snapshot repo API jump list\n" }, { "from": "human", "value": "[[snapshot-restore-apis]]\n== Snapshot and restore APIs\n\nYou can use the following APIs to set up snapshot repositories, manage snapshot\nbackups, and restore snapshots to a running cluster.\n\nFor more information, see <<snapshot-restore>>.\n\n[NOTE]\n====\nWe are working on including more snapshot and restore APIs in this section. Some\ncontent may not be included yet.\n====\n\n[discrete]\n[[snapshot-restore-repo-apis]]\n=== Snapshot repository management APIs\n\n* <<clean-up-snapshot-repo-api,Clean up snapshot repository>>\n* <<delete-snapshot-repo-api,Delete snapshot repository>>\n* <<put-snapshot-repo-api,Put snapshot repository>>\n\n\ninclude::clean-up-repo-api.asciidoc[]\ninclude::delete-repo-api.asciidoc[]\ninclude::get-repo-api.asciidoc[]\ninclude::put-repo-api.asciidoc[]" }, { "from": "gpt", "value": "[[snapshot-restore-apis]]\n== Snapshot and restore APIs\n\nYou can use the following APIs to set up snapshot repositories, manage snapshot\nbackups, and restore snapshots to a running cluster.\n\nFor more information, see <<snapshot-restore>>.\n\n[NOTE]\n====\nWe are working on including more snapshot and restore APIs in this section. Some\ncontent may not be included yet.\n====\n\n[discrete]\n[[snapshot-restore-repo-apis]]\n=== Snapshot repository management APIs\n\n* <<clean-up-snapshot-repo-api,Clean up snapshot repository>>\n* <<delete-snapshot-repo-api,Delete snapshot repository>>\n* <<get-snapshot-repo-api,Get snapshot repository>>\n* <<put-snapshot-repo-api,Put snapshot repository>>\n\n\ninclude::clean-up-repo-api.asciidoc[]\ninclude::delete-repo-api.asciidoc[]\ninclude::get-repo-api.asciidoc[]\ninclude::put-repo-api.asciidoc[]" } ]
asciidoc
chosen
[ { "from": "system", "value": "Docs: Rename reference docs to Elasticsearch Reference\n" }, { "from": "human", "value": "[[elasticsearch-reference]]\n= Reference\n\n:version: 1.5.2\n:branch: 1.5\n:jdk: 1.8.0_25\n:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n" }, { "from": "gpt", "value": "[[elasticsearch-reference]]\n= Elasticsearch Reference\n\n:version: 1.5.2\n:branch: 1.5\n:jdk: 1.8.0_25\n:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use `sectlinks` in docs to make linking to sections easier\n" }, { "from": "human", "value": ":author: Peter Niederwieser, The Spock Framework Team\n:revnumber: X-replaced-by-gradle\n:sourcedir: ../spock-specs/src/test/groovy/org/spockframework/docs\n:sourcedir-spring: ../spock-spring/src/test/groovy/org/spockframework/spring/docs\n:resourcedir-spring: ../spock-spring/src/test/resources/org/spockframework/spring/docs\n:sourcedir-spring-boot: ../spock-spring/boot-test/src/test/groovy/org/spockframework/boot\n" }, { "from": "gpt", "value": ":author: Peter Niederwieser, The Spock Framework Team\n:revnumber: X-replaced-by-gradle\n:sourcedir: ../spock-specs/src/test/groovy/org/spockframework/docs\n:sourcedir-spring: ../spock-spring/src/test/groovy/org/spockframework/spring/docs\n:resourcedir-spring: ../spock-spring/src/test/resources/org/spockframework/spring/docs\n:sourcedir-spring-boot: ../spock-spring/boot-test/src/test/groovy/org/spockframework/boot\n:sectlinks: true\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add additional info to overview\n" }, { "from": "human", "value": "= Overview\n\nSpring Security is a framework that provides authentication, authorization, and protection against common attacks.\nWith first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.\n\n" }, { "from": "gpt", "value": "= Spring Security\n\nSpring Security is a framework that provides xref:features/authentication/index.adoc[authentication], xref:features/authorization/index.adoc[authorization], and xref:features/exploits/index.adoc[protection against common attacks].\nWith first class support for securing both xref:servlet/index.adoc[imperative] and xref:reactive/index.adoc[reactive] applications, it is the de-facto standard for securing Spring-based applications.\n\nFor a complete list of features, see the xref:features/index.adoc[Features] section of the reference.\n\n== Getting Started\n\nIf you are ready to start securing an application see the Getting Started sections for xref:servlet/getting-started.adoc[servlet] and xref:reactive/getting-started.adoc[reactive]. These sections will walk you through creating your first Spring Security applications.\n\nIf you want to understand how Spring Security works, you can refer to the xref:servlet/architecture.adoc[Architecture] section.\n// FIXME add link to reactive architecture\n\nIf you have any questions, there is a wonderful xref:community.adoc[community] that would love to help you!\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix links to RHPDS demo guides\n" }, { "from": "human", "value": "= RHPDS instructions for OptaPlanner demos\n:awestruct-description: How to demo OptaPlanner with RHPDS\n:awestruct-layout: normalBase\n:awestruct-priority: 0.1\n:showtitle:\n\n== Visit RHPDS\n\nhttps://rhpds.redhat.com/[Visit RHPDS]\n\n== Available demos on RHPDS\n\n* link:optaweb-vehicle-routing.adoc[OptaWeb Vehicle Routing]\n* link:optaweb-employee-rostering.adoc[OptaWeb Employee Rostering]\n" }, { "from": "gpt", "value": "= RHPDS instructions for OptaPlanner demos\n:awestruct-description: How to demo OptaPlanner with RHPDS\n:awestruct-layout: normalBase\n:awestruct-priority: 0.1\n:showtitle:\n\n== Visit RHPDS\n\nhttps://rhpds.redhat.com/[Visit RHPDS]\n\n== Available demos on RHPDS\n\n* link:optaweb-vehicle-routing.html[OptaWeb Vehicle Routing]\n* link:optaweb-employee-rostering.html[OptaWeb Employee Rostering]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove Temp-URL step from Swift procedure - BZ 1805176\n" }, { "from": "human", "value": "//Module included in the following assemblies:\n//\n// * installing/installing_openstack/installing-openstack-installer.adoc\n// * installing/installing_openstack/installing-openstack-installer-custom.adoc\n// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc\n\n[id=\"installation-osp-enabling-swift_{context}\"]\n= Enabling Swift on OpenStack\n\n{product-title} on {rh-openstack-first} uses https://docs.openstack.org/security-guide/object-storage.html[OpenStack Object Storage (Swift)] to store and serve user configuration files.\n\nSwift is operated by a user account with the `swiftoperator` role and `temp-url` support.\n\n.Prerequisites\n\n* A {rh-openstack} administrator account on the target environment\n* On Ceph RGW, https://docs.ceph.com/docs/master/radosgw/config-ref/#swift-settings[the `account in url` option must be enabled]\n\n.Procedure\n\nTo enable Swift on {rh-openstack}:\n\n. As an administrator in the {rh-openstack} CLI, add the `swiftoperator` role to the account that will access Swift:\n+\n----\n$ openstack role add --user <user> --project <project> swiftoperator\n----\n. As the account with the `swiftoperator` role, set a temporary URL property for the account:\n+\n----\n$ openstack object store account set --property Temp-URL-Key=superkey\n----\n\nYour {rh-openstack} deployment can now use Swift to store and serve files.\n" }, { "from": "gpt", "value": "//Module included in the following assemblies:\n//\n// * installing/installing_openstack/installing-openstack-installer.adoc\n// * installing/installing_openstack/installing-openstack-installer-custom.adoc\n// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc\n\n[id=\"installation-osp-enabling-swift_{context}\"]\n= Enabling Swift on OpenStack\n\n{product-title} on {rh-openstack-first} uses https://docs.openstack.org/security-guide/object-storage.html[OpenStack Object Storage (Swift)] to store and serve user configuration files.\n\nSwift is operated by a user account with the `swiftoperator` role.\n\n.Prerequisites\n\n* A {rh-openstack} administrator account on the target environment\n* On Ceph RGW, https://docs.ceph.com/docs/master/radosgw/config-ref/#swift-settings[the `account in url` option must be enabled]\n\n.Procedure\n\nTo enable Swift on {rh-openstack}:\n\n. As an administrator in the {rh-openstack} CLI, add the `swiftoperator` role to the account that will access Swift:\n+\n----\n$ openstack role add --user <user> --project <project> swiftoperator\n----\n\nYour {rh-openstack} deployment can now use Swift to store and serve files.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add oauth apps section to path, in case users are reading the text and not clicking the links.\nChoosing the Developer settings optiion on GitHub brings users to the GitHub Apps\nsection (settings/apps) and not the OAuth Apps section (/settings/developers)\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * authentication/identity_providers/configuring-github-identity-provider.adoc\n\n[id=\"identity-provider-registering-github_{context}\"]\n= Registering a GitHub application\n\nTo use GitHub or GitHub Enterprise as an identity provider, you must register\nan application to use.\n\n.Procedure\n\n. Register an application on GitHub:\n** For GitHub, click https://github.com/settings/profile[Settings] ->\nhttps://github.com/settings/developers[Developer settings] ->\nhttps://github.com/settings/applications/new[Register a new OAuth application].\n** For GitHub Enterprise, go to your GitHub Enterprise home page and then click\n*Settings -> Developer settings -> Register a new application*.\n. Enter an application name, for example `My OpenShift Install`.\n. Enter a homepage URL, such as\n`\\https://oauth-openshift.apps.<cluster-name>.<cluster-domain>`.\n. Optional: Enter an application description.\n. Enter the authorization callback URL, where the end of the URL contains the\nidentity provider `name`:\n+\n----\nhttps://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>\n----\n+\nFor example:\n+\n----\nhttps://oauth-openshift.apps.example-openshift-cluster.com/oauth2callback/github/\n----\n. Click *Register application*. GitHub provides a Client ID and a Client Secret.\nYou need these values to complete the identity provider configuration.\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * authentication/identity_providers/configuring-github-identity-provider.adoc\n\n[id=\"identity-provider-registering-github_{context}\"]\n= Registering a GitHub application\n\nTo use GitHub or GitHub Enterprise as an identity provider, you must register\nan application to use.\n\n.Procedure\n\n. Register an application on GitHub:\n** For GitHub, click https://github.com/settings/profile[Settings] ->\nhttps://github.com/settings/apps[Developer settings] ->\nhttps://github.com/settings/developers[OAuth Apps] ->\nhttps://github.com/settings/applications/new[Register a new OAuth application].\n** For GitHub Enterprise, go to your GitHub Enterprise home page and then click\n*Settings -> Developer settings -> Register a new application*.\n. Enter an application name, for example `My OpenShift Install`.\n. Enter a homepage URL, such as\n`\\https://oauth-openshift.apps.<cluster-name>.<cluster-domain>`.\n. Optional: Enter an application description.\n. Enter the authorization callback URL, where the end of the URL contains the\nidentity provider `name`:\n+\n----\nhttps://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>\n----\n+\nFor example:\n+\n----\nhttps://oauth-openshift.apps.example-openshift-cluster.com/oauth2callback/github/\n----\n. Click *Register application*. GitHub provides a Client ID and a Client Secret.\nYou need these values to complete the identity provider configuration.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix minor issues in delimited payload token filter docs\n\nThis commit addresses a few minor issues in the delimited payload token\nfilter docs:\n - the provided example reversed the payloads associated with the\n tokens \"the\" and \"fox\"\n - two additional typos in the same sentence\n - \"per default\" -> \"by default\"\n - \"default int to\" -> \"default into\"\n - adds two serial commas\n" }, { "from": "human", "value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split per default int to tokens `fox`, `quick` and `the` with payloads `1`, `2` and `3` respectively.\n\n\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`." }, { "from": "gpt", "value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`." } ]
asciidoc
chosen
[ { "from": "system", "value": "Create Dockerfile in empty folder\n" }, { "from": "human", "value": "= Running your database with data included\n*Having to insert the data everytime you start a container can become a hassle. In this exercise we will put the data inside the Postgres image, so that it will be available on startup.*\n\nThe https://hub.docker.com/_/postgres/[Postgres] image provides a convenient way to execute scripts on startup of your database. All we have to do is add a .sh script to the docker-entrypoint-initdb.d-folder as per the documentation of the image.\n\n== Steps to prepare the database\n. Create the .sh script on your harddisk\n. Create a Dockerfile that\n.. Is build https://docs.docker.com/engine/reference/builder/#/from[from] postgres:9\n.. https://docs.docker.com/engine/reference/builder/#/add[Adds] the .sh file to the docker-entrypoint-initdb.d folder in the container\n. Build the image, run it and link it with the testnetapp container\n\n\n.The script to add to the Dockerfile\n----\n#!/bin/bash\nset -e\n\npsql -v ON_ERROR_STOP=1 --username \"$POSTGRES_USER\" <<-EOSQL\n CREATE DATABASE testnet;\n \\c testnet\n CREATE TABLE kv (key varchar(100) PRIMARY KEY, value varchar(100));\n INSERT INTO kv VALUES ('provider','postgres database');\nEOSQL\n----" }, { "from": "gpt", "value": "= Running your database with data included\n*Having to insert the data everytime you start a container can become a hassle. In this exercise we will put the data inside the Postgres image, so that it will be available on startup.*\n\nThe https://hub.docker.com/_/postgres/[Postgres] image provides a convenient way to execute scripts on startup of your database. All we have to do is add a .sh script to the docker-entrypoint-initdb.d-folder as per the documentation of the image.\n\n== Steps to prepare the database\n. Create an empty folder\n. Put the .sh script from below in that folder\n. Create a Dockerfile in the folder that\n.. Is build https://docs.docker.com/engine/reference/builder/#/from[from] postgres:9\n.. https://docs.docker.com/engine/reference/builder/#/add[Adds] the .sh file to the docker-entrypoint-initdb.d folder in the container\n. Build the image, run it and link it with the testnetapp container\n\n\n.The script to add to the Dockerfile\n----\n#!/bin/bash\nset -e\n\npsql -v ON_ERROR_STOP=1 --username \"$POSTGRES_USER\" <<-EOSQL\n CREATE DATABASE testnet;\n \\c testnet\n CREATE TABLE kv (key varchar(100) PRIMARY KEY, value varchar(100));\n INSERT INTO kv VALUES ('provider','postgres database');\nEOSQL\n----" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update What's New for 5.7\n" }, { "from": "human", "value": "[[new]]\n= What's New in Spring Security 5.7\n\nSpring Security 5.7 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n== Servlet\n\n* Web\n\n** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]\n** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`\n\n* OAuth 2.0 Client\n\n** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]\n** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`\n** Allow customizing claims on https://github.com/spring-projects/spring-security/issues/9855[JWT client assertions]\n\n[[whats-new-webflux]]\n== WebFlux\n\n* OAuth 2.0 Client\n\n** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]\n** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`\n" }, { "from": "gpt", "value": "[[new]]\n= What's New in Spring Security 5.7\n\nSpring Security 5.7 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n== Servlet\n\n* Web\n\n** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]\n** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`\n\n* OAuth 2.0 Client\n\n** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]\n** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`\n** Allow customizing claims on https://github.com/spring-projects/spring-security/issues/9855[JWT client assertions]\n\n[[whats-new-webflux]]\n== WebFlux\n\n* Web\n\n** Allow customizing https://github.com/spring-projects/spring-security/issues/10903[charset] in `ServerHttpBasicAuthenticationConverter`\n\n* OAuth 2.0 Client\n\n** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]\n** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update What's New for 6.0\n" }, { "from": "human", "value": "[[new]]\n= What's New in Spring Security 6.0\n\nSpring Security 6.0 provides a number of new features.\nBelow are the highlights of the release.\n\n== Breaking Changes\n\n* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.\nUse the OpenSaml 4 Support instead.\n* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.\nInstead use data storage to encrypt values.\n* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default\n* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages\nReorganize imports\n" }, { "from": "gpt", "value": "[[new]]\n= What's New in Spring Security 6.0\n\nSpring Security 6.0 provides a number of new features.\nBelow are the highlights of the release.\n\n== Breaking Changes\n\n* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.\nUse the OpenSaml 4 Support instead.\n* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.\nInstead use data storage to encrypt values.\n* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default\n* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Disable GPG check for DataStax repo on RHEL 7.\n\nChange-Id: I1c9c1bfddfff7847374563aedaa3130132100f13\nFixes: MND-19\nSigned-off-by: Jan Hilberath <[email protected]>\n" }, { "from": "human", "value": ". *Enable DataStax repository*\n+\n====\nCreate the `/etc/yum.repos.d/datastax.repo` file and edit it to contain the\nfollowing:\n\n[source]\n----\n# DataStax (Apache Cassandra)\n[datastax]\nname = DataStax Repo for Apache Cassandra\nbaseurl = http://rpm.datastax.com/community\nenabled = 1\ngpgcheck = 1\ngpgkey = https://rpm.datastax.com/rpm/repo_key\n----\n====\n" }, { "from": "gpt", "value": ". *Enable DataStax repository*\n+\n====\nCreate the `/etc/yum.repos.d/datastax.repo` file and edit it to contain the\nfollowing:\n\n[source]\n----\n# DataStax (Apache Cassandra)\n[datastax]\nname = DataStax Repo for Apache Cassandra\nbaseurl = http://rpm.datastax.com/community\nenabled = 1\ngpgcheck = 0\ngpgkey = https://rpm.datastax.com/rpm/repo_key\n----\n====\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix section links to not include .html suffix\n\nCloses gh-33206\n" }, { "from": "human", "value": "[[features]]\n= Core Features\ninclude::attributes.adoc[]\n\n\n\nThis section dives into the details of Spring Boot.\nHere you can learn about the key features that you may want to use and customize.\nIf you have not already done so, you might want to read the \"<<getting-started#getting-started>>\" and \"<<using#using>>\" sections, so that you have a good grounding of the basics.\n\n\n\ninclude::features/spring-application.adoc[]\n\ninclude::features/external-config.adoc[]\n\ninclude::features/profiles.adoc[]\n\ninclude::features/logging.adoc[]\n\ninclude::features/internationalization.adoc[]\n\ninclude::features/json.adoc[]\n\ninclude::features/task-execution-and-scheduling.adoc[]\n\ninclude::features/testing.adoc[]\n\ninclude::features/developing-auto-configuration.adoc[]\n\ninclude::features/kotlin.adoc[]\n\ninclude::features/whats-next.adoc[]\n" }, { "from": "gpt", "value": "[[features]]\n= Core Features\ninclude::attributes.adoc[]\n\n\n\nThis section dives into the details of Spring Boot.\nHere you can learn about the key features that you may want to use and customize.\nIf you have not already done so, you might want to read the \"<<getting-started#getting-started, Getting Started>>\" and \"<<using#using, Developing with Spring Boot>>\" sections, so that you have a good grounding of the basics.\n\n\n\ninclude::features/spring-application.adoc[]\n\ninclude::features/external-config.adoc[]\n\ninclude::features/profiles.adoc[]\n\ninclude::features/logging.adoc[]\n\ninclude::features/internationalization.adoc[]\n\ninclude::features/json.adoc[]\n\ninclude::features/task-execution-and-scheduling.adoc[]\n\ninclude::features/testing.adoc[]\n\ninclude::features/developing-auto-configuration.adoc[]\n\ninclude::features/kotlin.adoc[]\n\ninclude::features/whats-next.adoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Remove extra step for removing an additional network\n\n- https://issues.redhat.com/browse/OSDOCS-872\n- https://bugzilla.redhat.com/show_bug.cgi?id=1755908\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * networking/multiple_networks/remove-additional-network.adoc\n\n[id=\"nw-multus-delete-network_{context}\"]\n= Removing an additional network attachment definition\n\nAs a cluster administrator, you can remove an additional network from your\n{product-title} cluster. The additional network is not removed from any Pods it\nis attached to.\n\n.Prerequisites\n\n* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.\n* Log in as a user with `cluster-admin` privileges.\n\n.Procedure\n\nTo remove an additional network from your cluster, complete the following steps:\n\n. Edit the Cluster Network Operator (CNO) in your default text editor by running\nthe following command:\n+\n----\n$ oc edit networks.operator.openshift.io cluster\n----\n\n. Modify the CR by removing the configuration from the `additionalNetworks`\ncollection for the network attachment definition you are removing.\n+\n[source,yaml]\n----\napiVersion: operator.openshift.io/v1\nkind: Network\nmetadata:\n name: cluster\nspec:\n additionalNetworks: [] <1>\n----\n<1> If you are removing the configuration mapping for the only additional\nnetwork attachment definition in the `additionalNetworks` collection, you must\nspecify an empty collection.\n\n. Save your changes and quit the text editor to commit your changes.\n\n. Delete the NetworkAttachmentDefinition CR for the additional network by running the following command. Replace `<name>` with the name of the additional network to remove.\n+\n----\n$ oc delete network-attachment-definition <name>\n----\n\n. Optional: Confirm that the additional network CR was deleted by running the following command:\n+\n----\n$ oc get network-attachment-definition --all-namespaces\n----\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * networking/multiple_networks/remove-additional-network.adoc\n\n[id=\"nw-multus-delete-network_{context}\"]\n= Removing an additional network attachment definition\n\nAs a cluster administrator, you can remove an additional network from your\n{product-title} cluster. The additional network is not removed from any Pods it\nis attached to.\n\n.Prerequisites\n\n* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.\n* Log in as a user with `cluster-admin` privileges.\n\n.Procedure\n\nTo remove an additional network from your cluster, complete the following steps:\n\n. Edit the Cluster Network Operator (CNO) in your default text editor by running\nthe following command:\n+\n----\n$ oc edit networks.operator.openshift.io cluster\n----\n\n. Modify the CR by removing the configuration from the `additionalNetworks`\ncollection for the network attachment definition you are removing.\n+\n[source,yaml]\n----\napiVersion: operator.openshift.io/v1\nkind: Network\nmetadata:\n name: cluster\nspec:\n additionalNetworks: [] <1>\n----\n<1> If you are removing the configuration mapping for the only additional\nnetwork attachment definition in the `additionalNetworks` collection, you must\nspecify an empty collection.\n\n. Save your changes and quit the text editor to commit your changes.\n\n. 
Optional: Confirm that the additional network CR was deleted by running the following command:\n+\n----\n$ oc get network-attachment-definition --all-namespaces\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Add --binary-image to opm docs\n" }, { "from": "human", "value": "// Module included in the following assemblies:\n//\n// * operators/olm-managing-custom-catalogs.adoc\n\n[id=\"olm-creating-index-image_{context}\"]\n= Creating an index image\n\nYou can create an index image using the `opm` CLI.\n\n.Prerequisites\n\n* `opm` version 1.12.3+\n* `podman` version 1.4.4+\n* A bundle image built and pushed to a registry.\n\n.Procedure\n\n. Start a new index:\n+\n----\n$ opm index add \\\n --bundles quay.io/<namespace>/test-operator:v0.1.0 \\//<1>\n --tag quay.io/<namespace>/test-catalog:latest <2>\n----\n<1> The bundle image to add to the index.\n<2> The image tag that you want the index image to have.\n\n. Push the index image to a registry:\n+\n----\n$ podman push quay.io/<namespace>/test-catalog:latest\n----\n" }, { "from": "gpt", "value": "// Module included in the following assemblies:\n//\n// * operators/olm-managing-custom-catalogs.adoc\n\n[id=\"olm-creating-index-image_{context}\"]\n= Creating an index image\n\nYou can create an index image using the `opm` CLI.\n\n.Prerequisites\n\n* `opm` version 1.12.3+\n* `podman` version 1.4.4+\n* A bundle image built and pushed to a registry.\n\n.Procedure\n\n. Start a new index:\n+\n----\n$ opm index add \\\n --bundles quay.io/<namespace>/test-operator:v0.1.0 \\//<1>\n --tag quay.io/<namespace>/test-catalog:latest \\//<2>\n [--binary-image <registry_base_image>] <3>\n----\n<1> Comma-separated list of bundle images to add to the index.\n<2> The image tag that you want the index image to have.\n<3> Optional: An alternative registry base image to use for serving the catalog.\n\n. Push the index image to a registry:\n+\n----\n$ podman push quay.io/<namespace>/test-catalog:latest\n----\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Use master version in docs\n" }, { "from": "human", "value": "= Topbeat reference\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/1.0.0-rc1\n:version: 1.0.0-rc1\n\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\n// include::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./windows.asciidoc[]\n\n// include::./troubleshooting.asciidoc[]\n" }, { "from": "gpt", "value": "= Topbeat reference\n:libbeat: http://www.elastic.co/guide/en/beats/libbeat/master\n:version: master\n\n\ninclude::./overview.asciidoc[]\n\ninclude::./gettingstarted.asciidoc[]\n\ninclude::./fields.asciidoc[]\n\n// include::./configuration.asciidoc[]\n\ninclude::./command-line.asciidoc[]\n\ninclude::./windows.asciidoc[]\n\n// include::./troubleshooting.asciidoc[]\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update What's New for 5.8\n" }, { "from": "human", "value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts\n" }, { "from": "gpt", "value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts\n* https://github.com/spring-projects/spring-security/pull/11464[gh-11464] - Remember Me supports SHA256 algorithm\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Update doc structure. ignore multicast on TC\n" }, { "from": "human", "value": "\n= Event Data\n\n=== Event Sourcing\n\nTODO, link core event sourcing docs.\n\nApart from the recommended event store, Photon, Muon Java provides an in memory, simplified event store that only implements\n the core ingest and stream functionality of a muon compatible event store. You can read more about it link:InMemEventStore.html[here]\n\nThis is most useful during building tests.\n\n=== Event Based Testing\n\nIf you adopt the Event system (which requires an event store such as _Photon_), then you will design your system domains\nas a set of _Events_, and your business processes as a set of causal relationships between them.\n" }, { "from": "gpt", "value": "\n== Event Based Systems\n\n\n\n=== Event Sourcing\n\nTODO, link core event sourcing docs.\n\nApart from the recommended event store, Photon, Muon Java provides an in memory, simplified event store that only implements\n the core ingest and stream functionality of a muon compatible event store. You can read more about it link:InMemEventStore.html[here]\n\nThis is most useful during building tests.\n\n=== Event Based Testing\n\nIf you adopt the Event system (which requires an event store such as _Photon_), then you will design your system domains\nas a set of _Events_, and your business processes as a set of causal relationships between them.\n\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix Zookeeper link in documentation\n\nFixes gh-238\nFixes gh-246\n" }, { "from": "human", "value": "\ninclude::_attributes.adoc[]\n\nSpring Cloud Zookeeper provides http://zookeeper.apache.org/[Apache Zookeeper] integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper. The patterns provided include Service Discovery and Distributed Configuration.\n\n## Features\n\n* Service Discovery: instances can be registered with Zookeeper and clients can discover the instances using Spring-managed beans\n * Supports Ribbon, the client side load-balancer via Spring Cloud Netflix\n * Supports Zuul, a dynamic router and filter via Spring Cloud Netflix\n* Distributed Configuration: using Zookeeper as a data store\n\n## Quick Start\n\nAs long as Spring Cloud Zookeeper, http://curator.apache.org/[Apache Curator] and the Zookeeper Java Client are on the\nclasspath any Spring Boot application with `@EnableDiscoveryClient` will try to contact a Zookeeper\nagent on `localhost:2181` (the default value of\n`zookeeper.connectString`).\n\n```java\n@Configuration\n@EnableAutoConfiguration\n@EnableDiscoveryClient\n@RestController\npublic class Application {\n\n @RequestMapping(\"/\")\n public String home() {\n return \"Hello World\";\n }\n\n public static void main(String[] args) {\n SpringApplication.run(Application.class, args);\n }\n\n}\n```\n\nA local Zookeeper server must be running. See the http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html[Zookeeper documentation] on how to run a Zookeeper server.\n" }, { "from": "gpt", "value": "\ninclude::_attributes.adoc[]\n\nSpring Cloud Zookeeper provides http://zookeeper.apache.org/[Apache Zookeeper] integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper. The patterns provided include Service Discovery and Distributed Configuration.\n\n## Features\n\n* Service Discovery: instances can be registered with Zookeeper and clients can discover the instances using Spring-managed beans\n * Supports Ribbon, the client side load-balancer via Spring Cloud Netflix\n * Supports Zuul, a dynamic router and filter via Spring Cloud Netflix\n* Distributed Configuration: using Zookeeper as a data store\n\n## Quick Start\n\nAs long as Spring Cloud Zookeeper, http://curator.apache.org/[Apache Curator] and the Zookeeper Java Client are on the\nclasspath any Spring Boot application with `@EnableDiscoveryClient` will try to contact a Zookeeper\nagent on `localhost:2181` (the default value of\n`zookeeper.connectString`).\n\n```java\n@Configuration\n@EnableAutoConfiguration\n@EnableDiscoveryClient\n@RestController\npublic class Application {\n\n @RequestMapping(\"/\")\n public String home() {\n return \"Hello World\";\n }\n\n public static void main(String[] args) {\n SpringApplication.run(Application.class, args);\n }\n\n}\n```\n\nA local Zookeeper server must be running. See the http://zookeeper.apache.org/[Zookeeper documentation] on how to run a Zookeeper server.\n" } ]
asciidoc
chosen
[ { "from": "system", "value": "Fix link and snippet url\n" }, { "from": "human", "value": "= Using the Kafka controller\n:page-sidebar: apim_3_x_sidebar\n:page-permalink: apim/3.x/apim_publishersme_using_kafka.html\n:page-folder: apim/user-guide/publisher\n:page-layout: apim3x\n\n== Overview\n\nThis section describes the basic usage of the Kafka controller - producing and consuming messages.\n\n=== Producing messages\n\nUsing the HTTP `POST` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can send a message with the followig structure:\n\n[source,json]\n----\nhttps://apim-3-13-x-gateway.cloud.gravitee.io/kafka/messages\n{\n \"messages\": [\n {\n \"key\": \"key\",\n \"value\": {\n \"val1\": \"hello\"\n }\n }\n ]\n}\n----\n\n=== Consuming messages\n\nUsing the HTTP `GET` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can receive any available messages." }, { "from": "gpt", "value": "= Using the Kafka controller\n:page-sidebar: apim_3_x_sidebar\n:page-permalink: apim/3.x/apim_publisherguide_using_kafka.html\n:page-folder: apim/user-guide/publisher\n:page-layout: apim3x\n\n== Overview\n\nThis section describes the basic usage of the Kafka controller - producing and consuming messages.\n\n=== Producing messages\n\nUsing the HTTP `POST` command to the example endpoint `https://api.company.com/kafka/messages`, you can send a message with the followig structure:\n\n[source,json]\n----\nhttps://api.company.com/kafka/messages\n{\n \"messages\": [\n {\n \"key\": \"key\",\n \"value\": {\n \"val1\": \"hello\"\n }\n }\n ]\n}\n----\n\n=== Consuming messages\n\nUsing the HTTP `GET` command to the example endpoint `https://api.company.com/kafka/messages`, you can receive any available messages." } ]
asciidoc
chosen