conversations (listlengths 2-3) | source (stringclasses 1) | stack (stringclasses 1) |
---|---|---|
[
{
"from": "system",
"value": "Remove a comma in doc to make example a valid json.\n\nThis will help reader to do a hurry up copy-paste test.\n"
},
{
"from": "human",
"value": "[[mapping-dynamic-mapping]]\n== Dynamic Mapping\n\nDefault mappings allow to automatically apply generic mapping definition\nto types that do not have mapping pre defined. This is mainly done\nthanks to the fact that the\n<<mapping-object-type,object mapping>> and\nnamely the <<mapping-root-object-type,root\nobject mapping>> allow for schema-less dynamic addition of unmapped\nfields.\n\nThe default mapping definition is plain mapping definition that is\nembedded within the distribution:\n\n[source,js]\n--------------------------------------------------\n{\n \"_default_\" : {\n }\n}\n--------------------------------------------------\n\nPretty short, no? Basically, everything is defaulted, especially the\ndynamic nature of the root object mapping. The default mapping\ndefinition can be overridden in several manners. The simplest manner is\nto simply define a file called `default-mapping.json` and placed it\nunder the `config` directory (which can be configured to exist in a\ndifferent location). It can also be explicitly set using the\n`index.mapper.default_mapping_location` setting.\n\nThe dynamic creation of mappings for unmapped types can be completely\ndisabled by setting `index.mapper.dynamic` to `false`.\n\nAs an example, here is how we can change the default\n<<mapping-date-format,date_formats>> used in the\nroot and inner object types:\n\n[source,js]\n--------------------------------------------------\n{\n \"_default_\" : {\n \"date_formats\" : [\"yyyy-MM-dd\", \"dd-MM-yyyy\", \"date_optional_time\"],\n }\n}\n--------------------------------------------------\n"
},
{
"from": "gpt",
"value": "[[mapping-dynamic-mapping]]\n== Dynamic Mapping\n\nDefault mappings allow to automatically apply generic mapping definition\nto types that do not have mapping pre defined. This is mainly done\nthanks to the fact that the\n<<mapping-object-type,object mapping>> and\nnamely the <<mapping-root-object-type,root\nobject mapping>> allow for schema-less dynamic addition of unmapped\nfields.\n\nThe default mapping definition is plain mapping definition that is\nembedded within the distribution:\n\n[source,js]\n--------------------------------------------------\n{\n \"_default_\" : {\n }\n}\n--------------------------------------------------\n\nPretty short, no? Basically, everything is defaulted, especially the\ndynamic nature of the root object mapping. The default mapping\ndefinition can be overridden in several manners. The simplest manner is\nto simply define a file called `default-mapping.json` and placed it\nunder the `config` directory (which can be configured to exist in a\ndifferent location). It can also be explicitly set using the\n`index.mapper.default_mapping_location` setting.\n\nThe dynamic creation of mappings for unmapped types can be completely\ndisabled by setting `index.mapper.dynamic` to `false`.\n\nAs an example, here is how we can change the default\n<<mapping-date-format,date_formats>> used in the\nroot and inner object types:\n\n[source,js]\n--------------------------------------------------\n{\n \"_default_\" : {\n \"date_formats\" : [\"yyyy-MM-dd\", \"dd-MM-yyyy\", \"date_optional_time\"]\n }\n}\n--------------------------------------------------\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "[DOCS] Remove abbrevtitles for Asciidoctor migration\n"
},
{
"from": "human",
"value": "[[release-highlights]]\n= {es} Release Highlights\n++++\n<titleabbrev>Release Highlights</titleabbrev>\n++++\n\n[partintro]\n--\nThis section summarizes the most important changes in each release. For the \nfull list, see <<es-release-notes>> and <<breaking-changes>>. \n\n* <<release-highlights-6.7.0>>\n* <<release-highlights-6.6.0>>\n* <<release-highlights-6.5.0>>\n* <<release-highlights-6.4.0>>\n* <<release-highlights-6.3.0>>\n\n--\n\ninclude::highlights-6.7.0.asciidoc[]\ninclude::highlights-6.6.0.asciidoc[]\ninclude::highlights-6.5.0.asciidoc[]\ninclude::highlights-6.4.0.asciidoc[]\ninclude::highlights-6.3.0.asciidoc[]\n"
},
{
"from": "gpt",
"value": "[[release-highlights]]\n= Release Highlights\n\n[partintro]\n--\nThis section summarizes the most important changes in each release. For the \nfull list, see <<es-release-notes>> and <<breaking-changes>>. \n\n* <<release-highlights-6.7.0>>\n* <<release-highlights-6.6.0>>\n* <<release-highlights-6.5.0>>\n* <<release-highlights-6.4.0>>\n* <<release-highlights-6.3.0>>\n\n--\n\ninclude::highlights-6.7.0.asciidoc[]\ninclude::highlights-6.6.0.asciidoc[]\ninclude::highlights-6.5.0.asciidoc[]\ninclude::highlights-6.4.0.asciidoc[]\ninclude::highlights-6.3.0.asciidoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add note about link-local IP addresses\n\n- https://bugzilla.redhat.com/show_bug.cgi?id=1718389\n"
},
{
"from": "human",
"value": "[id=\"understanding-networking\"]\n= Understanding networking\ninclude::modules/common-attributes.adoc[]\n:context: understanding-networking\n\ntoc::[]\n\nKubernetes ensures that Pods are able to network with each other, and allocates\neach Pod an IP address from an internal network. This ensures all containers\nwithin the Pod behave as if they were on the same host. Giving each Pod its own\nIP address means that Pods can be treated like physical hosts or virtual\nmachines in terms of port allocation, networking, naming, service discovery,\nload balancing, application configuration, and migration.\n\ninclude::modules/nw-ne-openshift-dns.adoc[leveloffset=+1]\n"
},
{
"from": "gpt",
"value": "[id=\"understanding-networking\"]\n= Understanding networking\ninclude::modules/common-attributes.adoc[]\n:context: understanding-networking\n\ntoc::[]\n\nKubernetes ensures that Pods are able to network with each other, and allocates\neach Pod an IP address from an internal network. This ensures all containers\nwithin the Pod behave as if they were on the same host. Giving each Pod its own\nIP address means that Pods can be treated like physical hosts or virtual\nmachines in terms of port allocation, networking, naming, service discovery,\nload balancing, application configuration, and migration.\n\n[NOTE]\n====\nSome cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 `169.254.0.0/16` CIDR block.\n\nThis CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the `spec.hostNetwork` field in the Pod spec to `true`.\n\nIf you allow a Pod host network access, you grant the Pod privileged access to the underlying network infrastructure.\n====\n\ninclude::modules/nw-ne-openshift-dns.adoc[leveloffset=+1]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update What's New for 6.0\n"
},
{
"from": "human",
"value": "[[new]]\n= What's New in Spring Security 6.0\n\nSpring Security 6.0 provides a number of new features.\nBelow are the highlights of the release.\n\n== Breaking Changes\n\n* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.\nUse the OpenSaml 4 Support instead.\n* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.\nInstead use data storage to encrypt values.\n* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default\n* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`\n"
},
{
"from": "gpt",
"value": "[[new]]\n= What's New in Spring Security 6.0\n\nSpring Security 6.0 provides a number of new features.\nBelow are the highlights of the release.\n\n== Breaking Changes\n\n* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.\nUse the OpenSaml 4 Support instead.\n* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.\nInstead use data storage to encrypt values.\n* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default\n* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages\nReorganize imports\n* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`\n* https://github.com/spring-projects/spring-security/pull/11887[gh-11827] - Change default authority for `oauth2Login()`\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "GH32085: Add note to etcd encryption section\n"
},
{
"from": "human",
"value": "// Module included in the following assemblies:\n//\n// * security/encrypting-etcd.adoc\n// * post_installation_configuration/cluster-tasks.adoc\n\n[id=\"about-etcd_{context}\"]\n= About etcd encryption\n\nBy default, etcd data is not encrypted in {product-title}. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties.\n\nWhen you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:\n\n* Secrets\n* Config maps\n* Routes\n* OAuth access tokens\n* OAuth authorize tokens\n\nWhen you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.\n"
},
{
"from": "gpt",
"value": "// Module included in the following assemblies:\n//\n// * security/encrypting-etcd.adoc\n// * post_installation_configuration/cluster-tasks.adoc\n\n[id=\"about-etcd_{context}\"]\n= About etcd encryption\n\nBy default, etcd data is not encrypted in {product-title}. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties.\n\nWhen you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:\n\n* Secrets\n* Config maps\n* Routes\n* OAuth access tokens\n* OAuth authorize tokens\n\nWhen you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.\n\n[NOTE]\n====\nKeep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.\n====\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix link to milestone page\n"
},
{
"from": "human",
"value": "[[release-notes-5.7.0]]\n== 5.7.0\n\n*Date of Release:* β\n\n*Scope:* β\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult\nthe link:{junit5-repo}+/milestone/?closed=1+[5.7.0] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.7.0-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.7.0-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.7.0-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n"
},
{
"from": "gpt",
"value": "[[release-notes-5.7.0]]\n== 5.7.0\n\n*Date of Release:* β\n\n*Scope:* β\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult\nthe link:{junit5-repo}+/milestone/50?closed=1+[5.7.0] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.7.0-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.7.0-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.7.0-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix BibText reference (the author tag didn't didn't compile well in latex)\n"
},
{
"from": "human",
"value": "= Research\n:awestruct-description: Academic research for papers and articles.\n:awestruct-layout: normalBase\n:showtitle:\n\nOptaPlanner is a good base for metaheuristics research. Read about some of the advantages in\nhttp://www.orcomplete.com/research/geoffrey-de-smet/open-source-metaheuristics-research-on-drools-planner[this article].\nEspecially https://www.youtube.com/watch?v=JpcPEieU3Cg[OptaPlanner Benchmarker] and our big set of already implemented use cases,\nmake it easy to test your new algorithm objectively against the existing algorithms.\n\nIf you're doing academic or professional research on top of OptaPlanner, link:../community/team.html[let us know].\n\nTo reference OptaPlanner or the user manual, please use this BibTeX reference:\n\n----\n@manual{optaplanner,\n author = {Geoffrey De Smet et al},\n title = {OptaPlanner User Guide},\n organization = {Red Hat and the community},\n url = {https://www.optaplanner.org},\n note = {OptaPlanner is an open source constraint satisfaction solver in Java}\n}\n----\n"
},
{
"from": "gpt",
"value": "= Research\n:awestruct-description: Academic research for papers and articles.\n:awestruct-layout: normalBase\n:showtitle:\n\nOptaPlanner is a good base for metaheuristics research. Read about some of the advantages in\nhttp://www.orcomplete.com/research/geoffrey-de-smet/open-source-metaheuristics-research-on-drools-planner[this article].\nEspecially https://www.youtube.com/watch?v=JpcPEieU3Cg[OptaPlanner Benchmarker] and our big set of already implemented use cases,\nmake it easy to test your new algorithm objectively against the existing algorithms.\n\nIf you're doing academic or professional research on top of OptaPlanner, link:../community/team.html[let us know].\n\nTo reference OptaPlanner or the user manual, please use this BibTeX reference:\n\n----\n@manual{optaplanner,\n author = {De Smet, Geoffrey and open source contributors},\n title = {OptaPlanner User Guide},\n year = {2006},\n organization = {Red Hat, Inc. or third-party contributors},\n url = {https://www.optaplanner.org},\n note = {OptaPlanner is an open source constraint solver in Java}\n}\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "NMS-8550: Add description to reload changes in poll-outages.xml\n"
},
{
"from": "human",
"value": "\n// Allow GitHub image rendering\n:imagesdir: ../../../images\n\n[[ga-opennms-operation-daemon-config-files-pollerd]]\n==== Pollerd\n\n[options=\"header, autowidth\"]\n|===\n| Internal Daemon Name | Reload Event\n| _Pollerd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Pollerd'`\n|===\n\n.Pollerd configuration file overview\n[options=\"header, autowidth\"]\n|===\n| File | Restart Required | Reload Event | Description\n| `poller-configuration.xml` | yes | yes | Restart is required in case new monitors are created or removed.\n Reload Event loads changed configuration parameters of existing monitors.\n| `response-graph.properties` | no | no | Graph definition for response time graphs from monitors\n| `poll-outages.xml` | ? | ? | ?\n|===\n"
},
{
"from": "gpt",
"value": "\n// Allow GitHub image rendering\n:imagesdir: ../../../images\n\n[[ga-opennms-operation-daemon-config-files-pollerd]]\n==== Pollerd\n\n[options=\"header, autowidth\"]\n|===\n| Internal Daemon Name | Reload Event\n| _Pollerd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Pollerd'`\n|===\n\n.Pollerd configuration file overview\n[options=\"header, autowidth\"]\n|===\n| File | Restart Required | Reload Event | Description\n| `poller-configuration.xml` | yes | yes | Restart is required in case new monitors are created or removed.\n Reload Event loads changed configuration parameters of existing monitors.\n| `response-graph.properties` | no | no | Graph definition for response time graphs from monitors\n| `poll-outages.xml` | no | yes | Can be reloaded with `uei.opennms.org/internal/schedOutagesChanged`\n|===\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Introduce README and externalize common information via variables\n"
},
{
"from": "human",
"value": "= {project-name} - Using JWT RBAC\n\nThis guide explains how your {project-name} application can utilize MicroProfile JWT RBAC to provide\nsecured access to the JAX-RS endpoints.\n\n== TODO\n\n"
},
{
"from": "gpt",
"value": "= {project-name} - Using JWT RBAC\n\nThis guide explains how your {project-name} application can utilize MicroProfile JWT RBAC to provide\nsecured access to the JAX-RS endpoints.\n\n[cols=\"<m,<m,<2\",options=\"header\"]\n|===\n|Property Name|Default|Description\n|quarkus.jwt.enabled|true|Determine if the jwt extension is enabled.\n|quarkus.jwt.realm-name|Quarkus-JWT|Name to use for security realm.\n|quarkus.jwt.auth-mechanism|MP-JWT|Name to use for authentication mechanism\n|===\n\n== Solution\n\nWe recommend that you follow the instructions in the next sections and create the application step by step.\nHowever, you can skip right to the completed example.\n\nClone the Git repository: `git clone `, or download an {quickstart-url}/archive/master.zip[archive].\n\nThe solution is located in the `using-opentracing` {quickstart-url}[directory].\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "[DOCS] Remove references to changelog and to highlights\n\nhighlights reference the changelog and it currently breaks the docs.\n\nThis aligns changes in master with the ones made in other branches.\n"
},
{
"from": "human",
"value": "\ninclude::how-to.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\ninclude::release-notes/highlights.asciidoc[]\n\ninclude::release-notes.asciidoc[]"
},
{
"from": "gpt",
"value": "\ninclude::how-to.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Align title with docs.asciidoctor.org site requirements\n"
},
{
"from": "human",
"value": "= Asciidoctor Maven User Manual\n\nThe Asciidoctor Maven Plugin is the official way to convert your {uri-asciidoc}[AsciiDoc] documentation using {uri-asciidoctor}[Asciidoctor] from an {uri-maven}[Apache Maven] build.\n\nThe project main goal is to offer a thin layer on top of https://github.com/asciidoctor/asciidoctorj[Asciidoctorj] following as much as possible to common Maven practices.\n\nThe conversion can happen in 2 flavors:\n\n. As a xref:plugin:introduction.adoc[Maven plugin]: AsciiDoc files are converted at full Asciidoctor power independently from Maven site,\n\n. As a xref:plugin:introduction.adoc[Maven site module]: AsciiDoc files are integrated with https://maven.apache.org/doxia/[Maven Doxia] tools, however, with a few limitations.\n"
},
{
"from": "gpt",
"value": "= Asciidoctor Maven Tools Documentation\n:navtitle: Introduction\n\nThe Asciidoctor Maven Plugin is the official way to convert your {uri-asciidoc}[AsciiDoc] documentation using {uri-asciidoctor}[Asciidoctor] from an {uri-maven}[Apache Maven] build.\n\nThe project main goal is to offer a thin layer on top of https://github.com/asciidoctor/asciidoctorj[Asciidoctorj] following as much as possible to common Maven practices.\n\nThe conversion can happen in 2 flavors:\n\n. As a xref:plugin:introduction.adoc[Maven plugin]: AsciiDoc files are converted at full Asciidoctor power independently from Maven site,\n\n. As a xref:plugin:introduction.adoc[Maven site module]: AsciiDoc files are integrated with https://maven.apache.org/doxia/[Maven Doxia] tools, however, with a few limitations.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Use the guides without toc layout\n"
},
{
"from": "human",
"value": "////\nThis guide is maintained in the main Quarkus repository\nand pull requests should be submitted there:\nhttps://github.com/quarkusio/quarkus/tree/master/docs/src/main/asciidoc\n////\n= Quarkus - All configuration options\n\ninclude::./attributes.adoc[]\n\ninclude::{generated-dir}/config/all-config.adoc[opts=optional]\n"
},
{
"from": "gpt",
"value": "---\nlayout: guides-configuration-reference\n---\n////\nThis guide is maintained in the main Quarkus repository\nand pull requests should be submitted there:\nhttps://github.com/quarkusio/quarkus/tree/master/docs/src/main/asciidoc\n////\n= Quarkus - All configuration options\n\ninclude::./attributes.adoc[]\n\ninclude::{generated-dir}/config/all-config.adoc[opts=optional]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update documentation version and date\n"
},
{
"from": "human",
"value": "= tick\nMalcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]>\n0.4-1, 2018-02-09\n:toc: left\n:toclevels: 4\n:docinfo: shared\n:sectnums: true\n:sectnumlevels: 2\n:xrefstyle: short\n:nofooter:\n\n:leveloffset: +1\n\ninclude::intro.adoc[]\n\ninclude::setup.adoc[]\n\ninclude::api.adoc[]\n\ninclude::dates.adoc[]\n\ninclude::durations.adoc[]\n\ninclude::clocks.adoc[]\n\ninclude::intervals.adoc[]\n\ninclude::calendars.adoc[]\n\ninclude::schedules.adoc[]\n\ninclude::formatting.adoc[]\n\ninclude::cookbook/index.adoc[]\n\ninclude::bibliography.adoc[]\n"
},
{
"from": "gpt",
"value": "= tick\nMalcolm Sparks <[email protected]>; Henry Widd; Johanna Antonelli <[email protected]>\n0.4.5-alpha, 2018-10-10\n:toc: left\n:toclevels: 4\n:docinfo: shared\n:sectnums: true\n:sectnumlevels: 2\n:xrefstyle: short\n:nofooter:\n\n:leveloffset: +1\n\ninclude::intro.adoc[]\n\ninclude::setup.adoc[]\n\ninclude::api.adoc[]\n\ninclude::dates.adoc[]\n\ninclude::durations.adoc[]\n\ninclude::clocks.adoc[]\n\ninclude::intervals.adoc[]\n\ninclude::calendars.adoc[]\n\ninclude::schedules.adoc[]\n\ninclude::formatting.adoc[]\n\ninclude::cookbook/index.adoc[]\n\ninclude::bibliography.adoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add docs for how to use the capsule command\n\n"
},
{
"from": "human",
"value": "= Creating an Uberjar\n\nEdge provides a script for running pack to build an uberjar.\nTo use it, you can simply run the below from your project sub-directory (the same folder as your deps.edn).\n\n[source,shell]\n----\n$ ../bin/onejar -A:prod --args '-m edge.main' project.jar\n----\n\nThe `-A:prod` indicates an alias you would like to have its `:extra-deps` and `:paths` included in your resulting jar.\n`--args` are default arguments to your jar, in this case we are specifying that the application should run edge.main, part of the edge production modules.\n\nYou can run this jar in production quite easily:\n\n[source,shell]\n----\n$ java -Xmx1G -jar project.jar\n----\n\nWe recommend that you specify the memory usage of your JVM, as the default on Java 6+ is usually insufficient for hosts run only this JVM process.\nA rule of thumb is to use 2/3rds of the memory of your host.\n"
},
{
"from": "gpt",
"value": "= Creating an Uberjar\n\n== OneJar\n\nEdge provides a script for running pack to build an uberjar.\nTo use it, you can simply run the below from your project sub-directory (the same folder as your deps.edn).\n\n[source,shell]\n----\n$ ../bin/onejar -A:prod --args '-m edge.main' project.jar\n----\n\nThe `-A:prod` indicates an alias you would like to have its `:extra-deps` and `:paths` included in your resulting jar.\n`--args` are default arguments to your jar, in this case we are specifying that the application should run edge.main, part of the edge production modules.\n\n== Capsule \n\nBuild a capsule uberjar using this command: \n\n[source,shell]\n----\n$ ../bin/capsule -m edge.main -e 'target/prod' -A:prod project.jar\n----\n\nIf you are using clojurescript in your project make sure you run this command first: \n\n[source,shell]\n----\n$ clojure -A:build:build/once\n----\n\nIt will build the clojurescript files and put them in the correct folder to be included in the uberjar. \n\n== Running the jar\n\nYou can run the produced jar in production quite easily:\n\n[source,shell]\n----\n$ java -Xmx1G -jar project.jar\n----\n\nWe recommend that you specify the memory usage of your JVM, as the default on Java 6+ is usually insufficient for hosts run only this JVM process.\nA rule of thumb is to use 2/3rds of the memory of your host.\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Examples: Remove legacy LaTeX math delimiters in latexmath:[]\n\n$..$ in latexmath:[] macro are long time unnecessary.\nAsciidoctor 2.0.0+ strips them if used.\n\nSee https://github.com/asciidoctor/asciidoctor/commit/50e1c2533838648cdd5b8db613daa5622ed1cbdd\n"
},
{
"from": "human",
"value": "// .basic\n[why]#chunky bacon#\n\n// .emphasis\n_chunky bacon_\n\n// .emphasis-with-role\n[why]_chunky bacon_\n\n// .strong\n*chunky bacon*\n\n// .strong-with-role\n[why]*chunky bacon*\n\n// .monospaced\n`hello world!`\n\n// .monospaced-with-role\n[why]`hello world!`\n\n// .superscript\n^super^chunky bacon\n\n// .superscript-with-role\n[why]^super^chunky bacon\n\n// .subscript\n~sub~chunky bacon\n\n// .subscript-with-role\n[why]~sub~chunky bacon\n\n// .mark\n#chunky bacon#\n\n// .double\n\"`chunky bacon`\"\n\n// .double-with-role\n[why]\"`chunky bacon`\"\n\n// .single\n'`chunky bacon`'\n\n// .single-with-role\n[why]'`chunky bacon`'\n\n// .asciimath\nasciimath:[sqrt(4) = 2]\n\n// .latexmath\nlatexmath:[$C = \\alpha + \\beta Y^{\\gamma} + \\epsilon$]\n\n// .with-id\n[#why]_chunky bacon_\n\n// .mixed-monospace-bold-italic\n`*_monospace bold italic phrase_*` and le``**__tt__**``ers\n"
},
{
"from": "gpt",
"value": "// .basic\n[why]#chunky bacon#\n\n// .emphasis\n_chunky bacon_\n\n// .emphasis-with-role\n[why]_chunky bacon_\n\n// .strong\n*chunky bacon*\n\n// .strong-with-role\n[why]*chunky bacon*\n\n// .monospaced\n`hello world!`\n\n// .monospaced-with-role\n[why]`hello world!`\n\n// .superscript\n^super^chunky bacon\n\n// .superscript-with-role\n[why]^super^chunky bacon\n\n// .subscript\n~sub~chunky bacon\n\n// .subscript-with-role\n[why]~sub~chunky bacon\n\n// .mark\n#chunky bacon#\n\n// .double\n\"`chunky bacon`\"\n\n// .double-with-role\n[why]\"`chunky bacon`\"\n\n// .single\n'`chunky bacon`'\n\n// .single-with-role\n[why]'`chunky bacon`'\n\n// .asciimath\nasciimath:[sqrt(4) = 2]\n\n// .latexmath\nlatexmath:[C = \\alpha + \\beta Y^{\\gamma} + \\epsilon]\n\n// .with-id\n[#why]_chunky bacon_\n\n// .mixed-monospace-bold-italic\n`*_monospace bold italic phrase_*` and le``**__tt__**``ers\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix storage class CSI note in Creating SC-EBS/GCE\n\nIn the portal, when I select provisioner drop-down, I do not see an option for <foo>.csi.<bar>.com. The instruction in this note to select the csi option for provisioning is not a valid option."
},
{
"from": "human",
"value": "// Be sure to set the :StorageClass: and :Provisioner: value in each assembly\n// on the line before the include statement for this module. For example, to\n// set the StorageClass value to \"AWS EBS\", add the following line to the\n// assembly:\n// :StorageClass: AWS EBS\n// Module included in the following assemblies:\n//\n// * storage/persistent_storage-aws.adoc\n\n\n\n[id=\"storage-create-{StorageClass}-storage-class_{context}\"]\n= Creating the {StorageClass} storage class\n\nStorage classes are used to differentiate and delineate storage levels and\nusages. By defining a storage class, users can obtain dynamically provisioned\npersistent volumes.\n\n.Procedure\n\n. In the {product-title} console, click *Storage* -> *Storage Classes*.\n\n. In the storage class overview, click *Create Storage Class*.\n\n. Define the desired options on the page that appears.\n\n.. Enter a name to reference the storage class.\n\n.. Enter an optional description.\n\n.. Select the reclaim policy.\n\n.. Select `{Provisioner}` from the drop down list.\n+\n\n[NOTE]\n====\nFor Container Storage Interface (CSI) provisioning, select `ebs.csi.aws.com`.\n====\n\n.. Enter additional parameters for the storage class as desired.\n\n. Click *Create* to create the storage class.\n\n// Undefine {StorageClass} attribute, so that any mistakes are easily spotted\n:!StorageClass:\n"
},
{
"from": "gpt",
"value": "// Be sure to set the :StorageClass: and :Provisioner: value in each assembly\n// on the line before the include statement for this module. For example, to\n// set the StorageClass value to \"AWS EBS\", add the following line to the\n// assembly:\n// :StorageClass: AWS EBS\n// Module included in the following assemblies:\n//\n// * storage/persistent_storage-aws.adoc\n\n\n\n[id=\"storage-create-{StorageClass}-storage-class_{context}\"]\n= Creating the {StorageClass} storage class\n\nStorage classes are used to differentiate and delineate storage levels and\nusages. By defining a storage class, users can obtain dynamically provisioned\npersistent volumes.\n\n.Procedure\n\n. In the {product-title} console, click *Storage* -> *Storage Classes*.\n\n. In the storage class overview, click *Create Storage Class*.\n\n. Define the desired options on the page that appears.\n\n.. Enter a name to reference the storage class.\n\n.. Enter an optional description.\n\n.. Select the reclaim policy.\n\n.. Select `{Provisioner}` from the drop down list.\n\n.. Enter additional parameters for the storage class as desired.\n\n. Click *Create* to create the storage class.\n\n// Undefine {StorageClass} attribute, so that any mistakes are easily spotted\n:!StorageClass:\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add Dave Syer to Authors\n\nMuch of the content from architecture is borrowed, with permission\nfrom Dave.\n\nIssueh gh-7955\n"
},
{
"from": "human",
"value": "= Spring Security Reference\nBen Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings\n:include-dir: _includes\n:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/\n:source-indent: 0\n:tabsize: 4\n:toc: left\n\n// FIXME: Add links for authentication, authorization, common attacks\nSpring Security is a framework that provides authentication, authorization, and protection against common attacks.\n// FIXME: Add links for imperative and reactive applications\nWith first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.\n\ninclude::{include-dir}/about/index.adoc[]\n\ninclude::{include-dir}/servlet/index.adoc[]\n\ninclude::{include-dir}/reactive/index.adoc[]\n\n\n"
},
{
"from": "gpt",
"value": "= Spring Security Reference\nBen Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer\n:include-dir: _includes\n:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/\n:source-indent: 0\n:tabsize: 4\n:toc: left\n\n// FIXME: Add links for authentication, authorization, common attacks\nSpring Security is a framework that provides authentication, authorization, and protection against common attacks.\n// FIXME: Add links for imperative and reactive applications\nWith first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.\n\ninclude::{include-dir}/about/index.adoc[]\n\ninclude::{include-dir}/servlet/index.adoc[]\n\ninclude::{include-dir}/reactive/index.adoc[]\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Include REST API docs that were already produced.\n"
},
{
"from": "human",
"value": "[[rest-api-node-properties]]\n== Node properties ==\n\ninclude::set-property-on-node.asciidoc[]\n\ninclude::update-node-properties.asciidoc[]\n\ninclude::get-properties-for-node.asciidoc[]\n\ninclude::property-values-can-not-be-null.asciidoc[]\n\ninclude::property-values-can-not-be-nested.asciidoc[]\n\ninclude::delete-all-properties-from-node.asciidoc[]\n\ninclude::delete-a-named-property-from-a-node.asciidoc[]\n\n"
},
{
"from": "gpt",
"value": "[[rest-api-node-properties]]\n== Node properties ==\n\ninclude::set-property-on-node.asciidoc[]\n\ninclude::update-node-properties.asciidoc[]\n\ninclude::get-properties-for-node.asciidoc[]\n\ninclude::get-property-for-node.asciidoc[]\n\ninclude::property-values-can-not-be-null.asciidoc[]\n\ninclude::property-values-can-not-be-nested.asciidoc[]\n\ninclude::delete-all-properties-from-node.asciidoc[]\n\ninclude::delete-a-named-property-from-a-node.asciidoc[]\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Include shared/attributes.asciidoc directly from docs master\n\nOriginal commit: elastic/x-pack-elasticsearch@f3a0828c5d64650b855e24c46d2c1abc06ca05a1\n"
},
{
"from": "human",
"value": "[[elasticsearch-reference]]\n= Elasticsearch Reference\n\n:include-xpack: true\n:xes-repo-dir: {docdir}\n:es-repo-dir: {docdir}/../../../../elasticsearch/docs\n:es-test-dir: {docdir}/../../../../elasticsearch/docs/src/test\n:plugins-examples-dir: {docdir}/../../../../elasticsearch/plugins/examples\n:docs-dir: {docdir}/../../../../docs\n\ninclude::{es-repo-dir}/Versions.asciidoc[]\n\ninclude::{es-repo-dir}/reference/index-shared1.asciidoc[]\n\n:edit_url!:\ninclude::setup-xes.asciidoc[]\n\n:edit_url:\ninclude::{es-repo-dir}/reference/index-shared2.asciidoc[]\n\n:edit_url!:\ninclude::rest-api/index.asciidoc[]\n\n:edit_url:\ninclude::{es-repo-dir}/reference/index-shared3.asciidoc[]\n"
},
{
"from": "gpt",
"value": "[[elasticsearch-reference]]\n= Elasticsearch Reference\n\n:include-xpack: true\n:xes-repo-dir: {docdir}\n:es-repo-dir: {docdir}/../../../../elasticsearch/docs\n:es-test-dir: {docdir}/../../../../elasticsearch/docs/src/test\n:plugins-examples-dir: {docdir}/../../../../elasticsearch/plugins/examples\n\ninclude::{es-repo-dir}/Versions.asciidoc[]\n\ninclude::{es-repo-dir}/reference/index-shared1.asciidoc[]\n\n:edit_url!:\ninclude::setup-xes.asciidoc[]\n\n:edit_url:\ninclude::{es-repo-dir}/reference/index-shared2.asciidoc[]\n\n:edit_url!:\ninclude::rest-api/index.asciidoc[]\n\n:edit_url:\ninclude::{es-repo-dir}/reference/index-shared3.asciidoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix Java API documentation for indexed scripts\n"
},
{
"from": "human",
"value": "[[indexed-scripts]]\n== Indexed Scripts API\n\nThe indexed script API allows one to interact with scripts and templates\nstored in an elasticsearch index. It can be used to create, update, get, \nand delete indexed scripts and templates.\n\n[source,java]\n--------------------------------------------------\nPutIndexedScriptResponse = client.preparePutIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\t\t\n\t\t\t .setId(\"script1\") \n\t\t\t .setSource(\"_score * doc['my_numeric_field'].value\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n\nGetIndexedScriptResponse = client.prepareGetIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\n\t\t\t .setId(\"script1\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n\nDeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\n\t\t\t .setId(\"script1\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n--------------------------------------------------\n\nTo store templates simply use \"mustache\" for the scriptLang.\n\n=== Script Language\n\nThe API allows one to set the language of the indexed script being \ninteracted with. If one is not provided the default scripting language\nwill be used."
},
{
"from": "gpt",
"value": "[[indexed-scripts]]\n== Indexed Scripts API\n\nThe indexed script API allows one to interact with scripts and templates\nstored in an elasticsearch index. It can be used to create, update, get, \nand delete indexed scripts and templates.\n\n[source,java]\n--------------------------------------------------\nPutIndexedScriptResponse = client.preparePutIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\t\t\n\t\t\t .setId(\"script1\") \n\t\t\t .setSource(\"script\", \"_score * doc['my_numeric_field'].value\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n\nGetIndexedScriptResponse = client.prepareGetIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\n\t\t\t .setId(\"script1\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n\nDeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()\n\t\t\t .setScriptLang(\"groovy\")\n\t\t\t .setId(\"script1\")\n\t\t\t .execute()\n\t\t\t .actionGet();\n--------------------------------------------------\n\nTo store templates simply use \"mustache\" for the scriptLang.\n\n=== Script Language\n\nThe API allows one to set the language of the indexed script being \ninteracted with. If one is not provided the default scripting language\nwill be used."
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add Link to DispatcherServlet in Filter Review Doc\n\nCloses gh-8036\n"
},
{
"from": "human",
"value": "[[servlet-filters-review]]\n= A Review of ``Filter``s\n\nSpring Security's Servlet support is based on Servlet ``Filter``s, so it is helpful to look at the role of ``Filter``s generally first.\nThe picture below shows the typical layering of the handlers for a single HTTP request.\n\n.FilterChain\n[[servlet-filterchain-figure]]\nimage::{figures}/filterchain.png[]\n\nThe client sends a request to the application, and the container creates a `FilterChain` which contains the ``Filter``s and `Servlet` that should process the `HttpServletRequest` based on the path of the request URI.\nAt most one `Servlet` can handle a single `HttpServletRequest` and `HttpServletResponse`.\nHowever, more than one `Filter` can be used to:\n\n* Prevent downstream ``Filter``s or the `Servlet` from being invoked.\nIn this instance the `Filter` will typically write the `HttpServletResponse`.\n* Modify the `HttpServletRequest` or `HttpServletResponse` used by the downstream ``Filter``s and `Servlet`\n\nThe power of the `Filter` comes from the `FilterChain` that is passed into it.\n\n.`FilterChain` Usage Example\n===\n[source,java]\n----\npublic void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) {\n\t// do something before the rest of the application\n chain.doFilter(request, response); // invoke the rest of the application\n // do something after the rest of the application\n}\n----\n===\n\nSince a `Filter` only impacts downstream ``Filter``s and the `Servlet`, the order each `Filter` is invoked is extremely important.\n"
},
{
"from": "gpt",
"value": "[[servlet-filters-review]]\n= A Review of ``Filter``s\n\nSpring Security's Servlet support is based on Servlet ``Filter``s, so it is helpful to look at the role of ``Filter``s generally first.\nThe picture below shows the typical layering of the handlers for a single HTTP request.\n\n.FilterChain\n[[servlet-filterchain-figure]]\nimage::{figures}/filterchain.png[]\n\nThe client sends a request to the application, and the container creates a `FilterChain` which contains the ``Filter``s and `Servlet` that should process the `HttpServletRequest` based on the path of the request URI.\nIn a Spring MVC application the `Servlet` is an instance of https://docs.spring.io/spring-security/site/docs/current-SNAPSHOT/reference/html5/#servlet-filters-review[`DispatcherServlet`].\nAt most one `Servlet` can handle a single `HttpServletRequest` and `HttpServletResponse`.\nHowever, more than one `Filter` can be used to:\n\n* Prevent downstream ``Filter``s or the `Servlet` from being invoked.\nIn this instance the `Filter` will typically write the `HttpServletResponse`.\n* Modify the `HttpServletRequest` or `HttpServletResponse` used by the downstream ``Filter``s and `Servlet`\n\nThe power of the `Filter` comes from the `FilterChain` that is passed into it.\n\n.`FilterChain` Usage Example\n===\n[source,java]\n----\npublic void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) {\n\t// do something before the rest of the application\n chain.doFilter(request, response); // invoke the rest of the application\n // do something after the rest of the application\n}\n----\n===\n\nSince a `Filter` only impacts downstream ``Filter``s and the `Servlet`, the order each `Filter` is invoked is extremely important.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add Eleftheria Stein to Reference Authors\n\nCloses gh-9762\n"
},
{
"from": "human",
"value": "= Spring Security Reference\nBen Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer\n:include-dir: _includes\n:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/\n:source-indent: 0\n:tabsize: 4\n:toc: left\n\n// FIXME: Add links for authentication, authorization, common attacks\nSpring Security is a framework that provides authentication, authorization, and protection against common attacks.\n// FIXME: Add links for imperative and reactive applications\nWith first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.\n\ninclude::{include-dir}/about/index.adoc[]\n\ninclude::{include-dir}/servlet/index.adoc[]\n\ninclude::{include-dir}/reactive/index.adoc[]\n\n\n"
},
{
"from": "gpt",
"value": "= Spring Security Reference\nBen Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer; Eleftheria Stein\n:include-dir: _includes\n:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/\n:source-indent: 0\n:tabsize: 4\n:toc: left\n\n// FIXME: Add links for authentication, authorization, common attacks\nSpring Security is a framework that provides authentication, authorization, and protection against common attacks.\n// FIXME: Add links for imperative and reactive applications\nWith first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.\n\ninclude::{include-dir}/about/index.adoc[]\n\ninclude::{include-dir}/servlet/index.adoc[]\n\ninclude::{include-dir}/reactive/index.adoc[]\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix index of a list item\n"
},
{
"from": "human",
"value": "\n\n\nDuring its life a Wicket component goes through the following stages:\n\n1. *Initialization:* a component is instantiated and initialized by Wicket.\n2. *Rendering:* components are prepared for rendering and generate markup. If a component contains children (i.e. is a subclass of _MarkupContainer_) their rendering result is included in the resulting markup.\n3. *Removed:* this stage is triggered when a component is explicitly removed from its component hierarchy, i.e. when its parent invokes _remove(component)_ on it. This stage is facultative and is never triggered for pages.\n3. *Detached:* after request processing has ended all components are notified to detach any state that is no longer needed. \n\nThe following picture shows the state diagram of component lifecycle:\n\nimage::../img/component-lifecycle.png[]\n\nOnce a component has been removed it could be added again to a container, but the initialization stage won't be executed again - it is easier to just create a new component instance instead.\n\nNOTE: If you read the JavaDoc of class _Component_ you will find a more detailed description of component lifecycle.\nHowever this description introduces some advanced topics we didn't covered yet hence, to avoid confusion, in this chapter some details have been omitted and they will be covered later in the next chapters. \n\nFor now you can consider just the simplified version of the lifecycle described above.\n\n"
},
{
"from": "gpt",
"value": "\n\n\nDuring its life a Wicket component goes through the following stages:\n\n1. *Initialization:* a component is instantiated and initialized by Wicket.\n2. *Rendering:* components are prepared for rendering and generate markup. If a component contains children (i.e. is a subclass of _MarkupContainer_) their rendering result is included in the resulting markup.\n3. *Removed:* this stage is triggered when a component is explicitly removed from its component hierarchy, i.e. when its parent invokes _remove(component)_ on it. This stage is facultative and is never triggered for pages.\n4. *Detached:* after request processing has ended all components are notified to detach any state that is no longer needed.\n\nThe following picture shows the state diagram of component lifecycle:\n\nimage::../img/component-lifecycle.png[]\n\nOnce a component has been removed it could be added again to a container, but the initialization stage won't be executed again - it is easier to just create a new component instance instead.\n\nNOTE: If you read the JavaDoc of class _Component_ you will find a more detailed description of component lifecycle.\nHowever this description introduces some advanced topics we didn't covered yet hence, to avoid confusion, in this chapter some details have been omitted and they will be covered later in the next chapters. \n\nFor now you can consider just the simplified version of the lifecycle described above.\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "HZN-325: Fix apt-get install command for Cassandra\n\nFixed apt-get install command and added apt-get update after installing Cassandra repository.\n"
},
{
"from": "human",
"value": "\n// Allow GitHub image rendering\n:imagesdir: ../../images\n\n[[gi-install-cassandra-debian]]\n==== Installing on Debian-based systems\n\nThis section describes how to install the latest _Cassandra 2.1.x_ release on a _Debian_ based systems for _Newts_.\nThe first steps add the _DataStax_ community repository and install the required _GPG Key_ to verify the integrity of the _DEB packages_.\nInstallation of the package is done with _apt_ and the _Cassandra_ service is added to the run level configuration.\n\nNOTE: This description is build on _Debian 8_ and _Ubuntu 14.04 LTS_.\n\n.Add DataStax repository\n[source, bash]\n----\nvi /etc/apt/sources.list.d/cassandra.sources.list\n----\n\n.Content of the cassandra.sources.list file\n[source, bash]\n----\ndeb http://debian.datastax.com/community stable main\n----\n\n.Install GPG key to verify DEB packages\n[source, bash]\n----\nwget -O - http://debian.datastax.com/debian/repo_key | apt-key add -\n----\n\n.Install latest Cassandra 2.1.x package\n[source, bash]\n----\ndsc21=2.1.8-1 cassandra=2.1.8\n----\n\nThe _Cassandra_ service is added to the run level configuration and is automatically started after installing the package.\n\nTIP: Verify if the _Cassandra_ service is automatically started after rebooting the server.\n"
},
{
"from": "gpt",
"value": "\n// Allow GitHub image rendering\n:imagesdir: ../../images\n\n[[gi-install-cassandra-debian]]\n==== Installing on Debian-based systems\n\nThis section describes how to install the latest _Cassandra 2.1.x_ release on a _Debian_ based systems for _Newts_.\nThe first steps add the _DataStax_ community repository and install the required _GPG Key_ to verify the integrity of the _DEB packages_.\nInstallation of the package is done with _apt_ and the _Cassandra_ service is added to the run level configuration.\n\nNOTE: This description is build on _Debian 8_ and _Ubuntu 14.04 LTS_.\n\n.Add DataStax repository\n[source, bash]\n----\nvi /etc/apt/sources.list.d/cassandra.sources.list\n----\n\n.Content of the cassandra.sources.list file\n[source, bash]\n----\ndeb http://debian.datastax.com/community stable main\n----\n\n.Install GPG key to verify DEB packages\n[source, bash]\n----\nwget -O - http://debian.datastax.com/debian/repo_key | apt-key add -\n----\n\n.Install latest Cassandra 2.1.x package\n[source, bash]\n----\napt-get update\napt-get install dsc21=2.1.8-1 cassandra=2.1.8\n----\n\nThe _Cassandra_ service is added to the run level configuration and is automatically started after installing the package.\n\nTIP: Verify if the _Cassandra_ service is automatically started after rebooting the server.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update Java documentation for 5.0\n\nFix after review\n"
},
{
"from": "human",
"value": "[[java-aggs-bucket-histogram]]\n==== Histogram Aggregation\n\nHere is how you can use\n{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation]\nwith Java API.\n\n\n===== Prepare aggregation request\n\nHere is an example on how to create the aggregation request:\n\n[source,java]\n--------------------------------------------------\nAggregationBuilder aggregation =\n AggregationBuilders\n .histogram(\"agg\")\n .field(\"height\")\n .interval(1);\n--------------------------------------------------\n\n\n===== Use aggregation response\n\nImport Aggregation definition classes:\n\n[source,java]\n--------------------------------------------------\nimport org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n--------------------------------------------------\n\n[source,java]\n--------------------------------------------------\n// sr is here your SearchResponse object\nHistogram agg = sr.getAggregations().get(\"agg\");\n\n// For each entry\nfor (Histogram.Bucket entry : agg.getBuckets()) {\n Double key = (Double) entry.getKey(); // Key\n long docCount = entry.getDocCount(); // Doc count\n\n logger.info(\"key [{}], doc_count [{}]\", key, docCount);\n}\n--------------------------------------------------\n"
},
{
"from": "gpt",
"value": "[[java-aggs-bucket-histogram]]\n==== Histogram Aggregation\n\nHere is how you can use\n{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation]\nwith Java API.\n\n\n===== Prepare aggregation request\n\nHere is an example on how to create the aggregation request:\n\n[source,java]\n--------------------------------------------------\nAggregationBuilder aggregation =\n AggregationBuilders\n .histogram(\"agg\")\n .field(\"height\")\n .interval(1);\n--------------------------------------------------\n\n\n===== Use aggregation response\n\nImport Aggregation definition classes:\n\n[source,java]\n--------------------------------------------------\nimport org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n--------------------------------------------------\n\n[source,java]\n--------------------------------------------------\n// sr is here your SearchResponse object\nHistogram agg = sr.getAggregations().get(\"agg\");\n\n// For each entry\nfor (Histogram.Bucket entry : agg.getBuckets()) {\n Number key = (Number) entry.getKey(); // Key\n long docCount = entry.getDocCount(); // Doc count\n\n logger.info(\"key [{}], doc_count [{}]\", key, docCount);\n}\n--------------------------------------------------\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix Typo in Reference Docs\n\nFixes gh-6076\n"
},
{
"from": "human",
"value": "[[jackson]]\n=== Jackson Support\n\nSpring Security has added Jackson Support for persisting Spring Security related classes.\nThis can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).\n\nTo use it, register the `JacksonJacksonModules.getModules(ClassLoader)` as http://wiki.fasterxml.com/JacksonFeatureModules[Jackson Modules].\n\n[source,java]\n----\nObjectMapper mapper = new ObjectMapper();\nClassLoader loader = getClass().getClassLoader();\nList<Module> modules = SecurityJackson2Modules.getModules(loader);\nmapper.registerModules(modules);\n\n// ... use ObjectMapper as normally ...\nSecurityContext context = new SecurityContextImpl();\n// ...\nString json = mapper.writeValueAsString(context);\n----\n"
},
{
"from": "gpt",
"value": "[[jackson]]\n=== Jackson Support\n\nSpring Security has added Jackson Support for persisting Spring Security related classes.\nThis can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).\n\nTo use it, register the `SecurityJackson2Modules.getModules(ClassLoader)` as http://wiki.fasterxml.com/JacksonFeatureModules[Jackson Modules].\n\n[source,java]\n----\nObjectMapper mapper = new ObjectMapper();\nClassLoader loader = getClass().getClassLoader();\nList<Module> modules = SecurityJackson2Modules.getModules(loader);\nmapper.registerModules(modules);\n\n// ... use ObjectMapper as normally ...\nSecurityContext context = new SecurityContextImpl();\n// ...\nString json = mapper.writeValueAsString(context);\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove XPackExtension break from 7.0 it's ported to 6.3\n\nOriginal commit: elastic/x-pack-elasticsearch@5e8ebda481704dfc95fb48bc948558eabd2281b1\n"
},
{
"from": "human",
"value": "[role=\"xpack\"]\n[[breaking-changes-xes]]\n= {xpack} Breaking Changes\n\n[partintro]\n--\nThis section summarizes the changes that you need to be aware of when migrating\nyour application from one version of {xpack} to another.\n\n* <<breaking-7.0.0-xes>>\n\nSee also:\n\n* <<breaking-changes,{es} Breaking Changes>>\n* {kibana-ref}/breaking-changes-xkb.html[{kib} {xpack} Breaking Changes]\n* {logstash-ref}/breaking-changes-xls.html[Logstash {xpack} Breaking Changes]\n\n--\n\n[role=\"xpack\"]\n[[breaking-7.0.0-xes]]\n== {xpack} Breaking changes in 7.0.0\n\n\nMachine Learning::\n* The `max_running_jobs` node property is removed in this release. Use the\n`xpack.ml.max_open_jobs` setting instead. For more information, <<ml-settings>>.\n\nSecurity::\n* The fields returned as part of the mappings section by get index, get\nmappings, get field mappings and field capabilities API are now only the ones\nthat the user is authorized to access in case field level security is enabled.\n\n* The legacy `XPackExtension` extension mechanism has been removed and replaced\nwith an SPI based extension mechanism that is installed and built as an elasticsearch\nplugin.\n\nSee also:\n\n* <<breaking-changes-7.0,{es} Breaking Changes>>\n"
},
{
"from": "gpt",
"value": "[role=\"xpack\"]\n[[breaking-changes-xes]]\n= {xpack} Breaking Changes\n\n[partintro]\n--\nThis section summarizes the changes that you need to be aware of when migrating\nyour application from one version of {xpack} to another.\n\n* <<breaking-7.0.0-xes>>\n\nSee also:\n\n* <<breaking-changes,{es} Breaking Changes>>\n* {kibana-ref}/breaking-changes-xkb.html[{kib} {xpack} Breaking Changes]\n* {logstash-ref}/breaking-changes-xls.html[Logstash {xpack} Breaking Changes]\n\n--\n\n[role=\"xpack\"]\n[[breaking-7.0.0-xes]]\n== {xpack} Breaking changes in 7.0.0\n\n\nMachine Learning::\n* The `max_running_jobs` node property is removed in this release. Use the\n`xpack.ml.max_open_jobs` setting instead. For more information, <<ml-settings>>.\n\nSecurity::\n* The fields returned as part of the mappings section by get index, get\nmappings, get field mappings and field capabilities API are now only the ones\nthat the user is authorized to access in case field level security is enabled.\n\nSee also:\n\n* <<breaking-changes-7.0,{es} Breaking Changes>>\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Comment out logo for now\n\nSigned-off-by: Scott M Stark <[email protected]>\n"
},
{
"from": "human",
"value": "= TCK Reference Guide for Jakarta Bean Validation\nEmmanuel Bernard - Red Hat, Inc.; Hardy Ferentschik - Red Hat, Inc.; Gunnar Morling - Red Hat, Inc.\n:doctype: book\n:revdate: {docdate}\n:revnumber: {tckVersion}\n:revremark: Copyright {copyrightYear} - Red Hat, Inc. (Specification Lead)\n:sectanchors:\n:anchor:\n:toc: left\n:docinfodir: {docinfodir}\n:docinfo:\n:title-logo-image: image:beanvalidation_logo.png[align=left,pdfwidth=20%]\n\n// PDF uses :title-logo-image: on first page, no need to repeat image later on\nifndef::backend-pdf[]\nimage::beanvalidation_logo_smaller.png[align=\"center\"]\nendif::[]\n\ninclude::preface.asciidoc[]\n\n:numbered:\n\ninclude::introduction.asciidoc[]\n\ninclude::appeals-process.asciidoc[]\n\ninclude::installation.asciidoc[]\n\ninclude::reporting.asciidoc[]\n\ninclude::configuration.asciidoc[]\n\ninclude::sigtest.asciidoc[]"
},
{
"from": "gpt",
"value": "= TCK Reference Guide for Jakarta Bean Validation\nEmmanuel Bernard - Red Hat, Inc.; Hardy Ferentschik - Red Hat, Inc.; Gunnar Morling - Red Hat, Inc.\n:doctype: book\n:revdate: {docdate}\n:revnumber: {tckVersion}\n:revremark: Copyright {copyrightYear} - Red Hat, Inc. (Specification Lead)\n:sectanchors:\n:anchor:\n:toc: left\n:docinfodir: {docinfodir}\n:docinfo:\n// Comment out logo for final release for now\n//:title-logo-image: image:beanvalidation_logo.png[align=left,pdfwidth=20%]\n\n// PDF uses :title-logo-image: on first page, no need to repeat image later on\nifndef::backend-pdf[]\nimage::beanvalidation_logo_smaller.png[align=\"center\"]\nendif::[]\n\ninclude::preface.asciidoc[]\n\n:numbered:\n\ninclude::introduction.asciidoc[]\n\ninclude::appeals-process.asciidoc[]\n\ninclude::installation.asciidoc[]\n\ninclude::reporting.asciidoc[]\n\ninclude::configuration.asciidoc[]\n\ninclude::sigtest.asciidoc[]"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Correct and highlight port number, plus how-to-change."
},
{
"from": "human",
"value": "= SpringBoot WebApp Demo\r\n\r\nSpringBoot looks like a nice way to get started.\r\nThis is a trivial webapp created using SpringBoot.\r\n\r\n== HowTo\r\n\r\nmvn spring-boot:run\r\n\r\nthen connect to http://localhost:8080/notaservlet\r\n\r\n== War Deployment\r\n\r\nIt seems that you can't have both the instant-deployment convenience of Spring Boot\r\nAND the security of a full WAR deployment in the same pom file. You will need to \r\nmake several changes to deploy as a WAR file. See the section entitled\r\n\"Traditional Deployment\"--\"Create a deployable war file\" in the\r\nspring-boot reference manual (Section 73.1 in the current snapshot as of\r\nthis writing).\r\n"
},
{
"from": "gpt",
"value": "= SpringBoot WebApp Demo\r\n\r\nSpringBoot looks like a nice way to get started.\r\nThis is a trivial webapp created using SpringBoot.\r\n\r\n== HowTo\r\n\r\nmvn spring-boot:run\r\n\r\nthen connect to http://localhost:8000/notaservlet\r\n\r\nNote that is 8000 not the usual 8080 to avoid conflicts.\r\nChante this in application.properties if you don't like it.\r\n\r\n== War Deployment\r\n\r\nIt seems that you can't have both the instant-deployment convenience of Spring Boot\r\nAND the security of a full WAR deployment in the same pom file. You will need to \r\nmake several changes to deploy as a WAR file. See the section entitled\r\n\"Traditional Deployment\"--\"Create a deployable war file\" in the\r\nspring-boot reference manual (Section 73.1 in the current snapshot as of\r\nthis writing).\r\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add links to supported connection pools\n"
},
{
"from": "human",
"value": "= flexy-pool\nAuthor <[email protected]>\nv1.0.0, 2014-02-25\n\n:toc:\n:imagesdir: images\n:homepage: http://vladmihalcea.com/\n\n== Introduction\n\nThe flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.\nThis is very handy since most connection pools offer a limited set of dynamic configuration strategies.\n\n== Features \n\n* extensive connection pool support(Bitronix TM, C3PO, DBCP 1, DBCP 2, BoneCP, HikariCP)\n* statistics support\n** source connection acquiring time histogram\n** total connection acquiring time histogram\n** retries attempts histogram\n** maximum CP size histogram\n** connection request count histogram\n** connection lease time histogram\n\nhttps://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[User Guide]\n\n== 1.0 TODO list\n\n* explain all configuration settings\n* explain jmx metrics\n* add real-life case study\n"
},
{
"from": "gpt",
"value": "= flexy-pool\nAuthor <[email protected]>\nv1.0.0, 2014-02-25\n\n:toc:\n:imagesdir: images\n:homepage: http://vladmihalcea.com/\n\n== Introduction\n\nThe flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.\nThis is very handy since most connection pools offer a limited set of dynamic configuration strategies.\n\n== Features \n\n* extensive connection pool support\n** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]\n** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]\n** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]\n** http://www.mchange.com/projects/c3p0/[C3P0]\n** http://jolbox.com/[BoneCP]\n** http://brettwooldridge.github.io/HikariCP/[HikariCP]\n* statistics support\n** source connection acquiring time histogram\n** total connection acquiring time histogram\n** retries attempts histogram\n** maximum CP size histogram\n** connection request count histogram\n** connection lease time histogram\n\n== Documentation \n\nhttps://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[Flexy Pool User Guide]\n\n== 1.0 Release TODO list\n\n* explain all configuration settings\n* explain jmx metrics\n* add real-life case study\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update the doc with the instructions to run\n"
},
{
"from": "human",
"value": "= Microservices libraries comparison\n\n== Purpose\n\nThis project is the companion of this Blog article: https://cdelmas.github.io/xxxxxxx.\n\n== Build\n\nTo build the servers, just run `gradlew make` in the command line. By the way, you need a JDK 8, but I guess you're up-to-date :)\n\n== Run the perf tests\n\n\n\n== Notes\n\nThere are no unit tests, and it is fully assumed.\n\n== The missing guys (again)\n\nFeel free to add competing frameworks, such as Restx, Payara and Swarm to the comparison.\n\n"
},
{
"from": "gpt",
"value": "= Microservices libraries comparison\n\n== Purpose\n\nThis project is the companion of this Blog article: https://cdelmas.github.io/xxxxxxx.\n\n== Build\n\nTo build the servers, just run `gradlew make` in the command line. By the way, you need a JDK 8, but I guess you're up-to-date :)\n\n== Run the perf tests\n\n1. Move inside the `perf-runner` directory\n1. Each server has its own url :\n+\n- http://localhost:8081/dropwizard\n- http://localhost:8082/restlet\n- http://localhost:8083/spark\n- http://localhost:8084/spring\n- http://localhost:8085/vertx\n+\nSet the `SERVICE_URI` environment variable with one of these values. Run the correspounding server with `java -jar <project>/build/lib/xxx.jar`.\n1. Run the Gatling scenario:Β `./gradlew loadTest`\n\n== Notes\n\nThere are no unit tests, and it is fully assumed.\n\n== The missing guys (again)\n\nFeel free to add competing frameworks, such as Restx, Payara and Swarm to the comparison.\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Revert \"Revert \"Test trigger on push\"\"\n\nThis reverts commit a9898356427b003618fdb0f0214adf1866ee68c5.\n"
},
{
"from": "human",
"value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n"
},
{
"from": "gpt",
"value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n\n-- will remove --\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix minor issues in delimited payload token filter docs\n\nThis commit backports commit b4fbe1e1b80dc47120de5111e31c12db5d514af2\nfrom master to 1.7.\n\nRelates #15486\n"
},
{
"from": "human",
"value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split per default int to tokens `fox`, `quick` and `the` with payloads `1`, `2` and `3` respectively.\n\n\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`."
},
{
"from": "gpt",
"value": "[[analysis-delimited-payload-tokenfilter]]\n=== Delimited Payload Token Filter\n\nNamed `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.\n\nExample: \"the|1 quick|2 fox|3\" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.\n\nParameters:\n\n`delimiter`:: \n Character used for splitting the tokens. Default is `|`. \n\n`encoding`:: \n The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`."
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove searching code that doesn't work on github"
},
{
"from": "human",
"value": "GRR Rapid Response documentation.\n=================================\n\nGRR is an Incident Response Framework focused on Remote Live Forensics.\n\nIndex\n-----\n1. link:user_manual.adoc[User Manual]\n2. link:admin.adoc[Administration Documentation (Setup and Configuration)]\n3. link:implementation.adoc[Developer and Implementation Documentation]\n4. link:configuration.adoc[The GRR Configuration system]\n5. link:https://code.google.com/p/grr/w/list[Wiki]\n\nSearch Wiki and Documentation\n-----------------------------\n++++\n<script>\n (function() {\n var cx = '017727954489331006196:9wffyoayvxs';\n var gcse = document.createElement('script');\n gcse.type = 'text/javascript';\n gcse.async = true;\n gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +\n '//www.google.com/cse/cse.js?cx=' + cx;\n var s = document.getElementsByTagName('script')[0];\n s.parentNode.insertBefore(gcse, s);\n })();\n</script>\n<gcse:searchbox gname=\"docs\"></gcse:searchbox>\n++++\n\n\nSearch Code\n-----------\n++++\n<script>\n (function() {\n var cx = '017727954489331006196:pfi9rtrihuq';\n var gcse = document.createElement('script');\n gcse.type = 'text/javascript';\n gcse.async = true;\n gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +\n '//www.google.com/cse/cse.js?cx=' + cx;\n var s = document.getElementsByTagName('script')[0];\n s.parentNode.insertBefore(gcse, s);\n })();\n</script>\n<gcse:searchbox gname=\"code\"></gcse:searchbox>\n++++\n\nResults\n-------\n++++\n<gcse:searchresults gname=\"docs\"></gcse:searchresults>\n<gcse:searchresults gname=\"code\"></gcse:searchresults>\n\n++++"
},
{
"from": "gpt",
"value": "GRR Rapid Response documentation.\n=================================\n\nGRR is an Incident Response Framework focused on Remote Live Forensics.\n\nIndex\n-----\n1. link:user_manual.adoc[User Manual]\n2. link:admin.adoc[Administration Documentation (Setup and Configuration)]\n3. link:implementation.adoc[Developer and Implementation Documentation]\n4. link:configuration.adoc[The GRR Configuration system]\n5. link:https://code.google.com/p/grr/w/list[Wiki]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update the Introduction and header\n\nWe added Travis-CI and AppVeyor badges, the version number and\nmy email address.\n"
},
{
"from": "human",
"value": "= Inform: A C library for information analysis of complex systems\nDouglas G. Moore <[email protected]>\nv0.0.5, June 2017\n:toc2:\n:toclevels: 2\n:source-highlighter: prettify\n:stem: latexmath\n\nInform is a cross-platform C library designed for performing information \nanalysis of complex system.\n\n1. The `inform_dist` struct provides discrete, emperical probability\n distributions. These form the basis for all of the information-theoretic\n measures.\n2. A collection of information measures built upon the distribution struct\n provide the core algorithms for the library, and are provided through the\n `shannon.h` header.\n3. A host of measures of the information dynamics on time series are built upon\n the core information measures. Each measure is housed in its own header, e.g.\n `active_info.h`.\n\nIn addition to the core components, a small collection of utilities are also\nprovided.\n\n:leveloffset: 1\n\ninclude::getting-started.adoc[]\n\ninclude::distributions.adoc[]\n\ninclude::shannon.adoc[]\n"
},
{
"from": "gpt",
"value": "= Inform: A C library for information analysis of complex systems\nDouglas G. Moore <[email protected]>\nv1.0.0, November 2017\n:toc2:\n:toclevels: 2\n:source-highlighter: prettify\n:stem: latexmath\n\nimage:https://travis-ci.org/ELIFE-ASU/Inform.svg?branch=master[Build Status (Travis CI),\nlink=https://travis-ci.org/ELIFE-ASU/Inform]\nimage:https://ci.appveyor.com/api/projects/status/7y015h6p7n0q7097/branch/master?svg=true[Build\nStatus (Appveyor), link=https://ci.appveyor.com/project/dglmoore/inform-vx977]\n\nInform is a cross-platform C library designed for performing information \nanalysis of complex system.\n\n1. The `inform_dist` struct provides discrete, emperical probability\n distributions. These form the basis for all of the information-theoretic\n measures.\n2. A collection of information measures built upon the distribution struct\n provide the core algorithms for the library, and are provided through the\n `shannon.h` header.\n3. A host of measures of the information dynamics on time series are built upon\n the core information measures. Each measure is housed in its own header, e.g.\n `active_info.h`.\n\nIn addition to the core components, a small collection of utilities are also\nprovided.\n\n:leveloffset: 1\n\ninclude::getting-started.adoc[]\n\ninclude::distributions.adoc[]\n\ninclude::shannon.adoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update Release notes for v2.34.0"
},
{
"from": "human",
"value": "= [.ebi-color]#Release notes#\n:toc: auto\n\nThis pages contains links to release notes for DSP's REST API application.\n\n[[section]]\n== v2.33.0 Release notes\n\nNew features:\n----------------\n1. Add the possibility to set an additional checklist on a submittable for schema validation.\n2. Add the possibility to set an additional checklist on a submittable when they uploaded with a spreadsheet.\n3. Document checklist usage in API documentation.\n----------------\n\nPlatform upgrades:\n--------------\n1. DSP's REST API application now runs on Java 11 (Open JDK 11).\n--------------\n"
},
{
"from": "gpt",
"value": "= [.ebi-color]#Release Notes#\n:toc: auto\n\nThis pages contains links to release notes for DSP's REST API application.\n\n[[section]]\n== v2.34.0 Release Notes\n\nNew feautures:\n----------------\n1. DSP is going under maintenance from 8th March to 4th April 2020. For that period on DSP's welcome page displays a message regarding to this maintenance.\n----------------\n\n[[section]]\n== v2.33.0 Release Notes\n\nNew features: \n----------------\n1. Add the possibility to set an additional checklist on a submittable for schema validation.\n2. Add the possibility to set an additional checklist on a submittable when they uploaded with a spreadsheet.\n3. Document checklist usage in API documentation.\n----------------\n\nPlatform upgrades:\n--------------\n1. DSP's REST API application now runs on Java 11 (Open JDK 11).\n--------------\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Replace Travis badge by GitHub Actions badge\n"
},
{
"from": "human",
"value": "= Asciidoctor.js CLI\n\nifdef::env-github[]\nimage:https://img.shields.io/travis/asciidoctor/asciidoctor-cli.js/master.svg[Travis build status, link=https://travis-ci.org/asciidoctor/asciidoctor-cli.js]\nimage:https://img.shields.io/npm/v/@asciidoctor/cli.svg[npm version, link=https://www.npmjs.org/package/@asciidoctor/cli]\nendif::[]\n\nThe Command Line Interface (CLI) for Asciidoctor.js.\n\nInstall Asciidoctor.js globally and you'll have access to the `asciidoctor` command anywhere on your system:\n\n $ npm i -g asciidoctor\n\nType `asciidoctor --help` for more information.\n"
},
{
"from": "gpt",
"value": "= Asciidoctor.js CLI\n\nifdef::env-github[]\nimage:https://github.com/asciidoctor/asciidoctor-cli.js/workflows/Build/badge.svg[GitHub Actions Status, link=https://github.com/asciidoctor/asciidoctor-cli.js/actions]\nimage:https://img.shields.io/npm/v/@asciidoctor/cli.svg[npm version, link=https://www.npmjs.org/package/@asciidoctor/cli]\nendif::[]\n\nThe Command Line Interface (CLI) for Asciidoctor.js.\n\nInstall Asciidoctor.js globally and you'll have access to the `asciidoctor` command anywhere on your system:\n\n $ npm i -g asciidoctor\n\nType `asciidoctor --help` for more information.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Use single-colon for image notation.\n\nGithub seems to use single-colon (powerman syntax) instead of\ndouble colon (asciidoctor)\n"
},
{
"from": "human",
"value": "README\n======\n\nauthor::\n grml solutions\nversion::\n 1.0.0\ntravis state::\n image::https://api.travis-ci.org/meisterluk/screenshot-compare.svg?branch=master[]\n\nWhat?\n-----\n\nYou can compare two image files and it will show a difference score between 0 and 1.\nUsing transparent reference PNG images, you can also skip certain areas of the file.\n\nWhy?\n----\n\nWe take screenshots of running live systems and check whether they are (visually) in a certain state.\n\nWho?\n----\n\nEspecially software developers testing software might find this software useful.\nConsider that Mozilla Firefox's just got a link:https://developer.mozilla.org/en-US/Firefox/Headless_mode[headless mode] and soon you will be able to take screenshots.\n\nSource Code\n-----------\n\nThe source code is available at link:https://github.com/mika/screenshot-compare/issues[Github].\n\nIssues\n------\n\nPlease report any issues on the link:https://github.com/mika/screenshot-compare/issues[Github issues page].\n\nLicense\n-------\n\nSee link:LICENSE[the LICENSE file] (Hint: MIT license).\n\nChangelog\n---------\n\n0.0.1::\n first release: PNG only, transparency support\n0.0.2::\n goroutine support, timeout argument, slight performance improvement\n1.0.0::\n complete rewrite, `--wait` and `--timeout` parameters, `Y'UV` support\n"
},
{
"from": "gpt",
"value": "README image:https://api.travis-ci.org/meisterluk/screenshot-compare.svg?branch=master[]\n========================================================================================\n\nauthor::\n grml solutions\nversion::\n 1.0.0\n\nWhat?\n-----\n\nYou can compare two image files and it will show a difference score between 0 and 1.\nUsing transparent reference PNG images, you can also skip certain areas of the file.\n\nWhy?\n----\n\nWe take screenshots of running live systems and check whether they are (visually) in a certain state.\n\nWho?\n----\n\nEspecially software developers testing software might find this software useful.\nConsider that Mozilla Firefox's just got a link:https://developer.mozilla.org/en-US/Firefox/Headless_mode[headless mode] and soon you will be able to take screenshots.\n\nSource Code\n-----------\n\nThe source code is available at link:https://github.com/mika/screenshot-compare/issues[Github].\n\nIssues\n------\n\nPlease report any issues on the link:https://github.com/mika/screenshot-compare/issues[Github issues page].\n\nLicense\n-------\n\nSee link:LICENSE[the LICENSE file] (Hint: MIT license).\n\nChangelog\n---------\n\n0.0.1::\n first release: PNG only, transparency support\n0.0.2::\n goroutine support, timeout argument, slight performance improvement\n1.0.0::\n complete rewrite, `--wait` and `--timeout` parameters, `Y'UV` support\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Include Hello World Youtube video"
},
{
"from": "human",
"value": "= Minimal-J\n\nJava - but small.\n\nimage::doc/frontends.png[]\n\nMinimal-J applications are\n\n* Responsive to use on every device\n* Straight forward to specify and implement and therefore\n* Easy to plan and manage\n\n=== Idea\n\nBusiness applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.\n\nMinimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.\n\n== Technical Features\n\n* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.\n* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.\n* Small: The minimalj.jar is still < 1MB\n* Very few dependencies\n* Applications can run standalone (like SpringBoot)\n\n== Documentation\n\n* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.\n* link:doc/topics.adoc[Tutorial and examples] Informations for developers.\n* link:doc/release_notes.adoc[Release Notes]\n\n== Hello World\n\nHow to implement Hello World in Minimal-J:\n\nvideo::[0VHz7gv6TpA][youtube]\n\n=== Contact\n\n* Bruno Eberhard, mailto:[email protected][[email protected]] "
},
{
"from": "gpt",
"value": "= Minimal-J\n\nJava - but small.\n\nimage::doc/frontends.png[]\n\nMinimal-J applications are\n\n* Responsive to use on every device\n* Straight forward to specify and implement and therefore\n* Easy to plan and manage\n\n=== Idea\n\nBusiness applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.\n\nMinimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.\n\n== Technical Features\n\n* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.\n* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.\n* Small: The minimalj.jar is still < 1MB\n* Very few dependencies\n* Applications can run standalone (like SpringBoot)\n\n== Documentation\n\n* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.\n* link:doc/topics.adoc[Tutorial and examples] Informations for developers.\n* link:doc/release_notes.adoc[Release Notes]\n\n== Hello World\n\nHow to implement Hello World in Minimal-J:\n\nlink:_includes/ex-video.adoc[0VHz7gv6TpA]\n\n=== Contact\n\n* Bruno Eberhard, mailto:[email protected][[email protected]] "
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix asciidoctor crossref problem example\n"
},
{
"from": "human",
"value": "= Bare code crossref example\nGraham Klyne\nv0.1.15, 2015-08-04\n:toc: macro\n\ntoc::[]\n\n== section `foo`\n\nReference to <<`bar`>>\n\n== `bar`\n\nReference to <<section `foo`>\n\nEnd.\n"
},
{
"from": "gpt",
"value": "= Bare <code> crossref example\nGraham Klyne\nv0.1.15, 2015-08-04\n:toc: macro\n\ntoc::[]\n\n== section `foo`\n\nReference to <<`bar`>>\n\n== `bar`\n\nReference to <<section `foo`>>\n\nEnd.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove information admonition from readme\n"
},
{
"from": "human",
"value": "= Custom New Tab\nThor Andreas Rognan <[email protected]>\n:imagesdir: doc/assets/images\nifdef::env-github[]\n:tip-caption: :bulb:\n:note-caption: :information_source:\n:important-caption: :heavy_exclamation_mark:\n:caution-caption: :fire:\n:warning-caption: :warning:\nendif::[]\n\n'''\n\nNOTE: Chrome extension that creates a custom 'new tab'.\n\nimage::example.png[]\n\n== Copyright and Licensing\nCopyright (C) 2017- Thor Andreas Rognan\nFree use of this software is granted under the terms of the MIT License.\n\nSee the <<LICENSE#,LICENSE>> file for details.\n"
},
{
"from": "gpt",
"value": "= Custom New Tab\nThor Andreas Rognan <[email protected]>\n:imagesdir: doc/assets/images\nifdef::env-github[]\n:tip-caption: :bulb:\n:note-caption: :information_source:\n:important-caption: :heavy_exclamation_mark:\n:caution-caption: :fire:\n:warning-caption: :warning:\nendif::[]\n\n'''\n\n_Chrome extension that creates a custom 'new tab'._\n\nimage::example.png[]\n\n== Copyright and Licensing\nCopyright (C) 2017- Thor Andreas Rognan\nFree use of this software is granted under the terms of the MIT License.\n\nSee the <<LICENSE#,LICENSE>> file for details.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add notes about graduation date, scholarship\n"
},
{
"from": "human",
"value": "== School\n\nI entered the Honours Computer Science Co-Op program at\nlink:http://mcmaster.ca[McMaster University] in Fall 2010. My original\ngraduation date would have been April 2014; however, due to my long-term\ninternship at Red Hat, this has been delayed by one year.\n"
},
{
"from": "gpt",
"value": "== School\n\nI entered the Honours Computer Science Co-Op program at\nlink:http://mcmaster.ca[McMaster University] in Fall 2010. My original\ngraduation date would have been April 2014; however, due to my long-term\ninternship at Red Hat, this has been delayed by one year. I'm on track\nto complete my Bachelor's of Applied Science, Computer Science degree\nin the Spring of 2015, with two advanced graduate level credits already\ncompleted, and having earned the Louis J. Shein scholarship for \"academic\nexcellence in Russian Language course\" after first year.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": ":memo: Add a section on how to run Mozilla add-on linter\n"
},
{
"from": "human",
"value": "= Hacking\n:uri-nodejs: http://nodejs.org\n:uri-nvm: https://github.com/creationix/nvm\n\nThis guide will give you all the necessary information you need to become a successful code contributor!\n\n== Setup\n\nTo build this project, you will need {uri-nodejs}[Node.js] >= 4 and `npm` (we recommend {uri-nvm}[nvm] to manage multiple active Node.js versions).\n\nNOTE: If you feel more comfortable with `yarn`, you can use it as an alternative to `npm`\n\n== Building\n\n. Install all the dependencies\n+\n $ npm install\n\n. Build the project \n+\n $ npm run build\n+\nThis command will produce a zip file that can be load in Chrome: [.path]_dist/asciidoctor-browser-extension.zip_\n\n. Run tests\n+\n $ npm run test\n\nThis project is using a code linter to enforce code consistency. \nTo make sure that the code you have contributed, follow the code rules, execute the following command:\n\n $ npm run lint\n"
},
{
"from": "gpt",
"value": "= Hacking\n:uri-nodejs: http://nodejs.org\n:uri-nvm: https://github.com/creationix/nvm\n\nThis guide will give you all the necessary information you need to become a successful code contributor!\n\n== Setup\n\nTo build this project, you will need {uri-nodejs}[Node.js] >= 4 and `npm` (we recommend {uri-nvm}[nvm] to manage multiple active Node.js versions).\n\nNOTE: If you feel more comfortable with `yarn`, you can use it as an alternative to `npm`\n\n== Building\n\n. Install all the dependencies\n+\n $ npm install\n\n. Build the project \n+\n $ npm run build\n+\nThis command will produce a zip file that can be load in Chrome: [.path]_dist/asciidoctor-browser-extension.zip_\n\n. Run tests\n+\n $ npm run test\n\nThis project is using a code linter to enforce code consistency. \nTo make sure that the code you have contributed, follow the code rules, execute the following command:\n\n $ npm run lint\n\n== Add-on Linter\n\nMozilla provides a Node.js package to validate an add-on.\nYou can install the linter with `npm`:\n\n $ npm install -g addons-linter\n\nThis command will install the package globally, so you can use the linter from any directory on your machine.\n\nAfter installation, run the linter on the archive produced by `npm run build`:\n\n $ addons-linter dist/asciidoctor-browser-extension.zip\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add link to top project page.\n"
},
{
"from": "human",
"value": "= EB4J xml2eb tool\n:doctype: article\n:docinfo:\n:toc:\n:toclevels: 2\n:version: 1.99.0-SNAPSHOT\n:project-name: xml2eb\n\nThis is a project page for xml2eb.\n\ninclude::xml2eb.adoc[]\n\ninclude::converter.adoc[]\n\ninclude::reports.adoc[]\n\ninclude::links.adoc[]\n"
},
{
"from": "gpt",
"value": "= EB4J xml2eb tool\n:doctype: article\n:docinfo:\n:toc:\n:toclevels: 2\n:version: 1.99.0-SNAPSHOT\n:project-name: xml2eb\n\nIMPORTANT: link:https://github.com/eb4j/xml2eb[View on GitHub]\n | link:https://eb4j.github.io/[Top project page]\n\nThis is a project page for xml2eb.\n\ninclude::xml2eb.adoc[]\n\ninclude::converter.adoc[]\n\ninclude::reports.adoc[]\n\ninclude::links.adoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Expand a bit on the typing\n"
},
{
"from": "human",
"value": "Goals\n-----\n\nGearScript aims to introduce strong static typing into TorqueScript, allowing for\na more consistent syntax while not changing semantics too much. Static type information\nwill initially be erased (compare to Java Generics) as part of the compilation process,\nbut that may change further down along the line. Types should be inferred to the extent\npossible, in order to reduce the amount of typing needed.\n\nHierarchy\n---------\n\nGearScript's type system is hierarchical, but there is no true top type. Instead, there are six different top-level types: void, (float) numbers, strings, arrays, function references, and object references. This is because TorqueScript's syntax means that the primary way to do something often depends on which of these three types it belongs to. The most obvious example would be comparisons, which are done using `==` for number types and object reference comparison, but `$=` for strings. This ambiguity only arises when we have no type information available, and thus GearScript ought to resolve it transparently for the user.\n\nInference\n---------\n\nTypes should be inferred for the user whenever possible. In most cases this is obvious from a variable's definition, since the literal syntax always unambiguously contains the type of a given value. The exception to this is function parameters, the types of these have to be inferred from how the parameter is used. If ultimately multiple different branches of types are valid for a value then a type must be defined explicitly."
},
{
"from": "gpt",
"value": "Goals\n-----\n\nGearScript aims to introduce strong static typing into TorqueScript, allowing for a more consistent syntax while not changing semantics too much. Static type information will initially be erased (similarly to Java Generics) as part of the compilation process, but that may change further down along the line. Types should be inferred to the extent possible, in order to reduce the amount of typing needed.\n\nHierarchy\n---------\n\nGearScript's type system is hierarchical, but there is no true top type. Instead, there are six different top-level types: void, (float) numbers, strings, arrays, function references, and object references. This is because TorqueScript's syntax means that the primary way to do something often depends on which of these three types it belongs to. The most obvious example would be comparisons, which are done using `==` for number types and object reference comparison, but `$=` for strings. This ambiguity only arises when we have no type information available, and thus GearScript ought to resolve it transparently for the user.\n\nInference\n---------\n\nTypes should be inferred for the user whenever possible. In most cases this is obvious from a variable's definition, since the literal syntax always unambiguously contains the type of a given value. The exception to this is function parameters, the types of these have to be inferred from how the parameter is used. If they are used as arguments for a function, inherit that type. If methods are called on them, look at what types we know about that expose those methods. If ultimately multiple different branches of types (like either an int or a string, or two different subclasses of the same class) are valid for a value then a type must be defined explicitly.\n\nCustom Top Types\n----------------\n\nIt should be possible for users to define new top-level types with unique behaviour that wrap other types. Use case: tagged strings.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add migration note on kernel parameters on install\n\nThis commit adds a note to the migration docs regarding how to avoid the\ninstallation packages from setting kernel parameters on install.\n"
},
{
"from": "human",
"value": "[[breaking-changes-5.5]]\n== Breaking changes in 5.5\n\n[[breaking_55_plugins]]\n[float]\n=== Ignoring hidden folders\n\nPrevious versions of Elasticsearch would skip hidden files and directories when\nscanning the plugins folder. This allows broken plugin installs to remain\nundetected so this leniency has been removed."
},
{
"from": "gpt",
"value": "[[breaking-changes-5.5]]\n== Breaking changes in 5.5\n\n[[breaking_55_plugins]]\n[float]\n=== Ignoring hidden folders\n\nPrevious versions of Elasticsearch would skip hidden files and directories when\nscanning the plugins folder. This allows broken plugin installs to remain\nundetected so this leniency has been removed.\n\n[[breaking_54_packaging_changes]]\n[float]\n=== Skip setting kernel parameters\n\nElasticsearch attempts to set the kernel parameter `vm.max_map_count` during\ninstallation. In some environments this can fail. Previous versions of\nElasticsearch accepted a flag `ES_SKIP_SET_KERNEL_PARAMTERS` to skip this\nwriting. Starting in Elasticsearch 5.5.0, this flag has been removed and\ninstead the same functionality can be achieved by masking the `systemd-sysctl`\nservice.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add the good link to the blog article.\n"
},
{
"from": "human",
"value": "= Microservices libraries comparison\n\n== Purpose\n\nThis project is the companion of this Blog article: http://cdelmas.github.io/.....\n\n== Build\n\nTo build it, just run `gradle shadowJar` in the command line.\n\n== Run\n\nThen you can run each server using `java -jar <server>.jar`.\n\n"
},
{
"from": "gpt",
"value": "= Microservices libraries comparison\n\n== Purpose\n\nThis project is the companion of this Blog article: https://cdelmas.github.io/2015/11/01/A-comparison-of-Microservices-Frameworks.html.\n\n== Build\n\nTo build it, just run `gradle shadowJar` in the command line.\n\n== Run\n\nThen you can run each server using `java -jar <server>.jar`.\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Reset current version in docs to 1.5.0\n"
},
{
"from": "human",
"value": "[[elasticsearch-reference]]\n= Reference\n\n:version: 1.4.3\n:branch: 1.4\n:jdk: 1.8.0_25\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n"
},
{
"from": "gpt",
"value": "[[elasticsearch-reference]]\n= Reference\n\n:version: 1.5.0\n:branch: 1.5\n:jdk: 1.8.0_25\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Use block syntax for note/warning\n"
},
{
"from": "human",
"value": ":function-title: {{name}}\n\n[#{{id}}]\n=== {function-title}\n\n{{#if deprecated}}\n[CAUTION]\n====\n{{{deprecatedDocs}}}\n====\n{{/if}}\n\n{{{brief}}}\n\n++++\n<pre class=\"highlightjs highlight\"><code class=\"language-{{@root.sourceLanguage}} hljs\" data-lang=\"{{@root.sourceLanguage}}\">{{declaration}}</code></pre>\n++++\n\n{{#if return.description}}==== Return\n{{return.description}}{{/if}}\n\n{{{description}}}\n\n{{#each note}}\nNOTE: {{{this}}}\n{{/each}}\n\n{{#each warning}}\nWARNING: {{{this}}}\n{{/each}}\n\n{{#if params}}\n==== Parameters\n\n[cols=\"1,3a\", stripes=\"even\"]\n|===\n|Name |Description\n\n{{#each params}}\n|``{{name}}``\n|{{{description}}}\n{{/each}}\n|===\n{{/if}}\n"
},
{
"from": "gpt",
"value": ":function-title: {{name}}\n\n[#{{id}}]\n=== {function-title}\n\n{{#if deprecated}}\n[CAUTION]\n====\n{{{deprecatedDocs}}}\n====\n{{/if}}\n\n{{{brief}}}\n\n++++\n<pre class=\"highlightjs highlight\"><code class=\"language-{{@root.sourceLanguage}} hljs\" data-lang=\"{{@root.sourceLanguage}}\">{{declaration}}</code></pre>\n++++\n\n{{#if return.description}}==== Return\n{{return.description}}{{/if}}\n\n{{{description}}}\n\n{{#each note}}\n[NOTE]\n====\n{{{this}}}\n====\n{{/each}}\n\n{{#each warning}}\n[WARNING]\n====\n{{{this}}}\n====\n{{/each}}\n\n{{#if params}}\n==== Parameters\n\n[cols=\"1,3a\", stripes=\"even\"]\n|===\n|Name |Description\n\n{{#each params}}\n|``{{name}}``\n|{{{description}}}\n{{/each}}\n|===\n{{/if}}\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add beat for readings stats from uWSGI"
},
{
"from": "human",
"value": "[[community-beats]]\n== Community Beats\n\nThe open source community has been hard at work developing new Beats. You can check \nout a few of them here:\n\n[horizontal]\nhttps://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container \nstatistics and indexes them in Elasticsearch\nhttps://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx\nhttps://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list \nof targets and stores the round trip time (RTT) in Elasticsearch \n\nHave you created a Beat that's not listed? Open a pull request to add your link \nhere: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc\n\nNOTE: Elastic provides no warranty or support for community-sourced Beats.\n\n[[contributing-beats]]\n=== Contributing to Beats\n\nRemember, you can be a Beats developer, too. <<new-beat, Learn how>>\n\n"
},
{
"from": "gpt",
"value": "[[community-beats]]\n== Community Beats\n\nThe open source community has been hard at work developing new Beats. You can check \nout a few of them here:\n\n[horizontal]\nhttps://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container \nstatistics and indexes them in Elasticsearch\nhttps://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx\nhttps://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list \nof targets and stores the round trip time (RTT) in Elasticsearch \nhttps://github.com/mrkschan/uwsgibeat[uwsgibeat]:: Reads stats from uWSGI\n\nHave you created a Beat that's not listed? Open a pull request to add your link \nhere: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc\n\nNOTE: Elastic provides no warranty or support for community-sourced Beats.\n\n[[contributing-beats]]\n=== Contributing to Beats\n\nRemember, you can be a Beats developer, too. <<new-beat, Learn how>>\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Use clojure, not clj, when using rebel-readline\n"
},
{
"from": "human",
"value": "= Edge App\n\nUntil CIDER 0.17 has been released (which brings improved support for\nClojure CLI), please use a terminal to run `clj`.\n\n----\nclj -A:dev:build:dev/rebel\nuser=> (go)\n----\n\nThis starts an nrepl-server, the port is announced on the command line.\n"
},
{
"from": "gpt",
"value": "= Edge App\n\nUntil CIDER 0.17 has been released (which brings improved support for\nClojure CLI), please use a terminal to run `clj`.\n\n----\nclojure -A:dev:build:dev/rebel\nuser=> (go)\n----\n\nThis starts an nrepl-server, the port is announced on the command line.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Revert \"[Docs] Fix base directory to include for put_mapping.asciidoc\"\n\nThis reverts commit 4f5be7db3ce9f1ea7f864cc1fd38ee09363aa64d.\n"
},
{
"from": "human",
"value": "[[java-admin-indices-put-mapping]]\n:base-dir: {docdir}/../../server/src/test/java/org/elasticsearch/action/admin/indices/create\n\n==== Put Mapping\n\nThe PUT mapping API allows you to add a new type while creating an index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]\n--------------------------------------------------\n<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`\n<2> It also adds a `tweet` mapping type.\n\n\nThe PUT mapping API also allows to add a new type to an existing index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Adds a `user` mapping type.\n<3> This `user` has a predefined type\n<4> type can be also provided within the source\n\nYou can use the same API to update an existing mapping:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Updates the `user` mapping type.\n<3> This `user` has now a new field `user_name`\n\n:base-dir!:\n"
},
{
"from": "gpt",
"value": "[[java-admin-indices-put-mapping]]\n:base-dir: {docdir}/../../core/src/test/java/org/elasticsearch/action/admin/indices/create\n\n==== Put Mapping\n\nThe PUT mapping API allows you to add a new type while creating an index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]\n--------------------------------------------------\n<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`\n<2> It also adds a `tweet` mapping type.\n\n\nThe PUT mapping API also allows to add a new type to an existing index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Adds a `user` mapping type.\n<3> This `user` has a predefined type\n<4> type can be also provided within the source\n\nYou can use the same API to update an existing mapping:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Updates the `user` mapping type.\n<3> This `user` has now a new field `user_name`\n\n:base-dir!:"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix wrong link (even if it's temporally)\n"
},
{
"from": "human",
"value": "= Artificial Intelligence for Spring Boot\n:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.\n:awestruct-layout: compatibilityBase\n:awestruct-priority: 1.0\n:awestruct-related_tag: spring\n:showtitle:\n\nOptaPlanner has a Spring Boot Starter to get up and running quickly.\n\nUsage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.\n\nvideo::U2N02ReT9CI[youtube]\n\n== Guide\n\n**https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/blob/master/README.adoc[Read the OptaPlanner on Spring Boot guide.]**\n\n== Quick start\n\nRun the quick start yourself:\n\n. Download https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/tree/master/complete[the source code].\n. Run `./mvnw clean install`\n. Open http://localhost:8080 in your browser\n"
},
{
"from": "gpt",
"value": "= Artificial Intelligence for Spring Boot\n:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.\n:awestruct-layout: compatibilityBase\n:awestruct-priority: 1.0\n:awestruct-related_tag: spring\n:showtitle:\n\nOptaPlanner has a Spring Boot Starter to get up and running quickly.\n\nUsage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.\n\nvideo::U2N02ReT9CI[youtube]\n\n== Guide\n\n**https://github.com/ge0ffrey/getting-started-guides/blob/gs-constraint-solving-ai-optaplanner/README.adoc[Read the OptaPlanner on Spring Boot guide.]**\n\n== Quick start\n\nRun the quick start yourself:\n\n. Download https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/tree/master/complete[the source code].\n. Run `./mvnw clean install`\n. Open http://localhost:8080 in your browser\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix the mTLS entry in the feature matrix\n"
},
{
"from": "human",
"value": "// Module included in the following assemblies:\n//\n// * serverless/serverless-release-notes.adoc\n\n:_content-type: REFERENCE\n[id=\"serverless-deprecated-removed-features_{context}\"]\n= Deprecated and removed features\n\nSome features available in previous releases have been deprecated or removed.\n\nDeprecated functionality is still included in {ServerlessProductName} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {ServerlessProductName}, refer to the table below.\n\nIn the table, features are marked with the following statuses:\n\n* *-*: _Not yet available_\n* *TP*: _Technology Preview_\n* *GA*: _General Availability_\n* *DEP*: _Deprecated_\n* *REM*: _Removed_\n\n.Deprecated and removed features tracker\n[cols=\"3,1,1,1\",options=\"header\"]\n|====\n|Feature |1.18|1.19|1.20\n\n|`kn func emit` (`kn func invoke` in 1.21+)\n|TP\n|TP\n|TP\n\n|mTLS\n|GA\n|GA\n|GA\n\n|`kn func` TypeScript templates\n|TP\n|TP\n|TP\n\n|`kn func` Rust templates\n|TP\n|TP\n|TP\n\n|`emptyDir` volumes\n|GA\n|GA\n|GA\n\n|`KafkaBinding` API\n|GA\n|DEP\n|DEP\n\n|HTTPS redirection\n|-\n|GA\n|GA\n\n|Kafka broker\n|-\n|-\n|TP\n\n|====\n"
},
{
"from": "gpt",
"value": "// Module included in the following assemblies:\n//\n// * serverless/serverless-release-notes.adoc\n\n:_content-type: REFERENCE\n[id=\"serverless-deprecated-removed-features_{context}\"]\n= Deprecated and removed features\n\nSome features available in previous releases have been deprecated or removed.\n\nDeprecated functionality is still included in {ServerlessProductName} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {ServerlessProductName}, refer to the table below.\n\nIn the table, features are marked with the following statuses:\n\n* *-*: _Not yet available_\n* *TP*: _Technology Preview_\n* *GA*: _General Availability_\n* *DEP*: _Deprecated_\n* *REM*: _Removed_\n\n.Deprecated and removed features tracker\n[cols=\"3,1,1,1\",options=\"header\"]\n|====\n|Feature |1.18|1.19|1.20\n\n|`kn func emit` (`kn func invoke` in 1.21+)\n|TP\n|TP\n|TP\n\n|Service Mesh mTLS\n|GA\n|GA\n|GA\n\n|`kn func` TypeScript templates\n|TP\n|TP\n|TP\n\n|`kn func` Rust templates\n|TP\n|TP\n|TP\n\n|`emptyDir` volumes\n|GA\n|GA\n|GA\n\n|`KafkaBinding` API\n|GA\n|DEP\n|DEP\n\n|HTTPS redirection\n|-\n|GA\n|GA\n\n|Kafka broker\n|-\n|-\n|TP\n\n|====\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove lines added for testing CI setup\n"
},
{
"from": "human",
"value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n\n-- will remove --\n"
},
{
"from": "gpt",
"value": "= Infinispan Cluster Manager\n\nimage:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan[\"Build Status\",link=\"https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/\"]\n\nThis is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].\n\nPlease see the in-source asciidoc documentation or the main documentation on the web-site for a full description\nof this component:\n\n* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]\n* link:src/main/asciidoc/java/index.adoc[in-source docs]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove extra spaces from changelog\n\nThis commit removes to extra spaces at the end of lines in the\nchangelog.\n"
},
{
"from": "human",
"value": "// Use these for links to issue and pulls. Note issues and pulls redirect one to\n// each other on Github, so don't worry too much on using the right prefix.\n// :issue: https://github.com/elastic/elasticsearch/issues/\n// :pull: https://github.com/elastic/elasticsearch/pull/\n\n= Elasticsearch Release Notes\n\n== Elasticsearch 7.0.0\n\n=== Breaking Changes\n\n=== Breaking Java Changes\n\n=== Deprecations\n\n=== New Features \n\n=== Enhancements\n\n=== Bug Fixes\n\n=== Regressions\n\n=== Known Issues\n\n== Elasticsearch version 6.3.0\n\n=== New Features \n\n=== Enhancements\n\n=== Bug Fixes\n\n=== Regressions\n\n=== Known Issues\n\n\n"
},
{
"from": "gpt",
"value": "// Use these for links to issue and pulls. Note issues and pulls redirect one to\n// each other on Github, so don't worry too much on using the right prefix.\n// :issue: https://github.com/elastic/elasticsearch/issues/\n// :pull: https://github.com/elastic/elasticsearch/pull/\n\n= Elasticsearch Release Notes\n\n== Elasticsearch 7.0.0\n\n=== Breaking Changes\n\n=== Breaking Java Changes\n\n=== Deprecations\n\n=== New Features\n\n=== Enhancements\n\n=== Bug Fixes\n\n=== Regressions\n\n=== Known Issues\n\n== Elasticsearch version 6.3.0\n\n=== New Features\n\n=== Enhancements\n\n=== Bug Fixes\n\n=== Regressions\n\n=== Known Issues\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Move shields right after the title\n"
},
{
"from": "human",
"value": "image:https://img.shields.io/github/release/heruan/humanize.svg[link=https://github.com/heruan/humanize/releases,title=Latest release]\nimage:https://img.shields.io/github/downloads/heruan/humanize/total.svg[link=https://github.com/heruan/humanize/archive/master.zip,title=GitHub]\nimage:https://img.shields.io/circleci/project/github/heruan/humanize.svg[link=https://circleci.com/gh/heruan/humanize,title=CricleCI]\nimage:https://img.shields.io/codecov/c/github/heruan/humanize.svg[link=https://codecov.io/gh/heruan/humanize,title=Codecov]\nimage:https://img.shields.io/github/license/heruan/humanize.svg[link=http://www.apache.org/licenses/LICENSE-2.0.html,title=Apache License 2.0]\n\n= Humanization libraries for Java\n\n== humanize-time\n\nimage:https://img.shields.io/maven-central/v/to.lova.humanize/humanize-time.svg[]\n\n[source,java]\n----\nTemporal minutesAgo = ZonedDateTime.now().minus(5, ChronoUnit.MINUTES);\nString relative = HumanizeTime.fromNow(minutesAgo);\n// produces: \"5 minutes ago\"\n----\n"
},
{
"from": "gpt",
"value": "= Humanization libraries for Java\nimage:https://img.shields.io/github/release/heruan/humanize.svg[link=https://github.com/heruan/humanize/releases,title=Latest release]\nimage:https://img.shields.io/github/downloads/heruan/humanize/total.svg[link=https://github.com/heruan/humanize/archive/master.zip,title=GitHub]\nimage:https://img.shields.io/circleci/project/github/heruan/humanize.svg[link=https://circleci.com/gh/heruan/humanize,title=CricleCI]\nimage:https://img.shields.io/codecov/c/github/heruan/humanize.svg[link=https://codecov.io/gh/heruan/humanize,title=Codecov]\nimage:https://img.shields.io/github/license/heruan/humanize.svg[link=http://www.apache.org/licenses/LICENSE-2.0.html,title=Apache License 2.0]\n\n== humanize-time\nimage:https://img.shields.io/maven-central/v/to.lova.humanize/humanize-time.svg[title=humanize-time]\n\n[source,java]\n----\nTemporal minutesAgo = ZonedDateTime.now().minus(5, ChronoUnit.MINUTES);\nString relative = HumanizeTime.fromNow(minutesAgo);\n// produces: \"5 minutes ago\"\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Revert \"[Docs] Fix base directory to include for put_mapping.asciidoc\"\n\nThis reverts commit db4ecb34cef686e36a01d72fac4187b24dbeaaac.\n"
},
{
"from": "human",
"value": "[[java-admin-indices-put-mapping]]\n:base-dir: {docdir}/../../server/src/test/java/org/elasticsearch/action/admin/indices/create\n\n==== Put Mapping\n\nThe PUT mapping API allows you to add a new type while creating an index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]\n--------------------------------------------------\n<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`\n<2> It also adds a `tweet` mapping type.\n\n\nThe PUT mapping API also allows to add a new type to an existing index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Adds a `user` mapping type.\n<3> This `user` has a predefined type\n<4> type can be also provided within the source\n\nYou can use the same API to update an existing mapping:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Updates the `user` mapping type.\n<3> This `user` has now a new field `user_name`\n\n:base-dir!:\n"
},
{
"from": "gpt",
"value": "[[java-admin-indices-put-mapping]]\n:base-dir: {docdir}/../../core/src/test/java/org/elasticsearch/action/admin/indices/create\n\n==== Put Mapping\n\nThe PUT mapping API allows you to add a new type while creating an index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]\n--------------------------------------------------\n<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`\n<2> It also adds a `tweet` mapping type.\n\n\nThe PUT mapping API also allows to add a new type to an existing index:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Adds a `user` mapping type.\n<3> This `user` has a predefined type\n<4> type can be also provided within the source\n\nYou can use the same API to update an existing mapping:\n\n[\"source\",\"java\",subs=\"attributes,callouts,macros\"]\n--------------------------------------------------\ninclude-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]\n--------------------------------------------------\n<1> Puts a mapping on existing index called `twitter`\n<2> Updates the `user` mapping type.\n<3> This `user` has now a new field `user_name`\n\n:base-dir!:"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix cross link in 4.3 and master\n"
},
{
"from": "human",
"value": "[id=\"persistent-storage-using-flexvolume\"]\n= Persistent storage using FlexVolume\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-flexvolume\n\ntoc::[]\n\n{product-title} supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers.\n\nTo use storage from a back-end that does not have a built-in plug-in, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.\n\nPods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.\n\n.Additional References\n\n* link:expanding-persistent-volumes.adoc[Expanding persistent volumes]\n\ninclude::modules/persistent-storage-flexvolume-drivers.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-driver-example.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-installing.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-consuming.adoc[leveloffset=+1]\n"
},
{
"from": "gpt",
"value": "[id=\"persistent-storage-using-flexvolume\"]\n= Persistent storage using FlexVolume\ninclude::modules/common-attributes.adoc[]\n:context: persistent-storage-flexvolume\n\ntoc::[]\n\n{product-title} supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers.\n\nTo use storage from a back-end that does not have a built-in plug-in, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.\n\nPods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.\n\n.Additional References\n\n* link:../expanding-persistent-volumes.adoc#expanding-persistent-volumes_{context}[Expanding persistent volumes]\n\ninclude::modules/persistent-storage-flexvolume-drivers.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-driver-example.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-installing.adoc[leveloffset=+1]\n\ninclude::modules/persistent-storage-flexvolume-consuming.adoc[leveloffset=+1]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add missing cd into build.\n"
},
{
"from": "human",
"value": "# Garage Client\n\n## TODO\n\n - Smarter upload process\n - Handle trailing slash on URLs correctly\n\n## Developing\n\nGetting started:\n\n sudo apt-get install build-essential cmake g++ libboost-dev libboost-program-options-dev libboost-filesystem-dev libboost-system-dev libcurl4-gnutls-dev clang clang-format-3.6 ninja-build\n mkdir build\n cmake -DCMAKE_BUILD_TYPE=Debug ..\n make\n ./garage-push --help\n\nGenerate tags\n\n make tags\n\nBefore commiting, run the pre-commit checks:\n\n CTEST_OUTPUT_ON_FAILURE=1 make qa\n\nThis will reformat all the code with clang-format and run clang-check and the test suite.\nPlease follow the https://google.github.io/styleguide/cppguide.html[Google C++ Style Guide] coding standard.\n\n\n## Dockerfile\n\nA Dockerfile is provided to check that the list of package dependencies is\nup-to-date. For day-to-day development it is recommended to build\n`garage-push` in your native environment.\n\nTo build inside docker:\n\n docker build -t garage-tools-build .\n docker run -ti -v $PWD:/src garage-tools-build\n mkdir build\n cd build\n cmake -DCMAKE_BUILD_TYPE=Debug /src\n make qa\n\n// vim: set tabstop=4 shiftwidth=4 expandtab:\n"
},
{
"from": "gpt",
"value": "# Garage Client\n\n## TODO\n\n - Smarter upload process\n - Handle trailing slash on URLs correctly\n\n## Developing\n\nGetting started:\n\n sudo apt-get install build-essential cmake g++ libboost-dev libboost-program-options-dev libboost-filesystem-dev libboost-system-dev libcurl4-gnutls-dev clang clang-format-3.6 ninja-build\n mkdir build\n cd build\n cmake -DCMAKE_BUILD_TYPE=Debug ..\n make\n ./garage-push --help\n\nGenerate tags\n\n make tags\n\nBefore commiting, run the pre-commit checks:\n\n CTEST_OUTPUT_ON_FAILURE=1 make qa\n\nThis will reformat all the code with clang-format and run clang-check and the test suite.\nPlease follow the https://google.github.io/styleguide/cppguide.html[Google C++ Style Guide] coding standard.\n\n\n## Dockerfile\n\nA Dockerfile is provided to check that the list of package dependencies is\nup-to-date. For day-to-day development it is recommended to build\n`garage-push` in your native environment.\n\nTo build inside docker:\n\n docker build -t garage-tools-build .\n docker run -ti -v $PWD:/src garage-tools-build\n mkdir build\n cd build\n cmake -DCMAKE_BUILD_TYPE=Debug /src\n make qa\n\n// vim: set tabstop=4 shiftwidth=4 expandtab:\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add addtional upgrade notes for 19.0.0.\n"
},
{
"from": "human",
"value": "[[releasenotes-19]]\n== OpenNMS 19\n\n=== Important Upgrade Notes\n\n* *Simple Topology Provider*: The Simple Topology Provider has been removed. A new GraphML Topology Provider is introduced and should be used instead.\n\n\n=== New Features\n* GraphML Topology Provider: A new Topology Provider which reads GraphML formatted graphs and displays them in the the Topology UI."
},
{
"from": "gpt",
"value": "[[releasenotes-19]]\n== OpenNMS 19\n\n=== Important Upgrade Notes\n\n* *Simple Topology Provider*: The Simple Topology Provider has been removed. A new GraphML Topology Provider is introduced and should be used instead.\n* *Cassandra JMX Metrics*: The default value for the `friendly-name` attribute on the `JMX-Cassandra` collection service has changed from `cassandra21x` to `cassandra`.\nThis changes the path in which the metrics are stored.\nIf you have already been collecting these metrics and wish to preserve them, you can ignore this change when merging your configuration.\n* *Jetty 9.3.x Upgrade*: Jetty has been upgraded from `8.1.x` to the latest `9.3.x`.\nIf you have a custom `jetty.xml` in your `etc` folder, you will need to migrate your changes.\nUse `etc/examples/jetty.xml` as a starting point.\n* *Drools 6.4.0 Upgrade*: Drools has been upgraded from `6.0.1.Final` to `6.4.0.Final`.\nIf you have custom Drools rules, they may need to be revised.\nThe compiler used in `6.4.0` is stricter than the compiler in previous versions.\n\n=== New Features\n* GraphML Topology Provider: A new Topology Provider which reads GraphML formatted graphs and displays them in the the Topology UI.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add comma after eg in manpage\n"
},
{
"from": "human",
"value": "lsaddr(1)\n=======\n:doctype: manpage\n\nNAME\n----\nlsaddr - list active IP addresses\n\nSYNOPSIS\n--------\n*lsaddr* [ *-46* ] [*--include-loopback*] [*--include-ipv6-link-local]* [ _interface_ ... ]\n\nDESCRIPTION\n-----------\nList IP addresses of the specified __interface__s. By default, lists all IPv4\nand IPv6 addresses, with the loopback interfaces and IPv6 link-local omitted.\n\nThe _interface_ names are specified as in *ifconfig(8)*: usually a driver name\nfollowed by a number, eg *eth0*. Alias interfaces are not supported.\n\nOPTIONS\n-------\n*-4, --ipv4*::\n list IPv4 addresses\n*-6, --ipv6*::\n list IPv6 addresses\n*--include-loopback*::\n include addresses for the loopback interface\n*--include-link-local*::\n include IPv6 link local addresses\n\nAUTHOR\n------\nKamal Marhubi <[email protected]>\n"
},
{
"from": "gpt",
"value": "lsaddr(1)\n=======\n:doctype: manpage\n\nNAME\n----\nlsaddr - list active IP addresses\n\nSYNOPSIS\n--------\n*lsaddr* [ *-46* ] [*--include-loopback*] [*--include-ipv6-link-local]* [ _interface_ ... ]\n\nDESCRIPTION\n-----------\nList IP addresses of the specified __interface__s. By default, lists all IPv4\nand IPv6 addresses, with the loopback interfaces and IPv6 link-local omitted.\n\nThe _interface_ names are specified as in *ifconfig(8)*: usually a driver name\nfollowed by a number, eg, *eth0*. Alias interfaces are not supported.\n\nOPTIONS\n-------\n*-4, --ipv4*::\n list IPv4 addresses\n*-6, --ipv6*::\n list IPv6 addresses\n*--include-loopback*::\n include addresses for the loopback interface\n*--include-link-local*::\n include IPv6 link local addresses\n\nAUTHOR\n------\nKamal Marhubi <[email protected]>\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update format docs to cover the UNIX format\n\nSections for each output format with basic description. Include mention\nof UNIX and recommend using `--jmespath` with it.\n"
},
{
"from": "human",
"value": "*-F, --format* '[json|text]'::\n\nSet the output format for stdout. Defaults to \"text\"."
},
{
"from": "gpt",
"value": "*-F, --format* '[json|text|unix]'::\n\nSet the output format for stdout. Defaults to \"text\".\n+\n*TEXT* output is human-friendly textual output, usually in table or\nrecord-oriented format.\nIn some cases, *TEXT* format is intentionally kept simple to support naive use\nof commands within a subshell, but it is not generally guaranteed to be\nparseable.\n+\n*JSON* output will produce raw JSON data from underlying Globus services.\nThis data is useful for using the CLI as part of a programmatic process, or for\ndeciding on valid *--jmespath* queries.\n+\n*UNIX* output will produce a line-by-line serialization of the result data\n(which would be visible with *JSON*).\nThis is very suitable for use in pipelines with `grep`, `sort`, and other\nstandard unix tools.\n+\nWhenever you request *--format=UNIX*, you should also be using a *--jmespath*\nquery to select the exact fields that you want.\nThis better guarantees the consistency of output contents and ordering.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update What's New for 5.6\n"
},
{
"from": "human",
"value": "[[new]]\n== What's New in Spring Security 5.6\n\nSpring Security 5.6 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n=== Servlet\n* Configuration\n\n** Introduced https://github.com/spring-projects/spring-security/pull/9630[`AuthorizationManager`] for method security\n\n* SAML 2.0 Service Provider\n\n** Added https://github.com/spring-projects/spring-security/pull/9483[SAML 2.0 Single Logout Support]\n** Added https://github.com/spring-projects/spring-security/pull/10060[Saml2AuthenticationRequestRepository]"
},
{
"from": "gpt",
"value": "[[new]]\n== What's New in Spring Security 5.6\n\nSpring Security 5.6 provides a number of new features.\nBelow are the highlights of the release.\n\n[[whats-new-servlet]]\n=== Servlet\n* Configuration\n\n** Introduced https://github.com/spring-projects/spring-security/pull/9630[`AuthorizationManager`] for method security\n\n* SAML 2.0 Service Provider\n\n** Added https://github.com/spring-projects/spring-security/pull/9483[SAML 2.0 Single Logout Support]\n** Added https://github.com/spring-projects/spring-security/pull/10060[Saml2AuthenticationRequestRepository]\n\n* OAuth 2.0 Login\n\n** Added https://github.com/spring-projects/spring-security/pull/10041[`Converter` for `Authentication` result]\n\n* OAuth 2.0 Client\n\n** Improved https://github.com/spring-projects/spring-security/pull/9791[client credentials] encoding\n** Improved https://github.com/spring-projects/spring-security/pull/9779[access token response] parsing\n** Added https://github.com/spring-projects/spring-security/pull/10155[custom grant types support] for authorization requests\n\n[[whats-new-webflux]]\n=== WebFlux\n\n** Improved https://github.com/spring-projects/spring-security/pull/9791[client credentials] encoding\n** Added https://github.com/spring-projects/spring-security/pull/10131[custom headers support] for access token requests\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Docs: Rename reference docs to Elasticsearch Reference\n"
},
{
"from": "human",
"value": "[[elasticsearch-reference]]\n= Reference\n\n:version: 1.5.2\n:branch: 1.5\n:jdk: 1.8.0_25\n:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::aggregations.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n"
},
{
"from": "gpt",
"value": "[[elasticsearch-reference]]\n= Elasticsearch Reference\n\n:version: 1.5.2\n:branch: 1.5\n:jdk: 1.8.0_25\n:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current\n\ninclude::getting-started.asciidoc[]\n\ninclude::setup.asciidoc[]\n\ninclude::migration/index.asciidoc[]\n\ninclude::api-conventions.asciidoc[]\n\ninclude::docs.asciidoc[]\n\ninclude::search.asciidoc[]\n\ninclude::aggregations.asciidoc[]\n\ninclude::indices.asciidoc[]\n\ninclude::cat.asciidoc[]\n\ninclude::cluster.asciidoc[]\n\ninclude::query-dsl.asciidoc[]\n\ninclude::mapping.asciidoc[]\n\ninclude::analysis.asciidoc[]\n\ninclude::modules.asciidoc[]\n\ninclude::index-modules.asciidoc[]\n\ninclude::testing.asciidoc[]\n\ninclude::glossary.asciidoc[]\n\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add date to the changelog entry.\n"
},
{
"from": "human",
"value": "= Changelog\n\n== Version 0.4.2\n\n- Bug fix related to :iat param validating on jws. (thanks to @tvanhens)\n\n\n== Version 0.4.1\n\nDate: 2015-03-14\n\n- Update nippy version from 2.7.1 to 2.8.0\n- Update buddy-core from 0.4.0 to 0.4.2\n- Update cats from 0.3.2 to 0.3.4\n\n\n== Version 0.4.0\n\nDate: 2015-02-22\n\n- Add encode/decode functions to JWS/JWT implementation. Them instead of return\n plain value, return a monadic either. That allows granular error reporting\n instead something like nil that not very useful. The previous sign/unsign\n are conserved for backward compatibility but maybe in future will be removed.\n- Rename parameter `maxage` to `max-age` on jws implementation. This change\n introduces a little backward incompatibility.\n- Add \"compact\" signing implementation as replacemen of django based one.\n- Django based generic signing is removed.\n- Update buddy-core version to 0.4.0\n\n\n== Version 0.3.0\n\nDate: 2014-01-18\n\n- First version splitted from monolitic buddy package.\n- No changes from original version.\n"
},
{
"from": "gpt",
"value": "= Changelog\n\n== Version 0.4.2\n\nDate: 2015-03-29\n\n- Bug fix related to :iat param validating on jws. (thanks to @tvanhens)\n\n\n== Version 0.4.1\n\nDate: 2015-03-14\n\n- Update nippy version from 2.7.1 to 2.8.0\n- Update buddy-core from 0.4.0 to 0.4.2\n- Update cats from 0.3.2 to 0.3.4\n\n\n== Version 0.4.0\n\nDate: 2015-02-22\n\n- Add encode/decode functions to JWS/JWT implementation. Them instead of return\n plain value, return a monadic either. That allows granular error reporting\n instead something like nil that not very useful. The previous sign/unsign\n are conserved for backward compatibility but maybe in future will be removed.\n- Rename parameter `maxage` to `max-age` on jws implementation. This change\n introduces a little backward incompatibility.\n- Add \"compact\" signing implementation as replacemen of django based one.\n- Django based generic signing is removed.\n- Update buddy-core version to 0.4.0\n\n\n== Version 0.3.0\n\nDate: 2014-01-18\n\n- First version splitted from monolitic buddy package.\n- No changes from original version.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update to reflect Ali Cloud TP status in 4.10\n"
},
{
"from": "human",
"value": "// Module included in the following assemblies:\n//\n// * operators/operator-reference.adoc\n\n[id=\"cluster-cloud-controller-manager-operator_{context}\"]\n= Cluster Cloud Controller Manager Operator\n\n[discrete]\n== Purpose\n\n[NOTE]\n====\nThis Operator is only fully supported for Alibaba Cloud, Microsoft Azure Stack Hub, and IBM Cloud.\n\nIt is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, {rh-openstack-first}, and VMware vSphere.\n====\n\nThe Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of {product-title}. The Operator is based on the Kubebuilder framework and `controller-runtime` libraries. It is installed via the Cluster Version Operator (CVO).\n\nIt contains the following components:\n\n* Operator\n* Cloud configuration observer\n\nBy default, the Operator exposes Prometheus metrics through the `metrics` service.\n\n[discrete]\n== Project\n\nlink:https://github.com/openshift/cluster-cloud-controller-manager-operator[cluster-cloud-controller-manager-operator]\n"
},
{
"from": "gpt",
"value": "// Module included in the following assemblies:\n//\n// * operators/operator-reference.adoc\n\n[id=\"cluster-cloud-controller-manager-operator_{context}\"]\n= Cluster Cloud Controller Manager Operator\n\n[discrete]\n== Purpose\n\n[NOTE]\n====\nThis Operator is only fully supported for Microsoft Azure Stack Hub and IBM Cloud.\n\nIt is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, {rh-openstack-first}, and VMware vSphere.\n====\n\nThe Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of {product-title}. The Operator is based on the Kubebuilder framework and `controller-runtime` libraries. It is installed via the Cluster Version Operator (CVO).\n\nIt contains the following components:\n\n* Operator\n* Cloud configuration observer\n\nBy default, the Operator exposes Prometheus metrics through the `metrics` service.\n\n[discrete]\n== Project\n\nlink:https://github.com/openshift/cluster-cloud-controller-manager-operator[cluster-cloud-controller-manager-operator]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add new to-do items based on field testing from 04-14\n"
},
{
"from": "human",
"value": "= To-Do List\n\n- mock_cone_detector creates infinite area and overflows h\n- (*DONE*) new waypoints shorter than old don't delete existing waypoints\n- adjust waypoints for start position and cone position\n- cone area goes down when very close to cone\n- (*DONE*) parameterize throttle and steering PWM values\n- (*DONE*) touch sensor does not work\n- (*DONE*) cone detection in bright light does not work\n- GUIDED mode does not work\n- (*DONE*) Encode PWM values or range set to use in the waypoints file\n- If waypoint encountered before cone is seen, rover goes into HOLD mode\nwith no recovery. Needs to be fixed.\n\n== Possible To-Do\n\n- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,\nwhile WP_SPEED is not.)\n- Have a way of manually triggering parameter reload\n\n== Notes\n\nMAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the\noffset between the map origin and base_link (for /mavros/local_position/pose\nand /mavros/local_position/odom).\n"
},
{
"from": "gpt",
"value": "= To-Do List\n\n- shouldn't need intermediate waypoint to trigger EXPECTING_CONE\n- have to handle case where cone waypoint is achieved before seeing cone\n- need to limit time of reverse\n- have to handle case where in MANUAL mode and don't see cone\n- mock_cone_detector creates infinite area and overflows h\n- (*DONE*) new waypoints shorter than old don't delete existing waypoints\n- adjust waypoints for start position and cone position\n- cone area goes down when very close to cone\n- (*DONE*) parameterize throttle and steering PWM values\n- (*DONE*) touch sensor does not work\n- (*DONE*) cone detection in bright light does not work\n- GUIDED mode does not work\n- (*DONE*) Encode PWM values or range set to use in the waypoints file\n- If waypoint encountered before cone is seen, rover goes into HOLD mode\nwith no recovery. Needs to be fixed.\n\n== Possible To-Do\n\n- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,\nwhile WP_SPEED is not.)\n- Have a way of manually triggering parameter reload\n\n== Notes\n\nMAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the\noffset between the map origin and base_link (for /mavros/local_position/pose\nand /mavros/local_position/odom).\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Convert wiki link to jenkinsio plugin link\n"
},
{
"from": "human",
"value": "[[ivy-plugin]]\n= ivy-plugin\n\nimage:https://img.shields.io/jenkins/plugin/v/ivy.svg[Jenkins Plugin,link=https://plugins.jenkins.io/ivy]\nimage:https://img.shields.io/github/release/jenkinsci/ivy-plugin.svg?label=release[GitHub release,link=https://github.com/jenkinsci/ivy-plugin/releases/latest]\nimage:https://img.shields.io/jenkins/plugin/i/ivy.svg?color=blue[Jenkins Plugin Installs,link=https://plugins.jenkins.io/ivy]\nimage:https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/badge/icon[Build Status,link=https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/]\n\nProvides Jenkins integration with http://ant.apache.org/ivy/[Apache Ivy].\n\nSee http://wiki.jenkins-ci.org/display/JENKINS/Ivy+Plugin[Ivy Plugin] on the Jenkins wiki for more information.\n"
},
{
"from": "gpt",
"value": "[[ivy-plugin]]\n= ivy-plugin\n\nimage:https://img.shields.io/jenkins/plugin/v/ivy.svg[Jenkins Plugin,link=https://plugins.jenkins.io/ivy]\nimage:https://img.shields.io/github/release/jenkinsci/ivy-plugin.svg?label=release[GitHub release,link=https://github.com/jenkinsci/ivy-plugin/releases/latest]\nimage:https://img.shields.io/jenkins/plugin/i/ivy.svg?color=blue[Jenkins Plugin Installs,link=https://plugins.jenkins.io/ivy]\nimage:https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/badge/icon[Build Status,link=https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/]\n\nProvides Jenkins integration with http://ant.apache.org/ivy/[Apache Ivy].\n\nSee https://plugins.jenkins.io/ivy/[Ivy Plugin] on the Jenkins wiki for more information.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add a note about completed example\n"
},
{
"from": "human",
"value": "To complete this guide you will need to checkout the source from Github and work through the steps presented by the guide.\n\nTo get started do the following:\n\n* link:https://github.com/{githubSlug}/archive/master.zip[Download] and unzip the source or if you already have https://git-scm.com/[Git]: `git clone https://github.com/{githubSlug}.git`\n* `cd` into `{githubSlug}/initial`\n* Head on over to the next section\n"
},
{
"from": "gpt",
"value": "To complete this guide you will need to checkout the source from Github and work through the steps presented by the guide.\n\nTo get started do the following:\n\n* link:https://github.com/{githubSlug}/archive/master.zip[Download] and unzip the source or if you already have https://git-scm.com/[Git]: `git clone https://github.com/{githubSlug}.git`\n* `cd` into `{githubSlug}/initial`\n* Head on over to the next section\n\nTIP: You can go right to the completed example if you `cd` into `{githubSlug}/complete`\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add doc push on snapshot release\n"
},
{
"from": "human",
"value": ":source-highlighter: pygments\n\n= Muon Clojure\n\nMuon Clojure is just awesome and clojurific\n"
},
{
"from": "gpt",
"value": "---\n---\n:title: Muon Clojure\n:layout: documentation\n:source-highlighter: pygments\n:toc: right\n\n= Muon Clojure docs\n\nMuon Clojure is just awesome and clojurific\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix nonsensical sentence in standard analyzer documentation so that it is more understandable\n"
},
{
"from": "human",
"value": "[[analysis-standard-analyzer]]\n=== Standard Analyzer\n\nAn analyzer of type `standard` that is built of using\n<<analysis-standard-tokenizer,Standard\nTokenizer>>, with\n<<analysis-standard-tokenfilter,Standard\nToken Filter>>,\n<<analysis-lowercase-tokenfilter,Lower\nCase Token Filter>>, and\n<<analysis-stop-tokenfilter,Stop\nToken Filter>>.\n\nThe following are settings that can be set for a `standard` analyzer\ntype:\n\n[cols=\"<,<\",options=\"header\",]\n|=======================================================================\n|Setting |Description\n|`stopwords` |A list of stopword to initialize the stop filter with.\nDefaults to the english stop words.\n\n|`max_token_length` |The maximum token length. If a token is seen that\nexceeds this length then it is discarded. Defaults to `255`.\n|=======================================================================\n\n"
},
{
"from": "gpt",
"value": "[[analysis-standard-analyzer]]\n=== Standard Analyzer\n\nAn analyzer of type `standard` is built using the\n<<analysis-standard-tokenizer,Standard\nTokenizer>> with the\n<<analysis-standard-tokenfilter,Standard\nToken Filter>>,\n<<analysis-lowercase-tokenfilter,Lower\nCase Token Filter>>, and\n<<analysis-stop-tokenfilter,Stop\nToken Filter>>.\n\nThe following are settings that can be set for a `standard` analyzer\ntype:\n\n[cols=\"<,<\",options=\"header\",]\n|=======================================================================\n|Setting |Description\n|`stopwords` |A list of stopword to initialize the stop filter with.\nDefaults to the english stop words.\n\n|`max_token_length` |The maximum token length. If a token is seen that\nexceeds this length then it is discarded. Defaults to `255`.\n|=======================================================================\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Document the -r <repository> option of the `kn func create` command\n\nChange argument order in a command\n"
},
{
"from": "human",
"value": "[id=\"serverless-create-func-kn_{context}\"]\n= Creating functions\n\nYou can create a basic serverless function using the `kn` CLI.\n\nYou can specify the path, runtime, and template as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.\n\n.Procedure\n\n* Create a function project:\n+\n[source,terminal]\n----\n$ kn func create <path> -l <runtime> -t <template>\n----\n** Supported runtimes include `node`, `go`, `python`, `quarkus`, and `typescript`.\n** Supported templates include `http` and `events`.\n+\n.Example command\n[source,terminal]\n----\n$ kn func create -l typescript -t events examplefunc\n----\n+\n.Example output\n[source,terminal]\n----\nProject path: /home/user/demo/examplefunc\nFunction name: examplefunc\nRuntime: typescript\nTemplate: events\nWriting events to /home/user/demo/examplefunc\n----\n"
},
{
"from": "gpt",
"value": "[id=\"serverless-create-func-kn_{context}\"]\n= Creating functions\n\nYou can create a basic serverless function using the `kn` CLI.\n\nYou can specify the path, runtime, template, and repository with the template as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.\n\n.Procedure\n\n* Create a function project:\n+\n[source,terminal]\n----\n$ kn func create -r <repository> -l <runtime> -t <template> <path>\n----\n** Supported runtimes include `node`, `go`, `python`, `quarkus`, and `typescript`.\n** Supported templates include `http` and `events`.\n+\n.Example command\n[source,terminal]\n----\n$ kn func create -l typescript -t events examplefunc\n----\n+\n.Example output\n[source,terminal]\n----\nProject path: /home/user/demo/examplefunc\nFunction name: examplefunc\nRuntime: typescript\nTemplate: events\nWriting events to /home/user/demo/examplefunc\n----\n+\n** Alternatively, you can specify a repository that contains a custom template.\n+\n.Example command\n[source,terminal]\n----\n$ kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc\n----\n+\n.Example output\n[source,terminal]\n----\nProject path: /home/user/demo/examplefunc\nFunction name: examplefunc\nRuntime: node\nTemplate: hello-world\nWriting events to /home/user/demo/examplefunc\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update readme with latest stats output\n"
},
{
"from": "human",
"value": "libframetime\n============\n\nA preloadable library, able to dump the frame times of any OpenGL application on Linux, on\nany driver.\n\nBy default, the timing is written into /tmp/libframetime.out, but you can specify an \nalternate file with the LIBFRAMETIME_FILE env var.\n\n\nUsage\n-----\n\n----\nLD_PRELOAD=path/to/libframetime.so dota2\n----\n\nOr with a custom output file:\n----\nLIBFRAMETIME_FILE=/tmp/dota2.frametime LD_PRELOAD=path/to/libframetime.so dota2\n----\n\nThe accompanying awk script can be used to calculate the usual stats:\n----\n$ stats.awk < libframetime.out\n----\n----\nMin/avg/max frametimes (us): 166 / 625.626 / 5955\nMin/avg/max FPS: 167.926 / 1598.4 / 6024.1\n----\n"
},
{
"from": "gpt",
"value": "libframetime\n============\n\nA preloadable library, able to dump the frame times of any OpenGL application on Linux, on\nany driver.\n\nBy default, the timing is written into /tmp/libframetime.out, but you can specify an \nalternate file with the LIBFRAMETIME_FILE env var.\n\n\nUsage\n-----\n\n----\nLD_PRELOAD=path/to/libframetime.so dota2\n----\n\nOr with a custom output file:\n----\nLIBFRAMETIME_FILE=/tmp/dota2.frametime LD_PRELOAD=path/to/libframetime.so dota2\n----\n\nThe accompanying awk script can be used to calculate the usual stats:\n----\n$ stats.awk < libframetime.out\n----\n----\nMin/avg/max frametimes (us): 166 / 625.626 / 5955\nMin/avg/max FPS: 167.926 / 1598.4 / 6024.1\n\n50/90/95/99 percentiles (us): 410 / 434 / 589 / 5018\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add a link to the docs in the release announcement\n"
},
{
"from": "human",
"value": "== New Check and 2 Fixes\n:docname: 20180406-new-check-and-2-fixes\n\nI fixed 3 issues for which there are releases of the following components:\n\n* `revapi-basic-features-0.7.1` that contains a fix for https://github.com/revapi/revapi/issues/119[#119] which means\nthat the semver transform should no longer crash when there is no prior version of artifacts\n* `revapi-java-spi-0.15.1` that includes the definition of the new `serialVersionUIDChanged` check (prompted by\nhttps://github.com/revapi/revapi/issues/120[#120])\n* `revapi-java-0.16.0` that contains the implementation of `serialVersionUIDChanged` and additionally contains\nan important fix that could cause problems be reported on wrong elements in some cases.\n* `revapi-maven-plugin-0.10.1` that bundles the latest revapi-basic-features version\n\nYou are urged to upgrade especially to `revapi-java-0.16.0` to avoid some head scratching when examining the Revapi\nreports.\n\nThanks go out to Ricardo Ferreira for reporting https://github.com/revapi/revapi/issues/120[#120] and Matthew Kavanagh\nfor his analysis of https://github.com/revapi/revapi/issues/119[#119].\n\ninclude::../util/disqus.adoc[]\n\n"
},
{
"from": "gpt",
"value": "== New Check and 2 Fixes\n:docname: 20180406-new-check-and-2-fixes\n\nI fixed 3 issues for which there are releases of the following components:\n\n* `revapi-basic-features-0.7.1` that contains a fix for https://github.com/revapi/revapi/issues/119[#119] which means\nthat the semver transform should no longer crash when there is no prior version of artifacts\n* `revapi-java-spi-0.15.1` that includes the definition of the new `serialVersionUIDChanged`\nhttps://revapi.org/modules/revapi-java/index.html#field_code_serialversionuid_code_changed[check] (prompted by\nhttps://github.com/revapi/revapi/issues/120[#120])\n* `revapi-java-0.16.0` that contains the implementation of `serialVersionUIDChanged` and additionally contains\nan important fix that could cause problems be reported on wrong elements in some cases.\n* `revapi-maven-plugin-0.10.1` that bundles the latest revapi-basic-features version\n\nYou are urged to upgrade especially to `revapi-java-0.16.0` to avoid some head scratching when examining the Revapi\nreports.\n\nThanks go out to Ricardo Ferreira for reporting https://github.com/revapi/revapi/issues/120[#120] and Matthew Kavanagh\nfor his analysis of https://github.com/revapi/revapi/issues/119[#119].\n\ninclude::../util/disqus.adoc[]\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update readme with blank info\n"
},
{
"from": "human",
"value": "= Vert.x Unit examples\n\nHere you'll find some examples of how to use Vert.x unit to test your asynchronous applications.\n\nTests are located in the link:src/test/java/io/vertx/example/unit/test directory.\n\nExamples can be run directly from the IDE.\n\n== Vertx Unit Test\n\nThe link:src/test/java/io/vertx/example/unit/test/VertxUnitTest.java demonstrates how the Vert.x Unit API can be used to run tests using the Vert.x Unit test runner.\n\nRun this example by running the `main` method.\n\n== Junit\n\nVert.x Unit can be used with Junit:\n\n* link:src/test/java/io/vertx/example/unit/test/MyJunitTest.java demonstrates how to use the Vert.x Unit Junit runner\n to execute your tests\n* link:src/test/java/io/vertx/example/unit/test/ParameterizedTest.java demonstrates how to inject parameters into\n your Junit tests\n* link:src/test/java/io/vertx/example/unit/test/RunOnContextTest.java demonstrates how to delegate Vert.x instance\n creation to Vert.x Unit and how to use a rule to run the test methods on the event loop (caution: they must be non-blocking)\n* link:src/test/java/io/vertx/example/unit/test/JUnitAndAssertJTest.java demonstrates how to use AssertJ in\n combination with Vert.x Unit\n* link:src/test/java/io/vertx/example/unit/test/JUnitAndHamcrestTest.java demonstrates how to use Hamcrest in\n combination with Vert.x Unit\n\nAll the tests can be run from your IDE, or directly with Maven:\n\n```\nmvn clean test\n```\n\n"
},
{
"from": "gpt",
"value": "= Vert.x gRPC examples\n\ntodo\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add required \"name\" parameter in example"
},
{
"from": "human",
"value": "= SpringBoot WebApp Demo\r\n\r\nSpringBoot looks like a nice way to get started.\r\nThis is a trivial webapp created using SpringBoot.\r\n\r\n== HowTo\r\n\r\nmvn spring-boot:run\r\n\r\nthen connect to http://localhost:8000/notaservlet\r\n\r\nNote that is 8000 not the usual 8080 to avoid conflicts.\r\nChange this in application.properties if you don't like it.\r\n\r\n== War Deployment\r\n\r\nIt seems that you can't have both the instant-deployment convenience of Spring Boot\r\nAND the security of a full WAR deployment in the same pom file. You will need to \r\nmake several changes to deploy as a WAR file. See the section entitled\r\n\"Traditional Deployment\"--\"Create a deployable war file\" in the\r\nspring-boot reference manual (Section 73.1 in the current snapshot as of\r\nthis writing).\r\n"
},
{
"from": "gpt",
"value": "= SpringBoot WebApp Demo\r\n\r\nSpringBoot looks like a nice way to get started.\r\nThis is a trivial webapp created using SpringBoot.\r\n\r\n== HowTo\r\n\r\nmvn spring-boot:run\r\n\r\nthen connect to \r\n\r\nhttp://localhost:8000/notaservlet?name=Robin Smith\r\n\r\nNote that is 8000 not the usual 8080 to avoid conflicts.\r\nChange this in application.properties if you don't like it.\r\n\r\n== War Deployment\r\n\r\nIt seems that you can't have both the instant-deployment convenience of Spring Boot\r\nAND the security of a full WAR deployment in the same pom file. You will need to \r\nmake several changes to deploy as a WAR file. See the section entitled\r\n\"Traditional Deployment\"--\"Create a deployable war file\" in the\r\nspring-boot reference manual (Section 73.1 in the current snapshot as of\r\nthis writing).\r\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add deprecatio not on changelog.\n"
},
{
"from": "human",
"value": "= Changelog\n\n== Version 0.4.0\n\nDate: unreleased\n\n- Add encode/decode functions to JWS/JWT implementation. Them instead of return\n plain value, return a monadic either. That allows granular error reporting\n instead something like nil that not very useful. The previous sign/unsign\n are conserved for backward compatibility but maybe in future will be removed.\n- Rename parameter `maxage` to `max-age` on jws implementation. This change\n introduces a little backward incompatibility.\n\n== Version 0.3.0\n\nDate: 2014-01-18\n\n- First version splitted from monolitic buddy package.\n- No changes from original version.\n"
},
{
"from": "gpt",
"value": "= Changelog\n\n== Version 0.4.0\n\nDate: unreleased\n\n- Add encode/decode functions to JWS/JWT implementation. Them instead of return\n plain value, return a monadic either. That allows granular error reporting\n instead something like nil that not very useful. The previous sign/unsign\n are conserved for backward compatibility but maybe in future will be removed.\n- Rename parameter `maxage` to `max-age` on jws implementation. This change\n introduces a little backward incompatibility.\n- Django based generic signing is deprecated.\n\n== Version 0.3.0\n\nDate: 2014-01-18\n\n- First version splitted from monolitic buddy package.\n- No changes from original version.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add Gradle check to testApp snippet"
},
{
"from": "human",
"value": "To run the tests:\n\n[source, bash]\n----\n./grailsw\ngrails> test-app\ngrails> open test-report\n----\n"
},
{
"from": "gpt",
"value": "To run the tests:\n\n[source, bash]\n----\n./grailsw\ngrails> test-app\ngrails> open test-report\n----\n\nor\n\n[source, bash]\n----\n./gradlew check\nopen build/reports/tests/index.html\n----\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add DataTables and links to readme\n"
},
{
"from": "human",
"value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot. \n\nIt utilizes: \n\n* Spring Data JPA/REST\n* Flyway migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource`\n* Application properties with YAML\n* Thymeleaf\n* Unit tests for components"
},
{
"from": "gpt",
"value": "= Spring Boot and Two DataSources\n\nThis project demonstrates how to use two `DataSource` s with Spring Boot. \n\nIt utilizes: \n\n* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA] / https://github.com/spring-projects/spring-data-rest[REST]\n* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s\n* Separate Hibernate properties for each `DataSource` defined in the application.yml\n* https://github.com/thymeleaf/thymeleaf[Thymeleaf]\n* https://github.com/DataTables/DataTablesSrc[DataTables]\n* Unit tests for components"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add a little more docs. More to come.\n"
},
{
"from": "human",
"value": "= Database Table Replicator (DbShadow)\n:Author: David Thompson, Matt Conroy\n:Email: <[email protected]> <[email protected]>\n:Revision: 0.0.1 2017-02-08\n\n== Description\nHave you ever had the need to copy data from one database to another? How about between to different types of\ndatabases? How about just verifying that your database tables between replicated instances are in sync?\n\nDbShadow is a Java command line application that can help you perform these actions. Using the power of JDBC, it\ncan read data from one database instance and then using primary keys compare, propogate, and locate differences within\na separate database instance.\n\n== ReleaseNotes\n0.0.1 - Base code\n\n== Usage\n\n"
},
{
"from": "gpt",
"value": "= Database Table Replicator (DbShadow)\n:Author: David Thompson, Matt Conroy\n:Email: <[email protected]> <[email protected]>\n:Revision: 0.0.1 2017-02-08\n\n== Description\nHave you ever had the need to copy data from one database to another? How about between to different types of\ndatabases? How about just verifying that your database tables between replicated instances are in sync?\n\nDbShadow is a Java command line application that can help you perform these actions. Using the power of JDBC, it\ncan read data from one database instance and then using primary keys compare, propogate, and locate differences within\na separate database instance.\n\n== ReleaseNotes\n0.0.1 - Base code\n\n== Building\nDbShadow uses Scala Build Tool (sbt) in order to build. Clone the repo and type \n....\n$ sbt clean compile\n....\n\nIn order to package the system into a Java binary that's useful for somebody to use:\n....\n$ sbt clean universal:packageZipTarball\n....\nThis will produce a tarball that can then be installed with a bash script for launching.\n\n== Usage\nHelp usage is available with the command line.\n\n$ dbshadow -- --help\n\nMore documentation to follow.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add breaking GraphQL change to changelog\n"
},
{
"from": "human",
"value": "icon:plus[] GraphQL: Nodes can now be loaded in the context of a schema. This will return all nodes which use the schema.\n"
},
{
"from": "gpt",
"value": "CAUTION: GraphQL: The root field releases has been removed. The root field release now takes no parameters and loads the active release.\nicon:plus[] GraphQL: Nodes can now be loaded in the context of a schema. This will return all nodes which use the schema.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add minor schema changes to Changelog.\n"
},
{
"from": "human",
"value": "=== Apm-Server version HEAD\nhttps://github.com/elastic/apm-server/compare/x...master[View commits]\n\n==== Breaking changes\n\n\n==== Bugfixes\n\n\n==== Added\n\n\n==== Deprecated\n\n\n==== Known Issue\n\n"
},
{
"from": "gpt",
"value": "=== Apm-Server version HEAD\nhttps://github.com/elastic/apm-server/compare/x...master[View commits]\n\n==== Breaking changes\n\n\n==== Bugfixes\n*Bugfixes in fields.yml leading to ES schema changes*\n- changed `context.system.title` to `context.system.process_title`, removed `transaction.context`, `trace.context` (already available on top level). {pull}10[10]\n\n\n\n==== Added\n\n\n==== Deprecated\n\n\n==== Known Issue\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix paragraph name in Johnzon docs for Gitbook\n"
},
{
"from": "human",
"value": "[[Johnzon-Johnzon]]\nJohnzon\n~~~~~~~\n\n*Available as of Camel 2.18*\n\nJohnzon is a link:data-format.html[Data Format] which uses the\nhttp://johnzon.apache.org/[Johnzon Library]\n\n[source,java]\n-------------------------------\nfrom(\"activemq:My.Queue\").\n marshal().json(JsonLibrary.Johnzon).\n to(\"mqseries:Another.Queue\");\n-------------------------------\n\n[[JacksonXML-Dependencies]]\nDependencies\n^^^^^^^^^^^^\n\nTo use Johnzon in your camel routes you need to add the dependency\non *camel-johnzon* which implements this data format.\n\nIf you use maven you could just add the following to your pom.xml,\nsubstituting the version number for the latest & greatest release (see\nlink:download.html[the download page for the latest versions]).\n\n[source,xml]\n----------------------------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-johnzon</artifactId>\n <version>x.x.x</version>\n <!-- use the same version as your Camel core version -->\n</dependency>\n----------------------------------------------------------\n"
},
{
"from": "gpt",
"value": "[[Johnzon-Johnzon]]\nJohnzon\n~~~~~~~\n\n*Available as of Camel 2.18*\n\nJohnzon is a link:data-format.html[Data Format] which uses the\nhttp://johnzon.apache.org/[Johnzon Library]\n\n[source,java]\n-------------------------------\nfrom(\"activemq:My.Queue\").\n marshal().json(JsonLibrary.Johnzon).\n to(\"mqseries:Another.Queue\");\n-------------------------------\n\n[[Johnzon-Dependencies]]\nDependencies\n^^^^^^^^^^^^\n\nTo use Johnzon in your camel routes you need to add the dependency\non *camel-johnzon* which implements this data format.\n\nIf you use maven you could just add the following to your pom.xml,\nsubstituting the version number for the latest & greatest release (see\nlink:download.html[the download page for the latest versions]).\n\n[source,xml]\n----------------------------------------------------------\n<dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-johnzon</artifactId>\n <version>x.x.x</version>\n <!-- use the same version as your Camel core version -->\n</dependency>\n----------------------------------------------------------\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update build-process (use `buildCI` target)"
},
{
"from": "human",
"value": "= bitcoinj-addons Release Process\n\n== Main Release Process\n\n. Update `CHANGELOG.adoc`\n. Set versions\n.. `README.adoc` (check/set bitcoinj version variable, too)\n.. bitcoinj-dsl `ExtensionModule`\n.. `gradle.properties`\n. Commit version bump and changelog.\n. Full build, test\n.. `./gradlew clean jenkinsBuild install regTest`\n.. Recommended: test with *OmniJ* regTests.\n. Tag: `git tag -a v0.x.y -m \"Release 0.x.y\"`\n. Push: `git push --tags origin master`\n. Publish to Bintray:\n.. `./gradlew bintrayUpload`\n.. Confirm publish of artifacts in Bintray Web UI.\n. Update github-pages site (including JavaDoc): `./gradlew publishSite`\n\n== Announcements\n\n. Not yet.\n\n== After release\n\n. Set versions back to -SNAPSHOT\n.. `gradle.properties`\n.. bitcoinj-groovy `ExtensionModule`\n.. *Not* `README.adoc` -- it should match release version\n. Commit and push to master\n\n\n\n"
},
{
"from": "gpt",
"value": "= bitcoinj-addons Release Process\n\n== Main Release Process\n\n. Update `CHANGELOG.adoc`\n. Set versions\n.. `README.adoc` (check/set bitcoinj version variable, too)\n.. bitcoinj-dsl `ExtensionModule`\n.. `gradle.properties`\n. Commit version bump and changelog.\n. Full build, test\n.. `./gradlew clean buildCI install regTest`\n.. Recommended: test with *OmniJ* regTests.\n. Tag: `git tag -a v0.x.y -m \"Release 0.x.y\"`\n. Push: `git push --tags origin master`\n. Publish to Bintray:\n.. `./gradlew bintrayUpload`\n.. Confirm publish of artifacts in Bintray Web UI.\n. Update github-pages site (including JavaDoc): `./gradlew publishSite`\n\n== Announcements\n\n. Not yet.\n\n== After release\n\n. Set versions back to -SNAPSHOT\n.. `gradle.properties`\n.. bitcoinj-groovy `ExtensionModule`\n.. *Not* `README.adoc` -- it should match release version\n. Commit and push to master\n\n\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "[DOCS] Relocate tip for Fleet APIs\n"
},
{
"from": "human",
"value": "[role=\"xpack\"]\n[[fleet-apis]]\n\nTIP: For the {kib} {fleet} APIs, see the\n{fleet-guide}/fleet-api-docs.html[`Fleet API Documentation`].\n\n== Fleet APIs\n\nThe following APIs support {fleet}'s use of {es} as a data store for internal\nagent and action data. These APIs are experimental and for internal use by\n{fleet} only.\n\n* <<get-global-checkpoints,Get global checkpoints>>\n\n// top-level\ninclude::get-global-checkpoints.asciidoc[]\n"
},
{
"from": "gpt",
"value": "[role=\"xpack\"]\n[[fleet-apis]]\n== Fleet APIs\n\nTIP: For the {kib} {fleet} APIs, see the\n{fleet-guide}/fleet-api-docs.html[Fleet API Documentation].\n\nThe following APIs support {fleet}'s use of {es} as a data store for internal\nagent and action data. These APIs are experimental and for internal use by\n{fleet} only.\n\n* <<get-global-checkpoints,Get global checkpoints>>\n\n// top-level\ninclude::get-global-checkpoints.asciidoc[]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add the known kanban boards\n\n\r\n"
},
{
"from": "human",
"value": "= Report an issue\n:awestruct-layout: normalBase\n:showtitle:\n\n== Issue tracker\n\nWe welcome issue reports (bugs, improvements, new feature requests, ...) in our issue tracker:\n\n*Show https://issues.jboss.org/browse/drools[the JIRA issue tracker].*\n\nLog in and click on the button _Create Issue_ to report a bug, improvement or feature request.\n\n== Pull requests on GitHub\n\nWant to fix the issue yourself? Fork https://github.com/droolsjbpm[the git repository] and send in a pull request.\nWe usually process all pull requests within a few days.\n"
},
{
"from": "gpt",
"value": "= Report an issue\n:awestruct-layout: normalBase\n:showtitle:\n\n== Issue tracker\n\nWe welcome issue reports (bugs, improvements, new feature requests, ...) in our issue tracker:\n\n*Show https://issues.jboss.org/browse/drools[the JIRA issue tracker].*\n\nLog in and click on the button _Create Issue_ to report a bug, improvement or feature request.\n\n== Pull requests on GitHub\n\nWant to fix the issue yourself? Fork https://github.com/droolsjbpm[the git repository] and send in a pull request.\nWe usually process all pull requests within a few days.\n\n== Kanban boards\n\n* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=4016[Drools]\n* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3828[OptaPlanner]\n* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3972[jBPM]\n* AppFormer (todo)\n* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3838[Designer.NEXT]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove definition of unnecessary ProductName attribute in docs\n"
},
{
"from": "human",
"value": "// set attributes usually set by Antora\nifndef::site-gen-antora[]\n:moduledir: ..\n:attachmentsdir: {moduledir}/assets/attachments\n:examplesdir: {moduledir}/examples\n:imagesdir: {moduledir}/assets/images\n:partialsdir: {moduledir}/pages/_partials\nendif::[]\n\n:ProductName: Debezium\n:debezium-version: 1.0.0.Final\n:debezium-dev-version: 1.1\n:debezium-kafka-version: 2.4.0\n:debezium-docker-label: 1.0\n:install-version: 1.0\n:confluent-platform-version: 5.3.1\n:strimzi-version: 0.13.0\n:jira-url: https://issues.redhat.com\n:prodname: Debezium\n:assemblies: ../assemblies\n:modules: ../../modules\n\n:mysql-connector-plugin-download: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.0.0.Final/debezium-connector-mysql-1.0.0.Final-plugin.tar.gz\n\n:mysql-version: 8.0\n"
},
{
"from": "gpt",
"value": "// set attributes usually set by Antora\nifndef::site-gen-antora[]\n:moduledir: ..\n:attachmentsdir: {moduledir}/assets/attachments\n:examplesdir: {moduledir}/examples\n:imagesdir: {moduledir}/assets/images\n:partialsdir: {moduledir}/pages/_partials\nendif::[]\n\n:debezium-version: 1.0.0.Final\n:debezium-dev-version: 1.1\n:debezium-kafka-version: 2.4.0\n:debezium-docker-label: 1.0\n:install-version: 1.0\n:confluent-platform-version: 5.3.1\n:strimzi-version: 0.13.0\n:jira-url: https://issues.redhat.com\n:prodname: Debezium\n:assemblies: ../assemblies\n:modules: ../../modules\n\n:mysql-connector-plugin-download: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.0.0.Final/debezium-connector-mysql-1.0.0.Final-plugin.tar.gz\n\n:mysql-version: 8.0\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Fix broken xrefs that use the doc-guides attribute\n"
},
{
"from": "human",
"value": "// Allow examples to render correctly in previews despite being\n// a nested directory\n:idprefix:\n:idseparator: -\n:icons: font\n:doc-guides: ../\n:doc-examples: ../_examples\n:imagesdir: ../../asciidoc/images\n:includes: ../includes\n:root: ../../asciidoc/\n"
},
{
"from": "gpt",
"value": "// Allow examples to render correctly in previews despite being\n// a nested directory\n:idprefix:\n:idseparator: -\n:icons: font\n:doc-guides: ..\n:doc-examples: ../_examples\n:imagesdir: ../../asciidoc/images\n:includes: ../includes\n:root: ../../asciidoc/\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Comment out ref to API reference page\n"
},
{
"from": "human",
"value": "[id=\"security-container-content\"]\n= Securing container content\ninclude::modules/common-attributes.adoc[]\n:context: security-container-content\n\ntoc::[]\n\nTo ensure the security of the content inside your containers\nyou need to start with trusted base images, such as Red Hat\nUniversal Base Images, and add trusted software. To check the\nongoing security of your container images, there are both\nRed Hat and third-party tools for scanning images.\n\n// Security inside the container\ninclude::modules/security-container-content-inside.adoc[leveloffset=+1]\n\n// Red Hat Universal Base Images\ninclude::modules/security-container-content-universal.adoc[leveloffset=+1]\n\n// Container content scanning\ninclude::modules/security-container-content-scanning.adoc[leveloffset=+1]\n\n// Integrating external scanning tools with OpenShift\ninclude::modules/security-container-content-external-scanning.adoc[leveloffset=+1]\n.Additional resources\n* xref:../../openshift_images/images-understand.adoc#images-imagestream-use_images-understand[Image stream objects]\n* xref:../../rest_api/index.adoc#rest-api[{product-title} {product-version} REST APIs]\n"
},
{
"from": "gpt",
"value": "[id=\"security-container-content\"]\n= Securing container content\ninclude::modules/common-attributes.adoc[]\n:context: security-container-content\n\ntoc::[]\n\nTo ensure the security of the content inside your containers\nyou need to start with trusted base images, such as Red Hat\nUniversal Base Images, and add trusted software. To check the\nongoing security of your container images, there are both\nRed Hat and third-party tools for scanning images.\n\n// Security inside the container\ninclude::modules/security-container-content-inside.adoc[leveloffset=+1]\n\n// Red Hat Universal Base Images\ninclude::modules/security-container-content-universal.adoc[leveloffset=+1]\n\n// Container content scanning\ninclude::modules/security-container-content-scanning.adoc[leveloffset=+1]\n\n// Integrating external scanning tools with OpenShift\ninclude::modules/security-container-content-external-scanning.adoc[leveloffset=+1]\n.Additional resources\n* xref:../../openshift_images/images-understand.adoc#images-imagestream-use_images-understand[Image stream objects]\n// * xref::../../rest_api/index.adoc#rest-api[{product-title} {product-version} REST APIs]\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add the Efene plugin to the list\n"
},
{
"from": "human",
"value": "[[plugins_list]]\n== List of plugins\n\nThis is a non-exhaustive list of Erlang.mk plugins, sorted\nalphabetically.\n\n=== elixir.mk\n\nAn https://github.com/botsunit/elixir.mk[Elixir plugin] for\nErlang.mk. http://elixir-lang.org/[Elixir] is an alternative\nlanguage for the BEAM.\n\n=== elvis.mk\n\nAn https://github.com/inaka/elvis.mk[Elvis plugin] for Erlang.mk.\nElvis is an https://github.com/inaka/elvis[Erlang style reviewer].\n\n=== geas\n\nhttps://github.com/crownedgrouse/geas[Geas] gives aggregated\ninformation on a project and its dependencies, and is available\nas an Erlang.mk plugin.\n\n=== hexer.mk\n\nAn https://github.com/inaka/hexer.mk[Hex plugin] for Erlang.mk.\nHex is a https://hex.pm/[package manager for the Elixir ecosystem].\n\n=== lfe.mk\n\nAn https://github.com/ninenines/lfe.mk[LFE plugin] for Erlang.mk.\nLFE, or http://lfe.io/[Lisp Flavoured Erlang], is an alternative\nlanguage for the BEAM.\n\n=== reload.mk\n\nA https://github.com/bullno1/reload.mk[live reload plugin] for Erlang.mk.\n"
},
{
"from": "gpt",
"value": "[[plugins_list]]\n== List of plugins\n\nThis is a non-exhaustive list of Erlang.mk plugins, sorted\nalphabetically.\n\n=== efene.mk\n\nAn https://github.com/ninenines/efene.mk[Efene plugin] for Erlang.mk.\nhttp://efene.org/[Efene] is an alternative language for the BEAM.\n\n=== elixir.mk\n\nAn https://github.com/botsunit/elixir.mk[Elixir plugin] for\nErlang.mk. http://elixir-lang.org/[Elixir] is an alternative\nlanguage for the BEAM.\n\n=== elvis.mk\n\nAn https://github.com/inaka/elvis.mk[Elvis plugin] for Erlang.mk.\nElvis is an https://github.com/inaka/elvis[Erlang style reviewer].\n\n=== geas\n\nhttps://github.com/crownedgrouse/geas[Geas] gives aggregated\ninformation on a project and its dependencies, and is available\nas an Erlang.mk plugin.\n\n=== hexer.mk\n\nAn https://github.com/inaka/hexer.mk[Hex plugin] for Erlang.mk.\nHex is a https://hex.pm/[package manager for the Elixir ecosystem].\n\n=== lfe.mk\n\nAn https://github.com/ninenines/lfe.mk[LFE plugin] for Erlang.mk.\nLFE, or http://lfe.io/[Lisp Flavoured Erlang], is an alternative\nlanguage for the BEAM.\n\n=== reload.mk\n\nA https://github.com/bullno1/reload.mk[live reload plugin] for Erlang.mk.\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Simplify command to change managementState\n\nThe working example shows the image registry operator config object, but the command to change the ManagementState by editing it is not provided. As a reference, this would suffice: `oc edit configs.imageregistry.operator.openshift.io`. However, a simpler instruction for the user would be to patch in place, command provided by this proposed commit.\n"
},
{
"from": "human",
"value": "// Module included in the following assemblies:\n//\n// * installing/installing_bare_metal/installing-bare-metal.adoc\n// * installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc\n// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc\n// * installing/installing_vsphere/installing-vsphere.adoc\n// * registry/configuring-registry-storage/configuring-registry-storage-baremetal.adoc\n// * registry/configuring-registry-storage/configuring-registry-storage-vsphere.adoc\n\n[id=\"registry-change-management-state_{context}\"]\n= Change image registry ManagementState\n\nTo start the image registry, you must change `ManagementState` Image Registry Operator configuration from `Removed` to `Managed`.\n\n.Procedure\n\n* Change `ManagementState` Image Registry Operator configuration from `Removed` to `Managed`. For example:\n+\n[source,yaml]\n----\napiVersion: imageregistry.operator.openshift.io/v1\nkind: Config\nmetadata:\n creationTimestamp: <time>\n finalizers:\n - imageregistry.operator.openshift.io/finalizer\n generation: 3\n name: cluster\n resourceVersion: <version>\n selfLink: <link>\nspec:\n readOnly: false\n disableRedirect: false\n requests:\n read:\n maxInQueue: 0\n maxRunning: 0\n maxWaitInQueue: 0s\n write:\n maxInQueue: 0\n maxRunning: 0\n maxWaitInQueue: 0s\n defaultRoute: true\n managementState: Managed\n----\n"
},
{
"from": "gpt",
"value": "// Module included in the following assemblies:\n//\n// * installing/installing_bare_metal/installing-bare-metal.adoc\n// * installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc\n// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc\n// * installing/installing_vsphere/installing-vsphere.adoc\n// * registry/configuring_registry_storage/configuring-registry-storage-baremetal.adoc\n// * registry/configuring_registry_storage/configuring-registry-storage-vsphere.adoc\n// * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc\n\n[id=\"registry-change-management-state_{context}\"]\n= Changing the image registry's management state\n\nTo start the image registry, you must change the Image Registry Operator configuration's `managementState` from `Removed` to `Managed`.\n\n.Procedure\n\n* Change `managementState` Image Registry Operator configuration from `Removed` to `Managed`. For example:\n+\n[source,terminal]\n----\n$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add documentation on building uberjars\n"
},
{
"from": "human",
"value": "= Packed deployments\n\nWe use the term 'packed deployment' to mean that everything required at runtime is built (if necessary) and packed into an archive, ready for deployment and execution in the target operating environment.\n\nIt is common to create 'uberjars' containing pre-compiled Clojure code, togther with dependent pre-compiled library code from Clojure and other JVM languages.\n\nAdvantages of this approach include:\n\n* Start up time is fast, ideal for AWS Lambdas and for coping with sudden spikes in traffic, for example, using AWS auto-scaling groups.\n* Immutable\n\nDisadvantages include:\n\n* Uberjars can be large, meaning they _can_ be costly to build, store and transfer across network links.\n* Lossy uberjar build process, possible licensing issues and ambiguities.\n* Require a full redeploy on every change\n* Distance between dev and prod\n\nNOTE: Coming soon, documentation about integration with `pack`.\n"
},
{
"from": "gpt",
"value": "= Packed deployments\n\nWe use the term 'packed deployment' to mean that everything required at runtime is built (if necessary) and packed into an archive, ready for deployment and execution in the target operating environment.\n\nIt is common to create 'uberjars' containing pre-compiled Clojure code, togther with dependent pre-compiled library code from Clojure and other JVM languages.\n\nAdvantages of this approach include:\n\n* Start up time is fast, ideal for AWS Lambdas and for coping with sudden spikes in traffic, for example, using AWS auto-scaling groups.\n* Immutable\n\nDisadvantages include:\n\n* Uberjars can be large, meaning they _can_ be costly to build, store and transfer across network links.\n* Lossy uberjar build process, possible licensing issues and ambiguities.\n* Require a full redeploy on every change\n* Distance between dev and prod\n\n== Creating an uberjar\n\nFrom the top-level directory, run the following:\n\n[source]\n----\nedge$ bin/uberjar main\n----\n\nThis creates an uberjar named `main.jar`.\n\nNOTE: By default, uberjars are built using the https://github.com/juxt/pack.alpha#capsule[*capsule* strategy] in\nJUXT's https://github.com/juxt/pack.alpha[pack] tool.\n\n\n== Running the uberjar\n\n[source]\n----\nedge$ java -jar main.jar\n----\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Add note about supported Spring Boot plugin version\n"
},
{
"from": "human",
"value": "== Spring Boot Application Plugin\n\nThe plugin `com.bmuschko.docker-spring-boot-application` is a highly opinionated plugin for projects applying the https://docs.spring.io/spring-boot/docs/current/reference/html/build-tool-plugins-gradle-plugin.html[Spring Boot plugin].\nUnder the covers the plugin preconfigures tasks for creating and pushing Docker images for your Spring Boot application.\nThe default configuration is tweakable via an exposed extension. The plugin https://docs.spring.io/spring-boot/docs/current/gradle-plugin/reference/html/#reacting-to-other-plugins[reacts] to either the `java` or `war` plugin.\n\ninclude::31-usage.adoc[]\n\ninclude::32-extension.adoc[]\n\ninclude::33-tasks.adoc[]\n\ninclude::34-examples.adoc[]"
},
{
"from": "gpt",
"value": "== Spring Boot Application Plugin\n\nThe plugin `com.bmuschko.docker-spring-boot-application` is a highly opinionated plugin for projects applying the https://docs.spring.io/spring-boot/docs/current/reference/html/build-tool-plugins-gradle-plugin.html[Spring Boot plugin].\nUnder the covers the plugin preconfigures tasks for creating and pushing Docker images for your Spring Boot application.\nThe default configuration is tweakable via an exposed extension. The plugin https://docs.spring.io/spring-boot/docs/current/gradle-plugin/reference/html/#reacting-to-other-plugins[reacts] to either the `java` or `war` plugin.\n\n[IMPORTANT]\nThe plugin only supports projects that use a 2.x version of the Spring Boot plugin.\n\ninclude::31-usage.adoc[]\n\ninclude::32-extension.adoc[]\n\ninclude::33-tasks.adoc[]\n\ninclude::34-examples.adoc[]"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Document package migration in release notes\n\nIssue: #311\n"
},
{
"from": "human",
"value": "=== 5.0.0-M1\n\n*Date of Release:* June 30, 2016\n\n*Scope:* First milestone release of JUnit 5\n\n==== Summary of Changes\n\n===== JUnit Platform\n\n- `org.junit.platform.console.ConsoleRunner` has been renamed to `ConsoleLauncher`\n\n===== JUnit Jupiter\n\n- `ExtensionContext.getElement()` now returns `Optional<AnnotatedElement>`.\n\n===== JUnit Vintage\n\n- ???\n"
},
{
"from": "gpt",
"value": "=== 5.0.0-M1\n\n*Date of Release:* June 30, 2016\n\n*Scope:* First milestone release of JUnit 5\n\n==== Summary of Changes\n\nThe following is a list of global changes. For details regarding changes specific to the\nPlatform, Jupiter, and Vintage, consult the dedicated subsections.\n\n.Package Migration\n[cols=\"20,80\"]\n|===\n| Old Base Package | New Base Package\n\n| `org.junit.gen5.api` | `org.junit.jupiter.api`\n| `org.junit.gen5.commons` | `org.junit.platform.commons`\n| `org.junit.gen5.console` | `org.junit.platform.console`\n| `org.junit.gen5.engine.junit4` | `org.junit.vintage.engine`\n| `org.junit.gen5.engine.junit5` | `org.junit.jupiter.engine`\n| `org.junit.gen5.engine` | `org.junit.platform.engine`\n| `org.junit.gen5.gradle` | `org.junit.platform.gradle.plugin`\n| `org.junit.gen5.junit4.runner` | `org.junit.platform.runner`\n| `org.junit.gen5.launcher` | `org.junit.platform.launcher`\n| `org.junit.gen5.surefire` | `org.junit.platform.surefire.provider`\n|===\n\n===== JUnit Platform\n\n- `org.junit.platform.console.ConsoleRunner` renamed to `ConsoleLauncher`\n\n===== JUnit Jupiter\n\n- `ExtensionContext.getElement()` now returns `Optional<AnnotatedElement>`\n\n===== JUnit Vintage\n\n- ???\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "docs: Remove Ivan as doc author (sorry Ivan, old asciidoc only supports one author)."
},
{
"from": "human",
"value": "IronBee Reference Manual\n========================\nBrian Rectanus, Ivan Ristic\nv0.9.0, 2010-2014\n:doctype: book\n:encoding: utf-8\n:toc2:\n:toclevels: 3\n\ninclude::ch-preface.adoc[]\n\ninclude::ch-introduction.adoc[]\n\ninclude::ch-server-configuration.adoc[]\n\ninclude::ch-ironbee-configuration.adoc[]\n\ninclude::ch-diagnostics-devel-tools.adoc[]\n\ninclude::ch-inspection.adoc[]\n\ninclude::ch-rule-reference.adoc[]\n\ninclude::ch-extending-ironbee.adoc[]\n\ninclude::ch-installation-guide.adoc[]\n\ninclude::ap-configuration-examples.adoc[]\n\ninclude::ap-todo.adoc[]\n\ninclude::ap-comparison-with-modsecurity.adoc[]\n\n"
},
{
"from": "gpt",
"value": "IronBee Reference Manual\n========================\nBrian Rectanus\nv0.9.0, 2010-2014\n:doctype: book\n:encoding: utf-8\n:toc2:\n:toclevels: 3\n\ninclude::ch-preface.adoc[]\n\ninclude::ch-introduction.adoc[]\n\ninclude::ch-server-configuration.adoc[]\n\ninclude::ch-ironbee-configuration.adoc[]\n\ninclude::ch-diagnostics-devel-tools.adoc[]\n\ninclude::ch-inspection.adoc[]\n\ninclude::ch-rule-reference.adoc[]\n\ninclude::ch-extending-ironbee.adoc[]\n\ninclude::ch-installation-guide.adoc[]\n\ninclude::ap-configuration-examples.adoc[]\n\ninclude::ap-todo.adoc[]\n\ninclude::ap-comparison-with-modsecurity.adoc[]\n\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update What's New for 5.8\n"
},
{
"from": "human",
"value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts\n* https://github.com/spring-projects/spring-security/pull/11464[gh-11464] - Remember Me supports SHA256 algorithm\n"
},
{
"from": "gpt",
"value": "[[new]]\n= What's New in Spring Security 5.8\n\nSpring Security 5.8 provides a number of new features.\nBelow are the highlights of the release.\n\n* https://github.com/spring-projects/spring-security/pull/11638[gh-11638] - Refresh remote JWK when unknown KID error occurs\n* https://github.com/spring-projects/spring-security/pull/11782[gh-11782] - @WithMockUser Supported as Merged Annotation\n* https://github.com/spring-projects/spring-security/issues/11661[gh-11661] - Configurable authentication converter for resource-servers with token introspection\n* https://github.com/spring-projects/spring-security/pull/11771[gh-11771] - `HttpSecurityDsl` should support `apply` method\n* https://github.com/spring-projects/spring-security/pull/11232[gh-11232] - `ClientRegistrations#rest` defines 30s connect and read timeouts\n* https://github.com/spring-projects/spring-security/pull/11464[gh-11464] - Remember Me supports SHA256 algorithm\n* https://github.com/spring-projects/spring-security/pull/11908[gh-11908] - Make X-Xss-Protection header value configurable in ServerHttpSecurity\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Remove reference to git, prefer -b in further statement\n"
},
{
"from": "human",
"value": "= Full Ergodox EZ build environment for Void Linux\n\nCreate a chroot like so:\n\n[source]\nxbps-install -S -R https://repo.voidlinux.eu/current -r /tmp/foo base-voidstrap\n\nThen you can run.\n\nINFO: /tmp/foo here must be absolute. Limitation of xbps-uunshare.\n\n[source]\nxbps-uunshare /tmp/foo xbps-install -- -S gcc wget unzip dfu-programmer zip make teensy_loader_cli git avr-gcc avr-libc\n\nYou can fetch qmk like so:\n\n[source]\nxbps-uunshare /tmp/foo git -- clone --depth 1 https://github.com/qmk/qmk_firmware.git\n\nBuild with\n\n[source]\nxbps-uunshare /tmp/foo /bin/sh -- -c '(cd /qmk_firmware && make ergodox-ez-default-teensy)'\n"
},
{
"from": "gpt",
"value": "= Full Ergodox EZ build environment for Void Linux\n\nCreate a chroot like so:\n\n[source]\nxbps-install -S -R https://repo.voidlinux.eu/current -r /tmp/foo base-voidstrap\n\nThen you can run.\n\nINFO: /tmp/foo here must be absolute. Limitation of xbps-uunshare.\n\n[source]\nxbps-uunshare /tmp/foo xbps-install -- -S gcc wget unzip dfu-programmer zip make teensy_loader_cli avr-gcc avr-libc\n\nYou can fetch qmk like so:\n\n[source]\nxbps-uunshare /tmp/foo git -- clone --depth 1 https://github.com/qmk/qmk_firmware.git\n\nBuild with\n\n[source]\nxbps-uunshare /tmp/foo /bin/sh -- -c '(cd /qmk_firmware && make ergodox-ez-default-teensy)'\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Document `--details verbose` stacktrace emission fix\n"
},
{
"from": "human",
"value": "[[release-notes-5.3.0-M1]]\n== 5.3.0-M1\n\n*Date of Release:* β\n\n*Scope:* β\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult the\nlink:{junit5-repo}+/milestone/23?closed=1+[5.3 M1] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.3.0-M1-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.3.0-M1-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* New `assertThrows` methods in `Assertions` provide a more specific failure\n message if the lambda returns a result instead of throwing the expected\n exception.\n\n\n[[release-notes-5.3.0-M1-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n"
},
{
"from": "gpt",
"value": "[[release-notes-5.3.0-M1]]\n== 5.3.0-M1\n\n*Date of Release:* β\n\n*Scope:* β\n\nFor a complete list of all _closed_ issues and pull requests for this release, consult the\nlink:{junit5-repo}+/milestone/23?closed=1+[5.3 M1] milestone page in the JUnit repository\non GitHub.\n\n\n[[release-notes-5.3.0-M1-junit-platform]]\n=== JUnit Platform\n\n==== Bug Fixes\n\n* Full stacktrace is printed to the console, when running the `ConsoleLauncher`\n in `--details verbose` mode.\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n\n\n[[release-notes-5.3.0-M1-junit-jupiter]]\n=== JUnit Jupiter\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* New `assertThrows` methods in `Assertions` provide a more specific failure\n message if the lambda returns a result instead of throwing the expected\n exception.\n\n\n[[release-notes-5.3.0-M1-junit-vintage]]\n=== JUnit Vintage\n\n==== Bug Fixes\n\n* β\n\n==== Deprecations and Breaking Changes\n\n* β\n\n==== New Features and Improvements\n\n* β\n"
}
] | asciidoc | chosen |
[
{
"from": "system",
"value": "Update upgrade notes regarding Hibernate 5.2\n"
},
{
"from": "human",
"value": "==== Changes to Configuration Model\n\nIn preparation for Hibernate 5.2 support the previous \"SessionFactoryBean\" notion has been removed. Now if you wish to customize `SessionFactory` creation you should instead register a custom `org.grails.orm.hibernate.connections.HibernateConnectionSourceFactory` in Spring.\n\n\n"
},
{
"from": "gpt",
"value": "==== Changes to Configuration Model\n\nIn preparation for Hibernate 5.2 support the previous \"SessionFactoryBean\" notion has been removed. Now if you wish to customize `SessionFactory` creation you should instead register a custom `org.grails.orm.hibernate.connections.HibernateConnectionSourceFactory` in Spring.\n\n==== IdentityEnumType Handling Changed\n\nPrevious versions of GORM shipped with a `org.grails.orm.hibernate.cfg.IdentityEnumType` class for altering the handling of enums. In order to support different versions of Hibernate 5.x which feature different signatures for the `org.hibernate.usertype.UserType` interface this class has been removed.\n\nIf you wish to obtain the same functionality you need to change your `mapping` block to:\n\n[source,groovy]\n----\nstatic mapping = {\n myEnum enumType:\"identity\"\n}\n----\n\n==== Changes to Support Hibernate 5.2\n\nHibernate 5.2 includes many breaking API changes, in order to support Hibernate 5.2 several classes have been removed or rewritten. Including:\n\n* `org.grails.orm.hibernate.proxy.GroovyAwareJavassistLazyInitializer`\n* `org.grails.orm.hibernate.proxy.GroovyAwareJavassistProxyFactory`\n* `org.grails.orm.hibernate.persister.entity.GroovyAwareJoinedSubclassEntityPersister`\n* `org.grails.orm.hibernate.persister.entity.GroovyAwareSingleTableEntityPersister`\n\nMost of these classes are considered internal, however if you have extended or references these classes you may need to modify your code appropriately."
}
] | asciidoc | chosen |