question | answer | tag | question_id | score |
---|---|---|---|---|
I want to use AWS CDK to define an API Gateway and a lambda that the APIG will proxy to.
The OpenAPI spec supports a x-amazon-apigateway-integration custom extension to the Swagger spec (detailed here), for which an invocation URL of the lambda is required. If the lambda is defined in the same stack as the API, I don't see how to provide this in the OpenAPI spec. The best I can think of would be to define one stack with the lambda in, then get the output from this and run sed to do a find-and-replace in the OpenAPI spec to insert the uri, then create a second stack with this modified OpenAPI spec.
Example:
/items:
post:
x-amazon-apigateway-integration:
uri: "arn:aws:apigateway:eu-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-2:123456789012:function:MyStack-SingletonLambda4677ac3018fa48679f6-B1OYQ50UIVWJ/invocations"
passthroughBehavior: "when_no_match"
httpMethod: "POST"
type: "aws_proxy"
Q1. This seems like a chicken-and-egg problem, is the above the only way to do this?
I tried to use the defaultIntegration property of the SpecRestApi CDK construct. The documentation states:
An integration to use as a default for all methods created within this
API unless an integration is specified.
This seems like I should be able to define a default integration using a lambda defined in the CDK code and therefore have all methods use this integration, without needing to know the uri of the lambda in advance.
Thus I tried this:
SingletonFunction myLambda = ...
SpecRestApi openapiRestApi = SpecRestApi.Builder.create(this, "MyApi")
.restApiName("MyApi")
.apiDefinition(ApiDefinition.fromAsset("openapi.yaml"))
.defaultIntegration(LambdaIntegration.Builder.create(myLambda)
.proxy(false)
.build())
.deploy(true)
.build();
The OpenAPI spec defined in openapi.yaml does not include a x-amazon-apigateway-integration stanza; it just has a single GET method defined within a standard OpenApi 3 specification.
However, when I try to deploy this, I get an error:
No integration defined for method (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; Request ID: 56113150-1460-4ed2-93b9-a12618864582)
This seems like a bug, so I filed one here.
Q2. How do I define an API Gateway and Lambda using CDK and wire the two together via an OpenAPI spec?
| There is an existing workaround. Here is how:
Your OpenAPI file has to look like this:
openapi: "3.0.1"
info:
title: "The Super API"
description: "API to do super things"
version: "2019-09-09T12:56:55Z"
servers:
- url: ""
variables:
basePath:
default:
Fn::Sub: ${ApiStage}
paths:
/path/subpath:
get:
parameters:
- name: "Password"
in: "header"
schema:
type: "string"
responses:
200:
description: "200 response"
content:
application/json:
schema:
$ref: "#/components/schemas/UserConfigResponseModel"
security:
- sigv4: []
x-amazon-apigateway-integration:
uri:
Fn::Sub: "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MySuperLambda.Arn}/invocations"
responses:
default:
statusCode: "200"
requestTemplates:
application/json: "{blablabla}"
passthroughBehavior: "when_no_match"
httpMethod: "POST"
type: "aws"
As you can see, this OpenAPI template refers to ApiStage, AWS::Region and MySuperLambda.Arn.
The associated cdk file contains the following:
// To pass external string, nothing better than this hacky solution:
const ApiStage = new CfnParameter(this, 'ApiStage',{type: 'String', default: props.ApiStage})
ApiStage.overrideLogicalId('ApiStage')
Here the ApiStage is used in props. It allows me to pass it to the cdk app with an environment variable during the CI for example.
const MySuperLambda = new lambda.Function(this, 'MySuperLambda', {
functionName: "MySuperLambda",
description: "Hello world",
runtime: lambda.Runtime.PYTHON_3_7,
code: lambda.Code.asset(lambda_asset),
handler: "MySuperLambda.lambda_handler",
timeout: cdk.Duration.seconds(30),
memorySize: 128,
role: MySuperLambdaRole
});
const forceLambdaId = MySuperLambda.node.defaultChild as lambda.CfnFunction
forceLambdaId.overrideLogicalId('MySuperLambda')
Here, as previously, I'm forcing CDK to override the logical ids so that I know the id before deployment. Otherwise, cdk adds a suffix to the logical ids.
const asset = new Asset(this, 'SampleAsset', {
path: './api-gateway-definitions/SuperAPI.yml',
});
This allows me to upload the OpenAPI file directly on to the cdk bucket (no need to create a new one, this is amazing).
const data = Fn.transform('AWS::Include', {'Location': asset.s3ObjectUrl})
This is part of Cloudformation magic. This is where Fn::Sub and Fn::GetAtt are interpreted. I could not manage to make it work with !Ref function.
const SuperApiDefinition = apigateway.AssetApiDefinition.fromInline(data)
Create an api definition from the previously read file.
const sftpApiGateway = new apigateway.SpecRestApi(this, 'superAPI', {
apiDefinition: SuperApiDefinition,
deploy: false
})
Finally, create the SpecRestApi.
Run it and, magic, this is working. You may still encounter 400 errors, probably because of an incorrect format in your OpenAPI file (and don't use !Ref).
Would I recommend this?
Meh.
This is pretty much a workaround. It is really useful if you want to use the OpenAPI format with dynamic variables, within your CI. Without much effort, you can deploy in dev and prod, just by switching 1 environment variable.
However, this feels really hacky and does not seem to fit the CDK philosophy. This is what I'm currently using for deployment but this will probably change in the future. I believe a real templating solution could be a better fit here, but right now, I haven't really thought about it.
| OpenAPI | 62,179,893 | 25 |
I'm trying to add an object in an array, but this seems not to be possible. I've tried the following, but I always get the error:
Property Name is not allowed.
This is shown for all items defined in the devices array. How can I define items in an array in OpenAPI?
/demo/:
post:
summary: Summary
requestBody:
description: Description.
required: true
content:
application/json:
schema:
type: object
properties:
Name:
type: string
Number:
type: array
items:
type: string
description:
type: string
type:
type: string
devices:
type: array
items:
Name:
type: string
descripiton:
type: string
Number:
type: integer
enabled:
type: boolean
required:
- Name
- Number
- devices
responses:
'201': # status code
description: Created.
'500':
description: Error.
'405':
description: Invalid body has been presented.
| You need to add two extra lines inside items to specify that the item type is an object:
devices:
type: array
items:
type: object # <----------
properties: # <----------
Name:
type: string
descripiton:
type: string
Number:
type: integer
enabled:
type: boolean
| OpenAPI | 63,738,715 | 25 |
I am using Dotnet Core healthchecks as described here. In short, it looks like this:
First, you configure services like this:
services.AddHealthChecks()
.AddSqlServer("connectionString", name: "SQlServerHealthCheck")
... // Add multiple other checks
Then, you register an endpoint like this:
app.UseHealthChecks("/my/healthCheck/endpoint");
We are also using Swagger (aka Open API) and we see all the endpoints via Swagger UI, but not the health check endpoint.
Is there a way to add this to a controller method so that Swagger picks up the endpoint automatically, or maybe integrate it with swagger in another way?
The best solution I found so far is to add a custom hardcoded endpoint (like described here), but it is not nice to maintain.
| I used this approach and it worked well for me: https://www.codit.eu/blog/documenting-asp-net-core-health-checks-with-openapi
Add a new controller e.g. HealthController and inject the HealthCheckService into the constructor. The HealthCheckService is added as a dependency when you call AddHealthChecks in Startup.cs:
The HealthController should appear in Swagger when you rebuild:
[Route("api/v1/health")]
public class HealthController : Controller
{
private readonly HealthCheckService _healthCheckService;
public HealthController(HealthCheckService healthCheckService)
{
_healthCheckService = healthCheckService;
}
/// <summary>
/// Get Health
/// </summary>
/// <remarks>Provides an indication about the health of the API</remarks>
/// <response code="200">API is healthy</response>
/// <response code="503">API is unhealthy or in degraded state</response>
[HttpGet]
[ProducesResponseType(typeof(HealthReport), (int)HttpStatusCode.OK)]
[SwaggerOperation(OperationId = "Health_Get")]
public async Task<IActionResult> Get()
{
var report = await _healthCheckService.CheckHealthAsync();
return report.Status == HealthStatus.Healthy ? Ok(report) : StatusCode((int)HttpStatusCode.ServiceUnavailable, report);
}
}
One thing I noticed though is the endpoint is still "/health" (or whatever you set it to in Startup.cs) and not "/api/vxx/health" but it will still appear correctly in Swagger.
| OpenAPI | 54,362,223 | 24 |
I have the following SecurityScheme definition using springdoc-openapi for java SpringBoot RESTful app:
@Bean
public OpenAPI customOpenAPI() {
return new OpenAPI()
.components(new Components().addSecuritySchemes("bearer-jwt",
new SecurityScheme().type(SecurityScheme.Type.HTTP).scheme("bearer").bearerFormat("JWT")
.in(SecurityScheme.In.HEADER).name("Authorization")))
.info(new Info().title("App API").version("snapshot"));
}
Is it possible to apply it globally to all paths, without having to go and add @SecurityRequirement annotations to @Operation annotation everywhere in the code?
If it is, how to add exclusions to unsecured paths?
| Yes, you can do it in the same place calling addSecurityItem:
@Bean
public OpenAPI customOpenAPI() {
return new OpenAPI()
.components(new Components().addSecuritySchemes("bearer-jwt",
new SecurityScheme().type(SecurityScheme.Type.HTTP).scheme("bearer").bearerFormat("JWT")
.in(SecurityScheme.In.HEADER).name("Authorization")))
.info(new Info().title("App API").version("snapshot"))
.addSecurityItem(
new SecurityRequirement().addList("bearer-jwt", Arrays.asList("read", "write")));
}
A global security scheme can be overridden by a different one with the @SecurityRequirements annotation, including removing security schemes for an operation entirely. For example, we can remove security for the registration path.
@SecurityRequirements
@PostMapping("/registration")
public ResponseEntity post(@RequestBody @Valid Registration registration) {
return registrationService.register(registration);
}
While still keeping security schemas for other APIs.
Old answer (Dec 20 '19):
A global security scheme can be overridden by a different one with the @SecurityRequirements annotation, but it cannot be removed for unsecured paths. It is actually a missing feature in springdoc-openapi; the OpenAPI standard allows it. See disable global security for particular operation
There is a workaround though. The springdoc-openapi has a concept of an OpenApiCustomiser which can be used to intercept the generated schema. Inside the customizer, an operation can be modified programmatically. To remove any inherited security, the field security needs to be set to an empty array. The logic may be based on any arbitrary rules, e.g. the operation name. I used tags.
The customizer:
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.Operation;
import io.swagger.v3.oas.models.PathItem;
import org.springdoc.api.OpenApiCustomiser;
import org.springframework.stereotype.Component;
import javax.validation.constraints.NotNull;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@Component
public class SecurityOverrideCustomizer implements OpenApiCustomiser {
public static final String UNSECURED = "security.open";
private static final List<Function<PathItem, Operation>> OPERATION_GETTERS = Arrays.asList(
PathItem::getGet, PathItem::getPost, PathItem::getDelete, PathItem::getHead,
PathItem::getOptions, PathItem::getPatch, PathItem::getPut);
@Override
public void customise(OpenAPI openApi) {
openApi.getPaths().forEach((path, item) -> getOperations(item).forEach(operation -> {
List<String> tags = operation.getTags();
if (tags != null && tags.contains(UNSECURED)) {
operation.setSecurity(Collections.emptyList());
operation.setTags(filterTags(tags));
}
}));
}
private static Stream<Operation> getOperations(PathItem pathItem) {
return OPERATION_GETTERS.stream()
.map(getter -> getter.apply(pathItem))
.filter(Objects::nonNull);
}
private static List<String> filterTags(List<String> tags) {
return tags.stream()
.filter(t -> !t.equals(UNSECURED))
.collect(Collectors.toList());
}
}
Now we can add @Tag(name = SecurityOverrideCustomizer.UNSECURED) to unsecured methods:
@Tag(name = SecurityOverrideCustomizer.UNSECURED)
@GetMapping("/open")
@ResponseBody
public String open() {
return "It works!";
}
Please bear in mind that it is just a workaround. Hopefully, the issue will be resolved in the next springdoc-openapi versions (at the time of writing, the current version is 1.2.18).
For a working example see springdoc-security-override-fix
| OpenAPI | 59,357,205 | 24 |
Spring Boot 2.2 application with springdoc-openapi-ui (Swagger UI) runs HTTP port.
The application is deployed to Kubernetes with Ingress routing HTTPS requests from outside the cluster to the service.
In this case Swagger UI, available at https://example.com/api/swagger-ui.html, has the wrong "Generated server url" (http://example.com/api) when it should be https://example.com/api.
While Swagger UI is accessed by HTTPS, the generated server URL still uses HTTP.
| I had the same problem. The following worked for me.
@OpenAPIDefinition(
servers = {
@Server(url = "/", description = "Default Server URL")
}
)
@SpringBootApplication
public class App {
// ...
}
| OpenAPI | 60,625,494 | 24 |
One of my endpoints returns a JSON (not huge, around 2MB). Trying to run GET on this endpoint in swagger-ui results in the browser hanging for a few minutes. After this time, it finally displays the JSON.
Is there a way to define that the response shouldn't be rendered but provided as a file to download instead?
I'm using OpenAPI 3, and I tried the following:
content:
application/json:
schema:
type: string
format: binary
taken from the documentation. Still, swagger-ui renders the response.
Has anyone had the same problem?
| Lex45x proposes in this github issue to disable syntax highlighting. In ASP.Net Core you can do this with
app.UseSwaggerUI(config =>
{
config.ConfigObject.AdditionalItems["syntaxHighlight"] = new Dictionary<string, object>
{
["activated"] = false
};
});
This significantly improves render performance.
| OpenAPI | 63,615,230 | 24 |
I'm having trouble defining a reusable schema component using OpenAPI 3 which would allow for an array that contains multiple types. Each item type inherits from the same parent class but has specific child properties. This seems to work alright in the model view on SwaggerHub but the example view doesn't show the data correctly.
TLDR; Is there a way to define an array containing different object types in OpenAPI 3?
Response:
allOf:
- $ref: '#/components/schemas/BaseResponse'
- type: object
title: A full response
required:
- things
properties:
things:
type: array
items:
anyOf:
- $ref: '#/components/schemas/ItemOne'
- $ref: '#/components/schemas/ItemTwo'
- $ref: '#/components/schemas/ItemThree'
| Your spec is correct. It's just that example rendering for oneOf and anyOf schemas is not yet supported in Swagger UI. You can track this issue for status updates:
Multiple responses using oneOf attribute do not appear in UI
The workaround is to manually add an example alongside the oneOf/anyOf schema or to the parent schema:
things:
type: array
items:
anyOf:
- $ref: '#/components/schemas/ItemOne'
- $ref: '#/components/schemas/ItemTwo'
- $ref: '#/components/schemas/ItemThree'
# Note that array example is on the same
# level as `type: array`
example:
- foo: bar # Example of ItemOne
baz: qux
- "Hello, world" # Example of ItemTwo
- [4, 8, 15, 16, 23, 42] # Example of ItemThree
| OpenAPI | 47,656,791 | 23 |
Here it says I could refer to the definition in another file for an individual path, but the example seems to refer to a whole file, instead of a single path definition under the paths object. How to assign an individual path in another file's paths object?
For example, I have Anotherfile.yaml that contains the /a/b path:
paths:
/a/b:
post:
In another file, I use $ref to reference the /a/b path as follows:
paths:
/c/d:
$ref: 'Anotherfile.yaml#/paths/a/b'
but this gives an error:
Could not find paths/a/b in contents of ./Anotherfile.yaml
| When referencing paths, you need to escape the path name by replacing / with ~1, so that /a/b becomes ~1a~1b. Note that you escape just the path name and not the #/paths/ prefix.
$ref: 'Anotherfile.yaml#/paths/~1a~1b'
| OpenAPI | 58,909,124 | 23 |
Is it possible to group multiple parameters to reference them in multiple routes?
For example I have a combination of parameters which I need in every route. They are defined as global parameters. How can I group them?
I think about a definition like this:
parameters:
MetaDataParameters:
# Meta Data Properties
- name: id
in: query
description: Entry identification number
required: false
type: integer
- name: time_start
in: query
description: Start time of flare
required: false
type: string
- name: nar
in: query
description: Active region number
required: false
type: string
And then reference the whole group in my route:
/test/:
get:
tags:
- TEST
operationId: routes.test
parameters:
- $ref: "#/parameters/MetaDataParameters"
responses:
200:
description: OK
Is this possible with Swagger 2.0?
| This is not possible with Swagger 2.0, OpenAPI 3.0 or 3.1. I've opened a feature request for this and it is proposed for a future version:
https://github.com/OAI/OpenAPI-Specification/issues/445
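A minimal sketch of the closest supported alternative, reusing the parameter definitions from the question but declaring and referencing each shared parameter individually (the grouping itself still has to be repeated per operation):
parameters:
  id:
    name: id
    in: query
    description: Entry identification number
    required: false
    type: integer
  time_start:
    name: time_start
    in: query
    description: Start time of flare
    required: false
    type: string
paths:
  /test/:
    get:
      parameters:
        - $ref: '#/parameters/id'
        - $ref: '#/parameters/time_start'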
| OpenAPI | 32,091,430 | 22 |
I am starting a REST service, using Swagger Codegen. I need to have different responses for different parameters.
Example: <baseURL>/path can use ?filter1= or ?filter2=, and these parameters should produce different response messages.
I want my OpenAPI YAML file to document these two query params separately. Is this possible?
| It is not supported in the 2.0 spec, and not in 3.x either.
Here are the corresponding proposals in the OpenAPI Specification repository:
Accommodate legacy APIs by allowing query parameters in the path
Querystring in Path Specification
| OpenAPI | 40,495,880 | 22 |
I'm using the online Swagger Editor to create a Swagger spec for my API.
My API has a single GET request endpoint, and I'm using the following YAML code to describe the input parameters:
paths:
/fooBar:
get:
tags:
- foobar
summary: ''
description: ''
operationId: foobar
consumes:
- application/x-www-form-urlencoded
produces:
- application/json
parameters:
- name: address
in: query
description: Address to be foobared
required: true
type: string
example: 123, FakeStreet
- name: city
in: query
description: City of the Address
required: true
type: string
example: New York
If I put in the example tag, I get an error saying:
is not exactly one from <#/definitions/parameter>,<#/definitions/jsonReference>
How do I set an example when writing GET parameters in Swagger?
| OpenAPI 2.0
OpenAPI/Swagger 2.0 does not have the example keyword for non-body parameters. You can specify examples in the parameter description. Some tools like Swagger UI v2, v3.12+ and Dredd also support the x-example extension property for this purpose:
parameters:
- name: address
in: query
description: Address to be foobared. Example: `123, FakeStreet`. # <-----
required: true
type: string
x-example: 123, FakeStreet # <-----
OpenAPI 3.x
Parameter examples are supported in OpenAPI 3.x:
parameters:
- name: address
in: query
description: Address to be foobared
required: true
schema:
type: string
example: 123, FakeStreet # <----
example: 456, AnotherStreet # Overrides the schema-level example
| OpenAPI | 43,933,516 | 22 |
I have the following OpenAPI definition:
swagger: "2.0"
info:
version: 1.0.0
title: Simple API
description: A simple API to learn how to write OpenAPI Specification
schemes:
- https
host: now.httpbin.org
paths:
/:
get:
summary: Get date in rfc2822 format
responses:
200:
schema:
type: object
items:
properties:
now:
type: object
rfc2822:
type: string
I would like to retrieve rfc2822 from the response:
{"now": {"epoch": 1531932335.0632613, "slang_date": "today", "slang_time": "now", "iso8601": "2018-07-18T16:45:35.063261Z", "rfc2822": "Wed, 18 Jul 2018 16:45:35 GMT", "rfc3339": "2018-07-18T16:45:35.06Z"}, "urls": ["/", "/docs", "/when/:human-timestamp", "/parse/:machine-timestamp"]}
But when I make a request from Swagger Editor, I get an error:
ERROR Server not found or an error occurred
What am I doing wrong?
|
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8081' is therefore not allowed access.
This is a CORS issue. The server at https://now.httpbin.org does not support CORS, so the browsers won't let web pages served from other domains to make requests to now.httpbin.org from JavaScript.
You have a few options:
Ask the owners of https://now.httpbin.org to support CORS.
Note: The server must not require authentication for preflight OPTIONS requests. OPTIONS requests should return 200 with the proper CORS headers.
If you are the owner - consider hosting Swagger UI on the same server and port (now.httpbin.org:443) to avoid CORS altogether.
Disable CORS restrictions in your browser. This reduces browser security so only do this if you understand the risks.
Bypass CORS in Chrome
Bypass CORS in Firefox
Use SwaggerHub instead of Swagger Editor to edit and test your API definitions. SwaggerHub proxies "try it out" requests through its servers so it's not subject to CORS restrictions. (Disclosure: I work for the company that makes SwaggerHub.)
By the way, your response definition is not valid. The response is missing a description and the schema is wrong (e.g. has an extra items keyword). It should be:
responses:
200:
description: OK
schema:
type: object
properties:
now:
type: object
properties:
rfc2822:
type: string
| OpenAPI | 51,407,341 | 22 |
I'm creating a V2 Function App and want to use Swagger/Open API for docs, however it is not yet supported in the Azure Portal for V2 Functions.
Any suggestions on how I can use Swagger with V2 Functions in VSTS to create the docs on each build?
| TL;DR - Use the NuGet package to render Open API document and Swagger UI through Azure Functions.
UPDATE (2021-06-04)
Microsoft recently announced OpenAPI support on Azure Functions during the //Build event.
The Aliencube extension has now been archived and is no longer supported. Please use this official extension.
As of today, it's in preview, but it already has more features than the Aliencube one.
Acknowledgement 2: I am still maintaining the official one.
Microsoft hasn’t officially started supporting Open API (or Swagger) yet. But there is a community-driven NuGet package currently available:
Nuget > Aliencube.AzureFunctions.Extensions.OpenApi
And here’s the blog post for it:
Introducing Swagger UI on Azure Functions
Basically its usage is similar to Swashbuckle — using decorators. And it supports both Azure Functions V1 and V2.
Acknowledgement 1: I am the owner of the NuGet package.
| OpenAPI | 52,500,329 | 22 |
type": "array",
"items": {
"type": "string",
"enum": ["MALE","FEMALE","WORKER"]
}
or
type": "array",
"items": {
"type": "string",
},
"enum": ["MALE","FEMALE","WORKER"]
?
Nothing in the spec about this. The goal is of course to get swagger-ui to show the enum values.
| It will depend on what you want to enumerate:
Each enum value MUST be of the described object type:
in the first case, a String
in the second one, an Array of Strings
The first syntax means "These are the possible values of the String in this array":
AnArray:
type: array
items:
type: string
enum:
- MALE
- FEMALE
- WORKER
This array can contain multiple Strings, but each String must have the value MALE, FEMALE or WORKER.
The second syntax means "These are the possible values of this Array":
AnotherArray:
type: array
items:
type: string
enum:
-
- FEMALE
- WORKER
-
- MALE
- WORKER
Each enum value is therefore an array. In this example, this array can only have two possible values: ["FEMALE","WORKER"] and ["MALE","WORKER"].
Unfortunately, even though this syntax is valid, no enum values are shown in Swagger UI.
| OpenAPI | 36,888,626 | 21 |
I have an API which allows any arbitrary path to be passed in, for example all of these:
/api/tags
/api/tags/foo
/api/tags/foo/bar/baz
Are valid paths. I tried to describe it as follows:
/tags{tag_path}:
get:
parameters:
- name: tag_path
in: path
required: true
type: string
default: "/"
However, https://generator.swagger.io encodes slashes in the path, so it doesn't work. So is there a way to describe my API in Swagger?
| This is not supported as of OpenAPI 3.1, and I have to resort to a workaround.
If I have a path /tags{tag_path} and I enter something like this as tag_path: /foo/bar, then the actual query request URL will be: /tags%2Ffoo%2Fbar. So, I just added support for that on my backend: the endpoint handler for /tags* urldecodes the path (which is %2Ffoo%2Fbar), and it becomes /foo/bar again.
Yes, a hack, but it works, and it's better than nothing. In my case, tag names can't contain the / character, so there's no conflict. Your mileage may vary, of course.
| OpenAPI | 42,335,178 | 21 |
I couldn't find any resources on the use case differences between JSON:API & OpenAPI
From my understanding, JSON:API is more focused on the business data while OpenAPI is more about REST itself?
Any pointers would be great, thanks!
| You can use OpenAPI to describe APIs, and JSON:API is a standard to structure your APIs. If you use JSON:API, you can still use OpenAPI to describe it.
So OpenAPI's goal is really to provide a full description of how your API can be called and what operations are available, while JSON:API gives you a strong opinion on how to structure it.
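As a small illustration (the /articles endpoint below is hypothetical), an OpenAPI 3.0 description of a JSON:API-style response would use the JSON:API media type application/vnd.api+json and its top-level data structure:
paths:
  /articles:
    get:
      responses:
        '200':
          description: A JSON:API document listing articles
          content:
            application/vnd.api+json:
              schema:
                type: object
                properties:
                  data:
                    type: array
                    items:
                      type: object
                      properties:
                        type:
                          type: string
                        id:
                          type: string
                        attributes:
                          type: object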
| OpenAPI | 64,828,587 | 21 |
I am using OpenAPI generator maven plugin like one below for generating Java client code for models .
<plugin>
<groupId>org.openapitools</groupId>
<artifactId>openapi-generator-maven-plugin</artifactId>
<version>4.3.1</version>
<executions>
<execution>
<goals>
<goal>generate</goal>
</goals>
<configuration>
<inputSpec>${project.basedir}/src/main/resources/api.yaml</inputSpec>
<generatorName>java</generatorName>
<configOptions>
<sourceFolder>src/gen/java/main</sourceFolder>
</configOptions>
</configuration>
</execution>
</executions>
</plugin>
When I generate the model classes, they get generated with the usual POJO field declarations and getters and setters. But what I want instead is for my classes to be automatically generated with Lombok annotations for Java POJOs like @Getter, @Setter, @Data, etc. Is there a way to customize the model generator to fit the above use case?
I tried to find out if there is a way. I found this discussion, where the very last comment talks about a PR that addresses the issue of generating models using Lombok annotations. But I do not see any clear usage indication or documentation in the OpenAPI Generator open-source project showing that this feature has been implemented yet. So, is there any way of generating models with Lombok annotations instead of regular getters and setters today?
| To complete this very old thread: Now it does support Lombok annotations.
Example taken from here
<configOptions>
<additionalModelTypeAnnotations>@lombok.Builder @lombok.NoArgsConstructor @lombok.AllArgsConstructor</additionalModelTypeAnnotations>
</configOptions>
| OpenAPI | 65,733,938 | 21 |
The API for which I'm writing a Swagger 2.0 specification is basically a store for any JSON value.
I want a path to read value and a path to store any JSON values (null, number, integer, string, object, array) of non-predefined depth.
Unfortunately, it seems that Swagger 2.0 is quite strict on schemas for input and output and does not allow the whole set of schemas allowed by JSON Schema. The Swagger editor doesn't allow for example mixed values (for example a property that can be either a boolean or an integer) or loosely defined arrays (the type of items must be strictly defined) and objects.
So I'm trying a workaround by defining a MixedValue schema:
---
swagger: '2.0'
info:
version: 0.0.1
title: Data store API
consumes:
- application/json
produces:
- application/json
paths:
/attributes/{attrId}/value:
parameters:
- name: attrId
in: path
type: string
required: true
get:
responses:
'200':
description: Successful.
schema:
$ref: '#/definitions/MixedValue'
put:
parameters:
- name: value
in: body
required: true
schema:
$ref: '#/definitions/MixedValue'
responses:
responses:
'201':
description: Successful.
definitions:
MixedValue:
type: object
properties:
type:
type: string
enum:
- 'null'
- boolean
- number
- integer
- string
- object
- array
boolean:
type: boolean
number:
type: number
integer:
type: integer
string:
type: string
object:
description: deep JSON object
type: object
additionalProperties: true
array:
description: deep JSON array
type: array
required:
- type
But the Swagger Editor refuses the loosely defined object and array properties.
Questions:
- Is there a way to workaround this issue?
- Is it just a Swagger Editor bug or a strong limit of the Swagger 2.0 spec?
- Is there a better way (best practice) to specify what I need?
- Can I expect some limitations in code produced by swagger for some languages with my API spec?
| An arbitrary-type schema can be defined using an empty schema {}:
# swagger: '2.0'
definitions:
AnyValue: {}
# openapi: 3.0.0
components:
schemas:
AnyValue: {}
or if you want a description:
# swagger: '2.0'
definitions:
AnyValue:
description: 'Can be anything: string, number, array, object, etc. (except `null`)'
# openapi: 3.0.0
components:
schemas:
AnyValue:
description: 'Can be anything: string, number, array, object, etc., including `null`'
Without a defined type, a schema allows any values. Note that OpenAPI 2.0 Specification does not support null values, but some tools might support nulls nevertheless.
In OpenAPI 3.0, type-less schemas allow null values unless nulls are explicitly disallowed by other constraints (such as an enum).
See this Q&A for more details on how type-less schemas work.
Here's how Swagger Editor 2.0 handles a body parameter with the AnyValue schema:
I don't know how code generators handle this though.
| OpenAPI | 32,841,298 | 20 |
I am defining common schemas for Web services and I want to import them in the components/schema section of the specification.
I want to create a canonical data model that is common across multiple services to avoid redefining similar objects in each service definition.
Is there a way to do this?
Is there a similar mechanism to what XSD does with its import tag?
| You can $ref external OpenAPI schema objects directly using absolute or relative URLs:
responses:
'200':
description: OK
schema:
$ref: './common/Pet.yaml'
# or
# $ref: 'https://api.example.com/schemas/Pet.yaml'
where Pet.yaml contains, for example:
type: object
properties:
id:
type: integer
readOnly: true
petType:
type: string
name:
type: string
required:
- id
- petType
- name
See Using $ref for more information.
| OpenAPI | 50,179,541 | 20 |
How to enable "Authorize" button in springdoc-openapi-ui (OpenAPI 3.0 /swagger-ui.html) for Basic Authentication.
What annotations have to be added to Spring @Controller and @Configuration classes?
| Define a global security scheme for OpenAPI 3.0 using annotation @io.swagger.v3.oas.annotations.security.SecurityScheme in a @Configuration bean:
@Configuration
@OpenAPIDefinition(info = @Info(title = "My API", version = "v1"))
@SecurityScheme(
name = "basicAuth",
type = SecuritySchemeType.HTTP,
scheme = "basic"
)
public class OpenApi30Config {
}
Annotate @RestController with @SecurityRequirement(name = "basicAuth")
@RestController
@SecurityRequirement(name = "basicAuth")
public class Controller {}
OR
Annotate each @RestController method requiring Basic Authentication with @io.swagger.v3.oas.annotations.Operation referencing the defined security scheme:
@Operation(summary = "My endpoint", security = @SecurityRequirement(name = "basicAuth"))
| OpenAPI | 59,898,740 | 20 |
I'm migrating my API from Swagger 2.0 to OpenAPI 3.0. In a DTO I have a field specified as a byte array.
Swagger definition of the DTO:
Job:
type: object
properties:
body:
type: string
format: binary
Using the definition above the swagger code generator generates an object that accepts byte[] array as the body field new Job().setBody(new byte[1]).
After converting the API definition to OpenAPI the definition for that object stayed the same but the openapi code generator now requires org.springframework.core.io.Resource instead of byte[] (new Job().setBody(org.springframework.core.io.Resource)). There are some places in my code where I have to serialize the Job object but it's no longer possible because Resource doesn't implement serializable.
As a workaround I changed the type to object:
Job:
type: object
properties:
body:
type: object
Now I have to cast the body to String and then convert to byte[] everywhere and I'd rather have the type as byte[] as it was before.
How can I specify the type as byte[] using OpenAPI 3.0?
| You must set type: string and format: byte
Original answer: when using swagger codegen getting 'List<byte[]>' instead of simply 'byte[]'
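A minimal sketch of the Job schema from the question with that change applied:
Job:
  type: object
  properties:
    body:
      type: string
      format: byte   # base64-encoded string, generated as byte[]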
| OpenAPI | 62,794,949 | 20 |
Having a hard time configuring Swagger UI
Here are the very explanatory docs: https://django-rest-swagger.readthedocs.io/en/latest/
YAML docstrings are deprecated. Does somebody know how to configure Swagger UI from within the Python code? Or which file should I change to group API endpoints, add comments to each endpoint, and add query parameter fields in Swagger UI?
| This is how I managed to do it:
base urls.py
urlpatterns = [
...
url(r'^api/', include('api.urls', namespace='api')),
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
...
]
api.urls.py
urlpatterns = [
url(r'^$', schema_view, name='swagger'),
url(r'^article/(?P<pk>[0-9]+)/$',
ArticleDetailApiView.as_view(actions={'get': 'get_article_by_id'}),
name='article_detail_id'),
url(r'^article/(?P<name>.+)/(?P<pk>[0-9]+)/$',
ArticleDetailApiView.as_view(actions={'get': 'get_article'}),
name='article_detail'),
]
api.views.py. In MyOpenAPIRenderer I update the data dict to add description, query fields and to update type or required features.
class MyOpenAPIRenderer(OpenAPIRenderer):
def add_customizations(self, data):
super(MyOpenAPIRenderer, self).add_customizations(data)
data['paths']['/article/{name}/{pk}/']['get'].update(
{'description': 'Some **description**',
'parameters': [{'description': 'Add some description',
'in': 'path',
'name': 'pk',
'required': True,
'type': 'integer'},
{'description': 'Add some description',
'in': 'path',
'name': 'name',
'required': True,
'type': 'string'},
{'description': 'Add some description',
'in': 'query',
'name': 'a_query_param',
'required': True,
'type': 'boolean'},
]
})
# data['paths']['/article/{pk}/']['get'].update({...})
data['basePath'] = '/api'
@api_view()
@renderer_classes([MyOpenAPIRenderer, SwaggerUIRenderer])
def schema_view(request):
generator = SchemaGenerator(title='A title', urlconf='api.urls')
schema = generator.get_schema(request=request)
return Response(schema)
class ArticleDetailApiView(ViewSet):
@detail_route(renderer_classes=(StaticHTMLRenderer,))
def get_article_by_id(self, request, pk):
pass
@detail_route(renderer_classes=(StaticHTMLRenderer,))
def get_article(self, request, name, pk):
pass
update for django-rest-swagger (2.0.7): replace only add_customizations with get_customizations.
views.py
class MyOpenAPIRenderer(OpenAPIRenderer):
def get_customizations(self):
data = super(MyOpenAPIRenderer, self).get_customizations()
data['paths'] = custom_data['paths']
data['info'] = custom_data['info']
data['basePath'] = custom_data['basePath']
return data
You can read the swagger specification to create custom data.
| OpenAPI | 38,542,690 | 19 |
I am looking to represent the following JSON Object in OpenAPI:
{
"name": "Bob",
"age": 4,
...
}
The number of properties and the property names are not fully predetermined, so I look to use additionalProperties. However, I'm not too certain how it would be represented through OpenAPI/Swagger 2.0. I tried this:
Person:
type: object
additionalProperties:
type:
- int
- string
or the JSON equivalent:
{
"Person": {
"type": "object",
"additionalProperties": {
"type": ["int", "string"]
}
}
}
but that didn't quite work. Is there any way to keep the structure of the JSON Object I want to represent, for specifically strings and integers, and not arbitrary object types?
| OpenAPI 3.1
In OpenAPI 3.1, the type keyword can take a list of types:
Person:
type: object
additionalProperties:
type: [string, integer]
OpenAPI 3.x
OpenAPI 3.0+ supports oneOf so you can use:
Person:
type: object
additionalProperties:
oneOf:
- type: string
- type: integer
OpenAPI 2.0
OpenAPI 2.0 does not support multi-type values. The most you can do is use the typeless schema, which means the additional properties can be anything - strings, numbers, booleans, and so on - but you can't specify the exact types.
Person:
type: object
additionalProperties: {}
This is equivalent to:
Person:
type: object
| OpenAPI | 46,472,543 | 19 |
I have this schema defined:
User:
type: object
required:
- id
- username
properties:
id:
type: integer
format: int32
readOnly: true
xml:
attribute: true
description: The user ID
username:
type: string
readOnly: true
description: The username
first_name:
type: string
description: Users First Name
last_name:
type: string
description: Users Last Name
avatar:
$ref: '#/components/schemas/Image'
example:
id: 10
username: jsmith
first_name: Jessica
last_name: Smith
avatar: image goes here
xml:
name: user
Works great. The GET /user/{id} call displays the sample data just fine.
I have a second schema that creates an array of the above schema:
ArrayOfUsers:
type: array
items:
type: object
required:
- id
- username
properties:
id:
type: integer
format: int32
xml:
attribute: true
description: The user ID
username:
type: string
description: The username
first_name:
type: string
description: Users First Name
last_name:
type: string
description: Users Last Name
avatar:
$ref: '#/components/schemas/Image'
This also works great. The GET /user call displays the proper structure in an array just fine.
But I'd rather not define this schema twice.
I would like to create a schema that utilizes the first one and stick in an array.
I have failed in this attempt.
I tried it this way:
UserArray:
type: array
items:
type: object
required:
- id
- username
properties:
things:
type: array
items:
oneOf:
- $ref: '#/components/schemas/User'
This attempt gives me an empty array:
[
{}
]
This is not my desired result.
Any hints on this?
| An array of User objects is defined as follows:
UserArray:
type: array
items:
$ref: '#/components/schemas/User'
| OpenAPI | 49,827,240 | 19 |
I'm trying to generate a swagger document for my existing Flask app. I tried Flask-RESTPlus initially and found out the project is abandoned now, then checked the forked project flask-restx https://github.com/python-restx/flask-restx, but I still don't think they support OpenAPI 3.0.
I'm a bit confused about which package to choose for my need. I'm looking to solve a problem where we don't want to manually create the swagger doc for our API; instead we would like to generate it automatically using a package.
import os
import requests
import json, yaml
from flask import Flask, after_this_request, send_file, safe_join, abort
from flask_restx import Resource, Api, fields
from flask_restx.api import Swagger
app = Flask(__name__)
api = Api(app=app, doc='/docs', version='1.0.0-oas3', title='TEST APP API',
description='TEST APP API')
response_fields = api.model('Resource', {
'value': fields.String(required=True, min_length=1, max_length=200, description='Book title')
})
@api.route('/compiler/', endpoint='compiler')
# @api.doc(params={'id': 'An ID'})
@api.doc(responses={403: 'Not Authorized'})
@api.doc(responses={402: 'Not Authorized'})
# @api.doc(responses={200: 'Not Authorized'})
class DemoList(Resource):
@api.expect(response_fields, validate=True)
@api.marshal_with(response_fields, code=200)
def post(self):
"""
returns a list of conferences
"""
api.payload["value"] = 'Im the response ur waiting for'
return api.payload
@api.route('/swagger')
class HelloWorld(Resource):
def get(self):
data = json.loads(json.dumps(api.__schema__))
with open('yamldoc.yml', 'w') as yamlf:
yaml.dump(data, yamlf, allow_unicode=True, default_flow_style=False)
file = os.path.abspath(os.getcwd())
try:
@after_this_request
def remove_file(resp):
try:
os.remove(safe_join(file, 'yamldoc.yml'))
except Exception as error:
log.error("Error removing or closing downloaded file handle", error)
return resp
return send_file(safe_join(file, 'yamldoc.yml'), as_attachment=True, attachment_filename='yamldoc.yml', mimetype='application/x-yaml')
except FileExistsError:
abort(404)
# main driver function
if __name__ == '__main__':
app.run(port=5003, debug=True)
The above code is a combination of my attempts with different packages; it can generate a swagger 2.0 doc, but I'm trying to generate a doc for OpenAPI 3.0.
Can someone suggest a good package that supports generating OpenAPI 3.0 swagger YAML or JSON?
| I found a package to generate an OpenAPI 3.0 document:
https://apispec.readthedocs.io/en/latest/install.html
This package serves the purpose neatly. Find the below code for detailed usage.
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from apispec_webframeworks.flask import FlaskPlugin
from marshmallow import Schema, fields
from flask import Flask, abort, request, make_response, jsonify
from pprint import pprint
import json
class DemoParameter(Schema):
gist_id = fields.Int()
class DemoSchema(Schema):
id = fields.Int()
content = fields.Str()
spec = APISpec(
title="Demo API",
version="1.0.0",
openapi_version="3.0.2",
info=dict(
description="Demo API",
version="1.0.0-oas3",
contact=dict(
email="[email protected]"
),
license=dict(
name="Apache 2.0",
url='http://www.apache.org/licenses/LICENSE-2.0.html'
)
),
servers=[
dict(
description="Test server",
url="https://resources.donofden.com"
)
],
tags=[
dict(
name="Demo",
description="Endpoints related to Demo"
)
],
plugins=[FlaskPlugin(), MarshmallowPlugin()],
)
spec.components.schema("Demo", schema=DemoSchema)
# spec.components.schema(
# "Gist",
# {
# "properties": {
# "id": {"type": "integer", "format": "int64"},
# "name": {"type": "string"},
# }
# },
# )
#
# spec.path(
# path="/gist/{gist_id}",
# operations=dict(
# get=dict(
# responses={"200": {"content": {"application/json": {"schema": "Gist"}}}}
# )
# ),
# )
# Extensions initialization
# =========================
app = Flask(__name__)
@app.route("/demo/<gist_id>", methods=["GET"])
def my_route(gist_id):
"""Gist detail view.
---
get:
parameters:
- in: path
schema: DemoParameter
responses:
200:
content:
application/json:
schema: DemoSchema
201:
content:
application/json:
schema: DemoSchema
"""
# (...)
return jsonify('foo')
# Since path inspects the view and its route,
# we need to be in a Flask request context
with app.test_request_context():
spec.path(view=my_route)
# We're good to go! Save this to a file for now.
with open('swagger.json', 'w') as f:
json.dump(spec.to_dict(), f)
pprint(spec.to_dict())
print(spec.to_yaml())
Hope this helps someone!! :)
Update: More Detailed Documents
Python Flask automatically generated Swagger 3.0/Openapi Document - http://donofden.com/blog/2020/06/14/Python-Flask-automatically-generated-Swagger-3-0-openapi-Document
Python Flask automatically generated Swagger 2.0 Document -
http://donofden.com/blog/2020/05/30/Python-Flask-automatically-generated-Swagger-2-0-Document
| OpenAPI | 62,066,474 | 19 |
I am using SpringDoc 1.4.3 for swagger. I have added the below configuration to disable the petstore URLs in application.yml:
Configuration
springdoc:
swagger-ui:
disable-swagger-default-url: true
tags-sorter: alpha
operations-sorter: alpha
doc-expansion: none
but when I enter https://petstore.swagger.io/v2/swagger.json in the Explore text box, it is still showing me the petstore URLs as shown in the below image.
Swagger Image
| Already tested and validated thanks to the following feature support:
https://github.com/springdoc/springdoc-openapi/issues/714
Just use, the following property:
springdoc.swagger-ui.disable-swagger-default-url=true
| OpenAPI | 63,152,653 | 19 |
Below is my fastAPI code
from typing import Optional, Set
from fastapi import FastAPI
from pydantic import BaseModel, HttpUrl, Field
from enum import Enum
app = FastAPI()
class Status(Enum):
RECEIVED = 'RECEIVED'
CREATED = 'CREATED'
CREATE_ERROR = 'CREATE_ERROR'
class Item(BaseModel):
name: str
description: Optional[str] = None
price: float
tax: Optional[float] = None
tags: Set[str] = []
status: Status = None
@app.put("/items/{item_id}")
async def update_item(item_id: int, item: Item):
results = {"item_id": item_id, "item": item}
return results
Below is the generated swagger doc. The Status is not shown. I am new to pydantic and I am not sure how to show status in the docs.
| Create the Status class by inheriting from both str and Enum:
class Status(str, Enum):
RECEIVED = 'RECEIVED'
CREATED = 'CREATED'
CREATE_ERROR = 'CREATE_ERROR'
References
Working with Python enumerations--(FastAPI doc)
[BUG] docs don't show nested enum attribute for body--(Issue #329)
| OpenAPI | 64,160,594 | 19 |
When I generate code for Spring from my swagger yaml, the controller layer is usually generated using the delegate pattern, such that for a single model three files are generated. For example, if I defined a model named Person in my swagger/open API yaml file, three files get generated as:
PersonApi (interface that contains signatures of all person operations/methods)
PersonApiDelegate (interface that provides a default implementation of all PersonApi methods, meant to be overridden)
PersonApiController (which has a reference to PersonApiDelegate so that any implementation can override and provide a custom implementation)
My question, for anyone who is familiar with building swagger/openapi generated APIs: what is the significance of having such a pattern, instead of just exposing your service endpoints using a PersonController class, without going through a PersonApi interface, then a PersonApiDelegate, and finally exposing the service through a PersonApiController?
What valuable design extensibility do we gain through this pattern? I tried to find information from other resources on the internet, but couldn't find a good answer in the context of the swagger-first API development approach. Any insights on this will be really helpful.
| First of all a clarification: as already mentioned in a comment, you are not forced to use the delegation. On the contrary, the default behavior of the Spring generator is to not use the delegation pattern, as you can easily check in the docs. In this case it will generate only the PersonApi interface and PersonApiController.
Coming to your question, why using delegation?
This allows you to write a class that implements PersonApiDelegate, that can be easily injected in the generated code, without any need to manually touch generated sources, and keeping the implementation safe from possible future changes in the code generation.
Let's think what could happen without delegation.
A naive approach would be to generate the sources and then write directly the implementation inside the generated PersonController. Of course the next time there is a need to run the generator, it would be a big mess. All the implementation would be lost...
A slightly better scenario, but not perfect, would be to write a class that extends PersonController. That would keep the implementation safe from being overwritten during generation, but would not protect it from future changes of the generation engine: as a bare minimum the implementation class would need to implement the PersonController constructor. Right now the constructor of a generated controller has the following signature PersonApiController(ObjectMapper objectMapper, HttpServletRequest request), but the developers of the generator may need to change it in the future. So the implementation would need to change too.
A third approach would be to forget completely about the generated PersonApiController, and just write a class that implements the PersonApi interface. That would be fine, but every time the code is generated you would need to delete the PersonApiController, otherwise Spring router will complain. Still manual work...
But with the delegation, the implementation code is completely safe. No need to manually delete stuff, no need to adapt in case of future changes. Also the class that implements PersonApiDelegate can be treated as an independent service, so you can inject / autowire into it whatever you need.
| OpenAPI | 66,294,655 | 19 |
I want almost all my paths to have the following 3 generic error responses. How do I describe that in Swagger without copypasting these lines everywhere?
401:
description: The requester is unauthorized.
schema:
$ref: '#/definitions/Error'
500:
description: "Something went wrong. It's server's fault."
schema:
$ref: '#/definitions/Error'
503:
description: Server is unavailable. Maybe there is maintenance?
schema:
$ref: '#/definitions/Error'
Example of how I use this in a request:
paths:
/roles:
get:
summary: Roles
description: |
Returns all roles available for users.
responses:
200:
description: An array with all roles.
schema:
type: array
items:
$ref: '#/definitions/Role'
401:
description: The requester is unauthorized.
schema:
$ref: '#/definitions/Error'
500:
description: "Something went wrong. It's server's fault."
schema:
$ref: '#/definitions/Error'
503:
description: Server is unavailable. Maybe there is maintenance?
schema:
$ref: '#/definitions/Error'
| OpenAPI 2.0 (fka Swagger 2.0)
Looks like I can add the following global response definition:
# An object to hold responses that can be used across operations.
# This property does not define global responses for all operations.
responses:
NotAuthorized:
description: The requester is unauthorized.
schema:
$ref: '#/definitions/Error'
However I will still need to reference it in paths like this:
401:
$ref: '#/responses/NotAuthorized'
OpenAPI 3.x
Same thing in OpenAPI 3.x, except it uses #/components/responses/... instead of #/responses/...:
openapi: 3.0.0
# An object to hold responses that can be used across operations.
# This property does not define global responses for all operations.
components:
responses:
NotAuthorized:
description: The requester is unauthorized.
schema:
$ref: '#/components/schemas/Error'
# Then, in operation responses, use:
...
401:
$ref: '#/components/responses/NotAuthorized'
There's also an open feature request in the OpenAPI Specification repository to add support for global/default responses for operations.
| OpenAPI | 35,921,287 | 18 |
I would like to denote a decimal with 2 places and a decimal with 1 place in my API documentation. I'm using swagger 2.0. Is there a built-in type or any other 'round' parameter in the specs, or is my only option to use an 'x-' extension?
| OpenAPI (fka Swagger) Specification uses a subset of JSON Schema to describe the data types.
If the parameter is passed as a number, you can try using multipleOf as suggested in this Q&A:
type: number
multipleOf: 0.1 # up to 1 decimal place, e.g. 4.2
# multipleOf: 0.01 # up to 2 decimal places, e.g. 4.25
Hovewer, multipleOf validation against floating-point numbers can be unreliable due to floating-point math specifics.
If your number if passed as a string, you can specify a regex pattern for the desired number format:
type: string
pattern: your_regex
In any case, you can also document any restrictions verbally in the description.
| OpenAPI | 44,968,026 | 18 |
Designing an API using editor.swagger.io I find myself unable to add a requestBody attribute, getting an error I cannot address:
Schema error at paths['/projects/{projectId}/user'].post
should NOT have additional properties
additionalProperty: requestBody
Jump to line 91
I don't understand what I'm doing wrong, especially after looking at the requestBody documentation. Research has brought me nothing other than the tendency for errors to be misleading.
EDIT: From what the answers here have shown, it looks like the editor is supposed to use OpenAPI 2.0, but actually expects 3.0 while returning errors for both. I could use some help on what to use, given that I've included a
swagger: "2.0"
line at the beginning of the document.
While testing with openapi: 3.0.0 as shown by @Mike in his answer, I just get more errors about allowed additional properties.
Here's what's generating the error, line 91 being post: .
/projects/{projectId}/user:
post:
tags:
- projects
summary: Modify project user.
operationId: modifyProjectUser
parameters:
- name: projectId
in: path
description: ID of the project
required: true
type: integer
format: int32
requestBody:
content:
application/json:
schema:
$ref: '#/definitions/User'
responses:
"200":
description: Successful operation
schema:
type: array
items:
$ref: "#/definitions/User"
security:
- api_key: []
| I got clarifications from an external source, so here's what I've learned:
Specifying swagger: 2.0 also means that the OpenAPI Specification 2.0.0 is expected by the editor, whereas I thought it used OAS 3.
I'm still unsure about why in: body did not work in the first place but I've added quotes around "body", which made the error disappear. Then I tried removing the quotes and it worked fine.
The editor doesn't seem very reliable when it comes to error reporting.
| OpenAPI | 47,632,281 | 18 |
I need to describe an API where the request body is an object with required fields, and one of these fields is itself an object with another set of required fields.
I'm using open api v3 and swagger editor (https://editor.swagger.io/)
After I put my .yaml file into the editor I generate an HTML client (> generate client > html). Then I open the static page index.html generated in the .zip file, obtaining this schema:
Table of Contents
body
secureoauthservicesv2Nested_nestedobj
body
id: Integer - id of nested obj
nestedobj: secureoauthservicesv2Nested_nestedobj
secureoauthservicesv2Nested_nestedobj (nested object)
field1 (optional): String
field2 (optional): String
I expect field1 to be required and field2 to be optional but it's not.
This is my .yaml file
openapi: 3.0.0
info:
title: Example API
description: Example API specification
version: 0.0.1
servers:
- url: https://example/api
paths:
/secure/oauth/services/v2/Nested:
post:
summary: Try nested
description: Used to post Nested obj
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- id
- nestedobj
properties:
id:
type: integer
description: id of nested obj
nestedobj:
type: object
required:
- field1
description: nested object
properties:
field1:
type: string
field2:
type: string
responses:
'200':
description: Nested object OK
| Solved!
I used components and schemas, but I think this could be a bug, opened an issue on swagger editor repo:
https://github.com/swagger-api/swagger-editor/issues/1952
openapi: 3.0.0
info:
title: Example API
description: Example API specification
version: 0.0.2
servers:
- url: https://example/api
paths:
/secure/oauth/services/v2/Nested:
post:
summary: Try nested
description: Used to post Nested obj
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- id
- nestedobj
properties:
id:
type: integer
description: id of nested obj
nestedobj:
$ref: '#/components/schemas/nestedobj'
responses:
'200':
description: Nested object OK
components:
schemas:
element:
type: object
required:
- fieldArray1
properties:
fieldArray1:
type: string
description: field array
fieldArray2:
type: number
nestedobj:
type: object
required:
- field1
description: nested object
properties:
field1:
$ref: '#/components/schemas/woah'
field2:
type: string
woah:
type: object
required:
- woahthis
description: woah this
properties:
field3:
type: array
items:
$ref: '#/components/schemas/element'
woahthis:
type: number
description: numeber woah this
EDIT 23/08/21:
I opened a bug in swagger-codegen github in april 2019 but it still has no response whatsoever
| OpenAPI | 54,803,837 | 18 |
I am writing an app where I need to have two completely different sets of response structures depending on logic.
Is there any way to handle this so that I can have two different response models serialized, validated and returned, and reflected in the OpenAPI JSON?
I am using pydantic to write models.
| Yes, this is possible. You can use Union for that in the response_model= parameter of your path decorator (I used the new Python 3.10 style below). Here is a full example; it will work as is.
from typing import Union
from fastapi import FastAPI, HTTPException, Query
from pydantic import BaseModel
class responseA(BaseModel):
name: str
class responseB(BaseModel):
id: int
app = FastAPI()
@app.get("/", response_model=Union[responseA,responseB])
def base(q: int|str = Query(None)):
if q and isinstance(q, str):
return responseA(name=q)
if q and isinstance(q, int):
return responseB(id=q)
raise HTTPException(status_code=400, detail="No q param provided")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000, )
EDIT: As pointed out by TomášLinhart in the comments below, the response_model parameter must use Union. In the original answer, I used the 3.10 style responseA|responseB which did not work for everybody as explained here in the docs.
Result in your documentation:
| OpenAPI | 72,919,006 | 18 |
I have a symfony project, where I use api-platform.
I have an entity with data providers for it. I'm having trouble defining additional parameters for a collection endpoint.
The entity is called Suggestion. It has to return a collection of documents from Elasticsearch.
An endpoint is:
/suggestion
This endpoint listens to additional GET parameters:
page, level
These two params are read each time the endpoint is requested.
In my SuggestionsCollectionDataProvider.php class I have:
/**
* Retrieves a collection.
*
* @param string $resourceClass
* @param string|null $operationName
* @return \Generator
*/
public function getCollection(string $resourceClass, string $operationName = null): \Generator
{
$query = $this->requestStack->getCurrentRequest()->query;
// I am reading these two parameters from RequestStack
// this one is built-in
$page = max($query->get('page', 1), 1);
// this is a custom one
$level = $query->get('level', 0);
...
In my SuggestionRepository.php class:
/**
* @return \Generator
*/
public function find(int $page, int $level): \Generator
{
// here I can process with $level without problems
The page parameter is the default parameter that is generated in Swagger for collections.
A screenshot from the API Platform generated Swagger doc:
But the page parameter is currently the only parameter that can be edited in the web version.
I need to add more parameters (level in this case) to Swagger and describe them, so the user/tester knows which parameters actually go to this endpoint.
Main question:
How do I tell API Platform that I want a user/tester of the API (from the client side) to enter other parameters, e.g. level?
| Finally figured it out.
I haven't found documentation for it yet, but I found a way.
In the entity class Suggestion.php I've added a few lines of annotations:
namespace App\Entity;
use ApiPlatform\Core\Annotation\ApiProperty;
use ApiPlatform\Core\Annotation\ApiResource;
use Symfony\Component\Serializer\Annotation\Groups;
use Symfony\Component\Validator\Constraints as Assert;
/**
* Class Suggestion. Represents an entity for an item from the suggestion result set.
* @package App\Entity
* @ApiResource(
* collectionOperations={
* "get"={
* "method"="GET",
* "swagger_context" = {
* "parameters" = {
* {
* "name" = "level",
* "in" = "query",
* "description" = "Levels available in result",
* "required" = "true",
* "type" : "array",
* "items" : {
* "type" : "integer"
* }
* }
* }
* }
* }
* },
* itemOperations={"get"}
* )
*/
The result view in API platform swagger DOCs:
| OpenAPI | 50,369,988 | 17 |
I'm writing a Swagger specification for a future public API that requires very detailed and clean documentation. Is there a way to reference/link/point to another endpoint at some other location in the swagger.yml file?
For example, here is what I am trying to achieve:
paths:
/my/endpoint:
post:
tags:
- Some tag
summary: Do things
description: >
This endpoint does things.
See /my/otherEndpoint for stuff # Here I would like to have some kind of hyperlink
operationId: doThings
consumes:
- application/json
produces:
- application/json
parameters:
...
responses:
...
/my/otherEndpoint: # This is the endpoint to be referenced to
get:
...
I have found that $ref does not help because it simply replaces itself with the contents of the reference.
Can Swagger do such a thing?
| Swagger UI provides permalinks for tags and operations if it's configured with the deepLinking: true option. These permalinks are generated based on the tag names and operationId (or if there are no operationId - based on the endpoint names and HTTP verbs).
index.html#/tagName
index.html#/tagName/operationId
You can use these permalinks in your Markdown markup:
description: >
This endpoint does things.
See [/my/otherEndpoint](#/tagName/myOtherEndpointId) for stuff
Notes:
Markdown links (such as above) currently open in a new browser tab (as with target="_blank") - see issue #3473.
HTML-formatted links <a href="#/tagName/operationId">foobar</a> currently don't work.
Swagger Editor does not support such permalinks.
| OpenAPI | 52,703,804 | 17 |
I have an open API spec with a parameter like this:
- name: platform
in: query
description: "Platform of the application"
required: true
schema:
type: string
enum:
- "desktop"
- "online"
When I get the "platform" parameter from the URL, it can look like this:
platform=online or
platform=ONLINE or
platform=Online or
platform=onLine or ... any other format
but when I use it, it is only valid if the parameter is all lowercase, like platform=online, so that it matches the enum value.
How can I make the schema case-insensitive so it accepts all these variants of the passed parameter?
| Enums are case-sensitive. To have a case-insensitive schema, you can use a regular expression pattern instead:
- name: platform
in: query
description: 'Platform of the application. Possible values: `desktop` or `online` (case-insensitive)'
required: true
schema:
type: string
pattern: '^[Dd][Ee][Ss][Kk][Tt][Oo][Pp]|[Oo][Nn][Ll][Ii][Nn][Ee]$'
Note that pattern is the pattern itself and does not support JavaScript regex literal syntax (/abc/i), which means you cannot specify flags like i (case insensitive search). As a result you need to specify both uppercase and lowercase letters in the pattern itself.
Alternatively, specify the possible values in the description rather than in pattern/enum, and verify the parameter values on the back end.
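If you go with the back-end validation route, a minimal sketch of that normalization step could look like this (framework-agnostic Python; the parameter name platform and the two allowed values come from the question, while the helper name is just illustrative):
ALLOWED_PLATFORMS = {"desktop", "online"}
def normalize_platform(raw: str) -> str:
    # Accept any casing/extra whitespace, normalize to lowercase, reject unknown values.
    value = raw.strip().lower()
    if value not in ALLOWED_PLATFORMS:
        raise ValueError(f"platform must be one of {sorted(ALLOWED_PLATFORMS)}")
    return value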
Here's the related discussion in the JSON Schema repository (OpenAPI uses JSON Schema to define the data types): Case Insensitive Enums?
| OpenAPI | 60,772,786 | 17 |
Is there a way to generate an OpenAPI v3 specification from Go source code? Let's say I have a Go
API like the one below, and I'd like to generate the OpenAPI specification (YAML file) from it. Something similar to Python's Flask RESTX. I know there are tools that generate Go source code from the specs; however, I'd like to do it the other way around.
package main
import "net/http"
func main() {
http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("world\n"))
})
http.ListenAndServe(":5050", nil)
}
| You can employ github.com/swaggest/rest to build a self-documenting HTTP REST API. This library establishes a convention to declare handlers in a way that can be used to reflect documentation and schema and maintain a single source of truth about it.
In my personal opinion, the code-first approach has advantages compared to the spec-first approach. It lowers the entry bar by not requiring you to be an expert in the spec language syntax, and it may help you come up with a spec that is well balanced with implementation details.
With the code-first approach it is not necessary to implement a full service to get the spec; you only need to define the structures and interfaces and can postpone the actual logic implementation.
Please check a brief usage example.
package main
import (
"context"
"errors"
"fmt"
"log"
"net/http"
"time"
"github.com/go-chi/chi"
"github.com/go-chi/chi/middleware"
"github.com/swaggest/rest"
"github.com/swaggest/rest/chirouter"
"github.com/swaggest/rest/jsonschema"
"github.com/swaggest/rest/nethttp"
"github.com/swaggest/rest/openapi"
"github.com/swaggest/rest/request"
"github.com/swaggest/rest/response"
"github.com/swaggest/rest/response/gzip"
"github.com/swaggest/swgui/v3cdn"
"github.com/swaggest/usecase"
"github.com/swaggest/usecase/status"
)
func main() {
// Init API documentation schema.
apiSchema := &openapi.Collector{}
apiSchema.Reflector().SpecEns().Info.Title = "Basic Example"
apiSchema.Reflector().SpecEns().Info.WithDescription("This app showcases a trivial REST API.")
apiSchema.Reflector().SpecEns().Info.Version = "v1.2.3"
// Setup request decoder and validator.
validatorFactory := jsonschema.NewFactory(apiSchema, apiSchema)
decoderFactory := request.NewDecoderFactory()
decoderFactory.ApplyDefaults = true
decoderFactory.SetDecoderFunc(rest.ParamInPath, chirouter.PathToURLValues)
// Create router.
r := chirouter.NewWrapper(chi.NewRouter())
// Setup middlewares.
r.Use(
middleware.Recoverer, // Panic recovery.
nethttp.OpenAPIMiddleware(apiSchema), // Documentation collector.
request.DecoderMiddleware(decoderFactory), // Request decoder setup.
request.ValidatorMiddleware(validatorFactory), // Request validator setup.
response.EncoderMiddleware, // Response encoder setup.
gzip.Middleware, // Response compression with support for direct gzip pass through.
)
// Create use case interactor.
u := usecase.IOInteractor{}
// Describe use case interactor.
u.SetTitle("Greeter")
u.SetDescription("Greeter greets you.")
// Declare input port type.
type helloInput struct {
Locale string `query:"locale" default:"en-US" pattern:"^[a-z]{2}-[A-Z]{2}$" enum:"ru-RU,en-US"`
Name string `path:"name" minLength:"3"` // Field tags define parameter location and JSON schema constraints.
}
u.Input = new(helloInput)
// Declare output port type.
type helloOutput struct {
Now time.Time `header:"X-Now" json:"-"`
Message string `json:"message"`
}
u.Output = new(helloOutput)
u.SetExpectedErrors(status.InvalidArgument)
messages := map[string]string{
"en-US": "Hello, %s!",
"ru-RU": "Привет, %s!",
}
u.Interactor = usecase.Interact(func(ctx context.Context, input, output interface{}) error {
var (
in = input.(*helloInput)
out = output.(*helloOutput)
)
msg, available := messages[in.Locale]
if !available {
return status.Wrap(errors.New("unknown locale"), status.InvalidArgument)
}
out.Message = fmt.Sprintf(msg, in.Name)
out.Now = time.Now()
return nil
})
// Add use case handler to router.
r.Method(http.MethodGet, "/hello/{name}", nethttp.NewHandler(u))
// Swagger UI endpoint at /docs.
r.Method(http.MethodGet, "/docs/openapi.json", apiSchema)
r.Mount("/docs", v3cdn.NewHandler(apiSchema.Reflector().Spec.Info.Title,
"/docs/openapi.json", "/docs"))
// Start server.
log.Println("http://localhost:8011/docs")
if err := http.ListenAndServe(":8011", r); err != nil {
log.Fatal(err)
}
}
| OpenAPI | 66,171,424 | 17 |
I am using Swagger with Scala to document my REST API. I want to enable bulk operations for POST, PUT and DELETE and want the same route to accept either a single object or a collection of objects as body content.
Is there a way to tell Swagger that a param is either a list of values of type A or a single value of type A?
Something like varargs for REST.
|
Is there a way to tell Swagger that a param is either a list of values of type A or a single value of type A?
This depends on whether you use OpenAPI 3.0 or OpenAPI (Swagger) 2.0.
OpenAPI uses an extended subset of JSON Schema to describe body payloads. JSON Schema provides the oneOf and anyOf keywords to define multiple possible schemas for an instance. However, different versions of OpenAPI support different sets of JSON Schema keywords.
OpenAPI 3.0 supports oneOf and anyOf, so you can describe such an object or array of object as follows:
openapi: 3.0.0
...
components:
schemas:
A:
type: object
Body:
oneOf:
- $ref: '#/components/schemas/A'
- type: array
items:
$ref: '#/components/schemas/A'
In the example above, Body can be either object A or an array of objects A.
OpenAPI (Swagger) 2.0 does not support oneOf and anyOf. The most you can do is use a typeless schema:
swagger: '2.0'
...
definitions:
A:
type: object
# Note that Body does not have a "type"
Body:
description: Can be object `A` or an array of `A`
This means the Body can be anything - an object (any object!), an array (containing any items!), also a primitive (string, number, etc.). There is no way to define the exact Body structure in this case. You can only describe this verbally in the description.
You'll need to use OpenAPI 3.0 to define your exact scenario.
| OpenAPI | 31,742,151 | 16 |
I'm writing an Open API 3.0 spec and trying to get response links to render in Swagger UI v 3.18.3.
Example:
openapi: 3.0.0
info:
title: Test
version: '1.0'
tags:
- name: Artifacts
paths:
/artifacts:
post:
tags:
- Artifacts
operationId: createArtifact
requestBody:
content:
application/octet-stream:
schema:
type: string
format: binary
responses:
201:
description: create
headers:
Location:
schema:
type: string
format: uri
example: /artifacts/100
content:
application/json:
schema:
type: object
properties:
artifactId:
type: integer
format: int64
links:
Read Artifact:
operationId: getArtifact
parameters:
artifact-id: '$response.body#/artifactId'
/artifacts/{artifact-id}:
parameters:
- name: artifact-id
in: path
required: true
schema:
type: integer
format: int64
get:
tags:
- Artifacts
operationId: getArtifact
responses:
200:
description: read
content:
application/octet-stream:
schema:
type: string
format: binary
renders a link like this:
Is this expected? I ask because the operationId is exposed on the UI and parameters is shown as a JSON reference make it seem like something is not displaying properly. I would have expected a hyperlink or something to take me to the appropriate section in the Swagger web page that corresponds to the API being referenced by the link.
| Yes this is how Swagger UI currently renders OAS3 links. Rendering of links is one of the things on their OAS3 support backlog:
OAS 3.0 Support Backlog
This is a collection ticket for OAS3 specification features that are not yet supported by Swagger-UI.
...
[ ] Links can't be used to stage another operation
[ ] Link-level servers are not available for executing requests
| OpenAPI | 55,839,466 | 16 |
How to allow anonymous access to springdoc-openapi-ui (OpenAPI 3.0 /swagger-ui.html) in a Spring Boot application secured by Spring Security?
| To use springdoc-openapi-ui /swagger-ui.html, allow anonymous access to the following endpoints in the WebSecurityConfigurerAdapter using permitAll method:
/v3/api-docs/**
/swagger-ui/**
/swagger-ui.html
Example:
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
public void configure(HttpSecurity http) throws Exception {
http.
.authorizeRequests()
.antMatchers("/v3/api-docs/**", "/swagger-ui/**", "/swagger-ui.html").permitAll()
.anyRequest().authenticated()
.and()
.httpBasic(); //or anything else, e.g. .oauth2ResourceServer().jwt()
}
}
Make sure a project has the following dependencies:
org.springdoc:springdoc-openapi-ui
org.springdoc:springdoc-openapi-security
| OpenAPI | 59,898,402 | 16 |
I'm using the OpenAPI code generator Maven plugin to generate OpenAPI 3.0 code from a file. I'm using this plugin in my pom.xml:
<groupId>org.openapitools</groupId>
<artifactId>openapi-generator-maven-plugin</artifactId>
<version>4.3.0</version>
The plugin generates the API without any issues, but instead of using Swagger v3 annotations it uses the old Swagger annotations. For example, parameters are annotated using @ApiParam, whereas the @Parameter annotation from the io.swagger.v3.oas.annotations package should be used instead:
default ResponseEntity<Fault> getFault(@ApiParam(value = "",required=true) @PathVariable("jobId") String jobId) {
Because of this, the latest Swagger UI isn't showing the documentation correctly. When I create an endpoint using swagger.v3 annotations, Swagger UI works properly.
According to the official website https://openapi-generator.tech/docs/plugins/ , I should include this dependency:
<dependency>
<groupId>io.swagger.parser.v3</groupId>
<artifactId>swagger-parser</artifactId>
</dependency>
But even with this dependency the plugin still generates sources with the old annotations.
How can I force Open API code generator to use Swagger v3 annotations?
| V3 annotations are not supported at this moment.
You need to override mustache templates.
Check these PRs:
https://github.com/OpenAPITools/openapi-generator/pull/4779
https://github.com/OpenAPITools/openapi-generator/pull/6306
more info:
https://github.com/OpenAPITools/openapi-generator/issues/6108
https://github.com/OpenAPITools/openapi-generator/issues/5803
You can use the upgraded templates from the PRs above or wait until they are merged.
| OpenAPI | 62,915,594 | 16 |
I am trying to map the following JSON to an OpenAPI 2.0 (Swagger 2.0) YAML definition, and I am not sure how to define mixed array types in my schema:
{
"obj1": [
"string data",
1
]
}
Now, my OpenAPI definition has:
schema:
object1:
type: array
items:
type: string
but this doesn't allow integers inside the array.
Is there a way to define a mixed type array?
| The answer depends on which version of the OpenAPI Specification you use.
OpenAPI 3.1
type can be a list of types, so you can write your schema as:
# openapi: 3.1.0
obj1:
type: array
items:
type: [string, integer]
# or if nulls are allowed:
# type: [string, integer, 'null']
OpenAPI 3.0.x
Mixed types are supported in OpenAPI 3.0 using oneOf / anyOf and optionally nullable: true to also allow nulls.
# openapi: 3.0.1
obj1:
type: array
items:
oneOf:
- type: string
nullable: true # If nulls are allowed
- type: integer
OpenAPI 2.0
OpenAPI 2.0 (Swagger 2.0) does not really support mixed-type array and parameters. The most you can do is to use a typeless schema {} for items, which means the items can be anything (except null) – numbers, objects, strings, etc. You cannot specify the exact types for items, but you can add an example of an array with different item types.
# swagger: '2.0'
obj1:
type: array
items: {} # <--- means "any type" (except null)
example:
- string data
- 1
Note: Typeless schema {} can only be used in body parameters and response schemas. Path, header and form parameters require a primitive type for array items.
| OpenAPI | 38,690,802 | 15 |
By Specification:
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md
A list of tags used by the specification with additional metadata. The order of the tags can be used to reflect on their order by the parsing tools. Not all tags that are used by the Operation Object must be declared. The tags that are not declared MAY be organized randomly or based on the tools' logic. Each tag name in the list MUST be unique.
How are these tags used by parsers? Can you provide some examples?
And also, why do they need to be unique?
| A couple of examples:
Swagger UI uses tags to group the displayed operations. For example, the Petstore demo has three tags - pet, store and user.
Swagger Codegen uses tags to group endpoints into the same API class file:
For example, an endpoint with the "store" tags will be generated in the StoreApi class file.
And also, why do they need to be unique?
Tag names must be unique in the sense that you cannot have two tags with the same name.
# Correct
openapi: 3.0.2
tags:
- name: pet # <--- unique tag name
description: Operations to manage the pets
- name: store # <--- unique tag name
descriptions: Access to Petstore orders
# Wrong
openapi: 3.0.2
tags:
- name: pet
- name: pet
| OpenAPI | 53,517,210 | 15 |
I'm using swagger-jsdoc with Express. To describe an API endpoint with this lib, I use the following lines in a JSDoc block in YAML:
/**
* @swagger
* /users:
* post:
* summary: Register a user
* tags: [Users]
* description: Register a new user and return its cookie token (connect.sid)
* parameters:
* - in: body
* name: body
* schema:
* type: object
* required: [login, password, confirm]
* description: user's credential
* properties:
* login:
* type: string
* minLength: 3
* maxLength: 10
* email:
* type: string
* password:
* type: string
* minLength: 6
* confirm:
* type: string
* responses:
* 200:
* description: OK
* schema:
* $ref: '#/components/schemas/AuthState'
* 422:
* $ref: '#/components/responses/UnprocessableEntity'
*/
router.post('/', usersController.register);
But the problem is that VSCode completely ignores indentation when I start a new line, and it also doesn't show the indentation level. This makes writing the specification really difficult, as on every single new line I have to press [tab] repeatedly to reach the needed indentation level. Extensions like rainbow indents don't work either because they rely on VSCode's indentation.
Are there any settings or extensions I could use to work with this?
Or maybe I'm using the wrong approach and there are better, more widely used approaches for this with Express? I would like to hear about those as well.
| I created a simple extension that targets this particular issue when writing YAML specs with swagger-jsdoc.
Everything is documented in the README, but basically you write your spec like this (which allows for automatic indentation)
/**
* Spec for the route /auth/register.
*
@openapi
/auth/register:
post:
summary: Register as user
tags:
- Auth
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- name
- email
- password
properties:
name:
type: string
email:
type: string
format: email
description: Must be unique
password:
type: string
format: password
minLength: 8
description: At least one number and one letter
*
* More regular comments here
*/
router.post("/auth/register", authMiddleware.validate, authController.register);
Select your comment block, press cmd + shift + P (MacOS) or ctrl + shift + P (Windows) and search for Format swagger-jsdoc comment.
The extension will:
Run prettier to fix/catch indentation errors
Add an asterisk at the start of each line
Replace your comment block with the formatted one
Respect any indentation of your block
/**
* Spec for the route /auth/register.
*
* @openapi
* /auth/register:
* post:
* summary: Register as user
* tags:
* - Auth
* requestBody:
* required: true
* content:
* application/json:
* schema:
* type: object
* required:
* - name
* - email
* - password
* properties:
* name:
* type: string
* email:
* type: string
* format: email
* description: Must be unique
* password:
* type: string
* format: password
* minLength: 8
* description: At least one number and one letter
*
*
* More regular comments here
*/
router.post("/auth/register", authMiddleware.validate, authController.register);
| OpenAPI | 58,186,804 | 15 |
I'm digging around trying to find a solution for merging several OpenAPI v3 component definitions into one file.
Let's imagine a situation:
You decided to split your OpenApi into multiple files in different folders. (see image below)
Now you need to combine all your components.v1.yaml files into a single schema (I named it blueprint.v1.yaml).
Usually I use swagger-cli to merge all $ref dependencies, but that doesn't work here, because I cannot refer to the whole components/schemas object list
and use it to build a single OpenAPI file with all fields filled in (info, components, paths and so on) with the swagger-cli bundle tool.
So, the question is: how do I reuse already defined component blocks (the files called components.v1.yaml) in my blueprint.v1.yaml file?
P.S. Every components.v1.yaml looks like this:
And, for example, a location-create-single.v1.yaml path definition is shown in the picture below. Note that all $ref references point to components.v1.yaml files.
| I don't think there is a "native" OpenAPI solution to your problem. People have been discussing OpenAPI overlays/extends/merges for a while, and there is currently (2020-04-24) no consensus on this topic.
However, you could implement your own tool, or use an existing one, to preprocess your blueprint.v1.yaml and generate a "merged OAS".
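As a rough illustration of that preprocessing idea, here is a minimal Python sketch (it assumes PyYAML and the folder layout from the question; the file names, keys and output path are illustrative, not a definitive tool):
import glob
import yaml
with open("blueprint.v1.yaml") as f:
    blueprint = yaml.safe_load(f)
# Make sure the target components/schemas section exists.
blueprint.setdefault("components", {}).setdefault("schemas", {})
# Collect every components.v1.yaml found in the subfolders and merge its schemas in.
for path in glob.glob("**/components.v1.yaml", recursive=True):
    with open(path) as f:
        part = yaml.safe_load(f) or {}
    blueprint["components"]["schemas"].update(part.get("components", {}).get("schemas", {}))
# Write the merged spec; swagger-cli bundle can then resolve the remaining $refs.
with open("blueprint.bundled.v1.yaml", "w") as f:
    yaml.safe_dump(blueprint, f, sort_keys=False)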
| OpenAPI | 61,262,561 | 15 |
For a legacy API that I am documenting, a successful authentication requires providing the following headers:
X-Access-Token: {token}
Accept: application/json; version=public/v2
For the token part, I document it via:
openapi: 3.0.0
info:
version: "v2"
title: Company App Public Api
description: Integrate your platform with company app website
components:
securitySchemes:
ApiKey:
type: 'apiKey'
in: 'header'
name: 'X-Access-Token'
security:
- ApiKey: []
But how can I document that authentication also requires providing Accept: application/json; version=public/v2? The Accept header must contain application/json; version=public/v2; anything else returns 406 Not Acceptable.
Also, the Accept header with the value application/json; version=public/v2 must be present in my request. The response header is always application/json.
Do you know how I can document that?
| In OpenAPI 3.0, the request header Accept and the response header Content-Type are both defined as responses.<code>.content.<Accept value>. This needs to be defined in every operation.
paths:
/something:
get:
responses:
'200':
description: Successful operation
content:
'application/json; version=public/v2': # <-----
schema:
...
'406':
description: Invalid media type was specified in the `Accept` header (should be `application/json; version=public/v2`)
| OpenAPI | 62,593,055 | 15 |
I have a Django project and I am using Django REST framework. I am using drf-spectacular
for OpenAPI representation, but I think my problem is not tied to this package; it seems like a more generic OpenAPI thing to me (though I'm not 100% sure about that).
Assume that I have a URL structure like this:
urlpatterns = [
path('admin/', admin.site.urls),
path('api/', include([
path('v1/', include([
path('auth/', include('rest_framework.urls', namespace='rest_framework')),
path('jwt-auth/token/obtain', CustomTokenObtainPairView.as_view(), name='token_obtain_pair'),
path('jwt-auth/token/refresh', CustomTokenRefreshView.as_view(), name='token_refresh'),
path('home/', include("home.urls"))
]))
])),
# OpenAPI endpoints
path('swagger/', SpectacularSwaggerView.as_view(url_name='schema-swagger-json'), name='schema-swagger-ui'),
path('swagger.yaml/', SpectacularAPIView.as_view(), name='schema-swagger-yaml'),
path('swagger.json/', SpectacularJSONAPIView.as_view(), name='schema-swagger-json'),
path('redoc/', SpectacularRedocView.as_view(url_name='schema-swagger-yaml'), name='schema-redoc'),
]
In the corresponding swagger UI view, I get all endpoints grouped under api endpoint, e.g.:
If I add more endpoints under v1, they all go under the api endpoint.
What I want to achieve is to have the endpoints in Swagger grouped differently, e.g. by app. So I'd have home, jwt, another_endpoint instead of just api, which would make Swagger easier to navigate (when I add more endpoints, the current method just shows a massive list of URLs, which is not very user friendly).
I've read that those groups are extracted from the first segment of a URL path, which in my case is api, so if I change the URL structure I could achieve what I need.
But isn't there another way of doing this? I want to keep my URL structure and customize how it is displayed with OpenAPI, so in the end I have a Swagger UI that groups the endpoints by app, making it easier to navigate and find what you are looking for.
| You are making it harder than it needs to be. In the global settings you can specify a common prefix regex that strips the unwanted parts; that would clean up both the operation_id and the tags for you. In your case that would probably be:
SPECTACULAR_SETTINGS = {
'SCHEMA_PATH_PREFIX': r'/api/v[0-9]',
}
That should result in the tags home, jwt-auth, swagger.json, swagger.yaml.
The tags argument on @extend_schema is merely a convenience to deviate from the default where needed; it would be cumbersome to do this for every operation. See the settings for more details:
https://drf-spectacular.readthedocs.io/en/latest/settings.html
For even more elaborate tagging you can always subclass AutoSchema and override get_tags(self) to your liking. Cheers!
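A minimal sketch of that last option, assuming drf-spectacular's AutoSchema API and using the view's module as a stand-in for the app name (the settings module path below is hypothetical):
from drf_spectacular.openapi import AutoSchema
class AppTaggedAutoSchema(AutoSchema):
    def get_tags(self):
        # Tag the operation by the top-level package of the view's module
        # (e.g. "home") instead of the first URL path segment.
        return [self.view.__class__.__module__.split(".")[0]]
# settings.py: register the subclass globally (replace the dotted path with your own).
REST_FRAMEWORK = {
    "DEFAULT_SCHEMA_CLASS": "myproject.schema.AppTaggedAutoSchema",
}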
| OpenAPI | 62,830,171 | 15 |
Problem
I currently have a JWT dependency named jwt which makes sure the request passes the JWT authentication stage before hitting the endpoint, like this:
sample_endpoint.py:
from fastapi import APIRouter, Depends, Request
from JWTBearer import JWTBearer
from jwt import jwks
router = APIRouter()
jwt = JWTBearer(jwks)
@router.get("/test_jwt", dependencies=[Depends(jwt)])
async def test_endpoint(request: Request):
return True
Below is the JWT dependency which authenticate users using JWT (source: https://medium.com/datadriveninvestor/jwt-authentication-with-fastapi-and-aws-cognito-1333f7f2729e):
JWTBearer.py
from typing import Dict, Optional, List
from fastapi import HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from jose import jwt, jwk, JWTError
from jose.utils import base64url_decode
from pydantic import BaseModel
from starlette.requests import Request
from starlette.status import HTTP_403_FORBIDDEN
JWK = Dict[str, str]
class JWKS(BaseModel):
keys: List[JWK]
class JWTAuthorizationCredentials(BaseModel):
jwt_token: str
header: Dict[str, str]
claims: Dict[str, str]
signature: str
message: str
class JWTBearer(HTTPBearer):
def __init__(self, jwks: JWKS, auto_error: bool = True):
super().__init__(auto_error=auto_error)
self.kid_to_jwk = {jwk["kid"]: jwk for jwk in jwks.keys}
def verify_jwk_token(self, jwt_credentials: JWTAuthorizationCredentials) -> bool:
try:
public_key = self.kid_to_jwk[jwt_credentials.header["kid"]]
except KeyError:
raise HTTPException(
status_code=HTTP_403_FORBIDDEN, detail="JWK public key not found"
)
key = jwk.construct(public_key)
decoded_signature = base64url_decode(jwt_credentials.signature.encode())
return key.verify(jwt_credentials.message.encode(), decoded_signature)
async def __call__(self, request: Request) -> Optional[JWTAuthorizationCredentials]:
credentials: HTTPAuthorizationCredentials = await super().__call__(request)
if credentials:
if not credentials.scheme == "Bearer":
raise HTTPException(
status_code=HTTP_403_FORBIDDEN, detail="Wrong authentication method"
)
jwt_token = credentials.credentials
message, signature = jwt_token.rsplit(".", 1)
try:
jwt_credentials = JWTAuthorizationCredentials(
jwt_token=jwt_token,
header=jwt.get_unverified_header(jwt_token),
claims=jwt.get_unverified_claims(jwt_token),
signature=signature,
message=message,
)
except JWTError:
raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="JWK invalid")
if not self.verify_jwk_token(jwt_credentials):
raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="JWK invalid")
return jwt_credentials
jwt.py:
import os
import requests
from dotenv import load_dotenv
from fastapi import Depends, HTTPException
from starlette.status import HTTP_403_FORBIDDEN
from app.JWTBearer import JWKS, JWTBearer, JWTAuthorizationCredentials
load_dotenv() # Automatically load environment variables from a '.env' file.
jwks = JWKS.parse_obj(
requests.get(
f"https://cognito-idp.{os.environ.get('COGNITO_REGION')}.amazonaws.com/"
f"{os.environ.get('COGNITO_POOL_ID')}/.well-known/jwks.json"
).json()
)
jwt = JWTBearer(jwks)
async def get_current_user(
    credentials: JWTAuthorizationCredentials = Depends(jwt)
) -> str:
try:
return credentials.claims["username"]
except KeyError:
        raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="Username missing")
api_key_dependency.py (very simplified right now, it will be changed):
from fastapi import Security, FastAPI, HTTPException
from fastapi.security.api_key import APIKeyHeader
from starlette.status import HTTP_403_FORBIDDEN
async def get_api_key(
api_key_header: str = Security(api_key_header)
):
API_KEY = ... getting API KEY logic ...
if api_key_header == API_KEY:
return True
else:
raise HTTPException(
status_code=HTTP_403_FORBIDDEN, detail="Could not validate credentials"
)
Question
Depending on the situation, I would like to first check whether there is an API key in the header and, if it's present, use that to authenticate. Otherwise, I would like to use the jwt dependency for authentication. I want to make sure that if either API-key authentication or JWT authentication passes, the user is authenticated. Is this possible in FastAPI (i.e. having multiple dependencies where authentication passes if any one of them passes)? Thank you!
| Sorry, I got lost with things to do.
The endpoint has a single dependency, call it check, from the file check_auth.
ENDPOINT
from fastapi import APIRouter, Depends, Request
from check_auth import check
from JWTBearer import JWTBearer
from jwt import jwks
router = APIRouter()
jwt = JWTBearer(jwks)
@router.get("/test_jwt", dependencies=[Depends(check)])
async def test_endpoint(request: Request):
return True
The function check will depend on two separate dependencies, one for the API key and one for the JWT. If both or one of these passes, the authentication passes. Otherwise, we raise an exception as shown below.
DEPENDENCY
from fastapi import Depends, Header
def key_auth(api_key=Header(None)):
    if not api_key:
        return None
    # ... verification logic goes here ...
def jwt_auth(authorization=Header(None)):
    if not authorization:
        return None
    # ... verification logic goes here ...
async def check(key_result=Depends(key_auth), jwt_result=Depends(jwt_auth)):
    # Authentication passes if at least one of the two dependencies succeeds.
    if not (key_result or jwt_result):
        raise Exception
| OpenAPI | 64,731,890 | 15 |
I am using Swagger OpenAPI Specification tool. I have a string array property in one of the definitions as follows:
cities:
type: array
items:
type: string
example: "Pune"
My API produces JSON result, so Swagger UI displays the following example for the response:
{
"cities": [
"Pune"
]
}
How can I add multiple example values for the cities array? Expecting the result as:
{
"cities": [
"Pune",
"Mumbai",
"Bangaluru"
]
}
Tried comma-separated strings in the example key like below:
cities:
type: array
items:
type: string
example: "Pune", "Mumbai", "Bangaluru"
But the Swagger Editor shows an error, "Bad indentation".
Is there any way to specify multiple values in the example key?
Update
User Helen below has given the correct answer. I had an indentation problem, hence there were nested arrays (2D arrays).
Correct way:
cities:
  type: array
  items:
    type: string
  example:
    - Pune
    - Mumbai
My way (which was wrong)
cities:
  type: array
  items:
    type: string
    example:
      - Pune
      - Mumbai
Look at the indentation of the example key in the two cases above; that is what makes the difference. It's YAML, so indentation matters.
| To display an array example with multiple items, add the example on the array level instead of item level:
cities:
type: array
items:
type: string
example:
- Pune
- Mumbai
- Bangaluru
# or
# example: [Pune, Mumbai, Bangaluru]
In case of array of objects, the example would look like this:
type: array
items:
type: object
properties:
id:
type: integer
name:
type: string
example:
- id: 1
name: Prashant
- id: 2
name: Helen
| OpenAPI | 46,578,110 | 14 |
When I generate a C# client for an API using NSwag,
where the API includes endpoints that can be used with multiple HTTP request types (e.g. POST, GET),
the client generates a method for each request with the same base name plus a number.
E.g. Using this API: https://api.premiumfunding.net.au/assets/scripts/swagger/v1/swagger.json
The schema contains an endpoint /contract that supports GET and POST requests, and an endpoint /contract/{ID} that supports GET, POST and DELETE requests.
The generated client has methods:
ContractAsync for GET requests without ID
Contract2Async for POST requests without ID
Contract3Async for GET requests with ID
Contract4Async for POST requests with ID
Contract5Async for DELETE requests with ID
I would like it to generate methods named:
GetContractAsync for GET requests without ID
PostContractAsync for POST requests without ID
GetContractAsync for GET requests with ID (method overload)
PostContractAsync for POST requests with ID (method overload)
DeleteContractAsync for DELETE requests with ID
At the moment I am just renaming the methods manually.
Is it possible to configure NSwag to generated these method names?
(Or is there an alternative tool that will give me this result?)
| You can implement and provide your own IOperationNameGenerator:
https://github.com/RSuter/NSwag/blob/master/src/NSwag.CodeGeneration/OperationNameGenerators/IOperationNameGenerator.cs
Another option would be to preprocess the spec and change the “operationId”s to the form “controller_operation” (simple console app based on the NSwag.Core package)
| OpenAPI | 49,891,178 | 14 |
I'm creating the API description of our application using Swagger/OpenAPI V3 annotations, imported from the following dependency:
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>1.1.45</version>
</dependency>
One of the annotations is a @Schema annotation that accepts an attribute named allowableValues, which takes an array of strings:
@Schema(description = "example",
allowableValues = {"exampleV1", "exampleV2"},
example = "exampleV1", required = true)
private String example;
Now I would like to use a custom method on our enum class that returns the array of allowable strings, so it does not need to be updated each time we add a value to our enum. I want to use it like this:
public enum ExampleEnum {
EXAMPLEV1, EXAMPLEV2;
public static String[] getValues() {...}
}
@Schema(description = "example",
allowableValues = ExampleEnum.getValues(),
example = "exampleV1", required = true)
private String example;
Now this doesn't compile, because annotation attribute values must be compile-time constants and a method call is not one.
Is there such a solution that allows usage of Enums in the swagger V3 annotation attributes values?
I had a look at the following resources:
https://swagger.io/docs/specification/data-models/enums/
You can define reusable enums in the global components section and reference them via $ref elsewhere.
Worst case, I can indeed define it in one constant place so that after adding a value to the enum there is only one other place to update. But first I want to explore whether the above-mentioned solution is possible.
https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Annotations#schema
It doesn't say anything about using classes or dynamically generated values.
Enum in swagger
This is about documenting enums in Swagger, not about using them in the Swagger annotations API.
| Try using @Schema(implementation = ExampleEnum.class, ...); you can add all the other properties you want. I would need more info on your implementation, but try this first.
| OpenAPI | 59,282,771 | 14 |
I have a bunch of API endpoints that return text/csv content in their responses. How do I document this? Here is what I currently have:
/my_endpoint:
get:
description: Returns CSV content
parameters:
- $ref: '#/components/parameters/myParemeters'
responses:
200:
headers:
$ref: '#/components/headers/myHeaders'
content: text/csv
As it stands, this does not work and I get the note in the Swagger preview:
Could not render this component, see the console.
The question is: how do I properly describe the content for CSV responses? I find it does work if I add a schema, something like this:
...
content:
text/csv:
schema:
type: array
items:
type: string
...
But there shouldn't be a schema, since it is CSV. So, to go back to the question, what is the proper way to describe the CSV response content?
| Your first example is not valid syntax. Replace with:
responses:
'200':
content:
text/csv: {} # <-----
# Also note the correct syntax for referencing response headers:
headers:
Http-Header-Name: # e.g. X-RateLimit-Remaining
$ref: '#/components/headers/myHeader'
components:
headers:
myHeader:
description: Description of this response header
schema:
type: string
As for your second example, OpenAPI Specification does not provide examples of CSV responses. So the schema could be type: string, or an array of strings, or an empty schema {} (this means "any value"), or something else. The actual supported syntax might be tool-dependent. Feel free to ask for clarifications in the OpenAPI Specification repository.
| OpenAPI | 59,722,686 | 14 |
The RabbitMQ windows service will not start:
C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.0.4\sbin>rabbitmq-service.bat start
C:\Program Files (x86)\erl5.10.1\erts-5.10.1\bin\erlsrv: Failed to start service RabbitMQ.
Error: The process terminated unexpectedly.
I can run rabbitmq-server.bat without any problems.
No log entries are made to %appdata%\RabbitMQ\log\ directory when trying to start the service.
How to make it work?
| I faced the same problem and was able to solve the problem following the steps mentioned below.
Run the command prompt as Administrator
Navigate to the sbin directory and uninstall the service. rabbitmq-service remove
Reinstall the service rabbitmq-service install
Enable the plugins. rabbitmq-plugins enable rabbitmq_management
Start the service rabbitmq-service start
Go to "http://localhost:15672/"
| RabbitMQ | 16,001,047 | 31 |
The beloved RabbitMQ Management Plugin has a HTTP API to manage the RabbitMQ through plain HTTP requests.
We need to create users programmatically, and the HTTP API was the chosen way to go. The documentation is scarce, but the API is pretty simple and intuitive.
Concerned about the security, we don't want to pass the user password in plain text, and the API offers a field to send the password hash instead. Quote from there:
[ GET | PUT | DELETE ] /api/users/name
An individual user. To PUT a user, you will need a body looking
something like this:
{"password":"secret","tags":"administrator"}
or:
{"password_hash":"2lmoth8l4H0DViLaK9Fxi6l9ds8=", "tags":"administrator"}
The tags key is mandatory. Either password or password_hash must be set.
So far, so good, the problem is: how to correctly generate the password_hash?
The password hashing algorithm is configured in RabbitMQ's configuration file, and our is configured as the default SHA256.
I'm using C#, and the following code to generate the hash:
var cr = new SHA256Managed();
var simplestPassword = "1";
var bytes = cr.ComputeHash(Encoding.UTF8.GetBytes(simplestPassword));
var sb = new StringBuilder();
foreach (var b in bytes) sb.Append(b.ToString("x2"));
var hash = sb.ToString();
This doesn't work. Testing with some online SHA-256 tools, the code generates the expected output. However, if we go to the management page and set the user password manually to "1", it works like a charm.
This answer led me to export the configuration and take a look at the hashes RabbitMQ is generating, and I realized a few things:
hash example of "1": "y4xPTRVfzXg68sz9ALqeQzARam3CwnGo53xS752cDV5+Utzh"
all the user's hashes have fixed length
the hashes change every time (even if the password is the same). I know PBKDF2 also does this to passwords, but I don't know the name of this cryptographic property.
if I pass the password_hash the RabbitMQ stores it without changes
I'm accepting suggestions in other programming languages as well, not just C#.
| And just for fun, the bash version!
#!/bin/bash
function encode_password()
{
SALT=$(od -A n -t x -N 4 /dev/urandom)
PASS=$SALT$(echo -n $1 | xxd -ps | tr -d '\n' | tr -d ' ')
PASS=$(echo -n $PASS | xxd -r -p | sha256sum | head -c 128)
PASS=$(echo -n $SALT$PASS | xxd -r -p | base64 | tr -d '\n')
echo $PASS
}
encode_password "some-password"
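For completeness, the same scheme is easy to express in Python too; this is a sketch that mirrors the bash version above (4 random salt bytes, SHA-256 over salt + UTF-8 password bytes, then Base64 of salt + digest):
import base64
import hashlib
import os
def encode_rabbitmq_password(password: str) -> str:
    salt = os.urandom(4)  # 4-byte random salt, like the od call above
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return base64.b64encode(salt + digest).decode("ascii")  # salt + hash, Base64-encoded
print(encode_rabbitmq_password("some-password"))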
| RabbitMQ | 41,306,350 | 31 |
Are there any advantages to using NServiceBus over simply using the .NET driver for RabbitMQ (assuming we can replace MSMQ with AMQP)? Does NSB provide any additional functionality or abstractions that are not available directly in AMQP?
| Main advantages include (but are not limited to):
Takes care of serialization/deserialization of messages.
Provides a neat model for dispatching messages w. handlers, polymorphic dispatch, arranging handlers in a pipeline etc.
Handles unit of work.
Provides a neat saga implementation.
Gives you a host process that can be F5-debugged as well as installed as a Windows service.
These are things, that you'd need to roll yourself, if you were to use the RabbitMQ .NET client directly - unless, of course, you don't need any of these things.
Oh, and if you use MSMQ instead of RabbitMQ, you can get all these things in a broker-less model :)
| RabbitMQ | 9,558,128 | 31 |
I'm using the official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq/)
I've tried editing the rabbitmq.config file inside the container after running
docker exec -it <container-id> /bin/bash
However, this seems to have no effect on the rabbitmq server running in the container. Restarting the container obviously didn't help either since Docker starts a completely new instance.
So I assumed that the only way to configure rabbitmq.config for a Docker container was to set it up before the container starts running, which I was able to partly do using the image's supported environment variables.
Unfortunately, not all configuration options are supported by environment variables. For instance, I want to set {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} in rabbitmq.config.
I then found the RABBITMQ_CONFIG_FILE environment variable, which should allow me to point to the file I want to use as my config file. However, I've tried the following with no luck:
docker service create --name rabbitmq --network rabbitnet \
-e RABBITMQ_ERLANG_COOKIE='mycookie' --hostname = "{{Service.Name}}{{.Task.Slot}}" \
--mount type=bind,source=/root/mounted,destination=/root \
-e RABBITMQ_CONFIG_FILE=/root/rabbitmq.config rabbitmq
The default rabbitmq.config file containing:
[ { rabbit, [ { loopback_users, [ ] } ] } ]
is what's in the container once it starts
What's the best way to configure rabbitmq.config inside Docker containers?
| The config file lives in /etc/rabbitmq/rabbitmq.config, so if you mount your own config file with something like this (I'm using docker-compose here to set up the image):
volumes:
- ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
that should do it.
In case you have issues where the configuration file gets created as a directory, try absolute paths.
| RabbitMQ | 42,003,640 | 30 |
I'd like to write a simple smoke test that runs after deployment to verify that the RabbitMQ credentials are valid. What's the simplest way to check that rabbitmq username/password/vhost are valid?
Edit: Preferably, check using a bash script. Alternatively, using a Python script.
| As you haven't provided any details about language, etc.:
You could simply issue a HTTP GET request to the management api.
$ curl -i -u guest:guest http://localhost:15672/api/whoami
See RabbitMQ Management HTTP API
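And since the question also mentions Python, a small sketch of the same check with the requests library (assuming the management plugin is listening on its default port 15672 and that requests is installed):
import sys
import requests
resp = requests.get(
    "http://localhost:15672/api/whoami",
    auth=("guest", "guest"),  # the credentials under test
    timeout=5,
)
if resp.status_code == 200:
    print("Credentials are valid:", resp.json())
else:
    print("Credential check failed with HTTP status", resp.status_code)
    sys.exit(1)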
| RabbitMQ | 17,148,683 | 30 |
We're using amqplib to publish/consume messages. I want to be able to read the number of messages on a queue (ideally both acknowledged and unacknowledged). This will allow me to show a nice status diagram to the admin users and detect if a certain component is not keeping up with the load.
I can't find any information in the amqplib docs about reading queue status.
Can someone point me in the right direction?
| Using pika:
import pika
pika_conn_params = pika.ConnectionParameters(
host='localhost', port=5672,
credentials=pika.credentials.PlainCredentials('guest', 'guest'),
)
connection = pika.BlockingConnection(pika_conn_params)
channel = connection.channel()
queue = channel.queue_declare(
queue="your_queue", durable=True,
exclusive=False, auto_delete=False
)
print(queue.method.message_count)
Using PyRabbit:
from pyrabbit.api import Client
cl = Client('localhost:55672', 'guest', 'guest')
cl.get_messages('example_vhost', 'example_queue')[0]['message_count']
Using HTTP
Syntax:
curl -i -u user:password http://localhost:15672/api/queues/vhost/queue
Example:
curl -i -u guest:guest http://localhost:15672/api/queues/%2f/celery
Note: Default vhost is / which needs to be escaped as %2f
Using CLI:
$ sudo rabbitmqctl list_queues | grep 'my_queue'
| RabbitMQ | 16,691,161 | 30 |
I have installed RabbitMQ using a Helm chart on a Kubernetes cluster. The RabbitMQ pod keeps restarting. On inspecting the pod logs, I get the error below:
2020-02-26 04:42:31.582 [warning] <0.314.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_durable_queue]}
2020-02-26 04:42:31.582 [info] <0.314.0> Waiting for Mnesia tables for 30000 ms, 6 retries left
When I try to do kubectl describe pod I get this error
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-rabbitmq-0
ReadOnly: false
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rabbitmq-config
Optional: false
healthchecks:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rabbitmq-healthchecks
Optional: false
rabbitmq-token-w74kb:
Type: Secret (a volume populated by a Secret)
SecretName: rabbitmq-token-w74kb
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/arch=amd64
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 3m27s (x878 over 7h21m) kubelet, gke-analytics-default-pool-918f5943-w0t0 Readiness probe failed: Timeout: 70 seconds ...
Checking health of node [email protected] ...
Status of node [email protected] ...
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}
I have provisioned the above on a Kubernetes cluster on Google Cloud. I am not sure in what specific situation it started failing; I had to restart the pod, and since then it has been failing.
What is the issue here?
| TLDR
helm upgrade rabbitmq --set clustering.forceBoot=true
Problem
The problem happens for the following reason:
All RMQ pods are terminated at the same time due to some reason (maybe because you explicitly set the StatefulSet replicas to 0, or something else)
One of them is the last one to stop (maybe just a tiny bit after the others). It stores this condition ("I'm standalone now") in its filesystem, which in k8s is the PersistentVolume(Claim). Let's say this pod is rabbitmq-1.
When you spin the StatefulSet back up, the pod rabbitmq-0 is always the first to start (see here).
During startup, pod rabbitmq-0 first checks whether it's supposed to run standalone. But as far as it can see on its own filesystem, it's part of a cluster. So it checks for its peers and doesn't find any. This results in a startup failure by default.
rabbitmq-0 thus never becomes ready.
rabbitmq-1 is never starting because that's how StatefulSets are deployed - one after another. If it were to start, it would start successfully because it sees that it can run standalone as well.
So in the end, it's a bit of a mismatch between how RabbitMQ and StatefulSets work. RMQ says: "if everything goes down, just start everything and the same time, one will be able to start and as soon as this one is up, the others can rejoin the cluster." k8s StatefulSets say: "starting everything all at once is not possible, we'll start with the 0".
Solution
To fix this, there is a force_boot command for rabbitmqctl which basically tells an instance to start standalone if it doesn't find any peers. How you can use this from Kubernetes depends on the Helm chart and container you're using. In the Bitnami Chart, which uses the Bitnami Docker image, there is a value clustering.forceBoot = true, which translates to an env variable RABBITMQ_FORCE_BOOT = yes in the container, which will then issue the above command for you.
But looking at the problem, you can also see why deleting PVCs will work (other answer). The pods will just all "forget" that they were part of a RMQ cluster the last time around, and happily start. I would prefer the above solution though, as no data is being lost.
| RabbitMQ | 60,407,082 | 29 |
I want to use the RabbitMQ Helm chart to set up a cluster, but when I try to pass the configuration files that we have at the moment into values.yaml, it doesn't work.
The command that I use:
helm install --dry-run --debug stable/rabbitmq --name testrmq --namespace rmq -f rabbit-values.yaml
rabbit-values.yaml:
rabbitmq:
plugins: "rabbitmq_management rabbitmq_federation rabbitmq_federation_management rabbitmq_shovel rabbitmq_shovel_management rabbitmq_mqtt rabbitmq_web_stomp rabbitmq_peer_discovery_k8s"
advancedConfiguration: |-
{{ .Files.Get "rabbitmq.config" | quote}}
And what I get for advancedConfiguration:
NAME: testrmq
REVISION: 1
RELEASED: Thu Aug 29 10:09:26 2019
CHART: rabbitmq-5.5.0
USER-SUPPLIED VALUES:
rabbitmq:
advancedConfiguration: '{{ .Files.Get "rabbitmq.config" | quote}}'
plugins: rabbitmq_management rabbitmq_federation rabbitmq_federation_management
rabbitmq_shovel rabbitmq_shovel_management rabbitmq_mqtt rabbitmq_web_stomp rabbitmq_peer_discovery_k8s
I have to mention that:
rabbitmq.config is an Erlang file
I tried different things including indentation (indent 4)
| You can't use Helm templating in the values.yaml file. (Unless the chart author has specifically called the tpl function when the value is used; for this variable the chart doesn't, and that's usually called out in the chart documentation.)
Your two options are to directly embed the file content in the values.yaml file you're passing in, or to use the
Helm --set-file option (link to v2 docs)
Helm --set-file option (link to v3 docs).
helm install --dry-run --debug \
stable/rabbitmq \
--name testrmq \
--namespace rmq \
-f rabbit-values.yaml \
--set-file rabbitmq.advancedConfig=rabbitmq.config
There isn't a way to put a file pointer inside your local values YAML file though.
| RabbitMQ | 57,706,037 | 29 |
I want to use a messaging library in my application to interact with RabbitMQ. Can anyone please explain the differences between the pika and kombu libraries?
| Kombu and pika are two different Python libraries that fundamentally serve the same purpose: publishing and consuming messages to/from a message broker.
Kombu has a higher level of abstraction than pika. Pika only supports the AMQP 0.9.1 protocol, while Kombu can support other transports (such as Redis). More generally, Kombu is more feature-rich than pika: it has support for reconnection strategies, connection pooling and failover strategies, among others. Some of those features are must-haves (which you'll have to re-implement or work around if you choose to use pika in a serious project), some others are just nice to have. The downside of this: the more complex a library is, the more you will be surprised by its behavior and the harder it will be to reason about it and trace bugs. Pika's codebase is relatively small and easy to get into. On the other hand, Kombu is tailor-made for Celery, which is a huge project. Celery's documentation is rather good, yet Kombu's documentation is quite poor in comparison. It feels like Celery is the project intended to be exposed, not Kombu.
Under the hood, when using AMQP as the transport, Kombu uses either the py-amqp library or librabbitmq to send/receive/parse AMQP 0.9.1 frames. In this respect, pika would be closer to py-amqp than to Kombu in terms of abstraction level.
RabbitMQ is complex. Choose pika if you think that you shouldn't add complexity on top of features that are already well encapsulated, or if you need more control over and understanding of RabbitMQ. Choose Kombu if you need a turnkey solution and don't want to reinvent the wheel (i.e. reimplement some basic features which are worth a few lines of code most of the time). But whatever library you choose, it doesn't exempt you from learning in depth the advantages and limitations of the underlying broker.
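To give a feel for the difference in abstraction level, here is a sketch of publishing a single message with each library (the connection URL, queue name and payload are illustrative):
# pika: you work at the AMQP 0.9.1 level (connection, channel, queue declare, basic_publish).
import pika
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_publish(exchange="", routing_key="tasks", body=b"hello")
conn.close()
# Kombu: a higher-level API that also serializes the payload for you.
from kombu import Connection
with Connection("amqp://guest:guest@localhost:5672//") as kconn:
    queue = kconn.SimpleQueue("tasks")
    queue.put({"msg": "hello"})
    queue.close()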
| RabbitMQ | 48,524,536 | 29 |
I'm quite new to these high-level concurrency paradigms, and I've started using the Scala Rx bindings. So I'm trying to understand: how does Rx differ from messaging queues like RabbitMQ or ZeroMQ?
They both appear to use the subscribe/publish paradigm. Somewhere I saw a tweet about RX being run atop RabbitMQ.
Could someone explain the differences between RX and messaging queues? Why would I choose one over the other? Can one be substituted for the other, or are they mutually exclusive? In what areas do they overlap?
| It's worth clicking the learn more link on the [system.reactive] tag, we put a fair bit of info there!
From the intro there you can see that Rx is not a message queueing technology:
The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators. System.Reactive is the root namespace used through the library. Using Rx, developers represent asynchronous data streams using LINQ operators, and parameterize the concurrency in the asynchronous data streams using Schedulers. Simply put, Rx = Observables + LINQ + Schedulers.
So, Rx and message queueing are really distinct technologies that can complement each other quite well. A classic example is a stock price ticker service - this might be delivered via message queuing, but then transformed by Rx to group, aggregate and filter prices.
You can go further: much as Entity Framework turns IQueryable<T> queries into SQL run directly on the database, you can create providers that turn Rx IQbservable<T> queries into native queries - for example a Where filter might leverage the native filtering capability that exists in many message queueing technologies to apply filters directly. This is quite a lot of work though, and hard.
It's much easier, and not uncommon, to see message queue messages fed into an Rx Subject so that incoming messages from a queue are transformed into an Rx stream for easy consumption in the client. Rx is also used extensively in GUIs to work with client-side events like button presses and text box changes, to make traditionally difficult scenarios like drag-and-drop and text auto-completion via asynchronous server queries dramatically easier. There's a good hands-on lab covering the latter scenario here. It was written against an early release of Rx, but is still very relevant.
I recommend looking at this video presentation by Bart de Smet for a fantastic intro: Curing Your Event Processing Blues with Reactive Extensions (Rx) - including the classic Rx demo that writes a query over a live feed from Kinect to interpret hand-waving!
| RabbitMQ | 20,740,114 | 29 |
Is there any simple way to install RabbitMQ for Ubuntu? I did the the following:
Add the following line to /etc/apt/sources.list:
deb http://www.rabbitmq.com/debian/ testing main
then install with apt-get:
$ sudo apt-get install rabbitmq-server
But I get the following error every time:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
Since you only requested a single operation it is extremely likely that
the package is simply not installable and a bug report against
that package should be filed.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
rabbitmq-server: Depends: erlang-nox (>= 1:12.b.3) but 1:11.b.5dfsg-11 is to be installed
E: Broken packages
How am I supposed to install dependencies and to control the version of erlang-nox since it is installed already?
| The simplest way to install RabbitMQ on Ubuntu:
echo "deb http://www.rabbitmq.com/debian/ testing main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list > /dev/null
wget https://www.rabbitmq.com/rabbitmq-signing-key-public.asc
sudo apt-key add rabbitmq-signing-key-public.asc
sudo apt-get update
sudo apt-get install rabbitmq-server -y
sudo service rabbitmq-server start
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
The default username/password is guest/guest, and the management UI listens on port 15672; browse to http://localhost:15672 to reach it.
If you want to change the credentials or add a new user, run the following commands:
sudo rabbitmqctl add_user user_name password_for_this_user
sudo rabbitmqctl set_user_tags user_name administrator
sudo rabbitmqctl set_permissions -p / user_name ".*" ".*" ".*"
To delete the default guest user, run this command:
sudo rabbitmqctl delete_user guest
| RabbitMQ | 8,808,909 | 29 |
I have been fighting the Django/Celery documentation for a while now and need some help.
I would like to be able to run Periodic Tasks using django-celery. I have seen around the internet (and the documentation) several different formats and schemas for how one should go about achieving this using Celery...
Can someone help with a basic, functioning example of the creation, registration and execution of a django-celery periodic task? In particular, I want to know whether I should write a task that extends the PeriodicTask class and register that, or whether I should use the @periodic_task decorator, or whether I should use the @task decorator and then set up a schedule for the task's execution.
I don't mind if all three ways are possible, but I would like to see an example of at least one way that works. Really appreciate your help.
| What's wrong with the example from the docs?
from celery.task import PeriodicTask
from clickmuncher.messaging import process_clicks
from datetime import timedelta
class ProcessClicksTask(PeriodicTask):
run_every = timedelta(minutes=30)
def run(self, **kwargs):
process_clicks()
You could write the same task using a decorator:
from celery.task.schedules import crontab
from celery.task import periodic_task
@periodic_task(run_every=crontab(minute="*/30"))
def process_clicks():
....
The decorator syntax simply allows you to turn an existing function/task into a periodic task without modifying it directly.
For the tasks to be executed, celerybeat must be running.
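Note that both snippets use the older celery.task API. In newer Celery versions the same schedule is usually expressed with a plain task plus a beat schedule in the app configuration; the sketch below is illustrative (module name and broker URL are assumptions, and the key under "task" must match the task's registered name):

```python
# tasks.py
from datetime import timedelta

from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def process_clicks():
    ...  # whatever the periodic work is

app.conf.beat_schedule = {
    "process-clicks-every-30-minutes": {
        "task": "tasks.process_clicks",     # registered task name
        "schedule": timedelta(minutes=30),  # or a crontab() schedule
    },
}
```

As with the examples above, a beat process (e.g. celery -A tasks beat) still has to be running for the schedule to fire.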
| RabbitMQ | 8,224,482 | 29 |
I'm new to this; I have just started to learn RabbitMQ and installed it on a Windows system.
I installed the Erlang VM and RabbitMQ in custom folders, not the default folders (both of them).
Then I restarted my computer.
By the way, my computer name is "NULL".
I cd into the RabbitMQ/sbin folder and run the command:
rabbitmqctl status
But the return message is:
Status of node rabbit@NULL ...
Error: unable to perform an operation on node 'rabbit@NULL'.
Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbit@NULL
DIAGNOSTICS
attempted to contact: [rabbit@NULL]
rabbit@NULL:
connected to epmd (port 4369) on NULL
epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
TCP connection succeeded but Erlang distribution failed
Authentication failed (rejected by the remote node), please check the Erlang cookie
Current node details:
node name: rabbitmqcli70@NULL
effective user's home directory: C:\Users\Jerry Song
Erlang cookie hash: 51gvGHZpn0gIK86cfiS7vp==
I have tried to restart RabbitMQ; what I get is:
ERROR: node with name "rabbit" already running on "NULL"
By the way, my computer name is "NULL".
And I have enabled all ports in the firewall.
| https://groups.google.com/forum/#!topic/rabbitmq-users/a6sqrAUX_Fg describes the problem where there is a cookie mismatch on a fresh installation of RabbitMQ. The easy solution on Windows is to synchronize the cookies.
Also described here: http://www.rabbitmq.com/clustering.html#erlang-cookie
Ensure cookies are synchronized across 1, 2 and optionally 3 below:
%HOMEDRIVE%%HOMEPATH%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie for user %USERNAME%) if both the HOMEDRIVE and HOMEPATH environment variables are set
%USERPROFILE%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie) if HOMEDRIVE and HOMEPATH are not both set
For the RabbitMQ Windows service - %USERPROFILE%\.erlang.cookie (usually C:\WINDOWS\system32\config\systemprofile)
The cookie file used by the Windows service account and the user running CLI tools must be synchronized by copying the one from the C:\WINDOWS\system32\config\systemprofile folder.
| RabbitMQ | 47,874,958 | 28 |
I am new to RabbitMQ, so please excuse me for asking some trivial questions:
When running a RabbitMQ cluster, if a node fails the load shifts to another node (without stopping the other nodes). Similarly, we can also add new nodes to the existing cluster without stopping existing nodes in the cluster. Is that correct?
Assume that we start with a single RabbitMQ node, and create 100 queues on it. Now let's say that producers start sending messages at a faster rate. To handle this load, we add more nodes and make a cluster. But queues exist on the first node only. How does the load get balanced among nodes now? And if we need to add more queues, on which node should we add them? Or can we add them using the load balancer?
|
When running a RabbitMQ cluster, if a node fails the load shifts to another node (without stopping the other nodes). Similarly, we can also add new nodes to the existing cluster without stopping existing nodes in the cluster. Is that correct?
If a node on which the queue was created fails, RabbitMQ will elect a new master for that queue in the cluster as long as mirroring for the queue is enabled. Clustering provides high availability (HA) based on a policy that you can define.
Assume that we start with a single RabbitMQ node, and create 100 queues on it. Now let's say that producers start sending messages at a faster rate. To handle this load, we add more nodes and make a cluster. But queues exist on the first node only. How does the load get balanced among nodes now?
The load is not balanced. The distributed cluster provides high availability and not load balancing. Your requests will be redirected to the node in the cluster on which the queue resides.
And if we need to add more queues, on which node should we add them? Or can we add them using the load balancer?
That depends on your use case. Some folks use a round robin and create queues on separate nodes.
In summary
For high availability, use mirroring in the cluster.
To balance load across nodes, use a load balancer (LB) to distribute across Queues.
If you'd like to load balance the queue itself take a look at federated queues. They allow you to fetch messages on a downstream queue from an upstream queue.
| RabbitMQ | 28,207,327 | 28 |
My Java application sends messages to RabbitMQ exchange, then exchange redirects messages to binded queue.
I use Springframework AMQP java plugin with RabbitMQ.
The problem: the message comes to the queue, but it stays in the "Unacknowledged" state; it never becomes "Ready".
What could be the reason?
| Just to add my 2 cents on another possible reason for messages staying in an unacknowledged state, even though the consumer makes sure to use the basicAck method:
Sometimes multiple instances of a process with an open RabbitMQ connection stay running, one of which may cause a message to get stuck in an unacknowledged state, preventing another instance of the consumer from ever refetching this message.
You can access the RabbitMQ management console (for a local machine this should be available at localhost:15672), and check whether multiple instances get hold of the channel, or if only a single instance is currently active:
Find the redundant running task (in this case - java) and terminate it. After removing the rogue process, you should see the message jumps to Ready state again.
| RabbitMQ | 11,926,077 | 28 |
Right now I'm looking at Play Framework and like it a lot. One of the parts heavy advertised amongst the features offered in Play is Akka.
In order to better understand Akka and how to use it properly, can you tell me what are the alternatives in other languages or products?
How does RabbitMQ compare to it? Is there a lot of overlap? Is it practical to use them together? In what use cases?
| I use RabbitMQ + Spring AMQP + Guava's EventBus to automatically register Actor-like messengers using Guava's EventBus for pattern matching the received messages.
The similarity between Spring AMQP and Akka is uncanny. Spring AMQP's SimpleMessageListenerContainer + MessageListener is pretty much equivalent to an Actor.
However, for all intents and purposes RabbitMQ is more powerful than Akka in that it has many client implementations in different languages, provides persistence (durable queues), topological routing and pluggable QoS algorithms.
That being said, Akka is way more convenient; in theory Akka can do all of the above, and some people have written extensions, but most just use Akka and then have it deliver the messages over RabbitMQ. Also, Spring AMQP's SimpleMessageListenerContainer is kind of heavy, and it's unclear what would happen if you created a couple of million of them.
In hindsight I would consider using Akka on top of RabbitMQ instead of Spring AMQP for future projects.
| RabbitMQ | 10,268,613 | 28 |
I am looking for a messaging service for a new project that will have to interface some C# applications with some Java applications. I really like RabbitMQ because it seems to have amazing support for both technologies. I see in the RabbitMQ specs that at the moment only AMQP 0-9-1 model is provided.
Is that a show stopper? Should I maybe turn to ActiveMQ, which provides AMQP 1.0?
Thanks for your advice
| Your question is perfectly addressed in the official protocol overview:
AMQP 1.0
Despite the name, AMQP 1.0 is a radically different protocol from AMQP 0-9-1 / 0-9 / 0-8, sharing essentially nothing at the wire level. AMQP 1.0 imposes far fewer semantic requirements; it is therefore easier to add support for AMQP 1.0 to existing brokers. The protocol is substantially more complex than AMQP 0-9-1, and there are fewer client implementations.
RabbitMQ supports AMQP 1.0 via a plugin.
If your clients all implement AMQP 1.0 and it offers significant advantages to you over 0.9.x and you simply cannot live without it and another broker offers better support for 1.0 than RabbitMQ (whose plugin is "experimental" at the time of writing), then yes, maybe you should look at another broker. Otherwise, I doubt it will make a big practical difference for you, and RabbitMQ is working on adding full 1.0 support it seems, so it may be a viable upgrade path in the future. If you yourself cannot point to concrete evidence why 0.9.x alone is a showstopper, I can't either.
| RabbitMQ | 28,402,374 | 27 |
I've been working on getting some distributed tasks working via RabbitMQ.
I spent some time trying to get Celery to do what I wanted and couldn't make it work.
Then I tried using Pika and things just worked, flawlessly, and within minutes.
Is there anything I'm missing out on by using Pika instead of Celery?
| What pika provides is just a small piece of what Celery is doing. Pika is a Python library for interacting with RabbitMQ. RabbitMQ is a message broker; at its core, it just sends messages to/receives messages from queues. It can be used as a task queue, but it could also just be used to pass messages between processes, without actually distributing "work".
Celery implements a distributed task queue, optionally using RabbitMQ as a broker for IPC. Rather than just providing a way of sending messages between processes, it's providing a system for distributing actual tasks/jobs between processes. Here's how Celery's site describes it:
Task queues are used as a mechanism to distribute work across threads or machines.
A task queue’s input is a unit of work, called a task; dedicated worker processes then constantly monitor the queue for new work to perform.
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task a client puts a message on the queue, the broker then delivers the message to a worker.
A Celery system can consist of multiple workers and brokers, giving way to high availability and horizontal scaling.
Celery has a whole bunch of functionality built-in that is outside of pika's scope. You can take a look at the Celery docs to get an idea of the sort of things it can do, but here's an example:
>>> from proj.tasks import add
>>> res = add.chunks(zip(range(100), range(100)), 10)()
>>> res.get()
[[0, 2, 4, 6, 8, 10, 12, 14, 16, 18],
[20, 22, 24, 26, 28, 30, 32, 34, 36, 38],
[40, 42, 44, 46, 48, 50, 52, 54, 56, 58],
[60, 62, 64, 66, 68, 70, 72, 74, 76, 78],
[80, 82, 84, 86, 88, 90, 92, 94, 96, 98],
[100, 102, 104, 106, 108, 110, 112, 114, 116, 118],
[120, 122, 124, 126, 128, 130, 132, 134, 136, 138],
[140, 142, 144, 146, 148, 150, 152, 154, 156, 158],
[160, 162, 164, 166, 168, 170, 172, 174, 176, 178],
[180, 182, 184, 186, 188, 190, 192, 194, 196, 198]]
This code wants to add x + y for each pair produced by zip(range(100), range(100)), i.e. 0+0, 1+1, 2+2, and so on. It does this by taking a task called add, which adds two numbers, splitting the work into chunks of 10, and distributing each chunk to as many Celery workers as are available. Each worker will run add on its 10-item chunk until all the work is complete. Then the results are gathered up by the res.get() call. I'm sure you can imagine a way to do this using pika, but I'm sure you can also imagine how much work would be required. You're getting that functionality out of the box with Celery.
You can certainly use pika to implement a distributed task queue if you want, especially if you have a fairly simple use-case. Celery is just providing a "batteries included" solution for task scheduling, management, etc. that you'll have to manually implement if you decide you want them with your pika solution.
| RabbitMQ | 23,766,658 | 27 |
I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I am trying to do the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I am getting a connection refused error:
consumer: Cannot connect to amqp://[email protected]:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
| [edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
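With both containers attached to the same network, the celery container can reach the broker simply by the rabbitmq container name. Below is a minimal sketch of the Celery side; the broker URL and credentials are assumptions.

```python
# tasks.py inside the "celery" container
from celery import Celery

# "rabbitmq" resolves to the broker container via Docker's user-defined network DNS
app = Celery("tasks", broker="amqp://guest:guest@rabbitmq:5672//")

@app.task
def add(x, y):
    return x + y
```

You would then start a worker in that container with something like celery -A tasks worker --loglevel=info. Note that on newer RabbitMQ versions the default guest user is only allowed to connect from localhost, so when connecting from another container you may need to create a user first (or, with the official rabbitmq image, set RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS).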
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and Port of the rabbitmq container because docker automatically added some environmental variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
| RabbitMQ | 18,460,016 | 27 |
In my (limited) experience with rabbit-mq, if you create a new listener for a queue that doesn't exist yet, the queue is automatically created. I'm trying to use the Spring AMQP project with rabbit-mq to set up a listener, and I'm getting an error instead. This is my xml config:
<rabbit:connection-factory id="rabbitConnectionFactory" host="172.16.45.1" username="test" password="password" />
<rabbit:listener-container connection-factory="rabbitConnectionFactory" >
<rabbit:listener ref="testQueueListener" queue-names="test" />
</rabbit:listener-container>
<bean id="testQueueListener" class="com.levelsbeyond.rabbit.TestQueueListener">
</bean>
I get this in my RabbitMq logs:
=ERROR REPORT==== 3-May-2013::23:17:24 ===
connection <0.1652.0>, channel 1 - soft error:
{amqp_error,not_found,"no queue 'test' in vhost '/'",'queue.declare'}
And a similar error from AMQP:
2013-05-03 23:17:24,059 ERROR [org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer] (SimpleAsyncTaskExecutor-1) - Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.FatalListenerStartupException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it.
It would seem from the stack trace that the queue is getting created in a "passive" mode- Can anyone point out how I would create the queue not using the passive mode so I don't see this error? Or am I missing something else?
| Older thread, but this still shows up pretty high on Google, so here's some newer information:
2015-11-23
Since Spring 4.2.x with Spring-Messaging and Spring-Amqp 1.4.5.RELEASE and Spring-Rabbit 1.4.5.RELEASE, declaring exchanges, queues and bindings has become very simple through an @Configuration class some annotations:
@EnableRabbit
@Configuration
@PropertySources({
@PropertySource("classpath:rabbitMq.properties")
})
public class RabbitMqConfig {
private static final Logger logger = LoggerFactory.getLogger(RabbitMqConfig.class);
@Value("${rabbitmq.host}")
private String host;
@Value("${rabbitmq.port:5672}")
private int port;
@Value("${rabbitmq.username}")
private String username;
@Value("${rabbitmq.password}")
private String password;
@Bean
public ConnectionFactory connectionFactory() {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory(host, port);
connectionFactory.setUsername(username);
connectionFactory.setPassword(password);
logger.info("Creating connection factory with: " + username + "@" + host + ":" + port);
return connectionFactory;
}
/**
* Required for executing adminstration functions against an AMQP Broker
*/
@Bean
public AmqpAdmin amqpAdmin() {
return new RabbitAdmin(connectionFactory());
}
/**
* This queue will be declared. This means it will be created if it does not exist. Once declared, you can do something
* like the following:
*
* @RabbitListener(queues = "#{@myDurableQueue}")
* @Transactional
* public void handleMyDurableQueueMessage(CustomDurableDto myMessage) {
* // Anything you want! This can also return a non-void which will queue it back in to the queue attached to @RabbitListener
* }
*/
@Bean
public Queue myDurableQueue() {
// This queue has the following properties:
// name: my_durable
// durable: true
// exclusive: false
// auto_delete: false
return new Queue("my_durable", true, false, false);
}
/**
* The following is a complete declaration of an exchange, a queue and a exchange-queue binding
*/
@Bean
public TopicExchange emailExchange() {
return new TopicExchange("email", true, false);
}
@Bean
public Queue inboundEmailQueue() {
return new Queue("email_inbound", true, false, false);
}
@Bean
public Binding inboundEmailExchangeBinding() {
// Important part is the routing key -- this is just an example
return BindingBuilder.bind(inboundEmailQueue()).to(emailExchange()).with("from.*");
}
}
Some sources and documentation to help:
Spring annotations
Declaring/configuration RabbitMQ for queue/binding support
Direct exchange binding (for when routing key doesn't matter)
Note: Looks like I missed a version -- starting with Spring AMQP 1.5, things get even easier as you can declare the full binding right at the listener!
| RabbitMQ | 16,370,911 | 27 |
Can you recommend what Python library to use for accessing AMQP (RabbitMQ)? From my research pika seems to be the preferred one.
| My own research led me to believe that the right library to use would be Kombu, as this is also what Celery (mentioned by @SteveMc) has transitioned to. I am also using RabbitMQ and have used Kombu with the default amqplib backend successfully.
Kombu also supports other transports behind the same API. Useful if you need to replace AMQP or add something like redis to the mix. Haven't tried that though.
Sidenote: Kombu currently does not support the latest pika release (should you rely on it for some reason). Only 5.2.0 is currently supported; this bit me a while back.
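To get a feel for Kombu's higher-level API, a minimal publish/consume round trip can look roughly like this (a sketch assuming a local broker with default credentials and an illustrative queue name):

```python
from kombu import Connection

with Connection("amqp://guest:guest@localhost:5672//") as conn:
    queue = conn.SimpleQueue("hello")        # declares the queue if needed
    queue.put({"greeting": "hi"})            # payload is serialized by kombu
    message = queue.get(block=True, timeout=5)
    print(message.payload)                   # -> {'greeting': 'hi'}
    message.ack()
    queue.close()
```

Swapping the broker URL (for example to redis://localhost:6379/) is essentially all it takes to move the same code to another transport.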
| RabbitMQ | 5,031,606 | 27 |
I have a reactor that fetches messages from a RabbitMQ broker and triggers worker methods to process these messages in a process pool.
This is implemented using python asyncio, loop.run_in_executor() and concurrent.futures.ProcessPoolExecutor.
Now I want to access the database in the worker methods using SQLAlchemy. Mostly the processing will be very straightforward and quick CRUD operations.
The reactor will process 10-50 messages per second in the beginning, so it is not acceptable to open a new database connection for every request. Rather I would like to maintain one persistent connection per process.
My questions are: How can I do this? Can I just store them in a global variable? Will the SQA connection pool handle this for me? How to clean up when the reactor stops?
[Update]
The database is MySQL with InnoDB.
Why choosing this pattern with a process pool?
The current implementation uses a different pattern where each consumer runs in its own thread. Somehow this does not work very well. There are already about 200 consumers each running in their own thread, and the system is growing quickly. To scale better, the idea was to separate concerns and to consume messages in an I/O loop and delegate the processing to a pool. Of course, the performance of the whole system is mainly I/O bound. However, CPU is an issue when processing large result sets.
The other reason was "ease of use." While the connection handling and consumption of messages is implemented asynchronously, the code in the worker can be synchronous and simple.
Soon it became evident that accessing remote systems through persistent network connections from within the worker are an issue. This is what the CommunicationChannels are for: Inside the worker, I can grant requests to the message bus through these channels.
One of my current ideas is to handle DB access in a similar way: Pass statements through a queue to the event loop where they are sent to the DB. However, I have no idea how to do this with SQLAlchemy.
Where would be the entry point?
Objects need to be pickled when they are passed through a queue. How do I get such an object from an SQA query?
The communication with the database has to work asynchronously in order not to block the event loop. Can I use e.g. aiomysql as a database driver for SQA?
| Your requirement of one database connection per process-pool process can easily be satisfied if some care is taken in how you instantiate the session in the worker processes, assuming you are working with the ORM.
A simple solution would be to have a global session which you reuse across requests:
# db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("connection_uri", pool_size=1, max_overflow=0)
DBSession = scoped_session(sessionmaker(bind=engine))
And on the worker task:
# task.py
from db import engine, DBSession
def task():
DBSession.begin() # each task will get its own transaction over the global connection
...
DBSession.query(...)
...
DBSession.close() # cleanup on task end
Arguments pool_size and max_overflow customize the default QueuePool used by create_engine. pool_size will make sure your process only keeps one connection alive per process in the process pool.
If you want it to reconnect you can use DBSession.remove() which will remove the session from the registry and will make it reconnect at the next DBSession usage. You can also use the recycle argument of Pool to make the connection reconnect after the specified amount of time.
During development/debugging you can use AssertionPool, which will raise an exception if more than one connection is checked out from the pool; see switching pool implementations on how to do that.
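For completeness, here is a rough sketch of how the reactor side could hand tasks to the pool with run_in_executor, using the task() function from task.py above. The pool size and the loop driving it are illustrative, and the actual AMQP consumption is omitted.

```python
# reactor.py -- sketch only
import asyncio
from concurrent.futures import ProcessPoolExecutor

from task import task  # the worker function defined above

async def handle_message(loop, pool):
    # called once per consumed AMQP message in the real reactor
    await loop.run_in_executor(pool, task)

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=4) as pool:
        # stand-in for the consume loop: process ten "messages"
        await asyncio.gather(*(handle_message(loop, pool) for _ in range(10)))

if __name__ == "__main__":
    asyncio.run(main())
```

Because each pool worker is a separate process, importing db.py there creates one engine (and thus one pooled connection) per worker, which is exactly the one-connection-per-process behavior described above.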
| RabbitMQ | 39,613,476 | 26 |
I want to explicitly revoke a task from celery. This is how I'm currently doing:-
from celery.task.control import revoke
revoke(task_id, terminate=True)
where task_id is a string (I have also tried converting it into a UUID, e.g. uuid.UUID(task_id).hex).
After the above procedure, when I start Celery again with celery worker -A proj, it still consumes the same message and starts processing it. Why?
When viewed via Flower, the message is still there in the broker section. How do I delete the message so that it can't be consumed again?
| How does revoke works?
When calling the revoke method, the task doesn't get deleted from the queue immediately; all it does is tell Celery (not your broker!) to save the task_id in an in-memory set (look here if you like reading source code like me).
When the task gets to the top of the queue, Celery will check whether it is in the revoked set; if it is, it won't execute it.
It works this way to avoid an O(n) search for each revoke call, whereas checking if the task_id is in the in-memory set is just O(1).
Why after restarting celery, your revoked tasks executed?
Understanding how things work, you now know that the set is just a normal Python set kept in memory; that means when you restart, you lose this set, but the task is (of course) persistent, and when the task's turn comes, it will be executed as normal.
What can you do?
You will need a persistent set; this is done by starting your worker like this:
celery worker -A proj --statedb=/var/run/celery/worker.state
This will save the set on the filesystem.
References:
Celery source code of the in-memory set
Revoke doc
Persistent revokes docs
| RabbitMQ | 39,191,238 | 26 |
Imagine the situation:
var txn = new DatabaseTransaction();
var entry = txn.Database.Load<Entry>(id);
entry.Token = "123";
txn.Database.Update(entry);
PublishRabbitMqMessage(new EntryUpdatedMessage { ID = entry.ID });
// A bit more of processing
txn.Commit();
Now a consumer of EntryUpdatedMessage can potentially get this message before the transaction txn is committed and therefore will not be able to see the update.
Now, I know that RabbitMQ does support transactions by itself, but we cannot really use them because we create a new IModel for each publish and having a per-thread model is really cumbersome in our scenario (ASP.NET web application).
I thought of having a list of messages due to be published when a DB transaction is committed, but that's a really smelly solution.
What is the correct way of handling this?
| RabbitMQ encourages you to use publisher confirms rather than transactions. Transactions do not perform well.
In any case, transactions don't usually work very well with a service oriented architecture. It's better to adopt an 'eventually consistent' approach, where failure can be retried at a later date and duplicate idempotent messages are ignored.
In your example I would do the database update and commit it before publishing the message. When the publisher confirm returns, I would update a field in the database record to indicate that the message had been sent. You can then have a sweeper process come along later, check for unsent messages and send them on their way. If the message did get through, but for some reason the confirm, or subsequent database write failed, you will get a duplicate message. But that won't matter, because you've designed your messages to be idempotent.
| RabbitMQ | 17,810,112 | 26 |
It seems like I have a problem with the rabbitmq and rabbitmq-management Docker image on my Windows machine running Docker Desktop.
When trying to run it, the following log comes up before it shuts down:
21:01:21.726 [error] Failed to write to cookie file '/var/lib/rabbitmq/.erlang.cookie': enospc
21:01:22.355 [error] Too short cookie string
21:01:22.356 [error] Too short cookie string
21:01:23.161 [error] Too short cookie string
21:01:23.162 [error] Too short cookie string
21:01:23.783 [error] Too short cookie string
21:01:23.784 [error] Too short cookie string
21:01:24.405 [error] Too short cookie string
21:01:24.406 [error] Too short cookie string
21:01:25.027 [error] Too short cookie string
21:01:25.028 [error] Too short cookie string
21:01:25.661 [error] Too short cookie string
21:01:25.662 [error] Too short cookie string
21:01:26.281 [error] Too short cookie string
21:01:26.282 [error] Too short cookie string
21:01:26.910 [error] Too short cookie string
21:01:26.911 [error] Too short cookie string
21:01:27.533 [error] Too short cookie string
21:01:27.534 [error] Too short cookie string
21:01:28.161 [error] Too short cookie string
Distribution failed: {{:shutdown, {:failed_to_start_child, :auth, {'Too short cookie string', [{:auth, :init_cookie, 0, [file: 'auth.erl', line: 290]}, {:auth, :init, 1, [file: 'auth.erl', line: 144]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 417]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 385]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}]}}}, {:child, :undefined, :net_sup_dynamic, {:erl_distribution, :start_link, [[:"rabbitmqcli-47-rabbit@90cc77cefcb8", :shortnames, 15000], false, :net_sup_dynamic]}, :permanent, 1000, :supervisor, [:erl_distribution]}}
Configuring logger redirection
21:01:29.717 [error]
21:01:29.715 [error] Too short cookie string
21:01:29.715 [error] Supervisor net_sup had child auth started with auth:start_link() at undefined exit with reason "Too short cookie string" in auth:init_cookie/0 line 290 in context start_error
21:01:29.715 [error] CRASH REPORT Process <0.201.0> with 0 neighbours crashed with reason: "Too short cookie string" in auth:init_cookie/0 line 290
21:01:29.719 [error] BOOT FAILED
BOOT FAILED
21:01:29.719 [error] ===========
===========
21:01:29.719 [error] Exception during startup:
Exception during startup:
21:01:29.720 [error]
21:01:29.720 [error] supervisor:children_map/4 line 1171
supervisor:children_map/4 line 1171
supervisor:'-start_children/2-fun-0-'/3 line 355
21:01:29.721 [error] supervisor:'-start_children/2-fun-0-'/3 line 355
21:01:29.721 [error] supervisor:do_start_child/2 line 371
supervisor:do_start_child/2 line 371
21:01:29.721 [error] supervisor:do_start_child_i/3 line 385
supervisor:do_start_child_i/3 line 385
21:01:29.721 [error] rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
21:01:29.721 [error] rabbit_prelaunch:do_run/0 line 111
rabbit_prelaunch:do_run/0 line 111
21:01:29.722 [error] rabbit_prelaunch_dist:setup/1 line 15
rabbit_prelaunch_dist:setup/1 line 15
rabbit_prelaunch_dist:duplicate_node_check/1 line 51
21:01:29.722 [error] rabbit_prelaunch_dist:duplicate_node_check/1 line 51
21:01:29.722 [error] error:{badmatch,
error:{badmatch,
{error,
21:01:29.722 [error] {error,
21:01:29.722 [error] {{shutdown,
{{shutdown,
21:01:29.722 [error] {failed_to_start_child,auth,
{failed_to_start_child,auth,
21:01:29.723 [error] {"Too short cookie string",
{"Too short cookie string",
21:01:29.723 [error] [{auth,init_cookie,0,[{file,"auth.erl"},{line,290}]},
[{auth,init_cookie,0,[{file,"auth.erl"},{line,290}]},
21:01:29.723 [error] {auth,init,1,[{file,"auth.erl"},{line,144}]},
{auth,init,1,[{file,"auth.erl"},{line,144}]},
21:01:29.723 [error] {gen_server,init_it,2,
{gen_server,init_it,2,
[{file,"gen_server.erl"},{line,417}]},
21:01:29.723 [error] [{file,"gen_server.erl"},{line,417}]},
21:01:29.724 [error] {gen_server,init_it,6,
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,385}]},
21:01:29.724 [error] [{file,"gen_server.erl"},{line,385}]},
21:01:29.724 [error] {proc_lib,init_p_do_apply,3,
{proc_lib,init_p_do_apply,3,
21:01:29.724 [error] [{file,"proc_lib.erl"},{line,226}]}]}}},
[{file,"proc_lib.erl"},{line,226}]}]}}},
21:01:29.724 [error] {child,undefined,net_sup_dynamic,
{child,undefined,net_sup_dynamic,
21:01:29.725 [error] {erl_distribution,start_link,
{erl_distribution,start_link,
21:01:29.725 [error] [[rabbit_prelaunch_510@localhost,shortnames],
[[rabbit_prelaunch_510@localhost,shortnames],
21:01:29.725 [error] false,net_sup_dynamic]},
false,net_sup_dynamic]},
21:01:29.725 [error] permanent,1000,supervisor,
permanent,1000,supervisor,
21:01:29.725 [error] [erl_distribution]}}}}
21:01:29.726 [error]
[erl_distribution]}}}}
21:01:30.726 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {badmatch,{error,{{shutdown,{failed_to_start_child,auth,{"Too short cookie string",[{auth,init_cookie,0,[{file,"auth.erl"},{line,290}]},{auth,init,1,[{file,"auth.erl"},{line,144}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,417}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,385}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}}},{child,undefined,net_sup_dynamic,{erl_distribution,start_link,[[rabbit_prelaunch_510@localhost,shortnames],false,net_sup_dynamic]},...}}}} in context start_error
21:01:30.726 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{badmatch,{error,{{shutdown,{failed_to_start_child,auth,{"Too short cookie string",[{auth,init_cookie,0,[{file,"auth.erl"},{line,290}]},{auth,init,1,[{file,"auth.erl"},{line,144}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,417}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,385}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}}},{child,undefined,net_sup_dynamic,{erl_distribution,start_link,[[rabbit_prelaunch_510@localhost,...],...]},...}}}}}},...} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{badmatch,{error,{{shutdown,{failed_to_start_child,auth,{\"Too short cookie string\",[{auth,init_cookie,0,[{file,\"auth.erl\"},{line,290}]},{auth,init,1,[{file,\"auth.erl\"},{line,144}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,417}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,385}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,226}]}]}}},{child,undefined,net_sup_dynamic,{erl_distribution,start_link,[[rabbit_prelaunch_510@localhost,shortnames],false,net_sup_dynamic]},permanent,1000,supervisor,[erl_distribution]}}}}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{badmatch,{error,{{shutdown,{failed_to_start_child,auth,{"Too
Crash dump is being written to: erl_crash.dump...
I've been using this image for months now without any problems, but all of sudden it doesn't work anymore.
I also tried running this on my raspberry pi. Turns out it works there, so it has to be more of a local thing for me, which is kind of weird as docker is basically meant to avoid these problems.
I also tried setting the RABBITMQ_ERLANG_COOKIE environment variable to a long name, but with no success. Any ideas?
| I had exactly the same error and followed the advice of Owen Brown. Unfortunately, deleting the rabbitmq images alone did not solve my problem, but also deleting all other redundant images (especially the large ones) did it for me.
In case you are wondering how large all your images are, you can check these statistics with:
docker system df
Edit:
As I encountered the same issue again just a couple of hours later, I researched again and found this answer, which resolved my troubles.
More important than deleting images is the deletion of volumes using:
docker volume rm $(docker volume ls -f dangling=true -q)   # removes all dangling volumes
| RabbitMQ | 63,384,705 | 25 |
My team wants to move to a microservices architecture. Currently we are using Redis Pub/Sub as the message broker for some legacy parts of our system. My colleagues think that it is natural to continue using Redis as a service bus, as they don't want to spend their time studying a new product. But in my opinion RabbitMQ (especially with MassTransit) is a better approach for microservices. Could you please compare Redis Pub/Sub with RabbitMQ and give me some arguments for Rabbit?
| Redis is a fast in-memory key-value store with optional persistence. The pub/sub feature of Redis is a marginal case for Redis as a product.
RabbitMQ is the message broker that does nothing else. It is optimized for reliable delivery of messages, both in command style (send to an endpoint exchange/queue) and publish-subscribe. RabbitMQ also includes the management plugin that delivers a helpful API to monitor the broker status, check the queues and so on.
Dealing with Redis pub/sub at the low level of a Redis client can be a very painful experience. You could use a library like ServiceStack that has a higher-level abstraction to make it more manageable.
However, MassTransit adds a lot of value compared to raw messaging over RMQ. As soon as you start doing stuff for real, no matter what transport you decide to use, you will hit the typical issues that are associated with messaging, like handling replies, scheduling, long-running processes, re-delivery, dead-letter queues, and poison queues. MassTransit does it all for you. Neither the Redis nor the RMQ client would deliver any of those. If your team wants to spend time dealing with those concerns in their own code, that's more like reinventing the wheel. Using the argument of "not willing to learn a new product" in this context sounds a bit weird, since, instead of delivering value for the product, developers want to spend their time dealing with infrastructure concerns.
| RabbitMQ | 52,592,796 | 25 |
I am using amqplib in Node.js, and I am not clear about the best practices in my code.
Basically, my current code calls the amqp.connect() when the Node server starts up, and then uses a different channel for each producer and each consumer, never actually closing any of them. I'd like to know if that makes any sense, or should I create the channel, publish and close it every time I want to publish a message. And what about the connection? Is that a "good practice" to connect once, and then keep it open for the lifetime of my server?
On the Consumer side - can I use a single connection and a single channel to listen on multiple queues?
Thank you for any clarifications
| In general, it's not a good practice to open and close connections and channels per message. Connections are long lived and it takes resources to keep opening and closing them. For channels, they share the TCP connection with the connection so they are more lightweight, but they will still consume memory and definitely should not be left open after done using them.
It is recommended to have a channel per thread, and a channel per consumer. But for publishing it is totally ok to use the same channel. But keep in mind that depending on the operations, the protocol might kill the channel in certain situations (e.g. queue existence check), so prepare for that. There is also soft (configurable) and hard (usually 65535) limits on the maximum number of channels on many of the client implementations.
So to sum up, depending on your use case use one to a few connections, open channels when you need them and share them when it makes sense, but remember to close them when done.
The rabbitmq documentation explains the nature of the connections and channels (end of the document). And the accepted answer on this question has good information on the subject.
| RabbitMQ | 44,358,076 | 25 |
Background
I am making a publish/subscribe typical application where a publisher sends messages to a consumer.
The publisher and the consumer are on different machines and the connection between them can break occasionally.
Objective
The goal here is to make sure that no matter what happens to the connection, or to the machines themselves, a message sent by a publisher is always received by the consumer.
Ordering of messages is not a must.
Problem
According to my research, RabbitMQ is the right choice for this scenario:
Redis Vs RabbitMQ as a data broker/messaging system in between Logstash and elasticsearch
However, although RabbitMQ has a tutorial about publish and subscribe, this tutorial does not introduce us to persistent queues, nor does it mention confirms, which I believe are the key to making sure messages are delivered.
On the other hand, Redis is also capable of doing this:
http://abhinavsingh.com/customizing-redis-pubsub-for-message-persistence-part-2/
but I couldn't find any official tutorials or examples, and my current understanding leads me to believe that persistent queues and message confirms must be implemented by us, as Redis is mainly an in-memory datastore instead of a message broker like RabbitMQ.
Questions
For this use case, which solution would be the easiest to implement? (Redis solution or RabbitMQ solution?)
Please provide a link to an example with what you think would be best!
| Background
I originally wanted publish and subscribe with message and queue persistence.
This, in theory, does not exactly fit publish and subscribe:
this pattern doesn't care if the messages are received or not. The publisher simply fans out messages and if there are any subscribers listening, good, otherwise it doesn't care.
Indeed, looking at my needs I would need more of a Work Queue pattern, or even an RPC pattern.
Analysis
People say both should be easy, but that really is subjective.
RabbitMQ has better official documentation overall, with clear examples in most languages, while Redis information is mainly in third-party blogs and in sparse GitHub repos, which makes it considerably harder to find.
As for the examples, RabbitMQ has two examples that clearly answer my questions:
Work queues
RPC example
By mixing the two I was able to have a publisher send to several consumers reliable messages - even if one of them fails. Messages are not lost, nor forgotten.
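As a rough illustration (not my exact code), a worker following that pattern with pika looks something like the sketch below: a durable queue, fair dispatch, and manual acknowledgements, so a message delivered to a worker that dies before acking is redelivered to another worker. The queue name and handler are placeholders.

```python
import pika

def process(body: bytes) -> None:
    # placeholder for the real (idempotent) handler
    print("processing", body)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work", durable=True)   # queue survives broker restarts
channel.basic_qos(prefetch_count=1)                 # fair dispatch between workers

def on_message(ch, method, properties, body):
    process(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after successful work

channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()
```

For the messages themselves to survive a broker restart, the publisher also has to mark them persistent (delivery_mode=2) and, ideally, use publisher confirms.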
Downfall of rabbitMQ:
The greatest problem with this approach is that if a consumer/worker crashes, you need to define the logic yourself to make sure that tasks are not lost. This happens because once a task is completed, following the RPC pattern with durable queues from Work Queues, the server will keep sending messages to the worker until it comes back up again. But the worker doesn't know if it has already read the reply from the server or not, so it may end up taking several ACKs from the server. To fix this, each worker message needs to have an ID that you save to disk (in case of failure), or the requests must be idempotent.
Another issue is that if the connection is lost, the clients blow up with errors as they cannot connect. This is also something you must prepare for in advance.
As for redis, it has a good example of durable queues in this blog:
https://danielkokott.wordpress.com/2015/02/14/redis-reliable-queue-pattern/
Which follows the official recommendation. You can check the github repo for more info.
Downfall of redis:
As with rabbitmq, you also need to handle worker crashes yourself, otherwise tasks in progress will be lost.
You have to do polling. Each consumer needs to ask the producer if there is anything new, every X seconds.
This is, in my opinion, worse than RabbitMQ.
Conclusion
I ended up going with RabbitMQ for the following reasons:
More robust official online documentation, with examples.
No need for consumers to poll the producer.
Error handling is just as simple as in redis.
With this in mind, for this specific case, I am confident in saying that Redis is a worse RabbitMQ in this scenario.
Hope it helps.
| RabbitMQ | 43,777,807 | 25 |
I'm trying to install and be able to run rabbitmqadmin on a linux machine. Following the instructions described here do not help.
After downloading the file linked, it prompts to copy the file (which looks like a python script) into /usr/local/bin.
Trying to run it by simply invoking rabbitmqadmin results in rabbitmqadmin: command not found. There seems to be no information anywhere about how to get this to work and assumes that all the steps listed on the site should work for all. It seems odd that simply copying a python script to the bin folder should allow it to become a recognised command without having to invoke the python interpreter every time.
Any help is appreciated.
| I spent several hours to figure out this, use rabbitmqadmin on linux environment, Finally below steps solve my issue.
On my ubuntu server, python3 was installed, I checked it using below command,
python3 -V
Step 1: download the python script to your linux server
wget https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.7.8/bin/rabbitmqadmin
Step2: change the permission
chmod 777 rabbitmqadmin
Step3: change the header of the script as below(first line)
#!/usr/bin/env python3
Thant's all, Now you can run below commands,
To list down queues,
./rabbitmqadmin -f tsv -q list queues
To Delete ques,
./rabbitmqadmin delete queue name=name_of_queue
To add binding between exchange and queue
./rabbitmqadmin declare binding source="exchangename" destination_type="queue" destination="queuename" routing_key="routingkey"
| RabbitMQ | 36,336,071 | 25 |
All of a sudden when I try to access RabbitMQ it only displays this on screen:
undefined: There is no template at js/tmpl/login.ejs
Any help will be appreciated.
UPDATE:
Now it is showing browser default error:
Connection Refused
| The problem was solved by restarting the Linux server, as RabbitMQ commands were hanging and required a forced stop.
Hope this helps someone.
| RabbitMQ | 33,935,430 | 25 |
I'm using RabbitMQ in C# with the EasyNetQ library. I'm using a pub/sub pattern here. I still have a few issues that I hope anyone can help me with:
When there's an error while consuming a message, it's automatically moved to an error queue. How can I implement retries (so that it's placed back on the originating queue, and when it fails to process X times, it's moved to a dead letter queue)?
As far as I can see there's always 1 error queue that's used to dump messages from all other queues. How can I have 1 error queue per type, so that each queue has its own associated error queue?
How can I easily retry messages that are in an error queue? I tried Hosepipe, but it justs republishes the messages to the error queue instead of the originating queue. I don't really like this option either because I don't want to be fiddling around in a console. Preferably I'd just program against the error queue.
Anyone?
| The problem you are running into with EasyNetQ/RabbitMQ is that it's much more "raw" when compared to other messaging services like SQS or Azure Service Bus/Queues, but I'll do my best to point you in the right direction.
Question 1.
This will be on you to do. The simplest way is that you can No-Ack a message in RabbitMQ/EasyNetQ, and it will be placed at the head of the queue for you to retry. This is not really advisable because it will be retried almost immediately (With no time delay), and will also block other messages from being processed (If you have a single subscriber with a prefetch count of 1).
I've seen other implementations of using a "MessageEnvelope". So a wrapper class that when a message fails, you increment a retry variable on the MessageEnvelope and redeliver the message back onto the queue. YOU would have to do this and write the wrapping code around your message handlers, it would not be a function of EasyNetQ.
Using the above, I've also seen people use envelopes, but allow the message to be dead lettered. Once it's on the dead letter queue, there is another application/worker reading items from the dead letter queue.
All of these approaches above have a small issue in that there isn't really any nice way to have a logarithmic/exponential/any sort of increasing delay in processing the message. You can "hold" the message in code for some time before returning it to the queue, but it's not a nice way around.
Out of all of these options, your own custom application reading the dead letter queue and deciding whether to reroute the message based on an envelope that contains the retry count is probably the best way.
Question 2.
You can specify a dead letter exchange per queue using the advanced API. (https://github.com/EasyNetQ/EasyNetQ/wiki/The-Advanced-API#declaring-queues). However this means you will have to use the advanced API pretty much everywhere as using the simple IBus implementation of subscribe/publish looks for queues that are named based on both the message type and subscriber name. Using a custom declare of queue means you are going to be handling the naming of your queues yourself, which means when you subscribe, you will need to know the name of what you want etc. No more auto subscribing for you!
Question 3
An Error Queue/Dead Letter Queue is just another queue. You can listen to this queue and do what you need to do with it. But there is not really any out of the box solution that sounds like it would fit your needs.
| RabbitMQ | 30,914,640 | 25 |
I'm a newcomer to Celery, and I'm trying to integrate this task queue into my project, but I still can't figure out how Celery handles failed tasks; I'd like to keep all of those in an AMQP dead-letter queue.
According to the docs here, it seems that raising Reject in a Task with acks_late enabled produces the same effect as acking the message, and then there are a few words about dead-letter queues.
So I added a custom default queue to my celery config
celery_app.conf.update(CELERY_ACCEPT_CONTENT=['application/json'],
CELERY_TASK_SERIALIZER='json',
CELERY_QUEUES=[CELERY_QUEUE,
CELERY_DLX_QUEUE],
CELERY_DEFAULT_QUEUE=CELERY_QUEUE_NAME,
CELERY_DEFAULT_EXCHANGE=CELERY_EXCHANGE
)
and my kombu objects are looking like
CELERY_DLX_EXCHANGE = Exchange(CELERY_DLX_EXCHANGE_NAME, type='direct')
CELERY_DLX_QUEUE = Queue(CELERY_DLX_QUEUE_NAME, exchange=DLX_EXCHANGE,
routing_key='celery-dlq')
DEAD_LETTER_CELERY_OPTIONS = {'x-dead-letter-exchange': CELERY_DLX_EXCHANGE_NAME,
'x-dead-letter-routing-key': 'celery-dlq'}
CELERY_EXCHANGE = Exchange(CELERY_EXCHANGE_NAME,
arguments=DEAD_LETTER_CELERY_OPTIONS,
type='direct')
CELERY_QUEUE = Queue(CELERY_QUEUE_NAME,
exchange=CELERY_EXCHANGE,
routing_key='celery-q')
And the task I'm executing is:
class HookTask(Task):
acks_late = True
def run(self, ctx, data):
logger.info('{0} starting {1.name}[{1.request.id}]'.format(self.__class__.__name__.upper(), self))
self.hook_process(ctx, data)
def on_failure(self, exc, task_id, args, kwargs, einfo):
logger.error('task_id %s failed, message: %s', task_id, exc.message)
def hook_process(self, t_ctx, body):
# Build context
ctx = TaskContext(self.request, t_ctx)
logger.info('Task_id: %s, handling request %s', ctx.task_id, ctx.req_id)
raise Reject('no_reason', requeue=False)
I ran a little test with it, but saw no results when raising a Reject exception.
Now I'm wondering if it's a good idea to force the failed task to be routed to the dead-letter queue by overriding Task.on_failure. I think this would work, but I also think that this solution is not so clean, because according to what I read, Celery should do this all by itself.
Thanks for your help.
| I think you should not add arguments=DEAD_LETTER_CELERY_OPTIONS in CELERY_EXCHANGE. You should add it to CELERY_QUEUE with queue_arguments=DEAD_LETTER_CELERY_OPTIONS.
The following example is what I did and it works fine:
from celery import Celery
from kombu import Exchange, Queue
from celery.exceptions import Reject

app = Celery(
    'tasks',
    broker='amqp://guest@localhost:5672//',
    backend='redis://localhost:6379/0')

dead_letter_queue_option = {
    'x-dead-letter-exchange': 'dlx',
    'x-dead-letter-routing-key': 'dead_letter'
}

default_exchange = Exchange('default', type='direct')
dlx_exchange = Exchange('dlx', type='direct')

default_queue = Queue(
    'default',
    default_exchange,
    routing_key='default',
    queue_arguments=dead_letter_queue_option)

dead_letter_queue = Queue(
    'dead_letter', dlx_exchange, routing_key='dead_letter')

app.conf.task_queues = (default_queue, dead_letter_queue)
app.conf.task_default_queue = 'default'
app.conf.task_default_exchange = 'default'
app.conf.task_default_routing_key = 'default'


@app.task
def add(x, y):
    return x + y


@app.task(acks_late=True)
def div(x, y):
    try:
        z = x / y
        return z
    except ZeroDivisionError as exc:
        raise Reject(exc, requeue=False)
After the queues are created, you should see DLX (dead-letter-exchange) and DLK (dead-letter-routing-key) labels in the 'Features' column of the management UI.
NOTE: You should delete the previous queues if you have already created them in RabbitMQ, because Celery won't delete an existing queue and re-create a new one.
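As a quick, hypothetical way to exercise this setup (assuming the snippet above lives in tasks.py and a worker is running against it), triggering a failing task should land the rejected message in the dead_letter queue:
from tasks import div  # assumes the module above is named tasks.py

div.delay(1, 0)  # ZeroDivisionError -> Reject(requeue=False) -> dead-lettered to 'dead_letter'
The message should then show up in the dead_letter queue in the management UI.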
| RabbitMQ | 38,111,122 | 24 |
The following API call to RabbitMQ:
http -a USER:PASS localhost:15001/api/queues/
Returns a list of queues:
[
{
...
"messages_unacknowledged_ram": 0,
"name": "foo_queue",
"node": "rabbit@queue-monster-01",
"policy": "",
"state": "running",
"vhost": "/"
},
...
]
Note that the vhost parameter is /.
How do I use a / vhost for the /api/queues/vhost/name call, which returns the details for a specific queue?
I have tried:
localhost:15001/api/queues/\//foo_queue
localhost:15001/api/queues///foo_queue
But both failed with 404 Object Not Found:
| URL Encoding did the trick. The URL should be:
localhost:15001/api/queues/%2F/foo_queue
⬆⬆⬆
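Using the same httpie client as in the question, the lookup for that queue becomes (reusing the credentials and port 15001 from above):
http -a USER:PASS localhost:15001/api/queues/%2F/foo_queue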
For the record, I think that REST resources should not be named /, especially not by default.
| RabbitMQ | 33,119,611 | 24 |
So I am using rabbitmqs http api to do some very basic actions in rabbit. It works great in most situations but I am having an issue figuring out how to use it to publish a message to the default rabbitmq exchange. This exchange is always present, cannot be deleted and has a binding to every queue with a routing key equal to the queue name.
My problem is that this exchange does not have a name, or rather, its name is an empty string "". And the URL I have to use to publish a message with the HTTP API includes the name of the exchange.
http://localhost:15672/api/exchanges/vhost/name/publish
(Source: http://hg.rabbitmq.com/rabbitmq-management/raw-file/rabbitmq_v3_3_4/priv/www/api/index.html)
The same article mentions that in order to use the default vhost, which has a name of "/", you must use %2f in place of the vhost name. This makes me think there should be a similar way to represent the default exchange in the URL.
I tried a few different things and none of them worked:
/api/exchanges/vhost//publish
/api/exchanges/vhost/""/publish
/api/exchanges/vhost/''/publish
/api/exchanges/vhost/ /publish
/api/exchanges/vhost/%00/publish
I'm sure I can't be the only person that has run into this issue. Any help would be much appreciated.
thanks,
Tom
| This is the way to publish a message to amq.default:
http://localhost:15672/api/exchanges/%2f/amq.default/publish
with this body
{"properties":{},
"routing_key":"queue_test",
"payload":"message test ",
"payload_encoding":"string"}
routing_key is the queue where you will publish the message.
The original post illustrated this with a screenshot from a Chrome REST plug-in.
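As a rough command-line equivalent (a sketch: the guest credentials are an assumption, and the body is the JSON shown above):
curl -u guest:guest -H "content-type: application/json" -X POST \
     -d '{"properties":{},"routing_key":"queue_test","payload":"message test ","payload_encoding":"string"}' \
     http://localhost:15672/api/exchanges/%2f/amq.default/publish
A successful publish should return a body like {"routed":true}, meaning the message reached the queue_test queue.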
| RabbitMQ | 32,486,398 | 24 |
I'm using RabbitMQ as a message queue in a service-oriented architecture, where many separate web services publish messages bound for RabbitMQ queues. Those queues are in turn subscribed to by various consumers, which perform background work; a pretty vanilla use-case for RabbitMQ.
Now I'd like to change some of the queue parameters (specifically, I'd like to bind queues to a new dead-letter exchange with a certain routing key). My problem is that making this change in place on a production system is problematic for a couple reasons.
What's the best way for me to transition to these new queues without losing messages in a production system?
I've considered everything from versioning queue names to making a new vhost with the new settings to doing all the changes in place.
Here are some of the problems I'm facing:
Because RabbitMQ queue declarations are idempotent, the disparate web services have been declaring the queues before publishing to them (in case they don't already exist). Once you change the queue parameters (but maintain the same routing key), the queue declare fails and RabbitMQ closes the channel.
I'd like to not lose messages when changing a queue (here I'm planning on subscribing an exclusive consumer that saves the messages and then republishes to the new queue).
General coordination between disparate publishers and the consumer base (or, even better, a way to avoid needing to coordinate them).
| Queue bindings can be added and removed at runtime without any impact on clients, unless the clients manually modify bindings. So if your question is only about bindings, just change them via the CLI or the web management panel and skip what is written below.
It's a common problem to make backwards-incompatible changes, especially in a heterogeneous environment, and especially when multiple applications attempt to declare the same entity in their own way (with their own specific settings). There is no easy way to change a queue declaration in multiple applications at the same time, and the right approach depends heavily on how your whole workflow is organized, how critical your apps are, and what your infrastructure looks like.
Fast and dirty way:
As long as the publishers don't deal with queue declarations and bindings (and they should not), you can focus on the consumers. Wrapping the queue declaration in a try-except block may be the fast and dirty choice. Also, most projects can survive a small downtime, so you can block the RabbitMQ user in one shell, alter the queue as you wish (create a new one and make your consumers use it instead of the old one), and then unblock the user and let the consumers work as before (your workers are under supervisor or monit, right?). Then manually migrate the messages from the old queue to the new one.
Fast and safe solution:
It is a bit tricky and based on a hack for migrating messages from one queue to another inside a single vhost. The whole solution works inside a single vhost but requires an extra queue for every queue you want to modify. Set up a dead-letter exchange on the source queue and point it at your new target queue, so that expired messages are routed there. Then apply a per-queue message TTL to the source queue, with the TTL set to 0 (its minimal value; see the 'No Queueing at all' note about immediate delivery). Both actions can be done via the CLI or the management panel and can be applied to an already-declared queue. This way your publishers can keep publishing as usual, and even the old consumers can keep working at first, while in parallel new consumers consume from the new queue, which can be pre-declared with the new arguments manually or in some other way.
Note that on queues with a large number of messages and heavy message flow there is some risk of hitting flow-control limits, especially if your server is already using almost all of its resources.
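As a hedged illustration of that trick (the queue names here are hypothetical), both the dead-letter settings and the zero TTL can be attached to the existing source queue through a policy rather than queue arguments:
rabbitmqctl set_policy --apply-to queues migrate-old-queue "^old_queue$" \
    '{"dead-letter-exchange":"","dead-letter-routing-key":"new_queue","message-ttl":0}'
With an empty dead-letter-exchange the expired messages go through the default exchange, and the routing key is simply the name of the target queue.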
Much more complicated but safer approach (for cases when whole messages workflow logic changed):
Make all the necessary changes to the applications and run the new codebase in parallel with the existing one, but on a different RabbitMQ vhost (or even a separate server; it depends on your applications' load and your hardware). It may actually be possible to run on the same vhost with changed exchange and queue names, but that doesn't sound good and smells even in written form. After you set up the new apps, switch them with the old ones and migrate the messages from the old queues to the new ones (or just let the old system drain its queues). This guarantees a seamless migration with minimal downtime. If your deployment is automated, the whole process will not take too much effort.
P.S.: in any of the cases above, if you can, let the old consumers empty the queues so you don't need to migrate messages manually.
Update:
You may find the Shovel plugin very useful, especially Dynamic Shovels, for moving messages between exchanges and queues, even between different vhosts and servers. It's the fastest and safest way to migrate messages between queues/exchanges.
| RabbitMQ | 25,274,182 | 24 |
Is there any way to count how many times a job has been requeued (via Reject or Nack) without manually requeuing the job?
I need to retry a job 'n' times and then drop it after 'n' attempts.
P.S.: Currently I requeue a job manually (drop the old job and create a new job with the exact same content plus an extra Counter header, if the Counter is not there or its value is less than 'n').
| Update from 2023 based on quorum queue's way of poison message handling:
Quorum queues keep track of the number of unsuccessful delivery
attempts and expose it in the "x-delivery-count" header that is included with any redelivered message.
Original answer (before quorum and stream queues were added):
There is a redelivered message property that is set to true when a message has been redelivered one or more times.
If you want to track the redelivery count, or the number of redeliveries left (like a hop limit or TTL in the IP stack), you have to store that value in the message body or headers yourself (literally: consume the message, modify it, and publish the modified message back to the broker).
There is also a similar question with an answer that may help you: How do I set a number of retry attempts in RabbitMQ?
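With a quorum queue, a consumer can implement the 'drop after n attempts' behaviour by reading that header. A minimal sketch in Python/pika (MAX_ATTEMPTS and do_work are hypothetical placeholders):
import pika

MAX_ATTEMPTS = 5  # assumed limit, i.e. 'n'

def callback(ch, method, properties, body):
    # quorum queues stamp redelivered messages with x-delivery-count
    attempts = (properties.headers or {}).get('x-delivery-count', 0)
    if attempts >= MAX_ATTEMPTS:
        ch.basic_ack(delivery_tag=method.delivery_tag)  # drop it (or log/dead-letter it first)
        return
    try:
        do_work(body)  # hypothetical work function
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
Quorum queues also support a delivery-limit setting, which makes the broker itself drop (or dead-letter) a message once that many attempts have been made.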
| RabbitMQ | 25,226,080 | 24 |
I've got a python worker client that spins up 10 workers, each of which hooks onto a RabbitMQ queue. A bit like this:
#!/usr/bin/python
import multiprocessing

import pika

worker_count = 10
queue_name = 'work'  # queue name assumed for illustration

def mqworker(qname):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='mqhost'))
    channel = connection.channel()
    channel.queue_declare(queue=qname, durable=True)
    # old-style (pika 0.x) consume signature, as in the original post
    channel.basic_consume(callback, queue=qname, no_ack=False)
    channel.basic_qos(prefetch_count=1)
    channel.start_consuming()

def callback(ch, method, properties, body):
    do_some_work(body)  # placeholder for the actual work
    ch.basic_ack(delivery_tag=method.delivery_tag)

if __name__ == '__main__':
    for i in range(worker_count):
        worker = multiprocessing.Process(target=mqworker, args=(queue_name,))
        worker.start()
The issue I have is that despite setting basic_qos on the channel, the first worker to start accepts all the messages off the queue, whilst the others sit there idle. I can see this in the rabbitmq interface, that even when I set worker_count to be 1 and dump 50 messages on the queue, all 50 go into the 'unacknowledged' bucket, whereas I'd expect 1 to become unacknowledged and the other 49 to be ready.
Why isn't this working?
| I appear to have solved this by moving where basic_qos is called.
Placing it just after channel = connection.channel() appears to alter the behaviour to what I'd expect.
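For reference, a sketch of the reordered setup inside the worker (basic_qos before the consumer is registered):
connection = pika.BlockingConnection(pika.ConnectionParameters(host='mqhost'))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)  # set prefetch before registering the consumer
channel.queue_declare(queue=qname, durable=True)
channel.basic_consume(callback, queue=qname, no_ack=False)
channel.start_consuming()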
| RabbitMQ | 12,426,927 | 24 |
I'm doing research to figure out what messaging solution to settle on for our future products and I can't really figure this one out.
There is a bunch of AMQP 0.9.1 implementations (RabbitMQ, Apache Qpid, OpenAMQ, to name a few), but no AMQP 1.0 implementation, although 1.0 has been finalized October 2011. Well, except for SwiftMQ [1].
Reading up on 1.0, it seems to be a major departure from the pre-1.0 spec, so it seems understandable that there's little enthusiasm for a major rewrite of something that is working fine. In fact, I can't see why RabbitMQ and others wouldn't just decide to migrate to ZeroMQ instead of AMQP 1.0.
Still, I cannot find any clear statement on that by implementors of the pre-1.0 AMQP spec, except some vague commitments like 'striving to always implement the latest AMQP spec'.
Edit: RabbitMQ actually does say
A future version of RabbitMQ will implement AMQP 1.0. Please contact us for details.
However, something tells me that statement is more than 3 years old, i.e. it predates the release of AMQP 1.0.
So are there any indications AMQP 1.0 could become a standard, except for the fact that major banks - and Microsoft - are behind it? The latter btw. without an implementation of its own.
It almost seems like AMQP 0.9.1 is more standard than 1.0 will be.
Well, there's https://github.com/rabbitmq/rabbitmq-amqp1.0; its self-proclaimed status is prototype, with apparently no work on it for half a year.
[1] My first impression of SwiftMQ I got by means of its author's rant on Spring's lacking AMQP support, which is why I'm not considering it for the time being. I wouldn't want to count on support from that guy.
| AMQP 1.0 is an alternative to AMQP 0-9-1 in name only. The two are so different that it might have been clearer to give them different names.
Choosing a current 0-9-1 implementation does not limit you:
0-9-1 defines a broker and messaging model, while 1.0 defines a messaging transport. Therefore it is possible to combine the AMQP 1.0 transport with 0-9-1, as RabbitMQ demonstrated at the AMQP 1.0 conference in NYC in 2011. Because it is a transport, AMQP 1.0 can also be attached to proprietary and/or closed non-royalty-free brokers.
AMQP 1.0 has just entered "a 60-day public review period in preparation for a member ballot to consider its approval as an OASIS Standard".
"The 60-day public review starts 14 August 2012 and ends 13 October 2012.
This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work."
Full details here:
https://www.oasis-open.org/news/announcements/60-day-public-review-for-advanced-message-queueing-protocol-amqp-v1-0-candidate-o
| RabbitMQ | 11,928,655 | 24 |
OK, I have been reading about Celery and RabbitMQ; while I appreciate the effort of the project and the documentation, I am still confused about a lot of things.
http://www.celeryproject.org/
http://ask.github.com/django-celery/
I am super confused about whether Celery is only for Django or a standalone server, as the second link claims Celery is tightly coupled with Django. Both sites show different ways of setting up and using Celery, which to me is chaotic.
Enough rant, is there a proper book available that I can buy?
| Well, not a book, but I recently did a Django+Celery setup on Dotcloud, and here's the short doc:
http://web.archive.org/web/20150329132442/http://docs.dotcloud.com/tutorials/python/django-celery/
It's intended for simple tasks to be run asynchronously. There is a dotcloud-specific setup, but the rest might clear things up a bit. AFAIK, Celery started tightly coupled with Django but later became an entirely different animal, although it still retains superb compatibility with Django.
| RabbitMQ | 7,843,345 | 24 |
There are a couple of threads talking about license issue. Mostly focusing on GPL/LGPL/BSD. I am trying to use RabbitMQ in commercial applications, which is licensed under Mozilla Public License(MPL). Is MPL friendly to commercial use?
I found a different question on Stack Overflow, and one of the comments mentions:
MPL: people can take your code, modify it, but if they distribute the modifications, they need to make sure modifications are publicly available for 3 years.
If I don't touch the source code at all, but only use the .jar files in my code, do I need to license my code under MPL as well?
| In a more serious tone: you should always consult your lawyer, but it could be helpful to read the annotated MPL 1.1 at http://www.mozilla.org/MPL/MPL-1.1-annotated.html. It basically means that the MPL-covered files can be combined with proprietary files in a "larger work"; still, it would be wise to read the annotated version and discuss it with a lawyer.
I'm not sure whether you also have doubts about the Apache license, since you mention it in the question topic but not in the body of your question.
| RabbitMQ | 2,073,477 | 24 |
I think I am missing something here. I am trying to create a simple Rabbit listener which can accept a custom object as the message type. Now, as per the docs:
In versions prior to 1.6, the type information to convert the JSON had to be provided in message headers, or a custom ClassMapper was required. Starting with version 1.6, if there are no type information headers, the type can be inferred from the target method arguments.
I am putting a message onto the queue manually using the RabbitMQ admin dashboard, and getting an error like:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, amqp_deliveryTag=1, amqp_consumerQueue=customer, amqp_redelivered=false, id=81e8a562-71aa-b430-df03-f60e6a37c5dc, amqp_consumerTag=amq.ctag-LQARUDrR6sUcn7FqAKKVDA, timestamp=1485635555742}]
My configuration:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("test");
    connectionFactory.setPassword("test1234");
    connectionFactory.setVirtualHost("/");
    return connectionFactory;
}

@Bean
RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public AmqpAdmin amqpAdmin() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(connectionFactory());
    return rabbitAdmin;
}

@Bean
public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
    return new Jackson2JsonMessageConverter();
}
Also, a second question: with this exception the message is not put back on the queue.
I am using Spring Boot 1.4, which brings in Spring AMQP 1.6.1.
Edit 1: I added the Jackson converter as above (probably not required with Spring Boot) and set the content type in the RabbitMQ admin, but I still got the error below; as you can see above, I am not configuring any listener container yet.
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, content_type=application/json, amqp_deliveryTag=3, amqp_consumerQueue=customer, amqp_redelivered=false, id=7f84d49d-037a-9ea3-e936-ed5552d9f535, amqp_consumerTag=amq.ctag-YSemzbIW6Q8JGYUS70WWtA, timestamp=1485643437271}]
| If you are using boot, you can simply add a Jackson2JsonMessageConverter @Bean to the configuration and it will be automatically wired into the listener (as long as it's the only converter). You need to set the content_type property to application/json if you are using the administration console to send the message.
Conversion errors are considered fatal by default because there is generally no reason to retry; otherwise they'd loop for ever.
EDIT
Here's a working boot app...
@SpringBootApplication
public class So41914665Application {

    public static void main(String[] args) {
        SpringApplication.run(So41914665Application.class, args);
    }

    @Bean
    public Queue queue() {
        return new Queue("foo", false, false, true);
    }

    @Bean
    public Jackson2JsonMessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @RabbitListener(queues = "foo")
    public void listen(Foo foo) {
        System.out.println(foo);
    }

    public static class Foo {

        public String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }
    }
}
I sent a message with the JSON body {"bar":"baz"} and the content_type property set to application/json, with this result:
2017-01-28 21:49:45.509 INFO 11453 --- [ main] com.example.So41914665Application : Started So41914665Application in 4.404 seconds (JVM running for 5.298)
Foo [bar=baz]
Boot will define an admin and template for you.
| RabbitMQ | 41,914,665 | 23 |
The RabbitMQ documentation states:
Default Virtual Host and User
When the server first starts running, and detects that its database is uninitialised or has been deleted, it initialises a fresh database with the following resources:
a virtual host named /
The api has things like:
/api/exchanges/#vhost#/?name?/bindings
where "?name?" is a specific exchange-name.
However, what does one put in for the #vhost# for the default-vhost?
| As written here: http://hg.rabbitmq.com/rabbitmq-management/raw-file/3646dee55e02/priv/www-api/help.html
As the default virtual host is called "/", this will need to be encoded as "%2f".
so:
/api/exchanges/%2f/{exchange_name}/bindings/source
full:
http://localhost:15672/api/exchanges/%2f/test_ex/bindings/source
as result:
[{"source":"test_ex","vhost":"/","destination":"test_queue","destination_type":"queue","routing_key":"","arguments":{},"properties_key":"~"}]
| RabbitMQ | 37,124,375 | 23 |
I have a nodejs client that uses bramqp for connecting to RabbitMQ server. My client can connect to a Rabbit MQ server in localhost and works well. But it's unable to connect to a remote RabbitMQ server on other machine. I opened port 5672 in the remote server, so I think that the problem is in the configuration of rabbitMQ server. How can I solve this problem?
| The problem seems to be the newer RabbitMQ access control policy: since RabbitMQ 3.3, the default guest user is only allowed to connect from localhost.
Please read this post:
Can't access RabbitMQ web management interface after fresh install
I think it can help you!
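A common fix (a sketch; the user name and password here are placeholders) is to create a non-guest user on the remote server and connect with that instead:
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_user_tags myuser administrator
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"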
| RabbitMQ | 23,487,238 | 23 |
I'm spec'ing a design right now that uses RabbitMQ as a message queue. The messages are going to have a JSON body, and for one message in particular I'd like to add a small binary file.
What I'd like to know is, should the binary file's data be part of the JSON message, or can it be appended to the message separately?
| Since a RabbitMQ message payload is just a byte array, you should encode your message body with 3 fields:
File size
Binary data of a file
Json
I disagree with a previous answer about embedding a file in json.
If you encode the file data inside the JSON you get wasted space (because of JSON escaping), unnecessary CPU usage (JSON encoding/decoding of the file data), and you need to read the file data twice (once for JSON deserialization and once more to copy it to where it needs to go).
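One possible framing, as a sketch (the 4-byte big-endian length prefix is an assumption, not something RabbitMQ requires):
import json
import struct

def pack_message(file_bytes, metadata):
    # [4-byte big-endian file size][file bytes][UTF-8 JSON metadata]
    return struct.pack('>I', len(file_bytes)) + file_bytes + json.dumps(metadata).encode('utf-8')

def unpack_message(body):
    (size,) = struct.unpack_from('>I', body, 0)
    file_bytes = body[4:4 + size]
    metadata = json.loads(body[4 + size:].decode('utf-8'))
    return file_bytes, metadata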
| RabbitMQ | 22,070,639 | 23 |
I'm trying to make the "hello world" application from here: RabbitMQ Hello World
Here is the code of my producer class:
package com.mdnaRabbit.producer;

import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;

import java.io.IOException;

public class App {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws IOException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        String message = "Hello World!";
        channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
        System.out.println(" [x] Sent '" + message + "'");
        channel.close();
        connection.close();
    }
}
And here is what I get when I run it:
Exception in thread "main" java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at com.rabbitmq.client.ConnectionFactory.createFrameHandler(ConnectionFactory.java:445)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:504)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:533)
at com.mdnaRabbit.producer.App.main(App.java:16)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Process finished with exit code 1
What is causing this?
I found the solution to my problem here Error in making a socket connection
| To deal with it I installed the RabbitMQ server. If rabbitmq-server is not installed, this error will be thrown.
Make sure you have installed the RabbitMQ server and that it's up and running by hitting http://localhost:15672/
| RabbitMQ | 15,434,810 | 23 |
Even though Redis and message queueing software are usually used for different purposes, I would like to ask about the pros and cons of using Redis for the following use case:
a group of event collectors writes incoming messages as key/value; consumers fetch and delete processed keys
load starts at around 100k msg/s and will go beyond 250k in a short period of time (months); the target is to reach a million msg/s
persistence is not strictly required; it is OK to lose non-journaled messages during a failure
performance is very important (and so is the number of systems required to handle the load)
messages do not have to be processed in the order they arrive
Do you know of use cases where Redis was chosen over traditional message queueing software? Or would you consider something else?
Note: I have also seen this, but it did not help:
Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?
thanks
| Given your requirements I would try Redis. It will perform better than other solutions and give you much finer grained control over the persistence characteristics. Depending on the language you're using you may be able to use a sharded Redis cluster (you need Redis bindings that support consistent hashing -- not all do). This will let you scale out to the volume you indicated. I've seen 10k/sec on my laptop in some basic tests.
You'll probably want to use the list operations in Redis (LPUSH for writes, BRPOP for reads) if you want queue semantics.
I have a former client that deployed Redis in production as a message queue last spring and they've been very happy with it.
| RabbitMQ | 7,506,118 | 23 |
I have one .NET 4.5.2 Service Publishing messages to RabbitMq via MassTransit.
And multiple instances of a .NET Core 2.1 Service Consuming those messages.
At the moment competing instances of the .NET core consumer service steal messages from the others.
i.e. The first one to consume the message takes it off the queue and the rest of the service instances don't get to consume it.
I want ALL instances to consume the same message.
How can I achieve this?
Publisher Service is configured as follows:
builder.Register(context =>
{
    MessageCorrelation.UseCorrelationId<MyWrapper>(x => x.CorrelationId);
    return Bus.Factory.CreateUsingRabbitMq(configurator =>
    {
        configurator.Host(new Uri("rabbitmq://localhost:5671"), host =>
        {
            host.Username(***);
            host.Password(***);
        });
        configurator.Message<MyWrapper>(x => { x.SetEntityName("my.exchange"); });
        configurator.Publish<MyWrapper>(x =>
        {
            x.AutoDelete = true;
            x.Durable = true;
            x.ExchangeType = true;
        });
    });
})
.As<IBusControl>()
.As<IBus>()
.SingleInstance();
And the .NET Core Consumer Services are configured as follows:
serviceCollection.AddScoped<MyWrapperConsumer>();

serviceCollection.AddMassTransit(serviceConfigurator =>
{
    serviceConfigurator.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri("rabbitmq://localhost:5671"), hostConfigurator =>
        {
            hostConfigurator.Username(***);
            hostConfigurator.Password(***);
        });

        cfg.ReceiveEndpoint(host, "my.exchange", exchangeConfigurator =>
        {
            exchangeConfigurator.AutoDelete = true;
            exchangeConfigurator.Durable = true;
            exchangeConfigurator.ExchangeType = "topic";
            exchangeConfigurator.Consumer<MyWrapperConsumer>(provider);
        });
    }));
});

serviceCollection.AddSingleton<IHostedService, BusService>();
And then MyWrapperConsumer looks like this:
public class MyWrapperConsumer :
    IConsumer<MyWrapper>
{
    .
    .
    public MyWrapperConsumer(...) => (..) = (..);

    public async Task Consume(ConsumeContext<MyWrapper> context)
    {
        //Do Stuff
    }
}
| It sounds like you want to publish messages and have multiple consumer service instances receive them. In that case, each service instance needs to have its own queue. That way, every published message will result in a copy being delivered to each queue. Then, each receive endpoint will read that message from its own queue and consume it.
All that excessive configuration you're doing is going against what you want. To make it work, remove all that exchange type configuration, and just configure each service instance with a unique queue name (you can generate it from host, machine, whatever) and just call Publish on the message producer.
You can see how RabbitMQ topology is configured: https://masstransit-project.com/advanced/topology/rabbitmq.html
| RabbitMQ | 57,209,798 | 22 |
I was having trouble, so I went into the registry and removed the service entry for rabbitmq. Now when I try to reinstall, it says the service already exists, but it doesn't start (since I removed it), and I can do a sc delete rabbitmq. How do I totally remove all traces of it and reinstall from scratch? I guess it still exists somewhere and only the registry entry is gone, since the install program says it is just updating it when I run rabbitmq-service install. I tried
rabbitmq-service remove but it says it doesn't exist.
| I would suggest as follows:
sudo apt-get remove --auto-remove rabbitmq-server
sudo apt-get purge --auto-remove rabbitmq-server
It will uninstall rabbitmq and purge all data (users, vhost..)
| RabbitMQ | 39,664,283 | 22 |