question | answer | tag | question_id | score
---|---|---|---|---
We've got an application which will be using RabbitMQ and have several different queues for passing messages between tiers.
Initially, I was planning to use multiple direct exchanges, with one for each message type, but it looks like having a single topic exchange with queues using different routing key bindings will achieve the same thing.
Having a single exchange also seems like it would be a bit easier to maintain, but I was wondering if there is any benefit to doing it one way over the other?
Option 1, using multiple direct exchanges:
ExchangeA (type: direct)
-QueueA
ExchangeB (type: direct)
-QueueB
ExchangeC (type: direct)
-QueueC
Option 2, using single topic exchange:
Exchange (type: topic)
-QueueA (receives messages from exchange with routing key of "TypeA")
-QueueB (receives messages from exchange with routing key of "TypeB")
-QueueC (receives messages from exchange with routing key of "TypeC")
| Assuming both models would be implemented on a single running broker, there's little difference that I can see.
Option 2 seems more common in the real world for solving this kind of routing problem (at least in my anecdotal experience) and it's exactly the challenge that Topic Exchanges exist to solve.
The only difference that you might encounter would relate to routing speed. I don't know for sure whether direct exchange routing (always an exact string match on the routing key) is faster in RabbitMQ than the routing key technique used in topic exchanges (which can include wildcards like # and *). My hunch would be that exact-match discrimination would be faster, but you could experiment for yourself to find out, or try contacting the RabbitMQ team to ask them.
Finally, if you go with option 1 and end up with lots of queues, then you'll have a proportional proliferation of exchanges. That sounds like a maintenance headache. If you'll only have a handful of queues then it won't be too much of an issue.
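For illustration, here is a minimal pika sketch of option 2; the exchange name "app.messages" is an assumed placeholder, and the queue/routing-key names are taken from the question:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# One topic exchange routes every message type.
channel.exchange_declare(exchange='app.messages', exchange_type='topic', durable=True)

# One queue per message type, each bound with its own routing key.
for queue, key in [('QueueA', 'TypeA'), ('QueueB', 'TypeB'), ('QueueC', 'TypeC')]:
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange='app.messages', routing_key=key)

# Publishers select the destination queue by routing key alone.
channel.basic_publish(exchange='app.messages', routing_key='TypeA', body='hello A')
connection.close()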
| RabbitMQ | 9,704,590 | 62 |
Does anyone know if there's a way to check the number of messages in a RabbitMQ queue from a client application?
I'm using the .NET client library.
| You can actually retrieve this via the client.
When you perform a queue_declare operation, RabbitMQ returns a tuple with three values: (<queue name>, <message count>, <consumer count>). The passive argument to queue_declare allows you to check whether a queue exists without modifying the server state, so you can use queue_declare with the passive option to check the queue length.
Not sure about .NET, but in Python, it looks something like this:
name, jobs, consumers = chan.queue_declare(queue=queuename, passive=True)
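If you are using a current version of pika, the declare result comes back as a method frame rather than a tuple; a hedged equivalent (assuming an open channel and a queuename variable as above) is:

# Passive declare raises an error if the queue doesn't exist; otherwise it
# returns the counts without modifying any server state.
result = channel.queue_declare(queue=queuename, passive=True)
message_count = result.method.message_count
consumer_count = result.method.consumer_count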
| RabbitMQ | 1,038,318 | 62 |
I'm trying to setup my first RabbitMQ dead letter exchange, here are the steps I'm using through the web admin interface:
Create new DIRECT exchange with the name "dead.letter.test"
Create new queue "dead.letter.queue"
Bind "dead.letter.queue" to "dead.letter.test"
Create new queue "test1" with the dead letter exchange set to "dead.letter.test"
Send a message into "test1"
Nack (with requeue = false) the message in "test1"
I am expecting that these steps should put a record into the "dead.letter.queue" through the "dead.letter.test" exchange. This is not happening.
I can manually put a message into the "dead.letter.test" exchange and it shows up in "dead.letter.queue" so I know that is fine.
When I look at the admin UI it shows that the DLX parameter is setup on the queue "test1".
Where am I going wrong?
| Gentilissimo Signore was kind enough to answer my question on Twitter. The problem is that if your dead letter exchange is set up as DIRECT you must specify a dead letter routing key. If you just want all your NACKed messages to go into a dead letter bucket for later investigation (as I do) then your dead letter exchange should be set up as a FANOUT.
Here are the updated steps that work:
Create new FANOUT exchange with the name "dead.letter.test"
Create new queue "dead.letter.queue"
Bind "dead.letter.queue" to "dead.letter.test"
Create new queue "test1" with the dead letter exchange set to "dead.letter.test"
Send a message into "test1"
Nack (with requeue = false) the message in "test1"
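Alternatively, you can keep a direct dead letter exchange if you also set a dead-letter routing key on the source queue. Here is a hedged pika sketch of that variant; the routing key "dead" is an assumed placeholder:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='dead.letter.test', exchange_type='direct', durable=True)
channel.queue_declare(queue='dead.letter.queue', durable=True)
# With a direct DLX, this binding key must match the dead-letter routing key below.
channel.queue_bind(queue='dead.letter.queue', exchange='dead.letter.test', routing_key='dead')

channel.queue_declare(queue='test1', durable=True, arguments={
    'x-dead-letter-exchange': 'dead.letter.test',
    'x-dead-letter-routing-key': 'dead',  # required when the DLX is direct
})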
| RabbitMQ | 21,742,232 | 61 |
RabbitMQ in Docker loses its data after the container is removed, even though the volumes are kept.
My Dockerfile:
FROM rabbitmq:3-management
ENV RABBITMQ_HIPE_COMPILE 1
ENV RABBITMQ_ERLANG_COOKIE "123456"
ENV RABBITMQ_DEFAULT_VHOST "123456"
My run script:
IMAGE_NAME="service-rabbitmq"
TAG="${REGISTRY_ADDRESS}/${IMAGE_NAME}:${VERSION}"
echo $TAG
docker rm -f $IMAGE_NAME
docker run \
-itd \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
After removing the container, all data is lost.
How do I configure RabbitMQ in Docker with persistent data?
| RabbitMQ uses the hostname as part of the folder name in the mnesia directory, and Docker assigns every new container a new hostname unless you pin one, so each new container writes to a fresh directory. Add a --hostname some-rabbit to your docker run.
I had the same issue and I found the answer here.
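For example, a hedged version of the run script from the question with the hostname pinned (only the --hostname line is new):

docker run \
-itd \
--hostname "service-rabbitmq" \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG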
| RabbitMQ | 41,330,514 | 60 |
I am using django-celery for my Django project. Recently I changed my computer's hostname (I am using Ubuntu 12.04; I edited the file '/etc/hostname'), and after the next restart django-celery was failing with the error
Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 4 seconds...
After some research on this error I found that changing my hostname caused it. My rabbitmq startup log shows
file: /var/log/rabbitmq/startup_log
Activating RabbitMQ plugins ...
********************************************************************************
********************************************************************************
0 plugins activated:
ERROR: epmd error for host "jinesh": nxdomain (non-existing domain)
My startup_err file is empty.
when I run
root@jinesh:/home/jinesh# rabbitmqctl list_users
Listing users ...
Error: unable to connect to node rabbit@jinesh: nodedown
DIAGNOSTICS
===========
nodes in question: [rabbit@jinesh]
hosts, their running nodes and ports:
- unable to connect to epmd on jinesh: nxdomain
current node details:
- node name: rabbitmqctl4956@jinesh
- home dir: /var/lib/rabbitmq
- cookie hash: RGhmB2JR1LbZ57j7xWWTxg==
I hope changing the node name may fix this issue, but I couldn't find a way to do it. Does anyone have an idea how to solve this?
update
While changing the hostname you have to change both the /etc/hostname and /etc/hosts files.
I reinstalled RabbitMQ and solved this issue; I will answer this question below.
| Remove the old installation of RabbitMQ to fix this problem. Here are steps to reinstall RabbitMQ. These commands are run as the root user:
Stop RabbitMQ: rabbitmqctl stop
Change /etc/hosts
Change /etc/hostname
Uninstall old RabbitMQ: dpkg -P rabbitmq-server
Remove RabbitMQ’s database: rm -rf /var/lib/rabbitmq
Find erlang’s process that is running rabbit: ps ax | grep rabbit
Kill the listed process
Reinstall RabbitMQ: apt-get install rabbitmq-server
I wrote about these steps on my blog.
| RabbitMQ | 14,659,335 | 60 |
I have thousands of unacked messages in my dev environment which I can't restart.
Is there a way to remove (purge) all messages even if they are unacknowledged?
| Close the channel that the unacked messages reside on, which will nack them back into the queue, then call purge.
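A hedged pika sketch of that sequence (the queue name is a placeholder). The unacked messages live on the consumers' channels, so those channels or connections are the ones to close; once the messages have been requeued, a plain purge removes everything:

import pika

# Step 1: close the consumers' channels/connections elsewhere; that nacks
# their unacked messages back into the queue.
# Step 2: purge the now fully-requeued queue.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_purge(queue='my_queue')  # hypothetical queue name
connection.close()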
| RabbitMQ | 25,114,230 | 59 |
I've created an ASP.NET Core MVC/WebApi site that has a RabbitMQ subscriber based off James Still's blog article Real-World PubSub Messaging with RabbitMQ.
In his article he uses a static class to start the queue subscriber and define the event handler for queued events. This static method then instantiates the event handler classes via a static factory class.
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System;
using System.Text;
namespace NST.Web.MessageProcessing
{
public static class MessageListener
{
private static IConnection _connection;
private static IModel _channel;
public static void Start(string hostName, string userName, string password, int port)
{
var factory = new ConnectionFactory
{
HostName = hostName,
Port = port,
UserName = userName,
Password = password,
VirtualHost = "/",
AutomaticRecoveryEnabled = true,
NetworkRecoveryInterval = TimeSpan.FromSeconds(15)
};
_connection = factory.CreateConnection();
_channel = _connection.CreateModel();
_channel.ExchangeDeclare(exchange: "myExchange", type: "direct", durable: true);
var queueName = "myQueue";
QueueDeclareOk ok = _channel.QueueDeclare(queueName, true, false, false, null);
_channel.QueueBind(queue: queueName, exchange: "myExchange", routingKey: "myRoutingKey");
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += ConsumerOnReceived;
_channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);
}
public static void Stop()
{
_channel.Close(200, "Goodbye");
_connection.Close();
}
private static void ConsumerOnReceived(object sender, BasicDeliverEventArgs ea)
{
// get the details from the event
var body = ea.Body;
var message = Encoding.UTF8.GetString(body);
var messageType = "endpoint"; // hardcoding the message type while we dev...
// instantiate the appropriate handler based on the message type
IMessageProcessor processor = MessageHandlerFactory.Create(messageType);
processor.Process(message);
// Ack the event on the queue
IBasicConsumer consumer = (IBasicConsumer)sender;
consumer.Model.BasicAck(ea.DeliveryTag, false);
}
}
}
It works great up to the point where I now need to resolve a service in my message processor factory rather than just write to the console.
using NST.Web.Services;
using System;
namespace NST.Web.MessageProcessing
{
public static class MessageHandlerFactory
{
public static IMessageProcessor Create(string messageType)
{
switch (messageType.ToLower())
{
case "ipset":
// need to resolve IIpSetService here...
IIpSetService ipService = ???????
return new IpSetMessageProcessor(ipService);
case "endpoint":
// need to resolve IEndpointService here...
IEndpointService epService = ???????
// create new message processor
return new EndpointMessageProcessor(epService);
default:
throw new Exception("Unknown message type");
}
}
}
}
Is there any way to access the ASP.NET Core IoC container to resolve the dependencies? I don't really want to have to spin up the whole stack of dependencies by hand :(
Or, is there a better way to subscribe to RabbitMQ from an ASP.NET Core application? I found RestBus but it's not been updated for Core 1.x
| You can avoid the static classes and use Dependency Injection all the way through combined with:
The use of IApplicationLifetime to start/stop the listener whenever the application starts/stops.
The use of IServiceProvider to create instances of the message processors.
First thing, let's move the configuration to its own class that can be populated from the appsettings.json:
public class RabbitOptions
{
public string HostName { get; set; }
public string UserName { get; set; }
public string Password { get; set; }
public int Port { get; set; }
}
// In appsettings.json:
{
"Rabbit": {
"hostName": "192.168.99.100",
"username": "guest",
"password": "guest",
"port": 5672
}
}
Next, convert MessageHandlerFactory into a non-static class that receives an IServiceProvider as a dependency. It will use the service provider to resolve the message processor instances:
public class MessageHandlerFactory
{
private readonly IServiceProvider services;
public MessageHandlerFactory(IServiceProvider services)
{
this.services = services;
}
public IMessageProcessor Create(string messageType)
{
switch (messageType.ToLower())
{
case "ipset":
return services.GetService<IpSetMessageProcessor>();
case "endpoint":
return services.GetService<EndpointMessageProcessor>();
default:
throw new Exception("Unknown message type");
}
}
}
This way your message processor classes can receive in the constructor any dependencies they need (as long as you configure them in Startup.ConfigureServices). For example, I am injecting an ILogger into one of my sample processors:
public class IpSetMessageProcessor : IMessageProcessor
{
private ILogger<IpSetMessageProcessor> logger;
public IpSetMessageProcessor(ILogger<IpSetMessageProcessor> logger)
{
this.logger = logger;
}
public void Process(string message)
{
logger.LogInformation("Received message: {0}", message);
}
}
Now convert MessageListener into a non-static class that depends on IOptions<RabbitOptions> and MessageHandlerFactory. It's very similar to your original one; I just replaced the parameters of the Start method with the options dependency, and the handler factory is now a dependency instead of a static class:
public class MessageListener
{
private readonly RabbitOptions opts;
private readonly MessageHandlerFactory handlerFactory;
private IConnection _connection;
private IModel _channel;
public MessageListener(IOptions<RabbitOptions> opts, MessageHandlerFactory handlerFactory)
{
this.opts = opts.Value;
this.handlerFactory = handlerFactory;
}
public void Start()
{
var factory = new ConnectionFactory
{
HostName = opts.HostName,
Port = opts.Port,
UserName = opts.UserName,
Password = opts.Password,
VirtualHost = "/",
AutomaticRecoveryEnabled = true,
NetworkRecoveryInterval = TimeSpan.FromSeconds(15)
};
_connection = factory.CreateConnection();
_channel = _connection.CreateModel();
_channel.ExchangeDeclare(exchange: "myExchange", type: "direct", durable: true);
var queueName = "myQueue";
QueueDeclareOk ok = _channel.QueueDeclare(queueName, true, false, false, null);
_channel.QueueBind(queue: queueName, exchange: "myExchange", routingKey: "myRoutingKey");
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += ConsumerOnReceived;
_channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);
}
public void Stop()
{
_channel.Close(200, "Goodbye");
_connection.Close();
}
private void ConsumerOnReceived(object sender, BasicDeliverEventArgs ea)
{
// get the details from the event
var body = ea.Body;
var message = Encoding.UTF8.GetString(body);
var messageType = "endpoint"; // hardcoding the message type while we dev...
//var messageType = Encoding.UTF8.GetString(ea.BasicProperties.Headers["message-type"] as byte[]);
// instantiate the appropriate handler based on the message type
IMessageProcessor processor = handlerFactory.Create(messageType);
processor.Process(message);
// Ack the event on the queue
IBasicConsumer consumer = (IBasicConsumer)sender;
consumer.Model.BasicAck(ea.DeliveryTag, false);
}
}
Almost there, you will need to update the Startup.ConfigureServices method so it knows about your services and options (You can create interfaces for the listener and handler factory if you want):
public void ConfigureServices(IServiceCollection services)
{
// ...
// Add RabbitMQ services
services.Configure<RabbitOptions>(Configuration.GetSection("rabbit"));
services.AddTransient<MessageListener>();
services.AddTransient<MessageHandlerFactory>();
services.AddTransient<IpSetMessageProcessor>();
services.AddTransient<EndpointMessageProcessor>();
}
Finally, update the Startup.Configure method to take an extra IApplicationLifetime parameter and start/stop the message listener in the ApplicationStarted/ApplicationStopped events (Although I noticed a while ago some issues with the ApplicationStopping event using IISExpress, as in this question):
public MessageListener MessageListener { get; private set; }
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime)
{
appLifetime.ApplicationStarted.Register(() =>
{
MessageListener = app.ApplicationServices.GetService<MessageListener>();
MessageListener.Start();
});
appLifetime.ApplicationStopping.Register(() =>
{
MessageListener.Stop();
});
// ...
}
| RabbitMQ | 40,611,683 | 58 |
Does RabbitMQ have any concept of message priority? I have an issue where some more important messages are being slowed down due to less important messages sitting before them in the queue. I want the high-priority ones to take precedence and move to the front of the queue.
I know I can approximate this using two queues, a "fast" queue and a "slow" queue, but that seems like a hack.
Does anyone know of a better solution using RabbitMQ?
| The answers on this question are out-of-date. As of RabbitMQ 3.5.0, there is now in-core support for AMQP standard per-message priorities. The documentation has all the gory details, but in short:
You need to define the queue's priority range at the time the queue is created;
Messages without a priority set get a priority of 0;
Messages with a numeric priority higher than the maximum set on the queue get the highest priority the queue supports.
More interesting caveats are in the docs. It's well worth reading them.
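A hedged pika sketch of per-message priorities (the queue name "tasks" is a placeholder; requires RabbitMQ 3.5.0+):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The priority range must be set when the queue is declared.
channel.queue_declare(queue='tasks', arguments={'x-max-priority': 10})

# Messages without a priority default to 0; values above the queue's
# maximum are treated as the maximum.
channel.basic_publish(exchange='', routing_key='tasks', body='urgent',
                      properties=pika.BasicProperties(priority=9))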
| RabbitMQ | 10,745,084 | 58 |
What is the benefit of building on top of MassTransit compared to building directly on top of RabbitMQ?
I believe one benefit provided by MassTransit is 'type' exchange (publish subscribe by interface / type) so the content of the message is structured, compared to plain RabbitMQ exchanges where the content of the message is unstructured text / blob.
What other benefits provided by MassTransit?
| Things that MT adds on top of just using RabbitMQ:
Optimized, asynchronous multithreaded, concurrent consumers
Message serialization, with support for interfaces, classes, and records, including guidance on versioning message contracts
Automatic exchange bindings, publish conventions
Saga state machines, including persistent state via Entity Framework Core, MongoDB, Redis, etc.
Built-in metrics, Open Telemetry, Prometheus
Message headers
Fault handling, message retry, message redelivery
Those are just a few, some more significant than others. The fact that the bus hosts your consumers, handlers, sagas, and manages all of the threading is probably the biggest advantage, and the fact that you can host multiple buses in the same process.
Serialization is the next biggest benefit, since that can be painful to figure out, and getting an interface-based message contract with automatic deserialized into types (including dynamically-backed interface types) is huge. Publishing a single class that implements multiple interfaces, and seeing all interested consumers pick up their piece of the message asynchronously is just awesome in production as new interfaces can be added to producers and down-level consumers are unaffected.
Those are a few, you can check out the documentation for more information, or give the really old .NET Rocks! podcast a listen for some related content by yours truly.
UPDATE: There is an entire series on YouTube covering MassTransit now.
| RabbitMQ | 12,296,787 | 57 |
I haven't found an existing post asking this but apologize if I missed it.
I'm trying to get my head round microservices and have come across articles where RabbitMQ is used. I'm confused why RabbitMQ is needed. Is the intention that the services will use a web api to communicate with the outside world and RabbitMQ to communicate with each other?
| In a microservices architecture you have two ways to communicate between the microservices:
Synchronous - each service calls the other microservice directly, which results in a dependency between the services
Asynchronous - you have some central hub (or message queue) where you place all requests between the microservices, and the corresponding service takes the request, processes it and returns the result to the caller. This is what RabbitMQ (or any other message queue - MSMQ and Apache Kafka are good alternatives) is used for. In this case all microservices know only about the existence of the hub.
microservices.io has some very nice articles about using microservices
| RabbitMQ | 45,208,766 | 56 |
I have installed rabbitmq on ubuntu and trying to start it using rabbitmq-server start, however, I'm getting this error:
Activating RabbitMQ plugins ...
0 plugins activated:
node with name "rabbit" already running on "mybox"
diagnostics:
- nodes and their ports on mybox: [{rabbit,38618},
{rabbitmqprelaunch13346,41776}]
- current node: rabbitmqprelaunch13346@mybox
- current node home dir: /var/lib/rabbitmq
- current node cookie hash: 8QRKGluOJOcZ4AAkEdFwQg==
so I try to stop it or restart it using service rabbitmq-server restart but I get the following error: Restarting rabbitmq-server: RabbitMQ is not running
The server's host name hostname -s is mybox.
How do I stop the currently running instance, or at least, how do I manage it? I have no access to it and yet I'm not able to run rabbitmq properly.
Thank you.
| Rabbitmq is set to start automatically after it's installed.
I don't think it is configured to run with the service command.
To see the status of rabbitmq
sudo rabbitmqctl status
To stop the rabbitmq
sudo rabbitmqctl stop
(Try the status command again to see that it's stopped).
To start it again, the recommended method is
sudo invoke-rc.d rabbitmq-server start
These all work with the vanilla ubuntu install using apt-get
Still not working?
If you've tried unsuccessfully to start or restart rabbitmq, check to see how many processes are running.
ps -ef | grep rabbit
There should be 5 processes running as the user rabbitmq.
If you have more, particularly if they're running as other users (such as root, or your own user) you should stop these processes.
The cleanest way is probably to reboot your machine.
| RabbitMQ | 10,347,751 | 56 |
As a way to learn RabbitMQ and python I'm working on a project that allows me to distribute h264 encodes between a number of computers. The basics are done, I have a daemon that runs on Linux or Mac that attaches to queue, accepts jobs and encodes them using HandBrakeCLI and acks the message once the encode is complete. I've also built a simple tool to push items into the queue.
Now I want to expand the capabilities of the tool that pushes items into the queue so that I can view what is in the queue. I'm aware of the ability to see how many items are in the queue, but I want to be able to get the actual messages so I can show which movie or TV show is still waiting to be encoded. The idea is that the queue manager would receive messages from the encoder clients when a job has completed and then refresh the queue list.
I know there is a convoluted way of keeping the queue manager's list in sync with the actual work queue but I'd like this to be "persistent" in that I should be able to close the queue manager and reopen it later to see the queue.
| Queue browsing is not supported directly, but if you consume from the queue with auto-acknowledgement disabled and do not ACK the messages that you receive, then you can see everything in it. After you have had a look, send a CANCEL on the channel, or disconnect and reconnect, to cause all the messages to be requeued. This does increment a number in the message headers, but otherwise leaves the messages untouched.
I built an app where message ordering was not terribly important, and I frequently scanned through the queue in this way. If I found a problem, I would dump the messages into a file, fix them and resubmit.
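A hedged pika sketch of this peeking trick, assuming a hypothetical queue named 'encode_jobs': fetch without auto-ack, inspect the bodies, then drop the connection so everything is requeued.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# basic_get with auto_ack=False leaves each fetched message unacked
# on this channel, so nothing is actually removed.
while True:
    method, properties, body = channel.basic_get(queue='encode_jobs', auto_ack=False)
    if method is None:
        break  # nothing left to fetch
    print(method.delivery_tag, body)

# Closing without acking requeues every message (the redelivered flag gets set).
connection.close()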
If you only need to peek at a message or two once in a while you can do that with the RabbitMQ management plugin.
In addition, if you only need a message count, you can get that every time you declare the queue, or on a basic.get command.
| RabbitMQ | 4,700,292 | 55 |
I receive a string through a RabbitMQ messaging system. Before sending, I use json.Marshal, convert the outcome to a string and send it through RabbitMQ.
The structs that I convert and send can be: (changed the names and the size of the structs but it should not matter)
type Somthing1 struct{
Thing string `json:"thing"`
OtherThing int64 `json:"other_thing"`
}
or
type Somthing2 struct{
Croc int `json:"croc"`
Odile bool `json:"odile"`
}
The message goes through perfectly as a string and is printed on the other side (some server). Up until now everything works.
Now I'm trying to convert them back into their structs and assert the types. My first attempt is this:
func typeAssert(msg string) {
var input interface{}
json.Unmarshal([]byte(msg), &input)
switch input.(type){
case Somthing1:
job := Somthing1{}
job = input.(Somthing1)
queueResults(job)
case Somthing2:
stats := Somthing2{}
stats = input.(Somthing2)
queueStatsRes(stats)
default:
}
This does not work. When printing the type of input after unmarshaling it, I get map[string]interface{} (?!?), and even stranger than that, the map key is the string I got and the map value is empty.
I made some other attempts, like:
func typeAssert(msg string) {
var input interface{}
json.Unmarshal([]byte(msg), &input)
switch v := input.(type){
case Somthing1:
v = input.(Somthing1)
queueResults(v)
case Somthing2:
v = input.(Somthing2)
queueStatsRes(v)
default:
}
and also tried writing the switch like it was explained in this answer:
Golang: cannot type switch on non-interface value
switch v := interface{}(input).(type)
still with no success...
Any ideas?
| The default types that the json package Unmarshals into are shown in the Unmarshal function documentation
bool, for JSON booleans
float64, for JSON numbers
string, for JSON strings
[]interface{}, for JSON arrays
map[string]interface{}, for JSON objects
nil for JSON null
Since you're unmarshaling into an interface{}, the returned types will only be from that set. The json package doesn't know about Something1 and Something2. You either need to convert from the map[string]interface{} that the json object is being unmarshaled into, or unmarshal directly into the struct type you want.
If you don't want to unpack the data from a generic interface, or somehow tag the data so you know what type to expect, you could iteratively take the json and try to unmarshal it into each type you want.
You can even pack those into a wrapper struct to do the unmarshaling for you:
type Something1 struct {
Thing string `json:"thing"`
OtherThing int64 `json:"other_thing"`
}
type Something2 struct {
Croc int `json:"croc"`
Odile bool `json:"odile"`
}
type Unpacker struct {
Data interface{}
}
func (u *Unpacker) UnmarshalJSON(b []byte) error {
smth1 := &Something1{}
err := json.Unmarshal(b, smth1)
// no error, but we also need to make sure we unmarshaled something
if err == nil && smth1.Thing != "" {
u.Data = smth1
return nil
}
// abort if we have an error other than the wrong type
if _, ok := err.(*json.UnmarshalTypeError); err != nil && !ok {
return err
}
smth2 := &Something2{}
err = json.Unmarshal(b, smth2)
if err != nil {
return err
}
u.Data = smth2
return nil
}
http://play.golang.org/p/Trwd6IShDW
| RabbitMQ | 35,583,735 | 54 |
For the company I work for, we would like to use RabbitMQ as our main message bus. The idea we have is that every single application uses its own vhost for internal communication and that, via the shovel or federation plugin, we would make it possible to share certain types of events across multiple vhosts (maybe even multiple non-clustered machines).
We chose one vhost per application to separate internal communication from public events and to keep the security adjustable per application.
Based on the information published on the RabbitMQ website, I don't get when I should choose shovels and when I should choose the federation plugin.
RabbitMQ has the following explanation when to use what:
Typically you would use the shovel to link brokers across the internet when you need more control than federation provides.
What is the fine-grained control in shovels that I am missing when I choose federation?
At this moment I think I would prefer the federation plugin because I could automate the inter-vhost-communication via the REST API provided by the federation plugin.
In case of shovels I would need to change the shovel configuration and reboot the RabbitMQ instance every time we would like to share an event between vhosts. Are my thoughts correct about this?
We are currently running RMQ on Windows with clients connecting from .NET. In the near future Java/Perl/PHP clients will join.
To summarize my questions:
What is the fine-grained control in shovels that I am missing when I choose federation?
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
Does the setup (vhost per application) make sense or am I missing the point completely?
| Shovels and federation provide different means to forward messages from one RabbitMQ node to another.
Federated Exchange
With a federated exchange, queues can be connected to the queue on the upstream (source) node. In addition, an exchange on the downstream (destination) node will receive a copy of messages that are published to the upstream node.
Federated exchanges are similar to exchange-to-exchange bindings, in that they can (optionally) subscribe to a limited set of messages from an upstream exchange.
Federated Queue
(NOTE: These are new in RabbitMQ 3.2.x)
With a federated queue, consumers can be connected to the queue on both the upstream (source) and downstream (destination) nodes.
In essence the downstream queue is a consumer on the upstream queue, with the expectation that there will be additional downstream consumers that process the messages in the same manner as a consumer attached to the upstream queue.
Any messages consumed by the downstream (federated) queue will not be available for consumers on the upstream queue.
Use Case:
If consumers are being migrated from one node to another, a federated queue will allow this to happen without messages being missed, or processed twice.
Use Case: from the RabbitMQ docs
The typical use would be to have the same "logical" queue distributed
over many brokers. Each broker would declare a federated queue with
all the other federated queues upstream. (The links would form a
complete bi-directional graph on n queues.)
Shovel
Shovels on the other hand, attach an "upstream" queue to a "downstream" exchange. (I place the terms in quotes because the shovel documentation does not describe the nodes with the same semantics as the federation documentation.)
The shovel consumes the messages from the queue and sends them to the exchange on the destination node. (NOTE: While not normally discussed as part of this the pattern, there is nothing stopping a consumer from connecting to the queue on the origin node.)
To answer the specific questions:
What is the fine-grained control in shovels that I am missing when I choose federation?
A shovel does not have to reside on an "upstream" or "downstream" node. It can be configured and operate from an independent node.
A shovel can create all of the elements of the linkage by itself: the source queue, the bindings of the queue, and the destination exchange. Thus, it is non-invasive to either the source or destination node.
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
This has generally been the accepted downside of the shovel.
With the following command (caveat: only tested on RabbitMQ 3.1.x, and with a very specific rabbitmq.config file that only contain ) you can reload a shovel configuration from the specified file. (in this case /etc/rabbitmq/rabbitmq.config)
rabbitmqctl eval 'application:stop(rabbitmq_shovel), {ok, [[{rabbit, _}|[{rabbitmq_shovel, [{shovels, Shovels}] }]]]} = file:consult("/etc/rabbitmq/rabbitmq.config"), application:set_env(rabbitmq_shovel, shovels, Shovels), application:start(rabbitmq_shovel).'
Does the setup (vhost per application) make sense or am I missing the
point completely?
This decision is going to depend on your use case. vhosts primarily provide logical (and access) separation between queues/exchanges and authorized users.
| RabbitMQ | 19,357,272 | 54 |
I've been evaluating messaging technologies for my company but I've become very confused by the conceptual differences between a few terms:
Pub/Sub vs Multicast vs Fan Out
I am working with the following definitions:
Pub/Sub has publishers delivering a separate copy of each message to
each subscriber which means that the opportunity to guarantee delivery exists
Fan Out has a single queue pushing to all listening
clients.
Multicast just spams out data and if someone is listening
then fine, if not, it doesn't matter. No possibility to guarantee a client definitely gets a message.
Are these definitions right? Or is Pub/Sub the pattern, and multicast, direct, fanout etc. ways to achieve the pattern?
I'm trying to work the out-of-the-box RabbitMQ definitions into our architecture but I'm just going around in circles at the moment trying to write the specs for our app.
Please could someone advise me whether I am right?
| I'm confused by your choice of three terms to compare. Within RabbitMQ, Fanout and Direct are exchange types. Pub-Sub is a generic messaging pattern but not an exchange type. And you didn't even mention the 3rd and most important Exchange type, namely Topic. In fact, you can implement Fanout behavior on a Topic exchange just by declaring multiple queues with the same binding key. And you can define Direct behavior on a Topic exchange by declaring a Queue with * as the wildcard binding key.
Pub-Sub is generally understood as a pattern in which an application publishes messages which are consumed by several subscribers.
With RabbitMQ/AMQP it is important to remember that messages are always published to exchanges. Then exchanges route to queues. And queues deliver messages to subscribers. The behavior of the exchange is important. In Topic exchanges, the routing key from the publisher is matched up to the binding key from the subscriber in order to make the routing decision. Binding keys can have wildcard patterns which further influences the routing decision. More complicated routing can be done based on the content of message headers using a headers exchange type
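To make that concrete, here is a hedged pika sketch (all exchange, queue, and key names are placeholders) of a single topic exchange behaving like fanout for some queues and like direct for another:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='events', exchange_type='topic')

# Fanout-like: queues bound with the match-everything key each get a copy.
for q in ('audit', 'metrics'):
    channel.queue_declare(queue=q)
    channel.queue_bind(queue=q, exchange='events', routing_key='#')

# Direct-like: an exact binding key matches only that routing key.
channel.queue_declare(queue='orders')
channel.queue_bind(queue='orders', exchange='events', routing_key='order.created')

channel.basic_publish(exchange='events', routing_key='order.created', body='...')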
RabbitMQ doesn't guarantee delivery of messages but you can get guaranteed delivery by choosing the right options(delivery mode = 2 for persistent msgs), and declaring exchanges and queues in advance of running your application so that messages are not discarded.
| RabbitMQ | 8,261,654 | 54 |
It seems the longer I keep my rabbitmq server running, the more trouble I have with unacknowledged messages. I would love to requeue them. In fact there seems to be an amqp command to do this, but it only applies to the channel that your connection is using. I built a little pika script to at least try it out, but I am either missing something or it cannot be done this way (how about with rabbitmqctl?)
import pika

credentials = pika.PlainCredentials('***', '***')
parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                       credentials=credentials, virtual_host='***')

def handle_delivery(body):
    """Called when we receive a message from RabbitMQ"""
    print body

def on_connected(connection):
    """Called when we are fully connected to RabbitMQ"""
    connection.channel(on_channel_open)

def on_channel_open(new_channel):
    """Called when our channel has opened"""
    global channel
    channel = new_channel
    channel.basic_recover(callback=handle_delivery, requeue=True)

try:
    connection = pika.SelectConnection(parameters=parameters,
                                       on_open_callback=on_connected)
    # Loop so we can communicate with RabbitMQ
    connection.ioloop.start()
except KeyboardInterrupt:
    # Gracefully close the connection
    connection.close()
    # Loop until we're fully closed, will stop on its own
    connection.ioloop.start()
| Unacknowledged messages are those which have been delivered across the network to a consumer but have not yet been acked or rejected, while the consumer hasn't yet closed the channel or connection over which it originally received them. Therefore the broker can't figure out if the consumer is just taking a long time to process those messages or if it has forgotten about them. So, it leaves them in an unacknowledged state until either the consumer dies or they get acked or rejected.
Since those messages could still be validly processed in the future by the still-alive consumer that originally consumed them, you can't (to my knowledge) insert another consumer into the mix and try to make external decisions about them. You need to fix your consumers to make decisions about each message as they get processed rather than leaving old messages unacknowledged.
| RabbitMQ | 7,063,224 | 54 |
On my team at work, we use the IBM MQ technology a lot for cross-application communication. I've lately seen, on Hacker News and other places, mentions of other MQ technologies like RabbitMQ. I have a basic understanding of what it is (a commonly checked area to put and get messages), but what I want to know is what exactly it is good at. How will I know where I want to use it and when? Why not just stick with more rudimentary forms of interprocess messaging?
| All the explanations so far are accurate and to the point - but might be missing something: one of the main benefits of message queueing is resilience.
Imagine this: you need to communicate with two or three other systems. A common approach these days would be web services, which is fine if you need an answer right away.
However: web services can be down and not available - what do you do then? Putting your message into a message queue (which has a component on your machine/server, too) typically will work in this scenario - your message just doesn't get delivered and thus processed right now - but it will later on, when the other side of the service comes back online.
So in many cases, using message queues to connect disparate systems is a more reliable, more robust way of sending messages back and forth. It doesn't work well for everything (if you want to know the current stock price for MSFT, putting that request into a queue might not be the best of ideas) - but in lots of cases, like putting an order into your supplier's message queue, it works really well and can help ease some of the reliability issues with other technologies.
| RabbitMQ | 2,868,800 | 53 |
What is the easiest way to create a delay (or parking) queue with Python, Pika and RabbitMQ? I have seen an similar questions, but none for Python.
I find this a useful idea when designing applications, as it allows us to throttle messages that need to be re-queued.
There is always the possibility that you will receive more messages than you can handle; maybe the HTTP server is slow, or the database is under too much stress.
I also found it very useful in scenarios where there is zero tolerance for losing messages. While re-queuing messages that could not be handled may solve that, it can also cause problems where the same message is queued over and over again, potentially causing performance issues and log spam.
| I found this extremely useful when developing my applications, as it gives you an alternative to simply re-queuing your messages. This can easily reduce the complexity of your code, and it is one of many powerful hidden features in RabbitMQ.
Steps
First we need to set up two basic channels, one for the main queue, and one for the delay queue. In my example at the end, I include a couple of additional flags that are not required but make the code more reliable, such as confirm delivery, delivery_mode and durable. You can find more information on these in the RabbitMQ manual.
After we have set up the channels we add a binding to the main channel that we can use to send messages from the delay channel to our main queue.
channel.queue_bind(exchange='amq.direct',
queue='hello')
Next we need to configure our delay channel to forward messages to the main queue once they have expired.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
'x-message-ttl' : 5000,
'x-dead-letter-exchange' : 'amq.direct',
'x-dead-letter-routing-key' : 'hello'
})
x-message-ttl (Message - Time To Live)
This is normally used to automatically remove old messages in the
queue after a specific duration, but by adding two optional arguments we
can change this behaviour, and instead have this parameter determine
in milliseconds how long messages will stay in the delay queue.
x-dead-letter-routing-key
This variable allows us to transfer the message to a different queue
once they have expired, instead of the default behaviour of removing
it completely.
x-dead-letter-exchange
This variable determines which Exchange used to transfer the message from hello_delay to hello queue.
Publishing to the delay queue
When we are done setting up all the basic Pika parameters you simply send a message to the delay queue using basic publish.
delay_channel.basic_publish(exchange='',
routing_key='hello_delay',
body="test",
properties=pika.BasicProperties(delivery_mode=2))
Once you have executed the script you should see the two queues created in your RabbitMQ management module.
Example.
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(
'localhost'))
# Create normal 'Hello World' type channel.
channel = connection.channel()
channel.confirm_delivery()
channel.queue_declare(queue='hello', durable=True)
# We need to bind this channel to an exchange, that will be used to transfer
# messages from our delay queue.
channel.queue_bind(exchange='amq.direct',
queue='hello')
# Create our delay channel.
delay_channel = connection.channel()
delay_channel.confirm_delivery()
# This is where we declare the delay, and routing for our delay channel.
delay_channel.queue_declare(queue='hello_delay', durable=True, arguments={
'x-message-ttl' : 5000, # Delay until the message is transferred in milliseconds.
'x-dead-letter-exchange' : 'amq.direct', # Exchange used to transfer the message from A to B.
'x-dead-letter-routing-key' : 'hello' # Name of the queue we want the message transferred to.
})
delay_channel.basic_publish(exchange='',
routing_key='hello_delay',
body="test",
properties=pika.BasicProperties(delivery_mode=2))
print " [x] Sent"
| RabbitMQ | 17,014,584 | 52 |
I'm using RabbitMQ and I take every message from the queue with basic_get without the automatic acking procedure, which means the message remains in the queue until I ack or nack it.
Sometimes I have messages that can't be processed because of an exception thrown, which prevents them from being fully processed.
The question is: what does it matter whether I ack the message both on success and when an exception is thrown? In terms of the result, the message always leaves the queue, so what difference does using ack or nack make in this scenario?
Maybe I am missing something about when to use each operation?
| The basic.nack command is apparently a RabbitMQ extension, which extends the functionality of basic.reject to include a bulk processing mode. Both include a "bit" (i.e. boolean) flag of requeue, so you actually have several choices:
nack/reject with requeue=1: the message will be returned to the queue it came from as though it were a new message; this might be useful in case of a temporary failure on the consumer side
nack/reject with requeue=0 and a configured Dead Letter Exchange (DLX), will publish the message to that exchange, allowing it to be picked up by another queue
nack/reject with requeue=0 and no DLX will simply discard the message
ack will remove the message from the queue even if a DLX is configured
If you have no DLX configured, always using ack will be the same as nack/reject with requeue=0; however, using the logically correct function from the start will give you more flexibility to configure things differently later.
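A hedged pika sketch of those choices inside a consumer callback; process and TransientError are hypothetical application-side names:

# Hypothetical application-side names:
class TransientError(Exception):
    """Raised by process() for temporary failures worth retrying."""

def process(body):
    ...  # hypothetical message handler

def on_message(channel, method, properties, body):
    try:
        process(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except TransientError:
        # Temporary failure: put the message back on the queue as new.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
    except Exception:
        # Permanent failure: dead-letter it (if a DLX is configured) or discard it.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)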
| RabbitMQ | 28,794,123 | 49 |
How does RabbitMQ compare to Mule? I am going to build an application using a message-oriented architecture, and AMQP (RabbitMQ) provides everything I want, but I am perplexed by the many related technology choices and similar concepts, like ESB.
I am mostly clear that RabbitMQ is a message broker that helps mediate messages between producers and consumers (all forms of publish/subscribe; I could understand how it's used from real examples like Twitter or Facebook updates, etc.).
What is Mule? If I could achieve what I do in RabbitMQ using Mule, should I consider Mule similar to RabbitMQ?
Does Mule have a different objective than that of a message broker?
Does Mule assume that underlying it there is a message broker that delivers messages to the appropriate Mule listeners (I could easily write a listener in RabbitMQ)?
Is Mule a completely Java-based system? (In my current experiment with RabbitMQ it took me less than 30 minutes to write a simple RPC client/server, with the client in C# and the server in Java; will such things be easily done in Mule?)
| Mule is an ESB (Enterprise Service Bus). RabbitMQ is a message broker.
An ESB provides added layers atop of a message broker such as routing, transformations and business process management. It is a mediator between applications, integrating Web Services, REST endpoints, database connections, email and ftp servers - you name it. It is a high-level integration backbone which orchestrates interoperability within a network of applications that speak different protocols.
A message broker is a lower level component which enables you as a developer to relay raw messages between publishers and subscribers, typically between components of the same system but not always. It is used to enable asynchronous processing to keep response times low. Some tasks take longer to process and you don't want them to hold things up if they're not time-sensitive. Instead, post a message to a queue (as a publisher) and have a subscriber pick it up and process it "later".
| RabbitMQ | 3,280,576 | 49 |
In the RabbitMQ/AMQP Java client, you can create an AMQP.BasicProperties.Builder, and use it to build() an instance of AMQP.BasicProperties. This built properties instance can then be used for all sorts of important things. There are lots of "builder"-style methods available on this builder class:
BasicProperties.Builder propsBuilder = new BasicProperties.Builder();
propsBuilder
.appId(???)
.clusterId(???)
.contentEncoding(???)
.contentType(???)
.correlationId(???)
.deliveryMode(2)
.expiration(???)
.headers(???)
.messageId(???)
.priority(???)
.replyTo(???)
.timestamp(???)
.type(???)
.userId(???);
I'm looking for what fields these builder methods help "build up", and most importantly, what valid values exist for each field. For instance, what is a clusterId, and what are its valid values? What is type, and what are its valid values? Etc.
I have spent all morning scouring:
The Java client documentation; and
The Javadocs; and
The RabbitMQ full reference guide; and
The AMQP specification
In all these docs, I cannot find clear definitions (besides some vague explanation of what priority, contentEncoding and deliveryMode are) of what each of these fields are, and what their valid values are. Does anybody know? More importantly, does anybody know where these are even documented? Thanks in advance!
Usually I use a very simple approach to memorize something like this: for each BasicProperties field, note whether it belongs to the queue/server context or the application context. I will provide all the details below.
High-level description (source 1, source 2):
Please note that Cluster ID has been deprecated, so I will exclude it.
Application ID - Identifier of the application that produced the message.
Context: application use
Value: Can be any string.
Content Encoding - Message content encoding
Context: application use
Value: MIME content encoding (e.g. gzip)
Content Type - Message content type
Context: application use
Value: MIME content type (e.g. application/json)
Correlation ID - Message correlated to this one, e.g. what request this message is a reply to. Applications are encouraged to use this attribute instead of putting this information into the message payload.
Context: application use
Value: any value
Delivery mode - Should the message be persisted to disk?
Context: queue implementation use
Value: non-persistent (1) or persistent (2)
Expiration - Expiration time after which the message will be deleted. The value of the expiration field describes the TTL period in milliseconds. Please see details below.
Context: queue implementation use
Headers - Arbitrary application-specific message headers.
Context: application use
Message ID - Message identifier as a string. If applications need to identify messages, it is recommended that they use this attribute instead of putting it into the message payload.
Context: application use
Value: any value
Priority - Message priority.
Context: queue implementation use
Values: 0 to 9
ReplyTo - Queue name other apps should send the response to. Commonly used to name a reply queue (or any other identifier that helps a consumer application to direct its response). Applications are encouraged to use this attribute instead of putting this information into the message payload.
Context: application use
Value: any value
Time-stamp - Timestamp of the moment when message was sent.
Context: application use
Value: Seconds since the Epoch.
Type - Message type, e.g. what type of event or command this message represents. Recommended to be used by applications instead of including this information into the message payload.
Context: application use
Value: Can be any string.
User ID - Optional user ID. Verified by RabbitMQ against the actual connection username.
Context: queue implementation use
Value: Should be authenticated user.
BTW, I've finally managed to review the latest server code (rabbitmq-server-3.1.5); there is an example in rabbit_stomp_test_util.erl:
content_type = <<"text/plain">>,
content_encoding = <<"UTF-8">>,
delivery_mode = 2,
priority = 1,
correlation_id = <<"123">>,
reply_to = <<"something">>,
expiration = <<"my-expiration">>,
message_id = <<"M123">>,
timestamp = 123456,
type = <<"freshly-squeezed">>,
user_id = <<"joe">>,
app_id = <<"joe's app">>,
headers = [{<<"str">>, longstr, <<"foo">>},
{<<"int">>, longstr, <<"123">>}]
Good to know somebody wants to know all the details, because it is much better to use well-known message attributes when possible instead of placing information in the message body. BTW, the basic message properties are far from always being clear and useful; I would say it is often better to use a custom one (e.g. a custom header).
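As an illustration in Python with pika rather than the Java client, here is a hedged sketch that sets several of these properties on a published message; the queue names and values are placeholders:

import json, time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='rpc.requests')

props = pika.BasicProperties(
    app_id='billing-service',         # application use
    content_type='application/json',  # MIME type of the body
    content_encoding='utf-8',
    correlation_id='123',             # ties a future reply to this request
    reply_to='rpc.replies',           # where the consumer should respond
    delivery_mode=2,                  # persistent
    message_id='M123',
    timestamp=int(time.time()),       # seconds since the epoch
    type='freshly-squeezed',
    headers={'str': 'foo', 'int': '123'},
)
channel.basic_publish(exchange='', routing_key='rpc.requests',
                      properties=props, body=json.dumps({'hello': 'world'}))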
Update - Expiration field
Important note: expiration belongs to queue context. So message might be dropped by the servers.
README says the following:
expiration is a shortstr; since RabbitMQ will expect this to be
an encoded string, we translate a ttl to the string representation
of its integer value.
Sources:
Additional source 1
Additional source 2
| RabbitMQ | 18,403,623 | 48 |
I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up a clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library?
| There does not seem to be a direct AMQP way to manage the server, but there is a way you can do it from Python. I would recommend using the subprocess module combined with the rabbitmqctl command to check the status of the queues.
I am assuming that you are running this on Linux. From a command line, running:
rabbitmqctl list_queues
will result in:
Listing queues ...
pings 0
receptions 0
shoveled 0
test1 55199
...done.
(well, it did in my case due to my specific queues)
In your code, use this code to get output of rabbitmqctl:
import subprocess
proc = subprocess.Popen("/usr/sbin/rabbitmqctl list_queues", shell=True, stdout=subprocess.PIPE)
stdout_value = proc.communicate()[0]
print stdout_value
Then, just come up with your own code to parse stdout_value for your own use.
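A hedged sketch of that parsing step, assuming rabbitmqctl's tab-separated "name<TAB>count" rows as shown above:

queues = {}
for line in stdout_value.splitlines():
    parts = line.split('\t')
    # Data rows look like "<name>\t<message count>"; the
    # "Listing queues ..." and "...done." lines don't match.
    if len(parts) == 2 and parts[1].isdigit():
        queues[parts[0]] = int(parts[1])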
| RabbitMQ | 4,287,941 | 48 |
I'm debugging some Java code that uses Apache POI to pull data out of Microsoft Office documents. Occasionally, it encounters a large document and POI crashes when it runs out of memory. At that point, it tries to publish the error to RabbitMQ, so that other components can know that this step failed and take the appropriate actions. However, when it tries to publish to the queue, it gets a com.rabbitmq.client.AlreadyClosedException (clean connection shutdown; reason: Attempt to use closed channel).
Here's the error handler code:
try {
//Extraction and indexing code
}
catch(Throwable t) {
// Something went wrong! We'll publish the error and then move on with
// our lives
System.out.println("Error received when indexing message: ");
t.printStackTrace();
System.out.println();
String error = PrintExc.format(t);
message.put("error", error);
if(mime == null) {
mime = "application/vnd.unknown";
}
message.put("mime", mime);
publish("IndexFailure", "", MessageProperties.PERSISTENT_BASIC, message);
}
For completeness, here's the publish method:
private void publish(String exch, String route,
AMQP.BasicProperties props, Map<String, Object> message) throws Exception{
chan.basicPublish(exch, route, props,
JSONValue.toJSONString(message).getBytes());
}
I can't find any code within the try block that appears to close the RabbitMQ channel. Are there any circumstances in which the channel could be closed implicitly?
EDIT: I should note that the AlreadyClosedException is thrown by the basicPublish call inside publish.
| An AMQP channel is closed on a channel error. Two common things that can cause a channel error:
Trying to publish a message to an exchange that doesn't exist
Trying to publish a message with the immediate flag set that doesn't have a queue with an active consumer set
I would look into setting up a ShutdownListener on the channel you're trying to use to publish a message using the addShutdownListener() to catch the shutdown event and look at what caused it.
| RabbitMQ | 8,839,094 | 47 |
I want to know how RabbitMQ stores messages physically, in RAM and on disk.
I know that RabbitMQ tries to keep messages in memory (but I don't know how the messages are put in RAM). Messages can also be spilled to disk, when they are sent in persistent mode or when the broker is under memory pressure (but I don't know how the messages are stored on disk).
I'd like to know the internals of both. Unfortunately, the official documentation on its homepage does not expose the internal details.
Which document should I read for this?
| RabbitMQ uses a custom DB to store the messages; the DB is usually located here:
/var/lib/rabbitmq/mnesia/rabbit@hostname/queues
Starting from version 3.5.5, RabbitMQ introduced the new credit flow settings:
https://www.rabbitmq.com/blog/2015/10/06/new-credit-flow-settings-on-rabbitmq-3-5-5/
Let’s take a look at how RabbitMQ queues store messages. When a
message enters the queue, the queue needs to determine if the message
should be persisted or not. If the message has to be persisted, then
RabbitMQ will do so right away[3]. Now even if a message was persisted
to disk, this doesn’t mean the message got removed from RAM, since
RabbitMQ keeps a cache of messages in RAM for fast access when
delivering messages to consumers. Whenever we are talking about paging
messages out to disk, we are talking about what RabbitMQ does when it
has to send messages from this cache to the file system.
This blog post is detailed enough.
I also suggest reading about lazy queues:
https://www.rabbitmq.com/lazy-queues.html
and
https://www.rabbitmq.com/blog/2015/12/28/whats-new-in-rabbitmq-3-6-0/
Lazy Queues This new type of queues work by sending every message that
is delivered to them straight to the file system, and only loading
messages in RAM when consumers arrive to the queues. To optimize disk
reads messages are loaded in batches.
| RabbitMQ | 38,444,425 | 46 |
After the consumer gets a message, the consumer/worker does some validation and then calls a web service. In this phase, if any error occurs or validation fails, we want the message put back in the queue it was originally consumed from.
I have read the RabbitMQ documentation, but I am confused about the differences between the reject, nack and cancel methods.
| Short answer:
To requeue a specific message you can pick either basic.reject or basic.nack with the multiple flag set to false.
Cancelling a basic.consume may also result in messages being redelivered, if you are using message acknowledgements and the consumer exits with un-acknowledged messages.
basic.recover will redeliver all un-acked messages on a specific channel.
Long answer:
basic.reject and basic.nack both serve the same purpose - to drop or requeue a message that can't be handled by a specific consumer (at the given moment, under certain conditions, or at all). The main difference between them is that basic.nack supports bulk message processing, whilst basic.reject doesn't.
This difference described in Negative Acknowledgements article on official RabbitMQ web site:
The AMQP specification defines the basic.reject method that allows clients to reject individual, delivered messages, instructing the broker to either discard them or requeue them. Unfortunately, basic.reject provides no support for negatively acknowledging messages in bulk.
To solve this, RabbitMQ supports the basic.nack method that provides all the functionality of basic.reject whilst also allowing for bulk processing of messages.
To reject messages in bulk, clients set the multiple flag of the basic.nack method to true. The broker will then reject all unacknowledged, delivered messages up to and including the message specified in the delivery_tag field of the basic.nack method. In this respect, basic.nack complements the bulk acknowledgement semantics of basic.ack.
Note, that basic.nack method is RabbitMQ-specific extension while basic.reject method is part of AMQP 0.9.1 specification.
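For illustration, a minimal Python/pika sketch of both methods inside a consumer callback (process, TransientError and FatalError are hypothetical placeholders):
def callback(channel, method, properties, body):
    try:
        process(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except TransientError:
        # requeue this single message
        channel.basic_reject(delivery_tag=method.delivery_tag, requeue=True)
    except FatalError:
        # drop (or dead-letter) everything up to and including this delivery
        channel.basic_nack(delivery_tag=method.delivery_tag,
                           multiple=True, requeue=False)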
As to the basic.cancel method, it is used to notify the server that the client stops consuming messages. Note that the client may receive an arbitrary number of messages between sending the basic.cancel method and receiving the cancel-ok reply. If message acknowledgement is used by the client and it has any un-acknowledged messages, they will be moved back to the queue they were originally consumed from.
basic.recover has some limitations in RabbitMQ (see the broker errata):
- basic.recover with requeue=false
- basic.recover synchronicity
In addition to the errata, according to the RabbitMQ spec compatibility notes, basic.recover has partial support ("Recovery with requeue=false is not supported.").
Note about basic.consume:
When basic.consume is started without auto-ack (noack=false) and there are some pending non-acked messages, then when the consumer gets canceled (dies, fatal error, exception, whatever) those pending messages will be redelivered. Technically, those pending messages will not be processed (even dead-lettered) until the consumer releases them (ack/nack/reject/recover). Only after that will they be processed (e.g. dead-lettered).
For example, let say we post originally 5 message in a row:
Queue(main) (tail) { [4] [3] [2] [1] [0] } (head)
And then consume 3 of them, but not ack them, and then cancel consumer. We will have this situation:
Queue(main) (tail) { [4] [3] [2*] [1*] [0*] } (head)
where star (*) notes that redelivered flag set to true.
Assume that we have situation with dead-lettered exchange set and queue for dead-lettered messages
Exchange(e-main) Exchange(e-dead)
Queue(main){x-dead-letter-exchange: "e-dead"} Queue(dead)
And assume we post 5 message with expire property set to 5000 (5 sec):
Queue(main) (tail) { [4] [3] [2] [1] [0] } (head)
Queue(dead) (tail) { }(head)
and then we consume 3 message from main queue and hold them for 10 second:
Queue(main) (tail) { [2!] [1!] [0!] } (head)
Queue(dead) (tail) { [4*] [3*] } (head)
where exclamation mark (!) stands for unacked message. Such messages can't be delivered to any consumer and they normally can't be viewed in management panel. But let's cancel consumer, remember, that it still hold 3 un-acked message:
Queue(main) (tail) { } (head)
Queue(dead) (tail) { [2*] [1*] [0*] [4*] [3*] } (head)
So now that 3 messages which was in the head put back to original queue, but as they has per-message TTL set, they are dead-lettered to the tail of dead-letter queue (sure, via dead-letter exchange).
P.S.:
Consuming message aka listening for new one is somehow different from direct queue access (getting one or more message without taking care of others). See basic.get method description for more.
| RabbitMQ | 24,107,913 | 46 |
How can I check whether a message Queue already exists or not?
I have 2 different applications, one creating a queue and the other reading from that queue.
So if I run the Client which reads from the queue first, then it crashes.
So to avoid that I would like to check first whether the queue exists or not.
here is the code snippet of how I read the queue:
QueueingBasicConsumer <ConsumerName> = new QueueingBasicConsumer(<ChannelName>);
<ChannelName>.BasicConsume("<queuename>", null, <ConsumerName>);
BasicDeliverEventArgs e = (BasicDeliverEventArgs)<ConsumerName>.Queue.Dequeue();
| Don't bother checking.
queue.declare is an idempotent operation. So, if you run it once, twice, N times, the result will still be the same.
If you want to ensure that the queue exists, just declare it before using it. Make sure you declare it with the same durability, exclusivity, auto-deleted-ness every time, otherwise you'll get an exception.
If you actually do need to check if a queue exists (you shouldn't normally need to), do a passive declare of the queue. That operation succeeds if the queue exists, or fails in an error if it doesn't.
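For illustration, a minimal C# sketch of the passive check (RabbitMQ .NET client assumed; note that the broker closes the channel when the declare fails, so it must be replaced afterwards):
try
{
    // Does not modify server state; succeeds only if the queue exists
    QueueDeclareOk ok = channel.QueueDeclarePassive("queuename");
    Console.WriteLine("Queue exists with " + ok.MessageCount + " messages");
}
catch (RabbitMQ.Client.Exceptions.OperationInterruptedException)
{
    // Queue does not exist
    Console.WriteLine("Queue does not exist");
}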
| RabbitMQ | 3,457,305 | 45 |
I'm following this guide to learn how to use spring-rabbit with RabbitMQ. However in this guide, the RabbitMQ configuration is the default (localhost server and credentials guest/guest). What should I do if I want to connect to a remote RabbitMQ with an IP address and credentials? I don't know where to set this information in my application.
| The application for that guide is a Spring Boot Application.
Add a file application.properties to src/main/resources.
You can then configure rabbitmq properties according to the Spring Boot Documentation - scroll down to the rabbitmq properties...
...
spring.rabbitmq.host=localhost # RabbitMQ host.
...
spring.rabbitmq.password= # Login to authenticate against the broker.
spring.rabbitmq.port=5672 # RabbitMQ port.
...
spring.rabbitmq.username= # Login user to authenticate to the broker.
...
To connect to a cluster, use
spring.rabbitmq.addresses= # Comma-separated list of addresses to which the client should connect.
e.g. server1:5672,server2:5672.
If you don't want to use boot auto configuration, declare a CachingConnectionFactory @Bean yourself and configure it as desired.
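For illustration, a minimal sketch of such a bean (host and credentials are placeholders):
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory("remote.host.example", 5672);
    factory.setUsername("myuser");
    factory.setPassword("mypassword");
    factory.setVirtualHost("/");
    return factory;
}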
| RabbitMQ | 42,200,317 | 44 |
I'm trying to send a python dictionary from a python producer to a python consumer using RabbitMQ. The producer first establishes the connection to local RabbitMQ server. Then it creates a queue to which the message will be delivered, and finally sends the message. The consumer first connects to RabbitMQ server and then makes sure the queue exists by creating the same queue. It then receives the message from producer within the callback function, and prints the 'id' value (1). Here are the scripts for producer and consumer:
producer.py script:
import pika
import sys
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
message = {'id': 1, 'name': 'name1'}
channel.basic_publish(exchange='',
routing_key='task_queue',
body=message,
properties=pika.BasicProperties(
delivery_mode = 2, # make message persistent
))
print(" [x] Sent %r" % message)
connection.close()
consumer.py script:
import pika
import time
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
def callback(ch, method, properties, body):
print(" [x] Received %r" % body)
print(body['id'])
print(" [x] Done")
ch.basic_ack(delivery_tag = method.delivery_tag)
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
queue='task_queue')
channel.start_consuming()
But, when I run the producer.py, I get this error:
line 18, in <module>
delivery_mode = 2, # make message persistent
File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
mandatory, immediate)
File "/Library/Python/2.7/site-packages/pika/adapters/blocking_connection.py", line 2064, in publish
immediate=immediate)
File "/Library/Python/2.7/site-packages/pika/channel.py", line 338, in basic_publish
(properties, body))
File "/Library/Python/2.7/site-packages/pika/channel.py", line 1150, in _send_method
self.connection._send_method(self.channel_number, method_frame, content)
File "/Library/Python/2.7/site-packages/pika/connection.py", line 1571, in _send_method
self._send_message(channel_number, method_frame, content)
File "/Library/Python/2.7/site-packages/pika/connection.py", line 1596, in _send_message
content[1][s:e]).marshal())
TypeError: unhashable type
Could anybody help me? Thanks!
| You can't send native Python types as your payload, you have to serialize them first. I recommend using JSON:
import json
channel.basic_publish(exchange='',
routing_key='task_queue',
body=json.dumps(message),
properties=pika.BasicProperties(
delivery_mode = 2, # make message persistent
))
and
def callback(ch, method, properties, body):
print(" [x] Received %r" % json.loads(body))
| RabbitMQ | 34,534,178 | 44 |
What are the differences between those amqp client libraries?
Which one is the most recommended?
What are the major differences?
| I would recommend amqp.node and bramqp over node-amqp. node-amqp has a lot of bugs and is poorly maintained, and it hides the "channel" concept which introduces a lot of problems for rabbitmq servers (because they are never closed).
| RabbitMQ | 20,128,124 | 44 |
I started to use rabbit.js to connect to RabbitMQ from a node.js application.
I'm blocked at:
Error: Channel closed by server: 403 (ACCESS-REFUSED) with message "ACCESS_REFUSED -operation not permitted on the default exchange"
at Channel.C.accept (/.../rabbit.js/node_modules/amqplib/lib/channel.js:398:24)
at Connection.mainAccept [as accept] (/.../rabbit.js/node_modules/amqplib/lib/connection.js:63:33)
at Socket.go (/.../rabbit.js/node_modules/amqplib/lib/connection.js:448:48)
at Socket.EventEmitter.emit (events.js:92:17)
...
which is expected, since the instance of RabbitMQ I use is configured to require the publishers and subscribers to provide credentials before being able to use the message queue, and guest account is disabled.
The official documentation of rabbit.js has no mention of credentials. Google searches for “rabbit.js specify credentials” and “rabbit.js login password” were inconclusive.
Are credentials supported by rabbit.js? If not, what other RabbitMQ clients for node.js support them?
| So I never used rabbit.js myself, but after diving into the code, it seems to be using amqplib. The code that parses it can be seen here and it seems it's calling the standard nodejs URL module. So perhaps you can try something like this:
amqp://user:[email protected]/vhost
Hope it helps!
Cheers.
| RabbitMQ | 24,945,112 | 42 |
Does the RabbitMQ .NET client have any sort of asynchronous support? I'd like to be able to connect and consume messages asynchronously, but haven't found a way to do either so far.
(For consuming messages I can use the EventingBasicConsumer, but that's not a complete solution.)
Just to give some context, this is an example of how I'm working with RabbitMQ at the moment (code taken from my blog):
var factory = new ConnectionFactory() { HostName = "localhost" };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
channel.QueueDeclare("testqueue", true, false, false, null);
var consumer = new EventingBasicConsumer(channel);
consumer.Received += Consumer_Received;
channel.BasicConsume("testqueue", true, consumer);
Console.ReadLine();
}
}
| Rabbit supports dispatching to asynchronous message handlers using the AsyncEventingBasicConsumer class. It works similarly to EventingBasicConsumer, but allows you to register a callback which returns a Task. The callback is dispatched to and the returned Task is awaited by the RabbitMQ client.
var factory = new ConnectionFactory
{
    HostName = "localhost",
    DispatchConsumersAsync = true
};
using (var connection = factory.CreateConnection())
{
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare("testqueue", true, false, false, null);
        var consumer = new AsyncEventingBasicConsumer(channel);
        consumer.Received += async (o, a) =>
        {
            Console.WriteLine("Message Get" + a.DeliveryTag);
            await Task.Yield();
        };
        channel.BasicConsume("testqueue", true, consumer);
        Console.ReadLine();
    }
}
| RabbitMQ | 31,961,261 | 39 |
I've been trying to figure out which form of connection i should use when using pika, I've got two alternatives as far as I understand.
Either the BlockingConnection or the SelectConnection, however I'm not really sure about the differences between these two (i.e. what is the BlockingConnection blocking? and more)
The documentation for pika says that SelectConnection is the preferred way to connect to rabbit since it provides "multiple event notification methods including select, epoll, kqueue and poll."
So I'm wondering what are the implications of these two different kinds of connections?
PS: I know I shouldn't put a tag in the title but in this case I think it does help to clarify the question.
| The SelectConnection is useful if your application architecture can benefit from an asynchronous design, e.g. doing something else while the RabbitMQ IO completes (e.g. switch to some other IO etc) . This type of connection uses callbacks to indicate when functions return. For example you can declare callbacks for
on_connected, on_channel_open, on_exchange_declared, on_queue_declared etc.
...to perform operations when these events are triggered.
The benefit is especially good if your RabbitMQ server (or connection to that server) is slow or overloaded.
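A minimal sketch of the callback style (the exact channel()/callback signatures vary between pika versions; this follows pika 1.x):
import pika

def on_connected(connection):
    # fires once the AMQP connection handshake completes
    connection.channel(on_open_callback=on_channel_open)

def on_channel_open(channel):
    channel.queue_declare(queue='test', callback=on_queue_declared)

def on_queue_declared(frame):
    print('queue is ready, start consuming here')

connection = pika.SelectConnection(
    pika.ConnectionParameters('localhost'),
    on_open_callback=on_connected)
try:
    connection.ioloop.start()  # hands control to pika's event loop
except KeyboardInterrupt:
    connection.close()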
BlockingConnection on the other hand is just that - it blocks until the called function returns. So it will block the execution thread until connected, or until channel_open, exchange_declared or queue_declared return, for example. That said, it's often simpler to program this sort of serialized logic than the async SelectConnection logic. For simple apps with responsive RabbitMQ servers these also work OK IMO.
I suppose you've read the Pika documentation already http://pika.readthedocs.io/en/stable/intro.html, if not, then this is absolutely vital information before you use Pika!
Cheers!
| RabbitMQ | 11,987,838 | 39 |
Do you have any pointers how to determine when a subscription problem has occurred so I can reconnect?
My service uses RabbitMQ.Client.MessagePatterns.Subscription for its subscription. After some time, my client silently stops receiving messages. I suspect network issues, as our VPN connection is not the most reliable.
I've read through the docs for awhile looking for a key to find out when this subscription might be broken due to a network issue without much luck. I've tried checking that the connection and channel are still open, but it always seems to report that it is still open.
The messages it does process work quite well and are acknowledged back to the queue so I don't think it's an issue with the "ack".
I'm sure I must be just missing something simple, but I haven't yet found it.
public void Run(string brokerUri, Action<byte[]> handler)
{
log.Debug("Connecting to broker: {0}".Fill(brokerUri));
ConnectionFactory factory = new ConnectionFactory { Uri = brokerUri };
using (IConnection connection = factory.CreateConnection())
{
using (IModel channel = connection.CreateModel())
{
channel.QueueDeclare(queueName, true, false, false, null);
using (Subscription subscription = new Subscription(channel, queueName, false))
{
while (!Cancelled)
{
BasicDeliverEventArgs args;
if (!channel.IsOpen)
{
log.Error("The channel is no longer open, but we are still trying to process messages.");
throw new InvalidOperationException("Channel is closed.");
}
else if (!connection.IsOpen)
{
log.Error("The connection is no longer open, but we are still trying to process message.");
throw new InvalidOperationException("Connection is closed.");
}
bool gotMessage = subscription.Next(250, out args);
if (gotMessage)
{
log.Debug("Received message");
try
{
handler(args.Body);
}
catch (Exception e)
{
log.Debug("Exception caught while processing message. Will be bubbled up.", e);
throw;
}
log.Debug("Acknowledging message completion");
subscription.Ack(args);
}
}
}
}
}
}
UPDATE:
I simulated a network failure by running the server in a virtual machine and I do get an exception (RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted) when I break the connection for long enough so perhaps it isn't a network issue. Now I don't know what it would be but it fails after just a couple hours of running.
| EDIT: Since I'm sill getting upvotes on this, I should point out that the .NET RabbitMQ client now has this functionality built in: https://www.rabbitmq.com/dotnet-api-guide.html#connection-recovery
Ideally, you should be able to use this and avoid manually implementing reconnection logic.
I recently had to implement nearly the same thing. From what I can tell, most of the available information on RabbitMQ assumes that either your network is very reliable or that you run a RabbitMQ broker on the same machine as any client sending or receiving messages, allowing Rabbit to deal with any connection issues.
It's really not that hard to set up the Rabbit client to be robust against dropped connections, but there are a few idiosyncrasies that you need to deal with.
The first thing you need to do turn on the heartbeat:
ConnectionFactory factory = new ConnectionFactory()
{
Uri = brokerUri,
RequestedHeartbeat = 30,
};
Setting the "RequestedHeartbeat" to 30 will make the client check every 30 seconds if the connection is still alive. Without this turned on, the message subscriber will sit there happily waiting for another message to come in without a clue that its connection has gone bad.
Turning the heartbeat on also makes the server check to see if the connection is still up, which can be very important. If a connection goes bad after a message has been picked up by the subscriber but before it's been acknowledged, the server just assumes that the client is taking a long time, and the message gets "stuck" on the dead connection until it gets closed. With the heartbeat turned on, the server will recognize when the connection goes bad and close it, putting the message back in the queue so another subscriber can handle it. Without the heartbeat, I've had to go in manually and close the connection in the Rabbit management UI so that the stuck message can get passed to a subscriber.
Second, you will need to handle OperationInterruptedException. As you noticed, this is usually the exception the Rabbit client will throw when it notices the connection has been interrupted. If IModel.QueueDeclare() is called when the connection has been interrupted, this is the exception you will get. Handle this exception by disposing of your subscription, channel, and connection and creating new ones.
Finally, you will have to handle what your consumer does when trying to consume messages from a closed connection. Unfortunately, each different way of consuming messages from a queue in the Rabbit client seems to react differently. QueueingBasicConsumer throws EndOfStreamException if you call QueueingBasicConsumer.Queue.Dequeue on a closed connection. EventingBasicConsumer does nothing, since it's just waiting for a message. From what I can tell from trying it, the Subscription class you're using seems to return true from a call to Subscription.Next, but the value of args is null. Once again, handle this by disposing of your connection, channel, and subscription and recreating them.
The value of connection.IsOpen will be updated to False when the connection fails with the heartbeat on, so you can check that if you would like. However, since the heartbeat runs on a separate thread, you will still need to handle the case where the connection is open when you check it, but closes before subscription.Next() is called.
One final thing to watch out for is IConnection.Dispose(). This call will throw a EndOfStreamException if you call dispose after the connection has been closed. This seems like a bug to me, and I don't like not calling dispose on an IDisposable object, so I call it and swallow the exception.
Putting that all together in a quick and dirty example:
public bool Cancelled { get; set; }
IConnection _connection = null;
IModel _channel = null;
Subscription _subscription = null;
public void Run(string brokerUri, string queueName, Action<byte[]> handler)
{
ConnectionFactory factory = new ConnectionFactory()
{
Uri = brokerUri,
RequestedHeartbeat = 30,
};
while (!Cancelled)
{
try
{
if(_subscription == null)
{
try
{
_connection = factory.CreateConnection();
}
catch(BrokerUnreachableException)
{
//You probably want to log the error and cancel after N tries,
//otherwise start the loop over to try to connect again after a second or so.
continue;
}
_channel = _connection.CreateModel();
_channel.QueueDeclare(queueName, true, false, false, null);
_subscription = new Subscription(_channel, queueName, false);
}
BasicDeliverEventArgs args;
bool gotMessage = _subscription.Next(250, out args);
if (gotMessage)
{
if(args == null)
{
//This means the connection is closed.
DisposeAllConnectionObjects();
continue;
}
handler(args.Body);
_subscription.Ack(args);
}
}
catch(OperationInterruptedException ex)
{
DisposeAllConnectionObjects();
}
}
DisposeAllConnectionObjects();
}
private void DisposeAllConnectionObjects()
{
if(_subscription != null)
{
//IDisposable is implemented explicitly for some reason.
((IDisposable)_subscription).Dispose();
_subscription = null;
}
if(_channel != null)
{
_channel.Dispose();
_channel = null;
}
if(_connection != null)
{
try
{
_connection.Dispose();
}
catch(EndOfStreamException)
{
}
_connection = null;
}
}
| RabbitMQ | 12,499,174 | 38 |
I'm ask/answering this question because it hung me up & it's likely someone else will have the same problem.
Install of RabbitMQ x64 v2.8.6 on Windows Server 2008 x64.
After Erlang install using default install location to C:\Program Files\erl5.9.2, I'm attempting to start the server via running the rabbitmq-service.bat. Fail:
Please either set ERLANG_HOME to point to your Erlang installation
or place the RabbitMQ server distribution in the Erlang lib folder.
Problem is the .bat file does not have the correct subpath with the 5.9.2 (R15B02) version of Erlang. My ERLANG_HOME directory is set correctly, but the script does not use it correctly for this version of Erlang, which, to this Erlang noob, appears to have a new subdirectory called "erts-5.9.2" that is causing the problems. Maybe someone intimate with these scripts can describe how to make this work correctly without the hack workaround I'm about to describe?
| 1- Set environment variable:
Variable name : ERLANG_HOME
Variable value: C:\Program Files (x86)\erl6.4
Note: don't include \bin in the step above.
2- Add %ERLANG_HOME%\bin to the PATH environmental variable:
Variable name : PATH
Variable value: %ERLANG_HOME%\bin
This works well.
| RabbitMQ | 12,323,621 | 38 |
There is a list of PHP clients on the RabbitMQ site. I'm asking this question in hopes that people who have used any of these can share their experiences here. E.g.
Did you have any trouble installing?
Is it stable?
Were there any performance issues?
How is the documentation / support?
Even if you've just used one of these libraries, please share your experiences.
For reference, here are some of the clients listed:
PHP manual page for AMQP
php-amqp - a client developed and used by StudiVZ, originally based on RabbitMQ-C
php-amqplib a port of py-amqplib
php-amqplib a fork of php-amqplib updated to support PHP 5.3
PECL release of the AMQP client
P.S. I know that "Best ..." is "subjective", but the point of this question is to collect experiences and help people make an informed decision about these AMQP libraries. Please don't knee-jerk close this question just because it has the word "best" in it.
P.P.S. I'm using PHP 5.3 on RHEL 5.
| For reference, PECL AMQP Extension and http://php.net/manual/fa/book.amqp.php are the same thing, one is the package, the other the documentation for the package.
As a maintainer of the official PHP AMQP extension, I am a little biased. Many people use this extension in high volume low latency production environments since it is far faster than one written in native PHP. Furthermore, since I constantly use this at my job, I have to keep it working and up to date.
The drawback to this extension is that it is not available for Windows, yet, because the library on which it depends is currently being ported. There is not ETA for Windows support, but as soon as the dependencies support Windows, it is our goal to port the extension over to Windows as well.
| RabbitMQ | 4,405,992 | 38 |
Web dynos can handle HTTP requests, and while web dynos handle them, worker dynos can handle jobs from them.
But I don't know how to make web dynos and worker dynos communicate with each other.
For example, I want to:
receive an HTTP request on a web dyno,
send it to a worker dyno,
process the job and send the result back to the web dyno,
show the result on the web.
Is this possible in Node.js? (With RabbitMQ or Kue or etc)?
I could not find an example in Heroku Documentation
Or should I implement all the code in web dynos and scale web dynos only?
| As the high-level article on background jobs and queuing suggests, your web dynos will need to communicate with your worker dynos via an intermediate mechanism (often a queue).
To accomplish what it sounds like you're hoping to do follow this general approach:
Web request is received by the web dyno
Web dyno adds a job to the queue
Worker dyno receives job off the queue
Worker dyno executes job, writing incremental progress to a shared component
Browser-side polling requests status of job from the web dyno
Web dyno queries shared component for progress of background job and sends state back to browser
Worker dyno completes execution of the job and marks it as complete in shared component
Browser-side polling requests status of job from the web dyno
Web dyno queries shared component for progress of background job and sends completed state back to browser
As far as actual implementation goes I'm not too familiar with the best libraries in Node.js, but the components that glue this process together are available on Heroku as add-ons.
Queue: AMQP is a well-supported queue protocol and the CloudAMQP add-on can serve as the message queue between your web and worker dynos.
Shared state: You can use one of the Postgres add-ons to share the state of an job being processed or something more performant such as Memcache or Redis.
So, to summarize, you must use an intermediate add-on component to communicate between dynos on Heroku. While this approach involves a little more engineering, the result is a properly-decoupled and scalable architecture.
| RabbitMQ | 11,429,774 | 37 |
I am using RabbitMQ with Grails, and a problem cropped up this morning. When I run rabbitmqctl status it tells me:
C:\Users\BuildnTest2>rabbitmqctl status
Status of node 'rabbit@BUILDNTEST2-PC' ...
Error: unable to connect to node 'rabbit@BUILDNTEST2-PC': nodedown diagnostics:
- nodes and their ports on BUILDNTEST2-PC: [{rabbit,49164},
{rabbitmqctl27693,49286}]
- current node: 'rabbitmqctl27693@BuildnTest2-PC'
- current node home dir: C:\Users\BuildnTest2
- current node cookie hash: cSYB8tsT4mGGZHSUGQi08w==
When I go to the Rabbit troubleshooting page they say:
then you should make sure the Erlang cookies are the same.
What does this mean and how is it accomplished?
Googling found this forum thread which claims to have instructions to solving this problem, but alas it just redirects back to the rabbit site where there is no answer.
| For what it's worth, in 2018, the docs are WRONG. In windows 10, the default location of the cookie file appears to be:
C:\Windows\System32\config\systemprofile
and NOT
C:\Windows
as the docs say.
The best thing to do is to look at the log file, which is typically located in your user %AppData%\Roaming\RabbitMQ\log directory.
The log file contains this entry, which helped me determine the cookie location:
node : rabbit@computername
home dir : C:\WINDOWS\system32\config\systemprofile
| RabbitMQ | 9,673,172 | 37 |
I am trying to start RMQ inside docker container, with precreated queue qwer.
Prior to this, I was using simple docker-compose.yml file:
rabbit:
image: rabbitmq:management-alpine
environment:
RABBITMQ_DEFAULT_USER: guest
RABBITMQ_DEFAULT_PASS: guest
And it worked fine, except that it has no queues pre-created at start.
Now I've switched to custom image, with following Dockerfile:
FROM rabbitmq:management-alpine
ADD rabbitmq.conf /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.conf /etc/rabbitmq/definitions.json
where rabbitmq.conf is v3.7+ sysctl-styled config, with line:
management.load_definitions = /etc/rabbitmq/definitions.json
and definitions.json contains attempt to create queue:
{
"vhosts":[
{"name":"/"}
],
"queues":[
{"name":"qwer","vhost":"/","durable":true,"auto_delete":false,"arguments":{}}
]
}
Now it started to refuse login:
Error on AMQP connection <0.660.0> (172.18.0.6:48916 -> 172.18.0.10:5672, state: starting):
PLAIN login refused: user 'guest' - invalid credentials
I thought that the task was somewhat simple, but the configuration process of rabbit itself is a complex task, and the documentation is somewhat unclear.
I was unable to figure out how it should work, even after 4 days of trials and googling.
Could you help me write the configuration file, in order to create a queue and preserve the ability to connect and talk to it?
| You can predefine queues and exchanges without creating your own RabbitMQ docker image.
Your docker-compose should look like this:
rabbit:
container_name: rabbitmq-preload-conf
image: rabbitmq:3-management
volumes:
- ./init/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
- ./init/definitions.json:/etc/rabbitmq/definitions.json:ro
ports:
- "5672:5672"
- "15672:15672"
In this case the rabbitmq.conf and definitions.json files should be in an init folder in the same parent folder as the docker-compose file
rabbitmq.conf file
management.load_definitions = /etc/rabbitmq/definitions.json
definitions.json file
{
"queues": [
{
"name": "externally_configured_queue",
"vhost": "/",
"durable": true,
"auto_delete": false,
"arguments": {
"x-queue-type": "classic"
}
}
],
"exchanges": [
{
"name": "externally_configured_exchange",
"vhost": "/",
"type": "direct",
"durable": true,
"auto_delete": false,
"internal": false,
"arguments": {}
}
],
"bindings": [
{
"source": "externally_configured_exchange",
"vhost": "/",
"destination": "externally_configured_queue",
"destination_type": "queue",
"routing_key": "externally_configured_queue",
"arguments": {}
}
]
}
NOTE: After updates to the rabbit images, additional configuration may be required. If the rabbit container fails to start with the configuration mentioned above (the error message contains "Exception during startup:
exit:{error,{no_such_vhost,<<"/">>}}"), the following configuration should be added to the definitions file:
"users": [
{
"name": "guest",
"password_hash": "BMfxN8drrYcIqXZMr+pWTpDT0nMcOagMduLX0bjr4jwud/pN",
"hashing_algorithm": "rabbit_password_hashing_sha256",
"tags": [
"administrator"
],
"limits": {}
}
],
"vhosts": [
{
"name": "/"
}
],
"permissions": [
{
"user": "guest",
"vhost": "/",
"configure": ".*",
"write": ".*",
"read": ".*"
}
]
Using this additional configuration, default user will be guest with password guest.
Apart from queues, exchanges and bindings, definitions.json file can contain additional configuration
| RabbitMQ | 58,266,688 | 36 |
In our project, we want to use the RabbitMQ in "Task Queues" pattern to pass data.
On the producer side, we build a few TCP servers (in node.js) to receive highly concurrent data and send it to MQ without doing anything.
On the consumer side, we use a JAVA client to get the task data from MQ, handle it, and then ack.
So the question is:
To get the maximum message-passing throughput/performance (for example, 400,000 msg/second), how many queues are best? Do more queues mean better throughput/performance? And is there anything else I should notice?
Are there any known best-practices guides for using RabbitMQ in such a scenario?
Any comments are highly appreciated!!
| For best performance in RabbitMQ, follow the advice of its creators. From the RabbitMQ blog:
RabbitMQ's queues are fastest when they're empty. When a queue is
empty, and it has consumers ready to receive messages, then as soon as
a message is received by the queue, it goes straight out to the
consumer. In the case of a persistent message in a durable queue, yes,
it will also go to disk, but that's done in an asynchronous manner and
is buffered heavily. The main point is that very little book-keeping
needs to be done, very few data structures are modified, and very
little additional memory needs allocating.
If you really want to dig deep into the performance of RabbitMQ queues, this other blog entry of theirs goes into the data much further.
| RabbitMQ | 10,030,227 | 36 |
I have an ASP.NET Core application where I would like to consume RabbitMQ messages.
I have successfully set up the publishers and consumers in command line applications, but I'm not sure how to set it up properly in a web application.
I was thinking of initializing it in Startup.cs, but of course it dies once startup is complete.
How do I initialize the consumer in the right way from a web app?
| Use the Singleton pattern for a consumer/listener to preserve it while the application is running. Use the IApplicationLifetime interface to start/stop the consumer on the application start/stop.
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<RabbitListener>();
}
public void Configure(IApplicationBuilder app)
{
app.UseRabbitListener();
}
}
public static class ApplicationBuilderExtensions
{
//the simplest way to store a single long-living object, just for example.
private static RabbitListener _listener { get; set; }
public static IApplicationBuilder UseRabbitListener(this IApplicationBuilder app)
{
_listener = app.ApplicationServices.GetService<RabbitListener>();
var lifetime = app.ApplicationServices.GetService<IApplicationLifetime>();
lifetime.ApplicationStarted.Register(OnStarted);
//press Ctrl+C to reproduce if your app runs in Kestrel as a console app
lifetime.ApplicationStopping.Register(OnStopping);
return app;
}
private static void OnStarted()
{
_listener.Register();
}
private static void OnStopping()
{
_listener.Deregister();
}
}
You should take care of where your app is hosted. For example, IIS can recycle and stop your code from running.
This pattern can be extended to a pool of listeners.
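For completeness, a minimal sketch of what the RabbitListener class itself might look like (RabbitMQ.Client assumed; queue name and host are placeholders, and in newer client versions ea.Body is a ReadOnlyMemory<byte> rather than a byte[]):
public class RabbitListener
{
    private IConnection _connection;
    private IModel _channel;

    public void Register()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();
        _channel.QueueDeclare("myqueue", durable: true, exclusive: false,
                              autoDelete: false, arguments: null);

        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (model, ea) =>
        {
            var message = Encoding.UTF8.GetString(ea.Body);
            // handle the message here, then acknowledge it
            _channel.BasicAck(ea.DeliveryTag, multiple: false);
        };
        _channel.BasicConsume("myqueue", autoAck: false, consumer: consumer);
    }

    public void Deregister()
    {
        _channel?.Close();
        _connection?.Close();
    }
}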
| RabbitMQ | 43,609,345 | 35 |
I am at a loss here so I'm reaching out to the collective knowledge in hope of a miracle.
I have installed RabbitMQ on a Linux box using the defaults.
When I use this code (and the default RabbitMQ installation configuration) everything works nice.
var connectionFactory = new ConnectionFactory();
connectionFactory.HostName = "192.168.0.12";
IConnection connection = connectionFactory.CreateConnection();
But when I add a user to RabbitMQ and try to use the following code (username and password has been changed to protect the innocent. :) )
var connectionFactory = new ConnectionFactory();
connectionFactory.HostName = "192.168.0.12";
connectionFactory.UserName = "user";
connectionFactory.Password = "password";
IConnection connection = connectionFactory.CreateConnection();
the connectionFactory.CreateConnection() method throws the following exception:
BrokerUnreachableException
None of the specified endpoints were reachable
Checking the RabbitMQ logfile I can see it complaining about the credentials:
{amqp_error,access_refused,
"PLAIN login refused: user 'user' - invalid credentials",
'connection.start_ok'}}
The thing is that I am confident about the username and password and I cannot for the love of coding find a solution to this anywhere.
I must be missing something obvious but I can't figure out what it is.
I would be grateful for any helpful pointers.
| It seems that I have found a solution to my own problem.
The following code works:
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = "user";
factory.Password = "password";
factory.VirtualHost = "/";
factory.Protocol = Protocols.FromEnvironment();
factory.HostName = "192.168.0.12";
factory.Port = AmqpTcpEndpoint.UseDefaultPort;
IConnection conn = factory.CreateConnection();
Thanks for listening and perhaps this at least could be useful to someone else. :)
| RabbitMQ | 4,987,438 | 34 |
I want to run RabbitMQ in one container, and a worker process in another. The worker process needs to access RabbitMQ.
I'd like these to be managed through docker-compose.
This is my docker-compose.yml file so far:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
So I've exposed the RabbitMQ ports. The worker process accesses RabbitMQ using the following URL:
amqp://guest:guest@rabbitmq:5672/
Which is what they use in the official tutorial, but localhost has been swapped for rabbitmq, since the containers should be discoverable with a hostname identical to the container name:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Whenever I run this, I get an connection refused error:
Recreating ci_rabbitmq_1 ... done
Recreating ci_worker_1 ... done
Attaching to ci_rabbitmq_1, ci_worker_1
worker_1 | dial tcp 127.0.0.1:5672: connect: connection refused
ci_worker_1 exited with code 1
I find this interesting because it's using the IP 127.0.0.1 which (I think) is localhost, even though I specified rabbitmq as the hostname. I'm not an expert on docker networking, so maybe this is desired.
I'm happy to supply more information if needed!
Edit
There is an almost identical question here. I think I need to wait until rabbitmq is up and running before starting worker. I tried doing this with a healthcheck:
version: "2.1"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 10s
timeout: 10s
retries: 5
worker:
build: .
depends_on:
rabbitmq:
condition: service_healthy
(Note the different version). This doesn't work, however - it will always fail as not-healthy.
| Aha! I fixed it. @Ijaz was totally correct - the RabbitMQ service takes a while to start, and my worker tries to connect before it's running.
I tried using a delay, but this failed when the RabbitMQ took longer than usual.
This is also indicative of a larger architectural problem - what happens if the queuing service (RabbitMQ in my case) goes offline during production? Right now, my entire site fails. There needs to be some built-in redundancy and polling.
As described this this related answer, we can use healthchecks in docker-compose 3+:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- 5672
- 15672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 15s
retries: 1
worker:
image: worker
restart: on-failure
depends_on:
- rabbitmq
Now, the worker container will restart a few times while the rabbitmq container stays unhealthy. rabbitmq immediately becomes healthy when nc -z localhost 5672 succeeds - i.e. when the queuing is live!
| RabbitMQ | 53,031,439 | 33 |
Why do we need routing key to route messages from exchange to queue? Can't we simply use the queue name to route the message? Also, in case of publishing to multiple queues, we can use multiple queue names. Can anyone point out the scenario where we actually need routing key and queue name won't be suffice?
| There are several types of exchanges. The fanout exchange ignores the routing key and sends messages to all queues. But pretty much all other exchange types use the routing key to determine which queue, if any, will receive a message.
The tutorials on the RabbitMQ website describes several usecases where different exchange types are useful and where the routing key is relevant.
For instance, tutorial 5 demonstrates how to use a topic exchange to route log messages to different queues depending on the log level of each message.
If you want to target multiple queues, you need to bind them to a fanout exchange and use that exchange in your publisher.
You can't specify multiple queue names in your publisher. In AMQP, you do not publish a message to queues, you publish a message to an exchange. It's the exchange responsability to determine the relevant queues. It's possible that a message is routed to no queue at all and just dropped.
| RabbitMQ | 36,302,341 | 33 |
So, this is what I've done:
Installed Erlang on my Windows x64 bit machine
Installed RabbitMQ
Started RabbitMQ service
At this step I have no errors. When, however, I try to enable rabbitmq-management, I get some error messages in the console. The way I try to enable it is this one:
C:\...\rabbitmq-server-3.5.6\sbin>rabbitmq-plugins.bat enable rabbitmq_management
This results in:
Applying plugin configuration to rabbit@Jacobian... failed
To add to this, I know about this thread, but I'm not sure what this command means SET HOMEDRIVE=C:. Nevertheless, I tried it like so:
C:\...\rabbitmq-server-3.5.6\sbin> SET HOMEDRIVE=C:
C:\...\rabbitmq-server-3.5.6\sbin> rabbitmq-plugins.bat enable rabbitmq_management
But I still got the same error message. Thanks!
EDIT:
It seems like RabbitMQ became RubbishMQ. The catch is that I followed very standard and very basic steps to install RabbitMQ, now on an Ubuntu machine, and got a terrible list of error messages once again. These are the steps I followed:
apt-get install pkg-config automake autoconf libsigc++-2.0-dev
git clone git://github.com/alanxz/rabbitmq-c.git
cd rabbitmq-c
# Enable and update the codegen git submodule
git submodule init
git submodule update
# Configure, compile and install
autoreconf -i && ./configure && make && sudo make install
rabbitmq-plugins enable rabbitmq_management
When I run the last command I get tons of error messages. Among them I see things like "error_logger ... Error when reading ./.erlang.cookie: eaccess". So I guess there are some secret missing steps or some voodoo spell that can make it work. But I do not know all that stuff and hope to hear some advice. This is what I expect to see: 1) a step-by-step installation of RabbitMQ on Windows and a step-by-step test that it all works; 2) the same for Ubuntu. Ready, steady, go!
| I faced the same problem and my investigations led me to https://stackoverflow.com/a/34538688 which helped me solve it. After following the steps in that answer, start the service and the problem should be solved.
Basically, the problem is caused by the RabbitMQ installer not registering the service correctly.
| RabbitMQ | 33,951,516 | 33 |
I am using RabbitMQ successfully. However, I have a problem where if I get in the situation where there are lots of messages on the queue then the consumer (a Windows service) tries to get them all and then just holds on to them but never actions or acknowledges them.
When the number of messages in the ready state is low then the consumers deal with the throughput fine, it is just if there has been an issue and there is a backlog then it gets far too greedy.
Is there a way to configure the maximum number of messages that a consumer will try and take responsibility for at any one time?
I can see the RequestedChannelMax field on RabbitMQ.Client.ConnectionFactory is that the correct setting to limit this?
Thanks
| A consumer, by default will read as many messages as the bandwidth can handle regardless of actual message processing time by the consumer.
You need to set prefetch values by modifying the Quality of Service (QoS) of the channel to restrict how many messages it will try to pick up at one time. Check out basic.qos here. It has 3 parameters, a size (in octets), a count (the number of whole messages it will pick up at one time) and a global flag.
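For the .NET client you are using, a one-line sketch (channel is your IModel):
// At most 10 unacknowledged messages will be delivered to this channel at once.
// prefetchSize is in octets; 0 means no specific limit.
channel.BasicQos(prefetchSize: 0, prefetchCount: 10, global: false);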
This blog post is an interesting read if you are interested in optimising throughput and talks about prefetching about 2/3 of the way down the page.
Hope that helps!
| RabbitMQ | 19,163,021 | 33 |
I am trying to run the following command
rabbitmq-plugins.bat enable rabbitmq_management
and its giving me an error like this:
11:36:55.464 [error] Failed to create cookie file 'h:/.erlang.cookie': enoent
I am using windows 7, Erlang Version R16B01 and RabbitMQ-Server version 3.1.5
I am using my work PC and our Corporate policy sets the HOMEDRIVE to h: and HOMEPATH to /
and i dont think they will let me change this.
I can see the .erlang.cookie file under C:\Windows.
Could someone let me know of a workaround for this ?
Thanks in advance !
| Set the home drive to some dir in the dos shell before executing the cli.
Create a startup file, e.g start-rabbit.bat, with contents below.
:: or your favorite dir that RabbitMQ can write to
set HOMEDRIVE=C:/conf/rabbitmq
rabbitmq-plugins.bat enable rabbitmq_management
Use a folder on the C drive, e.g. c:/conf/rabbitmq. The RabbitMQ system will write the cookie file there.
It is a good idea not to dirty rabbitmq-xxx.bat installed files.
| RabbitMQ | 18,495,874 | 33 |
At the RabbitMQ web interface, on the queue tab, I see an "Overview" panel where I found these:
Queued messages :
Ready
Unacknowledged
Total
I guess what is the "Total" messages. But what is "Ready" and "Unacknowledged" ?
"Ready" - messages that were delivered to the consumer?
"Unacknowledged" - ?
Message rates:
Publish
Deliver
Redelivered
Acknowledge
And what are these messages? Especially "Redelivered" and "Acknowledge"? What does this mean?
| Ready
Is the number of messages that are available to be delivered.
Unacknowledged
Is the number of messages for which the server is waiting for acknowledgement (a client has received the message but has not yet sent an acknowledgement).
Total
Is the sum of Ready and Unacknowledged messages.
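You can also see the same counters from the command line (rabbitmqctl assumed to be on your PATH):
rabbitmqctl list_queues name messages_ready messages_unacknowledged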
About your second question:
Publish
This is the rate at which messages are incoming to the RabbitMQ server.
Deliver
This is the rate at which messages requiring acknowledgement are being delivered in response to basic.consume.
Acknowledge
Rate at which messages are being acknowledged by the client/consumer.
Redelivered
Rate at which messages with the 'redelivered' flag set are being delivered. For example, if a delivered message is not acknowledged, it will be requeued and delivered again with the redelivered flag set.
| RabbitMQ | 18,110,077 | 33 |
We have a PHP app that forwards messages from RabbitMQ to connected devices down a WebSocket connection (PHP AMQP pecl extension v1.7.1 & RabbitMQ 3.6.6).
Messages are consumed from an array of queues (1 per websocket connection), and are acknowledged by the consumer when we receive confirmation over the websocket that the message has been received (so we can requeue messages that are not delivered in an acceptable timeframe). This is done in a non-blocking fashion.
99% of the time, this works perfectly, but very occasionally we receive an error "RabbitMQ PRECONDITION_FAILED - unknown delivery tag ". This closes the channel. In my understanding, this exception is a result of one of the following conditions:
The message has already been acked or rejected.
An ack is attempted over a channel the message was not delivered on.
An ack is attempted after the message timeout (ttl) has expired.
We have implemented protections for each of the above cases but yet the problem continues.
I realise there are number of implementation details that could impact this, but at a conceptual level, are there any other failure cases that we have not considered and should be handling? or is there a better way of achieving the functionality described above?
| "PRECONDITION_FAILED - unknown delivery tag" usually happens because of double ack-ing, ack-ing on wrong channels or ack-ing messages that should not be ack-ed.
So in this case you are trying to execute basic.ack twice, or basic.ack using another channel.
| RabbitMQ | 42,567,689 | 32 |
How can I acknowledge messages manually, without using auto acknowledgement?
Is there a way to use this along with the @RabbitListener and @EnableRabbit style of configuration.
Most of the documentation tells us to use SimpleMessageListenerContainer along with ChannelAwareMessageListener.
However using that we lose the flexibility that is provided with the annotations.
I have configured my service as below :
@Service
public class EventReceiver {
@Autowired
private MessageSender messageSender;
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order) throws Exception {
// code for processing order
  }
}
My RabbitConfiguration is as below
@EnableRabbit
public class RabbitApplication implements RabbitListenerConfigurer {
public static void main(String[] args) {
SpringApplication.run(RabbitApplication.class, args);
}
@Bean
public MappingJackson2MessageConverter jackson2Converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    return converter;
  }

  @Bean
public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMaxConcurrentConsumers(5);
factory.setMessageConverter((MessageConverter) jackson2Converter());
factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
return factory;
}
@Bean
public ConnectionFactory rabbitConnectionFactory() {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
connectionFactory.setHost("localhost");
return connectionFactory;
}
@Override
public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
registrar.setContainerFactory(myRabbitListenerContainerFactory());
}
@Autowired
private EventReceiver receiver;
}
Any help will be appreciated on how to adapt manual channel acknowledgement along with the above style of configuration.
If we implement the ChannelAwareMessageListener then the onMessage signature will change.
Can we implement ChannelAwareMessageListener on a service?
| Add the Channel to the @RabbitListener method...
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order, Channel channel,
@Header(AmqpHeaders.DELIVERY_TAG) long tag) throws Exception {
...
}
and use the tag in the basicAck, basicReject.
EDIT
@SpringBootApplication
@EnableRabbit
public class So38728668Application {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So38728668Application.class, args);
context.getBean(RabbitTemplate.class).convertAndSend("", "so38728668", "foo");
context.getBean(Listener.class).latch.await(60, TimeUnit.SECONDS);
context.close();
}
@Bean
public Queue so38728668() {
return new Queue("so38728668");
}
@Bean
public Listener listener() {
return new Listener();
}
public static class Listener {
private final CountDownLatch latch = new CountDownLatch(1);
@RabbitListener(queues = "so38728668")
public void receive(String payload, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag)
throws IOException {
System.out.println(payload);
channel.basicAck(tag, false);
latch.countDown();
}
}
}
application.properties:
spring.rabbitmq.listener.acknowledge-mode=manual
| RabbitMQ | 38,728,668 | 32 |
I would like to set a timeout after which a dequeued message is automatically NACKed.
When I dequeue a message I wait until it is transfered over a socket and the other party confirms its reception.
Do I need to keep a list of Timers or can RMQ handle this automatically?
private void Run()
{
_rmqConnection = _queueConnectionFactory.CreateFactory().CreateConnection();
_rmqReadchannel = _rmqConnection.CreateModel();
_rmqReadchannel.QueueDeclare(QueueIdOutgoing(), true, false, false, null);
_rmqReadchannel.BasicQos(0, 1, false);
var consumer = new QueueingBasicConsumer(_rmqReadchannel);
_rmqReadchannel.BasicConsume(QueueIdOutgoing(), false, consumer);
while (true)
{
if (!_rmqReadchannel.IsOpen)
{
throw new Exception("Channel is closed");
}
var ea = consumer.Queue.Dequeue();
string jsonData = Encoding.UTF8.GetString(ea.Body);
if (OnOutgoingMessageReady != null)
{
OnOutgoingMessageReady(this, new QueueDataEventArgs(jsonData, ea.DeliveryTag));
}
//waiting for ACK from a different thread
}
}
| Yes. This is discussed in the official Python tutorial:
A timeout (30 minutes by default) is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries.
You can find more information in the RabbitMQ documentation for Delivery Acknowledgement Timeout
However, this was not always the case. Older versions of RabbitMQ (at least through version 3.6.x) did not provide any sort of timeout mechanism for acknowledging messages. This was mentioned in older versions of the official Python tutorial:
There aren't any message timeouts; RabbitMQ will redeliver the message only when the worker connection dies. It's fine even if processing a message takes a very, very long time.
Section 3.1.8 of the AMQP 0-9-1 specification describes Acknowledgements, and is very clear that they can either be Automatic (the client doesn't have to do anything, messages are acknowledged as soon as they are delivered) or Explicit (the client must an Ack for each message or group of messages that it has processed).
Here's some past discussion from back in 2009 confirming this behavior.
The first reference to changing this behavior that I can see is this PR from April 2019. I'm not sure what version of the server that change was included in, but it sounds like the default was initially "no timeout", then 15 minutes in RabbitMQ 3.8.15, then 30 minutes in RabbitMQ 3.8.17 (where it remains as of October 2021).
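On those newer versions the timeout is configurable in rabbitmq.conf (the documented key is consumer_timeout, in milliseconds):
# rabbitmq.conf - raise the delivery acknowledgement timeout to 1 hour
consumer_timeout = 3600000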
So: This behavior is dependent on your version of RabbitMQ. Older versions required you to explicitly send NACKs after some interval. Newer versions have a default timeout.
| RabbitMQ | 30,546,977 | 32 |
I couldn't find in the RabbitMQ documentation the default x-message-ttl value that comes with the installation.
I know how to set it to a desired value but I am curious to know the default value.
| There is no x-message-ttl argument set by default on the broker side, so basically you can interpret the default value as infinity.
If you publish a message without a TTL to a queue without a TTL set (yup, there are both per-message and per-queue TTL arguments, see the note below):
if the message is published as persistent and the queue is declared as durable, the message will stay in the queue as long as it is not consumed;
if the message was not published as persistent or the queue was not declared as durable, then the message will stay in the queue as long as it is not consumed, or until a broker restart.
TTL note:
When both per-message and per-queue TTL are set, the broker uses the minimal value. For example, if the per-message TTL is 10000 (10 sec) and the per-queue TTL is 20000 (20 sec), then the per-message TTL will be applied.
Per-message TTL note:
Messages with expired TTL will stay in the queue as long as they have not reached the queue head. Don't worry, they will not be sent to a consumer, but they will take some resources until they reach the head. This is how RabbitMQ queues work (they stick to the FIFO idea, which sometimes may break strict compatibility with the AMQP protocol). See the Caveats section in Time-To-Live Extensions for more.
| RabbitMQ | 24,946,181 | 32 |
I'm new to RabbitMQ and was wondering of a good approach to this problem I'm mulling over. I want to create a service that subscribes to a queue and only pulls messages that meet a specific criteria; for instance, if a specific subject header is in the message.
I'm still learning about RabbitMQ, and was looking for tips on how to approach this. My questions include: how can the consumer pull only specific messages from the queue? How can the producer set a subject header in the message (if that's even the right term?)
| RabbitMQ is perfect for this situation. You have a number of options to do what you want. I suggest reading the documentation to get a better understanding. I would suggest that you use a topic or direct exchange. Topic is more flexible. It goes like this.
Producer code connects to the RabbitMQ Broker and creates and Exchange with a specific name.
Producer publishes to exchange. Each message published will be published with a routing key.
Consumer connects to RabbitMQ broker.
Consumer creates Queue
Consumer binds Queue to the exchange, the same exchange defined in the producer. The binding also includes the routing keys for each message require for this particular consumer.
Lets say you were publishing log messages. The routing key might be something like "log.info", "log.warn", "log.error". Each message published by the producer will have the relevant routing key attached. You will then have a consumer that sends and email for all the error messages and another one that writes all the error messages to a file. So the emailer will define the binding from its queue to the exchange with the routing key "log.error". This way though the exchange receives all messages, the queue defined for the emailer will only contain the error messages. The filelogger will define a new separate queue bound to the same exchange and set up a different routing key. You could do three separate bindings for the three different routing keys require or just use the wildcard "log.*" to request all messages from the exchange starting with log.
This is a simple example that shows how you can achieve what you want to do.
Look here for code examples, specifically tutorial number 5.
| RabbitMQ | 11,142,071 | 32 |
I installed rabbitmq service on the server and on my system.
I want to use RPC pattern:
var factory = new ConnectionFactory() {
HostName = "158.2.14.42",
Port = Protocols.DefaultProtocol.DefaultPort,
UserName = "Administrator",
Password = "@server@",
VirtualHost = "/"
ContinuationTimeout = new TimeSpan(10, 0, 0, 0)
};
connection = factory.CreateConnection();
I have an error on creating connection with this message:
None of the specified endpoints were reachable
When I run it on a localhost instance of the server it works, but when I create the connection from my local machine to that server, it returns the error.
It does not work with the local IP and the username and password of my local computer.
Can anyone help me?
| As this question mentioned.
After I installed RabbitMQ, I enabled management tools on the server and on my local computer with this:
rabbitmq-plugins enable rabbitmq_management
Then I restarted RabbitMQ service from services.msc
I could see the Rabbitmq management at http://localhost:15672
I logged in to rabbit management with user:guest and pass:guest
I added my preferred username and password with administrator access, and then it worked.
| RabbitMQ | 47,869,390 | 31 |
If the column in Postgres' table has the name year, how should look INSERT query to set the value for that column?
E.g.: INSERT INTO table (id, name, year) VALUES ( ... ); gives an error near the year word.
| Simply enclose year in double quotes to stop it being interpreted as a keyword:
INSERT INTO table (id, name, "year") VALUES ( ... );
From the documentation:
There is a second kind of identifier: the delimited identifier or
quoted identifier. It is formed by enclosing an arbitrary sequence of
characters in double-quotes ("). A delimited identifier is always an
identifier, never a key word. So "select" could be used to refer to a
column or table named "select", whereas an unquoted select would be
taken as a key word and would therefore provoke a parse error when
used where a table or column name is expected.
| PostgreSQL | 7,651,417 | 215 |
I'm trying to run psql on my Vagrant machine, but I get this error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on
Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Note: Vagrant 1.9.2
Box: ubuntu/trusty64, https://atlas.hashicorp.com/ubuntu/boxes/trusty64
EDIT
Commands I've used in order to install and run postgres:
sudo apt-get update
sudo apt-get install postgresql
sudo su postgres
psql -d postgres -U postgres
| I've had this same issue, related to the configuration of my pg_hba.conf file (located in /etc/postgresql/9.6/main). Please note that 9.6 is the postgresql version I am using.
The error itself is related to a misconfiguration of postgresql, which causes the server to crash before it starts.
I would suggest following these instructions:
Make sure the postgresql service is running, using sudo service postgresql start
Run pg_lsclusters from your terminal
Check which cluster you are running; the output should be something like:
Version  Cluster  Port  Status  Owner     Data directory
9.6      main     5432  online  postgres  /var/lib/postgresql/9.6/main
The important information is the version and the cluster. You can also check whether the server is running or not in the status column.
Copy the version and cluster info, and use it like so:
pg_ctlcluster <version> <cluster> start, so in my case, using version 9.6 and cluster 'main', it would be pg_ctlcluster 9.6 main start
If something is wrong, postgresql will generate a log that can be accessed at /var/log/postgresql/postgresql-<version>-main.log, so in my case, the full command would be sudo nano /var/log/postgresql/postgresql-9.6-main.log.
The output should show what the error is.
2017-07-13 16:53:04 BRT [32176-1] LOG: invalid authentication method "all"
2017-07-13 16:53:04 BRT [32176-2] CONTEXT: line 90 of configuration file "/etc/postgresql/9.5/main/pg_hba.conf"
2017-07-13 16:53:04 BRT [32176-3] FATAL: could not load pg_hba.conf
Fix the errors and restart postgresql service through sudo service postgresql restart and it should be fine.
I have searched a lot to find this, credit goes to this post.
Best of luck!
| PostgreSQL | 42,653,690 | 215 |
I came across this post (What is the difference between tinyint, smallint, mediumint, bigint and int in MySQL?) and realized that PostgreSQL does not support unsigned integer.
Can anyone help to explain why is it so?
Most of the time, I use unsigned integers as auto-incremented primary keys in MySQL. With such a design, how can I overcome this when I port my database from MySQL to PostgreSQL?
Thanks.
| It's not in the SQL standard, so the general urge to implement it is lower.
Having too many different integer types makes the type resolution system more fragile, so there is some resistance to adding more types into the mix.
That said, there is no reason why it couldn't be done. It's just a lot of work.
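For the porting side of the question, a common workaround (a sketch; the table and column names here are made up) is to either move up to bigint, which covers the full range of MySQL's unsigned int, or to emulate unsigned semantics with a CHECK constraint:
-- bigserial easily covers MySQL's unsigned int range for auto-incremented keys
CREATE TABLE items (
    id bigserial PRIMARY KEY
);
-- or emulate unsigned semantics on an ordinary integer column
CREATE TABLE counters (
    hits integer NOT NULL CHECK (hits >= 0)
);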
| PostgreSQL | 20,810,134 | 215 |
I'm converting a db from postgres to mysql.
Since I cannot find a tool that does the trick itself, I'm going to convert all postgres sequences to autoincrement ids in mysql with an autoincrement value.
So, how can I list all sequences in a Postgres DB (8.1 version), with information about the table in which each is used, the next value, etc., with a SQL query?
Be aware that I can't use the information_schema.sequences view of the 8.4 release.
| The following query gives names of all sequences.
SELECT c.relname FROM pg_class c WHERE c.relkind = 'S' order BY c.relname;
Typically a sequence is named as ${table}_id_seq. Simple regex pattern matching will give you the table name.
To get last value of a sequence use the following query:
SELECT last_value FROM test_id_seq;
| PostgreSQL | 1,493,262 | 214 |
How can I get a list of column names and datatypes of a table in PostgreSQL using a query?
| SELECT
column_name,
data_type
FROM
information_schema.columns
WHERE
table_name = 'table_name';
With the above query you can retrieve columns and their data types.
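Note that table_name is not unique across schemas, so if the same table name exists in several schemas you may want to qualify the lookup (assuming here the table lives in public):
SELECT
    column_name,
    data_type
FROM
    information_schema.columns
WHERE
    table_schema = 'public'
    AND table_name = 'table_name';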
| PostgreSQL | 20,194,806 | 211 |
I've tried the following, but I was unsuccessful:
ALTER TABLE person ALTER COLUMN dob POSITION 37;
| "Alter column position" in the PostgreSQL Wiki says:
PostgreSQL currently defines column
order based on the attnum column of
the pg_attribute table. The only way
to change column order is either by
recreating the table, or by adding
columns and rotating data until you
reach the desired layout.
That's pretty weak, but in their defense, in standard SQL, there is no solution for repositioning a column either. Database brands that support changing the ordinal position of a column are defining an extension to SQL syntax.
One other idea occurs to me: you can define a VIEW that specifies the order of columns how you like it, without changing the physical position of the column in the base table.
| PostgreSQL | 285,733 | 211 |
I have a table on pgsql with names (having more than 1 million rows), but I also have many duplicates. I select 3 fields: id, name, metadata.
I want to select them randomly with ORDER BY RANDOM() and LIMIT 1000, so I do this in many steps to save some memory in my PHP script.
But how can I do that so it only gives me a list having no duplicates in names?
For example [1,"Michael Fox","2003-03-03,34,M,4545"] will be returned but not [2,"Michael Fox","1989-02-23,M,5633"]. The name field is the most important and must be unique in the list every time I do the select, and it must be random.
I tried with GROUP BY name, but then it expects me to have id and metadata in the GROUP BY as well or in an aggregate function, but I don't want to have them filtered that way.
Does anyone know how to fetch many columns but do a distinct on only one column?
| To do a distinct on only one (or n) column(s):
select distinct on (name)
name, col1, col2
from names
This will return any of the rows containing the name. If you want to control which of the rows will be returned you need to order:
select distinct on (name)
name, col1, col2
from names
order by name, col1
Will return the first row when ordered by col1.
distinct on:
SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY (see above). Note that the “first row” of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first.
The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the desired precedence of rows within each DISTINCT ON group.
| PostgreSQL | 16,913,969 | 209 |
Does anyone know if it's even possible (and how, if yes) to query a database server setting in PostgreSQL (9.1)?
I need to check the max_connections (maximum number of open db connections) setting.
| You can use SHOW:
SHOW max_connections;
This returns the currently effective setting. Be aware that it can differ from the setting in postgresql.conf as there are a multiple ways to set run-time parameters in PostgreSQL. To reset the "original" setting from postgresql.conf in your current session:
RESET max_connections;
However, not applicable to this particular setting. The manual:
This parameter can only be set at server start.
To see all settings:
SHOW ALL;
There is also pg_settings:
The view pg_settings provides access to run-time parameters of the
server. It is essentially an alternative interface to the SHOW and
SET commands. It also provides access to some facts about each
parameter that are not directly available from SHOW, such as minimum
and maximum values.
For your original request:
SELECT *
FROM pg_settings
WHERE name = 'max_connections';
Finally, there is current_setting(), which can be nested in DML statements:
SELECT current_setting('max_connections');
Related:
How to test my ad-hoc SQL with parameters in Postgres query window
| PostgreSQL | 8,288,823 | 209 |
I am looking at some PostgreSQL table creation and I stumbled upon this:
CREATE TABLE (
...
) WITH ( OIDS = FALSE );
I read the documentation provided by postgres and I know the concept of an object identifier from OOP, but I still do not grasp:
why would such an identifier be useful in a database?
to make queries shorter?
when should it be used?
| OIDs basically give you a built-in id for every row, contained in a system column (as opposed to a user-space column). That's handy for tables where you don't have a primary key, have duplicate rows, etc. For example, if you have a table with two identical rows, and you want to delete the oldest of the two, you could do that using the oid column.
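For example, a sketch of that duplicate-removal trick (the table and column names are made up, and the table must have been created WITH OIDS; note that OID columns on user tables were removed entirely in PostgreSQL 12):
-- keep only the row with the smallest oid for each duplicated value
DELETE FROM mytable
WHERE oid NOT IN (SELECT min(oid) FROM mytable GROUP BY dup_col);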
OIDs are implemented using 4-byte unsigned integers. They are not unique: the OID counter will wrap around at 2³²-1. OIDs are also used to identify data types (see /usr/include/postgresql/server/catalog/pg_type_d.h).
In my experience, the feature is generally unused in most postgres-backed applications (probably in part because they're non-standard), and their use is essentially deprecated:
In PostgreSQL 8.1 default_with_oids is
off by default; in prior versions of
PostgreSQL, it was on by default.
The use of OIDs in user tables is
considered deprecated, so most
installations should leave this
variable disabled. Applications that
require OIDs for a particular table
should specify WITH OIDS when creating
the table. This variable can be
enabled for compatibility with old
applications that do not follow this
behavior.
| PostgreSQL | 5,625,585 | 209 |
I want to be able to connect to a PostgreSQL database and find all of the functions for a particular schema.
My thought was that I could make some query to pg_catalog or information_schema and get a list of all functions, but I can't figure out where the names and parameters are stored. I'm looking for a query that will give me the function name and the parameter types it takes (and what order it takes them in).
Is there a way to do this?
| \df <schema>.*
in psql gives the necessary information.
To see the query that's used internally connect to a database with psql and supply an extra "-E" (or "--echo-hidden") option and then execute the above command.
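If you'd rather have plain SQL (e.g. to run it from an application), a rough equivalent on PostgreSQL 8.4+ (where pg_get_function_arguments is available) is:
SELECT p.proname AS function_name,
       pg_get_function_arguments(p.oid) AS arguments
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public';  -- your schema here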
| PostgreSQL | 1,347,282 | 209 |
I want to select sql:
SELECT "year-month" from table group by "year-month" AND order by date, where
year-month - format for date "1978-01","1923-12".
select to_char of couse work, but not "right" order:
to_char(timestamp_column, 'YYYY-MM')
| to_char(timestamp, 'YYYY-MM')
You say that the order is not "right", but I cannot see why it is wrong (at least until year 10000 comes around).
| PostgreSQL | 4,531,577 | 208 |
I would like to "declare" what are effectively multiple TEMP tables using the WITH statement.
The query I am trying to execute is along the lines of:
WITH table_1 AS (
SELECT GENERATE_SERIES('2012-06-29', '2012-07-03', '1 day'::INTERVAL) AS date
)
WITH table_2 AS (
SELECT GENERATE_SERIES('2012-06-30', '2012-07-13', '1 day'::INTERVAL) AS date
)
SELECT * FROM table_1
WHERE date IN table_2
I've read PostgreSQL documentation and researched into using multiple WITH statements and was unable to find an answer.
| Per the other comments, the second Common Table Expression [CTE] is preceded by a comma, not a WITH statement, so:
WITH cte1 AS (SELECT...)
, cte2 AS (SELECT...)
SELECT *
FROM
cte1 c1
INNER JOIN cte2 c2
ON ........
In terms of your actual query, this syntax should work in PostgreSQL, Oracle, and SQL Server. For the latter, you will typically precede WITH with a semicolon (;WITH), but that is because SQL Server folks (myself included) typically don't terminate previous statements, which need to be ended prior to a CTE being defined...
Note however that you had a second syntax issue in regards to your WHERE statement. WHERE date IN table_2 is not valid because you never actually reference a value/column from table_2. I prefer INNER JOIN over IN or Exists so here is a syntax that should work with a JOIN:
WITH table_1 AS (
SELECT GENERATE_SERIES('2012-06-29', '2012-07-03', '1 day'::INTERVAL) AS date
)
, table_2 AS (
SELECT GENERATE_SERIES('2012-06-30', '2012-07-13', '1 day'::INTERVAL) AS date
)
SELECT *
FROM
table_1 t1
INNER JOIN
table_2 t2
ON t1.date = t2.date
;
If you want to keep it the way you had it (though EXISTS would typically be better than IN), then to use IN you need an actual SELECT statement in your WHERE clause.
SELECT *
FROM
table_1 t1
WHERE t1.date IN (SELECT date FROM table_2);
IN is very problematic when date could potentially be NULL, so if you don't want to use a JOIN then I would suggest EXISTS, as follows:
SELECT *
FROM
table_1 t1
WHERE EXISTS (SELECT * FROM table_2 t2 WHERE t2.date = t1.date);
| PostgreSQL | 38,136,854 | 207 |
I am attempting to create a DB for my app and one thing I'd like to find the best way of doing is creating a one-to-many relationship between my Users and Items tables.
I know I can make a third table, ReviewedItems, and have the columns be a User id and an Item id, but I'd like to know if it's possible to make a column in Users, let's say reviewedItems, which is an integer array containing foreign keys to Items that the User has reviewed.
If PostgreSQL can do this, please let me know! If not, I'll just go down my third table route.
| It may soon be possible to do this: https://commitfest.postgresql.org/17/1252/ - Mark Rofail has been doing some excellent work on this patch!
The patch will (once complete) allow
CREATE TABLE PKTABLEFORARRAY (
ptest1 float8 PRIMARY KEY,
ptest2 text
);
CREATE TABLE FKTABLEFORARRAY (
ftest1 int[],
FOREIGN KEY (EACH ELEMENT OF ftest1) REFERENCES PKTABLEFORARRAY,
ftest2 int
);
However, the author currently needs help to rebase the patch (beyond my own ability), so if anyone reading this knows Postgres internals, please help if you can.
| PostgreSQL | 41,054,507 | 206 |
I'm writing a shell script (will become a cronjob) that will:
1: dump my production database
2: import the dump into my development database
Between step 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:
#!/bin/bash
time=`date '+%Y'-'%m'-'%d'`
# 1. export(dump) the current production database
pg_dump -U production_db_name > /backup/dir/backup-${time}.sql
# missing step: drop all tables from development database so it can be re-populated
# 2. load the backup into the development database
psql -U development_db_name < /backup/dir/backup-${time}.sql
| I'd just drop the database and then re-create it. On a UNIX or Linux system, that should do it:
$ dropdb development_db_name
$ createdb development_db_name
That's how I do it, actually.
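Slotted into the script from the question, the missing step becomes (a sketch; add -U/host flags to match your setup):
# missing step: recreate the development database from scratch
dropdb development_db_name
createdb development_db_name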
| PostgreSQL | 2,056,876 | 206 |
I recently installed Postgresql 11, during the installation, there's no step to put password and username for Postgres. Now in pgAdmin 4, I wanted to connect the database to server and it's asking me to input password, and I haven't put any in the first place.
Any one knows what's going on?
| The default authentication mode for PostgreSQL is set to ident.
You can access your pgpass.conf via pgAdmin -> Files -> open pgpass.conf
That will give you the path of pgpass.conf at the bottom of the window (official documentation).
After knowing the location, you can open this file and edit it to your liking.
If that doesn't work, you can:
Find your pg_hba.conf, usually located under C:\Program Files\PostgreSQL\9.1\data\pg_hba.conf
If necessary, set the permissions on it so that you can modify it. Your user account might not be able to do so until you use the security tab in the properties dialog to give yourself that right by using an admin override.
Alternately, find notepad or notepad++ in your start menu, right click, choose "Run as administrator", then use File->Open to open pg_hba.conf that way.
Edit it to set the "host" line for user "postgres" on host "127.0.0.1/32" to "trust". You can add the line if it isn't there; just insert host all postgres 127.0.0.1/32 trust before any other lines. (You can ignore comments, lines beginning with #).
Restart the PostgreSQL service from the Services control panel (start->run->services.msc)
Connect using psql or pgAdmin4 or whatever you prefer
Run ALTER USER postgres PASSWORD 'fooBarEatsBarFoodBareFoot'
Remove the line you added to pg_hba.conf or change it back
Restart PostgreSQL again to bring the changes into effect.
Here is an example of the pg_hba.conf file (METHOD is already set to trust):
# TYPE DATABASE USER ADDRESS METHOD
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
NOTE: Remember to change the METHOD back to md5 or other auth-methods listed here after changing your password (as stated above).
| PostgreSQL | 55,038,942 | 205 |
The suggested query to list ENUM types is great. But it merely lists the schema and the typname. How do I list out the actual ENUM values? For example, in the linked answer above, I would want the following result:
schema        | type     | values
--------------+----------+-------------------------------------------------
communication | channels | 'text_message','email','phone_call','broadcast'
| select n.nspname as enum_schema,
t.typname as enum_name,
e.enumlabel as enum_value
from pg_type t
join pg_enum e on t.oid = e.enumtypid
join pg_catalog.pg_namespace n ON n.oid = t.typnamespace;
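To get the exact one-row-per-type layout shown in the question, you can aggregate the labels (a sketch for PostgreSQL 9.1+, where string_agg and pg_enum.enumsortorder are both available):
select n.nspname as enum_schema,
       t.typname as enum_name,
       string_agg(e.enumlabel, ',' order by e.enumsortorder) as enum_values
from pg_type t
join pg_enum e on t.oid = e.enumtypid
join pg_catalog.pg_namespace n on n.oid = t.typnamespace
group by n.nspname, t.typname;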
| PostgreSQL | 9,540,681 | 205 |
It seems many others have had problems installing the pg gem. None of the solutions posed for others have worked for me.
I have tried to install the pg gem and postgres.app. The pg gem won't install. The first error I get is:
An error occurred while installing pg (0.17.0), and Bundler cannot continue.
Make sure that gem install pg -v '0.17.0' succeeds before bundling.
The installation advice about pointing my gem install to the config for pg fails with the following error message (which many others on this forum have encountered):
Failed to build native extensions... Results logged to /Users/melanie/.rvm/gems/ruby-1.9.3-p448/gems/pg-0.17.0/ext/gem_make.out
I don't know how to find or access this log file to search for further clues.
I also get an error message (command not found) when I try using the sudo apt-get install command. I've scoured this forum for the last 6 hours, trying each piece of advice to get pg working with my rails project.
I can't find advice about how to change a path or, specifically, what change is required. Running which pg_config returns a file path. I've used that with a command to install pg against that config. It fails.
There are so many people that have had trouble with this. Many answers suggest homebrew. I've had to remove that because it threw up other issues.
| Same error for me and I didn't experience it until I downloaded OS X 10.9 (Mavericks). Sigh, another OS upgrade headache.
Here's how I fixed it (with homebrew):
Install another build of Xcode Tools (typing brew update in the terminal will prompt you to update the Xcode build tools)
brew update
brew install postgresql
After that gem install pg worked for me.
| PostgreSQL | 19,262,312 | 202 |
I use DBeaver v 5.2.5 on Windows and use it to connect to PostgreSQL databases.
To create a connection, I must specify the database, and I have no means to see other databases on the same server.
A colleague using DBeaver 5.3 on Mac has an option to see all databases, not just the default one.
Is there an equivalent setup on the windows version?
| On the connection, right-click -> Edit connection -> Connection settings -> on the tabbed panel, select PostgreSQL, check the box Show all databases.
UPDATE 19.02.2024
Checkbox is moved to Main Tab. So flow is:
On the connection, right-click -> Edit connection -> Connection settings -> check the box Show all databases.
| PostgreSQL | 54,235,029 | 201 |
Here is an extract of my table:
gid | datepose | pvc
---------+----------------+------------
1 | 1961 | 01
2 | 1949 |
3 | 1990 | 02
1 | 1981 |
1 | | 03
1 | |
I want to fill the PVC column using a SELECT CASE as bellow:
SELECT
gid,
CASE
WHEN (pvc IS NULL OR pvc = '') AND datpose < 1980) THEN '01'
WHEN (pvc IS NULL OR pvc = '') AND datpose >= 1980) THEN '02'
WHEN (pvc IS NULL OR pvc = '') AND (datpose IS NULL OR datpose = 0) THEN '03'
END AS pvc
FROM my_table ;
The result is the same content as source table, nothing has happened and I get no error message in pg_log files. It might be a syntax error, or a problem with using multiple conditions within WHEN clauses?
Thanks for help and advice!
| This kind of code perhaps should work for You
SELECT
*,
CASE
WHEN (pvc IS NULL OR pvc = '') AND (datepose < 1980) THEN '01'
WHEN (pvc IS NULL OR pvc = '') AND (datepose >= 1980) THEN '02'
WHEN (pvc IS NULL OR pvc = '') AND (datepose IS NULL OR datepose = 0) THEN '03'
ELSE '00'
END AS modifiedpvc
FROM my_table;
gid | datepose | pvc | modifiedpvc
-----+----------+-----+-------------
1 | 1961 | 01 | 00
2 | 1949 | | 01
3 | 1990 | 02 | 00
1 | 1981 | | 02
1 | | 03 | 00
1 | | | 03
(6 rows)
| PostgreSQL | 27,800,119 | 201 |
This question may look like a duplicate of: How to uninstall postgresql on my Mac (running Snow Leopard) however, there are two major differences. I'm running Lion and I'm trying to uninstall PostgreSQL 9.0.4. I've looked at the last question and the link that it referenced, but I did not find a file called "uninstall-postgresql" when I run this command:
sudo find / -name "*uninstall-*"
So, I assume this means that the uninstall process for 9.0.4 is different than that of 8.x.
I've seen a couple of posts in different places describing a method for manual uninstallation but, similarly, some of the directories/files referenced are not present on my machine.
Any assistance or direction you can provide would be greatly appreciated.
Just for reference, this is the link the other poster used to uninstall postgres from snow leopard. As I tried to step through these commands, most of them choked with some variant of "command not found".
UPDATE:
In addition to brew uninstall postgres, should I remove any of the following files/directories manually? Keep in mind I want to completely wipe the slate clean, no data files/database tables or anything.
> sudo find / -name "*postgres*"
find: /dev/fd/3: Not a directory
find: /dev/fd/4: Not a directory
/Library/Ruby/Gems/1.8/doc/activerecord-3.1.1/rdoc/lib/active_record/connection_adapters/postgresql_adapter_rb.html
/Library/Ruby/Gems/1.8/doc/activerecord-3.1.1/ri/ActiveRecord/ConnectionAdapters/PostgreSQLAdapter/postgresql_version-i.ri
/Library/Ruby/Gems/1.8/doc/arel-2.2.1/rdoc/lib/arel/visitors/postgresql_rb.html
/Library/Ruby/Gems/1.8/gems/activerecord-3.1.1/lib/active_record/connection_adapters/postgresql_adapter.rb
/Library/Ruby/Gems/1.8/gems/arel-2.2.1/lib/arel/visitors/postgresql.rb
/Library/Ruby/Gems/1.8/gems/arel-2.2.1/test/visitors/test_postgres.rb
/Library/Ruby/Gems/1.8/gems/railties-3.1.1/lib/rails/generators/rails/app/templates/config/databases/jdbcpostgresql.yml
/Library/Ruby/Gems/1.8/gems/railties-3.1.1/lib/rails/generators/rails/app/templates/config/databases/postgresql.yml
/Library/WebServer/Documents/postgresql
/Library/WebServer/Documents/postgresql/html/app-postgres.html
/Library/WebServer/Documents/postgresql/html/postgres-user.html
/private/etc/apache2/users/postgres.conf
/private/var/db/dslocal/nodes/Default/groups/_postgres.plist
/private/var/db/dslocal/nodes/Default/sharepoints/postgres's Public Folder.plist
/private/var/db/dslocal/nodes/Default/users/_postgres.plist
/private/var/db/dslocal/nodes/Default/users/postgres.plist
/System/Library/DirectoryServices/DefaultLocalDB/Default/groups/_postgres.plist
/System/Library/DirectoryServices/DefaultLocalDB/Default/users/_postgres.plist
/Users/postgres
/Users/remcat/dev/working/startwire/vendor/plugins/foreign_keys/lib/foreign_keys/postgresql_adapter.rb
/Users/remcat/Library/Application Support/CrashReporter/postgres_DCCEF98F-4602-5FF7-964F-5E717AC007B4.plist
/Users/remcat/Library/Caches/Homebrew/postgresql-9.0.4.tar.bz2
/Users/remcat/Library/Caches/Metadata/Safari/History/http:%2F%2Fwww.postgresql.org%2Fdocs%2Fcurrent%2Fstatic%2Findex.html.webhistory
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-194716_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-194742_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-194757_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-194958_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-203352_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/CrashReporter/postgres_2011-11-06-203359_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-194716_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-194742_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-194757_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-194958_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-203352_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/.postgres_2011-11-06-203359_Ramys-MacBook-Pro.crash.plist
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-194716_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-194742_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-194757_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-194958_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-203352_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Logs/DiagnosticReports/postgres_2011-11-06-203359_Ramys-MacBook-Pro.crash
/Users/remcat/Library/Saved Application State/org.postgresql.pgadmin.savedState
/usr/bin/postgres_real
/usr/include/postgres_ext.h
/usr/include/postgresql
/usr/include/postgresql/internal/postgres_fe.h
/usr/include/postgresql/server/postgres.h
/usr/include/postgresql/server/postgres_ext.h
/usr/include/postgresql/server/postgres_fe.h
/usr/lib/postgresql
/usr/local/Library/Aliases/postgres
/usr/local/Library/Formula/postgresql.rb
/usr/local/var/postgres
/usr/local/var/postgres/postgresql.conf
/usr/share/devicemgr/backend/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
/usr/share/devicemgr/backend/vendor/rails/railties/configs/databases/postgresql.yml
/usr/share/devicemgr/webserver/gems/gems/eventmachine-0.12.10/lib/em/protocols/postgres3.rb
/usr/share/man/man1/postgres.1.gz
/usr/share/postgresql
/usr/share/postgresql/postgres.bki
/usr/share/postgresql/postgres.description
/usr/share/postgresql/postgres.shdescription
/usr/share/postgresql/postgresql.conf
/usr/share/postgresql/postgresql.conf.sample
| The following is the un-installation for PostgreSQL 9.1 installed using the EnterpriseDB installer. You most probably have to replace folder /9.1/ with your version number. If /Library/Postgresql/ doesn't exist then you probably installed PostgreSQL with a different method like homebrew or Postgres.app.
To remove the EnterpriseDB One-Click install of PostgreSQL 9.1:
Open a terminal window. Terminal is found in: Applications->Utilities->Terminal
Run the uninstaller:
sudo /Library/PostgreSQL/9.1/uninstall-postgresql.app/Contents/MacOS/installbuilder.sh
If you installed with the Postgres Installer, you can do:
open /Library/PostgreSQL/9.2/uninstall-postgresql.app
It will ask for the administrator password and run the uninstaller.
Remove the PostgreSQL and data folders. The Wizard will notify you that these were not removed.
sudo rm -rf /Library/PostgreSQL
Remove the ini file:
sudo rm /etc/postgres-reg.ini
Remove the PostgreSQL user using System Preferences -> Users & Groups.
Unlock the settings panel by clicking on the padlock and entering your password.
Select the PostgreSQL user and click on the minus button.
Restore your shared memory settings:
sudo rm /etc/sysctl.conf
That should be all! The uninstall wizard would have removed all icons and start-up applications files so you don't have to worry about those.
| PostgreSQL | 8,037,729 | 201 |
I'm not sure if its standard SQL:
INSERT INTO tblA
(SELECT id, time
FROM tblB
WHERE time > 1000)
What I'm looking for is: what if tblA and tblB are on different DB servers?
Does PostgreSQL give any utility, or have any functionality, that will help to use an INSERT query with the PGresult struct?
I mean that SELECT id, time FROM tblB ... will return a PGresult* when using PQexec. Is it possible to use this struct in another PQexec to execute an INSERT command?
EDIT:
If not possible then I would go for extracting the values from PQresult* and create a multiple INSERT statement syntax like:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
Is it possible to create a prepared statement out of this!! :(
| As Henrik wrote you can use dblink to connect remote database and fetch result. For example:
psql dbtest
CREATE TABLE tblB (id serial, time integer);
INSERT INTO tblB (time) VALUES (5000), (2000);
psql postgres
CREATE TABLE tblA (id serial, time integer);
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > 1000;
TABLE tblA;
id | time
----+------
1 | 5000
2 | 2000
(2 rows)
PostgreSQL has record pseudo-type (only for function's argument or result type), which allows you query data from another (unknown) table.
Edit:
You can make it a prepared statement if you want, and it works as well:
PREPARE migrate_data (integer) AS
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > $1;
EXECUTE migrate_data(1000);
-- DEALLOCATE migrate_data;
Edit (yeah, another):
I just saw your revised question (closed as duplicate, or just very similar to this).
If my understanding is correct (postgres has tbla and dbtest has tblb and you want remote insert with local select, not remote select with local insert as above):
psql dbtest
SELECT dblink_exec
(
'dbname=postgres',
'INSERT INTO tbla
SELECT id, time
FROM dblink
(
''dbname=dbtest'',
''SELECT id, time FROM tblb''
)
AS t(id integer, time integer)
WHERE time > 1000;'
);
I don't like that nested dblink, but AFAIK I can't reference tblB in the dblink_exec body. Use LIMIT to select the top 20 rows, but I think you need to sort them using an ORDER BY clause first.
| PostgreSQL | 6,083,132 | 201 |
In MS SQL Server, I create my scripts to use customizable variables:
DECLARE @somevariable int
SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable )
I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember.
How do I do the same with the PostgreSQL client psql?
| Postgres variables are created through the \set command, for example ...
\set myvariable value
... and can then be substituted, for example, as ...
SELECT * FROM :myvariable.table1;
... or ...
SELECT * FROM table1 WHERE :myvariable IS NULL;
edit: As of psql 9.1, variables can be expanded in quotes as in:
\set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable';
In older versions of the psql client:
... If you want to use the variable as the value in a conditional string query, such as ...
SELECT * FROM table1 WHERE column1 = ':myvariable';
... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such ...
\set myvariable 'value'
However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this ...
\set quoted_myvariable '\'' :myvariable '\''
Now you have both a quoted and unquoted variable of the same string! And you can do something like this ....
INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable;
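Variables can also be set from the shell when invoking psql, which is handy for scripting (a sketch):
psql -v myvariable=value -f script.sql mydatabase
Inside script.sql you then reference it as :myvariable (or :'myvariable' for a quoted string, on psql 9.1+).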
| PostgreSQL | 36,959 | 199 |
I have a table:
CREATE TABLE tblproducts
(
productid integer,
product character varying(20)
)
With the rows:
INSERT INTO tblproducts(productid, product) VALUES (1, 'CANDID POWDER 50 GM');
INSERT INTO tblproducts(productid, product) VALUES (2, 'SINAREST P SYP 100 ML');
INSERT INTO tblproducts(productid, product) VALUES (3, 'ESOZ D 20 MG CAP');
INSERT INTO tblproducts(productid, product) VALUES (4, 'HHDERM CREAM 10 GM');
INSERT INTO tblproducts(productid, product) VALUES (5, 'CREAM 15 GM');
INSERT INTO tblproducts(productid, product) VALUES (6, 'KZ LOTION 50 ML');
INSERT INTO tblproducts(productid, product) VALUES (7, 'BUDECORT 200 Rotocap');
If I execute string_agg() on tblproducts:
SELECT string_agg(product, ' | ') FROM "tblproducts"
It will return the following result:
CANDID POWDER 50 GM | ESOZ D 20 MG CAP | HHDERM CREAM 10 GM | CREAM 15 GM | KZ LOTION 50 ML | BUDECORT 200 Rotocap
How can I sort the aggregated string, in the order I would get using ORDER BY product?
I'm using PostgreSQL 9.2.4.
| With postgres 9.0+ you can write:
select string_agg(product,' | ' order by product) from "tblproducts"
Details here.
| PostgreSQL | 24,906,826 | 198 |
I'd like to add a constraint which enforces uniqueness on a column only in a portion of a table.
ALTER TABLE stop ADD CONSTRAINT myc UNIQUE (col_a) WHERE (col_b is null);
The WHERE part above is wishful thinking.
Any way of doing this? Or should I go back to the relational drawing board?
| PostgreSQL doesn't define a partial (i.e. conditional) UNIQUE constraint - however, you can create a partial unique index.
PostgreSQL uses unique indexes to implement unique constraints, so the effect is the same, with an important caveat: you can't perform upserts (ON CONFLICT DO UPDATE) against a unique index like you would against a unique constraint. (Edit: apparently you can use it with ON CONFLICT now)
Also, you won't see the constraint listed in information_schema.
CREATE UNIQUE INDEX stop_myc ON stop (col_a) WHERE (col_b IS NULL);
See partial indexes.
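As a sketch of the upsert caveat above: since PostgreSQL 9.5 you can target a partial unique index from ON CONFLICT by repeating its predicate (using the table and columns from the question, and assuming col_a is text):
INSERT INTO stop (col_a, col_b)
VALUES ('x', NULL)
ON CONFLICT (col_a) WHERE col_b IS NULL DO NOTHING;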
| PostgreSQL | 16,236,365 | 198 |
I want to remove null=True from a TextField:
- footer=models.TextField(null=True, blank=True)
+ footer=models.TextField(blank=True, default='')
I created a schema migration:
manage.py schemamigration fooapp --auto
Since some footer columns contain NULL I get this error if I run the migration:
django.db.utils.IntegrityError: column "footer" contains null values
I added this to the schema migration:
for sender in orm['fooapp.EmailSender'].objects.filter(footer=None):
sender.footer=''
sender.save()
Now I get:
django.db.utils.DatabaseError: cannot ALTER TABLE "fooapp_emailsender" because it has pending trigger events
What is wrong?
| Another reason for this may be that you are trying to set a column to NOT NULL when it actually still has NULL values.
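In that case, back-fill the NULLs before altering the column; straight SQL works too (table and column names taken from the error message in the question):
UPDATE fooapp_emailsender SET footer = '' WHERE footer IS NULL;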
| PostgreSQL | 12,838,111 | 197 |
I typed psql and I get this:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I used sudo netstat -nlp | grep 5432 to see the status but nothing showed.
I searched online; somebody suggested modifying pg_hba.conf, but I can't locate that file. I also tried the command sudo ln -s /tmp/.s.PGSQL.5432 /var/run/postgresql/.s.PGSQL.5432, but it didn't work.
| The error states that the psql utility can't find the socket to connect to your database server. Either you don't have the database service running in the background, or the socket is located elsewhere, or perhaps the pg_hba.conf needs to be fixed.
Step 1: Verify that the database is running
The command may vary depending on your operating system. But on most *ix systems the following would work, it will search for postgres among all running processes
ps -ef | grep postgres
On my system, mac osx, this spits out
501 408 1 0 2Jul15 ?? 0:21.63 /usr/local/opt/postgresql/bin/postgres -D /usr/local/var/postgres -r /usr/local/var/postgres/server.log
The last column shows the command used to start the server, and the options.
You can look at all the options available to start the postgres server using the following.
man postgres
From there, you'd see that the options -D and -r are respectively the datadir & the logfilename.
Step 2: If the postgres service is running
Use find to search for the location of the socket, which should be somewhere in the /tmp
sudo find /tmp/ -name .s.PGSQL.5432
If postgres is running and accepting socket connections, the above should tell you the location of the socket. On my machine, it turned out to be:
/tmp/.s.PGSQL.5432
Then, try connecting via psql using this file's location explicitly, eg.
psql -h /tmp/ dbname
Step 3: If the service is running but you don't see a socket
If you can't find the socket, but see that the service is running, verify that the pg_hba.conf file allows local sockets.
Browse to the datadir and you should find the pg_hba.conf file.
By default, near the bottom of the file you should see the following lines:
# "local" is for Unix domain socket connections only
local all all trust
If you don't see it, you can modify the file, and restart the postgres service.
| PostgreSQL | 31,645,550 | 195 |
I recently upgraded to OSX 10.7, at which point my rails installation completely borked when trying to connect to the psql server. When I do it from the command line using
psql -U postgres
it works totally fine, but when I try to run the rails server or console with the same username and password, I get this error
...activerecord-3.0.9/lib/active_record/connection_adapters/postgresql_adapter.rb:950:in `initialize': could not connect to server: Permission denied (PGError)
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
Any ideas what might be going on would be super helpful! Thanks!
| It's a PATH issue. Mac OS X Lion includes PostgreSQL in the system now. If you do a which psql you'll likely see /usr/bin/psql instead of /usr/local/bin/psql, which is Homebrew's correct one. If you run brew doctor you should get a message stating that you need to add /usr/local/bin to the head of your PATH env variable.
Edit your .bash_profile or .profile (or whichever shell config you're using) and add:
export PATH=/usr/local/bin:$PATH
as the first export for the PATH, then either quit your shell session or source your file with source ~/.bash_profile, and it should now be OK again.
| PostgreSQL | 6,770,649 | 195 |
I am working on a Ruby on Rails application and installed PostgreSQL using postgresql-9.1.2-1-osx.dmg. I installed the pg gem.
Then when I executed rake db:create, I got
the following error:
dlopen(/Users/sathishvc/.rvm/gems/ruby-1.9.3-head@knome-vivacious/gems/pg-0.12.2/lib/pg_ext.bundle,
9): Library not loaded: /usr/local/lib/libpq.5.4.dylib
I checked if /usr/local/lib/libpq.5.4.dylib existed or not. It did not.
So either it exists somewhere else on the system, or I don't know whether I need to install some other piece of software for this.
What should I do?
| If you have upgraded
PostgreSQL with Homebrew (brew update && brew upgrade),
macOS (e.g., from v10.15 (Catalina) to v11 (Big Sur))
Then simply uninstall the pg gem:
gem uninstall pg
bundle install
And the path will be corrected for you. There isn't any need to uninstall the whole PostgreSQL cluster.
| PostgreSQL | 9,023,482 | 194 |
I have two tables with binding primary keys in the database and I want to find a disjoint set between them. For example,
Table1
 ID | Name
----+-------
  1 | John
  2 | Peter
  3 | Mary
Table2
 ID | Address
----+----------
  1 | address2
  2 | address2
PS: The ID is the primary key for those two tables.
| Try this
SELECT ID, Name
FROM Table1
WHERE ID NOT IN (SELECT ID FROM Table2)
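Since ID is the primary key here it can never be NULL, but as a general pattern NOT IN silently returns no rows when the subquery yields a NULL; a NOT EXISTS variant avoids that pitfall:
SELECT t1.ID, t1.Name
FROM Table1 t1
WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.ID = t1.ID)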
| PostgreSQL | 12,048,633 | 193 |
I have a postgresql function
CREATE OR REPLACE FUNCTION fixMissingFiles() RETURNS VOID AS $$
DECLARE
deletedContactId integer;
BEGIN
SELECT INTO deletedContactId contact_id FROM myContacts WHERE id=206351;
-- print the value of deletedContactId variable to the console
END;
$$ LANGUAGE plpgsql;
How can I print the value of the deletedContactId to the console?
| You can raise a notice in Postgres as follows:
RAISE NOTICE 'Value: %', deletedContactId;
Read here for more details.
| PostgreSQL | 23,465,429 | 191 |
I've been looking for a solution for this and could not find a working solution.
I've installed postgres using brew (brew install postgres) in my MacBook and I am currently running it using brew services (brew services list displays postgres as a running service). However, when I try to run psql I get following error.
psql: could not connect to server: No such file or directory Is the
server running locally and accepting connections on Unix domain
socket "/tmp/.s.PGSQL.5432"?
Anyone has already solved similar problem?
| I had the same error and I fixed it by removing the process pid file:
rm -f /usr/local/var/postgres/postmaster.pid
or for a specific version:
rm -f /usr/local/var/postgresql@16/postmaster.pid
[Updated Answer For Arm-based Chips (Apple M1)]
When you use brew to install postgresql on Apple M1 computers, the postmaster.pid will be located in:
/opt/homebrew/var/postgresql/postmaster.pid
Follow following three steps:
# 1. Stop PostgreSQL
brew services stop postgresql@16
# 2. Delete the postmaster.pid
rm -f /opt/homebrew/var/postgresql@16/postmaster.pid
# 3. Start the PostgreSQL again
brew services start postgresql@16
After the above, you can also check the status of the service by brew services info postgresql
brew services info postgresql@16
postgresql@16 (homebrew.mxcl.postgresql)
Running: ✔
Loaded: ✔
Schedulable: ✘
User: root
PID: 34884
| PostgreSQL | 39,710,384 | 189 |
shopkeeper table has following fields:
id (bigint), amount (numeric(19,2)), createddate (timestamp)
Let's say, I have the above table. I want to get the records for yesterday and
generate a report by having the amount printed to cents.
One way of doing this is to perform the calculations in my Java application and execute a simple query:
Date previousDate ;// $1 calculate in application
Date todayDate;// $2 calculate in application
select amount from shopkeeper where createddate between $1 and $2
and then loop through the records and convert amount to cents in my java application and generate the report
Another way is like performing calculations in sql query itself:
select cast(amount * 100 as int) as "Cents"
from shopkeeper where createddate between date_trunc('day', now()) - interval '1 day' and date_trunc('day', now())
and then loop through the records and generate the report
In one approach, all my processing is done in the Java application and a simple query is fired.
In the other case, all the conversions and calculations are done in the SQL query itself.
The above use case is just an example; in a real scenario a table can have many columns that require similar kinds of processing.
Can you please tell me which approach is better in terms of performance and other aspects and why?
| It depends on a lot of factors - but most crucially:
complexity of calculations (prefer doing complex crunching on an app-server, since that scales out; rather than a db server, which scales up)
volume of data (if you need to access/aggregate a lot of data, doing it at the db server will save bandwidth, and disk io if the aggregates can be done inside indexes)
convenience (sql is not the best language for complex work - especially not great for procedural work, but very good for set-based work; lousy error-handling, though)
As always, if you do bring the data back to the app-server, minimising the columns and rows will be to your advantage. Making sure the query is tuned and appropriately indexed will help either scenario.
Re your note:
and then loop through the records
Looping through records is almost always the wrong thing to do in sql - writing a set-based operation is preferred.
As a general rule, I prefer to keep the database's job to a minimum "store this data, fetch this data" - however, there are always examples of scenarios where an elegant query at the server can save a lot of bandwidth.
Also consider: if this is computationally expensive, can it be cached somewhere?
If you want an accurate "which is better"; code it both ways and compare it (noting that a first draft of either is likely not 100% tuned). But factor in typical usage to that: if, in reality, it is being called 5 times (separately) at once, then simulate that: don't compare just a single "1 of these vs 1 of those".
| PostgreSQL | 7,510,092 | 189 |
I'm looking for a way to get all rows as INSERT statements from one specific table within a database using pg_dump in PostgreSQL.
E.g., I have table A and all rows in table A I need as INSERT statements, it should also dump those statements to a file.
Is this possible?
| if version < 8.4.0
pg_dump -D -t <table> <database>
Add -a before the -t if you only want the INSERTs, without the CREATE TABLE etc to set up the table in the first place.
version >= 8.4.0
pg_dump --column-inserts --data-only --table=<table> <database>
| PostgreSQL | 2,857,989 | 189 |
I am trying to connect to a Postgresql database, I am getting the following Error:
Error:org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
What does the error mean and how do I fix it?
My server.properties file is following:
serverPortData=9042
serverPortCommand=9078
trackConnectionURL=jdbc:postgresql://127.0.0.1:5432/vTrack?user=postgres password=postgres
dst=1
DatabaseName=vTrack
ServerName=127.0.0.1
User=postgres
Password=admin
MaxConnections=90
InitialConnections=80
PoolSize=100
MaxPoolSize=100
KeepAliveTime=100
TrackPoolSize=120
TrackMaxPoolSize=120
TrackKeepAliveTime=100
PortNumber=5432
Logging=1
| An explanation of the following error:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
Summary:
Your code opened up more than the allowed limit of connections to the postgresql database. It ran something like this: Connection conn = myconn.Open(); inside a loop, and forgot to run conn.close();. Just because your class is destroyed and garbage collected does not release the connection to the database. The quickest fix to this is to make sure you have the following code with whatever class that creates a connection:
protected void finalize() throws Throwable
{
try { your_connection.close(); }
catch (SQLException e) {
e.printStackTrace();
}
super.finalize();
}
Place that code in any class where you create a Connection. Then when your class is garbage collected, your connection will be released.
Run this SQL to see postgresql max connections allowed:
show max_connections;
The default is 100. PostgreSQL on good hardware can support a few hundred connections at a time. If you want to have thousands, you should consider using connection pooling software to reduce the connection overhead.
Take a look at exactly who/what/when/where is holding open your connections:
SELECT * FROM pg_stat_activity;
The number of connections currently used is:
SELECT COUNT(*) from pg_stat_activity;
Debugging strategy
You could give different usernames/passwords to the programs that might not be releasing the connections to find out which one it is, and then look in pg_stat_activity to find out which one is not cleaning up after itself.
Do a full exception stack trace when the connections could not be created and follow the code back up to where you create a new Connection; make sure every line of code that creates a connection eventually calls connection.close();
How to set the max_connections higher:
max_connections in the postgresql.conf sets the maximum number of concurrent connections to the database server.
First find your postgresql.conf file
If you don't know where it is, query the database with the sql: SHOW config_file;
Mine is in: /var/lib/pgsql/data/postgresql.conf
Login as root and edit that file.
Search for the string: "max_connections".
You'll see a line that says max_connections=100.
Set that number bigger, restart postgresql database.
What's the maximum max_connections?
Use this query:
select min_val, max_val from pg_settings where name='max_connections';
I get the value 8388607, so in theory that's the most you are allowed to have, but then a runaway process can eat up thousands of connections, and surprise, your database is unresponsive until reboot. If you had a sensible max_connections like 100, the offending program would be denied a new connection and the database stays safe.
| PostgreSQL | 2,757,549 | 188 |
John uses CHARACTER VARYING in the places where I use VARCHAR.
I am a beginner, while he is an expert.
This suggests me that there is something which I do not know.
What is the difference between CHARACTER VARYING and VARCHAR in PostgreSQL?
| VARCHAR is an alias for CHARACTER VARYING, so no difference, see documentation :)
The notations varchar(n) and char(n) are aliases for character varying(n) and character(n), respectively. character without length specifier is equivalent to character(1). If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.
Note on capitalization: The PostgreSQL documentation uses the all lower case stylization: character varying. In contrast the official SQL standard uses the stylization with all caps throughout its 1000 pages: CHARACTER VARYING.
| PostgreSQL | 1,199,468 | 188 |
I'm creating a lot of migrations that have foreign keys in PostgreSQL 9.4.
This is creating a headache because the tables must all be in the exact order expected by the foreign keys when they are migrated. It gets even stickier if I have to run migrations from other packages that my new migrations depend on for a foreign key.
In MySQL, I can simplify this by simply adding SET FOREIGN_KEY_CHECKS = 0; to the top of my migration file. How can I do this temporarily in PostgresSQL only for the length of the migration code?
BTW, using the Laravel Schema Builder for this.
| For migration, it is easier to disable all triggers with:
SET session_replication_role = 'replica';
And after migration reenable all with
SET session_replication_role = 'origin';
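If you are worried about a failed migration leaving the session in replica mode, SET LOCAL scopes the change to a single transaction and reverts automatically at commit or rollback; a sketch:
BEGIN;
SET LOCAL session_replication_role = 'replica';
-- run the migration / bulk load here
COMMIT;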
| PostgreSQL | 38,112,379 | 187 |
SELECT Table.date FROM Table WHERE date > current_date - 10;
Does this work on PostgreSQL?
| Yes this does work in PostgreSQL (assuming the column "date" is of datatype date)
Why don't you just try it?
The standard ANSI SQL format would be:
SELECT Table.date
FROM Table
WHERE date > current_date - interval '10' day;
I prefer that format as it makes things easier to read (but it is the same as current_date - 10).
| PostgreSQL | 5,465,484 | 187 |
I need to set schema path in Postgres so that I don't every time specify schema dot table e.g. schema2.table.
Set schema path:
SET SCHEMA PATH a,b,c
only seems to work for one query session on mac, after I close query window the path variable sets itself back to default.
How can I make it permanent?
| (And if you have no admin access to the server)
ALTER ROLE <your_login_role> SET search_path TO a,b,c;
Two important things to know about:
When a schema name is not simple, it needs to be wrapped in double quotes.
The order in which you set default schemas a, b, c matters, as it is also the order in which the schemas will be looked up for tables. So if you have the same table name in more than one schema among the defaults, there will be no ambiguity, the server will always use the table from the first schema you specified for your search_path.
| PostgreSQL | 2,875,610 | 187 |
There is a DataFrame.to_sql method, but it works only for mysql, sqlite and oracle databases. I can't pass a postgres connection or sqlalchemy engine to this method.
| Starting from pandas 0.14 (released end of May 2014), postgresql is supported. The sql module now uses sqlalchemy to support different database flavors. You can pass a sqlalchemy engine for a postgresql database (see docs). E.g.:
from sqlalchemy import create_engine
engine = create_engine('postgresql://username:password@localhost:5432/mydatabase')
df.to_sql('table_name', engine)
You are correct that in pandas up to version 0.13.1 postgresql was not supported. If you need to use an older version of pandas, here is a patched version of pandas.io.sql: https://gist.github.com/jorisvandenbossche/10841234.
(I wrote this a while ago, so I cannot fully guarantee that it always works, but the basis should be there.) If you put that file in your working directory and import it, then you should be able to do (where con is a postgresql connection):
import sql # the patched version (file is named sql.py)
sql.write_frame(df, 'table_name', con, flavor='postgresql')
| PostgreSQL | 23,103,962 | 186 |
I want to know the principle of a "Bitmap heap scan". I know this often happens when I execute a query with OR in the condition.
Can someone explain the principle behind a "Bitmap heap scan"?
| The best explanation comes from Tom Lane, which is the algorithm's author unless I'm mistaking. See also the wikipedia article.
In short, it's a bit like a seq scan. The difference is that, rather than visiting every disk page, a bitmap index scan ANDs and ORs applicable indexes together, and only visits the disk pages that it needs to.
This is different from an index scan, where the index is visited row by row in order -- meaning a disk page may get visited multiple times.
Re: the question in your comment... Yep, that's exactly it.
An index scan will go through rows one by one, opening disk pages again and again, as many times as necessary (some will of course stay in memory, but you get the point).
A bitmap index scan will sequentially open a short-list of disk pages, and grab every applicable row in each one (hence the so-called recheck cond you see in query plans).
Note, as an aside, how clustering/row order affects the associated costs with either method. If rows are all over the place in a random order, a bitmap index will be cheaper. (And, in fact, if they're really all over the place, a seq scan will be cheapest, since a bitmap index scan is not without some overhead.)
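To see this for yourself, index two columns and query them with OR (a sketch with made-up table and column names; the table needs enough rows for the planner to prefer this over a plain seq scan):
CREATE INDEX idx_t_a ON t (a);
CREATE INDEX idx_t_b ON t (b);
EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;
-- the plan will typically show a Bitmap Heap Scan fed by a BitmapOr
-- over two Bitmap Index Scans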
| PostgreSQL | 6,592,626 | 186 |
I have a table in PostgreSQL 8.3 with 2 timestamp columns. I would like to get the difference between these timestamps in seconds. Could you please help me how to get this done?
TableA
(
timestamp_A timestamp,
timestamp_B timestamp
)
I need to get something like (timestamp_B - timestamp_A) in seconds (not just the difference between the seconds fields; it should include hours, minutes, etc.).
| Try:
SELECT EXTRACT(EPOCH FROM (timestamp_B - timestamp_A))
FROM TableA
Details here: EXTRACT.
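As a quick self-contained sanity check with literal timestamps (no table needed; the values are made up):
-- 1 day, 2 hours, 3 minutes and 4 seconds = 93784 seconds
SELECT EXTRACT(EPOCH FROM (timestamp '2024-01-02 02:03:04'
                         - timestamp '2024-01-01 00:00:00'));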
| PostgreSQL | 14,020,919 | 185 |
I installed PostgreSQL via the graphical install on http://www.postgresql.org/download/macosx/
I see it in my applications and also have the psql terminal in my applications. I need psql to work in the regular terminal for another bash script I'm running for an app.
For some reason, when I run
psql
in the Mac terminal, my output is
-bash: psql: command not found
I ran the following in the terminal:
locate psql | grep /bin
and the output was
/Library/PostgreSQL/9.5/bin/psql
I then edited my ~/.bash_profile and added it to the path like so:
export PATH = /Library/PostgreSQL/9.5/bin/psql:$PATH
The only other thing in ~/.bash_profile is SDKMAN, and it's at the bottom of the script as it says it should be. I've tried setting the path to just /Library/PostgreSQL/9.5/bin/ as well. I've also restarted my terminal.
How can I get psql to work?
EDIT
After adding to .bashrc, this output is returned when I open terminal
-bash: export: `/Library/PostgreSQL/9.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin': not a valid identifier
| You have got the PATH slightly wrong. You need the PATH to "the containing directory", not the actual executable itself.
Your PATH should be set like this:
export PATH=/Library/PostgreSQL/9.5/bin:$PATH
without the extra psql part at the end. Also, you must remove the spaces around the equals sign.
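To persist the change, one sketch (assuming bash is the login shell, as it appears to be in the question) is to append the corrected line to ~/.bash_profile and reload it:
echo 'export PATH=/Library/PostgreSQL/9.5/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile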
| PostgreSQL | 36,155,219 | 184 |
I have a table in a PostgreSQL 8.3.8 database, which has no keys/constraints on it, and has multiple rows with exactly the same values.
I would like to remove all duplicates and keep only 1 copy of each row.
There is one column in particular (named "key") which may be used to identify duplicates, i.e. there should only exist one entry for each distinct "key".
How can I do this? (Ideally, with a single SQL command.)
Speed is not a problem in this case (there are only a few rows).
| A faster solution is to:
find the first occurrence of each duplicate,
then remove all rows that are not that first occurrence.
This looks like the following:
DELETE FROM dups a USING (
SELECT MIN(ctid) as ctid, key
FROM dups
GROUP BY key HAVING COUNT(*) > 1
) b
WHERE a.key = b.key
AND a.ctid <> b.ctid
Notice that with this solution you don't have control over which row is kept (a workaround is sketched after the toy example below).
Toy example
CREATE TABLE people (
name varchar(50) NOT NULL,
surname varchar(50) NOT NULL,
age integer NOT NULL
);
INSERT INTO people (name, surname, age) VALUES
('A.', 'Tom', 30),
('A.', 'Tom', 10),
('B.', 'Tom', 20),
('B', 'Chris', 20);
-- The inner command to find duplicates' first occurrences:
SELECT MIN(ctid) as ctid, name, surname
FROM people
GROUP BY (name, surname) HAVING COUNT(*) > 1;
DELETE FROM people a USING (
SELECT MIN(ctid) as ctid, name, surname
FROM people
GROUP BY (name, surname) HAVING COUNT(*) > 1
) b
WHERE a.name = b.name
AND a.surname = b.surname
AND a.ctid <> b.ctid;
SELECT * FROM people;
The inner query outputs:
 ctid  | name | surname
-------+------+---------
 (0,1) | A.   | Tom
And the final query (after deletion) outputs:
 name | surname | age
------+---------+-----
 A.   | Tom     | 30
 B.   | Tom     | 20
 B    | Chris   | 20
View the toy example on DB Fiddle
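If you do need control over which duplicate survives (a workaround beyond the original answer), one sketch on the same toy table uses DISTINCT ON with an explicit ordering:
-- keep, for each (name, surname), the row with the greatest age
DELETE FROM people a
WHERE a.ctid NOT IN (
    SELECT DISTINCT ON (name, surname) ctid
    FROM people
    ORDER BY name, surname, age DESC
);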
| PostgreSQL | 6,583,916 | 184 |
I have a database, and I need to know the default encoding for the database. I want to get it from the command line.
| From the command line:
psql my_database -c 'SHOW SERVER_ENCODING'
From within psql, an SQL IDE or an API:
SHOW SERVER_ENCODING;
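Relatedly, the client-side encoding can differ from the server's; both can be checked in the same session:
SHOW server_encoding;  -- encoding the database stores data in
SHOW client_encoding;  -- encoding the client converts to/from for this session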
| PostgreSQL | 6,454,146 | 184 |
I'm trying to make restricted DB users for the app I'm working on, and I want to drop the Postgres database user I'm using for experimenting. Is there any way to drop the user without having to revoke all his rights manually first, or revoke all the grants a user has?
| How about
DROP USER <username>
This is actually an alias for DROP ROLE.
You have to explicitly drop any privileges associated with that user, and also move the ownership of its objects to other roles (or drop the objects).
This is best achieved by
REASSIGN OWNED BY <olduser> TO <newuser>
and
DROP OWNED BY <olduser>
The latter will remove any privileges granted to the user.
See the postgres docs for DROP ROLE and the more detailed description of this.
Addition:
Apparently, trying to drop a user by using the commands mentioned here will only work if you are executing them while being connected to the same database that the original GRANTS were made from, as discussed here:
https://www.postgresql.org/message-id/83894A1821034948BA27FE4DAA47427928F7C29922%40apde03.APD.Satcom.Local
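Putting the pieces together, a minimal sketch (the role names are placeholders; per the note above, run it while connected to each database where the role owns objects or holds grants):
REASSIGN OWNED BY olduser TO newuser;  -- transfer ownership of the role's objects
DROP OWNED BY olduser;                 -- revoke the role's remaining privileges
DROP ROLE olduser;                     -- now the role can be dropped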
| PostgreSQL | 3,023,583 | 184 |
I have installed postgresql on OSX. When I run psql, I get
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5433"?
However, from /etc/services
postgresql 5432/udp # PostgreSQL Database
postgresql 5432/tcp # PostgreSQL Database
# Tom Lane <[email protected]>
pyrrho 5433/tcp # Pyrrho DBMS
pyrrho 5433/udp # Pyrrho DBMS
5433 is occupied by pyrrho, 5432 is assigned to pg. I can connect with
psql -p 5432
but why does psql think it is 5433 and how do I make psql look in the right place by default?
| /etc/services is only advisory, it's a listing of well-known ports. It doesn't mean that anything is actually running on that port or that the named service will run on that port.
In PostgreSQL's case it's typical to use port 5432 if it is available. If it isn't, most installers will choose the next free port, usually 5433.
You can see what is actually running using the netstat tool (available on OS X, Windows, and Linux, with command line syntax varying across all three).
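For example, on OS X one alternative to netstat is lsof (the grep pattern is an assumption about the process name):
# list listening TCP sockets belonging to postgres processes
sudo lsof -iTCP -sTCP:LISTEN -P | grep -i postgres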
This is further complicated on Mac OS X systems by the horrible mess of different PostgreSQL packages - Apple's ancient version of PostgreSQL built in to the OS, Postgres.app, Homebrew, Macports, the EnterpriseDB installer, etc etc.
What ends up happening is that the user installs Pg and starts a server from one packaging, but uses the psql and libpq client from a different packaging. Typically this occurs when they're running Postgres.app or homebrew Pg and connecting with the psql that shipped with the OS. Not only do these sometimes have different default ports, but the Pg that shipped with Mac OS X has a different default unix socket path, so even if the server is running on the same port it won't be listening to the same unix socket.
Most Mac users work around this by just using tcp/ip with psql -h localhost. You can also specify a port if required, eg psql -h localhost -p 5433. You might have multiple PostgreSQL instances running so make sure you're connecting to the right one by using select version() and SHOW data_directory;.
You can also specify a unix socket directory; check the unix_socket_directories setting of the PostgreSQL instance you wish to connect to and specify that with psql -h, e.g. psql -h /tmp.
A cleaner solution is to correct your system PATH so that the psql and libpq associated with the PostgreSQL you are actually running is what's found first on the PATH. The details of that depend on your Mac OS X version and which Pg packages you have installed. I don't use Mac and can't offer much more detail on that side without spending more time than is currently available.
| PostgreSQL | 15,100,368 | 183 |
If I use array_agg to collect names, I get my names separated by commas, but in case there is a null value, that null is also taken as a name in the aggregate. For example:
SELECT g.id,
array_agg(CASE WHEN g.canonical = 'Y' THEN g.users ELSE NULL END) canonical_users,
array_agg(CASE WHEN g.canonical = 'N' THEN g.users ELSE NULL END) non_canonical_users
FROM groups g
GROUP BY g.id;
it returns ,Larry,Phil instead of just Larry,Phil (in my 9.1.2, it shows NULL,Larry,Phil).
Instead, if I use string_agg(), it shows me only the names (without empty commas or nulls).
The problem is that I have Postgres 8.4 installed on the server, and string_agg() doesn't work there. Is there any way to make array_agg() work similarly to string_agg()?
| With postgresql-9.3 one can do this;
SELECT g.id,
array_remove(array_agg(CASE WHEN g.canonical = 'Y' THEN g.users ELSE NULL END), NULL) canonical_users,
array_remove(array_agg(CASE WHEN g.canonical = 'N' THEN g.users ELSE NULL END), NULL) non_canonical_users
FROM groups g
GROUP BY g.id;
Update: with postgresql-9.4;
SELECT g.id,
array_agg(g.users) FILTER (WHERE g.canonical = 'Y') canonical_users,
array_agg(g.users) FILTER (WHERE g.canonical = 'N') non_canonical_users
FROM groups g
GROUP BY g.id;
Update (2022-02-19): also with postgresql-9.4;
This returns an empty array instead of NULL when all the aggregated values are NULL;
SELECT g.id,
coalesce( array_agg(g.users) FILTER (WHERE g.canonical = 'Y'), '{}' ) canonical_users,
coalesce( array_agg(g.users) FILTER (WHERE g.canonical = 'N'), '{}' ) non_canonical_users
FROM groups g
GROUP BY g.id;
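For the 8.4 constraint in the question (no array_remove() and no FILTER clause), one workaround sketch relies on array_to_string() skipping NULL elements; note it yields a comma-separated string rather than an array, which is the string_agg()-like output the question asks for:
SELECT g.id,
       array_to_string(array_agg(CASE WHEN g.canonical = 'Y' THEN g.users END), ',') AS canonical_users,
       array_to_string(array_agg(CASE WHEN g.canonical = 'N' THEN g.users END), ',') AS non_canonical_users
FROM groups g
GROUP BY g.id;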
| PostgreSQL | 13,122,912 | 183 |