question | answer | tag | question_id | score
---|---|---|---|---|
Running a worker on a different machine results in the errors specified below. I have followed the configuration instructions and have synced the dags folder.
I would also like to confirm that RabbitMQ and PostgreSQL only need to be installed on the Airflow core machine and do not need to be installed on the workers (the workers only connect to the core).
The specification of the setup is detailed below:
Airflow core/server computer
Has the following installed:
Python 2.7 with
airflow (AIRFLOW_HOME = ~/airflow)
celery
psycopg2
RabbitMQ
PostgreSQL
Configurations made in airflow.cfg:
sql_alchemy_conn = postgresql+psycopg2://username:[email protected]:5432/airflow
executor = CeleryExecutor
broker_url = amqp://username:[email protected]:5672//
celery_result_backend = postgresql+psycopg2://username:[email protected]:5432/airflow
Tests performed:
RabbitMQ is running
Can connect to PostgreSQL and have confirmed that Airflow has created tables
Can start and view the webserver (including custom dags)
.
.
Airflow worker computer
Has the following installed:
Python 2.7 with
airflow (AIRFLOW_HOME = ~/airflow)
celery
psycopg2
Configurations made in airflow.cfg are exactly the same as in the server:
sql_alchemy_conn = postgresql+psycopg2://username:[email protected]:5432/airflow
executor = CeleryExecutor
broker_url = amqp://username:[email protected]:5672//
celery_result_backend = postgresql+psycopg2://username:[email protected]:5432/airflow
Output from commands run on the worker machine:
When running airflow flower:
ubuntu@airflow_client:~/airflow$ airflow flower
[2016-06-13 04:19:42,814] {__init__.py:36} INFO - Using executor CeleryExecutor
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/bin/airflow", line 15, in <module>
args.func(args)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/airflow/bin/cli.py", line 576, in flower
os.execvp("flower", ['flower', '-b', broka, port, api])
File "/home/ubuntu/anaconda2/lib/python2.7/os.py", line 346, in execvp
_execvpe(file, args)
File "/home/ubuntu/anaconda2/lib/python2.7/os.py", line 382, in _execvpe
func(fullname, *argrest)
OSError: [Errno 2] No such file or directory
When running airflow worker:
ubuntu@airflow_client:~$ airflow worker
[2016-06-13 04:08:43,573] {__init__.py:36} INFO - Using executor CeleryExecutor
[2016-06-13 04:08:43,935: ERROR/MainProcess] Unrecoverable error: ImportError('No module named postgresql',)
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
self.on_start()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/apps/worker.py", line 169, in on_start
string(self.colored.cyan(' \n', self.startup_info())),
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/apps/worker.py", line 230, in startup_info
results=self.app.backend.as_uri(),
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/app/base.py", line 626, in backend
return self._get_backend()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/app/base.py", line 444, in _get_backend
self.loader)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/backends/__init__.py", line 68, in get_backend_by_url
return get_backend_cls(backend, loader), url
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/celery/backends/__init__.py", line 49, in get_backend_cls
cls = symbol_by_name(backend, aliases)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/kombu/utils/__init__.py", line 96, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named postgresql
When celery_result_backend is changed to the default db+mysql://airflow:airflow@localhost:3306/airflow and the airflow worker is run again the result is:
ubuntu@airflow_client:~/airflow$ airflow worker
[2016-06-13 04:17:32,387] {__init__.py:36} INFO - Using executor CeleryExecutor
-------------- celery@airflow_client2 v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Linux-3.19.0-59-generic-x86_64-with-debian-jessie-sid
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f5cb65cb510
- ** ---------- .> transport: amqp://username:**@192.168.1.2:5672//
- ** ---------- .> results: mysql://airflow:**@localhost:3306/airflow
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> default exchange=default(direct) key=celery
[2016-06-13 04:17:33,385] {__init__.py:36} INFO - Using executor CeleryExecutor
Starting flask
[2016-06-13 04:17:33,737] {_internal.py:87} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
[2016-06-13 04:17:34,536: WARNING/MainProcess] celery@airflow_client2 ready.
What am I missing? How can I diagnose this further?
| The ImportError: No module named postgresql error is due to the invalid prefix used in your celery_result_backend. When using a database as a Celery backend, the connection URL must be prefixed with db+. See
https://docs.celeryproject.org/en/stable/userguide/configuration.html#conf-database-result-backend
So replace:
celery_result_backend = postgresql+psycopg2://username:[email protected]:5432/airflow
with something like:
celery_result_backend = db+postgresql://username:[email protected]:5432/airflow
| RabbitMQ | 37,785,061 | 22 |
I've done a ton of research on this, and I'm surprised I haven't found a good answer to this yet anywhere.
I'm running a large application on Heroku, and I have certain celery tasks that run for a very long time processing, and at the end of the task save a result. Every time I redeploy on Heroku, it sends SIGTERM (and eventually, SIGKILL) and kills my running worker. I'm trying to find a way for the worker instance to shut itself down gracefully and re-queue itself for processing later so that eventually we can save the required result instead of losing the queued task.
I cannot find a way that works to have the worker listen for SIGTERM properly. The closest I've gotten, which works when running python manage.py celeryd directly but NOT when emulating Heroku using foreman, is the following:
@app.task(bind=True, max_retries=1)
def slow(self, x):
try:
for x in range(100):
print 'x: ' + unicode(x)
time.sleep(10)
except exceptions.MaxRetriesExceededError:
logger.error('whoa')
except (exceptions.WorkerShutdown, exceptions.WorkerTerminate) as exc:
logger.error(u'retrying, ' + unicode(exc))
raise self.retry(exc=exc, countdown=10)
except (KeyboardInterrupt, SystemExit) as exc:
print 'retrying'
raise self.retry(exc=exc, countdown=10)
else:
return x
finally:
logger.info('task ended!')
When I start this celery task running within foreman and hit Ctrl+C, the following happens:
^CSIGINT received
22:20:59 system | sending SIGTERM to all processes
22:20:59 web.1 | exited with code 0
22:21:04 system | sending SIGKILL to all processes
Killed: 9
So it's clear that none of the celery exceptions, nor the KeyboardInterrupt or SystemExit exceptions I've seen in other posts, properly catch SIGTERM and shut down the worker.
What is the right way to do this?
| Starting in version >= 4, Celery comes with a special feature, just for Heroku, that supports this functionality out of the box:
$ REMAP_SIGTERM=SIGQUIT celery -A proj worker -l info
source: https://devcenter.heroku.com/articles/celery-heroku#using-remap_sigterm
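On Heroku this typically goes into the Procfile; a sketch (replace proj with your own Celery app module):
worker: REMAP_SIGTERM=SIGQUIT celery -A proj worker --loglevel=info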
| RabbitMQ | 29,872,998 | 22 |
Is there a way to get the size (remaining messages) of a queue in rabbitmq with a simple Curl?
Something like curl -xget http://host:1234/api/queue/test/stats
Thank you
| Finally I did the trick with the following:
curl -s -i -u guest:guest http://host:port/api/queues/vhost/queue_name | sed 's/,/\n/g' | grep '"messages"' | sed 's/"messages"://g'
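If jq is available, a possibly cleaner alternative against the same management API endpoint (just an alternative, not the original approach above):
curl -s -u guest:guest http://host:port/api/queues/vhost/queue_name | jq '.messages'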
| RabbitMQ | 24,402,399 | 22 |
I am running Python code to send to and receive from a RabbitMQ queue from another application where I can't allow threading.
This is a very newbie question but: is there a possibility to just check if there is a message and, if there are none, just quit listening? How should I change the basic "Hello world" example for such a task? Currently I've managed to stop consuming if I get a message, but if there are no messages my method receive() just keeps waiting. How do I force it not to wait if there are no messages? Or maybe wait only for a given amount of time?
import pika
global answer
def send(msg):
connection = pika.BlockingConnection(pika.ConnectionParameters())
channel = connection.channel()
channel.queue_declare(queue='toJ')
channel.basic_publish(exchange='', routing_key='toJ', body=msg)
connection.close()
def receive():
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='toM')
channel.basic_consume(callback, queue='toM', no_ack=True)
global answer
return answer
def callback(ch, method, properties, body):
ch.stop_consuming()
global answer
answer = body
| Ok, I found following solution:
def receive():
parameters = pika.ConnectionParameters(RabbitMQ_server)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='toM')
method_frame, header_frame, body = channel.basic_get(queue = 'toM')
    if method_frame is None or method_frame.NAME == 'Basic.GetEmpty':  # empty queue (newer pika returns None here)
connection.close()
return ''
else:
channel.basic_ack(delivery_tag=method_frame.delivery_tag)
connection.close()
return body
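As a side note on the "wait only for a given amount of time" part of the question: newer pika versions also let you poll with a timeout via the blocking channel's consume generator; a sketch:
for method_frame, properties, body in channel.consume('toM', inactivity_timeout=5):
    if method_frame is None:  # nothing arrived within 5 seconds
        break
    channel.basic_ack(method_frame.delivery_tag)
    print(body)
channel.cancel()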
| RabbitMQ | 9,876,227 | 22 |
Is there a way to determine if any task is lost and retry it?
I think that the reason for the loss could be a dispatcher bug or a worker thread crash.
I was planning to retry them, but I'm not sure how to determine which tasks need to be retried.
And how to make this process automatically? Can I use my own custom scheduler which will create new tasks?
Edit: I found from the documentation that RabbitMQ never loses tasks, but what happens when a worker thread crashes in the middle of task execution?
| What you need is to set
CELERY_ACKS_LATE = True
Late ack means that the task messages will be acknowledged after the task has been executed,
not just before, which is the default behavior.
In this way if the worker crashes rabbit MQ will still have the message.
Obviously, in the case of a total crash (Rabbit + workers) at the same time there is no way of recovering the task, except if you implement logging on task start and task end.
Personally I write a line in a mongodb every time a task starts and another one when the task finishes (independently from the result); in this way I can know which task was interrupted by analyzing the mongo logs.
You can do it easily by overriding the methods __call__ and after_return of the celery base task class.
Following you see a piece of my code that uses a taskLogger class as context manager (with entry and exit point).
The taskLogger class simply writes a line containing the task info in a mongodb instance.
def __call__(self, *args, **kwargs):
"""In celery task this function call the run method, here you can
set some environment variable before the run of the task"""
#Inizialize context managers
self.taskLogger = TaskLogger(args, kwargs)
self.taskLogger.__enter__()
return self.run(*args, **kwargs)
def after_return(self, status, retval, task_id, args, kwargs, einfo):
#exit point for context managers
self.taskLogger.__exit__(status, retval, task_id, args, kwargs, einfo)
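A minimal sketch of how such a base class can be wired up (Celery 3 style; names like LoggedTask and my_task are hypothetical, and TaskLogger is the author's own class):
from celery import Celery, Task

app = Celery('proj', broker='amqp://guest@localhost//')

class LoggedTask(Task):
    abstract = True
    # put the __call__ and after_return overrides shown above here,
    # delegating the actual logging to your own TaskLogger class

@app.task(base=LoggedTask, acks_late=True)
def my_task(x):
    return x * 2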
I hope this could help
| RabbitMQ | 5,336,645 | 22 |
I'm trying to use rabbitmq for a django tutorial but when I want to start the server I get this error:
~$ sudo rabbitmq-server
Configuring logger redirection
14:49:57.041 [error]
14:49:57.044 [error] BOOT FAILED
BOOT FAILED
14:49:57.044 [error] ===========
===========
14:49:57.044 [error] ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss
ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss
14:49:57.045 [error]
14:49:58.046 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","wss"} in context start_error
14:49:58.046 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"wss\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelau
Crash dump is being written to: erl_crash.dump...done
I've checked whether the port is in use or not using lsof -i :25672 and I get nothing.
I don't know too much about these things so if you need anything please tell me.
| Try:
sudo lsof -i :25672
sudo kill <PID>
sudo rabbitmq-server
Where <PID> is the process ID that is occupying port 25672
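If lsof shows nothing but the error persists, the port may be held by a stale Erlang node that was not shut down cleanly; as a sketch, you can look for it with:
ps ax | grep -E 'beam|epmd'
epmd -names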
| RabbitMQ | 63,263,177 | 21 |
When I set permissions for the rabbitmq user, the output mentions a vhost:
[root@ha-node1 my.cnf.d]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
What is the meaning of the vhost when I set permission, and what function does it have?
| In RabbitMQ virtual hosts are logical groups of entities, they are similar to virtual hosts in Apache or server blocks in Nginx.
Virtual hosts are created using rabbitmqctl or HTTP API and they provide logical grouping and separation of resources.
Every virtual host has a name. When an AMQP 0-9-1 client connects to RabbitMQ, it specifies a vhost name to connect to.
If authentication succeeds and the username provided was granted permissions to the vhost, connection is established.
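For example, creating a vhost and granting a user access to it looks like this (the vhost and user names are just placeholders):
rabbitmqctl add_vhost myvhost
rabbitmqctl set_permissions -p myvhost openstack ".*" ".*" ".*"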
| RabbitMQ | 45,250,282 | 21 |
From spring boot tutorial:
https://spring.io/guides/gs/messaging-rabbitmq/
They give an example of creating 1 queue and 1 queue only, but what if I want to be able to create more than 1 queue? How would that be possible?
Obviously, I can't just create the same bean twice:
@Bean
Queue queue() {
return new Queue(queueNameAAA, false);
}
@Bean
Queue queue() {
return new Queue(queueNameBBB, false);
}
You can't create the same bean twice; it would be ambiguous.
| Give the bean definition factory methods different names. Usually, by convention, you would name them the same as the queue, but that's not required...
@Bean
Queue queue1() {
return new Queue(queueNameAAA, false);
}
@Bean
Queue queue2() {
return new Queue(queueNameBBB, false);
}
The method name is the bean name.
EDIT
When using the queues in the binding beans, there are two options:
@Bean
Binding binding1(@Qualifier("queue1") Queue queue, TopicExchange exchange) {
return BindingBuilder.bind(queue).to(exchange).with(queueNameAAA);
}
@Bean
Binding binding2(@Qualifier("queue2") Queue queue, TopicExchange exchange) {
return BindingBuilder.bind(queue).to(exchange).with(queueNameBBB);
}
or
@Bean
Binding binding1(TopicExchange exchange) {
return BindingBuilder.bind(queue1()).to(exchange).with(queueNameAAA);
}
@Bean
Binding binding2(TopicExchange exchange) {
return BindingBuilder.bind(queue2()).to(exchange).with(queueNameBBB);
}
or even better...
@Bean
Binding binding1(TopicExchange exchange) {
return BindingBuilder.bind(queue1()).to(exchange).with(queue1().getName());
}
@Bean
Binding binding2(TopicExchange exchange) {
return BindingBuilder.bind(queue2()).to(exchange).with(queue2().getName());
}
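Both variants assume a TopicExchange bean is defined elsewhere in the configuration; a minimal sketch (the exchange name is just an example):
@Bean
TopicExchange exchange() {
    return new TopicExchange("my-exchange");
}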
| RabbitMQ | 41,210,688 | 21 |
I've done a lot of searching but I cannot fix this issue.
I have a basic Rabbitmq container running via this command:
docker run -d --hostname rabbitmqhost --name rabbitmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I am using nameko to create a microservice which connects to this container. Here's a basic microservice module main.py:
import logging

from nameko.rpc import rpc
class Service_Name(object):
name = "service_name"
@rpc
def service_endpoint(self, arg=None):
logging.info('service_one endpoint, arg = %s', arg)
This service runs and connects to the rabbitmq from my host machine with the command:
nameko run main --broker amqp://guest:guest@localhost
I wanted to put the service into a Docker container (called service_one) but when I do so and run the previous nameko command I get socket.error: [Errno 111] ECONNREFUSED no matter how I try and link the two containers.
What would be the correct method? The aim is to have each service in a container, all talking to each other through rabbit. Thanks.
| If you're running a service inside a container, then amqp://guest:guest@localhost won't do you any good; localhost refers to the network namespace of the container...so of course you get an ECONNREFUSED, because there's nothing listening there.
If you want to connect to a service in another container, you need to use the ip address of that container, or a hostname that resolves to the ip address of that container.
If you are running your containers in a user-defined network, then Docker maintains a DNS server that will map container names to addresses. That is, if I first create a network:
docker network create myapp_net
And then start a rabbitmq container in that network:
docker run -d --network myapp_net --hostname rabbitmqhost \
--name rabbitmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
Then other containers started in that network will be able to use the hostname rabbitmq to connect to that container.
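Then the service container can be started in the same network and pointed at that hostname; a sketch (the image name service_one_image is hypothetical and is assumed to have nameko and main.py inside):
docker run -d --network myapp_net --name service_one service_one_image \
    nameko run main --broker amqp://guest:guest@rabbitmq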
For containers running in the default network (no --network parameter on the command line), you can use the --link option to achieve a similar, though less flexible, effect, as documented here.
| RabbitMQ | 40,563,469 | 21 |
I want to scale my Node.js Socket application vertically and horizontally and I haven't found a sophisticated solution yet.
My application has two use-cases:
Broadcast messages from one user to all others
Push messages from one user to a subset of users
On one hand, I've read that I need Redis for both cases together with socket.io-redis
On the other hand, I've watched this video and read this SO answer where it says that Redis isn't reliable and it's not guaranteed that the published messages will arrive, so you should only use it for clustering/vertical scaling
Microsoft Azure's solution to use ServiceBus is out of the question, because I don't want to use Azure.
Instead of Redis, the guy recommends using RabbitMQ for horizontal scaling.
For the vertical scaling there is also socket.io-clusterhub, an IPC for node processes, but it seems to work only on Socket.io <= v0.9.0
Then there is this guy, who has implemented his own method to pass messages to other nodes via HTTP requests, which somehow makes sense. But why HTTP requests if you could also establish direct socket connections between servers, push the message to all servers simultaneously and overcome the delay of going from one server to another?
As a conclusion I thought maybe I could go with Redis on EACH server, just for the exchange of messages when clustering my application on multiple processes, together with RabbitMQ as an S2S communication solution.
But it seems a bit of an overkill to have one Redis per server and another central RabbitMQ.
Is there any known shorter/better solution to scale Socket.io reliably in both directions?
EDIT:
I've tried using a single Redis server for multiple Node.js servers, where each of them uses clustering via sticky-session over all cores. While the clustering on its own works like a charm with Redis, there seems to be a problem when using multiple servers. Messages won't arrive at the other nodes.
| I'd say Kafka is a good fit for the horizontal scaling. It is a fairly sophisticated way of distributing a huge amount of events across servers (which at the end is what you want). This is a good read about it: https://engineering.linkedin.com/kafka/running-kafka-scale
Regarding the vertical scale, instead of socket.io-clusterhub I would use something called PM2 (https://github.com/Unitech/pm2) which allows you to resize the scale of the apps in every computer dynamically as well as controlling the logs and reporting to keymetrics.io (if you are using it).
If you need any snippets ask me and I will edit the answer, but in the PM2 GitHub repo there are quite a few.
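As a quick sketch of the vertical-scaling part, PM2 can run your app in cluster mode across all available cores (assuming app.js is your entry point):
pm2 start app.js -i max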
| RabbitMQ | 37,116,615 | 21 |
I've set up RabbitMQ in order to parse some 20.000 requests from an external API but it keeps timing out after a few minutes. It does get to correctly parse about 2000 out of the total 20.000 requests.
The log file says:
=INFO REPORT==== 16-Feb-2016::17:02:50 ===
accepting AMQP connection <0.1648.0> (127.0.0.1:33091 -> 127.0.0.1:5672)
=ERROR REPORT==== 16-Feb-2016::17:03:21 ===
closing AMQP connection <0.1648.0> (127.0.0.1:33091 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
I've already increased the heartbeat value but I cannot figure out why it's timing out. Configuration is: Ubuntu 14.04, NGINX 1.8.1, RabbitMQ 3.6.0
I'd appreciate your time and input !
| I've just solved a similar problem in python. In my case, it was solved by reducing the prefetch count on the consumer, so that it had fewer messages queued up in its receive buffer.
My theory is that the receive buffer on the consumer gets full, and then RMQ tries to write some other message to the consumer's socket and can't due to the consumer's socket being full. RMQ blocks on this socket, and eventually timeouts and just closes the connection on the consumer.
Having a smaller prefetch queue means the socket receive buffer doesn't get filled, and RMQ is able to write whatever bookkeeping messages it was trying to do and so doesn't timeout on its writes nor close the connection.
This is just a theory though, but it seems to hold in my testing.
Setting the prefetch count depends on the client library; in Python with pika it can be done like so:
channel.basic_qos(prefetch_count=10)
(Thanks to @shawn-guo for reminding me to add this code snippet)
| RabbitMQ | 35,438,843 | 21 |
I am trying to create an integration test for a Scala / Java application that connects to a RabbitMQ broker. To achieve this I would like an embedded broker that speaks AMQP that I start and stop before each test. Originally I tried to introduce ActiveMQ as an embedded broker with AMQP, however the application uses RabbitMQ so it only speaks AMQP version 0-9-1, whereas ActiveMQ requires AMQP version 1.0.
Is there another embedded broker I can use in place of ActiveMQ?
| A completely in-memory solution. Replace the spring.* properties as required.
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>qpid-broker</artifactId>
<version>6.1.1</version>
<scope>test</scope>
</dependency>
import org.apache.qpid.server.Broker;
import org.apache.qpid.server.BrokerOptions;
import org.springframework.core.env.Environment;

public class EmbeddedBroker {
    // 'environment' is assumed to be supplied from outside (e.g. an injected Spring Environment)
    // and to resolve the spring.rabbitmq.* properties used below
    private Environment environment;

    public void start() {
Broker broker = new Broker();
BrokerOptions brokerOptions = new BrokerOptions();
brokerOptions.setConfigProperty("qpid.amqp_port", environment.getProperty("spring.rabbitmq.port"));
brokerOptions.setConfigProperty("qpid.broker.defaultPreferenceStoreAttributes", "{\"type\": \"Noop\"}");
brokerOptions.setConfigProperty("qpid.vhost", environment.getProperty("spring.rabbitmq.virtual-host"));
brokerOptions.setConfigurationStoreType("Memory");
brokerOptions.setStartupLoggedToSystemOut(false);
broker.startup(brokerOptions);
}
}
Add initial-config.json as a resource:
{
"name": "Embedded Test Broker",
"modelVersion": "6.1",
"authenticationproviders" : [{
"name": "password",
"type": "Plain",
"secureOnlyMechanisms": [],
"users": [{"name": "guest", "password": "guest", "type": "managed"}]
}],
"ports": [{
"name": "AMQP",
"port": "${qpid.amqp_port}",
"authenticationProvider": "password",
"protocols": [ "AMQP_0_9_1" ],
"transports": [ "TCP" ],
"virtualhostaliases": [{
"name": "${qpid.vhost}",
"type": "nameAlias"
}]
}],
"virtualhostnodes" : [{
"name": "${qpid.vhost}",
"type": "Memory",
"virtualHostInitialConfiguration": "{ \"type\": \"Memory\" }"
}]
}
| RabbitMQ | 30,918,557 | 21 |
I am using RabbitMQ with the pika library.
I use the following code to send and receive messages:
#!/usr/bin/env python
import pika
import time
import json
import datetime
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
#print " current time: %s " % (str(int((time.time())*1000)))
print body
channel.basic_consume(callback,
queue='hello',
no_ack=True)
channel.start_consuming()
Since I declare the queue every time (the declaration creates the queue in case it does not already exist), the queue has become corrupted due to this, and now I want to delete the queue. How do I do that?
| Since this seems to be a maintenance procedure, and not something you'll be doing routinely on your code, you should probably be using the RabbitMQ management plugin and delete the queue from there.
Anyway, you can delete it from pika with:
channel.queue_delete(queue='hello')
https://pika.readthedocs.org/en/latest/modules/channel.html#pika.channel.Channel.queue_delete
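A minimal end-to-end sketch using one of the queue names from the question:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_delete(queue='toJ')  # drop the corrupted queue so it can be re-declared cleanly
connection.close()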
| RabbitMQ | 19,912,344 | 21 |
I have Celery running with RabbitMQ broker.
Today, I had a failure of a Celery node: it didn't execute tasks and didn't respond to the service celeryd stop command. After a few repeats, the node stopped, but on start I get this message:
[WARNING/MainProcess] celery@nodename ready.
[WARNING/MainProcess] /home/ubuntu/virtualenv/project_1/local/lib/python2.7/site-packages/kombu/pidbox.py:73: UserWarning: A node named u'nodename' is already using this process mailbox!
Maybe you forgot to shutdown the other node or did not do so properly?
Or if you meant to start multiple nodes on the same host please make sure
you give each node a unique node name!
warnings.warn(W_PIDBOX_IN_USE % {'hostname': self.hostname})
Can anyone suggest how to unlock process mailbox?
| From here http://celery.readthedocs.org/en/latest/userguide/workers.html#starting-the-worker you might need to name each node uniquely. Example:
$ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1.%h
In a supervisord config file, escape the % by using %%h.
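A sketch of what that looks like in supervisor (the program name and project module are placeholders):
[program:celery_worker1]
command=celery -A proj worker --loglevel=INFO -n worker1.%%h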
| RabbitMQ | 18,673,319 | 21 |
I know that there are similar questions to this, such as:
https://stackoverflow.com/questions/8232194/pros-and-cons-of-celery-vs-disco-vs-hadoop-vs-other-distributed-computing-packag
Differentiate celery, kombu, PyAMQP and RabbitMQ/ironMQ
but I'm asking this because I'm looking for a more particular distinction backed by a couple of use-case examples, please.
So, I'm a python user who wants to make programs that either/both:
Are too large to
Take too long to
do on a single machine, and process them on multiple machines. I am familiar with the (single-machine) multiprocessing package in python, and I write mapreduce style code right now. I know that my function, for example, is easily parallelizable.
In asking my usual smart CS advice-givers, I have phrased my question as:
"I want to take a task, split it into a bunch of subtasks that are executed simultaneously on a bunch of machines, then those results to be aggregated and dealt with according to some other function, which may be a reduce, or may be instructions to serially add to a database, for example."
According to this break-down of my use-case, I think I could equally well use Hadoop or a set of Celery workers + RabbitMQ broker. However, when I ask the sage advice-givers, they respond to me as if I'm totally crazy to look at Hadoop and Celery as comparable solutions. I've read quite a bit about Hadoop, and also about Celery---I think I have a pretty good grasp on what both do---what I do not seem to understand is:
Why are they considered so separate, so different?
Given that they seem to be received as totally different technologies---in what ways? What are the use cases that distinguish one from the other or are better for one than another?
What problems could be solved with both, and what areas would it be particularly foolish to use one or the other for?
Are there possibly better, simpler ways to achieve multiprocessing-like Pool.map()-functionality to multiple machines? Let's imagine my problem is not constrained by storage, but by CPU and RAM required for calculation, so there isn't an issue in having too little space to hold the results returned from the workers. (ie, I'm doing something like simulation where I need to generate a lot of things on the smaller machines seeded by a value from a database, but these are reduced before they return to the source machine/database.)
I understand Hadoop is the big data standard, but Celery also looks well supported; I appreciate that it isn't java (the streaming API python has to use for hadoop looked uncomfortable to me), so I'd be inclined to use the Celery option.
|
They are the same in that both can solve the problem that you describe (map-reduce). They are different in that Hadoop is entirely build to solve only that usecase and Celey/RabbitMQ is build to facilitate Task execution on different nodes using message passing. Celery also supports different usecases.
Hadoop is solving the map-reduce problem by having a large and special filesystem from which the mapper takes its data, sends it to a bunch of map nodes and reduces it to that filesystem. This has the advantage that it is really fast in doing this. The downsides are that it only operates on text based data input, that Python is not really supported and that you can't do (slightly) different usecases.
Celery is a message based task executor. In it you define tasks and group them together in a workflow (which can be a map-reduce workflow). Its advantages are that it is python based and that you can stitch tasks together in a custom workflow. Disadvantages are its reliance on a single broker/result backend and its setup time.
So if you have a couple of Gb's worth of logfiles and don't care to write in Java and have some servers to spare that are exclusively used to run Hadoop, use that. If you want flexibility in running workflowed tasks use Celery. Or.....
Yes! There is a new project from one of the companies that helped create the messaging protocol AMQP that is used by RabbitMQ (and others). It is called ZeroMQ and it takes distributed messaging/execution to the next level by strangely going down a level in abstraction compared to Celery. It defines sockets that you can link together in various ways to create messaging links between nodes. Anything you want to do with these messages is up to you to write. Although this might sound like "what good is a thin wrapper around a socket" it is actually at the right level of abstraction. Right now at our company we are factoring out all our celery messaging and rebuilding it with ZeroMQ. We found that Celery is just too opinionated about how tasks should be executed and that the setup/config in general is a pain. Also that broker in the middle that has to handle all traffic was becoming too much of a bottleneck.
Summary:
Count the occurrences of "the" in a book with as little programming as possible and lots of setup/config time: Hadoop
Create atomic tasks and be able to have them work together without too much programming and a lot of setup/config time: Celery
Have complete control over what to do with your messages and how to program them with almost no setup/config time: ZeroMQ
Have pain with no setup/config time: Sockets
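To make the Celery option concrete, a map-reduce style workflow can be expressed with a chord (a sketch; the task names and input chunks are made up):
from celery import Celery, chord

app = Celery('proj', broker='amqp://guest@localhost//')

@app.task
def count_the(chunk):
    return chunk.count('the')

@app.task
def total(counts):
    return sum(counts)

chunks = ["the quick brown fox ", "jumps over the lazy dog"]  # hypothetical input chunks
# map over the chunks, then reduce the partial counts
result = chord(count_the.s(c) for c in chunks)(total.s())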
| RabbitMQ | 18,521,196 | 21 |
It looks like celery does not release memory after a task finishes. Every time a task finishes, there is a 5MB-10MB memory leak. So with thousands of tasks, it will soon use up all memory.
BROKER_URL = 'amqp://user@localhost:5672/vhost'
# CELERY_RESULT_BACKEND = 'amqp://user@localhost:5672/vhost'
CELERY_IMPORTS = (
'tasks.tasks',
)
CELERY_IGNORE_RESULT = True
CELERY_DISABLE_RATE_LIMITS = True
# CELERY_ACKS_LATE = True
CELERY_TASK_RESULT_EXPIRES = 3600
# maximum time for a task to execute
CELERYD_TASK_TIME_LIMIT = 600
CELERY_DEFAULT_ROUTING_KEY = "default"
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_EXCHANGE = "default"
CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
# CELERYD_MAX_TASKS_PER_CHILD = 50
CELERY_DISABLE_RATE_LIMITS = True
CELERYD_CONCURRENCY = 2
Might be same with issue, but it does not has an answer:
RabbitMQ/Celery/Django Memory Leak?
I am not using django, and my packages are:
Chameleon==2.11
Fabric==1.6.0
Mako==0.8.0
MarkupSafe==0.15
MySQL-python==1.2.4
Paste==1.7.5.1
PasteDeploy==1.5.0
SQLAlchemy==0.8.1
WebOb==1.2.3
altgraph==0.10.2
amqp==1.0.11
anyjson==0.3.3
argparse==1.2.1
billiard==2.7.3.28
biplist==0.5
celery==3.0.19
chaussette==0.9
distribute==0.6.34
flower==0.5.1
gevent==0.13.8
greenlet==0.4.1
kombu==2.5.10
macholib==1.5.1
objgraph==1.7.2
paramiko==1.10.1
pycrypto==2.6
pyes==0.20.0
pyramid==1.4.1
python-dateutil==2.1
redis==2.7.6
repoze.lru==0.6
requests==1.2.3
six==1.3.0
tornado==3.1
translationstring==1.1
urllib3==1.6
venusian==1.0a8
wsgiref==0.1.2
zope.deprecation==4.0.2
zope.interface==4.0.5
I just added a test task like the following (test_string is a big string), and it still leaks memory:
@celery.task(ignore_result=True)
def process_crash_xml(test_string, client_ip, request_timestamp):
logger.info("%s %s" % (client_ip, request_timestamp))
test = [test_string] * 5
| There are two settings which can help you mitigate growing memory consumption of celery workers:
Max tasks per child setting (v2.0+):
With this option you can configure the maximum number of tasks a worker can execute before it’s replaced by a new process. This is useful if you have memory leaks you have no control over for example from closed source C extensions.
Max memory per child setting (v4.0+):
With this option you can configure the maximum amount of resident memory a worker may consume before it's replaced by a new process.
This is useful if you have memory leaks you have no control over, for example from closed source C extensions.
However, those options only work with the default pool (prefork).
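A sketch of both settings (the values are only examples; the lowercase names are the Celery 4 style, the uppercase one matches the old-style config used in the question):
CELERYD_MAX_TASKS_PER_CHILD = 50          # old-style name (Celery 3.x)
# worker_max_tasks_per_child = 50         # Celery 4.x equivalent
# worker_max_memory_per_child = 300000    # Celery 4.x, in KiB (~300 MB)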
For safeguarding against memory leaks for threads and gevent pools you can add a utility process called memmon, which is part of the superlance extension to supervisor.
Memmon can monitor all running worker processes and will restart them automatically when they exceed a predefined memory limit.
Here is an example configuration for your supervisor.conf:
[eventlistener:memmon]
command=/path/to/memmon -p worker=512MB
events=TICK_60
| RabbitMQ | 17,541,452 | 21 |
I'm trying to get a Django Celery worker to connect to a RabbitMQ server, all running on the same host.
However, when I run manage.py celery worker all I get is:
[2013-06-11 17:33:41,185: WARNING/MainProcess] celery@localhost has started.
[2013-06-11 17:33:44,192: ERROR/MainProcess] Consumer: Connection Error: Socket closed. Trying again in 2 seconds...
[2013-06-11 17:33:50,203: ERROR/MainProcess] Consumer: Connection Error: Socket closed. Trying again in 4 seconds...
[2013-06-11 17:34:03,214: ERROR/MainProcess] Consumer: Connection Error: Socket closed. Trying again in 6 seconds...
[2013-06-11 17:34:27,232: ERROR/MainProcess] Consumer: Connection Error: Socket closed. Trying again in 8 seconds...
When I inspect my /var/log/rabbitmq/[email protected] I see several messages like:
=ERROR REPORT==== 11-Jun-2013::17:33:44 ===
exception on TCP connection <0.201.0> from 127.0.0.1:43461
{channel0_error,opening,
{amqp_error,access_refused,
"access to vhost 'myapp' refused for user 'guest'",
'connection.open'}}
I'm using the standard package out of Ubuntu 12.04's repo, with the default settings and my django-celery settings look like:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "myapp"
Why is RabbitMQ refusing connections?
It looks like you need to grant access to the "myapp" vhost for the "guest" user.
From the docs:
set_permissions [-p vhostpath] {user} {conf} {write} {read}
So something similar to this will give your guest user unlimited access:
rabbitmqctl set_permissions -p myapp guest ".*" ".*" ".*"
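If the vhost itself does not exist yet, create it first and then re-run the command above:
rabbitmqctl add_vhost myapp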
| RabbitMQ | 17,054,533 | 21 |
I might be misunderstanding how this works (which is why I'm asking), but I think when a celery worker consumes a task from RabbitMQ it puts a lock on it -- so to speak -- and then must acknowledge it completed that task once it's done. So say I have 4 workers which all have the prefetch setting at 1 and a queue of 6 tasks which take a long time. Once I start those workers and I run:
rabbitmqctl -q list_queues name messages messages_ready messages_unacknowledged
I'd expect to see something like:
celery 6 2 4
indicating that 4 tasks are running (but not yet acknowledged) and 2 are ready to be consumed.
I think my understanding is wrong because what I actually see is:
celery 2 0 2
So it's as if the acknowledging happens when a message is received by a worker, but before that worker finishes processing that task.
So to sum up, my question is, when does a celery worker acknowledge it has a task? It seems like it's once it receives that task and starts working on it, not when it completes working on it. Can someone confirm?
| This is mentioned in the FAQ, but I can't blame you for not finding it:
http://docs.celeryproject.org/en/latest/faq.html#should-i-use-retry-or-acks-late
The default behavior of early ack is there because we don't want to enforce users
to write idempotent tasks.
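If you do want the late-acknowledge behaviour described in the question (ack only after the task finishes), it can be turned on globally or per task; a sketch, assuming app is your Celery instance:
CELERY_ACKS_LATE = True    # global, old-style setting name

@app.task(acks_late=True)  # or enable it per task
def long_running(x):
    pass  # long-running work here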
| RabbitMQ | 12,594,802 | 21 |
I am using Celery with RabbitMQ. Lately, I have noticed that a large number of temporary queues are getting made.
So, I experimented and found that when a task fails (that is, a task raises an Exception), then a temporary queue with a random name (like c76861943b0a4f3aaa6a99a6db06952c) is formed and the queue remains.
Some properties of the temporary queue as found in rabbitmqadmin are as follows -
auto_delete : True
consumers : 0
durable : False
messages : 1
messages_ready : 1
And one such temporary queue is made every time a task fails (that is, raises an Exception). How do I avoid this situation? Because in my production environment a large number of such queues get formed.
It sounds like you're using amqp as the result backend. From the docs, here are the pitfalls of using that particular setup:
Every new task creates a new queue on the server, with thousands of
tasks the broker may be overloaded with queues and this will affect
performance in negative ways. If you’re using RabbitMQ then each
queue will be a separate Erlang process, so if you’re planning to
keep many results simultaneously you may have to increase the Erlang
process limit, and the maximum number of file descriptors your OS
allows
Old results will not be cleaned automatically, so you must make
sure to consume the results or else the number of queues will
eventually go out of control. If you’re running RabbitMQ 2.1.1 or
higher you can take advantage of the x-expires argument to queues,
which will expire queues after a certain time limit after they are
unused. The queue expiry can be set (in seconds) by the
CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
From what I've read in the changelog, this is no longer the default backend in versions >=2.3.0 because users were getting bit in the rear end by this behavior. I'd suggest changing the results backend if this not the functionality you need.
| RabbitMQ | 7,144,025 | 21 |
I send the following message with content type application/json:
However, when I get messages from the same RabbitMQ web console, it shows the payload as String.
What am I doing wrong? Or am I fundamentally misunderstanding and the Payload is always of type String?
| From the official docs:
AMQP messages also have a payload (the data that they carry), which AMQP brokers treat as an opaque byte array. The broker will not inspect or modify the payload. It is possible for messages to contain only attributes and no payload. It is common to use serialisation formats like JSON, Thrift, Protocol Buffers and MessagePack to serialize structured data in order to publish it as the message payload. AMQP peers typically use the "content-type" and "content-encoding" fields to communicate this information, but this is by convention only.
So basically, RabbitMQ has no knowledge on JSON, messages all are just byte arrays to it
| RabbitMQ | 49,788,162 | 20 |
Is it possible to use a different message broker with celery?
For example: I would like to use PostgreSQL instead of RabbitMQ.
AFAIK it is only supported in the result backend: http://docs.celeryproject.org/en/latest/userguide/configuration.html#database-backend-settings
Since PostgreSQL 9.5 there is SKIP LOCKED which enables implementing robust message/work queues. See https://blog.2ndquadrant.com/what-is-select-skip-locked-for-in-postgresql-9-5/
| Yes, you can use postgres as broker instead of rabbitmq. Here is a simple example to demonstrate it.
from celery import Celery
broker = 'sqla+postgresql://user:pass@host/dbname'
app = Celery(broker=broker)
@app.task
def add(x, y):
return x + y
Queuing tasks
In [1]: from demo import add
In [2]: add.delay(1,2)
Out[2]: <AsyncResult: 4853190f-d355-48ae-8aba-6169d38fad39>
Worker results:
[2017-12-02 08:11:08,483: INFO/MainProcess] Received task: t.add[809060c0-dc7e-4a38-9e4e-9fdb44dd6a31]
[2017-12-02 08:11:08,496: INFO/ForkPoolWorker-1] Task t.add[809060c0-dc7e-4a38-9e4e-9fdb44dd6a31] succeeded in 0.0015781960000822437s: 3
Tested on the latest versions (celery==4.1.0, kombu==4.1.0, SQLAlchemy==1.1.1).
| RabbitMQ | 47,473,583 | 20 |
I have configured the RabbitMQ rabbitmq.config file with new port number i.e. 5671 with SSL.
Now I want to disable the default port i.e. 5672.
Config file as below :-
[
{rabbit, [
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cacert.pem"},
{certfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cert.pem"},
{keyfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/key.pem"},
{verify,verify_peer},
{fail_if_no_peer_cert,false},
{ciphers,[{dhe_rsa,aes_256_cbc,sha},
{dhe_dss,aes_256_cbc,sha},
{rsa,aes_256_cbc,sha}]}
]
}
]}
].
Now it's working on both ports 5671 and 5672, but I need to disable port 5672.
Please give some comments or suggestions.
Thanks in advance.
To disable the standard RabbitMQ 5672 port, add {tcp_listeners, []} to your rabbitmq.config:
[
{rabbit, [
{tcp_listeners, []},
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cacert.pem"},
{certfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cert.pem"},
{keyfile,"/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/key.pem"},
{verify,verify_peer},
{fail_if_no_peer_cert,false},
{ciphers,[{dhe_rsa,aes_256_cbc,sha},
{dhe_dss,aes_256_cbc,sha},
{rsa,aes_256_cbc,sha}]}
]
}
]}
].
It works with RabbitMQ 3.1.5
| RabbitMQ | 19,806,313 | 20 |
When I route a task to a particular queue it works:
task.apply_async(queue='beetroot')
But if I create a chain:
chain = task | task
And then I write
chain.apply_async(queue='beetroot')
It seems to ignore the queue keyword and assigns to the default 'celery' queue.
It would be nice if celery supported routing in chains - all tasks executed sequentially in the same queue.
| I do it like this:
subtask = task.s(*myargs, **mykwargs).set(queue=myqueue)
mychain = celery.chain(subtask, subtask2, ...)
mychain.apply_async()
| RabbitMQ | 14,953,521 | 20 |
I am using celery with a rabbitmq backend. It is producing thousands of queues with 0 or 1 items in them in rabbitmq like this:
$ sudo rabbitmqctl list_queues
Listing queues ...
c2e9b4beefc7468ea7c9005009a57e1d 1
1162a89dd72840b19fbe9151c63a4eaa 0
07638a97896744a190f8131c3ba063de 0
b34f8d6d7402408c92c77ff93cdd7cf8 1
f388839917ff4afa9338ef81c28aad75 0
8b898d0c7c7e4be4aa8007b38ccc00ea 1
3fb4be51aaaa4ac097af535301084b01 1
This seems to be inefficient, but further I have observed that these queues persist long after processing is finished.
I have found the task that appears to be doing this:
@celery.task(ignore_result=True)
def write_pages(page_generator):
g = group(render_page.s(page) for page in page_generator)
res = g.apply_async()
for rendered_page in res:
print rendered_page # TODO: print to file
It seems that because these tasks are being called in a group, they are being thrown into the queue but never being released. However, I am clearly consuming the results (as I can view them being printed when I iterate through res). So, I do not understand why those tasks are persisting in the queue.
Additionally, I am wondering if the large number queues that are being created is some indication that I am doing something wrong.
Thanks for any help with this!
| Celery with the AMQP backend will store task tombstones (results) in an AMQP queue named with the task ID that produced the result. These queues will persist even after the results are drained.
A couple recommendations:
Apply ignore_result=True to every task you can. Don't depend on results from other tasks.
Switch to a different backend (perhaps Redis -- it's more efficient anyway): http://docs.celeryproject.org/en/latest/userguide/tasks.html
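Both recommendations in settings form, a sketch (the Redis URL is just an example):
CELERY_IGNORE_RESULT = True
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'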
| RabbitMQ | 14,636,534 | 20 |
I have tried to use the RabbitMQ server, but for some reason the connection closes abruptly even though I passed the correct username and password.
The RabbitMQ server is running on port 5672, and telnetting to my server on port 5672 says it's running fine.
I have installed the RabbitMQ server on CentOS and my RabbitMQ server logs are as follows:
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
accepted TCP connection on [::]:5672 from <host>:42048
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
starting TCP connection <0.357.0> from <host>:42048
=WARNING REPORT==== 19-Dec-2012::06:25:44 ===
exception on TCP connection <0.357.0> from <host>:42048
connection_closed_abruptly
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
closing TCP connection <0.357.0> from <host>:42048
What might be the possible reasons for this to happen.
Thanks
| connection_closed_abruptly means the client closed the TCP connection without going through the proper AMQP connection termination process.
Is your rabbit server behind a load balancer? A common cause for connections being abruptly closed as soon as they're started is a TCP load balancer's heartbeat. If this is the case you should see these messages at very regular intervals, and the generally accepted practice seems to be to ignore them. To avoid log file buildup you could also consider raising the log level to "error".
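For that last point, on RabbitMQ versions of that era the connection log level can be raised in rabbitmq.config, roughly like this (a sketch; newer releases use a different logging configuration):
[{rabbit, [{log_levels, [{connection, error}]}]}].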
On the other hand, if your client connects to the rabbitmq server directly, this probably means your client does not close the connection in an AMQP-approved way. You could try a different client to confirm whether this is the case.
Btw, telnetting to your server is likely to cause abrupt closings too. :)
| RabbitMQ | 13,946,153 | 20 |
When using RabbitMQ for sending messages you basically have exchanges, queues and bindings. I've understood their idea and how they relate to each other, but I am not quite sure who sets up what.
Basically, I have three scenarios in my application.
Scenario 1: One publisher, several worker processes
What I want to achieve is one component that sends messages to a queue, and there shall be several worker processes that handle items in that queue. This seems quite easy to me. The setup is as follows:
Exchange: 1 exchange with type 'direct'
Queue: 1 queue
Binding: The queue is bound to the exchange
Whenever a message is sent to the exchange, it gets delivered to the queue, and the worker processes get their tasks.
Everything shall be durable.
So who sets up what? In my opinion:
Producer creates exchange
Producer creates queue (as there currently may be no worker processes running, and the message would be lost otherwise if there was no queue)
Producer does the binding of the queue to the exchange
Consumers simply listen on the queue
Right?
Scenario 2: One publisher, several subscribers, volatile messages
The second scenario is quite different. Basically, it's a pub / sub scenario where each message is send to every currently listening client. If a client goes offline, it does not receive messages any longer and they are not stored anywhere for him. This means the following setup:
Exchange: 1 exchange with type 'fanout'
Queue: n queues, one for each consumer
Binding: Each queue needs to be bound to the exchange
So who sets up what? In my opinion:
Producer creates exchange
Consumer creates queue (as it is its own queue, and the producer can not know whoever is interested in the messages)
Consumer creates binding for its queue to the exchange
Consumer listens to its queue
Right?
Scenario 3: One publisher, several subscribers, durable messages
Basically the same as scenario 2, but the messages should not be lost if a consumer goes offline. In my opinion this should not change anything - right?
| I think what you say is right except on Scenario 3.
If messages should not be lost if a consumer goes offline then you need durable queues and the queues can't be auto_delete'd.
Everything else seems right to me.
In the case of scenario 2 you could also let RabbitMQ auto-generate queue names for you and then let those queues be auto-delete'd once the consumer disconnects.
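A consumer-side sketch of scenario 2 with pika (Python, pika 1.x API; the exchange name is just an example, and the producer would declare the same exchange before publishing):
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.exchange_declare(exchange='events', exchange_type='fanout', durable=True)
# broker-named, exclusive queue that goes away with this consumer
result = ch.queue_declare(queue='', exclusive=True)
ch.queue_bind(exchange='events', queue=result.method.queue)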
| RabbitMQ | 12,597,006 | 20 |
I'm just looking into the config details of RabbitMQ and came across
[{rabbit, [{vm_memory_high_watermark, 0},
{disk_free_limit, {mem_relative, 1.0}}
]
}]
What does this config mean?
vm_memory_high_watermark set to 0 means => block all publishers immediately when the rabbitmq app starts? But we still see rabbitmq able to queue whatever messages we send.
16720 rabbitmq 20 0 142m 62m 2408 S 0 **1.6** 0:06.88 beam.smp
Whenever we send messages to the broker we see this process's memory usage increasing. So, does this mean the messages are in memory although the watermark is set to 0?
We are curious to know what happens if the RAM limit is reached and messages are still being sent. Are the publishers blocked, or are the messages swapped out to disk if available?
| The vm_memory_high_watermark is a percentage value is related to memory flow control in RabbitMQ.
If you take a look at Memory flow control you will see that it says, under "Memory-Based Flow Control" heading:
The RabbitMQ server detects the total amount of RAM installed in the computer on startup and when rabbitmqctl set_vm_memory_high_watermark fraction is executed. By default, when the RabbitMQ server uses above 40% of the installed RAM, it raises a memory alarm and blocks all connections. Once the memory alarm has cleared (e.g. due to the server paging messages to disk or delivering them to clients) normal service resumes.
So by you setting this value to 0, then of course it will trigger straight away! If you want RabbitMQ to be allowed to use more memory then you will want to INCREASE the value.
Another important note:
The default memory threshold is set to 40% of installed RAM. Note that this does not prevent the RabbitMQ server from using more than 40%, it is merely the point at which publishers are throttled.
So if you try yo publish messages when the alarm has been raised then your publishers will be blocked from sending messages.
If you want to block all publishers then you would set the vm_memory_high_watermark to 0. If you want to 'disable' memory based flow control then set the vm_memory_high_watermark to 100. See details from above link:
A value of 0 makes the memory alarm go off immediately and thus disables all publishing (this may be useful if you wish to disable publishing globally; use rabbitmqctl set_vm_memory_high_watermark 0). To prevent the memory alarm from going off at all, set some high multiplier such as 100.
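The same threshold can also be set in rabbitmq.config (the 0.6 is just an example fraction):
[{rabbit, [{vm_memory_high_watermark, 0.6}]}].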
| RabbitMQ | 12,175,156 | 20 |
I'm trying to install the RabbitMQ PECL extension but after running
sudo pecl install amqp
I get the following cryptic error message, which extensive googling hasn't helped resolve.
I have these packages installed:
librabbitmq (the RabbitMQ C client itself)
librabbitmq-dev (dev headers etc.)
and RabbitMQ running successfully on localhost
Maybe it could be a mismatch in the version of the C client and what the PECL extension expects, anybody else come across this one?
Make output below....
Cheers
running: make
/bin/bash /tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/libtool --mode=compile cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootZNUmac/amqp- 1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main -I/tmp/pear/temp/amqp - I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp.c -o amqp.lo
libtool: compile: cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear- build-rootZNUmac/amqp-1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main - I/tmp/pear/temp/amqp -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp.c -fPIC -DPIC -o .libs/amqp.o
/bin/bash /tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/libtool --mode=compile cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootZNUmac/amqp- 1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main -I/tmp/pear/temp/amqp - I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp_exchange.c -o amqp_exchange.lo
libtool: compile: cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear- build-rootZNUmac/amqp-1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main - I/tmp/pear/temp/amqp -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp_exchange.c -fPIC -DPIC -o .libs/amqp_exchange.o
/bin/bash /tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/libtool --mode=compile cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootZNUmac/amqp- 1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main -I/tmp/pear/temp/amqp - I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp_queue.c -o amqp_queue.lo
libtool: compile: cc -I. -I/tmp/pear/temp/amqp -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/include -I/tmp/pear/temp/pear-build-rootZNUmac/amqp-1.0.0/main - I/tmp/pear/temp/amqp -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM - I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib - D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/amqp/amqp_queue.c -fPIC -DPIC -o .libs/amqp_queue.o
/tmp/pear/temp/amqp/amqp_queue.c: In function 'read_message_from_channel':
/tmp/pear/temp/amqp/amqp_queue.c:316:11: error: 'AMQP_FIELD_KIND_U64' undeclared (first use in this function)
/tmp/pear/temp/amqp/amqp_queue.c:316:11: note: each undeclared identifier is reported only once for each function it appears in
/tmp/pear/temp/amqp/amqp_queue.c: In function 'zim_amqp_queue_class_nack':
/tmp/pear/temp/amqp/amqp_queue.c:1020:2: error: unknown type name 'amqp_basic_nack_t'
/tmp/pear/temp/amqp/amqp_queue.c:1039:3: error: request for member 'delivery_tag' in something not a structure or union
/tmp/pear/temp/amqp/amqp_queue.c:1040:3: error: request for member 'multiple' in something not a structure or union
/tmp/pear/temp/amqp/amqp_queue.c:1041:3: error: request for member 'requeue' in something not a structure or union
/tmp/pear/temp/amqp/amqp_queue.c:1046:3: error: 'AMQP_BASIC_NACK_METHOD' undeclared (first use in this function)
make: *** [amqp_queue.lo] Error 1
ERROR: `make' failed
I had to install it by applying the following steps found here:
# Download the rabbitmq-c library @ version 0-9-1
git clone git://github.com/alanxz/rabbitmq-c.git
cd rabbitmq-c
# Enable and update the codegen git submodule
git submodule init
git submodule update
# Configure, compile and install
autoreconf -i && ./configure && make && sudo make install
After that, sudo pecl install amqp did the work.
Using Ubuntu 12.10 with PHP 5.4.3.
| RabbitMQ | 9,520,914 | 20 |
I have a RabbitMQ cluster with two nodes in production and the cluster is breaking with these error messages:
=ERROR REPORT==== 23-Dec-2011::04:21:34 ===
** Node rabbit@rabbitmq02 not responding **
** Removing (timedout) connection **
=INFO REPORT==== 23-Dec-2011::04:21:35 ===
node rabbit@rabbitmq02 lost 'rabbit'
=ERROR REPORT==== 23-Dec-2011::04:21:49 ===
Mnesia(rabbit@rabbitmq01): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, rabbit@rabbitmq02}
I tried to simulate the problem by killing the connection between the two nodes using "tcpkill". The cluster disconnected, and surprisingly the two nodes did not try to reconnect!
When the cluster breaks, the HAProxy load balancer still marks both nodes as active and sends requests to both of them, although they are not in a cluster.
My questions:
If the nodes are configured to work as a cluster, why don't they try to reconnect after a network failure?
How can I identify a broken cluster and shut down one of the nodes? I have consistency problems when working with the two nodes separately.
| RabbitMQ Clusters do not work well on unreliable networks (part of RabbitMQ documentation). So when the network failure happens (in a two node cluster) each node thinks that it is the master and the only node in the cluster. Two master nodes don't automatically reconnect, because their states are not automatically synchronized (even in case of a RabbitMQ slave - the actual message synchronization does not happen - the slave just "catches up" as messages get consumed from the queue and more messages get added).
To detect whether you have a broken cluster, run the command:
rabbitmqctl cluster_status
on each of the nodes that form part of the cluster. If the cluster is broken then you'll only see one node. Something like:
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1]}]},{running_nodes,[rabbit@rabbitmq1]}]
...done.
In such cases, you'll need to run the following set of commands on one of the nodes that formed part of the original cluster (so that it joins the other master node (say rabbitmq1) in the cluster as a slave):
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
Finally check the cluster status again .. this time you should see both the nodes.
Note: If you have the RabbitMQ nodes in an HA configuration using a Virtual IP (and the clients are connecting to RabbitMQ using this virtual IP), then the node that should be made the master should be the one that has the Virtual IP.
| RabbitMQ | 8,654,053 | 20 |
I am new to Spring AMQP. I have an application which is a producer sending messages to another application, which is a consumer.
Once the consumer receives the message, we validate the data.
If the data is valid, we have to ACK and the message should be removed from the queue.
If the data is invalid, we have to NACK (negative acknowledge) the data so that it will be re-queued in RabbitMQ.
I came across
factory.setDefaultRequeueRejected(false); (it will not requeue the message at all)
factory.setDefaultRequeueRejected(true); (it will requeue the message when an exception occurs)
But in my case I will acknowledge the message based on validation. If ACK, the message should be removed; if NACK, the message should be requeued.
I have read in RabbitMQ website
The AMQP specification defines the basic.reject method that allows clients to reject individual, delivered messages, instructing the broker to either discard them or requeue them
How can I achieve the above scenario? Please provide some examples.
I tried a small program:
logger.info("Job Queue Handler::::::::::" + new Date());
try {
}catch(Exception e){
logger.info("Activity Object Not Found Exception so message should be Re-queued the Message::::::::::::::");
}
factory.setErrorHandler(new ConditionalRejectingErrorHandler(cause ->{
return cause instanceof XMLException;
}));
The message is not requeued for a different exception, even with
factory.setDefaultRequeueRejected(true)
09:46:38,854 ERROR [stderr] (SimpleAsyncTaskExecutor-1)
org.activiti.engine.ActivitiObjectNotFoundException: no processes deployed with key 'WF89012'
09:46:39,102 INFO [com.example.bip.rabbitmq.handler.ErrorQueueHandler] (SimpleAsyncTaskExecutor-1) Received from Error Queue: {ERROR=Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Transaction marked as rollbackOnly}
| See the documentation.
By default, (with defaultRequeueRejected=true) the container will ack the message (causing it to be removed) if the listener exits normally or reject (and requeue) it if the listener throws an exception.
If the listener (or error handler) throws an AmqpRejectAndDontRequeueException, the default behavior is overridden and the message is discarded (or routed to a DLX/DLQ if so configured) - the container calls basicReject(false) instead of basicReject(true).
So, if your validation fails, throw an AmqpRejectAndDontRequeueException. Or, configure your listener with a custom error handler to convert your exception to an AmqpRejectAndDontRequeueException.
That is described in this answer.
If you really want to take responsibility for acking yourself, set the acknowledge mode to MANUAL and use a ChannelAwareMessageListener or this technique if you are using a @RabbitListener.
But most people just let the container take care of things (once they understand what's going on). Generally, using manual acks is for special use cases, such as deferring acks, or early acking.
EDIT
There was a mistake in the answer I pointed you to (now fixed); you have to look at the cause of the ListenerExecutionFailedException. I just tested this and it works as expected...
@SpringBootApplication
public class So39530787Application {
private static final String QUEUE = "So39530787";
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So39530787Application.class, args);
RabbitTemplate template = context.getBean(RabbitTemplate.class);
template.convertAndSend(QUEUE, "foo");
template.convertAndSend(QUEUE, "bar");
template.convertAndSend(QUEUE, "baz");
So39530787Application bean = context.getBean(So39530787Application.class);
bean.latch.await(10, TimeUnit.SECONDS);
System.out.println("Expect 1 foo:" + bean.fooCount);
System.out.println("Expect 3 bar:" + bean.barCount);
System.out.println("Expect 1 baz:" + bean.bazCount);
context.close();
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setErrorHandler(new ConditionalRejectingErrorHandler(
t -> t instanceof ListenerExecutionFailedException && t.getCause() instanceof FooException));
return factory;
}
@Bean
public Queue queue() {
return new Queue(QUEUE, false, false, true);
}
private int fooCount;
private int barCount;
private int bazCount;
private final CountDownLatch latch = new CountDownLatch(5);
@RabbitListener(queues = QUEUE)
public void handle(String in) throws Exception {
System.out.println(in);
latch.countDown();
if ("foo".equals(in) && ++this.fooCount < 3) {
throw new FooException();
}
else if ("bar".equals(in) && ++this.barCount < 3) {
throw new BarException();
}
else if ("baz".equals(in)) {
this.bazCount++;
}
}
@SuppressWarnings("serial")
public static class FooException extends Exception { }
@SuppressWarnings("serial")
public static class BarException extends Exception { }
}
Result:
Expect 1 foo:1
Expect 3 bar:3
Expect 1 baz:1
| RabbitMQ | 39,530,787 | 19 |
I have a Spring AMQP message listener running.
public class ConsumerService implements MessageListener {
@Autowired
RabbitTemplate rabbitTemplate;
@Override
public void onMessage(Message message) {
try {
testService.process(message); //This process method can throw Business Exception
} catch (BusinessException e) {
//Here we can just log the exception. How the retry attempt is made?
} catch (Exception e) {
//Here we can just log the exception. How the retry attempt is made?
}
}
}
As you can see, an exception could be thrown during processing. I want to retry because of a particular error caught in the catch block. I cannot throw a checked exception in onMessage.
How do I tell RabbitMQ that there was an exception and that it should retry?
| Since onMessage() doesn't allow throwing checked exceptions, you can wrap the exception in a RuntimeException and re-throw it.
try {
testService.process(message);
} catch (BusinessException e) {
throw new RuntimeException(e);
}
Note however that this may result in the message being redelivered indefinitely. Here is how this works:
RabbitMQ supports rejecting a message and asking the broker to requeue it. This is shown here. But RabbitMQ doesn't natively have a mechanism for retry policy, e.g. setting max retries, delay, etc.
When using Spring AMQP, "requeue on reject" is the default option. Spring's SimpleMessageListenerContainer will by default do this when there is an unhandled exception. So in your case you just need to re-throw the caught exception. Note however that if you cannot process a message and you always throw the exception this will be re-delivered indefinitely and will result in an infinite loop.
You can override this behaviour per message by throwing a AmqpRejectAndDontRequeueException exception, in which case the message will not be requeued.
You can also switch off the "requeue on reject" behavior of SimpleMessageListenerContainer entirely by setting
container.setDefaultRequeueRejected(false)
When a message is rejected and not requeued it will either be lost or transferred to a DLQ, if one is set in RabbitMQ.
If you need a retry policy with max attempts, delay, etc., the easiest approach is to set up a Spring "stateless" RetryOperationsInterceptor, which will do all retries within the thread (using Thread.sleep()) without rejecting the message on each retry (so without going back to RabbitMQ for each retry). When retries are exhausted, by default a warning will be logged and the message will be consumed. If you want to send it to a DLQ you will need either a RepublishMessageRecoverer or a custom MessageRecoverer that rejects the message without requeuing (in the latter case you should also set up a RabbitMQ DLQ on the queue). Example with the default message recoverer:
container.setAdviceChain(new Advice[] {
org.springframework.amqp.rabbit.config.RetryInterceptorBuilder
.stateless()
.maxAttempts(5)
.backOffOptions(1000, 2, 5000)
.build()
});
This obviously has the drawback that you will occupy the Thread for the entire duration of the retries. You also have the option to use a "stateful" RetryOperationsInterceptor, which will send the message back to RabbitMQ for each retry, but the delay will still be implemented with Thread.sleep() within the application, plus setting up a stateful interceptor is a bit more complicated.
Therefore, if you want retries with delays without occupying a Thread you will need a much more involved custom solution using TTL on RabbitMQ queues. If you don't want exponential backoff (so delay doesn't increase on each retry) it's a bit simpler. To implement such a solution you basically create another queue on rabbitMQ with arguments: "x-message-ttl": <delay time in milliseconds> and "x-dead-letter-exchange":"<name of the original queue>". Then on the main queue you set "x-dead-letter-exchange":"<name of the queue with the TTL>". So now when you reject and don't requeue a message RabbitMQ will redirect it to the second queue. When TTL expires it will be redirected to the original queue and thus redelivered to the application. So now you need a retry interceptor that rejects the message to RabbitMQ after each failure and also keeps track of the retry count. To avoid the need to keep state in the application (because if your application is clustered you need to replicate state) you can calculate the retry count from the x-death header that RabbitMQ sets. See more info about this header here. So at that point implementing a custom interceptor is easier than customising the Spring stateful interceptor with this behaviour.
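To make the last option more concrete, the broker-side topology could be declared roughly like this (a minimal sketch only - the queue/exchange names and the 5-second TTL are made up, and it is shown with Python/pika purely for brevity; the x-message-ttl / x-dead-letter-exchange arguments are what matter and can just as well be declared from Spring AMQP):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Retry exchange that receives rejected messages from the work queue.
channel.exchange_declare(exchange="retry-exchange", exchange_type="fanout", durable=True)

# Main work queue: rejected (not requeued) messages are dead-lettered to the retry exchange.
channel.queue_declare(
    queue="work",
    durable=True,
    arguments={"x-dead-letter-exchange": "retry-exchange"},
)

# Retry queue: holds messages for 5 seconds, then dead-letters them to the
# default exchange with routing key "work", i.e. back onto the original queue.
channel.queue_declare(
    queue="work.retry",
    durable=True,
    arguments={
        "x-message-ttl": 5000,
        "x-dead-letter-exchange": "",
        "x-dead-letter-routing-key": "work",
    },
)
channel.queue_bind(queue="work.retry", exchange="retry-exchange")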
Also check the section about retries in the Spring AMQP reference.
| RabbitMQ | 36,979,840 | 19 |
I have a RabbitMQ 3.4.2 instance with a web management plugin installed.
When I push the message {'operationId': 194} to the queue using Python's kombu package, the message is read on the other end as a dictionary.
However, when I send the message using the web console:
I get the following error on the receiving end:
operation_id = payload['operationId']
TypeError: string indices must be integers
I have tried adding a content-type header and property, with no success.
Since the reader code is the same, it means that the web sender does not mark the sent message as a JSON / dictionary payload, and therefore it is read as a string on the other end.
Any idea how to mark a message as a JSON message using the RabbitMQ web console?
| I had to use content_type instead of content-type (an underscore instead of a hyphen).
This is a pretty questionable design decision, because the standard everybody knows is content-type.
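For reference, in the management UI's "Publish message" form the property has to be entered with the underscored name, for example:
content_type = application/json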
| RabbitMQ | 34,200,756 | 19 |
I need to limit the rate of consuming messages from rabbitmq queue.
I have found many suggestions, but most of them propose using the prefetch option. However, this option doesn't do what I need: even if I set prefetch to 1, the rate is about 6000 messages/sec, which is too many for the consumer.
I need to limit for example about 70 to 200 messages per second. This means consuming one message every 5-14ms. No simultaneous messages.
I'm using Node.JS with amqp.node library.
| Implementing a token bucket might help:
https://en.wikipedia.org/wiki/Token_bucket
You can write a producer that produces to the "token bucket queue" at a fixed rate with a TTL on the message (maybe expires after a second?) or just set a maximum queue size equal to your rate per second. Consumers that receive a "normal queue" message must also receive a "token bucket queue" message in order to process the message effectively rate limiting the application.
NodeJS + amqplib Example:
var queueName = 'my_token_bucket';
rabbitChannel.assertQueue(queueName, {durable: true, messageTtl: 1000, maxLength: bucket.ratePerSecond});
writeToken();
function writeToken() {
rabbitChannel.sendToQueue(queueName, new Buffer(new Date().toISOString()), {persistent: true});
setTimeout(writeToken, 1000 / bucket.ratePerSecond);
}
| RabbitMQ | 29,226,590 | 19 |
I've been learning about RabbitMQ's various topologies; however, I couldn't find any reference to dynamic queue creation (aka Declare Queue) issued by a producer.
The idea would be to create queues dynamically depending on a particular event (e.g a HTTP request). The queue would be temporary with a TTL set and named after the event ID.
A consumer could then, subscribe to the topic "event.*" and merge all the messages related to it.
Example:
HTTP POST "Create user" received
producer creates a queue user.ID
push all the subsequent messages concerning the user in his queue (e.g "Add username", "Add email" ...)
worker gets assigned to a random queue "user.*" and merges everything into a user account
queue is automatically deleted after the TTL expired
Now, is this scenario feasible with RabbitMQ?
| Essentially, what you want to do is use RabbitMQ to buffer messages waiting in a set of queues (which is what a message queuing system does by definition). :)
Assuming you know what your queues are from the consuming side, you won't have any issues. There is no constraint that a producer can't create a queue. As a caveat, when queues expire, all messages in the queue are discarded (or optionally, they can be set to go to a dead-letter queue).
What code have you tried?
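For illustration, the producer side of the pattern described in the question could look roughly like this (a sketch only - Python/pika, with made-up names, a made-up event ID and an arbitrary 60-second expiry):
import pika

event_id = "1234"  # hypothetical event/user ID
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declaring the per-event queue is idempotent, so the producer can simply
# declare it before every publish; x-expires removes it once it goes unused.
channel.queue_declare(
    queue="user.%s" % event_id,
    durable=True,
    arguments={"x-expires": 60000},
)

# Publish follow-up messages for this event straight to its queue
# (the default exchange routes by queue name).
channel.basic_publish(exchange="", routing_key="user.%s" % event_id, body="Add username")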
Edit
Upon further clarification (from your comment) - you are looking for "wildcard consuming" vs wildcard publishing. RabbitMQ does not support such a topology at the present time (this post asks for a similar feature).
What you would need to do is periodically enumerate the queues (using the RabbitMQ API); following that, your app could decide which ones to consume from. When a queue is deleted, the consumer is automatically closed.
Special Note
It should be understood that what is being asked here is an anti-pattern. The typical behavior of a system using queues is to route messages to queues based upon content. Thus, a properly-orchestrated system would have a set of workers operating on one or more statically-defined queues. Different workers may take different queues, depending upon specialization. When a series of interactions results in messages being published to the queue, the workers assigned to the queues will handle the messages in a first-come-first-served fashion (but, as this post discusses, order cannot be guaranteed with multiple consumers). The desired system behavior then emerges as a composition of workers performing various functions operating on queues.
| RabbitMQ | 21,265,242 | 19 |
I have created a simple publisher and a consumer which subscribes to the queue using basic.consume.
My consumer acknowledges the messages when the job runs without an exception. Whenever I run into an exception I don't ack the message and return early. Only the acknowledged messages disappear from the queue, so that's working correctly.
Now I want the consumer to pick up the failed messages again, but the only way to reconsume those messages is by restarting the consumer.
How do I need to approach this use case?
Setup code
$channel = new AMQPChannel($connection);
$exchange = new AMQPExchange($channel);
$exchange->setName('my-exchange');
$exchange->setType('fanout');
$exchange->declare();
$queue = new AMQPQueue($channel);
$queue->setName('my-queue');
$queue->declare();
$queue->bind('my-exchange');
Consumer code
$queue->consume(array($this, 'callback'));
public function callback(AMQPEnvelope $msg)
{
try {
//Do some business logic
} catch (Exception $ex) {
//Log exception
return;
}
return $queue->ack($msg->getDeliveryTag());
}
Producer code
$exchange->publish('message');
| If a message was not acknowledged and the application fails, it will be redelivered automatically and the redelivered property on the envelope will be set to true (unless you consume with the no-ack = true flag).
UPD:
You have to nack the message with the requeue flag in your catch block:
try {
//Do some business logic
} catch (Exception $ex) {
//Log exception
return $queue->nack($msg->getDeliveryTag(), AMQP_REQUEUE);
}
Beware of infinitely nacked messages: a redelivery count is not implemented in RabbitMQ or in the AMQP protocol at all.
If you don't want to mess with such messages and simply want to add some delay, you may add a sleep() or usleep() before the nack method call, but it is not a good idea at all.
There are multiple techniques to deal with cycle redeliver problem:
1. Rely on Dead Letter Exchanges
pros: reliable, standard, clear
cons: require additional logic
2. Use per message or per queue TTL
pros: easy to implement, also standard, clear
cons: with long queues you may lose some messages
Examples (note that for the queue TTL we pass a number, and for the message TTL anything that is a numeric string):
2.1 Per message ttl:
$queue = new AMQPQueue($channel);
$queue->setName('my-queue');
$queue->declareQueue();
$queue->bind('my-exchange');
$exchange->publish(
'message at ' . microtime(true),
null,
AMQP_NOPARAM,
array(
'expiration' => '1000'
)
);
2.2. Per queue ttl:
$queue = new AMQPQueue($channel);
$queue->setName('my-queue');
$queue->setArgument('x-message-ttl', 1000);
$queue->declareQueue();
$queue->bind('my-exchange');
$exchange->publish('message at ' . microtime(true));
3. Hold a redelivery count (or a remaining-redeliveries number, aka hop limit or TTL in the IP stack) in the message body or headers
pros: gives you extra control over message lifetime at the application level
cons: significant overhead, since you have to modify the message and publish it again; application specific; not as clear
Code:
$queue = new AMQPQueue($channel);
$queue->setName('my-queue');
$queue->declareQueue();
$queue->bind('my-exchange');
$exchange->publish(
'message at ' . microtime(true),
null,
AMQP_NOPARAM,
array(
'headers' => array(
'ttl' => 100
)
)
);
$queue->consume(
function (AMQPEnvelope $msg, AMQPQueue $queue) use ($exchange) {
$headers = $msg->getHeaders();
echo $msg->isRedelivery() ? 'redelivered' : 'origin', ' ';
echo $msg->getDeliveryTag(), ' ';
echo isset($headers['ttl']) ? $headers['ttl'] : 'no ttl' , ' ';
echo $msg->getBody(), PHP_EOL;
try {
//Do some business logic
throw new Exception('business logic failed');
} catch (Exception $ex) {
//Log exception
if (isset($headers['ttl'])) {
// with ttl logic
if ($headers['ttl'] > 0) {
$headers['ttl']--;
$exchange->publish($msg->getBody(), $msg->getRoutingKey(), AMQP_NOPARAM, array('headers' => $headers));
}
return $queue->ack($msg->getDeliveryTag());
} else {
// without ttl logic
return $queue->nack($msg->getDeliveryTag(), AMQP_REQUEUE); // or drop it without requeue
}
}
return $queue->ack($msg->getDeliveryTag());
}
);
There may be other ways to better control the message redelivery flow.
Conclusion: there is no silver-bullet solution. You have to decide which solution fits your needs best, or come up with something else, but don't forget to share it here ;)
| RabbitMQ | 17,654,475 | 19 |
How can I use two different Celery projects which consume messages from a single RabbitMQ installation?
Generally, these scripts work fine if I use a separate RabbitMQ for each of them. But on the production machine, I need to share the same RabbitMQ backend between them.
Note: Due to some constraints, I cannot merge the new project into the existing one, so there will be two different projects.
| RabbitMQ has the ability to create virtual message brokers called virtual hosts or vhosts. Each one is essentially a mini-RabbitMQ server with its own queues. This lets you safely use one RabbitMQ server for multiple applications.
rabbitmqctl add_vhost command creates a vhost.
By default Celery uses the / default vhost:
celery worker --broker=amqp://guest@localhost//
But you can use any custom vhost:
celery worker --broker=amqp://guest@localhost/myvhost
Examples:
rabbitmqctl add_vhost new_host
rabbitmqctl add_vhost /another_host
celery worker --broker=amqp://guest@localhost/new_host
celery worker --broker=amqp://guest@localhost//another_host
| RabbitMQ | 12,209,652 | 19 |
I just switched from ForkPool to gevent with concurrency (5) as the pool method for Celery workers running in Kubernetes pods. After the switch I've been getting a non-recoverable error in the worker:
amqp.exceptions.PreconditionFailed: (0, 0): (406) PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more
The broker logs gives basically the same message:
2021-11-01 22:26:17.251 [warning] <0.18574.1> Consumer None4 on channel 1 has timed out waiting for delivery acknowledgement. Timeout used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more
I have the CELERY_ACK_LATE set up, but was not familiar with the necessity to set a timeout for the acknowledgement period. And that never happened before using processes. Tasks can be fairly long (60-120 seconds sometimes), but I can't find a specific setting to allow that.
I've read a post on another forum about a user who set the timeout in the broker configuration to a huge number (like 24 hours) and was still having the same problem, so that makes me think there may be something else related to the issue.
Any ideas or suggestions on how to make worker more resilient?
| The accepted answer is the correct answer. However, if you have an existing RabbitMQ server running and do not want to restart it, you can dynamically set the configuration value by running the following command on the RabbitMQ server:
rabbitmqctl eval 'application:set_env(rabbit, consumer_timeout, 36000000).'
This will set the new timeout to 10 hrs (36000000ms). For this to take effect, you need to restart your workers though. Existing worker connections will continue to use the old timeout.
You can check the current configured timeout value as well:
rabbitmqctl eval 'application:get_env(rabbit, consumer_timeout).'
If you are running RabbitMQ via Docker image, here's how to set the value: Simply add -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-rabbit consumer_timeout 36000000" to your docker run OR set the environment RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS to "-rabbit consumer_timeout 36000000".
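Alternatively, if a broker restart is acceptable, the same value can be set statically in the broker configuration, e.g. in advanced.config (10 hours again, purely as an example):
[
  {rabbit, [
    {consumer_timeout, 36000000}
  ]}
].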
Hope this helps!
| RabbitMQ | 69,828,547 | 18 |
I have a .NET microservice receiving messages using the RabbitMQ client, and I need to test the following:
1- consumer is successfully connected to rabbitMq host.
2- consumer is listening to queue.
3- consumer is receiving messages successfully.
To achieve the above, I have created a sample application that sends messages, and I am debugging the consumer to be sure that it is receiving messages.
How can I automate this test and include it in my microservice's CI?
I am thinking of including my sample app in my CI so I can fire a message and then run a consumer unit test that waits a specific time and passes if the message is received, but this seems like bad practice to me because the test will not start until a few seconds after the message is fired.
Another way I am thinking of is firing the sample application from the unit test itself, but if the sample app fails to work, that would look like a fault in the service.
Are there any best practices for integration testing of microservices connecting through RabbitMQ?
| I have built many such tests. I have thrown up some basic code on GitHub here with .NET Core 2.0.
You will need a RabbitMQ cluster for these automated tests. Each test starts by deleting the queue to ensure that no messages already exist. Pre-existing messages from another test will break the current test.
I have a simple helper to delete the queue. In my applications, they always declare their own queues, but if that is not your case then you'll have to create the queue again and any bindings to any exchanges.
public class QueueDestroyer
{
public static void DeleteQueue(string queueName, string virtualHost)
{
var connectionFactory = new ConnectionFactory();
connectionFactory.HostName = "localhost";
connectionFactory.UserName = "guest";
connectionFactory.Password = "guest";
connectionFactory.VirtualHost = virtualHost;
var connection = connectionFactory.CreateConnection();
var channel = connection.CreateModel();
channel.QueueDelete(queueName);
connection.Close();
}
}
I have created a very simple consumer example that represents your microservice. It runs in a Task until cancellation.
public class Consumer
{
private IMessageProcessor _messageProcessor;
private Task _consumerTask;
public Consumer(IMessageProcessor messageProcessor)
{
_messageProcessor = messageProcessor;
}
public void Consume(CancellationToken token, string queueName)
{
_consumerTask = Task.Run(() =>
{
var factory = new ConnectionFactory() { HostName = "localhost" };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
channel.QueueDeclare(queue: queueName,
durable: false,
exclusive: false,
autoDelete: false,
arguments: null);
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
var body = ea.Body;
var message = Encoding.UTF8.GetString(body);
_messageProcessor.ProcessMessage(message);
};
channel.BasicConsume(queue: queueName,
autoAck: false,
consumer: consumer);
while (!token.IsCancellationRequested)
Thread.Sleep(1000);
}
}
});
}
public void WaitForCompletion()
{
_consumerTask.Wait();
}
}
The consumer has an IMessageProcessor interface that will do the work of processing the message. In my integration test I created a fake. You would probably use your preferred mocking framework for this.
The test publisher publishes a message to the queue.
public class TestPublisher
{
public void Publish(string queueName, string message)
{
var factory = new ConnectionFactory() { HostName = "localhost", UserName="guest", Password="guest" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
var body = Encoding.UTF8.GetBytes(message);
channel.BasicPublish(exchange: "",
routingKey: queueName,
basicProperties: null,
body: body);
}
}
}
My example test looks like this:
[Fact]
public void If_SendMessageToQueue_ThenConsumerReceives()
{
// ARRANGE
QueueDestroyer.DeleteQueue("queueX", "/");
var cts = new CancellationTokenSource();
var fake = new FakeProcessor();
var myMicroService = new Consumer(fake);
// ACT
myMicroService.Consume(cts.Token, "queueX");
var producer = new TestPublisher();
producer.Publish("queueX", "hello");
Thread.Sleep(1000); // make sure the consumer will have received the message
cts.Cancel();
// ASSERT
Assert.Equal(1, fake.Messages.Count);
Assert.Equal("hello", fake.Messages[0]);
}
My fake is this:
public class FakeProcessor : IMessageProcessor
{
public List<string> Messages { get; set; }
public FakeProcessor()
{
Messages = new List<string>();
}
public void ProcessMessage(string message)
{
Messages.Add(message);
}
}
Additional advice is:
If you can append randomized text to your queue and exchange names on each test run then do so to avoid concurrent tests interfering with each other
I have some helpers in the code for declaring queues, exchanges and bindings also, if your applications don't do that.
Write a connection killer class that will force close connections and check your applications still work and can recover. I have code for that, but not in .NET Core. Just ask me for it and I can modify it to run in .NET Core.
In general, I think you should avoid including other microservices in your integration tests. If you send a message from one service to another and expect a message back for example, then create a fake consumer that can mock the expected behaviour. If you receive messages from other services then create fake publishers in your integration test project.
| RabbitMQ | 50,176,793 | 18 |
I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower containers. The airflow.cfg file is used to configure Airflow.
Where I am using broker_url = amqp://user:[email protected]:5672/ and celery_result_backend = amqp://user:[email protected]:5672/
My docker compose file is as follows
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno
111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
| From within your airflow containers, you should be able to connect to the service rabbit1. So all you need to do is to change amqp://user:**@localhost:5672//: to amqp://user:**@rabbit1:5672//: and it should work.
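Concretely, with the credentials from your docker-compose file, the airflow.cfg entries would become something like:
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/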
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, generally it is not recommended to build images inside docker-compose.
| RabbitMQ | 44,710,248 | 18 |
I'm working on a personal project which is to transform a monolithic web application into microservices (each service has its own database).
At this moment the monolithic backend is made with NodeJS and is able to reply to REST requests.
When I began to split the application into multiple services, I faced the following problem: how do I make them communicate nicely?
First I tried to use REST calls, with the following example:
"Register Service" inserts interesting things into its database, then forward (HTTP POST) the user information to the "User Service" in order to persist it into the "user" database.
From this example we have 2 services thus 2 databases.
I realized at this point that it wasn't a good choice, because my "Register Service" depends on "User Service". They are kind of coupled, and this is an anti-pattern of microservice design (from what I have read).
The second idea was to use a message broker like RabbitMQ. "Register Service" still insert interesting things into its own database and publish a message in a queue with the user information as data. "User Service" consumes this message and persists data into its "user" database. By using this conception, both of the services are fully isolated and could be a great idea.
BUT, what about the response to send to the client (who made the request to "Register Service")? With the first idea we could send "200, everything's OK!" or 400; that is not a problem. With the second idea, we don't know whether the consumer ("User Service") persisted the user data, so what do I need to reply to the client?
I have the same problem with the shop side of the web application. The client posts the product he wants to buy to "Order Service". This service needs to check the virtual money he has in "User Service", then forward the product details to "Deliver Service" if the user has enough money. How can I do that with fully isolated services?
I don't want to spend the client's HTTP request time doing an async request/reply over the message broker.
I hope some of you will enlighten me.
| Tom suggested a pretty good link, where the top-voted answer with its reasoning and solution is the one you can rely on. Your specific problem may be rooted in the fact that Register Service and User Service are separate. Maybe they should not be?
Ideally, Register service should publish "UserRegistered" event to a bus and return 200 and nothing more. It should not care (know) at all about any subscribers to that event.
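As a minimal illustration of that idea (shown with Python/pika purely for illustration - the same applies with amqplib in Node; the exchange name and payload are made up):
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Fan the event out to whoever cares (User Service, mailers, ...);
# the publisher neither knows nor cares who the subscribers are.
channel.exchange_declare(exchange="user-events", exchange_type="fanout", durable=True)
channel.basic_publish(
    exchange="user-events",
    routing_key="",
    body=json.dumps({"event": "UserRegistered", "userId": 42}),
    properties=pika.BasicProperties(content_type="application/json", delivery_mode=2),
)
# ...then immediately return HTTP 200 to the caller.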
| RabbitMQ | 41,636,566 | 18 |
We have a wrapper library around RabbitMQ at my workplace, created by someone who no longer works here. I'm designing a new system using Rabbit, and am working out the best approach for declaring queues, exchanges and bindings. Our Rabbit architecture has a few federated global zones, and each zone has multiple Rabbit nodes.
The wrapper code to publish messages and subscribe to queues re-declares the relevant exchanges, queues and bindings each time. My concern is that this may introduce significant latency into every message publish, especially if it needs to wait for confirmation the queue/exchange exists in the remote global zones. I expect the benchmark of millions of messages a second don't re-declare the exchange for each publish.
In short, this approach seems a bit wasteful and paranoid to me, but perhaps I'm missing something.
So I have a few questions:
Is re-declaring the queues and exchanges a significant performance hit, given global federation?
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
|
Is re-declaring the queues and exchanges a significant performance hit
it can be for a very large volume of messages
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
"good approach" - no.
"effective" at preventing disappeared exchanges / queues / bindings from causing problems, yes... but it's not a good thing to do, in most cases
(maybe ok if you only send a message very infrequently, there is a real cause for concern about the topology being wiped clean)
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
this is my general approach.
it opens the possibility of topology being destroyed and you not knowing it. it comes down to whether or not you think this will really happen.
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
there's nothing wrong with pre-defined topology, but it misses a lot of the power and flexibility of rabbitmq and the amqp protocol.
many messaging systems require predefined topologies and specialized tools to manage the topology. amqp is quite different in that it allows you to define the topology as needed.
if you deal with a static topology, then this might be a good option for you
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
i would crash the app and report it through whatever error reporting mechanism you are using.
having a topology change is usually something important, and done for a reason. if the exchange or queue declaration needs to change, there is probably a good reason for it and the code should not continue with the old declaration.
| RabbitMQ | 35,445,391 | 18 |
RabbitMQ allows you to "heartbeat" a connection, i.e. from time to time the client and the server check (using empty messages) that the other party is still there and available. So far, so good.
Unfortunately, I was not able to find a place in the documentation where a suggestion is made as to what a reasonable value for this is.
Obviously, it should not be too often (traffic), but also not too rare (proxies, …). Any suggestions?
Is 15 seconds fine? 30? 60? …?
| This answer is for RabbitMQ < 3.5.5; for newer versions see the answer from @bmaupin.
It depends on your application needs. Out of the box it is 10 min for RabbitMQ. If you fail to ack a heartbeat twice (20 min of inactivity), the connection will be closed immediately without sending any connection.close method or any error from the broker side.
The case for using heartbeats is firewalls that close connections which have been inactive for a long time, or other network settings that don't allow you to keep idle connections open.
In fact, a heartbeat is not a must. From the RabbitMQ config doc:
heartbeat
Value representing the heartbeat delay, in seconds, that the server sends in the connection.tune frame. If set to 0, heartbeats are disabled. Clients might not follow the server suggestion, see the AMQP reference for more details. Disabling heartbeats might improve performance in situations with a great number of connections, but might lead to connections dropping in the presence of network devices that close inactive connections.
Default: 580
Note that having a heartbeat interval that is too short may result in significant network overhead. Keep in mind that heartbeat frames are only sent when there is no other activity on the connection for a heartbeat interval.
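For example, with a Python client such as pika the value is negotiated via the connection parameters (60 s here is just an illustrative choice; older pika versions call the parameter heartbeat_interval instead of heartbeat):
import pika

params = pika.ConnectionParameters(host="localhost", heartbeat=60)
connection = pika.BlockingConnection(params)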
| RabbitMQ | 25,984,602 | 18 |
Ok here is an overview of what's going on:
M <-- Message with unique id of 1234
|
+-Start Queue
|
|
| <-- Exchange
/|\
/ | \
/ | \ <-- bind to multiple queues
Q1 Q2 Q3
\ | / <-- start of the problem is here
\ | /
\ | /
\|/
|
Q4 <-- Queues 1,2 and 3 must finish first before Queue 4 can start
|
C <-- Consumer
So I have an exchange that pushes to multiple queues; each queue has a task, and only once all tasks are completed can Queue 4 start.
So message with unique id of 1234 gets sent to the exchange, the exchange routes it to all the task queues ( Q1, Q2, Q3, etc... ), when all the tasks for message id 1234 have completed, run Q4 for message id 1234.
How can I implement this?
Using Symfony2, RabbitMQBundle and RabbitMQ 3.x
Resources:
http://www.rabbitmq.com/tutorials/amqp-concepts.html
http://www.rabbitmq.com/tutorials/tutorial-six-python.html
UPDATE #1
Ok I think this is what I'm looking for:
https://github.com/videlalvaro/Thumper/tree/master/examples/parallel_processing
RPC with Parallel Processing, but how do I set the Correlation Id to be my unique id to group the messages and also identify what queue?
| You need to implement this: http://www.eaipatterns.com/Aggregator.html but the RabbitMQBundle for Symfony doesn't support that so you would have to use the underlying php-amqplib.
A normal consumer callback from the bundle will get an AMQPMessage. From there you can access the channel and manually publish to whatever exchanges comes next in your "pipes and filters" implementation
| RabbitMQ | 13,861,459 | 18 |
I've got rabbitmq 2.8.2 set up with the web management interface running. The Queues and Exchanges show no data.
rabbitmqctl list_queues works and shows my queues.
I've done rabbitmqctl stop_app, start_app.. and also service rabbitmq-server restart.
Any idea how to get the queue & exchange details to populate?
| I had removed the guest user and created a new user for myself. My new user did not have permission to access the / vhost. Adding that permission fixed my issue.
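For reference, granting a user full permissions on the default vhost looks like this (replace myuser with your own user name):
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"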
| RabbitMQ | 10,939,545 | 18 |
This is probably a very simple answer, but I'm not seeing an obvious solution in the MassTransit docs or forums.
When you have some messages that have been moved over to the error queue in RabbitMQ, what's the best mechanism for getting them back into the processing queue? Also, is there any built-in logging of why they got moved over there in the first place?
| Enable logging with the right plugin (NLog, log4net, etc) and failures should be in the log, assuming the right log level is enabled.
There is no great way to move messages back. Dru has worked on a busdriver tool https://github.com/MassTransit/MassTransit/tree/master/src/Tools/BusDriver. This, I believe, will allow you move items from one queue to another - but it's not a tool I've used. I have historically written tools that are related to business processes to move items back to the proper queue for processing that ops will manage.
| RabbitMQ | 10,502,905 | 18 |
I know that we can do this to list queue in a rabbitmq:
rabbitmqctl list_queues
but how can I do this via pika?
| No.
Pika is an AMQP library.
If you want to manage an MQ Broker, then you need an MQ Broker management tool. Fortunately, RabbitMQ comes with such a tool if you install a recent version of RabbitMQ such as 2.7.1 and you install the RabbitMQ management plugins. That gives you a web GUI as well as a RESTful API that you can use in your scripts.
But it's all outside of the scope of AMQP itself.
http://www.rabbitmq.com/management.html for the management plugin with a web GUI and http://www.rabbitmq.com/management-cli.html for a CLI type of interface.
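For example, once the management plugin is enabled you can list queues from a Python script over the HTTP API (default port 15672; the guest/guest credentials here are just the defaults):
import requests

resp = requests.get("http://localhost:15672/api/queues", auth=("guest", "guest"))
for q in resp.json():
    print(q["name"], q["messages"])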
| RabbitMQ | 9,652,295 | 18 |
Does RabbitMQ call the callback function for a consumer when it has some message for it, or does the consumer have to poll the RabbitMQ client?
So on the consumer side, if there is a PHP script, can RabbitMQ call it and pass the message/parameters to it? E.g. if a rating is submitted on shard 1 and the aggregateRating table is on shard 2, would the RabbitMQ consumer on shard 2 trigger the script, say aggRating.php, and pass the parameters that were inserted on shard 1?
| The AMQPQueue::consume method is now a "proper" implementation of basic.consume as of version 1.0 of the PHP AMQP library (http://www.php.net/manual/en/amqpqueue.consume.php). Unfortunately, since PHP is a single-threaded language, you can't do other things while waiting for a message in the same process space. If you call AMQPQueue::consume and pass it a callback, your entire application will block and wait for the next message to be sent by the broker, at which point it will call the provided callback function. If you want a non-blocking method, you will have to use AMQPQueue::get (http://www.php.net/manual/en/amqpqueue.get.php), which will poll the server for a message, and return a boolean FALSE if there is no message.
I disagree with scvatex's suggestion to use a separate language for using a "push" approach to this problem though. PHP is not IO driven, and therefore using a separate language to call a PHP script when a message arrives seems like unnecessary complexity: why not just use AMQPQueue::consume and let the process block (wait for a message), and either put all the logic in the callback or make the callback run a separate PHP script.
We have done the latter at my work as a large scale job processing system so that we can segregate errors and keep the parent job processor running no matter what happens in the children. If you would like a detailed description of how we set this up and some code samples, I would be more than happy to post them.
| RabbitMQ | 9,151,698 | 18 |
Just upgraded to a new version of RabbitMQ -- 2.3.1 -- and now the following error occurs:
PRECONDITION_FAILED unknown delivery tag 1
...followed by the channel closing. This worked on an older RabbitMQ with no client-side changes.
In terms of application behavior:
When App A wants to send an async message to App B and receive an answer from B, this is the algorithm:
App A generates a unique ID and puts it in the message object
Then App A subscribes to a new queue, with both the queue name and routing key equal to the uuid.
App B opens the message, does some calculations and returns the result to the channel with the routing key that it received.
App A gets the answer and closes the queue.
So far everything went really well in 1.7.0. What went wrong in 2.3.1?
When Application A calls basicPublish(), application B immediately throws the following exception:
com.rabbitmq.client.ShutdownSignalException: channel error; reason: {#method<channel.close>(reply-code=406,reply-text=PRECONDITION_FAILED - unknown delivery tag 1,class-id=60,method-id=80),null,""}
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:191)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:159)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:110)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:438)
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; reason: {#method<channel.close>(reply-code=406,reply-text=PRECONDITION_FAILED - unknown delivery tag 1,class-id=60,method-id=80),null,""}
| The only codepath that can cause that exception is through the broker handling a 'basic.ack', so this sounds like a client issue; check the client code.
In particular, check that you aren't ack'ing messages more than once. Doing so is in violation of the AMQP 0-9-1 spec:
A message MUST not be acknowledged more than once. The receiving peer MUST validate that a non-zero delivery-tag refers to a delivered message, and raise a channel exception if this is not the case
A great place to ask such questions is the rabbitmq-discuss mailing list; all the RabbitMQ developers read that list and make a point of not leaving questions unanswered.
It's also worth noting that previous versions of Rabbit were more lax and did not throw an error in this case, but more recent versions do.
| RabbitMQ | 5,075,694 | 18 |
I'm a little confused as to which one I should use. I think either will work, but is one better or more appropriate than the other?
http://github.com/ask/carrot/tree/master
http://github.com/ask/celery/tree/master
| If you need to send/receive messages to/from AMQP message queues, use carrot.
If you want to run scheduled tasks on a number of machines, use celery.
If you're making soup, use both ;-)
| RabbitMQ | 1,102,254 | 18 |
I want to set a message header while sending a message to RabbitMQ.
I am using the code below, but I am confused about how to set the message header in it.
public static <T> void sendMessage(String routingKey,final Object message,Class<T> type){
DefaultClassMapper typeMapper = new DefaultClassMapper();
typeMapper.setDefaultType(type);
Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
converter.setClassMapper(typeMapper);
RabbitTemplate template = new RabbitTemplate(getConnectionFactory());
template.setMessageConverter(converter);
template.convertAndSend(routingKey, message);
}
In the above method I am simply passing a Java POJO object and its type to send. I want to know where I should set the message header here.
How do I read the message properties at the listener end?
| Java 8:
template.convertAndSend(routingKey, message, m -> {
m.getMessageProperties().getHeaders().put("foo", "bar");
m.getMessageProperties().setPriority(priority);
return m;
});
Java 6,7:
template.convertAndSend(routingKey, message, new MessagePostProcessor() {
@Override
public Message postProcessMessage(Message m) throws AmqpException {
m.getMessageProperties().getHeaders().put("foo", "bar");
m.getMessageProperties().setPriority(priority);
return m;
}
});
| RabbitMQ | 39,853,393 | 17 |
I have two separate RabbitMQ instances. I'm trying to find the best way to listen to events from both.
For example, I can consume events on one with the following:
credentials = pika.PlainCredentials(user, password)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="host1", credentials=credentials))
channel = connection.channel()
result = channel.queue_declare(exclusive=True)
channel.queue_bind(result.method.queue, exchange="my-exchange", routing_key='*.*.*.*.*')
channel.basic_consume(callback_func, result.method.queue, no_ack=True)
channel.start_consuming()
I have a second host, "host2", that I'd like to listen to as well. I thought about creating two separate threads to do this, but from what I've read, pika isn't thread safe. Is there a better way? Or would creating two separate threads, each listening to a different Rabbit instance (host1, and host2) be sufficient?
| The answer to "what is the best way" depends heavily on your usage pattern of queues and what you mean by "best". Since I can't comment on questions yet, I'll just try to suggest some possible solutions.
In each example I'm going to assume exchange is already declared.
Threads
You can consume messages from two queues on separate hosts in single process using pika.
You are right - as its own FAQ states, pika is not thread safe, but it can be used in multi-threaded manner by creating connections to RabbitMQ hosts per thread. Making this example run in threads using threading module looks as follows:
import pika
import threading
class ConsumerThread(threading.Thread):
def __init__(self, host, *args, **kwargs):
super(ConsumerThread, self).__init__(*args, **kwargs)
self._host = host
# Not necessarily a method.
def callback_func(self, channel, method, properties, body):
print("{} received '{}'".format(self.name, body))
def run(self):
credentials = pika.PlainCredentials("guest", "guest")
connection = pika.BlockingConnection(
pika.ConnectionParameters(host=self._host,
credentials=credentials))
channel = connection.channel()
result = channel.queue_declare(exclusive=True)
channel.queue_bind(result.method.queue,
exchange="my-exchange",
routing_key="*.*.*.*.*")
channel.basic_consume(self.callback_func,
result.method.queue,
no_ack=True)
channel.start_consuming()
if __name__ == "__main__":
threads = [ConsumerThread("host1"), ConsumerThread("host2")]
for thread in threads:
thread.start()
I've declared callback_func as a method purely to use ConsumerThread.name while printing message body. It might as well be a function outside the ConsumerThread class.
Processes
Alternatively, you can always just run one process with consumer code per queue you want to consume events.
import pika
import sys
def callback_func(channel, method, properties, body):
print(body)
if __name__ == "__main__":
credentials = pika.PlainCredentials("guest", "guest")
connection = pika.BlockingConnection(
pika.ConnectionParameters(host=sys.argv[1],
credentials=credentials))
channel = connection.channel()
result = channel.queue_declare(exclusive=True)
channel.queue_bind(result.method.queue,
exchange="my-exchange",
routing_key="*.*.*.*.*")
channel.basic_consume(callback_func, result.method.queue, no_ack=True)
channel.start_consuming()
and then run by:
$ python single_consume.py host1
$ python single_consume.py host2 # e.g. on another console
If the work you're doing on messages from queues is CPU-heavy and as long as number of cores in your CPU >= number of consumers, it is generally better to use this approach - unless your queues are empty most of the time and consumers won't utilize this CPU time*.
Async
Another alternative is to involve some asynchronous framework (for example Twisted) and running whole thing in single thread.
You can no longer use BlockingConnection in asynchronous code; fortunately, pika has adapter for Twisted:
from pika.adapters.twisted_connection import TwistedProtocolConnection
from pika.connection import ConnectionParameters
from twisted.internet import protocol, reactor, task
from twisted.python import log
class Consumer(object):
def on_connected(self, connection):
d = connection.channel()
d.addCallback(self.got_channel)
d.addCallback(self.queue_declared)
d.addCallback(self.queue_bound)
d.addCallback(self.handle_deliveries)
d.addErrback(log.err)
def got_channel(self, channel):
self.channel = channel
return self.channel.queue_declare(exclusive=True)
def queue_declared(self, queue):
self._queue_name = queue.method.queue
self.channel.queue_bind(queue=self._queue_name,
exchange="my-exchange",
routing_key="*.*.*.*.*")
def queue_bound(self, ignored):
return self.channel.basic_consume(queue=self._queue_name)
def handle_deliveries(self, queue_and_consumer_tag):
queue, consumer_tag = queue_and_consumer_tag
self.looping_call = task.LoopingCall(self.consume_from_queue, queue)
return self.looping_call.start(0)
def consume_from_queue(self, queue):
d = queue.get()
return d.addCallback(lambda result: self.handle_payload(*result))
def handle_payload(self, channel, method, properties, body):
print(body)
if __name__ == "__main__":
consumer1 = Consumer()
consumer2 = Consumer()
parameters = ConnectionParameters()
cc = protocol.ClientCreator(reactor,
TwistedProtocolConnection,
parameters)
d1 = cc.connectTCP("host1", 5672)
d1.addCallback(lambda protocol: protocol.ready)
d1.addCallback(consumer1.on_connected)
d1.addErrback(log.err)
d2 = cc.connectTCP("host2", 5672)
d2.addCallback(lambda protocol: protocol.ready)
d2.addCallback(consumer2.on_connected)
d2.addErrback(log.err)
reactor.run()
This approach would be even better, the more queues you would consume from and the less CPU-bound the work performing by consumers is*.
Python 3
Since you've mentioned pika, I've restricted myself to Python 2.x-based solutions, because pika is not yet ported.
But in case you would want to move to >=3.3, one possible option is to use asyncio with one of AMQP protocol (the protocol you speak in with RabbitMQ) , e.g. asynqp or aioamqp.
* - please note that these are very shallow tips - in most cases choice is not that obvious; what will be the best for you depends on queues "saturation" (messages/time), what work do you do upon receiving these messages, what environment you run your consumers in etc.; there's no way to be sure other than to benchmark all implementations
| RabbitMQ | 28,550,140 | 17 |
I have a set up to send messages to durable queues from server (NodeJS) and the client (android app) listens to messages on their respective queues (each android device listens to its corresponding queue which is unique).
As per the RabbitMQ documentation, when we try to connect to a queue with an empty name (i.e. "") RabbitMQ generates a random queue with a name starting with "amq.gen-". But nowhere in the client or server code do I see that I am trying to connect to a queue with an empty name, yet I still see a lot of random queues getting generated.
Can anyone help me in understanding what other scenarios might create random queues with name "amq.gen-*"?
| If you are creating a queue with a blank name, a random queue name amq.gen-* will be generated.
If you are connecting to a queue with a blank name then, depending on the method, the name of the queue most recently declared on that channel will be used. If no queue was declared, or the method doesn't support a blank queue name, an error will be thrown.
See queue.declare method and domain.queue-name domain documentation for details.
| RabbitMQ | 22,194,675 | 17 |
It seems PostgreSQL does not allow to create a database table named 'user'. But MySQL will allow to create such a table.
Is that because it is a key word? But Hibernate cannot identify any issue (even if we set the PostgreSQLDialect).
| user is a reserved word and it's usually not a good idea to use reserved words for identifiers (tables, columns).
If you insist on doing that you have to put the table name in double quotes:
create table "user" (...);
But then you always need to use double quotes when referencing the table. Additionally the table name is then case-sensitive. "user" is a different table name than "User".
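A minimal sketch of what that extra quoting looks like in practice (the columns here are illustrative assumptions, not taken from the question):
create table "user" ("user_id" integer, "user_name" text);
-- the double quotes are required on every reference:
insert into "user" ("user_id", "user_name") values (1, 'alice');
select * from "user";
-- and quoting makes the name case-sensitive:
-- select * from "User";   -- fails: relation "User" does not exist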
If you want to save yourself a lot of trouble, use a different name: users, user_account, ...
More details on quoted identifiers can be found in the manual: http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
| PostgreSQL | 22,256,124 | 157 |
I've just got myself a little bit stuck with some SQL. I don't think I can phrase the question brilliantly - so let me show you.
I have two tables, one called person, one called appointment. I'm trying to return the number of appointments a person has (including if they have zero). Appointment contains the person_id and there is a person_id per appointment. So COUNT(person_id) is a sensible approach.
The query:
SELECT person_id, COUNT(person_id) AS "number_of_appointments"
FROM appointment
GROUP BY person_id;
Will correctly return the number of appointments a person_id has. However, a person who has 0 appointments isn't returned (obviously, as they are not in that table).
Tweaking the statement to take person_id from the person table gives me something like:
SELECT person.person_id, COUNT(appointment.person_id) AS "number_of_appointments"
FROM appointment
JOIN person ON person.person_id = appointment.person_id
GROUP BY person.person_id;
This, however, will still only return person_ids who have an appointment, and not what I want, which is a result that also includes persons who have 0 appointments!
Any suggestions please?
| You want an outer join for this (and you need to use person as the "driving" table)
SELECT person.person_id, COUNT(appointment.person_id) AS "number_of_appointments"
FROM person
LEFT JOIN appointment ON person.person_id = appointment.person_id
GROUP BY person.person_id;
The reason why this is working, is that the outer (left) join will return NULL for those persons that do not have an appointment. The aggregate function count() will not count NULL values and thus you'll not get a zero.
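One related detail: COUNT(*) would count the NULL-extended rows produced by the outer join, so a person with no appointments would show 1 instead of 0 - which is why the query counts appointment.person_id. A minimal sketch of the difference, reusing the question's tables:
SELECT person.person_id,
       COUNT(*)                     AS rows_counted,           -- at least 1 per person
       COUNT(appointment.person_id) AS number_of_appointments  -- 0 when there are none
FROM person
LEFT JOIN appointment ON person.person_id = appointment.person_id
GROUP BY person.person_id;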
If you want to learn more about outer joins, here is a nice tutorial: http://sqlzoo.net/wiki/Using_Null
| PostgreSQL | 14,793,057 | 157 |
Ok I have a table with a indexed key and a non indexed field.
I need to find all records with a certain value and return the row.
I would like to know if I can order by multiple values.
Example:
id x_field
-- -----
123 a
124 a
125 a
126 b
127 f
128 b
129 a
130 x
131 x
132 b
133 p
134 p
135 i
pseudo: would like the results to be ordered like this, where ORDER BY x_field = 'f', 'p', 'i', 'a'
SELECT *
FROM table
WHERE id NOT IN (126)
ORDER BY x_field 'f', 'p', 'i', 'a'
So the results would be:
id x_field
-- -----
127 f
133 p
134 p
135 i
123 a
124 a
125 a
129 a
The syntax is valid but when I execute the query it never returns any results, even if I limit it to 1 record. Is there another way to go about this?
Think of the x_field as test results and I need to validate all the records that fall in the condition. I wanted to order the test results by failed values, passed values. So I could validate the failed values first and then the passed values using the ORDER BY.
What I can't do:
GROUP BY, as I need to return the specific record values
WHERE x_field IN('f', 'p', 'i', 'a'), I need all the values as I'm trying to use one query for several validation tests. And x_field values are not in DESC/ASC order
After writing this question I'm starting to think that I need to rethink this, LOL!
| ...
WHERE
x_field IN ('f', 'p', 'i', 'a') ...
ORDER BY
CASE x_field
WHEN 'f' THEN 1
WHEN 'p' THEN 2
WHEN 'i' THEN 3
WHEN 'a' THEN 4
ELSE 5 -- fallback for values not inside the IN clause. eg : x_field = 'b'
END, id
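Put together with the question's example (the table name here is an assumed placeholder, since "table" itself is a reserved word), the full query might look like this sketch:
SELECT id, x_field
FROM my_table
WHERE id NOT IN (126)
  AND x_field IN ('f', 'p', 'i', 'a')
ORDER BY
  CASE x_field
    WHEN 'f' THEN 1
    WHEN 'p' THEN 2
    WHEN 'i' THEN 3
    WHEN 'a' THEN 4
    ELSE 5
  END, id;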
| PostgreSQL | 6,332,043 | 157 |
How do you view a stored function or procedure?
Say I have an old function without the original definition - I want to see what it is doing in pg/psql but I can't seem to figure out a way to do that.
using Postgres version 8.4.1
| \df+ <function_name> in psql.
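If you'd rather retrieve the source via plain SQL (e.g. from an application instead of psql), one alternative - pg_get_functiondef should be available from 8.4 on - is sketched below; the function name is a hypothetical placeholder:
SELECT pg_get_functiondef('my_function'::regproc);
-- for overloaded functions, include the argument types:
SELECT pg_get_functiondef('my_function(integer, text)'::regprocedure);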
| PostgreSQL | 3,524,859 | 157 |
I am running my development on Ubuntu 11.10, and RubyMine
Here is my development settings for the database.yml: which RubyMine created for me
development:
adapter: postgresql
encoding: unicode
database: mydb_development
pool: 5
username: myuser
password:
when I try to run the app, I get this error below, it seems that I didn't create a 'project' user yet, but, how can I create a user and grant it a database in postgres ? if this is the problem, then, what is the recommended tool to use in Ubuntu for this task ? if this is not the problem, then, please advice.
Exiting
/home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:1194:in `initialize': FATAL: Peer authentication failed for user "project" (PG::Error)
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:1194:in `new'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:1194:in `connect'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:329:in `initialize'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `new'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `postgresql_connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:303:in `new_connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:313:in `checkout_new_connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:237:in `block (2 levels) in checkout'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:232:in `loop'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:232:in `block in checkout'
from /home/sam/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:229:in `checkout'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:95:in `connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:398:in `retrieve_connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:168:in `retrieve_connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:142:in `connection'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/model_schema.rb:308:in `clear_cache!'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activerecord-3.2.3/lib/active_record/railtie.rb:91:in `block (2 levels) in <class:Railtie>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/callbacks.rb:418:in `_run__757346023__prepare__404863399__callbacks'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/callbacks.rb:405:in `__run_callback'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/callbacks.rb:385:in `_run_prepare_callbacks'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/callbacks.rb:81:in `run_callbacks'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/actionpack-3.2.3/lib/action_dispatch/middleware/reloader.rb:74:in `prepare!'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/actionpack-3.2.3/lib/action_dispatch/middleware/reloader.rb:48:in `prepare!'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/application/finisher.rb:47:in `block in <module:Finisher>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/initializable.rb:30:in `instance_exec'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/initializable.rb:30:in `run'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/initializable.rb:55:in `block in run_initializers'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/initializable.rb:54:in `each'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/initializable.rb:54:in `run_initializers'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/application.rb:136:in `initialize!'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/railtie/configurable.rb:30:in `method_missing'
from /home/sam/RubymineProjects/project/config/environment.rb:5:in `<top (required)>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `block in require'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:236:in `load_dependency'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require'
from /home/sam/RubymineProjects/project/config.ru:4:in `block in <main>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
from /home/sam/RubymineProjects/project/config.ru:1:in `new'
from /home/sam/RubymineProjects/project/config.ru:1:in `<main>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/builder.rb:40:in `eval'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/builder.rb:40:in `parse_file'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/server.rb:200:in `app'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/commands/server.rb:46:in `app'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/server.rb:301:in `wrapped_app'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/rack-1.4.1/lib/rack/server.rb:252:in `start'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/commands/server.rb:70:in `start'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/commands.rb:55:in `block in <top (required)>'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/commands.rb:50:in `tap'
from /home/sam/.rvm/gems/ruby-1.9.3-p0@project/gems/railties-3.2.3/lib/rails/commands.rb:50:in `<top (required)>'
from /home/sam/RubymineProjects/project/script/rails:6:in `require'
from /home/sam/RubymineProjects/project/script/rails:6:in `<top (required)>'
from -e:1:in `load'
from -e:1:in `<main>'
Process finished with exit code 1
| If you installed PostgreSQL on your server then just add host: localhost to database.yml; I usually put it in around where it says pool: 5. Otherwise, if it's not localhost, definitely tell the app where to find its database.
development:
adapter: postgresql
encoding: unicode
database: kickrstack_development
host: localhost
pool: 5
username: kickrstack
password: secret
Make sure your user credentials are set correctly by creating a database and assigning ownership to your app's user to establish the connection. To create a new user in postgresql 9 run:
sudo -u postgres psql
Set the postgres user password if you haven't already; it's just \password:
postgres=# \password
Create a new user and password and the user's new database:
postgres=# create user "guy_on_stackoverflow" with password 'keepitonthedl';
postgres=# create database "dcaclab_development" owner "guy_on_stackoverflow";
Now update your database.yml file after you've confirmed creating the database, user, password and set these privileges. Don't forget host: localhost.
| PostgreSQL | 9,987,171 | 155 |
myCol
------
true
true
true
false
false
null
In the above table, if I do :
select count(*), count(myCol);
I get 6, 5
I get 5 as it doesn't count the null entry.
How do I also count the number of true values (3 in the example)?
(This is a simplification and I'm actually using a much more complicated expression within the count function)
Edit summary: I also want to include a plain count(*) in the query, so can't use a where clause
| SELECT COALESCE(sum(CASE WHEN myCol THEN 1 ELSE 0 END),0) FROM <table name>
or, as you found out for yourself:
SELECT count(CASE WHEN myCol THEN 1 END) FROM <table name>
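On PostgreSQL 9.4 and later, the aggregate FILTER clause expresses the same idea more directly; a minimal sketch (mytable is an assumed name standing in for <table name>):
SELECT count(*)                      AS total_rows,   -- all rows, including NULLs
       count(myCol)                  AS non_null,     -- non-NULL values
       count(*) FILTER (WHERE myCol) AS true_values   -- only TRUE values
FROM mytable;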
| PostgreSQL | 5,396,498 | 155 |
I would like to list all tables in the liferay database in my PostgreSQL install. How do I do that?
I would like to execute SELECT * FROM applications; in the liferay database. applications is a table in my liferay db. How is this done?
Here's a list of all my databases:
postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
liferay | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | liferay=CTc/postgres
lportal | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 |
postgres | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 |
template0 | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
postgres=#
| If you wish to list all tables, you must use:
\dt *.*
to indicate that you want all tables in all schemas. This will include tables in pg_catalog, the system tables, and those in information_schema. There's no built-in way to say "all tables in all user-defined schemas"; you can, however, set your search_path to a list of all schemas of interest before running \dt.
You may want to do this programmatically, in which case psql backslash-commands won't do the job. This is where the INFORMATION_SCHEMA comes to the rescue. To list tables:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';
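To run your SELECT * FROM applications; against the liferay database specifically, connect to it first; a quick sketch of the psql session:
\c liferay
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';
SELECT * FROM applications;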
BTW, if you ever want to see what psql is doing in response to a backslash command, run psql with the -E flag. eg:
$ psql -E regress
regress=# \list
********* QUERY **********
SELECT d.datname as "Name",
pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
d.datcollate as "Collate",
d.datctype as "Ctype",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;
**************************
so you can see that psql is searching pg_catalog.pg_database when it gets a list of databases. Similarly, for tables within a given database:
SELECT n.nspname as "Schema",
c.relname as "Name",
CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type",
pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
AND n.nspname <> 'pg_catalog'
AND n.nspname <> 'information_schema'
AND n.nspname !~ '^pg_toast'
AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
It's preferable to use the SQL-standard, portable INFORMATION_SCHEMA instead of the Pg system catalogs where possible, but sometimes you need Pg-specific information. In those cases it's fine to query the system catalogs directly, and psql -E can be a helpful guide for how to do so.
| PostgreSQL | 12,445,608 | 154 |
I have a two tables
Student
--------
Id Name
1 John
2 David
3 Will
Grade
---------
Student_id Mark
1 A
2 B
2 B+
3 C
3 A
Is it possible to make native Postgresql SELECT to get results like below:
Name Array of marks
-----------------------
'John', {'A'}
'David', {'B','B+'}
'Will', {'C','A'}
But not like below
Name Mark
----------------
'John', 'A'
'David', 'B'
'David', 'B+'
'Will', 'C'
'Will', 'A'
| Use array_agg: http://www.sqlfiddle.com/#!1/5099e/1
SELECT s.name, array_agg(g.Mark) as marks
FROM student s
LEFT JOIN Grade g ON g.Student_id = s.Id
GROUP BY s.Id
By the way, if you are using Postgres 9.1, you don't need to repeat the columns on SELECT to GROUP BY, e.g. you don't need to repeat the student name on GROUP BY. You can merely GROUP BY on primary key. If you remove the primary key on student, you need to repeat the student name on GROUP BY.
CREATE TABLE grade
(Student_id int, Mark varchar(2));
INSERT INTO grade
(Student_id, Mark)
VALUES
(1, 'A'),
(2, 'B'),
(2, 'B+'),
(3, 'C'),
(3, 'A');
CREATE TABLE student
(Id int primary key, Name varchar(5));
INSERT INTO student
(Id, Name)
VALUES
(1, 'John'),
(2, 'David'),
(3, 'Will');
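If the order of the marks inside each array matters, the aggregate-level ORDER BY (available since PostgreSQL 9.0, as far as I recall) can be added; a minimal sketch against the tables above:
SELECT s.name, array_agg(g.Mark ORDER BY g.Mark) AS marks
FROM student s
LEFT JOIN Grade g ON g.Student_id = s.Id
GROUP BY s.Id;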
| PostgreSQL | 10,928,210 | 154 |
If I have a docker-compose file like:
version: "3"
services:
postgres:
image: postgres:9.4
volumes:
- db-data:/var/lib/db
volumes:
db-data:
... then doing docker-compose up creates a named volume for db-data. Is there a way to remove this volume via docker-compose? If it were an anonymous volume, then docker-compose rm -v postgres would do the trick. But as it stands, I don't know how to remove the db-data volume without reverting to docker commands. It feels like this should be possible from within the docker-compose CLI. Am I missing something?
| docker-compose down -v
removes all volumes attached. See the docs
| PostgreSQL | 45,511,956 | 153 |
I'm looking to write a postgresql query to do the following :
if(field1 > 0, field2 / field1 , 0)
I've tried this query, but it's not working
if (field1 > 0)
then return field2 / field1 as field3
else return 0 as field3
thank youu
| As stated in PostgreSQL docs here:
The SQL CASE expression is a generic conditional expression, similar to if/else statements in other programming languages.
Code snippet specifically answering your question:
SELECT field1, field2,
CASE
WHEN field1>0 THEN field2/field1
ELSE 0
END
AS field3
FROM test
| PostgreSQL | 19,029,842 | 153 |
I have a small table (~30 rows) in my Postgres 9.0 database with an integer ID field (the primary key) which currently contains unique sequential integers starting at 1, but which was not created using the 'serial' keyword.
How can I alter this table such that from now on inserts to this table will cause this field to behave as if it had been created with 'serial' as a type?
| Look at the following commands (especially the commented block).
DROP TABLE foo;
DROP TABLE bar;
CREATE TABLE foo (a int, b text);
CREATE TABLE bar (a serial, b text);
INSERT INTO foo (a, b) SELECT i, 'foo ' || i::text FROM generate_series(1, 5) i;
INSERT INTO bar (b) SELECT 'bar ' || i::text FROM generate_series(1, 5) i;
-- blocks of commands to turn foo into bar
CREATE SEQUENCE foo_a_seq;
ALTER TABLE foo ALTER COLUMN a SET DEFAULT nextval('foo_a_seq');
ALTER TABLE foo ALTER COLUMN a SET NOT NULL;
ALTER SEQUENCE foo_a_seq OWNED BY foo.a; -- 8.2 or later
SELECT MAX(a) FROM foo;
SELECT setval('foo_a_seq', 5); -- replace 5 by SELECT MAX result
INSERT INTO foo (b) VALUES('teste');
INSERT INTO bar (b) VALUES('teste');
SELECT * FROM foo;
SELECT * FROM bar;
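The two manual steps above (finding MAX(a) and feeding it to setval) can also be combined into a single statement; a minimal sketch:
-- set the sequence from the current maximum in one go
-- (the COALESCE guards against an empty table)
SELECT setval('foo_a_seq', (SELECT COALESCE(MAX(a), 1) FROM foo));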
| PostgreSQL | 9,490,014 | 153 |
Does Postgres have any way to say ALTER TABLE foo ADD CONSTRAINT bar ... which will just ignore the command if the constraint already exists, so that it doesn't raise an error?
| A possible solution is to simply use DROP IF EXISTS before creating the new constraint.
ALTER TABLE foo DROP CONSTRAINT IF EXISTS bar;
ALTER TABLE foo ADD CONSTRAINT bar ...;
Seems easier than trying to query information_schema or catalogs, but might be slow on huge tables since it always recreates the constraint.
Edit 2015-07-13:
Kev pointed out in his answer that my solution creates a short window when the constraint doesn't exist and is not being enforced. While this is true, you can avoid such a window quite easily by wrapping both statements in a transaction.
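A minimal sketch of that transaction-wrapped version (keeping the ... placeholder from above for whatever the actual constraint definition is):
BEGIN;
ALTER TABLE foo DROP CONSTRAINT IF EXISTS bar;
ALTER TABLE foo ADD CONSTRAINT bar ...;  -- same constraint definition as before
COMMIT;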
| PostgreSQL | 6,801,919 | 153 |
We use the COPY command to copy the data of one table to a file outside the database.
Is it possible to copy the data of one table to another table using a command?
If yes, can anyone please share the query?
Or is there a better approach, like using pg_dump or something like that?
| You cannot easily do that, but there's also no need to do so.
CREATE TABLE mycopy AS
SELECT * FROM mytable;
or
CREATE TABLE mycopy (LIKE mytable INCLUDING ALL);
INSERT INTO mycopy
SELECT * FROM mytable;
If you need to select only some columns or reorder them, you can do this:
INSERT INTO mycopy(colA, colB)
SELECT col1, col2 FROM mytable;
You can also do a selective pg_dump and restore of just the target table.
| PostgreSQL | 31,284,514 | 152 |
I have this query I have written in PostgreSQL that returns an error saying:
[Err] ERROR:
LINE 3: FROM (SELECT DISTINCT (identifiant) AS made_only_recharge
This is the whole query:
SELECT COUNT (made_only_recharge) AS made_only_recharge
FROM (
SELECT DISTINCT (identifiant) AS made_only_recharge
FROM cdr_data
WHERE CALLEDNUMBER = '0130'
EXCEPT
SELECT DISTINCT (identifiant) AS made_only_recharge
FROM cdr_data
WHERE CALLEDNUMBER != '0130'
)
I have a similar query in Oracle that works fine. The only change is where I have EXCEPT in Oracle I have replaced it with the MINUS key word. I am new to Postgres and don't know what it is asking for. What's the correct way of handling this?
| Add an ALIAS onto the subquery,
SELECT COUNT(made_only_recharge) AS made_only_recharge
FROM
(
SELECT DISTINCT (identifiant) AS made_only_recharge
FROM cdr_data
WHERE CALLEDNUMBER = '0130'
EXCEPT
SELECT DISTINCT (identifiant) AS made_only_recharge
FROM cdr_data
WHERE CALLEDNUMBER != '0130'
) AS derivedTable -- <<== HERE
| PostgreSQL | 14,767,209 | 152 |
I have a query like this that nicely generates a series of dates between 2 given dates:
select date '2004-03-07' + j - i as AllDate
from generate_series(0, extract(doy from date '2004-03-07')::int - 1) as i,
generate_series(0, extract(doy from date '2004-08-16')::int - 1) as j
It generates 162 dates between 2004-03-07 and 2004-08-16 and this what I want. The problem with this code is that it wouldn't give the right answer when the two dates are from different years, for example when I try 2007-02-01 and 2008-04-01.
Is there a better solution?
| Can be done without conversion to/from int (but to/from timestamp instead)
SELECT date_trunc('day', dd):: date
FROM generate_series
( '2007-02-01'::timestamp
, '2008-04-01'::timestamp
, '1 day'::interval) dd
;
| PostgreSQL | 14,113,469 | 152 |
I would like to take a look at the PostgreSQL log files to see what my app writes to them but I can't find them.
Any ideas?
| On OSX Homebrew installation the log can be found at:
Latest Homebrew:
/opt/homebrew/var/log/postgres.log
or older:
/usr/local/var/log/postgres.log
or for older version of postgres (< 9.6)
/usr/local/var/postgres/server.log
Bonus - check if PostgreSQL is running using Homebrew:
brew services info --all
| PostgreSQL | 2,563,494 | 152 |
I am working with a fresh postgresql install, with 'postgres' super user. Logged in via:
sudo -u postgres psql
postgres=# createdb database
postgres-# \list
List of databases
Name | Owner | Encoding | Collation | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 |
template0 | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres
: postgres=CTc/postgres
template1 | postgres | UTF8 | en_GB.UTF-8 | en_GB.UTF-8 | =c/postgres
: postgres=CTc/postgres
No errors, yet the database is not being created. Any ideas?
| createdb is a command line utility which you can run from bash and not from psql.
To create a database from psql, use the create database statement like so:
create database [databasename];
Note: be sure to always end your SQL statements with ;
| PostgreSQL | 13,321,005 | 151 |
I have a json array stored in my postgres database.
The json looks like this:
[
{
"operation": "U",
"taxCode": "1000",
"description": "iva description",
"tax": "12"
},
{
"operation": "U",
"taxCode": "1001",
"description": "iva description",
"tax": "12"
},
{
"operation": "U",
"taxCode": "1002",
"description": "iva description",
"tax": "12"
}
]
Now I need to SELECT the array so that each element is in a different row of the query result. So the SELECT statement I perform must return the data in this way:
data
--------------------------------------------------------------------------------------
{ "operation": "U", "taxCode": "1000", "description": "iva description", "tax":"12"}
{ "operation": "U", "taxCode": "1001", "description": "iva description", "tax":"12"}
{ "operation": "U", "taxCode": "1002", "description": "iva description", "tax":"12"}
I tried using the unnest() function
SELECT unnest(json_data::json)
FROM my_table
but it doesn't accept the jsonb type.
| I post the answer originally written by pozs in the comment section.
unnest() is for PostgreSQL's array types.
Instead one of the following function can be used:
json_array_elements(json) (9.3+)
jsonb_array_elements(jsonb) (9.4+)
json[b]_array_elements_text(json[b]) (9.4+)
Example:
select * from json_array_elements('[1,true, [2,false]]')
   value
-----------
 1
 true
 [2,false]
This is where the documentation for v9.4 can be found.
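Applied to the question's column (json_data in my_table), a minimal sketch - which of the two functions you need depends on whether the column is json or jsonb:
-- one row per array element
SELECT json_array_elements(json_data) AS elem FROM my_table;   -- json column
SELECT jsonb_array_elements(json_data) AS elem FROM my_table;  -- jsonb column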
| PostgreSQL | 36,174,881 | 150 |
I would like to delete rows which contain a foreign key, but when I try something like this:
DELETE FROM osoby WHERE id_osoby='1'
I get this statement:
ERROR: update or delete on table "osoby" violates foreign key constraint "kontakty_ibfk_1" on table "kontakty"
DETAIL: Key (id_osoby)=(1) is still referenced from table "kontakty".
How can I delete these rows?
| To automate this, you could define the foreign key constraint with ON DELETE CASCADE.
I quote the the manual for foreign key constraints:
CASCADE specifies that when a referenced row is deleted, row(s)
referencing it should be automatically deleted as well.
Look up the current FK definition like this:
SELECT pg_get_constraintdef(oid) AS constraint_def
FROM pg_constraint
WHERE conrelid = 'public.kontakty'::regclass -- assuming public schema
AND conname = 'kontakty_ibfk_1';
Then add or modify the ON DELETE ... part to ON DELETE CASCADE (preserving everything else as is) in a statement like:
ALTER TABLE kontakty
DROP CONSTRAINT kontakty_ibfk_1
, ADD CONSTRAINT kontakty_ibfk_1
FOREIGN KEY (id_osoby) REFERENCES osoby (id_osoby) ON DELETE CASCADE;
There is no ALTER CONSTRAINT command. Drop and recreate the constraint in a single ALTER TABLE statement to avoid possible race conditions with concurrent write access.
You need the privileges to do so, obviously. The operation takes an ACCESS EXCLUSIVE lock on table kontakty and a SHARE ROW EXCLUSIVE lock on table osoby.
If you can't ALTER the table, then deleting by hand (once) or by trigger BEFORE DELETE (every time) are the remaining options.
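For the one-off manual route, the referencing rows have to go first; a minimal sketch with the question's tables (the FK column in kontakty is id_osoby, per the constraint definition above):
BEGIN;
DELETE FROM kontakty WHERE id_osoby = 1;  -- child rows first
DELETE FROM osoby    WHERE id_osoby = 1;
COMMIT;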
| PostgreSQL | 14,182,079 | 150 |
I'm trying to do something like this in postgres:
UPDATE table1 SET (col1, col2) = (SELECT col2, col3 FROM othertable WHERE othertable.col1 = 123);
INSERT INTO table1 (col1, col2) VALUES (SELECT col1, col2 FROM othertable)
But point 1 is not possible even with postgres 9.0 as mentioned in the docs (http://www.postgresql.org/docs/9.0/static/sql-update.html)
Also point 2 seems not working. i'm getting the following error: subquery must return only one column.
Hope somebody has a workaround for me. otherwise the queries will take a looot of time :(.
FYI: I'm trying to select different columns from several tables and store them into a temporary table, so that another application can easily fetch the prepared data.
| For the UPDATE
Use:
UPDATE table1
SET col1 = othertable.col2,
col2 = othertable.col3
FROM othertable
WHERE othertable.col1 = 123;
For the INSERT
Use:
INSERT INTO table1 (col1, col2)
SELECT col1, col2
FROM othertable
You don't need the VALUES syntax if you are using a SELECT to populate the INSERT values.
| PostgreSQL | 3,736,732 | 150 |
I cannot understand the syntax error in creating a composite key. It may be a logic error, because I have tested many varieties.
How do you create composite keys in Postgres?
CREATE TABLE tags
(
(question_id, tag_id) NOT NULL,
question_id INTEGER NOT NULL,
tag_id SERIAL NOT NULL,
tag1 VARCHAR(20),
tag2 VARCHAR(20),
tag3 VARCHAR(20),
PRIMARY KEY(question_id, tag_id),
CONSTRAINT no_duplicate_tag UNIQUE (question_id, tag_id)
);
ERROR: syntax error at or near "("
LINE 3: (question_id, tag_id) NOT NULL,
^
| Your compound PRIMARY KEY specification already does what you want. Omit the line that's giving you a syntax error, and omit the redundant CONSTRAINT (already implied), too:
CREATE TABLE tags
(
question_id INTEGER NOT NULL,
tag_id SERIAL NOT NULL,
tag1 VARCHAR(20),
tag2 VARCHAR(20),
tag3 VARCHAR(20),
PRIMARY KEY(question_id, tag_id)
);
NOTICE: CREATE TABLE will create implicit sequence "tags_tag_id_seq" for serial column "tags.tag_id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "tags_pkey" for table "tags"
CREATE TABLE
pg=> \d tags
Table "public.tags"
Column | Type | Modifiers
-------------+-----------------------+-------------------------------------------------------
question_id | integer | not null
tag_id | integer | not null default nextval('tags_tag_id_seq'::regclass)
tag1 | character varying(20) |
tag2 | character varying(20) |
tag3 | character varying(20) |
Indexes:
"tags_pkey" PRIMARY KEY, btree (question_id, tag_id)
| PostgreSQL | 1,285,967 | 150 |
I have just installed Postgres 9.3 on Windows 7. The installation completed successfully. It never asked me to provide a password for the postgres user.
The service postgresql-x64-9.3 is up and running. However, I cannot connect: I do not know the password. I've found the following answer, but it did not help:
similar question on Ubuntu
| [WINDOWS]
https://stackoverflow.com/a/27108276/1087499
[LINUX] (the steps below might work for Windows too)
After installing Postgres, follow these steps to set up a password for the default postgres system account on Linux. Execute the following in a terminal:
user:~$ sudo -i -u postgres
postgres@user:~$ psql
after executing above two commands you will get into postgres shell
Execute this query in postgres shell:
postgres=# ALTER USER postgres PASSWORD 'mynewpassword';
Your new password is mynewpassword (without the quotes), and you can now connect with external GUI tools like DBeaver.
| PostgreSQL | 27,107,557 | 149 |
I'd like to perform division in a SELECT clause. When I join some tables and use aggregate functions I often have either null or zero values as the divisors. For now I have only come up with this method of avoiding division by zero and null values.
(CASE(COALESCE(COUNT(column_name),1)) WHEN 0 THEN 1
ELSE (COALESCE(COUNT(column_name),1)) END)
I wonder if there is a better way of doing this?
| You can use NULLIF function e.g.
something/NULLIF(column_name,0)
If the value of column_name is 0, the result of the entire expression will be NULL.
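If you'd rather get 0 back instead of NULL (as in the original workaround), this combines naturally with COALESCE; a minimal sketch (the table name is a placeholder, the column names are from the question):
SELECT COALESCE(something / NULLIF(column_name, 0), 0) AS ratio
FROM some_table;  -- NULL from a zero or NULL divisor is turned back into 0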
| PostgreSQL | 17,681,375 | 149 |
How can I add comment to column in PostgreSQL?
create table session_log (
UserId int index not null,
PhoneNumber int index);
| Comments are attached to a column using the comment statement:
create table session_log
(
userid int not null,
phonenumber int
);
comment on column session_log.userid is 'The user ID';
comment on column session_log.phonenumber is 'The phone number including the area code';
You can also add a comment to the table:
comment on table session_log is 'Our session logs';
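To check that the comments were stored, psql shows them in the Description column of \d+; a quick sketch of how you could look them up:
\d+ session_log
-- or via SQL:
SELECT obj_description('session_log'::regclass, 'pg_class');  -- table comment
SELECT col_description('session_log'::regclass, 1);           -- comment on the 1st column (userid)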
Additionally: int index is invalid.
If you want to create an index on a column, you do that using the create index statement:
create index on session_log(phonenumber);
If you want an index over both columns use:
create index on session_log(userid, phonenumber);
You probably want to define the userid as the primary key. This is done using the following syntax (and not using int index):
create table session_log
(
UserId int primary key,
PhoneNumber int
);
Defining a column as the primary key implicitly makes it not null
| PostgreSQL | 32,070,876 | 148 |
MySQL's explain output is pretty straightforward. PostgreSQL's is a little more complicated. I haven't been able to find a good resource that explains it either.
Can you describe what exactly explain is saying or at least point me in the direction of a good resource?
| The part I always found confusing is the startup cost vs total cost. I Google this every time I forget about it, which brings me back to here, which doesn't explain the difference, which is why I'm writing this answer. This is what I have gleaned from the Postgres EXPLAIN documentation, explained as I understand it.
Here's an example from an application that manages a forum:
EXPLAIN SELECT * FROM post LIMIT 50;
Limit (cost=0.00..3.39 rows=50 width=422)
-> Seq Scan on post (cost=0.00..15629.12 rows=230412 width=422)
Here's the graphical explanation from PgAdmin:
(When you're using PgAdmin, you can point your mouse at a component to read the cost details.)
The cost is represented as a tuple, e.g. the cost of the LIMIT is cost=0.00..3.39 and the cost of sequentially scanning post is cost=0.00..15629.12. The first number in the tuple is the startup cost and the second number is the total cost. Because I used EXPLAIN and not EXPLAIN ANALYZE, these costs are estimates, not actual measures.
Startup cost is a tricky concept. It doesn't just represent the amount of time before that component starts. It represents the amount of time between when the component starts executing (reading in data) and when the component outputs its first row.
Total cost is the entire execution time of the component, from when it begins reading in data to when it finishes writing its output.
As a complication, each "parent" node's costs includes the cost's of its child nodes. In the text representation, the tree is represented by indentation, e.g. LIMIT is a parent node and Seq Scan is its child. In the PgAdmin representation, the arrows point from child to parent — the direction of the flow of data — which might be counterintuitive if you are familiar with graph theory.
The documentation says that costs are inclusive of all child nodes, but notice that the total cost of the parent 3.39 is much smaller than the total cost of its child 15629.12. Total cost is not inclusive because a component like LIMIT doesn't need to process its entire input. See the EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; example in Postgres EXPLAIN documentation.
In the example above, startup time is zero for both components, because neither component needs to do any processing before it starts writing rows: a sequential scan reads the first row of the table and emits it. The LIMIT reads its first row and then emits it.
When would a component need to do a lot of processing before it can start to output any rows? There are a lot of possible reasons, but let's look at one clear example. Here's the same query from before but now containing an ORDER BY clause:
EXPLAIN SELECT * FROM post ORDER BY body LIMIT 50;
Limit (cost=23283.24..23283.37 rows=50 width=422)
-> Sort (cost=23283.24..23859.27 rows=230412 width=422)
Sort Key: body
-> Seq Scan on post (cost=0.00..15629.12 rows=230412 width=422)
And graphically:
Once again, the sequential scan on post has no startup cost: it starts outputting rows immediately. But the sort has a significant startup cost 23283.24 because it has to sort the entire table before it can output even a single row. The total cost of the sort 23859.27 is only slightly higher than the startup cost, reflecting the fact that once the entire dataset has been sorted, the sorted data can be emitted very quickly.
Notice that the startup time of the LIMIT 23283.24 is exactly equal to the startup time of the sort. This is not because LIMIT itself has a high startup time. It actually has zero startup time by itself, but EXPLAIN rolls up all of the child costs for each parent, so the LIMIT startup time includes the sum startup times of its children.
This rollup of costs can make it difficult to understand the execution cost of each individual component. For example, our LIMIT has zero startup time, but that's not obvious at first glance. For this reason, several other people linked to explain.depesz.com, a tool created by Hubert Lubaczewski (a.k.a. depesz) that helps understand EXPLAIN by — among other things — subtracting out child costs from parent costs. He mentions some other complexities in a short blog post about his tool.
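One follow-up worth knowing: since plain EXPLAIN only shows estimates, EXPLAIN ANALYZE actually runs the query and reports measured times and row counts alongside the estimates; a minimal sketch using the same example query:
EXPLAIN ANALYZE SELECT * FROM post ORDER BY body LIMIT 50;
-- each node then also shows "actual time=... rows=... loops=..." next to the estimated costs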
| PostgreSQL | 117,262 | 148 |
Before anything, please note that I have found several similar questions on Stack Overflow and articles all over the web, but none of those helped me fix my issue:
PG Error could not connect to server: Connection refused Is the server running on port 5432?
PG::ConnectionBad - could not connect to server: Connection refused
psql: could not connect to server: Connection refused
Now, here is the issue:
I have a Rails app that works like a charm.
With my collaborator, we use GitHub to work together.
We have a master and an mvp branches.
I recently updated my git version with Homebrew (Mac).
We use Foreman to start our app locally.
Now, when I try to launch the app locally, I get the following error:
PG::ConnectionBad at /
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
I tried to reboot my computers several times.
I also checked the content of /usr/local/var/postgres:
PG_VERSION pg_dynshmem pg_multixact pg_snapshots pg_tblspc postgresql.conf
base pg_hba.conf pg_notify pg_stat pg_twophase postmaster.opts
global pg_ident.conf pg_replslot pg_stat_tmp pg_xlog server.log
pg_clog pg_logical pg_serial pg_subtrans postgresql.auto.conf
As you can see, there is no postmaster.pid file in there.
Any idea how I could fix this?
| run postgres -D /usr/local/var/postgres and you should see something like:
FATAL: lock file "postmaster.pid" already exists
HINT: Is another postmaster (PID 379) running in data directory "/usr/local/var/postgres"?
Then run kill -9 <PID>, using the PID shown in the HINT.
And you should be good to go.
| PostgreSQL | 37,307,346 | 147 |
I have a query that returns avg(price)
select avg(price)
from(
select *, cume_dist() OVER (ORDER BY price desc) from web_price_scan
where listing_Type='AARM'
and u_kbalikepartnumbers_id = 1000307
and (EXTRACT(Day FROM (Now()-dateEnded)))*24 < 48
and price>( select avg(price)* 0.50
from(select *, cume_dist() OVER (ORDER BY price desc)
from web_price_scan
where listing_Type='AARM'
and u_kbalikepartnumbers_id = 1000307
and (EXTRACT(Day FROM (Now()-dateEnded)))*24 < 48
)g
where cume_dist < 0.50
)
and price<( select avg(price)*2
from( select *, cume_dist() OVER (ORDER BY price desc)
from web_price_scan
where listing_Type='AARM'
and u_kbalikepartnumbers_id = 1000307
and (EXTRACT(Day FROM (Now()-dateEnded)))*24 < 48
)d
where cume_dist < 0.50)
)s
having count(*) > 5
how to make it return 0 if no value is available?
| use coalesce
COALESCE(value [, ...])
The COALESCE function returns the first of its arguments that is not null.
Null is returned only if all arguments are null. It is often
used to substitute a default value for null values when data is
retrieved for display.
Edit
Here's an example of COALESCE with your query:
SELECT AVG( price )
FROM(
SELECT *, cume_dist() OVER ( ORDER BY price DESC ) FROM web_price_scan
WHERE listing_Type = 'AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
AND COALESCE( price, 0 ) > ( SELECT AVG( COALESCE( price, 0 ) )* 0.50
FROM ( SELECT *, cume_dist() OVER ( ORDER BY price DESC )
FROM web_price_scan
WHERE listing_Type='AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
) g
WHERE cume_dist < 0.50
)
AND COALESCE( price, 0 ) < ( SELECT AVG( COALESCE( price, 0 ) ) *2
FROM( SELECT *, cume_dist() OVER ( ORDER BY price desc )
FROM web_price_scan
WHERE listing_Type='AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
) d
WHERE cume_dist < 0.50)
)s
HAVING COUNT(*) > 5
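Note that with the HAVING COUNT(*) > 5 clause the query can return no rows at all, in which case no COALESCE inside it will help. One hedged way to still get a 0 in that case is to wrap the whole thing as a scalar subquery; sketched here with a simplified inner query standing in for the full one above:
SELECT COALESCE(
         (SELECT AVG(price)
          FROM web_price_scan
          WHERE listing_Type = 'AARM'
          HAVING COUNT(*) > 5),
         0) AS avg_price;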
IMHO COALESCE should not be used with AVG because it modifies the value. NULL means unknown and nothing else. It's not like using it in SUM: in this example, if we replace AVG with SUM, the result is not distorted. Adding 0 to a sum doesn't hurt anyone, but if you calculate an average with 0 substituted for the unknown values, you don't get the real average.
In that case, I would add price IS NOT NULL in the WHERE clause to avoid these unknown values.
| PostgreSQL | 11,007,009 | 147 |
I need to remove some attributes from a json type column.
The Table:
CREATE TABLE my_table( id VARCHAR(80), data json);
INSERT INTO my_table (id, data) VALUES (
'A',
'{"attrA":1,"attrB":true,"attrC":["a", "b", "c"]}'
);
Now, I need to remove attrB from column data.
Something like alter table my_table drop column data->'attrB'; would be nice. But a way with a temporary table would be enough, too.
| Update: for 9.5+, there are explicit operators you can use with jsonb (if you have a json typed column, you can use casts to apply a modification):
Deleting a key (or an index) from a JSON object (or, from an array) can be done with the - operator:
SELECT jsonb '{"a":1,"b":2}' - 'a', -- will yield jsonb '{"b":2}'
jsonb '["a",1,"b",2]' - 1 -- will yield jsonb '["a","b",2]'
Deleting, from deep in a JSON hierarchy can be done with the #- operator:
SELECT '{"a":[null,{"b":[3.14]}]}' #- '{a,1,b,0}'
-- will yield jsonb '{"a":[null,{"b":[]}]}'
For 9.4, you can use a modified version of the original answer (below), but instead of aggregating a JSON string, you can aggregate into a json object directly with json_object_agg().
Related: other JSON manipulations whithin PostgreSQL:
How do I modify fields inside the new PostgreSQL JSON datatype?
Original answer (applies to PostgreSQL 9.3):
If you have at least PostgreSQL 9.3, you can split your object into pairs with json_each() and filter your unwanted fields, then build up the json again manually. Something like:
SELECT data::text::json AS before,
('{' || array_to_string(array_agg(to_json(l.key) || ':' || l.value), ',') || '}')::json AS after
FROM (VALUES ('{"attrA":1,"attrB":true,"attrC":["a","b","c"]}'::json)) AS v(data),
LATERAL (SELECT * FROM json_each(data) WHERE "key" <> 'attrB') AS l
GROUP BY data::text
With 9.2 (or lower) it is not possible.
Edit:
A more convenient form is to create a function, which can remove any number of attributes in a json field:
Edit 2: string_agg() is less expensive than array_to_string(array_agg())
CREATE OR REPLACE FUNCTION "json_object_delete_keys"("json" json, VARIADIC "keys_to_delete" TEXT[])
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT COALESCE(
(SELECT ('{' || string_agg(to_json("key") || ':' || "value", ',') || '}')
FROM json_each("json")
WHERE "key" <> ALL ("keys_to_delete")),
'{}'
)::json
$function$;
With this function, all you need to do is to run the query below:
UPDATE my_table
SET data = json_object_delete_keys(data, 'attrB');
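On 9.5+ the same update can be written with the - operator described at the top; a minimal sketch (casting back and forth because the column is json, not jsonb):
UPDATE my_table
SET data = (data::jsonb - 'attrB')::json;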
| PostgreSQL | 23,490,965 | 146 |
I am looking for a way to implement the SQLServer-function datediff in PostgreSQL. That is, this function returns the count (as a signed integer value) of the specified datepart boundaries crossed between the specified start date and end date.
datediff(dd, '2010-04-01', '2012-03-05') = 704 // 704 changes of day in this interval
datediff(mm, '2010-04-01', '2012-03-05') = 23 // 23 changes of month
datediff(yy, '2010-04-01', '2012-03-05') = 2 // 2 changes of year
I know I could do 'dd' by simply using subtraction, but any idea about the other two?
| Simply subtract them:
SELECT ('2015-01-12'::date - '2015-01-01'::date) AS days;
The result:
days
------
11
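For the month and year variants the question also asks about (SQL Server's mm / yy count boundaries crossed, not elapsed intervals), one hedged sketch using date_part, shown with the question's example dates:
SELECT (DATE_PART('year',  '2012-03-05'::date) - DATE_PART('year',  '2010-04-01'::date)) * 12
     + (DATE_PART('month', '2012-03-05'::date) - DATE_PART('month', '2010-04-01'::date)) AS months,  -- 23
       DATE_PART('year',  '2012-03-05'::date) - DATE_PART('year',  '2010-04-01'::date)   AS years;   -- 2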
| PostgreSQL | 17,833,176 | 146 |
I have been migrating a MySQL db to Pg (9.1), and have been emulating MySQL ENUM data types by creating a new data type in Pg, and then using that as the column definition. My question -- could I, and would it be better to, use a CHECK CONSTRAINT instead? The MySQL ENUM types are implemented to enforce specific value entries in the rows. Could that be done with a CHECK CONSTRAINT? And, if yes, would it be better (or worse)?
| Based on the comments and answers here, and some rudimentary research, I have the following summary to offer for comments from the Postgres-erati. Will really appreciate your input.
There are three ways to restrict entries in a Postgres database table column. Consider a table to store "colors" where you want only 'red', 'green', or 'blue' to be valid entries.
Enumerated data type
CREATE TYPE valid_colors AS ENUM ('red', 'green', 'blue');
CREATE TABLE t (
color VALID_COLORS
);
Advantages are that the type can be defined once and then reused in as many tables as needed. A standard query can list all the values for an ENUM type, and can be used to make application form widgets.
SELECT n.nspname AS enum_schema,
t.typname AS enum_name,
e.enumlabel AS enum_value
FROM pg_type t JOIN
pg_enum e ON t.oid = e.enumtypid JOIN
pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE t.typname = 'valid_colors'
enum_schema | enum_name | enum_value
-------------+---------------+------------
public | valid_colors | red
public | valid_colors | green
public | valid_colors | blue
Disadvantages are, the ENUM type is stored in system catalogs, so a query as above is required to view its definition. These values are not apparent when viewing the table definition. And, since an ENUM type is actually a data type separate from the built in NUMERIC and TEXT data types, the regular numeric and string operators and functions don't work on it. So, one can't do a query like
SELECT FROM t WHERE color LIKE 'bl%';
Check constraints
CREATE TABLE t (
colors TEXT CHECK (colors IN ('red', 'green', 'blue'))
);
Two advantages are that, one, "what you see is what you get," that is, the valid values for the column are recorded right in the table definition, and two, all native string or numeric operators work.
Foreign keys
CREATE TABLE valid_colors (
id SERIAL PRIMARY KEY NOT NULL,
color TEXT
);
INSERT INTO valid_colors (color) VALUES
('red'),
('green'),
('blue');
CREATE TABLE t (
color_id INTEGER REFERENCES valid_colors (id)
);
Essentially the same as creating an ENUM type, except that the native numeric or string operators work, and one doesn't have to query system catalogs to discover the valid values. A join is required to link the color_id to the desired text value.
| PostgreSQL | 10,923,213 | 145 |
How can I tell if my Postgresql server is running or not?
I'm getting this message:
[~/dev/working/sw] sudo bundle exec rake db:migrate
rake aborted!
could not connect to server: Connection refused
Is the server running on host "localhost" and accepting
TCP/IP connections on port 5432?
Update:
> which postgres
/usr/local/bin/postgres
> pg_ctl -D /usr/local/bin/postgres -l /usr/local/bin/postgres/server.log start
pg_ctl: could not open PID file "/usr/local/bin/postgres/postmaster.pid": Not a directory
Update 2:
>pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
server starting
sh: /usr/local/var/postgres/server.log: No such file or directory
| The simplest way to check running processes:
ps auxwww | grep postgres
And look for a command that looks something like this (your version may not be 8.3):
/Library/PostgreSQL/8.3/bin/postgres -D /Library/PostgreSQL/8.3/data
To start the server, execute something like this:
/Library/PostgreSQL/8.3/bin/pg_ctl start -D /Library/PostgreSQL/8.3/data -l postgres.log
| PostgreSQL | 7,975,414 | 145 |
Is there a way with PostgreSQL to sort rows with NULL values in fields to the end of the selected table?
Like:
SELECT * FROM table ORDER BY somevalue, PUT_NULL_TO_END
| NULL values are sorted last in default ascending order. You don't have to do anything extra.
The issue applies to descending order, which is the perfect inverse and thus sorts NULL values on top.
PostgreSQL 8.3 introduced NULLS LAST:
ORDER BY somevalue DESC NULLS LAST
For PostgreSQL 8.2 and older or other RDBMS without this standard SQL feature:
ORDER BY (somevalue IS NULL), somevalue DESC
FALSE sorts before TRUE, so NULL values come last, just like in the example above.
See:
Sort by column ASC, but NULL values first?
The manual on SELECT
| PostgreSQL | 7,621,205 | 145 |
This is a summary of what I am trying to do:
$array[0] = 1;
$array[1] = 2;
$sql = "SELECT * FROM table WHERE some_id = $array"
Obviously, there are some syntax issues, but this is what I want to do, and I haven't found anything yet that shows how to do it.
Currently, my plan is to do something along these lines:
foreach($idList as $id)
$where .= 'some_id=' . $id . ' OR';
endforeach
$sql = "SELECT * FROM table WHERE " . $where;
So is there support in PostgreSQL to use an array to search, or do I have to do something similar to my solution?
| SELECT *
FROM table
WHERE some_id = ANY(ARRAY[1, 2])
or ANSI-compatible:
SELECT *
FROM table
WHERE some_id IN (1, 2)
The ANY syntax is preferred because the array as a whole can be passed in a bound variable:
SELECT *
FROM table
WHERE some_id = ANY(?::INT[])
You would need to pass a string representation of the array: {1,2}
| PostgreSQL | 10,738,446 | 143 |
I am using following query:
ALTER TABLE presales ALTER COLUMN code TYPE numeric(10,0);
to change the datatype of a column from character(20) to numeric(10,0) but I am getting the error:
column "code" cannot be cast to type numeric
| You can try using USING:
The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from old to new type.
So this might work (depending on your data):
alter table presales alter column code type numeric(10,0) using code::numeric;
-- Or if you prefer standard casting...
alter table presales alter column code type numeric(10,0) using cast(code as numeric);
This will fail if you have anything in code that cannot be cast to numeric; if the USING fails, you'll have to clean up the non-numeric data by hand before changing the column type.
| PostgreSQL | 7,683,359 | 143 |
I have a somewhat detailed query in a script that uses ? placeholders. I wanted to test this same query directly from the psql command line (outside the script). I want to avoid going in and replacing all the ? with actual values, instead I'd like to pass the arguments after the query.
Example:
SELECT *
FROM foobar
WHERE foo = ?
AND bar = ?
OR baz = ? ;
Looking for something like:
%> {select * from foobar where foo=? and bar=? or baz=? , 'foo','bar','baz' };
| You can use the -v option e.g:
$ psql -v v1=12 -v v2="'Hello World'" -v v3="'2010-11-12'"
and then refer to the variables in SQL as :v1, :v2 etc:
select * from table_1 where id = :v1;
Please pay attention to how we pass string/date values wrapped in two sets of quotes (" '...' "). But this way of interpolation is prone to SQL injection, because you are the one responsible for quoting. E.g. need to include a single quote? -v v2="'don''t do this'".
A better/safer way is to let PostgreSQL handle it:
$ psql -c 'create table t (a int, b varchar, c date)'
$ echo "insert into t (a, b, c) values (:'v1', :'v2', :'v3')" \
| psql -v v1=1 -v v2="don't do this" -v v3=2022-01-01
| PostgreSQL | 7,389,416 | 143 |
Attempting to insert an escape character into a table results in a warning.
For example:
create table EscapeTest (text varchar(50));
insert into EscapeTest (text) values ('This is the first part \n And this is the second');
Produces the warning:
WARNING: nonstandard use of escape in a string literal
(Using PSQL 8.2)
Anyone know how to get around this?
| Partially. The text is inserted, but the warning is still generated.
I found a discussion that indicated the text needed to be preceded with 'E', as such:
insert into EscapeTest (text) values (E'This is the first part \n And this is the second');
This suppressed the warning, but the text was still not being returned correctly. When I added the additional slash as Michael suggested, it worked.
As such:
insert into EscapeTest (text) values (E'This is the first part \\n And this is the second');
| PostgreSQL | 935 | 143 |
I am trying to configure ssl certificate for PostgreSQL server. I have created a certificate file (server.crt) and key (server.key) in data directory and update the parameter SSL to "on" to enable secure connection.
I just want only the server to be authenticated with server certificates on the client side and don't require the authenticity of client at server side. I am using psql as a client to connect and execute the commands.
I am using PostgreSQL 8.4 and Linux. I tried with the below command to connect to server with SSL enabled
psql "postgresql://localhost:2345/postgres?sslmode=require"
but I am getting
psql: invalid connection option "postgresql://localhost:2345/postgres?sslmode"
What am I doing wrong here? Is the way I am trying to connect to the server with SSL mode enabled correct? Is it fine to authenticate only the server and not the client?
| psql below 9.2 does not accept this URL-like syntax for options.
The use of SSL can be driven by the sslmode=value option on the command line or the PGSSLMODE environment variable, but the default being prefer, SSL connections will be tried first automatically without specifying anything.
Example with a conninfo string (updated for psql 8.4)
psql "sslmode=require host=localhost dbname=test"
Read the manual page for more options.
| PostgreSQL | 14,021,998 | 142 |
I have the following table projects.
id title created_at claim_window
1 Project One 2012-05-08 13:50:09.924 5
2 Project Two 2012-06-01 13:50:09.924 10
A) I want to find the deadline with the calculation deadline = created_at + claim_window, where claim_window is the number of days. Something like following:
id title created_at claim_window deadline
1 Project One 2012-05-08 13:50:09.924 5 2012-05-13 13:50:09.924
2 Project Two 2012-06-01 13:50:09.924 10 2012-06-11 13:50:09.924
B) I also want to find the projects whose deadline is gone:
id title created_at claim_window deadline
1 Project One 2012-05-08 13:50:09.924 5 2012-05-13 13:50:09.924
I tried something like following, but it didn't work.
SELECT * FROM "projects"
WHERE (DATE_PART('day', now()- created_at) >= (claim_window+1))
| This will give you the deadline:
select id,
title,
created_at + interval '1' day * claim_window as deadline
from projects
Alternatively the function make_interval can be used:
select id,
title,
created_at + make_interval(days => claim_window) as deadline
from projects
To get all projects where the deadline is over, use:
select *
from (
select id,
created_at + interval '1' day * claim_window as deadline
from projects
) t
where localtimestamp at time zone 'UTC' > deadline
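If you only need the overdue projects (part B), the same filter can be written without the subquery by computing the deadline inline (a sketch; make_interval needs PostgreSQL 9.4+, and you may need to adjust the now()/time-zone handling to match your column types):
select *
from projects
where created_at + make_interval(days => claim_window) < now();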
| PostgreSQL | 10,909,902 | 142 |
I am trying to create table with Postgis. I do it by this page. But when I import postgis.sql file, I get a lot of errors:
ERROR: type "geometry" does not exist
Does anybody know how can I fix it?
| I had the same problem, but it was fixed by running following code
CREATE EXTENSION postgis;
In detail,
open pgAdmin
select (click) your database
click "SQL" icon on the bar
run "CREATE EXTENSION postgis;" code
| PostgreSQL | 6,850,500 | 142 |
Did a new install of postgres 8.4 on mint ubuntu. How do I create a user for postgres and login using psql?
When I type psql, it just tells me
psql: FATAL: Ident authentication failed for user "my-ubuntu-username"
| There are two methods you can use. Both require creating a user and a database.
By default psql connects to the database with the same name as the user. So there is a convention to make that the "user's database". And there is no reason to break that convention if your user only needs one database. We'll be using mydatabase as the example database name.
Using createuser and createdb, we can be explicit about the database name,
$ sudo -u postgres createuser -s $USER
$ createdb mydatabase
$ psql -d mydatabase
You should probably be omitting that entirely and letting all the commands default to the user's name instead.
$ sudo -u postgres createuser -s $USER
$ createdb
$ psql
Using the SQL administration commands, and connecting with a password over TCP
$ sudo -u postgres psql postgres
And, then in the psql shell
CREATE ROLE myuser LOGIN PASSWORD 'mypass';
CREATE DATABASE mydatabase WITH OWNER = myuser;
Then you can login,
$ psql -h localhost -d mydatabase -U myuser -p <port>
If you don't know the port, you can always get it by running the following, as the postgres user,
SHOW port;
Or,
$ grep "port =" /etc/postgresql/*/main/postgresql.conf
Sidenote: the postgres user
I suggest NOT modifying the postgres user.
It's normally locked out at the OS level. No one is supposed to "log in" to the operating system as postgres. You're supposed to have root in order to authenticate as postgres.
It's normally not password protected and delegates to the host operating system. This is a good thing. It normally means that in order to log in as postgres, which is the PostgreSQL equivalent of SQL Server's SA, you have to have write access to the underlying data files. And that means that you could normally wreak havoc anyway.
By keeping this disabled, you remove the risk of a brute force attack through a named super-user. Concealing and obscuring the name of the superuser has advantages.
| PostgreSQL | 2,172,569 | 142 |
When I have a column with separated values, I can use the unnest() function:
myTable
id | elements
---+------------
1 |ab,cd,efg,hi
2 |jk,lm,no,pq
3 |rstuv,wxyz
select id, unnest(string_to_array(elements, ',')) AS elem
from myTable
id | elem
---+-----
1 | ab
1 | cd
1 | efg
1 | hi
2 | jk
...
How can I include element numbers? I.e.:
id | elem | nr
---+------+---
1 | ab | 1
1 | cd | 2
1 | efg | 3
1 | hi | 4
2 | jk | 1
...
I want the original position of each element in the source string. I've tried with window functions (row_number(), rank() etc.) but I always get 1. Maybe because they are in the same row of the source table?
I know it's a bad table design. It's not mine, I'm just trying to fix it.
| Postgres 14 or later
Use string_to_table() instead of unnest(string_to_array()) for a comma-separated string:
SELECT t.id, a.elem, a.nr
FROM tbl t
LEFT JOIN LATERAL string_to_table(t.elements, ',')
WITH ORDINALITY AS a(elem, nr) ON true;
fiddle
Related:
Split column into multiple rows in Postgres
Unnesting an actual array didn't change since Postgres 9.4.
Postgres 9.4 or later
Use WITH ORDINALITY for set-returning functions:
When a function in the FROM clause is suffixed by WITH ORDINALITY, a
bigint column is appended to the output which starts from 1 and
increments by 1 for each row of the function's output. This is most
useful in the case of set returning functions such as unnest().
In combination with the LATERAL feature in pg 9.3+, and according to this thread on pgsql-hackers, the above query can now be written as:
SELECT t.id, a.elem, a.nr
FROM tbl AS t
LEFT JOIN LATERAL unnest(string_to_array(t.elements, ','))
WITH ORDINALITY AS a(elem, nr) ON true;
LEFT JOIN ... ON true preserves all rows from the left table, even if the table expression to the right returns no rows. See:
What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?
If that's of no concern you can use the otherwise equivalent, less verbose form with an implicit CROSS JOIN LATERAL:
SELECT t.id, a.elem, a.nr
FROM tbl t, unnest(string_to_array(t.elements, ',')) WITH ORDINALITY a(elem, nr);
Or simpler, based off an actual array (arr being an array column):
SELECT t.id, a.elem, a.nr
FROM tbl t, unnest(t.arr) WITH ORDINALITY a(elem, nr);
Or just go with default column names:
SELECT id, a, ordinality
FROM tbl, unnest(arr) WITH ORDINALITY a;
Or shorter, yet:
SELECT id, a.* FROM tbl, unnest(arr) WITH ORDINALITY a;
Or minimal syntax:
SELECT * FROM tbl, unnest(arr) WITH ORDINALITY a;
The last one returns all columns of tbl, of course.
a is automatically table and column alias (for the first column). The default name of the added ordinality column is ordinality. But it's clearer to add explicit column aliases and table-qualify columns.
The original order of array elements is preserved this way. The manual for unnest():
Expands an array into a set of rows. The array's elements are read out in storage order.
Postgres 8.4 - 9.3
With row_number() OVER (PARTITION BY id ORDER BY elem) you get numbers according to the sort order, not the ordinal position in the original string.
You can simply omit ORDER BY:
SELECT *, row_number() OVER (PARTITION by id) AS nr
FROM (SELECT id, regexp_split_to_table(elements, ',') AS elem FROM tbl) t;
While this normally works and I have never seen it fail in simple queries, PostgreSQL asserts nothing concerning the order of rows without ORDER BY. It happens to work due to an implementation detail.
To guarantee ordinal numbers of elements in the blank-separated string:
SELECT id, arr[nr] AS elem, nr
FROM (
SELECT *, generate_subscripts(arr, 1) AS nr
FROM (SELECT id, string_to_array(elements, ' ') AS arr FROM tbl) t
) sub;
Or simpler if based off an actual array:
SELECT id, arr[nr] AS elem, nr
FROM (SELECT *, generate_subscripts(arr, 1) AS nr FROM tbl) t;
Related answer on dba.SE:
How to preserve the original order of elements in an unnested array?
Postgres 8.1 - 8.4
None of these features are available, yet: RETURNS TABLE, generate_subscripts(), unnest(), array_length(). But this works:
CREATE FUNCTION f_unnest_ord(anyarray, OUT val anyelement, OUT ordinality integer)
RETURNS SETOF record
LANGUAGE sql IMMUTABLE AS
'SELECT $1[i], i - array_lower($1,1) + 1
FROM generate_series(array_lower($1,1), array_upper($1,1)) i';
Note in particular, that the array index can differ from ordinal positions of elements. Consider this demo with an extended function:
CREATE FUNCTION f_unnest_ord_idx(anyarray, OUT val anyelement, OUT ordinality int, OUT idx int)
RETURNS SETOF record
LANGUAGE sql IMMUTABLE AS
'SELECT $1[i], i - array_lower($1,1) + 1, i
FROM generate_series(array_lower($1,1), array_upper($1,1)) i';
SELECT id, arr, (rec).*
FROM (
SELECT *, f_unnest_ord_idx(arr) AS rec
FROM (
VALUES
(1, '{a,b,c}'::text[]) -- short for: '[1:3]={a,b,c}'
, (2, '[5:7]={a,b,c}')
, (3, '[-9:-7]={a,b,c}')
) t(id, arr)
) sub;
id | arr | val | ordinality | idx
----+-----------------+-----+------------+-----
1 | {a,b,c} | a | 1 | 1
1 | {a,b,c} | b | 2 | 2
1 | {a,b,c} | c | 3 | 3
2 | [5:7]={a,b,c} | a | 1 | 5
2 | [5:7]={a,b,c} | b | 2 | 6
2 | [5:7]={a,b,c} | c | 3 | 7
3 | [-9:-7]={a,b,c} | a | 1 | -9
3 | [-9:-7]={a,b,c} | b | 2 | -8
3 | [-9:-7]={a,b,c} | c | 3 | -7
Compare:
Normalize array subscripts so they start with 1
| PostgreSQL | 8,760,419 | 141 |
I'd like to make a random string for use in session verification using PostgreSQL. I know I can get a random number with SELECT random(), so I tried SELECT md5(random()), but that doesn't work. How can I do this?
| You can fix your initial attempt like this:
SELECT md5(random()::text);
Much simpler than some of the other suggestions. :-)
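If the string is used as a session token, note that md5(random()::text) is not cryptographically strong; a stronger alternative (a sketch, requires the pgcrypto extension) is:
CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT encode(gen_random_bytes(16), 'hex');  -- 32 hex characters from a cryptographically secure source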
| PostgreSQL | 3,970,795 | 141 |
I have pgAdmin version 1.16.1 installed on my machine.
For exporting a table dump, I do:
Right click on the table => Choose backup => Set Format to Plain => Save the file as some_name.sql
Then I remove the table.
Ok, now I need to import the backup I just created from some_name.sql into the database.
How am I supposed to do this? I can't find any clear instructions on how to import table's .sql dump into database using pgAdmin.
I'd appreciate some guidance.
|
In pgAdmin, select the required target schema in object tree (databases ->your_db_name -> schemas -> your_target_schema)
Click on Plugins/PSQL Console (in top-bar)
Write \i /path/to/yourfile.sql
Press enter
| PostgreSQL | 18,736,345 | 140 |
For development I'm using SQLite database with production in PostgreSQL. I updated my local database with data and need to transfer a specific table to the production database.
Running sqlite database .dump > /the/path/to/sqlite-dumpfile.sql, SQLite outputs a table dump in the following format:
BEGIN TRANSACTION;
CREATE TABLE "courses_school" ("id" integer PRIMARY KEY, "department_count" integer NOT NULL DEFAULT 0, "the_id" integer UNIQUE, "school_name" varchar(150), "slug" varchar(50));
INSERT INTO "courses_school" VALUES(1,168,213,'TEST Name A',NULL);
INSERT INTO "courses_school" VALUES(2,0,656,'TEST Name B',NULL);
....
COMMIT;
How do I convert this into a PostgreSQL compatible dump file I can import into my production server?
| You should be able to feed that dump file straight into psql:
/path/to/psql -d database -U username -W < /the/path/to/sqlite-dumpfile.sql
If you want the id column to "auto increment" then change its type from "int" to "serial" in the table creation line. PostgreSQL will then attach a sequence to that column so that INSERTs with NULL ids will be automatically assigned the next available value. PostgreSQL will also not recognize AUTOINCREMENT commands, so these need to be removed.
You'll also want to check for datetime columns in the SQLite schema and change them to timestamp for PostgreSQL. (Thanks to Clay for pointing this out.)
If you have booleans in your SQLite then you could convert 1 and 0 to 1::boolean and 0::boolean (respectively) or you could change the boolean column to an integer in the schema section of the dump and then fix them up by hand inside PostgreSQL after the import.
If you have BLOBs in your SQLite then you'll want to adjust the schema to use bytea. You'll probably need to mix in some decode calls as well. Writing a quick'n'dirty copier in your favorite language might be easier than mangling the SQL if you have a lot of BLOBs to deal with, though.
As usual, if you have foreign keys then you'll probably want to look into set constraints all deferred to avoid insert ordering problems, placing the command inside the BEGIN/COMMIT pair.
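Putting those adjustments together, an edited dump for the example table could look roughly like this (a sketch only; the added_on and active columns are hypothetical, included just to illustrate the timestamp and boolean tweaks):
BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- only affects constraints declared DEFERRABLE
CREATE TABLE "courses_school" (
  "id" serial PRIMARY KEY,                        -- was: integer PRIMARY KEY
  "department_count" integer NOT NULL DEFAULT 0,
  "the_id" integer UNIQUE,
  "school_name" varchar(150),
  "slug" varchar(50),
  "added_on" timestamp,                           -- was: datetime (hypothetical column)
  "active" boolean                                -- was: integer 0/1 (hypothetical column)
);
INSERT INTO "courses_school" VALUES (1, 168, 213, 'TEST Name A', NULL, '2011-01-01 00:00:00', true);
COMMIT;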
Thanks to Nicolas Riley for the boolean, blob, and constraints notes.
If you have backtick (`) quoting in your dump, as generated by some SQLite3 clients, you need to remove it.
PostgreSQL also doesn't recognize unsigned columns, so you might want to drop that or add a custom CHECK constraint such as this:
CREATE TABLE tablename (
...
unsigned_column_name integer CHECK (unsigned_column_name >= 0)
);
While SQLite defaults null values to '', PostgreSQL requires them to be set as NULL.
The syntax in the SQLite dump file appears to be mostly compatible with PostgreSQL so you can patch a few things and feed it to psql. Importing a big pile of data through SQL INSERTs might take a while but it'll work.
| PostgreSQL | 4,581,727 | 140 |
I have an application using hibernate 3.1 and JPA annotations. It has a few objects with byte[] attributes (1k - 200k in size). It uses the JPA @Lob annotation, and hibernate 3.1 can read these just fine on all major databases -- it seems to hide the JDBC Blob vendor peculiarities (as it should do).
@Entity
public class ConfigAttribute {
@Lob
public byte[] getValueBuffer() {
return m_valueBuffer;
}
}
We had to upgrade to 3.5, when we discovered that hibernate 3.5 breaks (and won't fix) this annotation combination in postgresql (with no workaround). I have not found a clear fix so far, but I did notice that if I just remove the @Lob, it uses the postgresql type bytea (which works, but only on postgres).
annotation postgres oracle works on
-------------------------------------------------------------
byte[] + @Lob oid blob oracle
byte[] bytea raw(255) postgresql
byte[] + @Type(PBA) oid blob oracle
byte[] + @Type(BT) bytea blob postgresql
once you use @Type, @Lob seems to not be relevant
note: oracle seems to have deprecated the "raw" type since 8i.
I am looking for a way to have a single annotated class (with a blob property) which is portable across major databases.
What is the portable way to annotate a byte[] property?
Is this fixed in some recent version of hibernate?
Update:
After reading this blog I have finally figured out what the original workaround in the JIRA issue was: Apparently you are supposed to drop @Lob and annotate the property as:
@Type(type="org.hibernate.type.PrimitiveByteArrayBlobType")
byte[] getValueBuffer() {...
However, this does not work for me -- I still get OIDs instead of bytea; it did however work for the author of the JIRA issue, who seemed to want oid.
After the answer from A. Garcia, I then tried this combo, which actually does work on postgresql, but not on oracle.
@Type(type="org.hibernate.type.BinaryType")
byte[] getValueBuffer() {...
What I really need to do is control which @org.hibernate.annotations.Type the combination (@Lob + byte[] gets mapped) to (on postgresql).
Here is the snippet from 3.5.5.Final of MaterializedBlobType (SQL type Blob). According to Steve's blog, PostgreSQL wants you to use streams for bytea (don't ask me why) and PostgreSQL's custom Blob type for oids. Note also that using setBytes() on JDBC is also for bytea (from past experience). So this explains why use_streams_for_binary has no effect: both branches assume 'bytea'.
public void set(PreparedStatement st, Object value, int index) {
byte[] internalValue = toInternalFormat( value );
if ( Environment.useStreamsForBinary() ) {
// use streams = true
st.setBinaryStream( index,
new ByteArrayInputStream( internalValue ), internalValue.length );
}
else {
// use streams = false
st.setBytes( index, internalValue );
}
}
This results in:
ERROR: column "signature" is of type oid but expression is of type bytea
Update
The next logical question is: "why not just change the table definitions manually to bytea" and keep the (@Lob + byte[])? This does work, UNTIL you try to store a null byte[]. The PostgreSQL driver then thinks it is an OID-type expression while the column type is bytea -- this is because Hibernate (rightly) calls JDBC setNull() instead of setBytes(null), which the PG driver expects.
ERROR: column "signature" is of type bytea but expression is of type oid
The type system in Hibernate is currently a 'work in progress' (according to the 3.5.5 deprecation comment). In fact, so much of the 3.5.5 code is deprecated that it is hard to know what to look at when sub-classing the PostgreSQLDialect.
AFAICT, Types.BLOB/'oid' on PostgreSQL should be mapped to some custom type which uses OID-style JDBC access (i.e. a PostgresqlBlobType object and NOT MaterializedBlobType). I've never actually used Blobs successfully with PostgreSQL, but I do know that bytea simply works as I would expect.
I am currently looking at the BatchUpdateException -- it's possible that the driver doesn't support batching.
Great quote from 2004:
"To sum up my ramblings, I'd say they we should wait for the JDBC driver to do LOBs properly before changing Hibernate."
References:
https://forum.hibernate.org/viewtopic.php?p=2393203
https://forum.hibernate.org/viewtopic.php?p=2435174
http://hibernate.atlassian.net/browse/HHH-4617
http://postgresql.1045698.n5.nabble.com/Migration-to-Hibernate-3-5-final-td2175339.html
https://jira.springframework.org/browse/SPR-2318
https://forums.hibernate.org/viewtopic.php?p=2203382&sid=b526a17d9cf60a80f13d40cf8082aafd
http://virgo47.wordpress.com/2008/06/13/jpa-postgresql-and-bytea-vs-oid-type/
|
What is the portable way to annotate a byte[] property?
It depends on what you want. JPA can persist a non annotated byte[]. From the JPA 2.0 spec:
11.1.6 Basic Annotation
The Basic annotation is the simplest type of mapping to a database column. The Basic annotation can be applied to a persistent property or instance variable of any of the following types: Java primitive types, wrappers of the primitive types, java.lang.String, java.math.BigInteger, java.math.BigDecimal, java.util.Date, java.util.Calendar, java.sql.Date, java.sql.Time, java.sql.Timestamp, byte[], Byte[], char[], Character[], enums, and any other type that implements Serializable.
As described in Section 2.8, the use of the Basic annotation is optional for persistent fields and properties of these types. If the Basic annotation is not specified for such a field or property, the default values of the Basic annotation will apply.
And Hibernate will map it "by default" to a SQL VARBINARY (or a SQL LONGVARBINARY, depending on the Column size?) that PostgreSQL handles with a bytea.
But if you want the byte[] to be stored in a Large Object, you should use a @Lob. From the spec:
11.1.24 Lob Annotation
A Lob annotation specifies that a persistent property or field should be persisted as a large object to a database-supported large object type. Portable applications should use the Lob annotation when mapping to a database Lob type. The Lob annotation may be used in conjunction with the Basic annotation or with the ElementCollection annotation when the element collection value is of basic type. A Lob may be either a binary or character type. The Lob type is inferred from the type of the persistent field or property and, except for string and character types, defaults to Blob.
And Hibernate will map it to a SQL BLOB that PostgreSQL handles with an oid.
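For illustration, the PostgreSQL DDL generated for the two mappings ends up roughly like this (a sketch with hypothetical table names; exact types depend on the dialect and Hibernate version):
-- byte[] without @Lob: stored inline as bytea
CREATE TABLE config_attribute_basic (
  id           bigint PRIMARY KEY,
  value_buffer bytea
);
-- byte[] with @Lob: stored as a large object, the column holds an oid reference to it
CREATE TABLE config_attribute_lob (
  id           bigint PRIMARY KEY,
  value_buffer oid
);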
Is this fixed in some recent version of hibernate?
Well, the problem is that I don't know what the problem is exactly. But I can at least say that nothing has changed since 3.5.0-Beta-2 (which is where a change was introduced) in the 3.5.x branch.
But my understanding of issues like HHH-4876, HHH-4617 and of PostgreSQL and BLOBs (mentioned in the javadoc of the PostgreSQLDialect) is that you are supposed to set the following property
hibernate.jdbc.use_streams_for_binary=false
if you want to use oid i.e. byte[] with @Lob (which is my understanding since VARBINARY is not what you want with Oracle). Did you try this?
As an alternative, HHH-4876 suggests using the deprecated PrimitiveByteArrayBlobType to get the old behavior (pre Hibernate 3.5).
References
JPA 2.0 Specification
Section 2.8 "Mapping Defaults for Non-Relationship Fields or Properties"
Section 11.1.6 "Basic Annotation"
Section 11.1.24 "Lob Annotation"
Resources
http://opensource.atlassian.com/projects/hibernate/browse/HHH-4876
http://opensource.atlassian.com/projects/hibernate/browse/HHH-4617
http://relation.to/Bloggers/PostgreSQLAndBLOBs
| PostgreSQL | 3,677,380 | 140 |
Is it possible to change the constraint name in Postgres?
I have a PK added with:
ALTER TABLE contractor_contractor ADD CONSTRAINT commerce_contractor_pkey PRIMARY KEY(id);
And I want to to have different name for it, to be consistent with the rest of the system.
Shall I delete the existing PK constraint and create a new one? Or is there a 'soft' way to
manage it?
Thanks!
| To rename an existing constraint in PostgreSQL 9.2 or newer, you can use ALTER TABLE:
ALTER TABLE name RENAME CONSTRAINT constraint_name TO new_constraint_name;
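Applied to the primary key from the question, that would be, for example (the new name here is just an illustration, use whatever matches your naming scheme):
ALTER TABLE contractor_contractor
  RENAME CONSTRAINT commerce_contractor_pkey TO contractor_contractor_pkey;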
| PostgreSQL | 971,786 | 140 |
I am trying to do a like query like so
def self.search(search, page = 1 )
paginate :per_page => 5, :page => page,
:conditions => ["name LIKE '%?%' OR postal_code like '%?%'", search, search], order => 'name'
end
But when it is run something is adding quotes which causes the sql statement to come out like so
SELECT COUNT(*)
FROM "schools"
WHERE (name LIKE '%'havard'%' OR postal_code like '%'havard'%')):
So you can see my problem.
I am using Rails 4 and Postgres 9 both of which I have never used so not sure if its and an activerecord thing or possibly a postgres thing.
How can I set this up so I have like '%my_search%' in the end query?
| Your placeholder is replaced by a string and you're not handling it right.
Replace
"name LIKE '%?%' OR postal_code LIKE '%?%'", search, search
with
"name LIKE ? OR postal_code LIKE ?", "%#{search}%", "%#{search}%"
| PostgreSQL | 19,105,706 | 139 |
I'll need to invoke REFRESH MATERIALIZED VIEW on each change to the tables involved, right? I'm surprised to not find much discussion of this on the web.
How should I go about doing this?
I think the top half of the answer here is what I'm looking for: https://stackoverflow.com/a/23963969/168143
Are there any dangers to this? If updating the view fails, will the transaction on the invoking update, insert, etc. be rolled back? (this is what I want... I think)
|
I'll need to invoke REFRESH MATERIALIZED VIEW on each change to the tables involved, right?
Yes, PostgreSQL by itself will never call it automatically, you need to do it some way.
How should I go about doing this?
There are many ways to achieve this. Before giving some examples, keep in mind that the REFRESH MATERIALIZED VIEW command locks the view in ACCESS EXCLUSIVE mode, so while it is running you can't even run SELECT against the materialized view.
Although, if you are in version 9.4 or newer, you can give it the CONCURRENTLY option:
REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv;
This will acquire an ExclusiveLock and will not block SELECT queries, but it may have a bigger overhead (it depends on the amount of data changed; if only a few rows have changed, it might even be faster). You still can't run two REFRESH commands on the same view at the same time, though.
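Note that CONCURRENTLY also requires a unique index on the materialized view that covers all its rows; a minimal sketch (assuming a column id that uniquely identifies each row of my_mv):
CREATE UNIQUE INDEX my_mv_id_idx ON my_mv (id);
REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv;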
Refresh manually
It is an option to consider. Specially in cases of data loading or batch updates (e.g. a system that only loads tons of information/data after long periods of time) it is common to have operations at end to modify or process the data, so you can simple include a REFRESH operation in the end of it.
Scheduling the REFRESH operation
The first and most widely used option is to use some scheduling system to invoke the refresh; for instance, you could configure something like the following in a cron job:
*/30 * * * * psql -d your_database -c "REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv"
And then your materialized view will be refreshed at each 30 minutes.
Considerations
This option is really good, especially with the CONCURRENTLY option, but only if you can accept the data not being 100% up to date all the time. Keep in mind that, with or without CONCURRENTLY, the REFRESH command needs to run the entire query, so you have to account for the time the underlying query takes when choosing the refresh schedule.
Refreshing with a trigger
Another option is to call the REFRESH MATERIALIZED VIEW in a trigger function, like this:
CREATE OR REPLACE FUNCTION tg_refresh_my_mv()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv;
RETURN NULL;
END;
$$;
Then, in any table that involves changes on the view, you do:
CREATE TRIGGER tg_refresh_my_mv AFTER INSERT OR UPDATE OR DELETE
ON table_name
FOR EACH STATEMENT EXECUTE PROCEDURE tg_refresh_my_mv();
Considerations
It has some critical pitfalls for performance and concurrency:
Any INSERT/UPDATE/DELETE operation will have to execute the view's query (which is possibly slow; that is usually why a materialized view is used in the first place);
Even with CONCURRENTLY, one REFRESH still blocks another one, so any INSERT/UPDATE/DELETE on the involved tables will be serialized.
The only situation where I can see this as a good idea is if the changes are really rare.
Refresh using LISTEN/NOTIFY
The problem with the previous option is that it is synchronous and imposes a big overhead on each operation. To ameliorate that, you can use a trigger like before, but one that only issues a NOTIFY:
CREATE OR REPLACE FUNCTION tg_refresh_my_mv()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
NOTIFY refresh_mv, 'my_mv';
RETURN NULL;
END;
$$;
So then you can build an application that keeps a connection open and uses the LISTEN operation to identify the need to call REFRESH. One nice project you can use to test this is pgsidekick; with it you can use a shell script to LISTEN, so you can schedule the REFRESH as:
pglisten --listen=refresh_mv --print0 | xargs -0 -n1 -I? psql -d your_database -c "REFRESH MATERIALIZED VIEW CONCURRENTLY ?;"
Or use pglater (also part of pgsidekick) to make sure you don't call REFRESH too often. For example, you can use the following trigger to request a REFRESH, but at most once per 60 seconds:
CREATE OR REPLACE FUNCTION tg_refresh_my_mv()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
NOTIFY refresh_mv, '60 REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv';
RETURN NULL;
END;
$$;
So it will not call REFRESH more often than every 60 seconds, and if you NOTIFY many times within 60 seconds, the REFRESH will be triggered only once.
Considerations
Like the cron option, this one is also good only if you can bear a little stale data, but it has the advantage that the REFRESH is called only when really needed, so you have less overhead and the data is updated closer to when it is needed.
Note: I haven't really tried the code and examples yet, so if someone finds a mistake or typo, or tries them and they work (or not), please let me know.
| PostgreSQL | 29,437,650 | 138 |