question | answer | tag | question_id | score
---|---|---|---|---|
In Git I can use an interactive rebase to rewrite history. This is great because in my feature branch I made a ton of commits with partially working code as I explored different refactors and ways of getting it done.
I'd like to squash a lot of the commits together before rebasing or merging the branch onto master.
Some made-up commits, in order from first (top) to last (bottom):
1. Initial commit on feature branch "Automatic coffee maker UI"
2. Add hot chocolate as product
3. Add tea as product. Products are now generic
4. Create in memory data store for adapter tests
5. Cry because I can't get entity framework to create a composite key. Integration tests broken.
6. Implemented composite key!!
7. All tests green and feature done!
Let's say I want to keep commits 3, 4 and 7.
Using rebase I want to "squash" commits
1 and 2 go into 3.
4 stays
5 and 6 go into 7
Ideally in the interactive rebase I would do
1. squash
2. squash
3. pick (contains the work of 1 & 2)
4. pick
5. squash
6. squash
7. pick (contains the work of 5 & 6)
But that is backwards because squash merges a commit with its previous commit. I can't figure out how to make it squash forward.
Am I being difficult, and should I accept that won't work (I'd rather get it working), or is there a way to accomplish this?
I'm invoking this command with
git checkout My-feature-branch
git rebase master -i
Then I'm editing the list of commits that comes up, and trying to finish it by saving the file and exiting the editor, which typically works for me.
| One option is to reorder the commits so the to-be-kept commit comes before the to-be-squashed commits, if this is feasible. (For an alternative, see the update at the end of the answer.)
If this is not feasible, because you then would get conflicts you don't want to resolve, just make it
1. pick
2. squash
3. squash
4. pick
5. pick
6. squash
7. squash
When the squashes are done, you can edit the commit message to contain the message you'd like the final commit to have. Easy as pie. :-)
You might even be able to do
1. pick
2. fixup
3. squash
4. pick
5. pick
6. fixup
7. squash
Then I think the commit message editor should fire up only once, as with fixup the previous commit's message is simply taken without launching the editor.
On squash, when the commit message editor fires, you also get both commit messages (the one from the to-be-squashed-into commit and the one from the to-be-squashed commit), so you can simply delete the commit message you don't want to keep.
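For illustration, with the pick/squash/squash list above, the combined message Git opens looks roughly like this (the exact comment wording varies by Git version), and you just delete the message lines you don't want to keep:
# This is a combination of 3 commits.
# This is the 1st commit message:

Initial commit on feature branch "Automatic coffee maker UI"

# This is the commit message #2:

Add hot chocolate as product

# This is the commit message #3:

Add tea as product. Products are now generic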
Update:
To use the author date and commit message of 3. and 7., and thus be closer to what the OP originally wanted, you can use this setting:
1. edit
2. fixup
3. fixup
4. pick
5. edit
6. fixup
7. fixup
Then on the first break you use:
git commit --amend -C master~4 # or any other way to reference commit number 3
Then you continue the rebase and on the second break you use:
git commit --amend -C master # or any other way to reference commit number 7
Then you continue the rebase and are done.
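In both of those breaks, once the amend is done, the rebase is resumed the usual way:
git rebase --continue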
or to automate it:
1. pick
exec git commit --amend -C master~4 # or any other way to reference commit number 3
2. fixup
3. fixup
4. pick
5. pick
exec git commit --amend -C master # or any other way to reference commit number 7
6. fixup
7. fixup
| Squash | 44,210,747 | 20 |
I have a large number of commits, about 20, that I've done since my last push to origin/master. I have never had more than one branch, master, and all commits were done on master. How can I squash all 20 commits into one commit, preferably using sourcetree? I want to do this so I can just push one commit to origin/master.
In sourcetree I have figured out to use the interactive rebase command in the repository menu. It brings up the exact list of commits I want to squash. I tried hitting the squash button repeatedly until it shows one commit containing all of them. But when I hit OK I end up with only the two most recent commits squashed. So even though the dialog seems to show it can squash multiple in practice I can't get it to work.
| Easier solution (than a rebase):
Select the "origin/master" commit in the log entry, click on "Reset <branch> to this commit".
Use the default mixed mode.
Then add and commit: all your changes will be registered again in one new commit that you will then be able to push.
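If you prefer the command line over Sourcetree, a sketch of the same reset-based squash (assuming the branch is master and the remote branch is origin/master, as in the question):
git checkout master
git reset origin/master     # mixed reset: keeps every change, but unstaged
git add -A
git commit -m "One commit containing all work since the last push"
git push origin master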
See git reset Demystified for more.
| Squash | 25,102,750 | 16 |
I use an optimistic work-flow in Gitlab, which assumes the majority of my merge requests will be accepted without change. The flow looks like this:
1. Submit a merge request for branch cool-feature-A
2. Create a new branch, based on cool-feature-A, called cool-feature-B. Begin developing on this branch.
3. A colleague approves my merge request for cool-feature-A.
4. I rebase cool-feature-B against master (which is painless) and continue development.
The problem occurs if a colleague does a squash merge at step 3. This rewrites large chunks of history which are present in cool-feature-B. When I get to step 4, there is a world of merge pain ahead of me.
How can I avoid this happening?
| Essentially, you have to tell Git: I want to rebase cool-feature-B against master, but I want to copy a different set of commits than the ones you'd normally compute here. The easiest way to do this is going to be to use --onto. That is, normally you run, as you said:
git checkout cool-feature-B
git rebase master
but you'll need to do:
git rebase --onto master cool-feature-A
before you delete your own branch-name / label cool-feature-A.
You can always do this, even if they use a normal merge. It won't hurt to do it, except in that you have to type a lot more and remember (however you like) that you'll want this --onto, which needs the right name or hash ID, later.
(If you can get the hash ID of the commit to which cool-feature-A points at the moment into the reflog for your upstream of cool-feature-B, you can use the --fork-point feature to make Git compute this for you automatically later, provided the reflog entry in question has not expired. But that's probably harder, in general, than just doing this manually. Plus it has that whole "provided" part.)
Why this is the case
Let's start, as usual, with the graph drawings. Initially, you have this setup in your own repository:
...--A   <-- master, origin/master
      \
       B--C   <-- cool-feature-A
           \
            D   <-- cool-feature-B
Once you have run git push origin cool-feature-A cool-feature-B, you have:
...--A   <-- master, origin/master
      \
       B--C   <-- cool-feature-A, origin/cool-feature-A
           \
            D   <-- cool-feature-B, origin/cool-feature-B
Note that all we did here was add two origin/ names (two remote-tracking names): in their repository, over at origin, they acquired commits B, C, and D and they set their cool-feature-A and cool-feature-B names to remember commits C and D respectively, so your own Git added your origin/ variants of these two names.
If they (whoever "they" are—the people who control the repository over on origin) do a fast-forward merge, they'll slide their master up to point to commit C. (Note that the GitHub web interface has no button to make a fast-forward merge. I have not used GitLab's web interface; it may be different in various ways.) If they force a real merge—which is what the GitHub web page "merge this now" clicky button does by default; again, I don't know what GitLab does here—they'll make a new merge commit E:
...--A------E
      \    /
       B--C
           \
            D
(here I've deliberately stripped off all the names as theirs don't quite match yours). They'll presumably delete (or maybe even never actually created) their cool-feature-A name. Either way, you can have your own Git fast-forward your master name, while updating your origin/* names:
...--A------E   <-- master, origin/master
      \    /
       B--C   <-- cool-feature-A [, origin/cool-feature-A if it exists]
           \
            D   <-- cool-feature-B, origin/cool-feature-B
or:
...--A
      \
       B--C   <-- cool-feature-A, master, origin/master [, origin/cool-feature-A]
           \
            D   <-- cool-feature-B, origin/cool-feature-B
Whether or not you delete your name cool-feature-A now—for convenience in later drawings, let's say you do—if you run:
git checkout cool-feature-B
git rebase master
Git will now enumerate the list of commits reachable from D—D, then C, then B, and so on—and subtract away the list of commits reachable from master: E (if it exists), then A (if E exists) and C (whether or not E exists), then B, and so on. The result of the subtraction is just commit D.
Your Git now copies the commits in this list so that the new copies come after the tip of master: i.e., after E if they made a real merge, or after C if they did a fast-forward merge. So Git either copies D to a new commit D' that comes after E:
              D'  <-- cool-feature-B
             /
...--A------E   <-- master, origin/master
      \    /
       B--C
           \
            D   <-- origin/cool-feature-B
or it leaves D alone because it already comes after C (so there's nothing new to draw).
The tricky parts occur when they use whatever GitLab's equivalent is of GitHub's "squash and merge" or "rebase and merge" clicky buttons. I'll skip the "rebase and merge" case (which usually causes no problems because Git's rebase checks patch-IDs too) and go straight for the hard case, the "squash and merge". As you correctly noted, this makes a new and different, single, commit. When you bring that new commit into your own repository—e.g., after git fetch—you have:
...--A--X   <-- master, origin/master
      \
       B--C   <-- cool-feature-A [, origin/cool-feature-A]
           \
            D   <-- cool-feature-B, origin/cool-feature-B
where X is the result of making a new commit whose snapshot would match merge E (if they were to make merge E), but whose (single) parent is existing commit A. So the history—the list of commits enumerated by working backwards—from X is just X, then A, then whatever commits come before A.
If you run a regular git rebase master while on cool-feature-B, Git:
enumerates the commits reachable from D: D, C, B, A, ...;
enumerates the commits reachable from X: X, A, ...;
subtracts the set in step 2 from the set in step 1, leaving D, C, B;
copies those commits (in un-backwards-ized order) so that they come after X.
Note that steps 2 and 3 both use the word master to find the commits: the commits to copy, for step 2, are those that aren't reachable from master. The place to put the copies, for step 3, is after the tip of master.
But if you run:
git rebase --onto master cool-feature-A
you have Git use different items in steps 2 and 3:
The list of commits to copy, from step 2, comes from cool-feature-A..cool-feature-B: subtract C-B-A-... from D-C-B-A-.... That leaves just commit D.
The place to put the copies, in step 3, comes from --onto master: put them after X.
So now Git only copies D to D', after which git rebase yanks the name cool-feature-B over to point to D':
          D'  <-- cool-feature-B
         /
...--A--X   <-- master, origin/master
      \
       B--C   <-- cool-feature-A [, origin/cool-feature-A]
           \
            D   <-- origin/cool-feature-B
which is what you wanted.
Had they—the people in control of the GitLab repo—used a true merge or a fast-forward not-really-a-merge-at-all, this would all still work: you would have your Git enumerate D-on-backwards but remove C-on-backwards from the list, leaving just D to copy; and then Git would copy D so that it comes after either E (the true merge case) or C (the fast-forward case). The fast-forward case, "copy D so that it comes where it already is", would cleverly not bother to copy at all and just leave everything in place.
| Squash | 56,804,649 | 15 |
My work flow:
branch from master
work in my branch, commit frequently (100+)
when the job is done in my branch, merge master into my branch and resolve all the conflicts.
CODE REVIEW TIME before merging back to master
For CODE REVIEW, I need to show the differences between two heads and squash/organize my commits ( in about 5 commits ). What's the best GUI (cross-platform?) for this task?
| The Sourcetree free Git GUI for Windows and Mac supports this.
Alternatively, to do it without a GUI, you can run
git rebase --interactive --autosquash
because you committed with commit messages beginning with squash! (when those intermediate commits were about the same task)
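As a sketch of that workflow (the target commit is a placeholder, and master is assumed as the base branch):
git commit --squash <target-commit>    # or --fixup to discard this commit's message
git rebase --interactive --autosquash master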
See "Trimming GIT Checkins/Squashing GIT History".
| Squash | 7,694,911 | 13 |
My history looks somewhat like this but times 10:
  i - j - e'- k - h'- l - m   feature-branch
 /
a - b - c - d - e - f - g - h   master
(the apostrophes meaning cherry picks)
I want to rebase to this:
  i - j - k - l - m   feature-branch
 /
a - b - c - d - e - f - g - h   master
I don't mind if the feature gets squashed into 1 commit. The main problem with a regular rebase is that it tries to rebase one commit at a time, and I have to fix and re-fix similar conflicts over and over.
All I want is to take the difference between the tip of my branch and the tip of master and apply those on top of master.
| This is actually quite easy.
Merge master into feature-branch. You will solve all of your merge conflicts here at once. (I highly recommend stopping here.)
  i - j - e'- k - h'- l - m - n   feature-branch
 /                           /
a - b - c - d - e - f - g - h --------   master
Then, git reset --soft master. This will make it so that feature-branch points at master, but it will keep all of your changes in the index, ready to be committed.
  i - j - e'- k - h'- l - m - n   (orphaned)
 /                           /
a - b - c - d - e - f - g - h --------   master, feature-branch
                              \
                               (index)
git commit
  i - j - e'- k - h'- l - m - n   (orphaned)
 /                           /
a - b - c - d - e - f - g - h --------   master
                              \
                               n'  feature-branch
The only purpose of #2 and #3 is to destroy the history of feature-branch. If you are certain you will never ever need that history, that's fine. But it seems like a waste to go to all this extra trouble just to delete an accurate record of what actually happened.
| Squash | 30,197,694 | 13 |
When doing a normal merge, e.g. git merge <branch>, git creates a merge commit and updates the HEAD of the current branch to that commit.
When doing a squash merge, e.g. git merge <branch> --squash, however, it does not make a commit, saying (on a clean merge):
Squash commit -- not updating HEAD
Automatic merge went well; stopped before committing as requested
What's the rationale behind this? To me, this seems like an unexpected difference between a normal merge and a squash merge. It would be more intuitive to me if the only difference between a normal merge and a squash merge were that a squash merge squashes commits.
|
What's the rationale behind this?
The draft merge commit message will contain all your squashed messages from the other branch. Something like this:
Squashed commit of the following:
commit 2fb77b77f813501ae2c8159e7bf751c216572a57
Author: Your Name <[email protected]>
Date: Tue May 22 22:47:50 2018 +0200
Drop baz
commit 894f1ef07af29d25c4716dce9db4402032f854d4
Author: Your Name <[email protected]>
Date: Tue May 22 22:47:39 2018 +0200
Fix bar
commit 7f60998ab1949e9e8db9229f9ef9e7c0333cd04f
Author: Your Name <[email protected]>
Date: Tue May 22 22:47:19 2018 +0200
Add foo
Usually, you will want to customize that message before committing.
If you are happy with the default message you could do:
git merge <branch> --squash && git commit --no-edit
| Squash | 50,475,519 | 12 |
I have a lot of commits that I want to squash together into one commit. Of course I may replace pick with squash for every commit, but I have a hundreds commits.
Is there a way to do this automatically?
| If you have a sequence of commits
... - C1 - C2 - C3 - C4 - C5 <- HEAD
and you want to squash C2 to C5 into a single commit, you can reset your branch to C1 while keeping the state of your working directory and staging area, and then commit again:
git reset --soft C1
git commit
This will require you to re-enter a commit message. You can of course use git log before resetting and copy the parts of the commit messages you want to keep.
If you want to squash a feature branch into a single commit on top of the master branch, another option is to use the --squash option to git merge.
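A minimal sketch of that variant (branch names assumed):
git checkout master
git merge --squash my-feature-branch
git commit     # one single commit containing the whole branch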
| Squash | 28,218,410 | 11 |
I am using git subtree to organize my git repositories. Let's say I have a main repository called repo and a library called lib.
I successfully "imported" the lib repository by squashing its history. I would now like to contribute back to lib by squashing the history too. This does not seem to work: I specify the --squash option to git subtree push but when looking at the history I still send all the commits.
How to reproduce
Here is a script showing the minimal commands needed to reproduce the problem:
#!/bin/bash
rm -rf lib lib-work repo
# repo is the main repository
git init repo
# lib is the 'subtreed' repository (bare to accept pushes)
git init --bare lib
git clone lib lib-work
cd lib-work
# adding a bunch of commits to lib
echo "v1" > README
git add README
git commit -m 'lib commit 1'
echo "v2" > README
git add README
git commit -m 'lib commit 2'
echo "v3" > README
git add README
git commit -m 'lib commit 3'
git push origin master
cd ..
cd repo
# adding initial commit to have a valid HEAD
echo "v1" > README
git add README
git commit -m 'repo commit 1'
git remote add lib ../lib
git subtree add --prefix lib lib master --squash
echo "v4" > lib/README
git add lib/README
git commit -m 'repo commit 2'
echo "v5" > lib/README
git add lib/README
git commit -m 'repo commit 3'
echo "v6" > lib/README
git add lib/README
git commit -m 'repo commit 4'
#git log --all --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s%Creset' --abbrev-commit
# "not working" command :
git subtree push --prefix lib lib master --squash
# pretty print the history
git log --all --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s%Creset' --abbrev-commit
cd ../lib
echo
git log --all --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s%Creset' --abbrev-commit
git log showing the problem
The output of the two git log blabla commands are:
* b075d5e - (HEAD, master) repo commit 4
* ebdc7c7 - repo commit 3
* 9f1edab - repo commit 2
* 3d48bca - Merge commit '34e16a547819da7e228f3add35efe86197d2ddcb' as 'lib'
|\
| * 34e16a5 - Squashed 'lib/' content from commit 2643625
* 3f1490c - repo commit 1
* 1f86fe3 - (lib/master) repo commit 4
* 9f1639a - repo commit 3
* 8bd01bd - repo commit 2
* 2643625 - lib commit 3
* 3d64b8c - lib commit 2
* aba9fcb - lib commit 1
and :
* 1f86fe3 - (HEAD, master) repo commit 4
* 9f1639a - repo commit 3
* 8bd01bd - repo commit 2
* 2643625 - lib commit 3
* 3d64b8c - lib commit 2
* aba9fcb - lib commit 1
As you can see, lib sees the "repo commit 2,3,4" although I specified the squash option.
The other way around worked hence the Squashed 'lib/' content from commit f28bf8e.
I tried on windows with git version 1.8.1.msysgit.1 and on linux with git version 1.8.3.4.
So why doesn't the --squash option do a squash?
Side question
Why does lib/master appear in the log of the repo repository?
Knowing it appears only after the "failed" git push: if you uncomment the first git log blabla you get the following output, showing the squashed history but no sign of lib/master:
* b075d5e - (HEAD, master) repo commit 4
* ebdc7c7 - repo commit 3
* 9f1edab - repo commit 2
* 3d48bca - Merge commit '34e16a547819da7e228f3add35efe86197d2ddcb' as 'lib'
|\
| * 34e16a5 - Squashed 'lib/' content from commit 2643625
* 3f1490c - repo commit 1
| It is possible that this is an error in the documentation of the subtree command.
The manual in git states:
options for 'add', 'merge', 'pull' and 'push'
--squash merge subtree changes as a single commit
If you check the more extended documentation in the original subtree project you will notice that the --squash option is only explained for add and merge, as the functionality is described for the process of bringing content into your repository. Since pull is a modified form of merge, it is also implied that it can use --squash.
The listing of push in the manual is what does not make sense. The git subtree push subcommand is a combination of git subtree split and git push. This means that --squash should be an option also supported by split, but split is not in that manual list, and it is never stated in the documentation that it can use --squash.
The --squash option is indeed accepted by split and push without error, but after experimenting with it, it seems to make no difference, just as your example shows. My take is that it is there by mistake and is just ignored by the split and push commands.
| Squash | 20,102,594 | 10 |
Most CI services provide a way to shallow clone a repository. For example, on Travis:
git:
  depth: 1
or on AppVeyor:
clone_depth: 1
or
shallow_clone: true
This has the obvious benefit of speed, since you don't have to clone the whole repository.
Are there any disadvantages to shallow cloning on CI services? Is there any situation where a shallow clone would make a CI build fail? Otherwise, why isn't shallow cloning the default setting for these CI services?
| There are a couple of reasons why it doesn't usually happen.
Firstly, the hash of a shallow clone is going to be different from any version that you may have in the repository. As a result, it's not going to be possible to track a build that you've done to any particular result.
Secondly most Git servers have the ability to send the optimised 'everything.pack' if you have no details. Otherwise the server will have to provide a custom commit pack which contained just your shallow copy to send to you. So although there may be more data transmitted across the wire, it may actually result in more work on the server.
Finally quite a lot of CI builds will perform some kind of tag operation and upload it to the repository, and you can't practically tag a shallow clone (see point 1).
| Appveyor | 31,278,233 | 17 |
Is it possible to use AppVeyor as a Windows Qt continuous integration service?
| Qt is preinstalled on all configurations. See http://www.appveyor.com/docs/installed-software#qt
Here is an example script for appveyor.yml :
install:
- set QTDIR=C:\Qt\5.5\mingw492_32
- set PATH=%PATH%;%QTDIR%\bin;C:\MinGW\bin
build_script:
- qmake QtTest.pro
- mingw32-make
Supported compiler environments are mingw492_32, msvc2013 and msvc2013_64.
| Appveyor | 26,586,006 | 15 |
Follow up from this question, I'm currently setting up AppVeyor for my project (here) and my .NET Core tests are only shown in the console output but not in the Tests window.
This is the link for the AppVeyor project: ci.appveyor.com/project/Sergio0694/neuralnetwork-net
If some tests fail, the console correctly shows an error and the build is marked as failing, but the Tests window is empty anyways. Same goes for the badge from shields.io which shows 0 total tests, even if I can see many of them being executed from the console output.
Here's the console output:
And here's the Tests window:
Is there something else I have to setup in order for them to be reported correctly outside the console window?
| Please add https://www.nuget.org/packages/Appveyor.TestLogger to your test projects.
| Appveyor | 48,235,374 | 10 |
I have seen a lot of examples on the internet of chats using web sockets and RabbitMQ (https://github.com/videlalvaro/rabbitmq-chat), however I do not understand why it is need it a message queue for a chat application.
Why is it not OK to send the message from the browser via web sockets to the server and then have the server broadcast that message to the rest of the active browsers, again using web sockets with a broadcast method? (Maybe I am missing something.)
Pseudo code examples (using socket.io):
// client (browser)
socket.emit("message", "my great message that will be received by all");

// server (can be any server, but let's just say that it is also written in JavaScript)
socket.on("message", function(msg) {
  socket.broadcast.emit("message", msg);
});

// the rest of the browsers
socket.on("message", function(msg) {
  // display the message on the screen
});
| i don't think RabbitMQ should be used for a chat room, personally. at least, not in the "chat" or "room" part of the application.
unless your chat rooms don't care about history at all - and i think most do care about that - a message queue like RMQ doesn't make much sense.
you would be better off storing the message in a database and keeping a marker for each user to say what message they last saw.
now, you may end up needing something like RMQ to facilitate the process of the chat application. you can offload process from the web servers, for example, and push all messages through RMQ to a back-end service that updates the database and cache layers, for example.
this would allow you to scale the front-end web servers much faster, and support more users per web server. and that sounds like a good use of RMQ, but is not specific to chat apps. it's just good practice for scaling web apps / systems.
the key, in my experience, is that RMQ is not responsible for delivery of the messages to the users / chat rooms. that happens through websockets or similar technologies that are designed to be used per user.
| RabbitMQ | 39,122,247 | 17 |
I am using a RabbitMQ producer to send long running tasks (30 mins+) to a consumer. The problem is that the consumer is still working on a task when the connection to the server is closed and the unacknowledged task is requeued.
From researching I understand that either a heartbeat or an increased connection timeout can be used to solve this. Both these solutions raise errors when attempting them. In reading answers to similar posts I've also learned that many changes have been implemented to RabbitMQ since the answers were posted (e.g. the default heartbeat timeout has changed to 60 from 580 prior to RabbitMQ 3.5.5).
When specifying a heartbeat and blocked connection timeout:
credentials = pika.PlainCredentials('user', 'password')
parameters = pika.ConnectionParameters('XXX.XXX.XXX.XXX', port, '/', credentials, blocked_connection_timeout=2000)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
The following error is displayed:
TypeError: __init__() got an unexpected keyword argument 'blocked_connection_timeout'
When specifying heartbeat_interval=1000 in the connection parameters a similar error is shown: TypeError: __init__() got an unexpected keyword argument 'heartbeat_interval'
And similarly for socket_timeout = 1000 the following error is displayed: TypeError: __init__() got an unexpected keyword argument 'socket_timeout'
I am running RabbitMQ 3.6.1, pika 0.10.0 and python 2.7 on Ubuntu 14.04.
Why are the above approaches producing errors?
Can a heartbeat approach be used where there is a long running continuous task? For example can heartbeats be used when performing large database joins which take 30+ mins? I am in favour of the heartbeat approach as many times it is difficult to judge how long a task such as database join will take.
I've read through answers to similar questions
Update: running code from the pika documentation produces the same error.
| I've run into the same problem with my systems, that you are seeing, with dropped connection during very long tasks.
It's possible the heartbeat might help keep your connection alive, if your network setup is such that idle TCP/IP connections are forcefully dropped. If that's not the case, though, changing the heartbeat won't help.
Changing the connection timeout won't help at all. This setting is only used when initially creating the connection.
I am using a RabbitMQ producer to send long running tasks (30 mins+) to a consumer. The problem is that the consumer is still working on a task when the connection to the server is closed and the unacknowledged task is requeued.
there are two reasons for this, both of which you have run into already:
Connections drop randomly, even under the best of circumstances
Re-starting a process because of a re-queued message can cause problems
Having deployed RabbitMQ code with tasks that range from less than a second, out to several hours in time, I found that acknowledging the message immediately and updating the system with status messages works best for very long tasks, like this.
You will need to have a system of record (probably with a database) that keeps track of the status of a given job.
When the consumer picks up a message and starts the process, it should acknowledge the message right away and send a "started" status message to the system of record.
As the process completes, send another message to say it's done.
This won't solve the dropped connection problem, but nothing will 100% solve that anyways. Instead, it will prevent the message re-queueing problem from happening when a connection is dropped.
This solution does introduce another problem, though: when the long running process crashes, how do you resume the work?
The basic answer is to use the system of record (your database) status for the job to tell you that you need to pick up that work again. When the app starts, check the database to see if there is work that is unfinished. If there is, resume or restart that work in whatever manner is appropriate.
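A minimal sketch of that pattern with the pika 0.10-style callbacks used in the question (the status-tracking helpers are hypothetical stand-ins for your own system of record):
import pika

def on_message(channel, method, properties, body):
    # ack immediately so a later connection drop can no longer requeue the task
    channel.basic_ack(delivery_tag=method.delivery_tag)
    mark_started(body)     # hypothetical: write "started" status to the database
    run_long_task(body)    # the 30+ minute job
    mark_finished(body)    # hypothetical: write "done" status to the database

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(on_message, queue='tasks')
channel.start_consuming()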
| RabbitMQ | 36,123,006 | 17 |
I am starting to use celery by following this "First Steps with Celery".
I exactly used the tasks.py indicated on that link.
However when I ran the task using,
celery -A tasks worker --loglevel=info
I am getting this error:
[2014-09-16 20:52:57,427: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: Socket closed. Trying again in 2.00 seconds...
The rabbitmq server is for sure running and below is the snippet of the log regarding the error:
=ERROR REPORT==== 16-Sep-2014::20:53:09 ===
exception on TCP connection <0.235.0> from 127.0.0.1:58162
{channel0_error,starting,
{amqp_error,access_refused,
"AMQPLAIN login refused: user 'guest' - invalid credentials",
'connection.start_ok'}}
=INFO REPORT==== 16-Sep-2014::20:53:09 ===
closing TCP connection <0.235.0> from 127.0.0.1:58162
=INFO REPORT==== 16-Sep-2014::20:53:15 ===
accepted TCP connection on [::]:5672 from 127.0.0.1:58163
=INFO REPORT==== 16-Sep-2014::20:53:15 ===
starting TCP connection <0.239.0> from 127.0.0.1:58163
=ERROR REPORT==== 16-Sep-2014::20:53:18 ===
exception on TCP connection <0.239.0> from 127.0.0.1:58163
{channel0_error,starting,
{amqp_error,access_refused,
"AMQPLAIN login refused: user 'guest' - invalid credentials",
'connection.start_ok'}}
=INFO REPORT==== 16-Sep-2014::20:53:18 ===
closing TCP connection <0.239.0> from 127.0.0.1:58163
With this, I did the following to ensure that the 'guest' user has permissions to / vhost:
sudo rabbitmqctl set_permissions -p / guest ".*" ".*" ".*"
And then I reloaded/restarted rabbitmq service to make sure the changes will take effect,
then ran the task again. However, the error is still the same.
I even tried creating a different vhost (jm-vhost) and user (jm-user1) and set the permission again to allow all:
sudo rabbitmqctl add_vhost jm-vhost
sudo rabbitmqctl add_user jm-user1 "" --> "" to make it passwordless (is this correct?)
sudo rabbitmqctl set_permissions -p /jm-vhost jm-user1 ".*" ".*" ".*"
And then modified tasks.py to this:
app = Celery('tasks', broker='amqp://jm-user1@localhost//jm-vhost')
But when I started the tasks, still, I get the same error.
How should I resolve this? Thanks in advance!
| I was able to resolve this (for those who have and will have the same issue) by doing the following.
I recreated the user I mentioned on my question, but this time with a password. Like this:
sudo rabbitmqctl add_user jm-user1 sample
Then I set the permissions again with this:
sudo rabbitmqctl set_permissions -p jm-vhost jm-user1 ".*" ".*" ".*"
Restarted rabbitmq server to make sure the changes take effect and made modifications to tasks.py:
app = Celery('tasks', broker='amqp://jm-user1:sample@localhost/jm-vhost')
When I ran,
celery -A tasks worker --loglevel=info
it worked :).
Hopefully, this will be of help to others.
Thanks guys!
| RabbitMQ | 25,869,858 | 17 |
I am running RabbitMQ v3.3.5 with Erlang OTP 17.1 on Windows 2008 R2. My Dev and QA environments are stand-alone. My staging and production environments are clustered.
I am finding this one problem happening often where the RabbitMQ service is running, the RabbitMQ management console is seeing everything, but when I try running rabbitmqctl from the command line it fails with an error saying that the node is down (tried locally and on a remote server).
This problem is resolved if I restart the Windows service.
I see no error message in the RabbitMQ error log. The last message indicated that the node was up.
Below is an example output of the issue that I recently experienced on node 2 of our staging windows cluster:
PS C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.3.5\sbin> .\rabbitmqctl.bat status
Status of node rabbit@MYSERVER2 ...
Error: unable to connect to node rabbit@MYSERVER2: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@MYSERVER2]
rabbit@MYSERVER2:
* connected to epmd (port 4369) on MYSERVER2
* epmd reports: node 'rabbit' not running at all
no other nodes on MYSERVER2
* suggestion: start the node
current node details:
- node name: rabbitmqctl2199771@MYSERVER2
- home dir: C:\Users\RabbitMQ
- cookie hash: mn6OaTX9mS4DnZaiOzg8pA==
at this point I restart the RabbitMQ service and then try again
PS C:\Program Files (x86)\RabbitMQ Server\rabbitmq_server-3.3.5\sbin> .\rabbitmqctl.bat status
Status of node rabbit@MYSERVER2...
[{pid,3784},
{running_applications,
[{rabbitmq_management_agent,"RabbitMQ Management Agent","3.3.5"},
{rabbit,"RabbitMQ","3.3.5"},
{os_mon,"CPO CXC 138 46","2.2.15"},
{mnesia,"MNESIA CXC 138 12","4.12.1"},
{xmerl,"XML parser","1.3.7"},
{sasl,"SASL CXC 138 11","2.4"},
{stdlib,"ERTS CXC 138 10","2.1"},
{kernel,"ERTS CXC 138 10","3.0.1"}]},
{os,{win32,nt}},
{erlang_version,
"Erlang/OTP 17 [erts-6.1] [64-bit] [smp:4:4] [async-threads:30]\n"},
{memory,
[{total,35960208},
{connection_procs,2704},
{queue_procs,5408},
{plugins,111936},
{other_proc,13695792},
{mnesia,102296},
{mgmt_db,0},
{msg_index,21816},
{other_ets,884704},
{binary,25776},
{code,16672826},
{atom,602729},
{other_system,3834221}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"},{amqp,5672,"0.0.0.0"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,3435787059},
{disk_free_limit,50000000},
{disk_free,74911649792},
{file_descriptors,
[{total_limit,8092},
{total_used,4},
{sockets_limit,7280},
{sockets_used,2}]},
{processes,[{limit,1048576},{used,139}]},
{run_queue,0},
{uptime,5}]
...done.
Any idea as to what causes this and how to automatically detect the situation?
Is this specifically a problem with running RabbitMQ on Windows?
| Hostnames are case-insensitive when you are trying to resolve them. For example, LOCALHOST and localhost are the same host.
However, when Erlang constructs the name of a node (eg. rabbit@<hostname> in the case of RabbitMQ), this name is case-sensitive. So rabbit@LOCALHOST and rabbit@localhost are two different node names, even if they run on the same host.
Recently, we (the RabbitMQ team) found out that, on Windows, the node name constructed for RabbitMQ was inconsistent. Therefore, sometimes, RabbitMQ started as a Windows service could be named rabbit@MYHOST but rabbitmqctl would try to reach rabbit@myhost and fail.
Since RabbitMQ 3.6.0, the node name should be consistent.
| RabbitMQ | 25,409,626 | 17 |
I've installed rabbitmq and it's running.
I've successfully add_user as well as add_vhost. But in the next step of the documentation it says to set_permissions and I'm failing.
I get Error: could not recognise command when I enter the following:
$ sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
(this is copy and pasted verbatim from the documentation so it seems a bit ridiculous that it doesn't work.. And 'recognise' being misspelled in the error msg isn't helping)
My question is what does ".*" ".*" ".*" mean/stand for?
| From the documentation
set_permissions [-p vhostpath] {user} {conf} {write} {read}
vhostpath - The name of the virtual host to which to grant the user
access, defaulting to /.
user - The name of the user to grant access to the specified virtual
host.
conf - A regular expression matching resource names for which the
user is granted configure permissions.
write - A regular expression matching resource names for which the
user is granted write permissions.
read - A regular expression matching
resource names for which the user is granted read permissions.
To answer your question specifically, ".*" ".*" ".*" is a set of three regular expressions (applying to configure, write, and read) which will match all the permissions available (. will match any character, * will match any number of the preceding character)
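For example, to grant a user full rights only on resources whose names start with myapp- (the vhost and user names here are placeholders):
sudo rabbitmqctl set_permissions -p myvhost myuser "^myapp-.*" "^myapp-.*" "^myapp-.*"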
| RabbitMQ | 24,639,448 | 17 |
My container XML config:
<rabbit:listener-container
connection-factory="myConnectionFactory"
acknowledge="none"
concurrency="10"
requeue-rejected="false">
<rabbit:listener ref="myListener" queues="myQueue"/>
</rabbit:listener-container>
and myListener is just a class
@Component("myListener")
public class MyListener implements MessageListener {
@Autowired
SomeDependency dependency;
....
}
I've specified concurrency="10" in my XML. What does this mean exactly?
I've found some docs. They are not that helpful stating:
Specify the number of concurrent consumers to create. Default is 1.
What I am interested in is whether MyListener has to be thread safe i.e.
are there many instances created or single instance used by many threads?
can I access instance fields w/o synchronization?
is SomeDependency dependency instantiated once or for each thread/instance?
does dependency need to be thread safe?
| Yes, to use concurrency, your listener has to be thread-safe. There is one listener instance per container. However, the <rabbit:listener-container/> namespace element is actually just a convenience for adding "shared" attributes; each listener element gets its own container.
It's generally best to use stateless objects (no fields that are written to), but that's not always possible.
If your listener is not thread-safe, you can use...
<rabbit:listener-container
connection-factory="myConnectionFactory"
acknowledge="none"
requeue-rejected="false">
<rabbit:listener ref="myListener" queues="myQueue"/>
<rabbit:listener ref="myListener" queues="myQueue"/>
<rabbit:listener ref="myListener" queues="myQueue"/>
<rabbit:listener ref="myListener" queues="myQueue"/>
...
</rabbit:listener-container>
...and add @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE). You will then get a container for each listener and a different instance of the listener will be injected into each.
You will also need prototype scope for any non-thread-safe dependencies injected into the listener.
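A sketch of what that looks like on the listener class (prototype scope, so each container gets its own instance):
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component("myListener")
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class MyListener implements MessageListener {

    @Autowired
    SomeDependency dependency; // must also be prototype-scoped if it is not thread-safe

    @Override
    public void onMessage(Message message) {
        // instance fields are safe here: no other container shares this instance
    }
}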
| RabbitMQ | 23,341,811 | 17 |
I've used docker to start my rabbitmqserver. How can I use rabbitmqctl to connect to the rabbitmqserver in the docker container?
Port 5672 has been exposed and map to the 5672 port of my host. But I still get the following error:
Status of node rabbit@m2 ...
Error: unable to connect to node rabbit@m2: nodedown
| Assuming your container is called rabbitmq and is running:
docker exec rabbitmq rabbitmqctl start_app
| RabbitMQ | 20,345,658 | 17 |
ConnectionFactory factory = new ConnectionFactory {HostName = "localhost"};
using (IConnection connection = factory.CreateConnection())
using (IModel channel = connection.CreateModel())
{
channel.QueueDeclare("hello", false, false, false, null);
for (int i = 0; i < 100000; i++)
{
MemoryStream stream = new MemoryStream();
var user = new User
{
Id = i
};
Serializer.Serialize(stream, user);
channel.BasicPublish("", "hello", null, stream.ToArray());
}
}
I have the code above, and I'm curious about thread safety.
I am not sure, but I would imagine ConnectionFactory is thread safe. But is IConnection thread safe? Should I create a connection per request? Or rather a single persistent connection? And what about channel (IModel)?
Also, should I store the connection as ThreadLocal? Or should I create a connection per request?
| IConnection is thread safe, IModel is not. Generally you should endeavour to keep a connection open for the lifetime of your application. This is especially true if you have consumers which need an open connection in order to receive messages. It's important to detect and recover from interrupted connections, either because of network or Broker failure. I'd recommend reading 'RabbitMQ in Action' by Videla and Williams, especially chapter 6 'Writing code that survives failure'.
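A rough sketch of that guidance applied to the question's code (one long-lived connection for the whole application, and a channel created per thread because IModel must not be shared):
using RabbitMQ.Client;
using System.Text;

// created once, e.g. at application startup, and kept for the lifetime of the app
ConnectionFactory factory = new ConnectionFactory { HostName = "localhost" };
IConnection connection = factory.CreateConnection();

// on each worker thread: create a channel owned exclusively by that thread
using (IModel channel = connection.CreateModel())
{
    channel.QueueDeclare("hello", false, false, false, null);
    byte[] payload = Encoding.UTF8.GetBytes("my message");
    channel.BasicPublish("", "hello", null, payload);
}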
Now for a shameless plug. I'm the author of EasyNetQ, a high-level .NET API for RabbitMQ. It does all the connection management for you and will automatically re-connect and rebuild all your subscribers if there's a network or broker outage. It also provides cluster and fail-over support out of the box. Give it a try.
| RabbitMQ | 12,024,241 | 17 |
I'm using Pika to process data from RabbitMQ.
As I seemed to run into different kind of problems I decided to write a small test application to see how I can handle disconnects.
I wrote this test app which does following:
Connect to Broker, retry until successful
When connected create a queue.
Consume this queue and put result into a python Queue.Queue(0)
Get item from Queue.Queue(0) and produce it back into the broker queue.
What I noticed were 2 issues:
When I run my script from one host connecting to rabbitmq on another host (inside a vm) then this scripts exits on random moments without producing an error.
When I run my script on the same host on which RabbitMQ is installed it runs fine and keeps running.
This might be explained by network issues or dropped packets, although I find the connection not really robust.
When the script runs locally on the RabbitMQ server and I kill the RabbitMQ then the script exits with error: "ERROR pika SelectConnection: Socket Error on 3: 104"
So it looks like I can't get the reconnection strategy working as it should. Could someone have a look at the code to see what I'm doing wrong?
Thanks,
Jay
#!/bin/python
import logging
import threading
import Queue
import pika
from pika.reconnection_strategies import SimpleReconnectionStrategy
from pika.adapters import SelectConnection
import time
from threading import Lock


class Broker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.logging = logging.getLogger(__name__)
        self.to_broker = Queue.Queue(0)
        self.from_broker = Queue.Queue(0)
        self.parameters = pika.ConnectionParameters(host='sandbox', heartbeat=True)
        self.srs = SimpleReconnectionStrategy()
        self.properties = pika.BasicProperties(delivery_mode=2)
        self.connection = None
        while True:
            try:
                self.connection = SelectConnection(self.parameters, self.on_connected, reconnection_strategy=self.srs)
                break
            except Exception as err:
                self.logging.warning('Cant connect. Reason: %s' % err)
                time.sleep(1)
        self.daemon = True

    def run(self):
        while True:
            self.submitData(self.from_broker.get(block=True))
        pass

    def on_connected(self, connection):
        connection.channel(self.on_channel_open)

    def on_channel_open(self, new_channel):
        self.channel = new_channel
        self.channel.queue_declare(queue='sandbox', durable=True)
        self.channel.basic_consume(self.processData, queue='sandbox')

    def processData(self, ch, method, properties, body):
        self.logging.info('Received data from broker')
        self.channel.basic_ack(delivery_tag=method.delivery_tag)
        self.from_broker.put(body)

    def submitData(self, data):
        self.logging.info('Submitting data to broker.')
        self.channel.basic_publish(exchange='',
                                   routing_key='sandbox',
                                   body=data,
                                   properties=self.properties)


if __name__ == '__main__':
    format = ('%(asctime)s %(levelname)s %(name)s %(message)s')
    logging.basicConfig(level=logging.DEBUG, format=format)

    broker = Broker()
    broker.start()
    try:
        broker.connection.ioloop.start()
    except Exception as err:
        print err
| The main problem with your script is that it is interacting with a single channel from both your main thread (where the ioloop is running) and the "Broker" thread (calls submitData in a loop). This is not safe.
Also, SimpleReconnectionStrategy does not seem to do anything useful. It does not cause a reconnect if the connection is interrupted. I believe this is a bug in Pika: https://github.com/pika/pika/issues/120
I attempted to refactor your code to make it work as I think you wanted it to, but ran into another problem. Pika does not appear to have a way to detect delivery failure, which means that data may be lost if the connection drops. This seems like such an obvious requirement! How can there be no way to detect that basic_publish failed? I tried all kinds of stuff including transactions and add_on_return_callback (all of which seemed clunky and overly complicated), but came up with nothing. If there truly is no way then pika only seems to be useful in situations that can tolerate loss of data sent to RabbitMQ, or in programs that only need to consume from RabbitMQ.
This is not reliable, but for reference, here's some code that solves your multi-thread problem:
import logging
import pika
import Queue
import sys
import threading
import time
from functools import partial
from pika.adapters import SelectConnection, BlockingConnection
from pika.exceptions import AMQPConnectionError
from pika.reconnection_strategies import SimpleReconnectionStrategy

log = logging.getLogger(__name__)

DEFAULT_PROPERTIES = pika.BasicProperties(delivery_mode=2)


class Broker(object):

    def __init__(self, parameters, on_channel_open, name='broker'):
        self.parameters = parameters
        self.on_channel_open = on_channel_open
        self.name = name

    def connect(self, forever=False):
        name = self.name
        while True:
            try:
                connection = SelectConnection(
                    self.parameters, self.on_connected)
                log.debug('%s connected', name)
            except Exception:
                if not forever:
                    raise
                log.warning('%s cannot connect', name, exc_info=True)
                time.sleep(10)
                continue
            try:
                connection.ioloop.start()
            finally:
                try:
                    connection.close()
                    connection.ioloop.start()  # allow connection to close
                except Exception:
                    pass
            if not forever:
                break

    def on_connected(self, connection):
        connection.channel(self.on_channel_open)


def setup_submitter(channel, data_queue, properties=DEFAULT_PROPERTIES):
    def on_queue_declared(frame):
        # PROBLEM pika does not appear to have a way to detect delivery
        # failure, which means that data could be lost if the connection
        # drops...
        channel.confirm_delivery(on_delivered)
        submit_data()

    def on_delivered(frame):
        if frame.method.NAME in ['Confirm.SelectOk', 'Basic.Ack']:
            log.info('submission confirmed %r', frame)
            # increasing this value seems to cause a higher failure rate
            time.sleep(0)
            submit_data()
        else:
            log.warn('submission failed: %r', frame)
            #data_queue.put(...)

    def submit_data():
        log.info('waiting on data queue')
        data = data_queue.get()
        log.info('got data to submit')
        channel.basic_publish(exchange='',
                              routing_key='sandbox',
                              body=data,
                              properties=properties,
                              mandatory=True)
        log.info('submitted data to broker')

    channel.queue_declare(
        queue='sandbox', durable=True, callback=on_queue_declared)


def blocking_submitter(parameters, data_queue,
                       properties=DEFAULT_PROPERTIES):
    while True:
        try:
            connection = BlockingConnection(parameters)
            channel = connection.channel()
            channel.queue_declare(queue='sandbox', durable=True)
        except Exception:
            log.error('connection failure', exc_info=True)
            time.sleep(1)
            continue
        while True:
            log.info('waiting on data queue')
            try:
                data = data_queue.get(timeout=1)
            except Queue.Empty:
                try:
                    connection.process_data_events()
                except AMQPConnectionError:
                    break
                continue
            log.info('got data to submit')
            try:
                channel.basic_publish(exchange='',
                                      routing_key='sandbox',
                                      body=data,
                                      properties=properties,
                                      mandatory=True)
            except Exception:
                log.error('submission failed', exc_info=True)
                data_queue.put(data)
                break
            log.info('submitted data to broker')


def setup_receiver(channel, data_queue):
    def process_data(channel, method, properties, body):
        log.info('received data from broker')
        data_queue.put(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    def on_queue_declared(frame):
        channel.basic_consume(process_data, queue='sandbox')

    channel.queue_declare(
        queue='sandbox', durable=True, callback=on_queue_declared)


if __name__ == '__main__':
    if len(sys.argv) != 2:
        print 'usage: %s RABBITMQ_HOST' % sys.argv[0]
        sys.exit()

    format = ('%(asctime)s %(levelname)s %(name)s %(message)s')
    logging.basicConfig(level=logging.DEBUG, format=format)

    host = sys.argv[1]
    log.info('connecting to host: %s', host)
    parameters = pika.ConnectionParameters(host=host, heartbeat=True)
    data_queue = Queue.Queue(0)
    data_queue.put('message')  # prime the pump

    # run submitter in a thread
    setup = partial(setup_submitter, data_queue=data_queue)
    broker = Broker(parameters, setup, 'submitter')
    thread = threading.Thread(target=
        partial(broker.connect, forever=True))

    # uncomment these lines to use the blocking variant of the submitter
    #thread = threading.Thread(target=
    #    partial(blocking_submitter, parameters, data_queue))

    thread.daemon = True
    thread.start()

    # run receiver in main thread
    setup = partial(setup_receiver, data_queue=data_queue)
    broker = Broker(parameters, setup, 'receiver')
    broker.connect(forever=True)
| RabbitMQ | 9,508,246 | 17 |
I'am creating a microservice in NestJS. Now I want to use RabbitMQ to send messages to another service.
My question is: is it possible to import the RabbitmqModule based on a .env variable? Such as:
USE_BROKER=false. If this variable is false, then don't import the module?
RabbitMQ is imported in the GraphQLModule below.
@Module({
imports: [
GraphQLFederationModule.forRoot({
autoSchemaFile: true,
context: ({ req }) => ({ req }),
}),
DatabaseModule,
AuthModule,
RabbitmqModule,
],
providers: [UserResolver, FamilyResolver, AuthResolver],
})
export class GraphQLModule {}
RabbitmqModule:
import { Global, Module } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { RabbitMQModule } from '@golevelup/nestjs-rabbitmq';
import { UserProducer } from './producers/user.producer';
@Global()
@Module({
imports: [
RabbitMQModule.forRootAsync(RabbitMQModule, {
useFactory: async (config: ConfigService) => ({
exchanges: [
{
name: config.get('rabbitMQ.exchange'),
type: config.get('rabbitMQ.exchangeType'),
},
],
uri: config.get('rabbitMQ.url'),
connectionInitOptions: { wait: false },
}),
inject: [ConfigService],
}),
],
providers: [UserProducer],
exports: [UserProducer],
})
export class RabbitmqModule {}
| I think the recommended way to do so is to use the DynamicModule feature from NestJS.
It is explained here: https://docs.nestjs.com/fundamentals/dynamic-modules
Simply check your environment variable in the register function and return your Module object.
Something like:
@Module({})
export class GraphQLModule {
static register(): DynamicModule {
const imports = [
GraphQLFederationModule.forRoot({
autoSchemaFile: true,
context: ({ req }) => ({ req }),
}),
DatabaseModule,
AuthModule]
if (process.env.USE_BROKER) {
imports.push(RabbitmqModule)
}
return {
imports,
providers: [UserResolver, FamilyResolver, AuthResolver],
};
}
}
| RabbitMQ | 65,355,892 | 16 |
I need some help.
I'm developing a Spring Boot application, and I want to publish messages to RabbitMQ.
I only found examples that use a "static" queue.
I have reserched some things but didn't find anything.
I'm new to RabbitMQ and learned the basic concepts.
I'm also fairly new to spring.
RabbitMQ Config
@Configuration
public class RabbitMQConfig {
@Value("amq.direct")
String exchange;
@Value("queue-name") // Don't want to do this
String queueName;
@Value("routing-key") // Or this
String routingkey;
@Bean
Queue queue() {
return new Queue(queueName, true);
}
@Bean
DirectExchange exchange() {
return new DirectExchange(exchange);
}
@Bean
Binding binding(Queue queue, DirectExchange exchange) {
return BindingBuilder.bind(queue).to(exchange).with(routingkey);
}
@Bean
public MessageConverter jsonMessageConverter() {
return new Jackson2JsonMessageConverter();
}
@Bean
public AmqpTemplate template(ConnectionFactory connectionFactory) {
final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setMessageConverter(jsonMessageConverter());
return rabbitTemplate;
}
}
MessageSender
@Service
public class RabbitMQSender {
@Autowired
private AmqpTemplate template;
@Value("amq.direct")
private String exchange;
public void send(MessageDTO message) {
template.convertAndSend(exchange, message);
}
}
| I came to a solution:
You need to create a AmqpAdmin in your config:
@Bean
public AmqpAdmin amqpAdmin() {
return new RabbitAdmin(connectionFactory);
}
Then you add it to your service:
@Autowired
private AmqpAdmin admin;
Finally you can use it to create queues and bindings.
Queue queue = new Queue(queueName, durable, false, false);
Binding binding = new Binding(queueName, Binding.DestinationType.QUEUE, EXCHANGE, routingKey, null);
admin.declareQueue(queue);
admin.declareBinding(binding);
I found the solution here
| RabbitMQ | 57,870,894 | 16 |
I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.
Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.
I was observing some performance bottlenecks and decided to switch to a message queue system. The idea is that now, once the user hits the URL, a request is published on internal message queue waiting for it to be consumed. This consumer will process and publish back on a queue and this will happen quite a few times until finally, the same node servicing the user will receive back the processed response to be delivered to the user.
An asynchronous "fire-and-forget" pattern is now being used. But my question is, how can the node servicing a particular person remember who it was servicing once the processed result arrives back and without blocking (i.e. it can handle several requests until the response is received)? If it makes any difference, my stack looks a little like this: TomCat, Spring, Kubernetes and RabbitMQ.
In summary, how can the request node (whose job is to push items on the queue) maintain an open connection with the client who requested a JSON response (i.e. client is waiting for JSON response) and receive back the data of the correct client?
| You have few different scenarios according to how much control you have on the client.
If the client behaviour cannot be changed, you will have to keep the session open until the request has not been fully processed. This can be achieved employing a pool of workers (futures/coroutines, threads or processes) where each worker keeps the session open for a given request.
This method has few drawbacks and I would keep it as last resort. Firstly, you will only be able to serve a limited amount of concurrent requests proportional to your pool size. Lastly as your processing is behind a queue, your front-end won't be able to estimate how long it will take for a task to complete. This means you will have to deal with long lasting sessions which are prone to fail (what if the user gives up?).
If the client behaviour can be changed, the most common approach is to use a fully asynchronous flow. When the client initiates a request, it is placed within the queue and a Task Identifier is returned. The client can use the given TaskId to poll for status updates. Each time the client requests updates about a task you simply check if it was completed and you respond accordingly. A common pattern when a task is still in progress is to let the front-end return to the client the estimated amount of time before trying again. This allows your server to control how frequently clients are polling. If your architecture supports it, you can go the extra mile and provide information about the progress as well.
Example response when task is in progress:
{"status": "in_progress",
"retry_after_seconds": 30,
"progress": "30%"}
A more complex yet elegant solution would consist in using HTTP callbacks. In short, when the client makes a request for a new task it provides a tuple (URL, Method) the server can use to signal the processing is done. It then waits for the server to send the signal to the given URL. You can see a better explanation here. In most of the cases this solution is overkill. Yet I think it's worth to mention it.
| RabbitMQ | 53,525,239 | 16 |
In the docs it refers to it as the command line tool but that's clt not ctl.
| RabbitMQ has a bunch of command line tools, one of which is RabbitMQCtl.
The ctl part stands for control. You use it to control RabbitMQ for general administrative/operator tasks.
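For example, typical operator tasks run through it look like this (sketch; user name and password are placeholders):
rabbitmqctl status
rabbitmqctl list_queues
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"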
| RabbitMQ | 52,807,913 | 16 |
I have a RabbitMQ C# Client running in a WCF service.
It catches System.NotSupportedException: Pipelining of requests forbidden exception now and then.
| According to the guide, you need to lock the channel (or otherwise avoid sharing it) when using it from multiple threads.
As a rule of thumb, IModel instances should not be used by more than one thread simultaneously: application code should maintain a clear notion of thread ownership for IModel instances.
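The question is about the .NET client, but the principle is the same for any client library: give each thread its own channel (and ideally its own connection) instead of letting threads interleave requests on a shared IModel. A rough sketch of the idea, shown with Python's pika purely for illustration (the queue name is made up):

import threading
import pika

def worker(worker_id):
    # each thread owns its own connection and channel; nothing is shared between threads
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="demo", durable=True)
    channel.basic_publish(exchange="", routing_key="demo",
                          body="hello from worker %d" % worker_id)
    connection.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

If you must share a single channel, serialize access to it with a lock so that two threads never issue requests on it at the same time.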
| RabbitMQ | 47,152,466 | 16 |
I want to test an RPC nameko server with gitlab-ci.yml.
I can't manage to make RabbitMQ work inside .gitlab-ci.yml:
image: python:latest
before_script:
- apt-get update -yq
- apt-get install -y python-dev python-pip tree
- curl -I http://guest:guest@rabbitmq:8080/api/overview
mytest:
artifacts:
paths:
- dist
script:
- pip install -r requirements.txt
- pip install .
- pytest --amqp-uri=amqp://guest:guest@rabbitmq:5672 --rabbit-ctl-uri=http://guest:guest@rabbitmq:15672 tests
# - python setup.py test
- python setup.py bdist_wheel
look:
stage: deploy
script:
- ls -lah dist
services:
- rabbitmq:3-management
RabbitMQ starts correctly:
2017-04-13T18:19:23.436309219Z
2017-04-13T18:19:23.436409026Z RabbitMQ 3.6.9. Copyright (C) 2007-2016 Pivotal Software, Inc.
2017-04-13T18:19:23.436432568Z ## ## Licensed under the MPL. See http://www.rabbitmq.com/
2017-04-13T18:19:23.436451431Z ## ##
2017-04-13T18:19:23.436468542Z ########## Logs: tty
2017-04-13T18:19:23.436485607Z ###### ## tty
2017-04-13T18:19:23.436501886Z ##########
2017-04-13T18:19:23.436519036Z Starting broker...
2017-04-13T18:19:23.440790736Z
2017-04-13T18:19:23.440809836Z =INFO REPORT==== 13-Apr-2017::18:19:23 ===
2017-04-13T18:19:23.440819014Z Starting RabbitMQ 3.6.9 on Erlang 19.3
2017-04-13T18:19:23.440827601Z Copyright (C) 2007-2016 Pivotal Software, Inc.
2017-04-13T18:19:23.440835737Z Licensed under the MPL. See http://www.rabbitmq.com/
2017-04-13T18:19:23.443408721Z
2017-04-13T18:19:23.443429311Z =INFO REPORT==== 13-Apr-2017::18:19:23 ===
2017-04-13T18:19:23.443439837Z node : rabbit@ea1a207b738e
2017-04-13T18:19:23.443449307Z home dir : /var/lib/rabbitmq
2017-04-13T18:19:23.443460663Z config file(s) : /etc/rabbitmq/rabbitmq.config
2017-04-13T18:19:23.443470393Z cookie hash : h6vFB5LezZ4GR1nGuQOVSg==
2017-04-13T18:19:23.443480053Z log : tty
2017-04-13T18:19:23.443489256Z sasl log : tty
2017-04-13T18:19:23.443498676Z database dir : /var/lib/rabbitmq/mnesia/rabbit@ea1a207b738e
2017-04-13T18:19:27.717290199Z
2017-04-13T18:19:27.717345348Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.717355143Z Memory limit set to 3202MB of 8005MB total.
2017-04-13T18:19:27.726821043Z
2017-04-13T18:19:27.726841925Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.726850927Z Disk free limit set to 50MB
2017-04-13T18:19:27.732864417Z
2017-04-13T18:19:27.732882507Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.732891347Z Limiting to approx 1048476 file handles (943626 sockets)
2017-04-13T18:19:27.733030868Z
2017-04-13T18:19:27.733041770Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.733049763Z FHC read buffering: OFF
2017-04-13T18:19:27.733126168Z FHC write buffering: ON
2017-04-13T18:19:27.793026622Z
2017-04-13T18:19:27.793043832Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.793052900Z Database directory at /var/lib/rabbitmq/mnesia/rabbit@ea1a207b738e is empty. Initialising from scratch...
2017-04-13T18:19:27.800414211Z
2017-04-13T18:19:27.800429311Z =INFO REPORT==== 13-Apr-2017::18:19:27 ===
2017-04-13T18:19:27.800438013Z application: mnesia
2017-04-13T18:19:27.800464988Z exited: stopped
2017-04-13T18:19:27.800473228Z type: temporary
2017-04-13T18:19:28.129404329Z
2017-04-13T18:19:28.129482072Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.129491680Z Waiting for Mnesia tables for 30000 ms, 9 retries left
2017-04-13T18:19:28.153509130Z
2017-04-13T18:19:28.153526528Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.153535638Z Waiting for Mnesia tables for 30000 ms, 9 retries left
2017-04-13T18:19:28.193558406Z
2017-04-13T18:19:28.193600316Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.193611144Z Waiting for Mnesia tables for 30000 ms, 9 retries left
2017-04-13T18:19:28.194448672Z
2017-04-13T18:19:28.194464866Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.194475629Z Priority queues enabled, real BQ is rabbit_variable_queue
2017-04-13T18:19:28.208882072Z
2017-04-13T18:19:28.208912016Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.208921824Z Starting rabbit_node_monitor
2017-04-13T18:19:28.211145158Z
2017-04-13T18:19:28.211169236Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.211182089Z Management plugin: using rates mode 'basic'
2017-04-13T18:19:28.224499311Z
2017-04-13T18:19:28.224527962Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.224538810Z msg_store_transient: using rabbit_msg_store_ets_index to provide index
2017-04-13T18:19:28.226355958Z
2017-04-13T18:19:28.226376272Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.226385706Z msg_store_persistent: using rabbit_msg_store_ets_index to provide index
2017-04-13T18:19:28.227832476Z
2017-04-13T18:19:28.227870221Z =WARNING REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.227891823Z msg_store_persistent: rebuilding indices from scratch
2017-04-13T18:19:28.230832501Z
2017-04-13T18:19:28.230872729Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.230893941Z Adding vhost '/'
2017-04-13T18:19:28.385440862Z
2017-04-13T18:19:28.385520360Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.385540022Z Creating user 'guest'
2017-04-13T18:19:28.398092244Z
2017-04-13T18:19:28.398184254Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.398206496Z Setting user tags for user 'guest' to [administrator]
2017-04-13T18:19:28.413704571Z
2017-04-13T18:19:28.413789806Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.413810378Z Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
2017-04-13T18:19:28.451109821Z
2017-04-13T18:19:28.451162892Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.451172185Z started TCP Listener on [::]:5672
2017-04-13T18:19:28.475429729Z
2017-04-13T18:19:28.475491074Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.475501172Z Management plugin started. Port: 15672
2017-04-13T18:19:28.475821397Z
2017-04-13T18:19:28.475835599Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.475844143Z Statistics database started.
2017-04-13T18:19:28.487572236Z completed with 6 plugins.
2017-04-13T18:19:28.487797794Z
2017-04-13T18:19:28.487809763Z =INFO REPORT==== 13-Apr-2017::18:19:28 ===
2017-04-13T18:19:28.487818426Z Server startup complete; 6 plugins started.
2017-04-13T18:19:28.487826288Z * rabbitmq_management
2017-04-13T18:19:28.487833914Z * rabbitmq_web_dispatch
2017-04-13T18:19:28.487841610Z * rabbitmq_management_agent
2017-04-13T18:19:28.487861057Z * amqp_client
2017-04-13T18:19:28.487875546Z * cowboy
2017-04-13T18:19:28.487883514Z * cowlib
*********
But I get this error
$ pytest --amqp-uri=amqp://guest:guest@rabbitmq:5672 --rabbit-ctl-uri=http://guest:guest@rabbitmq:15672 tests
============================= test session starts ==============================
platform linux -- Python 3.6.1, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
...
E Exception: Connection error for the RabbitMQ management HTTP API at http://guest:guest@rabbitmq:15672/api/overview, is it enabled?
...
source:565: DeprecationWarning: invalid escape sequence \*
ERROR: Job failed: exit code 1
| I used it the following way and it worked for me
image: "ruby:2.3.3" //not required by rabbitmq
services:
- rabbitmq:latest
variables:
RABBITMQ_DEFAULT_USER: guest
RABBITMQ_DEFAULT_PASS: guest
AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
Now you can use the AMQP_URL env variable to connect to the rabbitmq server. The general rule of thumb is that any service declared will have its name (e.g. rabbitmq from rabbitmq:latest) as the host, url or server. However, in case you are running it on your own server or Kubernetes cluster, it will be localhost or 127.0.0.1. In my humble opinion that might be the issue in your code. Hope it helps. :)
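For example, a Python test helper could pick that variable up when creating its connection (a sketch, assuming pika and the default value declared above):

import os
import pika

amqp_url = os.environ.get("AMQP_URL", "amqp://guest:guest@rabbitmq:5672")
connection = pika.BlockingConnection(pika.URLParameters(amqp_url))
channel = connection.channel()
channel.queue_declare(queue="ci-test")
connection.close()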
| RabbitMQ | 43,409,988 | 16 |
Recently, I did a quick implementation of a producer/consumer queue system.
<?php
namespace Queue;
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
use PhpAmqpLib\Wire\AMQPTable;
class Amqp
{
private $connection;
private $queueName;
private $delayedQueueName;
private $channel;
private $callback;
public function __construct($host, $port, $login, $password, $queueName)
{
$this->connection = new AMQPStreamConnection($host, $port, $login, $password);
$this->queueName = $queueName;
$this->delayedQueueName = null;
$this->channel = $this->connection->channel();
// First, we need to make sure that RabbitMQ will never lose our queue.
// In order to do so, we need to declare it as durable. To do so we pass
// the third parameter to queue_declare as true.
$this->channel->queue_declare($queueName, false, true, false, false);
}
public function __destruct()
{
$this->close();
}
// Just in case : http://stackoverflow.com/questions/151660/can-i-trust-php-destruct-method-to-be-called
// We should call close explicitly if possible.
public function close()
{
if (!is_null($this->channel)) {
$this->channel->close();
$this->channel = null;
}
if (!is_null($this->connection)) {
$this->connection->close();
$this->connection = null;
}
}
public function produceWithDelay($data, $delay)
{
if (is_null($this->delayedQueueName))
{
$delayedQueueName = $this->queueName . '.delayed';
// First, we need to make sure that RabbitMQ will never lose our queue.
// In order to do so, we need to declare it as durable. To do so we pass
// the third parameter to queue_declare as true.
$this->channel->queue_declare($this->delayedQueueName, false, true, false, false, false,
new AMQPTable(array(
'x-dead-letter-exchange' => '',
'x-dead-letter-routing-key' => $this->queueName
))
);
$this->delayedQueueName = $delayedQueueName;
}
$msg = new AMQPMessage(
$data,
array(
'delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT,
'expiration' => $delay
)
);
$this->channel->basic_publish($msg, '', $this->delayedQueueName);
}
public function produce($data)
{
$msg = new AMQPMessage(
$data,
array('delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT)
);
$this->channel->basic_publish($msg, '', $this->queueName);
}
public function consume($callback)
{
$this->callback = $callback;
// This tells RabbitMQ not to give more than one message to a worker at
// a time.
$this->channel->basic_qos(null, 1, null);
// Requires ack.
$this->channel->basic_consume($this->queueName, '', false, false, false, false, array($this, 'consumeCallback'));
while(count($this->channel->callbacks)) {
$this->channel->wait();
}
}
public function consumeCallback($msg)
{
call_user_func_array(
$this->callback,
array($msg)
);
// Very important to ack, in order to remove msg from queue. Ack after
// callback, as exception might happen in callback.
$msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
}
public function getQueueSize()
{
// three tuple containing (<queue name>, <message count>, <consumer count>)
$tuple = $this->channel->queue_declare($this->queueName, false, true, false, false);
if ($tuple != null && isset($tuple[1])) {
return $tuple[1];
}
return -1;
}
}
The public function produce and public function consume pair works as expected.
However, when it comes to the delayed queue system,
the public function produceWithDelay and public function consume pair doesn't work as expected. The consumer which calls consume is not able to receive any item, even after waiting for some period of time.
I believe something is not right with my produceWithDelay implementation. May I know what's wrong with it?
| First of all, verify that the rabbitmq_delayed_message_exchange plugin is enabled by running the command rabbitmq-plugins list. If not, read more info here.
You also have to update your __construct method, because you need to declare the queue in a slightly different way. I won't rewrite your constructor, but I would like to provide my simple example:
Declare queue:
<?php
require_once __DIR__ . '/../vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
use PhpAmqpLib\Wire\AMQPTable;
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$args = new AMQPTable(['x-delayed-type' => 'fanout']);
$channel->exchange_declare('delayed_exchange', 'x-delayed-message', false, true, false, false, false, $args);
$args = new AMQPTable(['x-dead-letter-exchange' => 'delayed']);
$channel->queue_declare('delayed_queue', false, true, false, false, false, $args);
$channel->queue_bind('delayed_queue', 'delayed_exchange');
Send message:
$data = 'Hello World at ' . date('Y-m-d H:i:s');
$delay = 7000;
$message = new AMQPMessage($data, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]);
$headers = new AMQPTable(['x-delay' => $delay]);
$message->set('application_headers', $headers);
$channel->basic_publish($message, 'delayed_exchange');
printf(' [x] Message sent: %s %s', $data, PHP_EOL);
$channel->close();
$connection->close();
Receive message:
$callback = function (AMQPMessage $message) {
printf(' [x] Message received: %s %s', $message->body, PHP_EOL);
$message->delivery_info['channel']->basic_ack($message->delivery_info['delivery_tag']);
};
$channel->basic_consume('delayed_queue', '', false, false, false, false, $callback);
while(count($channel->callbacks)) {
$channel->wait();
}
$channel->close();
$connection->close();
Also you can find source files here.
Hope it will help you!
| RabbitMQ | 42,990,585 | 16 |
I have brew installed rabbitmq on my mac and have tried the following
rabbitmq-server start
sbin/service rabbitmq-server start
and neither works. How do I start it?
| You should be able to run /usr/local/sbin/rabbitmq-server or use brew services start rabbitmq
| RabbitMQ | 39,397,646 | 16 |
This seems like a simple question, but I'm having a hard time finding a definitive answer. If in RabbitMQ 3.6.1 I have a queue that looks like this:
5 4 3 2 1 <= head
And I consume message 1, then do:
channel.BasicReject(ea.DeliveryTag, true);
Will the 1 end up on the end of the queue or at the head of the queue (assuming for the sake of simplicity that nobody else is consuming the queue at the time)? So will I end up with:
1 5 4 3 2 <= head
or:
5 4 3 2 1 <= head
And is there any way to control it (one way would be to ack the message and repost it entirely, I suppose)? I actually want the first situation, because I'm rejecting 1 due to a particular resource needed to process that message being currently unavailable. So I'd like to throw it back on the queue to be processed later (when the resource is available) or get picked up by somebody else (who has the resources available). But I don't want to throw it back just to keep picking it up again.
| I'd say the answer is here, I'll quote a part:
Messages can be returned to the queue using AMQP methods that feature
a requeue parameter (basic.recover, basic.reject and basic.nack), or
due to a channel closing while holding unacknowledged messages. Any of
these scenarios caused messages to be requeued at the back of the
queue for RabbitMQ releases earlier than 2.7.0. From RabbitMQ release
2.7.0, messages are always held in the queue in publication order, even in the presence of requeueing or channel closure.
So we can assume that RabbitMQ is implemented in such a way that messages are not physically deleted from the queue until they are ACKed; they may have an ACKed flag or something similar.
| RabbitMQ | 37,709,896 | 16 |
I'm new to Golang, and I would like to refactor my code so that the rabbitmq initialization is in another function than main. So I use a struct pointer (containing all the rabbitmq info, initialized) and pass it to the send function, but it tells me: Failed to publish a message: Exception (504) Reason: "channel/connection is not open"
struct :
type RbmqConfig struct {
q amqp.Queue
ch *amqp.Channel
conn *amqp.Connection
rbmqErr error
}
the init function :
func initRabbitMq() *RbmqConfig {
config := &RbmqConfig{}
config.conn, config.rbmqErr = amqp.Dial("amqp://guest:guest@localhost:5672/")
failOnError(config.rbmqErr, "Failed to connect to RabbitMQ")
defer config.conn.Close()
config.ch, config.rbmqErr = config.conn.Channel()
failOnError(config.rbmqErr, "Failed to open a channel")
defer config.ch.Close()
config.q, config.rbmqErr = config.ch.QueueDeclare(
"<my_queue_name>",
true, // durable
false, // delete when unused
false, // exclusive
false, // no-wait
nil, // arguments
)
failOnError(config.rbmqErr, "Failed to declare a queue")
return config
}
main :
config := initRabbitMq()
fmt.Println("queue name : ", config.q.Name)
sendMessage(config, <message_to_send>)
in send message :
func sendMessage(config *RbmqConfig, <message_to_send>) {
config.rbmqErr = config.ch.Publish(
"", // exchange
config.q.Name, // routing key
false, // mandatory
false,
amqp.Publishing{
DeliveryMode: amqp.Persistent,
ContentType: "text/plain",
Body: []byte(<message_to_send>),
})
failOnError(config.rbmqErr, "Failed to publish a message")
If someone has any idea, that would be very helpful. Thank you in advance
| Inside your init, you wrote defer config.conn.Close(), which will be executed when the function returns. That is to say, as soon as init finishes, your connection will be closed, which is why you later see that the channel/connection is not open.
You need to defer the connection closing in main, or wherever you want it to be closed.
| RabbitMQ | 36,579,759 | 16 |
TL;DR: I need to "replay" dead letter messages back into their original queues once I've fixed the consumer code that was originally causing the messages to be rejected.
I have configured the Dead Letter Exchange (DLX) for RabbitMQ and am successfully routing rejected messages to a dead letter queue. But now I want to look at the messages in the dead letter queue and try to decide what to do with each of them. Some (many?) of these messages should be replayed (requeued) to their original queues (available in the "x-death" headers) once the offending consumer code has been fixed. But how do I actually go about doing this? Should I write a one-off program that reads messages from the dead letter queue and allows me to specify a target queue to send them to? And what about searching the dead letter queue? What if I know that a message (let's say which is encoded in JSON) has a certain attribute that I want to search for and replay? For example, I fix a defect which I know will allow message with PacketId: 1234 to successfully process now. I could also write a one-off program for this I suppose.
I certainly can't be the first one to encounter these problems and I'm wondering if anyone else has already solved them. It seems like there should be some sort of Swiss Army Knife for this sort of thing. I did a pretty extensive search on Google and Stack Overflow but didn't really come up with much. The closest thing I could find were shovels but that doesn't really seem like the right tool for the job.
|
Should I write a one-off program that reads messages from the dead letter queue and allows me to specify a target queue to send them to?
generally speaking, yes.
you could set up a delayed retry to resend the message back to the original queue, for example using the delayed message exchange plugin.
but this would only automate the retries on an interval, and you may not have fixed the problem before the retries happen.
in some circumstances this is ok - like when the error is caused by an external resource being temporarily unavailable.
in your case, though, i believe your thoughts on creating an app to handle the dead letters is the best way to go, for several reasons (a rough sketch of such a tool follows the list below):
you need to search through the messages, which isn't possible in RabbitMQ
this means you'll need a database to store the messages from the DLX/queue
because you're pulling the messages out of the DLX/queue, you'll need to ensure you get all the header info from the message so that you can re-publish to the correct queue when the time comes.
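to make that concrete, here's a rough sketch of such a one-off replay tool using Python's pika - the queue name and the should_replay filter are illustrative assumptions, and the original queue name is taken from the first entry of the x-death header:

import pika

def should_replay(body):
    # placeholder filter: decide from the message body whether to replay it
    return b'"PacketId": 1234' in body

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

while True:
    method, properties, body = channel.basic_get(queue="my.dead.letter.queue", auto_ack=False)
    if method is None:
        break  # nothing left in the dead letter queue
    x_death = (properties.headers or {}).get("x-death", [])
    original_queue = x_death[0]["queue"] if x_death else None
    if original_queue and should_replay(body):
        # publish back to the original queue via the default exchange, then ack it off the DLQ
        channel.basic_publish(exchange="", routing_key=original_queue,
                              body=body, properties=properties)
        channel.basic_ack(method.delivery_tag)
    # messages we skip stay unacknowledged and return to the DLQ when the connection closes

connection.close()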
I certainly can't be the first one to encounter these problems and I'm wondering if anyone else has already solved them.
and you're not!
there are many solutions to this problem that all come down to the solution you've suggested.
some larger "service bus" implementations have this type of feature built in to them. i believe NServiceBus (or the SaaS version of it) has this built in, for example - though I'm not 100% sure of it.
if you want to look into this further, do some search for the term "poison message" - this is generally the term used for this situation. I've found a few things on google with a quick search, that may help you down the path:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2013-January/025019.html
https://web.archive.org/web/20170809194056/http://tafakari.co.ke/2014/07/rabbitmq-poison-messages/
https://web.archive.org/web/20170809170555/http://kjnilsson.github.io/blog/2014/01/30/spread-the-poison/
hope that helps!
| RabbitMQ | 36,186,578 | 16 |
I am relatively new to docker, celery and rabbitMQ.
In our project we currently have the following setup:
1 physical host with multiple docker containers running:
1x rabbitmq:3-management container
# pull image from docker hub and install
docker pull rabbitmq:3-management
# run docker image
docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit -p 8080:15672 -p 5672:5672 rabbitmq:3-management
1x celery container
# pull docker image from docker hub
docker pull celery
# run celery container
docker run --link some-rabbit:rabbit --name some-celery -d celery
(there are some more containers, but they should not have to do anything with the problem)
Task File
To get to know celery and rabbitmq a bit, I created a tasks.py file on the physical host:
from celery import Celery
app = Celery('tasks', backend='amqp', broker='amqp://guest:[email protected]/')
@app.task(name='tasks.add')
def add(x, y):
return x + y
The whole setup seems to be working quite fine actually. So when I open a python shell in the directory where tasks.py is located and run
>>> from tasks import add
>>> add.delay(4,4)
The task gets queued and directly pulled from the celery worker.
However, the celery worker does not know the tasks module regarding to the logs:
$ docker logs some-celery
[2015-04-08 11:25:24,669: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'callbacks': None, 'timelimit': (None, None), 'retries': 0, 'id': '2b5dc209-3c41-4a8d-8efe-ed450d537e56', 'args': (4, 4), 'eta': None, 'utc': True, 'taskset': None, 'task': 'tasks.add', 'errbacks': None, 'kwargs': {}, 'chord': None, 'expires': None} (256b)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: 'tasks.add'
So the problem obviously seems to be that the celery workers in the celery container do not know the tasks module.
Now as I am not a docker specialist, I wanted to ask how I would best import the tasks module into the celery container?
Any help is appreciated :)
EDIT 4/8/2015, 21:05:
Thanks to Isowen for the answer. Just for completeness here is what I did:
Let's assume my tasks.py is located on my local machine in /home/platzhersh/celerystuff. Now I created a celeryconfig.py in the same directory with the following content:
CELERY_IMPORTS = ('tasks')
CELERY_IGNORE_RESULT = False
CELERY_RESULT_BACKEND = 'amqp'
As mentioned by Isowen, celery searches /home/user of the container for tasks and config files. So we mount the /home/platzhersh/celerystuff into the container when starting:
docker run -v /home/platzhersh/celerystuff:/home/user --link some-rabbit:rabbit --name some-celery -d celery
This did the trick for me. Hope this helps some other people with similar problems.
I'll now try to expand that solution by putting the tasks also in a separate docker container.
| As you suspect, the issue is because the celery worker does not know the tasks module. There are two things you need to do:
Get your tasks definitions "into" the docker container.
Configure the celery worker to load those task definitions.
For Item (1), the easiest way is probably to use a "Docker Volume" to mount a host directory of your code onto the celery docker instance. Something like:
docker run --link some-rabbit:rabbit -v /path/to/host/code:/home/user --name some-celery -d celery
Where /path/to/host/code is your host path, and /home/user is the path to mount it at in the instance. Why /home/user in this case? Because the Dockerfile for the celery image defines the working directory (WORKDIR) as /home/user.
(Note: Another way to accomplish Item (1) would be to build a custom docker image with the code "built in", but I will leave that as an exercise for the reader.)
For Item (2), you need to create a celery configuration file that imports the tasks file. This is a more general issue, so I will point to a previous stackoverflow answer: Celery Received unregistered task of type (run example)
| RabbitMQ | 29,513,813 | 16 |
The scenario (I've simplified things):
Many end users can start jobs (heavy jobs, like rendering a big PDF for example), from a front end web application (producer).
The jobs are sent to a single durable RabbitMQ queue.
Many worker applications (consumers) processes those jobs and write the results back in a datastore.
This fairly standard pattern is working fine.
The problem: if a user starts 10 jobs in the same minute, and only 10 worker applications are up at that time of day, this end user is effectively taking over all the compute time for himself.
The question: How can I make sure only one job per end user is processed at any time ? (Bonus: some end users (admins for example) must not be throttled)
Also, I do not want the front end application to block end users from starting concurrent jobs. I just want the end users to wait for their concurrent jobs to finish one at a time.
The solution?: Should I dynamically create one auto-delete exclusive queue per end users ? If yes, how can I tell the worker applications to start consuming this queue ? How to ensure one (and only one) worker will consume from this queue ?
| You would need to build something yourself to implement this, as Dimos says. Here is an alternative implementation which requires an extra queue and some persistent storage.
As well as the existing queue for jobs, create a "processable job queue". Only jobs that satisfy your business rules are added to this queue.
Create a consumer (named "Limiter") for the job queue. The Limiter also needs persistent storage (e.g. Redis or a relational database) to record which jobs are currently processing. The limiter reads from the job queue and writes to the processable job queue.
When a worker application finishes processing a job, it adds a "job finished" event to the job queue.
------------ ------------ -----------
| Producer | -> () job queue ) -> | Limiter |
------------ ------------ -----------
^ |
| V
| ------------------------
| () processable job queue )
job finished | ------------------------
| |
| V
| ------------------------
\-----| Job Processors (x10) |
------------------------
The logic for the limiter is as follows:
When a job message is received, check the persistent storage to see if a job is already running for the current user:
If not, record the job in the storage as running and add the job message to the processable job queue.
If an existing job is running, record the job in the storage as a pending job.
If the job is for an admin user, always add it to the processable job queue.
When a "job finished" message is received, remove that job from the "running jobs" list in the persistent storage. Then check the storage for a pending job for that user:
If a job is found, change the status of that job from pending to running and add it to the processable job queue.
Otherwise, do nothing.
Only one instance of the limiter process can run at a time. This could be achieved either by only starting a single instance of the limiter process, or by using locking mechanisms in the persistent storage.
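To make the limiter's bookkeeping concrete, here is a rough sketch of the logic above in Python, using Redis as the persistent storage (the key names, the job dict shape and publish_to_processable_queue are all illustrative assumptions):

import json
import redis

r = redis.Redis()

def publish_to_processable_queue(job):
    # stand-in for publishing the job to the processable job queue (e.g. via your AMQP client)
    print("dispatching", job)

def on_job_message(job):
    user = job["user_id"]
    if job.get("is_admin"):
        publish_to_processable_queue(job)              # admins are never throttled
    elif r.set("running:%s" % user, json.dumps(job), nx=True):
        publish_to_processable_queue(job)              # nothing running for this user, dispatch now
    else:
        r.rpush("pending:%s" % user, json.dumps(job))  # record the job as pending

def on_job_finished(job):
    user = job["user_id"]
    pending = r.lpop("pending:%s" % user)
    if pending is not None:
        r.set("running:%s" % user, pending)            # promote the next pending job to running
        publish_to_processable_queue(json.loads(pending))
    else:
        r.delete("running:%s" % user)                  # nothing pending, clear the running marker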
It's fairly heavyweight, but you can always inspect the persistent storage if you need to see what's going on.
| RabbitMQ | 28,414,484 | 16 |
I am trying to start the rabbitmq server on CentOS 7. I installed Erlang as it is a dependency of rabbitmq-server (package erlang.x86_64 0:R16B-03.7.el7). I then installed rabbitmq using package rabbitmq-server-3.2.2-1.noarch.rpm. Installation was successful. I enabled the management console using rabbitmq-plugins enable rabbitmq_management. But starting the rabbitmq-server service fails.
[root@tve-centos ~]# systemctl start rabbitmq-server.service
Job for rabbitmq-server.service failed. See 'systemctl status rabbitmq-server.service' and 'journalctl -xn' for details.
[root@tve-centos ~]# systemctl status rabbitmq-server.service
rabbitmq-server.service - LSB: Enable AMQP service provided by RabbitMQ broker
Loaded: loaded (/etc/rc.d/init.d/rabbitmq-server)
Active: failed (Result: exit-code) since Fri 2014-09-12 13:07:05 PDT; 8s ago
Process: 20235 ExecStart=/etc/rc.d/init.d/rabbitmq-server start (code=exited, status=1/FAILURE)
Sep 12 13:07:04 tve-centos su[20245]: (to rabbitmq) root on none
Sep 12 13:07:05 tve-centos su[20296]: (to rabbitmq) root on none
Sep 12 13:07:05 tve-centos su[20299]: (to rabbitmq) root on none
Sep 12 13:07:05 tve-centos rabbitmq-server[20235]: Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
Sep 12 13:07:05 tve-centos rabbitmq-server[20235]: rabbitmq-server.
Sep 12 13:07:05 tve-centos systemd[1]: rabbitmq-server.service: control process exited, code=exited status=1
Sep 12 13:07:05 tve-centos systemd[1]: Failed to start LSB: Enable AMQP service provided by RabbitMQ broker.
Sep 12 13:07:05 tve-centos systemd[1]: Unit rabbitmq-server.service entered failed state.
and logs shows /var/log/rabbitmq/startup_log
BOOT FAILED
===========
Error description:
{could_not_start,rabbitmq_management,
{could_not_start_listener,[{port,15672}],eacces}}
Log files (may contain more information):
/var/log/rabbitmq/[email protected]
/var/log/rabbitmq/[email protected]
but no process is using port 15672
But if I try to start it using /usr/sbin/rabbitmq-server, the service starts successfully. However, my requirement is to start it using systemctl.
| A better answer would be to actually fix SELinux and the firewall.
Open the port:
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --reload
setsebool -P nis_enabled 1
That works for me.
| RabbitMQ | 25,816,918 | 16 |
I'm using rabbitmq to send messages from a single server to multiple clients. I want to send a message to all clients so I have created an exchange which they all bind to. This works great. However, what if I want to send a message to a handful of these clients based on a wildcard in the routing key (not the binding key). For instance, I have say red clients, blue clients and green clients. Sometimes I want all clients to receive the message, sometimes I want just the blue, or just the blue and the red. This is a simplified example. To extend this to my actual system, imagine I have hundreds of "color" distinctions. I can't figure out how to do this as wildcards seem to only exist in binding keys not routing keys.
Any advice will be greatly appreciated.
| I think you are trying to do too much with one queue. Considering that you know ahead of time whether the message will go to all clients or just some, you should set up two exchanges: one as a topic (or direct) exchange, where clients only get the messages specifically intended for them, and the other as a fanout exchange that distributes to a different set of queues read by all clients. Headers exchanges may give you the flexibility you want as well, and the other possibility is writing a custom exchange to do exactly what you need.
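For the "only some clients" part, a topic (or direct) exchange lets each client bind its own queue per color while the publisher targets any subset by publishing once per routing key. A minimal sketch with Python's pika (exchange and queue names are made up):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="clients", exchange_type="topic")

# each client binds its own queue with its color as the binding key
for color in ("red", "blue", "green"):
    channel.queue_declare(queue="clients.%s" % color)
    channel.queue_bind(queue="clients.%s" % color, exchange="clients", routing_key=color)

# send to just the blue and red clients
for color in ("blue", "red"):
    channel.basic_publish(exchange="clients", routing_key=color,
                          body="message for %s clients" % color)

connection.close()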
| RabbitMQ | 22,546,840 | 16 |
I have a node.js app that connects to RabbitMQ to receive messages. When the messages come in and I output them to the console I get:
{ data: <Buffer 62 6c 61 68>, contentType: undefined }
How do I get a proper JSON or string out of this? Here is my example:
var amqp = require('amqp');
var connection = amqp.createConnection({ host: 'localhost' });
process.on('uncaughtException', function(err) {
console.error(err.stack);
});
connection.on('ready', function () {
// Use the default 'amq.topic' exchange
connection.queue('my-queue', function(q){
q.bind('#');
q.subscribe(function (message) {
console.log(message);
});
});
});
The messages are being sent using the RabbitMQ management console (for testing purposes currently). In this example I sent a simple message with the topic of "test" and the body "blah".
I'm new to Node.js but I have tried to do
console.log(message.toJSON());
and I get nothing. Not even an error message. (not sure how to catch the issue)
console.log(message.toString());
When I do this I get [object Object] which doesn't help
console.log(JSON.parse(message.toString('utf8')));
This also does nothing and I get no error message. I assume it's failing, but why I don't get an exception is unknown to me.
| If you are using amqplib then the below code solves the issue.
In the sender.js file I convert the data to a JSON string:
var data = [{
name: '********',
company: 'JP Morgan',
designation: 'Senior Application Engineer'
}];
ch.sendToQueue(q, Buffer.from(JSON.stringify(data)));
And in receiver.js I use the code below to print the content from the queue. Here I parse msg.content as JSON.
ch.consume(q, function(msg) {
console.log(" [x] Received");
console.log(JSON.parse(msg.content));
}, {noAck: true});
| RabbitMQ | 22,464,858 | 16 |
I want to build a RabbitMQ cluster on my dev machine (Windows).
The reason is that I would like to test and study it.
Is it possible to run more than one rabbitmq instance on one machine?
I am guessing I need to:
Change the listening port
Change the appdata folder (C:\Users\MyUser\AppData\Roaming)
Change the ui plugin port so I can view all instances.
Remove the service and run from cli
Has anyone tried it?
Is there a known guide?
| Now the official RabbitMQ documentation contains a section "A Cluster on a Single Machine", which describes how to run multiple Rabbit nodes on a single machine.
See https://www.rabbitmq.com/clustering.html#single-machine
| RabbitMQ | 21,453,910 | 16 |
Can anyone give me examples of how in production a correlation id can be used?
I have read it is used in request/response type messages but I don't understand where I would use it?
One example (which may be wrong) I can think of is a publish/subscribe scenario where I could have 5 subscribers, and if I get 5 replies with the same correlation id then I could say all my subscribers have received it. Not sure if this would be the correct usage of it.
Or, if I send a simple message, then I can use the correlation id to confirm that the client received it.
Any other examples?
| A web application that provides an HTTP API for outsiders to perform a processing task, where you want to give the results to the caller as a response to the HTTP request they made.
A request comes in, and a message describing the task is pushed to the queue by the frontend server. After that the frontend server blocks, waiting for a response message with the same correlation id. A pool of worker machines is listening on the queue; one of them picks up the task, performs it and returns the result as a message. Once a message with the right correlation id comes in, the frontend server returns the response to the caller.
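A minimal sketch of the frontend side of that pattern, using Python's pika (the queue name is illustrative; this is essentially the classic RPC-over-RabbitMQ flow):

import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="task_queue", durable=True)
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

corr_id = str(uuid.uuid4())
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
    body="describe the task here",
)

# block until a response with the matching correlation id arrives on the reply queue
for method, properties, body in channel.consume(reply_queue, auto_ack=True):
    if properties.correlation_id == corr_id:
        print("result:", body)
        break

channel.cancel()
connection.close()

The worker that picks up the task copies correlation_id onto its reply and publishes it to reply_to, which is how the frontend matches the answer to the request it is still holding open.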
| RabbitMQ | 20,184,755 | 16 |
RabbitMQ's Channel#basicConsume method gives us the following arguments:
channel.basicConsume(queueName, autoAck, consumerTag, noLocal,
exclusive, arguments, callback);
Giving us the ability to tell RabbitMQ exactly which queue we want to consume from.
But Channel#basicPublish has no such equivalency:
channel.basicPublish(exchangeName, routingKey, mandatory, immediateFlag,
basicProperties, messageAsBytes);
Why can't I specify the queue to publish to here?!? How do I get a Channel publishing to, say, a queue named logging? Thanks in advance!
| To expand on @Tien Nguyen's answer, there is a "cheat" in RabbitMQ that effectively lets you publish directly to a queue. Each queue is automatically bound to the AMQP default exchange, with the queue's name as the routing key. The default exchange is also known as the "nameless exchange" - ie its name is the empty string. So if you publish to the exchange named "" with routing key equal to your queue's name, the message will go to just that queue. It is going through an exchange as @John said, it's just not one that you need to declare or bind yourself.
I don't have the Java client handy to try this code, but it should work.
channel.basicPublish("", myQueueName, false, false, null, myMessageAsBytes);
That said, this is mostly contrary to the spirit of how RabbitMQ works. For normal application flow you should declare and bind exchanges. But for exceptional cases the "cheat" can be useful. For example, I believe this is how the Rabbit Admin Console allows you to manually publish messages to a queue without all the ceremony of creating and binding exchanges.
| RabbitMQ | 18,531,308 | 16 |
I have a Java client which monitors a RabbitMQ queue. I am able to get the count of messages currently in the queue with this code:
@Resource
RabbitAdmin rabbitAdmin;
..........
DeclareOk declareOk = rabbitAdmin.getRabbitTemplate().execute(new ChannelCallback<DeclareOk>() {
public DeclareOk doInRabbit(Channel channel) throws Exception {
return channel.queueDeclarePassive("test.pending");
}
});
return declareOk.getMessageCount();
I want to get some additional details, like:
The message bodies of currently enqueued items.
The total number of messages that were enqueued since the queue was created.
Is there any way to retrieve this data with the Java client?
| With the AMQP protocol (including RabbitMQ's implementation) you can't get such info with a 100% guarantee.
The closest thing to a message count is the count returned with queue.declare-ok (AMQP.Queue.DeclareOk in the Java AMQP client library).
Whilst the message count you receive with queue.declare-ok may match the exact number of enqueued messages, you can't rely on it, as it doesn't count messages which are waiting for acknowledgement or which were published to the queue during a transaction that is not yet committed.
It really depends on what kind of precision you need.
As to the bodies of enqueued messages, you may want to manually extract all messages in the queue, view their bodies and put them back in the queue. This is the only way to do what you want.
You can get some information about message counts with the Management Plugin, the RabbitMQ Management HTTP API and the rabbitmqctl util (see list_queues, list_channels).
You can't get the total count of messages published since the queue was created, and I don't think anybody implements such stats, as they would be of little use (FYI, with a message flow averaging 10k per second you would not even exhaust a uint64 in a few thousand years).
| RabbitMQ | 17,684,848 | 16 |
This RabbitMQ page states:
Queue names may be up to 255 bytes of UTF-8 characters.
In ruby (1.9.3), how would I truncate a UTF-8 string by byte-count without breaking in the middle of a character? The resulting string should be the longest possible valid UTF-8 string that fits in the byte limit.
| For Rails >= 3.0 you have ActiveSupport::Multibyte::Chars limit method.
From API docs:
- (Object) limit(limit)
Limit the byte size of the string to a number of bytes without breaking characters. Usable when the storage for a string is limited for some reason.
Example:
'こんにちは'.mb_chars.limit(7).to_s # => "こん"
| RabbitMQ | 12,536,080 | 16 |
I am trying to install amqp for PHP (Integrating PHP with RabbitMQ)
using this http://code.google.com/p/php-amqp/.
after run
phpize && ./configure --with-amqp && make && sudo make install
it give error like this
Cannot find config.m4.
Make sure that you run '/usr/bin/phpize' in the top level source directory of the module
Please help me; my environment is Ubuntu.
| You need to download the code for the PHP library from here: http://code.google.com/p/php-amqp/downloads/list
Then cd into that folder and run the command they tell you to run.
UPDATE: That page is actually an old page, they haven't updated it in a long time. You can grab the latest amqp from http://pecl.php.net/get/amqp:
wget http://pecl.php.net/get/amqp -O amqp.tar.gz
tar -zxvf amqp.tar.gz
cd amqp-1.0.7 # replace this with the current version
phpize
./configure --with-amqp
make
sudo make install
Then you'll need to add the following line to your php.ini file:
extension=amqp.so
| RabbitMQ | 7,831,407 | 16 |
I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many of the blog posts, notes and essays out there as people have been kind enough to share. Yet, a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end-user -> Client to/from Server in both of those? Would that be simple Javascript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
| Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
If you are building your client in the browser, and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js which is a pure Javascript implementation of AMQP (the AMQ Protocol) which is the standard protocol used to communicate to a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later which something (AMQP) that has a long term future in your toolbox.
| RabbitMQ | 6,169,658 | 16 |
I would like to learn what are the scenarios/usecases/ where messaging like RabbitMQ can help consumer web applications.
Are there any specific resources to learn from?
What web applications currently are making use of such messaging schemes and how?
| In general, a message bus (such as RabbitMQ, but not limited to) allows for a reliable queue of job processing.
What this means to you in terms of a web application is the ability to scale your app as demand grows and to keep your UI quick and responsive.
Instead of forcing the user to wait while a job is processed they can request a job to be processed (for example, clicking a button on a web page to begin transcoding a video file on your server) which sends a message to your bus, let's the backend service pick it up when it's turn in the queue comes up, and maybe notify the user that work has/will begin. You can then return control to the UI, so the user can continue working with the application.
In this situation, your web interface does zero heavy lifting, instead just giving the user visibility into stages of the process as you see fit (for example, the job could incrementally update database records with the state of process which you can query and display to your user).
I would assume that any web application that experiences any kind of considerable traffic would have this type of infrastructure. While there are downsides (network glitches could potentially disrupt message delivery, more complex infrastructure, etc.) the advantages of scaling your backend become increasingly evident. If you're using cloud services this type of infrastructure makes it trivial to add additional message handlers to process your jobs by subscribing to the job queue and just picking off messages to process.
| RabbitMQ | 6,104,418 | 16 |
I'd like to selectively delete messages from an AMQP queue without even reading them.
The scenario is as follows:
The sending side wants to expire messages of type X based on the fact that new information of type X has arrived. Because it's very probable that the subscriber hasn't consumed the latest message of type X yet, the publisher should just delete the previous X-type messages and put the newest one into the queue. The whole operation should be transparent to the subscriber - in fact he should be able to use something as simple as STOMP to get the messages.
How can I do this using AMQP? Or maybe it's more convenient in another messaging protocol?
I'd like to avoid a complicated infrastructure. The whole messaging needed is as simple as above: one queue, one subscriber, one publisher, but the publisher must have the ability to ad hoc delete messages matching given criteria.
The publisher client will use Ruby, but actually I'd deal with any language as long as I can discover how to do it in the protocol.
| You do not want a message queue, you want a key-value database. For instance you could use Redis or Tokyo Tyrant to get a simple network-accessible key-value database. Or just use a memcache.
Each message type is a key. When you write a new message with the same key, it overwrites the previous value so the reader of this database will never be able to get out of date information.
At this point, you only need a message queue to establish the order in which keys should be read, if that is important. Otherwise, just continually scan the database. If you do continually scan the database, it is best to put the database near the readers to reduce network traffic.
I would probably do something like this
key: typecode
value: lastUpdated, important data
Then I would send messages that contain
typecode, lastUpdated
That way the reader can compare lastUpdated for that key to the one that they last read from the database and skip reading it because they are already up to date.
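A small sketch of that reader-side check, using Redis from Python (the key layout is the one described above; field names are illustrative):

import json
import time
import redis

r = redis.Redis()

def write_latest(typecode, data):
    # publisher side: overwrite the previous value for this typecode
    r.set(typecode, json.dumps({"lastUpdated": time.time(), "data": data}))

def read_if_newer(typecode, last_seen):
    # reader side: only return data if it is newer than what we last saw
    raw = r.get(typecode)
    if raw is None:
        return None, last_seen
    value = json.loads(raw)
    if value["lastUpdated"] <= last_seen:
        return None, last_seen  # already up to date, skip
    return value["data"], value["lastUpdated"]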
If you really need to do this with AMQP, then use RabbitMQ and a custom exchange type, specifically a Last Value Cache Exchange. Example code is here https://github.com/squaremo/rabbitmq-lvc-plugin
| RabbitMQ | 3,434,763 | 16 |
Does the content type header in RabbitMQ have any special meaning, or is it only a standardized way for my producers and consumers to signal what kind of data they are sending? In other words: will messages with certain content types get any special treatment, or is it just bytes, either way?
| RabbitMQ doesn't use the content-type header internally at all. It's for producers and consumers to signal message types, as you guessed.
| RabbitMQ | 3,278,590 | 16 |
I cannot seem to start or install my RabbitMQ server on Ubuntu 18.04 anymore. I tried to remove and install it again, but it cannot finish the install because configuration fails. When I try to run sudo apt-get install --fix-broken, this is the result of the failure:
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 61 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up rabbitmq-server (3.6.10-1) ...
Job for rabbitmq-server.service failed because the control process exited with error code.
See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
● rabbitmq-server.service - RabbitMQ Messaging Server
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2018-08-22 09:16:51 EEST; 5ms ago
Process: 20997 ExecStartPost=/usr/lib/rabbitmq/bin/rabbitmq-server-wait (code=exited, status=70)
Process: 20996 ExecStart=/usr/sbin/rabbitmq-server (code=exited, status=0/SUCCESS)
Main PID: 20996 (code=exited, status=0/SUCCESS)
elo 22 09:16:48 ubuntu-dev systemd[1]: Starting RabbitMQ Messaging Server...
elo 22 09:16:49 ubuntu-dev rabbitmq[20997]: Waiting for 'rabbit@ubuntu-dev'
elo 22 09:16:49 ubuntu-dev rabbitmq[20997]: pid is 21001
elo 22 09:16:51 ubuntu-dev rabbitmq[20997]: Error: process_not_running
elo 22 09:16:51 ubuntu-dev systemd[1]: rabbitmq-server.service: Control process exited, code=exited status=70
elo 22 09:16:51 ubuntu-dev systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
elo 22 09:16:51 ubuntu-dev systemd[1]: Failed to start RabbitMQ Messaging Server.
dpkg: error processing package rabbitmq-server (--configure):
installed rabbitmq-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Then when checking the log files they doesn't provide much more information either. Here is startup_err log file content:
init terminating in do_boot (noproc)
Crash dump is being written to: erl_crash.dump...done'
And here is startup_log file content:
BOOT FAILED
===========
Error description:
noproc
Log files (may contain more information):
/var/log/rabbitmq/rabbit.log
/var/log/rabbitmq/rabbit-sasl.log
Stack trace:
[{gen,do_for_proc,2,[{file,"gen.erl"},{line,228}]},
{gen_event,rpc,2,[{file,"gen_event.erl"},{line,239}]},
{rabbit,ensure_working_log_handlers,0,
[{file,"src/rabbit.erl"},{line,842}]},
{rabbit,'-boot/0-fun-0-',0,[{file,"src/rabbit.erl"},{line,281}]},
{rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,417}]},
{init,start_em,1,[]},
{init,do_boot,3,[]}]
=INFO REPORT==== 22-Aug-2018::09:16:49.691453 ===
Error description:
noproc
Log files (may contain more information):
/var/log/rabbitmq/rabbit.log
/var/log/rabbitmq/rabbit-sasl.log
Stack trace:
[{gen,do_for_proc,2,[{file,"gen.erl"},{line,228}]},
{gen_event,rpc,2,[{file,"gen_event.erl"},{line,239}]},
{rabbit,ensure_working_log_handlers,0,
[{file,"src/rabbit.erl"},{line,842}]},
{rabbit,'-boot/0-fun-0-',0,[{file,"src/rabbit.erl"},{line,281}]},
{rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,417}]},
{init,start_em,1,[]},
{init,do_boot,3,[]}]
{"init terminating in do_boot",noproc}
The other log files it claims to use for logging are empty, for example [email protected] and [email protected].
I also found this post, which explains that you should check your hostname in the /etc/hostname file, but I checked and it's correct.
kazhu@ubuntu-dev:/var/log/rabbitmq$ cat /etc/hostname
ubuntu-dev
I also checked the RabbitMQ troubleshooting guide, which said to check the log folder permissions, and they look right to my eye:
kazhu@ubuntu-dev:/var/log/rabbitmq$ ll
total 48
drwxr-xr-x 2 rabbitmq rabbitmq 4096 kesä 14 06:16 ./
drwxrwxr-x 16 root syslog 4096 elo 22 00:09 ../
-rw-r--r-- 1 rabbitmq rabbitmq 0 kesä 14 06:16 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 5247 kesä 14 06:16 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 954 touko 28 08:36 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 768 touko 21 07:11 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 708 touko 16 00:12 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 955 touko 7 07:26 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 4264 huhti 22 00:07 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 0 huhti 17 15:58 '[email protected]'
-rw-r--r-- 1 rabbitmq rabbitmq 95 elo 22 09:16 startup_err
-rw-r--r-- 1 rabbitmq rabbitmq 1212 elo 22 09:16 startup_log
The guide also stated that the Erlang crash dump file contains detailed information about the problem and requires Erlang expertise, which I don't have. So I decided to upload the file to my Dropbox for you to see.
Can somebody help me solve this? I've tried for some time myself but gave up because I cannot figure out what the problem is :/
| I solved the problem with the help of my colleague. I had installed the newest Erlang and RabbitMQ separately, from outside apt sources. I removed and purged everything related to RabbitMQ and Erlang, and removed the added apt sources too. Then I just ran sudo apt install rabbitmq-server and it wanted to install the Erlang packages too because of the dependency. It installed, and everything has been working fine after that.
Wanted to share this solution if somebody else has the same problem as me.
UPDATE 9.12.2020:
Someone asked how I removed RabbitMQ and Erlang. I don't fully remember but I think I was following this guide: https://www.rabbitmq.com/install-debian.html.
The point is to remove the installed RabbitMQ and Erlang packages from added repositories and their configuration with
sudo apt purge rabbitmq-server erlang
You might need to search for the rest of the Erlang packages with
apt list | grep erlang
Then you need to remove the added apt repositories. Usually, added repositories in Ubuntu go under the /etc/apt/sources.list.d/ folder. Look for file names like rabbitmq and erlang. Make sure you are not deleting any other files!
After this, run sudo apt update and apt should drop the removed repositories. Then just running sudo apt install rabbitmq-server should do the trick and install the Erlang package as a dependency. Of course, installing this way you get a much older version than using the added repositories.
| RabbitMQ | 51,961,253 | 15 |
Just to make things tricky, I'd like to consume messages from the rabbitMQ queue. Now I know there is a plugin for MQTT on rabbit (https://www.rabbitmq.com/mqtt.html).
However I cannot seem to make an example work where Spark consumes a message that has been produced from pika.
For example, I am using the simple wordcount.py program here (https://spark.apache.org/docs/1.2.0/streaming-programming-guide.html) to see if I can see a message from a producer set up in the following way:
import sys
import pika
import json
import future
import pprofile
def sendJson(json):
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='analytics', durable=True)
channel.queue_bind(exchange='analytics_exchange',
queue='analytics')
channel.basic_publish(exchange='analytics_exchange', routing_key='analytics',body=json)
connection.close()
if __name__ == "__main__":
with open(sys.argv[1],'r') as json_file:
sendJson(json_file.read())
The sparkstreaming consumer is the following:
import sys
import operator
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.mqtt import MQTTUtils
sc = SparkContext(appName="SS")
sc.setLogLevel("ERROR")
ssc = StreamingContext(sc, 1)
ssc.checkpoint("checkpoint")
#ssc.setLogLevel("ERROR")
#RabbitMQ
"""EXCHANGE = 'analytics_exchange'
EXCHANGE_TYPE = 'direct'
QUEUE = 'analytics'
ROUTING_KEY = 'analytics'
RESPONSE_ROUTING_KEY = 'analytics-response'
"""
brokerUrl = "localhost:5672" # "tcp://iot.eclipse.org:1883"
topic = "analytics"
mqttStream = MQTTUtils.createStream(ssc, brokerUrl, topic)
#dummy functions - nothing interesting...
words = mqttStream.flatMap(lambda line: line.split(" "))
pairs = words.map(lambda word: (word, 1))
wordCounts = pairs.reduceByKey(lambda x, y: x + y)
wordCounts.pprint()
ssc.start()
ssc.awaitTermination()
However unlike the simple wordcount example, I cannot get this to work and get the following error:
16/06/16 17:41:35 ERROR Executor: Exception in task 0.0 in stage 7.0 (TID 8)
java.lang.NullPointerException
at org.eclipse.paho.client.mqttv3.MqttConnectOptions.validateURI(MqttConnectOptions.java:457)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:273)
So my questions are, what should be the settings in terms of MQTTUtils.createStream(ssc, brokerUrl, topic) to listen into the queue and whether there are any more fuller examples and how these map onto those of rabbitMQ.
I am running my consumer code with: ./bin/spark-submit ../../bb/code/skunkworks/sparkMQTTRabbit.py
I have updated the producer code as follows with TCP parameters as suggested by one comment:
url_location = 'tcp://localhost'
url = os.environ.get('', url_location)
params = pika.URLParameters(url)
connection = pika.BlockingConnection(params)
and the spark streaming as:
brokerUrl = "tcp://127.0.0.1:5672"
topic = "#" #all messages
mqttStream = MQTTUtils.createStream(ssc, brokerUrl, topic)
records = mqttStream.flatMap(lambda line: json.loads(line))
count = records.map(lambda rec: len(rec))
total = count.reduce(lambda a, b: a + b)
total.pprint()
| It looks like you are using the wrong port number. Assuming that:
you have a local instance of RabbitMQ running with default settings and you've enabled MQTT plugin (rabbitmq-plugins enable rabbitmq_mqtt) and restarted RabbitMQ server
included spark-streaming-mqtt when executing spark-submit / pyspark (either with packages or jars / driver-class-path)
you can connect using TCP with tcp://localhost:1883. You also have to remember that MQTT uses the amq.topic exchange.
Quick start:
create Dockerfile with following content:
FROM rabbitmq:3-management
RUN rabbitmq-plugins enable rabbitmq_mqtt
build Docker image:
docker build -t rabbit_mqtt .
start image and wait until server is ready:
docker run -p 15672:15672 -p 5672:5672 -p 1883:1883 rabbit_mqtt
create producer.py with following content:
import pika
import time
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='amq.topic',
type='topic', durable=True)
for i in range(1000):
channel.basic_publish(
exchange='amq.topic', # amq.topic as exchange
routing_key='hello', # Routing key used by producer
body='Hello World {0}'.format(i)
)
time.sleep(3)
connection.close()
start producer
python producer.py
and visit management console http://127.0.0.1:15672/#/exchanges/%2F/amq.topic
to see that messages are received.
create consumer.py with following content:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.mqtt import MQTTUtils
sc = SparkContext()
ssc = StreamingContext(sc, 10)
mqttStream = MQTTUtils.createStream(
ssc,
"tcp://localhost:1883", # Note both port number and protocol
"hello" # The same routing key as used by producer
)
mqttStream.count().pprint()
ssc.start()
ssc.awaitTermination()
ssc.stop()
download dependencies (adjust Scala version to the one used to build Spark and Spark version):
mvn dependency:get -Dartifact=org.apache.spark:spark-streaming-mqtt_2.11:1.6.1
make sure SPARK_HOME and PYTHONPATH point to the correct directories.
submit consumer.py with (adjust versions as before):
spark-submit --packages org.apache.spark:spark-streaming-mqtt_2.11:1.6.1 consumer.py
If you followed all the steps you should see Hello world messages in the Spark log.
| RabbitMQ | 37,863,801 | 15 |
I have this issue: I want to know whether my RabbitMQ setup is working correctly.
I am not the one sending the message, so I'm not 100% sure it is being sent correctly. But the problem is this.
After all is configured and all....
I see at the RabbitMQ web manager
And when I supposedly send a message, I see activity on the "message rates" chart but nothing on the "queued messages" chart.
I frankly don't know what's going on. Is it so fast that it doesn't need to queue the messages? Or is something misconfigured?
Any idea of the difference?
Thanks.
| If RabbitMQ receives a non-routable message, it drops it. So while the message was received, it was not queued.
You may configure Alternate Exchanges to catch such messages.
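A minimal pika sketch of an alternate exchange setup (the exchange and queue names here are made up for illustration, and this assumes a recent pika where the keyword is exchange_type; older versions used type):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Fanout exchange plus a bound queue to collect anything that could not be routed
channel.exchange_declare(exchange='unrouted', exchange_type='fanout')
channel.queue_declare(queue='unrouted_messages', durable=True)
channel.queue_bind(queue='unrouted_messages', exchange='unrouted')

# Main exchange declared with the 'alternate-exchange' argument, so messages that
# match no binding are forwarded to 'unrouted' instead of being silently dropped
channel.exchange_declare(exchange='my_exchange', exchange_type='direct',
                         arguments={'alternate-exchange': 'unrouted'})
Alternatively, publishing with mandatory=True makes the broker return unroutable messages to the publisher instead of dropping them, which also helps confirm whether your routing key actually matches a binding.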
| RabbitMQ | 37,625,376 | 15 |
in this guide https://www.rabbitmq.com/api-guide.html RabbitMQ guys state:
Channels and Concurrency Considerations (Thread Safety)
Channel instances must not be shared between threads. Applications should prefer using a Channel per thread instead of sharing the same Channel across multiple threads. While some operations on channels are safe to invoke concurrently, some are not and will result in incorrect frame interleaving on the wire. Sharing channels between threads will also interfere with * Publisher Confirms.
Thread safety is very important so I tried to be as diligent as possible, but here's the problem:
I have this application that receives messages from Rabbit. When a message is received, it processes it and then acks when it's done. The application can process just 2 items at the same time in a fixed thread pool with 2 threads. The QOS prefetch for Rabbit is set to 2, because I don't want to feed the app with more than it can handle in a time frame.
Now, my consumer's handleDelivery does the following:
Task run = new Task(JSON.parse(message));
service.execute(new TestWrapperThread(getChannel(),run,envelope.getDeliveryTag()));
At this point, you already figured out that TestWrapperThread does the channel.basicAck(deliveryTag, false); call as last operation.
By my understanding of the documentation, this is incorrect and potentially harmful because the channel is not thread safe and this behavior could screw things up. But how am I supposed to do it then? I mean, I have a few ideas, but they would definitely make everything more complex and I'd like to figure out if it's really necessary or not.
Thanks in advance
| I suppose you are using the Channel only for your consumer and not for other operations like publish, etc.
In your case the only potential problem is here:
channel.basicAck(deliveryTag, false);
because you call this across two threads. By the way, this operation is safe, as you can see in the Java code:
the class ChannelN.java calls:
public void basicAck(long deliveryTag, boolean multiple)
throws IOException
{
transmit(new Basic.Ack(deliveryTag, multiple));
}
see github code for ChannelN.java
the transmit method inside AMQChannel uses:
public void transmit(Method m) throws IOException {
synchronized (_channelMutex) {
transmit(new AMQCommand(m));
}
}
_channelMutex is a protected final Object _channelMutex = new Object();
created with the class.
see github code for AMQChannel.java
EDIT
As you can read in the official documentation, "some" operations are thread-safe; it is not clear which ones.
I studied the code, and I think there are no problems with calling the ACK across multiple threads.
Hope it helps.
EDIT2
I add also Nicolas's comment:
Note that consuming (basicConsume) and acking from more than one thread is a common rabbitmq pattern that is already used by the java client.
So you can use it safely.
| RabbitMQ | 30,695,375 | 15 |
I am running the RabbitMQ Management console on a machine where ports above the 10000 range are blocked by a firewall. Can I change the port so that I can use one of the 9000-range ports?
Please help!
| RabbitMQ has a config file rabbitmq.config.example or just rabbitmq.config under /etc/rabbitmq directory on linux servers.
Locate the rabbitmq_management tuple and change the port value (in the example file it is 12345; change it to whatever you want).
Be sure to uncomment or add the following content into /etc/rabbitmq/rabbitmq.config file as shown below.
{rabbitmq_management,[{listener, [{port, 12345}]}]}
Then restart the RabbitMQ server instance once
$ sudo /etc/init.d/rabbitmq-server restart
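On newer RabbitMQ versions that use the sysctl-style /etc/rabbitmq/rabbitmq.conf, the equivalent setting is a single line (the port value below is just an example within the asker's allowed range):
management.tcp.port = 9000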
| RabbitMQ | 30,616,800 | 15 |
I am on Ubuntu 14.04 and I installed rabbitmq. As I was reading through the configuration documentation, I wanted to create my own rabbitmq.config file in /etc/rabbitmq/rabbitmq.config, so I searched for an example of a configuration file which I found under /usr/share/doc/rabbitmq-server/rabbitmq.config.example.gz.
I unzipped it in /etc/rabbitmq/rabbitmq.config and started to uncomment many options. Once I tried to restart rabbitmq through sudo service rabbitmq-server restart it failed. I looked in the logs and I found the following error:
==> /var/log/rabbitmq/startup_log <==
{"could not start kernel pid",application_controller,"error in config file \"/etc/rabbitmq/rabbitmq.config\" (214): syntax error before: ']'"}
So I thought I hadn't written one of the options correctly and I tried to fix line 214, but line 214 is the end of the configuration dictionary for the first section. I erased the file and restarted from scratch, thinking that I would uncomment line by line, restarting rabbitmq between each change, to find out on which line I made an error.
First thing I did was to uncomment a line I don't have to modify like this one:
%% {reverse_dns_lookups, true}, %% I only removed the % signs
It didn't change anything: it was still unable to restart rabbitmq-server, throwing exactly the same syntax error at line 214. I checked where the list and the dictionary start and end and everything looks fine to me. By the way, if you leave the file unchanged it will allow you to restart rabbitmq.
Did I forget to uncomment something in this file?
Original example file:
%% -*- mode: erlang -*-
%% ----------------------------------------------------------------------------
%% RabbitMQ Sample Configuration File.
%%
%% See http://www.rabbitmq.com/configure.html for details.
%% ----------------------------------------------------------------------------
[
{rabbit,
[%%
%% Network Connectivity
%% ====================
%%
%% By default, RabbitMQ will listen on all interfaces, using
%% the standard (reserved) AMQP port.
%%
%% {tcp_listeners, [5672]},
%% To listen on a specific interface, provide a tuple of {IpAddress, Port}.
%% For example, to listen only on localhost for both IPv4 and IPv6:
%%
%% {tcp_listeners, [{"127.0.0.1", 5672},
%% {"::1", 5672}]},
%% SSL listeners are configured in the same fashion as TCP listeners,
%% including the option to control the choice of interface.
%%
%% {ssl_listeners, [5671]},
%% Log levels (currently just used for connection logging).
%% One of 'info', 'warning', 'error' or 'none', in decreasing order
%% of verbosity. Defaults to 'info'.
%%
%% {log_levels, [{connection, info}]},
%% Set to 'true' to perform reverse DNS lookups when accepting a
%% connection. Hostnames will then be shown instead of IP addresses
%% in rabbitmqctl and the management plugin.
%%
%% {reverse_dns_lookups, true},
%%
%% Security / AAA
%% ==============
%%
%% Configuring SSL.
%% See http://www.rabbitmq.com/ssl.html for full documentation.
%%
%% {ssl_options, [{cacertfile, "/path/to/testca/cacert.pem"},
%% {certfile, "/path/to/server/cert.pem"},
%% {keyfile, "/path/to/server/key.pem"},
%% {verify, verify_peer},
%% {fail_if_no_peer_cert, false}]},
%% Choose the available SASL mechanism(s) to expose.
%% The two default (built in) mechanisms are 'PLAIN' and
%% 'AMQPLAIN'. Additional mechanisms can be added via
%% plugins.
%%
%% See http://www.rabbitmq.com/authentication.html for more details.
%%
%% {auth_mechanisms, ['PLAIN', 'AMQPLAIN']},
%% Select an authentication database to use. RabbitMQ comes bundled
%% with a built-in auth-database, based on mnesia.
%%
%% {auth_backends, [rabbit_auth_backend_internal]},
%% Configurations supporting the rabbitmq_auth_mechanism_ssl and
%% rabbitmq_auth_backend_ldap plugins.
%%
%% NB: These options require that the relevant plugin is enabled.
%% See http://www.rabbitmq.com/plugins.html for further details.
%% The RabbitMQ-auth-mechanism-ssl plugin makes it possible to
%% authenticate a user based on the client's SSL certificate.
%%
%% To use auth-mechanism-ssl, add to or replace the auth_mechanisms
%% list with the entry 'EXTERNAL'.
%%
%% {auth_mechanisms, ['EXTERNAL']},
%% The rabbitmq_auth_backend_ldap plugin allows the broker to
%% perform authentication and authorisation by deferring to an
%% external LDAP server.
%%
%% For more information about configuring the LDAP backend, see
%% http://www.rabbitmq.com/ldap.html.
%%
%% Enable the LDAP auth backend by adding to or replacing the
%% auth_backends entry:
%%
%% {auth_backends, [rabbit_auth_backend_ldap]},
%% This pertains to both the rabbitmq_auth_mechanism_ssl plugin and
%% STOMP ssl_cert_login configurations. See the rabbitmq_stomp
%% configuration section later in this fail and the README in
%% https://github.com/rabbitmq/rabbitmq-auth-mechanism-ssl for further
%% details.
%%
%% To use the SSL cert's CN instead of its DN as the username
%%
%% {ssl_cert_login_from, common_name},
%%
%% Default User / VHost
%% ====================
%%
%% On first start RabbitMQ will create a vhost and a user. These
%% config items control what gets created. See
%% http://www.rabbitmq.com/access-control.html for further
%% information about vhosts and access control.
%%
%% {default_vhost, <<"/">>},
%% {default_user, <<"guest">>},
%% {default_pass, <<"guest">>},
%% {default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
%% Tags for default user
%%
%% For more details about tags, see the documentation for the
%% Management Plugin at http://www.rabbitmq.com/management.html.
%%
%% {default_user_tags, [administrator]},
%%
%% Additional network and protocol related configuration
%% =====================================================
%%
%% Set the default AMQP heartbeat delay (in seconds).
%%
%% {heartbeat, 600},
%% Set the max permissible size of an AMQP frame (in bytes).
%%
%% {frame_max, 131072},
%% Customising Socket Options.
%%
%% See (http://www.erlang.org/doc/man/inet.html#setopts-2) for
%% further documentation.
%%
%% {tcp_listen_options, [binary,
%% {packet, raw},
%% {reuseaddr, true},
%% {backlog, 128},
%% {nodelay, true},
%% {exit_on_close, false}]},
%%
%% Resource Limits & Flow Control
%% ==============================
%%
%% See http://www.rabbitmq.com/memory.html for full details.
%% Memory-based Flow Control threshold.
%%
%% {vm_memory_high_watermark, 0.4},
%% Fraction of the high watermark limit at which queues start to
%% page message out to disc in order to free up memory.
%%
%% {vm_memory_high_watermark_paging_ratio, 0.5},
%% Set disk free limit (in bytes). Once free disk space reaches this
%% lower bound, a disk alarm will be set - see the documentation
%% listed above for more details.
%%
%% {disk_free_limit, 50000000},
%% Alternatively, we can set a limit relative to total available RAM.
%%
%% {disk_free_limit, {mem_relative, 1.0}},
%%
%% Misc/Advanced Options
%% =====================
%%
%% NB: Change these only if you understand what you are doing!
%%
%% To announce custom properties to clients on connection:
%%
%% {server_properties, []},
%% How to respond to cluster partitions.
%% See http://www.rabbitmq.com/partitions.html for further details.
%%
%% {cluster_partition_handling, ignore},
%% Make clustering happen *automatically* at startup - only applied
%% to nodes that have just been reset or started for the first time.
%% See http://www.rabbitmq.com/clustering.html#auto-config for
%% further details.
%%
%% {cluster_nodes, {['[email protected]'], disc}},
%% Set (internal) statistics collection granularity.
%%
%% {collect_statistics, none},
%% Statistics collection interval (in milliseconds).
%%
%% {collect_statistics_interval, 5000},
%% Explicitly enable/disable hipe compilation.
%%
%% {hipe_compile, true}
]},
%% ----------------------------------------------------------------------------
%% Advanced Erlang Networking/Clustering Options.
%%
%% See http://www.rabbitmq.com/clustering.html for details
%% ----------------------------------------------------------------------------
{kernel,
[%% Provide an explicit port-range for inter-node communications.
%% See http://www.rabbitmq.com/clustering.html#firewall for further details.
%% Sets the minimum / maximum port numbers
%%
%% {inet_dist_listen_min, 10000},
%% {inet_dist_listen_max, 10005},
%% Sets the net_kernel tick time.
%% Please see http://erlang.org/doc/man/kernel_app.html and
%% http://www.rabbitmq.com/nettick.html for further details.
%%
%% {net_ticktime, 60}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ Management Plugin
%%
%% See http://www.rabbitmq.com/management.html for details
%% ----------------------------------------------------------------------------
{rabbitmq_management,
[%% Pre-Load schema definitions from the following JSON file. See
%% http://www.rabbitmq.com/management.html#load-definitions
%%
%% {load_definitions, "/path/to/schema.json"},
%% Log all requests to the management HTTP API to a file.
%%
%% {http_log_dir, "/path/to/access.log"},
%% Change the port on which the HTTP listener listens,
%% specifying an interface for the web server to bind to.
%% Also set the listener to use SSL and provide SSL options.
%%
%% {listener, [{port, 12345},
%% {ip, "127.0.0.1"},
%% {ssl, true},
%% {ssl_opts, [{cacertfile, "/path/to/cacert.pem"},
%% {certfile, "/path/to/cert.pem"},
%% {keyfile, "/path/to/key.pem"}]}]},
%% Configure how long aggregated data (such as message rates and queue
%% lengths) is retained. Please read the plugin's documentation in
%% https://www.rabbitmq.com/management.html#configuration for more
%% details.
%%
%% {sample_retention_policies,
%% [{global, [{60, 5}, {3600, 60}, {86400, 1200}]},
%% {basic, [{60, 5}, {3600, 60}]},
%% {detailed, [{10, 5}]}]}
]},
{rabbitmq_management_agent,
[%% Misc/Advanced Options
%%
%% NB: Change these only if you understand what you are doing!
%%
%% {force_fine_statistics, true}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ Shovel Plugin
%%
%% See http://www.rabbitmq.com/shovel.html for details
%% ----------------------------------------------------------------------------
{rabbitmq_shovel,
[{shovels,
[%% A named shovel worker.
%% {my_first_shovel,
%% [
%% List the source broker(s) from which to consume.
%%
%% {sources,
%% [%% URI(s) and pre-declarations for all source broker(s).
%% {brokers, ["amqp://user:[email protected]/my_vhost"]},
%% {declarations, []}
%% ]},
%% List the destination broker(s) to publish to.
%% {destinations,
%% [%% A singular version of the 'brokers' element.
%% {broker, "amqp://"},
%% {declarations, []}
%% ]},
%% Name of the queue to shovel messages from.
%%
%% {queue, <<"your-queue-name-goes-here">>},
%% Optional prefetch count.
%%
%% {prefetch_count, 10},
%% when to acknowledge messages:
%% - no_ack: never (auto)
%% - on_publish: after each message is republished
%% - on_confirm: when the destination broker confirms receipt
%%
%% {ack_mode, on_confirm},
%% Overwrite fields of the outbound basic.publish.
%%
%% {publish_fields, [{exchange, <<"my_exchange">>},
%% {routing_key, <<"from_shovel">>}]},
%% Static list of basic.properties to set on re-publication.
%%
%% {publish_properties, [{delivery_mode, 2}]},
%% The number of seconds to wait before attempting to
%% reconnect in the event of a connection failure.
%%
%% {reconnect_delay, 2.5}
%% ]} %% End of my_first_shovel
]}
%% Rather than specifying some values per-shovel, you can specify
%% them for all shovels here.
%%
%% {defaults, [{prefetch_count, 0},
%% {ack_mode, on_confirm},
%% {publish_fields, []},
%% {publish_properties, [{delivery_mode, 2}]},
%% {reconnect_delay, 2.5}]}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ Stomp Adapter
%%
%% See http://www.rabbitmq.com/stomp.html for details
%% ----------------------------------------------------------------------------
{rabbitmq_stomp,
[%% Network Configuration - the format is generally the same as for the broker
%% Listen only on localhost (ipv4 & ipv6) on a specific port.
%% {tcp_listeners, [{"127.0.0.1", 61613},
%% {"::1", 61613}]},
%% Listen for SSL connections on a specific port.
%% {ssl_listeners, [61614]},
%% Additional SSL options
%% Extract a name from the client's certificate when using SSL.
%%
%% {ssl_cert_login, true},
%% Set a default user name and password. This is used as the default login
%% whenever a CONNECT frame omits the login and passcode headers.
%%
%% Please note that setting this will allow clients to connect without
%% authenticating!
%%
%% {default_user, [{login, "guest"},
%% {passcode, "guest"}]},
%% If a default user is configured, or you have configured use SSL client
%% certificate based authentication, you can choose to allow clients to
%% omit the CONNECT frame entirely. If set to true, the client is
%% automatically connected as the default user or user supplied in the
%% SSL certificate whenever the first frame sent on a session is not a
%% CONNECT frame.
%%
%% {implicit_connect, true}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ MQTT Adapter
%%
%% See http://hg.rabbitmq.com/rabbitmq-mqtt/file/stable/README.md for details
%% ----------------------------------------------------------------------------
{rabbitmq_mqtt,
[%% Set the default user name and password. Will be used as the default login
%% if a connecting client provides no other login details.
%%
%% Please note that setting this will allow clients to connect without
%% authenticating!
%%
%% {default_user, <<"guest">>},
%% {default_pass, <<"guest">>},
%% Enable anonymous access. If this is set to false, clients MUST provide
%% login information in order to connect. See the default_user/default_pass
%% configuration elements for managing logins without authentication.
%%
%% {allow_anonymous, true},
%% If you have multiple chosts, specify the one to which the
%% adapter connects.
%%
%% {vhost, <<"/">>},
%% Specify the exchange to which messages from MQTT clients are published.
%%
%% {exchange, <<"amq.topic">>},
%% Specify TTL (time to live) to control the lifetime of non-clean sessions.
%%
%% {subscription_ttl, 1800000},
%% Set the prefetch count (governing the maximum number of unacknowledged
%% messages that will be delivered).
%%
%% {prefetch, 10},
%% TCP/SSL Configuration (as per the broker configuration).
%%
%% {tcp_listeners, [1883]},
%% {ssl_listeners, []},
%% TCP/Socket options (as per the broker configuration).
%%
%% {tcp_listen_options, [binary,
%% {packet, raw},
%% {reuseaddr, true},
%% {backlog, 128},
%% {nodelay, true}]}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ AMQP 1.0 Support
%%
%% See http://hg.rabbitmq.com/rabbitmq-amqp1.0/file/default/README.md
%% for details
%% ----------------------------------------------------------------------------
{rabbitmq_amqp1_0,
[%% Connections that are not authenticated with SASL will connect as this
%% account. See the README for more information.
%%
%% Please note that setting this will allow clients to connect without
%% authenticating!
%%
%% {default_user, "guest"},
%% Enable protocol strict mode. See the README for more information.
%%
%% {protocol_strict_mode, false}
]},
%% ----------------------------------------------------------------------------
%% RabbitMQ LDAP Plugin
%%
%% See http://www.rabbitmq.com/ldap.html for details.
%%
%% ----------------------------------------------------------------------------
{rabbitmq_auth_backend_ldap,
[%%
%% Connecting to the LDAP server(s)
%% ================================
%%
%% Specify servers to bind to. You *must* set this in order for the plugin
%% to work properly.
%%
%% {servers, ["your-server-name-goes-here"]},
%% Connect to the LDAP server using SSL
%%
%% {use_ssl, false},
%% Specify the LDAP port to connect to
%%
%% {port, 389},
%% Enable logging of LDAP queries.
%% One of
%% - false (no logging is performed)
%% - true (verbose logging of the logic used by the plugin)
%% - network (as true, but additionally logs LDAP network traffic)
%%
%% Defaults to false.
%%
%% {log, false},
%%
%% Authentication
%% ==============
%%
%% Pattern to convert the username given through AMQP to a DN before
%% binding
%%
%% {user_dn_pattern, "cn=${username},ou=People,dc=example,dc=com"},
%% Alternatively, you can convert a username to a Distinguished
%% Name via an LDAP lookup after binding. See the documentation for
%% full details.
%% When converting a username to a dn via a lookup, set these to
%% the name of the attribute that represents the user name, and the
%% base DN for the lookup query.
%%
%% {dn_lookup_attribute, "userPrincipalName"},
%% {dn_lookup_base, "DC=gopivotal,DC=com"},
%% Controls how to bind for authorisation queries and also to
%% retrieve the details of users logging in without presenting a
%% password (e.g., SASL EXTERNAL).
%% One of
%% - as_user (to bind as the authenticated user - requires a password)
%% - anon (to bind anonymously)
%% - {UserDN, Password} (to bind with a specified user name and password)
%%
%% Defaults to 'as_user'.
%%
%% {other_bind, as_user},
%%
%% Authorisation
%% =============
%%
%% The LDAP plugin can perform a variety of queries against your
%% LDAP server to determine questions of authorisation. See
%% http://www.rabbitmq.com/ldap.html#authorisation for more
%% information.
%% Set the query to use when determining vhost access
%%
%% {vhost_access_query, {in_group,
%% "ou=${vhost}-users,ou=vhosts,dc=example,dc=com"}},
%% Set the query to use when determining resource (e.g., queue) access
%%
%% {resource_access_query, {constant, true}},
%% Set queries to determine which tags a user has
%%
%% {tag_queries, []}
]}
].
| It's a standard Erlang list format. Delete the trailing comma from the LAST line you uncomment in each section - the entry right before a closing bracket must not end with a comma.
Right now you're basically making it look like
[{blah},]
| RabbitMQ | 27,692,045 | 15 |
Documentation says that rabbitmq has a config file at /etc/rabbitmq/rabbitmq.conf,
but I have nothing there, even though rabbitmq-server is running and consuming messages.
Where is my config file?
| It depends on how you installed RabbitMQ. The file is usually not present.
If you need it, you have to create it.
For example if you use the package:
rabbitmq-server-mac-standalone-3.4.2.tar.gz
You can find the example file:
etc/rabbitmq/rabbitmq.config.example
and not the file itself.
Using RABBITMQ_CONFIG_FILE you can specify the path to the rabbitmq.config file; to be sure, you can check this variable.
| RabbitMQ | 27,379,736 | 15 |
I'm currently working on a rabbit-amqp implementation project and use spring-rabbit to programmatically setup all my queues, bindings and exchanges. (spring-rabbit-1.3.4 and spring-framework versions 3.2.0)
The declarations in a Java configuration class or an XML-based configuration are both quite static in my opinion. I know how to set a more dynamic value (e.g. a name) for a queue, exchange
or binding like this:
@Configuration
public class serverConfiguration {
private String queueName;
...
@Bean
public Queue buildQueue() {
Queue queue = new Queue(this.queueName, false, false, true, getQueueArguments());
buildRabbitAdmin().declareQueue(queue);
return queue;
}
...
}
But I was wondering if it was possible to create an undefined number of Queue instances and
register them as beans, like a factory registering all its instances.
I'm not really familiar with the Spring @Bean annotation and its limitations, but I tried
@Configuration
public class serverConfiguration {
private String queueName;
...
@Bean
@Scope("prototype")
public Queue buildQueue() {
Queue queue = new Queue(this.queueName, false, false, true, getQueueArguments());
buildRabbitAdmin().declareQueue(queue);
return queue;
}
...
}
And to see if the multiple beans instances of Queue are registered I call:
Map<String, Queue> queueBeans = ((ListableBeanFactory) applicationContext).getBeansOfType(Queue.class);
But this will only return 1 mapping:
name of the method := the last created instance.
Is it possible to dynamically add beans during runtime to the SpringApplicationContext?
| You can add beans dynamically to the context:
context.getBeanFactory().registerSingleton("foo", new Queue("foo"));
but they won't be declared by the admin automatically; you will have to call admin.initialize() to force it to re-declare all the AMQP elements in the context.
You would not do either of these in @Beans, just normal runtime java code.
| RabbitMQ | 24,241,880 | 15 |
If I'm connected to RabbitMQ and listening for events using an EventingBasicConsumer, how can I tell if I've been disconnected from the server?
I know there is a Shutdown event, but it doesn't fire if I unplug my network cable to simulate a failure.
I've also tried the ModelShutdown event, and CallbackException on the model but none seem to work.
EDIT-----
The one I marked as the answer is correct, but it was only part of the solution for me. There is also HeartBeat functionality built into RabbitMQ. The server specifies it in the configuration file. It defaults to 10 minutes but of course you can change that.
The client can also request a different interval for the heartbeat by setting the RequestedHeartbeat value on the ConnectionFactory instance.
| I'm guessing that you're using the C# library? (but even so I think the others have a similar event).
You can do the following:
public class MyRabbitConsumer
{
private IConnection connection;
public void Connect()
{
connection = CreateAndOpenConnection();
connection.ConnectionShutdown += connection_ConnectionShutdown;
}
public IConnection CreateAndOpenConnection() { ... }
private void connection_ConnectionShutdown(IConnection connection, ShutdownEventArgs reason)
{
}
}
| RabbitMQ | 15,033,848 | 15 |
I would like to check if a Consumer/Worker is present to consume a Message I am about to send.
If there isn't any Worker, I would start some workers (both consumers and publishers are on a single machine) and then go about publishing Messages.
If there is a function like connection.check_if_has_consumers, I would implement it somewhat like this -
import pika
import workers
# code for publishing to worker queue
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
# if there are no consumers running (would be nice to have such a function)
if not connection.check_if_has_consumers(queue="worker_queue", exchange=""):
# start the workers in other processes, using python's `multiprocessing`
workers.start_workers()
# now, publish with no fear of your queues getting filled up
channel.queue_declare(queue="worker_queue", auto_delete=False, durable=True)
channel.basic_publish(exchange="", routing_key="worker_queue", body="rockin",
properties=pika.BasicProperties(delivery_mode=2))
connection.close()
But I am unable to find any function with check_if_has_consumers functionality in pika.
Is there some way of accomplishing this, using pika? or maybe, by talking to The Rabbit directly?
I am not completely sure, but I really think RabbitMQ would be aware of the number of consumers subscribed to different queues, since it does dispatch messages to them and accepts acks
I just got started with RabbitMQ 3 hours ago... any help is welcome...
here is the workers.py code I wrote, if it's any help....
import multiprocessing
import time
import pika
def start_workers(num=3):
"""start workers as non-daemon processes"""
for i in xrange(num):
process = WorkerProcess()
process.start()
class WorkerProcess(multiprocessing.Process):
"""
worker process that waits infinitly for task msgs and calls
the `callback` whenever it gets a msg
"""
def __init__(self):
multiprocessing.Process.__init__(self)
self.stop_working = multiprocessing.Event()
def run(self):
"""
worker method, open a channel through a pika connection and
start consuming
"""
connection = pika.BlockingConnection(
pika.ConnectionParameters(host='localhost')
)
channel = connection.channel()
channel.queue_declare(queue='worker_queue', auto_delete=False,
durable=True)
# don't give work to one worker guy until he's finished
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='worker_queue')
# do what `channel.start_consuming()` does but with stopping signal
while len(channel._consumers) and not self.stop_working.is_set():
channel.transport.connection.process_data_events()
channel.stop_consuming()
connection.close()
return 0
def signal_exit(self):
"""exit when finished with current loop"""
self.stop_working.set()
def exit(self):
"""exit worker, blocks until worker is finished and dead"""
self.signal_exit()
while self.is_alive(): # checking `is_alive()` on zombies kills them
time.sleep(1)
def kill(self):
"""kill now! should not use this, might create problems"""
self.terminate()
self.join()
def callback(channel, method, properties, body):
"""pika basic consume callback"""
print 'GOT:', body
# do some heavy lifting here
result = save_to_database(body)
print 'DONE:', result
channel.basic_ack(delivery_tag=method.delivery_tag)
EDIT:
I have to move forward so here is a workaround that I am going to take, unless a better approach comes along,
So, RabbitMQ has these HTTP management APIs; they work after you have turned on the management plugin, and in the middle of the HTTP API page there is
/api/connections - A list of all open connections.
/api/connections/name - An individual connection. DELETEing it will close the connection.
So, if I connect my Workers and my Producers with different connection names / users, I'll be able to check if the Worker connection is open... (there might be issues when a worker dies...)
will be waiting for a better solution...
EDIT:
just found this in the rabbitmq docs, but this would be hacky to do in python:
shobhit@oracle:~$ sudo rabbitmqctl -p vhostname list_queues name consumers
Listing queues ...
worker_queue 0
...done.
so I could do something like,
subprocess.call("echo password|sudo -S rabbitmqctl -p vhostname list_queues name consumers | grep 'worker_queue'")
hacky... still hope pika has some python function to do this...
Thanks,
| I was just looking into this as well. After reading through the source and docs I came across the following in channel.py:
@property
def consumer_tags(self):
"""Property method that returns a list of currently active consumers
:rtype: list
"""
return self._consumers.keys()
My own testing was successful. I used the following where my channel object is self._channel:
if len(self._channel.consumer_tags) == 0:
LOGGER.info("Nobody is listening. I'll come back in a couple of minutes.")
...
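Building on that, if you need the broker-side count (consumers attached by any connection, not just the tags registered on your own channel), a passive queue declare returns it. A rough sketch with a recent pika and the queue name from the question (treat this as illustrative, not as part of the original answer):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# passive=True only checks the queue (it raises a channel error if the queue
# does not exist); the Queue.DeclareOk reply carries message and consumer counts
declare_ok = channel.queue_declare(queue='worker_queue', passive=True)
if declare_ok.method.consumer_count == 0:
    print("Nobody is consuming from worker_queue")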
| RabbitMQ | 13,037,121 | 15 |
Would have loved to use Amazon SQS if it provided some semblance of FIFO access, but the sequence seems to be completely random.
Is there something that would provide me FIFO queuing as-a-cloud-service with the high availability of SQS?
If that is asking for too much - what would be the easiest way of putting together something with the above requirements in EC2? Or maybe in other words, what's the easiest highly available queuing solution that works in EC2?
Thanks for your insights!
| Update 2016-11-19
Amazon SQS has just gained FIFO Queues with Exactly-Once Processing & Deduplication:
Today we are making SQS even more powerful and flexible with support
for FIFO (first-in, first-out) queues. We are rolling out this new
type of queue in two regions now, and plan to make it available in
many others in early 2017.
These queues are designed to guarantee that messages are processed
exactly once, in the order that they are sent, and without duplicates.
[...]
[emphasis mine]
As emphasized, these new FIFO SQS queues will cover the use case at hand, but are not yet available in all SQS regions [initially only in US East (Ohio) and US West (Oregon)]. Also, the SQS FAQ for FIFO queues outlines notable differences between standard and FIFO queues that should be considered upfront, for example a throughput limit of 300 transactions per second.
Initial Answer
Would have loved to use Amazon SQS if it provided some semblance of
FIFO access, but the sequence seems to completely random.
While I haven't experienced completely random message ordering yet (this likely depends on the use case and especially the message volume though), there is no FIFO guarantee indeed, see the respective FAQ Does Amazon SQS provide first-in-first-out (FIFO) access to messages?:
No, Amazon SQS does not guarantee FIFO access to messages in Amazon
SQS queues, mainly because of the distributed nature of the Amazon
SQS. If you require specific message ordering, you should design your
application to handle it.
Given you expressed interest in hosted RabbitMQ as well as StormMQ, I might as well point you to other commercial offerings:
CloudAMQP
CloudAMQP is RabbitMQ as a Service, thus exactly what you have been asking for, given RabbitMQ supports the desired true FIFO message ordering (see Amazon SQS vs. RabbitMQ for a nice comparison).
According to their Plans & Prices, it is apparently only offered as an addon to first class platform providers though, hence you'll have to look into these in turn:
AppHarbor CloudAMQP Add-on
Heroku CloudAMQP Add-on
cloudControl CloudAMQP Add-on
IronMQ
IronMQ offers developers ready-to-use messaging with highly reliable delivery and cloud-optimized performance. It complies with today's expectations of a Software as a Service (SaaS) product, not least regarding an easy-to-understand and, especially, published pricing model. As rightfully criticized by Sleavely, Iron.io has seemingly dropped its former exemplary pricing model - see Alex Payne's How Not To Sell Software in 2012 for a nice rant and advice regarding this.
I've only tested it briefly myself so far, but I have been pretty pleased with the offered features and language integrations (see Client Libraries and Beanstalkd Support) - its competitive price tag and especially the free tier make it a good candidate for exploring a message queuing as a service solution, not least in combination with their second product IronWorker (An easy-to-use and massively scalable task queue [...]), which provides functionality not even available from AWS as of today.
FIFO message ordering
Unfortunately I haven't been able to figure out whether true FIFO is supported by IronMQ directly, consequently I actually doubt it - accordingly, you'd need to file a support request to verify this.
Evan Shaw from Iron.io confirmed that IronMQ provides FIFO message ordering in fact (thanks much).
| RabbitMQ | 10,375,137 | 15 |
We're working on an application that supports AMQP for queuing. Some of our clients are using Websphere MQ. I'm just wondering at a high level how interchangeable these two protocols are in terms of functionality. I'm using celery, which should allow me to abstract out the lower-level stuff as long as I can write a Websphere MQ backend. What I'm trying to figure out is how difficult a challenge this will be.
Does Websphere MQ provide a superset of AMQP's functionality? Does either one have any "features" that might make my life difficult?
| UPDATE 23 June 2015
IBM has announced MQ Light which is their implementation of AMQP. their Statement of Direction says that they intend to deliver features to allow programs designed to run on MQ Light to run in MQ at some point in the future but have yet to announce when that will be. MQ Light is in open Beta as of this writing.
Getting payloads moved between these systems will be relatively straightforward with a simple bridging app that reads off one system and writes to the other. They both have queues and topics and explicit routing is possible.
The interesting parts include such fun concepts as...
Mapping reply-to destinations. Especially dynamic reply-to destinations.
Transactionality
Any kind of routing more complicated than "pick up off this queue/topic, put to this one. For example a gateway that routes to multiple destinations based on queue name.
Message-level security.
Mapping identities for connection-level security.
No possibility of end-to-end message encryption.
So if all you need is on the order of "get from AMQP:QUEUEA, put to WMQ:QUEUEB" and transactionality is not important you should have a an easy time of it. Beyond that it depends on exactly what you want to do.
| RabbitMQ | 3,151,966 | 15 |
Greetings,
I'm evaluating some components for a multi-data center distributed system. We're going to be using message queues (via either RabbitMQ or Qpid) so agents can make asynchronous requests to other agents without worrying about addressing, routing, load balancing or retransmission.
In many cases, the agents will be interacting with components that were not designed for highly concurrent access, so locking and cross-agent coordination will be needed to avoid race conditions. Also, we'd like the system to automatically respond to agent or data center failures.
With the above use cases in mind, ZooKeeper seemed like it might be a good fit. But I'm wondering if trying to use both ZK and message queuing is overkill. It seems like what Zookeeper does could be accomplished by my own cluster manager using AMQP messaging, but that would be hard to get really right. On the other hand, I've seen some examples where ZooKeeper was used to implement message queuing, but I think RabbitMQ/Qpid are a more natural fit for that.
Has anyone out there used a combination like this?
Thanks in advance,
-Chris
| Coming into this late, but maybe it will be of some use. The primary consideration should be the performance characteristics of your system. ZooKeeper, like you said, is more than capable of implementing a task distribution system using a distributed queue, but zk currently is more optimized for reads than it is for writes (this only comes into play in the 1000s of ops per second range). If your throughput needs are less than this, then using just zk to implement your system would reduce the number of runtime components and make it simpler. Of course, you should always run your performance tests before deciding.
Distributed coordination is really hard to get right, so I would definitely recommend using zookeeper for that and not rolling your own.
| RabbitMQ | 2,669,573 | 15 |
Problem: I want to implement several PHP worker processes that listen on an MQ-server queue for asynchronous jobs. The problem now is that simply running these processes as daemons on a server doesn't really give me any level of control over the instances (load, status, locked up)... except maybe for dumping ps -aux.
Because of that I'm looking for a runtime environment of some kind that lets me monitor and control the instances, either on the system (process) level or on a higher layer (some kind of Java-style appserver).
Any pointers?
| Here's some code that may be useful.
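In short (a plain-prose summary of the script below): the parent process forks WANT_PROCESSORS children that each exec PROCESSOR_EXECUTABLE, keeps the pool topped up, and reacts to SIGTERM (shut everything down) and SIGHUP (kill and respawn the workers).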
<?php
define('WANT_PROCESSORS', 5);
define('PROCESSOR_EXECUTABLE', '/path/to/your/processor');
set_time_limit(0);
$cycles = 0;
$run = true;
$reload = false;
declare(ticks = 30);
function signal_handler($signal) {
switch($signal) {
case SIGTERM :
global $run;
$run = false;
break;
case SIGHUP :
global $reload;
$reload = true;
break;
}
}
pcntl_signal(SIGTERM, 'signal_handler');
pcntl_signal(SIGHUP, 'signal_handler');
function spawn_processor() {
$pid = pcntl_fork();
if($pid) {
global $processors;
$processors[] = $pid;
} else {
if(posix_setsid() == -1)
die("Forked process could not detach from terminal\n");
fclose(STDIN); // use the STDIN/STDOUT/STDERR constants; lowercase names are not defined
fclose(STDOUT);
fclose(STDERR);
pcntl_exec(PROCESSOR_EXECUTABLE);
die('Failed to fork ' . PROCESSOR_EXECUTABLE . "\n");
}
}
function spawn_processors() {
global $processors;
if($processors)
kill_processors();
$processors = array();
for($ix = 0; $ix < WANT_PROCESSORS; $ix++)
spawn_processor();
}
function kill_processors() {
global $processors;
foreach($processors as $processor)
posix_kill($processor, SIGTERM);
foreach($processors as $processor)
pcntl_waitpid($processor, $status); // pcntl_waitpid() requires a status variable passed by reference
unset($processors);
}
function check_processors() {
global $processors;
$valid = array();
foreach($processors as $processor) {
pcntl_waitpid($processor, $status, WNOHANG);
if(posix_getsid($processor))
$valid[] = $processor;
}
$processors = $valid;
if(count($processors) > WANT_PROCESSORS) {
for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
posix_kill($processors[$ix], SIGTERM);
for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
pcntl_waitpid($processors[$ix], $status);
} elseif(count($processors) < WANT_PROCESSORS) {
for($ix = count($processors); $ix < WANT_PROCESSORS; $ix++)
spawn_processor();
}
}
spawn_processors();
while($run) {
$cycles++;
if($reload) {
$reload = false;
kill_processors();
spawn_processors();
} else {
check_processors();
}
usleep(150000);
}
kill_processors();
pcntl_wait($status);
?>
| RabbitMQ | 752,214 | 15 |
I'm trying to start a Docker container with RabbitMQ; the image is downloaded, but the container does not start. I get the following message in the logs:
error: RABBITMQ_DEFAULT_PASS is set but deprecated
error: RABBITMQ_DEFAULT_USER is set but deprecated
error: RABBITMQ_DEFAULT_VHOST is set but deprecated
error: RABBITMQ_ERLANG_COOKIE is set but deprecated
error: deprecated environment variables detected
This problem appeared recently; before that, everything started and worked fine.
This is my docker-compose rabbit:
rabbit:
image: "rabbitmq:3-management"
hostname: "rabbit"
environment:
RABBITMQ_ERLANG_COOKIE: 'SWQOKODSQALRPCLNMEQGW'
RABBITMQ_DEFAULT_USER: 'user'
RABBITMQ_DEFAULT_PASS: 'bitnami'
RABBITMQ_DEFAULT_VHOST: '/'
ports:
- "15672:15672"
- "5672:5672"
labels:
NAME: "rabbitmq"
networks:
- postgres
| The latest stable docker image for RabbitMQ (3.9) has been recently updated and the official image page says:
As of RabbitMQ 3.9, all of the docker-specific variables listed below are deprecated and no longer used.
I have resolved the issue in the following way:
Create a rabbitmq.conf file in the same folder where the docker compose file is present
Put the variables in there, following the guidelines and naming convention from here. Something like:
default_vhost = /
default_user = user
default_pass = bitnami
In the docker compose file, instead of an environment section put a volumes section and mount the rabbitmq.conf file to the proper path (depending on OS, follow here). For a Linux container it will be like:
rabbit:
image: "rabbitmq:3-management"
hostname: "rabbit"
volumes:
- "./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf"
ports:
- "15672:15672"
- "5672:5672"
labels:
NAME: "rabbitmq"
networks:
- postgres
| RabbitMQ | 68,600,215 | 14 |
I am running RabbitMQ inside a container on localhost; my /etc/rabbitmq/rabbitmq.conf is pretty straightforward:
loopback_users.guest = false
listeners.tcp.default = 5672
management.tcp.port = 15672
management.disable_stats = false
I can access the management UI with no problem (as the default guest user), but I see no graphs or stats on the Overview tab. And when I open the Channels tab there is only this message:
Stats in management UI are disabled on this node
What can be the reason for this behaviour?
| I encountered exactly the same problem today.
If you are using RabbitMQ inside a container, make sure you are using the correct image, as stated on their website:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management.
The rabbitmq_management plugin is enabled by default.
I was using docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq. I had to manually turn on the management plugin and I encountered your problem.
The reason is that the default image disables metrics collector in the management_agent plugin:
# cat /etc/rabbitmq/conf.d/management_agent.disable_metrics_collector.conf
management_agent.disable_metrics_collector = true
For deployment, you could turn it on or off through the configuration file. The instructions can be found HERE.
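For reference, re-enabling it amounts to flipping that flag, for example by overriding the conf.d file shown above (or your own mounted rabbitmq.conf) with:
management_agent.disable_metrics_collector = false
(The exact file location may differ depending on how you run the image; the path above is the one from the default image.)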
| RabbitMQ | 63,708,061 | 14 |
What is the difference between ConcurrencyLimit and PrefetchCount in MassTransit? And what is the optimal configuration for them?
| PrefetchCount is a broker-level setting. It indicates to RabbitMQ (or Azure Service Bus) how many messages should be pushed to the client application so that they're ready for processing.
In addition, if a RabbitMQ consumer has prefetch space available, published messages are immediately written to the consumer, reducing overall message latency. Because of this, having prefetch space available on a consumer can improve overall message throughput.
ConcurrentMessageLimit is a client-level thing, that indicates the maximum number of messages that will be consumed concurrently. This may be due to resource limits, or to avoid database overloading, etc.
In cases where messages process very quickly, but cannot be processed too concurrently, a limit may be set using ConcurrentMessageLimit to avoid overloading the CPU. However, super fast message consumption increases the sensitivity to the time it takes to request more messages from the broker. So a higher prefetch count is recommended for fast message consumers.
For slow consumers, such as those that make external calls, where the consumer duration is more dependent on slow external systems, a higher concurrency limit can increase overall throughput. In this case, a higher prefetch count doesn't add much, but it should at least be as high as the concurrency limit.
If you're scaling out (competing consumer), then it's a tuning exercise to figure out how many instances, concurrent consumers, and prefetched messages make sense.
For example, we have a database consumer that can run up to 100 concurrent transactions on the SQL server before it starts to block, so we run a concurrency limit of 100 with a prefetch of 110.
| RabbitMQ | 57,258,424 | 14 |
I am trying to reproduce the first example of a Java publisher that can be found on RabbitMQ's main page.
First, I did it in Java and it worked fine. Then, I tried it on Android, and here is where the weird part comes in.
I have manually added the same jar libraries that I used in my Java program and that are suggested in RabbitMQ's tutorial. That is to say, amqp-client-5.4.1, slf4j-api-1.7.21 and slf4j-simple-1.7.22 are added in the /libs directory and then referenced in the build.gradle (module:app) with the commands implementation files('libs/amqp-client-5.4.1.jar') and so on.
Then, I added the required package dependencies in my MainActivity.java file without encountering any error. But when I add the piece of code that should publish the data, the different methods of the imported libraries are not found; for instance, factory appears as if it did not have the method setHost.
I attach below the code I am currently using.
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
public class MainActivity extends AppCompatActivity {
String QUEUE_NAME = "hello";
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("192.0.0.0"); //Marked as error
factory.setUsername("test");
factory.setPassword("test");
Connection connection;
Channel channel;
connection = factory.newConnection();
channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
String message = "Example3";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println(" [x] Sent '" + message + "'");
channel.close();
connection.close();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
}
}
Any idea why this code works fine in Java but these libraries fail to be correctly imported in Android?
| In Java you cannot have code outside of a method. All you can do is initialize the class members. IMHO it's not a jar import problem.
Try this:
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
public class MainActivity extends AppCompatActivity {
String QUEUE_NAME = "hello";
ConnectionFactory factory = new ConnectionFactory();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
init();
}
private void init() {
try {
factory.setHost("192.0.0.0");
factory.setUsername("test");
factory.setPassword("test");
Connection connection;
Channel channel;
connection = factory.newConnection();
channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
String message = "Example3";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println(" [x] Sent '" + message + "'");
channel.close();
connection.close();
} catch (IOException | TimeoutException e) {
throw new RuntimeException("Rabbitmq problem", e);
}
}
}
Coming back to your original concern, I don't see any reason why you manually download all your dependencies rather than using the built-in Gradle dependency management.
If you update the dependencies section in the build.gradle file, the required dependencies will be automatically downloaded. It's much easier to add/remove/upgrade dependencies.
dependencies {
compile group: 'com.rabbitmq', name: 'amqp-client', version: '5.4.1'
compile group: 'org.slf4j', name: 'slf4j-api', version: '1.7.21'
compile group: 'org.slf4j', name: 'slf4j-simple', version: '1.7.21'
}
| RabbitMQ | 52,487,851 | 14 |
I am dealing with communication between microservices.
For example (fictive example, just for the illustration):
Microservice A - Store Users (getUser, etc.)
Microservice B - Store Orders (createOrder, etc.)
Now if I want to add a new Order from the Client app, I need to know the user's address. So the request would be like this:
Client -> Microservice B (createOrder for userId 5) -> Microservice A (getUser with id 5)
Microservice B will create the order with details (address) from the User Microservice.
PROBLEM TO SOLVE: How to effectively deal with communication between microservice A and microservice B, given that we have to wait until the response comes back?
OPTIONS:
Use RestAPI,
Use AMQP, like RabbitMQ and deal with this issue via RPC. (https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html)
I don't know which will be better for performance. Is a call faster via RabbitMQ or the REST API? What is the best solution for a microservice architecture?
| In your case using direct REST calls should be fine.
Option 1 Use Rest API :
When you need synchronous communication. For example, your case. This option is suitable.
Option 2 Use AMQP :
When you need asynchronous communication. For example, when your order service creates an order you may want to notify the product service to reduce the product quantity. Or you may want to notify the user service that the order for the user has been successfully placed.
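As a tiny illustration of option 2 (a sketch only; the exchange and payload names are made up, not taken from the question), the order service could publish an "order placed" event to a fanout exchange and each interested service would bind its own queue to it:
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Fanout: every bound queue (product service, user service, ...) gets a copy
channel.exchange_declare(exchange='order_events', exchange_type='fanout')

event = {'event': 'order_placed', 'order_id': 42, 'user_id': 5}
channel.basic_publish(exchange='order_events', routing_key='',
                      body=json.dumps(event),
                      properties=pika.BasicProperties(delivery_mode=2))
connection.close()
The order service does not wait for anyone; each consumer processes the event at its own pace.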
I highly recommend having a look at http://microservices.io/patterns/index.html
| RabbitMQ | 50,454,109 | 14 |
Tl;dr: "How can I push a message through a bunch of asynchronous, unordered microservices and know when that message has made it through each of them?"
I'm struggling to find the right messaging system/protocol for a specific microservices architecture. This isn't a "which is best" question, but a question about what my options are for a design pattern/protocol.
I have a message on the beginning queue. Let's say a RabbitMQ message with serialized JSON
I need that message to go through an arbitrary number of microservices
Each of those microservices are long running, must be independent, and may be implemented in a variety of languages
The order of services the message goes through does not matter. In fact, it should not be synchronous.
Each service can append data to the original message, but that data is ignored by the other services. There should be no merge conflicts (each service writes a unique key). No service will change or destroy data.
Once all the services have had their turn, the message should be published to a second RabbitMQ queue with the original data and the new data.
The microservices will have no other side-effects. If this were all in one monolithic application (and in the same language), functional programming would be perfect.
So, the question is, what is an appropriate way to manage that message through the various services? I don't want to have to do one at a time, and the order isn't important. But, if that's the case, how can the system know when all the services have had their whack and the final message can be written onto the ending queue (to have the next batch of services have their go).
The only, semi-elegant solution I could come up with was
to have the first service that encounters a message write that message to common storage (say mongodb)
Have each service do its thing, mark that it has completed for that message, and then check to see if all the services have had their turn
If so, that last service would publish the message
But that still requires each service to be aware of all the other services and requires each service to leave its mark. Neither of those is desired.
I am open to a "Shepherd" service of some kind.
I would appreciate any options that I have missed, and am willing to concede that there may be a better, fundamental design.
Thank you.
| There are two methods of managing a long running process (or a process involving multiple microservices): orchestration and choreography. There are a lot of articles describing them.
Long story short: in orchestration you have a microservice that keeps track of the process status, and in choreography all the microservices know where to send the message next and/or when the process is done.
This article explains the benefits and tradeoffs of the two styles.
Orchestration
Orchestration Benefits
Provides a good way for controlling the flow of the application when there is synchronous processing. For example, if Service A needs to complete successfully before Service B is invoked.
Orchestration Tradeoffs
Couples the services together creating dependencies. If service A is down, service B and C will never be called.
If there is a central shared instance of the orchestrator for all requests, then the orchestrator is a single point of failure. If it goes down, all processing stops.
Leverages synchronous processing that blocks requests. In this example, the total end-to-end processing time is the sum of time it takes for Service A + Service B + Service C to be called.
Choreography
Choreography Benefits
Enables faster end-to-end processing as services can be executed in parallel/asynchronously.
Easier to add/update services as they can be plugged in/out of the event stream easily.
Aligns well with an agile delivery model as teams can focus on particular services instead of the entire application.
Control is distributed, so there is no longer a single orchestrator serving as a central point of failure.
Several patterns can be used with a reactive architecture to provide additional benefits. For example, Event Sourcing is when the Event Stream stores all of the events and enables event replay. This way, if a service went down while events were still being produced, when it came back online it could replay those events to catch back up. Also, Command Query Responsibility Segregation (CQRS) can be applied to separate out the read and write activities. This enables each of these to be scaled independently. This comes in handy if you have an application that is read-heavy and light on writes or vice versa.
Choreography Tradeoffs
Async programming is often a significant mindshift for developers. I tend to think of it as similar to recursion, where you can’t figure out how code will execute by just looking at it, you have to think through all of the possibilities that could be true at a particular point in time.
Complexity is shifted. Instead of having the flow control centralized in the orchestrator, the flow control is now broken up and distributed across the individual services. Each service would have its own flow logic, and this logic would identify when and how it should react based on specific data in the event stream.
| RabbitMQ | 47,918,407 | 14 |
What are the advantages of using NServiceBus + RabbitMQ over pure RabbitMQ?
I guess it provides additional infrastructure. But what else?
| You can definitely just use pure RabbitMQ. You just have to keep a couple things in mind.
Warning: This answer will be a bit extremely tongue-in-cheek.
First you should read Enterprise Integration Patterns cover to cover and make sure you understand it well. It is 736 pages, and a bit dry, but extremely useful information. It also wouldn't hurt to become an expert in all the peculiarities of RabbitMQ.
Then you just have to decide how you'll define messages, how to define message handlers, how to send messages and publish events. Before you get too far you'll want a good logging infrastructure. You'll need to create a message serializer and infrastructure for message routing. You'll need to include a bunch of infrastructure-related metadata with the content of each business message. You'll want to build a message dequeuing strategy that performs well and uses broker connections efficiently, keeping concurrency needs in mind.
Next you'll need to figure out how to retry messages automatically when the handling logic fails, but not too many times. You have to have a strategy for dealing with poison messages, so you'll need to move them aside so your handling logic doesn't get jammed preventing valid messages from being processed. You'll need a way to show those messages that have failed and figure out why, so you can fix the problem. You'll want some sort of alerting options so you know when that happens. It would be nice if that poison message display also showed you where that message came from and what the exception was so you don't need to go digging through log files. After that you'll need to be able to reroute the poison messages back into the queue to try again. In the event of a bad deployment you might have a lot of failed messages, so it would be really nice if you didn't have to retry the messages one at a time.
Since you're using RabbitMQ, there are no transactions on the message broker, so ghost messages and duplicate entities are very real problems. You'll need to code all message handling logic with idempotency in mind or your RabbitMQ messages and database entities will begin to get inconsistent. Alternatively you could design infrastructure to mimic distributed transactions by storing outgoing messaging operations in your business database and then executing the message dispatch operations separately. That results in duplicate messages (by design) so you'll need to deduplicate messages as they come in, which means you need well a well-defined strategy for consistent message IDs across your system. Be careful, as anything dealing with transactions and concurrency can be extremely tricky.
You'll probably want to do some workflow type stuff, where an incoming message starts a process that's essentially a message-driven state machine. Then you can do things like trigger an action once 2 required messages have been received. You'll need to design a storage system for that data. You'll probably also need a way to have delayed messages, so you can do things like the buyer's remorse pattern. RabbitMQ has no way to have an arbitrary delay on a message, so you'll have to come up with a way to implement that.
You'll probably want some metrics and performance counters on this system to know how it's performing. You'll want some way to be able to have tests on your message handling logic, so if you need to swap out some dependencies to make that work you might want to integrate a dependency injection framework.
Because these systems are decentralized by nature it can get pretty difficult to accurately picture what your system looks like. If you send a copy of every message to a central location, you can write some code to stitch together all the message conversations, and then you can use that data to build message flow diagrams, sequence diagrams, etc. This kind of living documentation based on live data can be critical for explaining things to managers or figuring out why a process isn't working as expected.
Speaking of documentation, make sure you write a whole lot of it for your message queue wrapper, otherwise it will be pretty difficult for other developers to help you maintain it. Or if someone else on your team is writing it, you'll be totally screwed when they get a different job and leave the company. You're also going to want a ton of unit tests on the RabbitMQ wrapper you've built. Infrastructure code like this should be rock-solid. You don't want losing a message to result in lost sales or anything like that.
So if you keep those few things in mind, you can totally use pure RabbitMQ without NServiceBus.
Hopefully, when you're done, your boss won't decide that you need to switch from RabbitMQ to Azure Service Bus or Amazon SQS.
| RabbitMQ | 47,060,893 | 14 |
I am using RabbitMQ together with Spring's RabbitTemplate.
When sending messages to queues using the template send methods, I want the queue to automatically be created/declared if it is not already exists.
It is very important since, according to our business logic, queue names are generated at runtime and I cannot declare them in advance.
Previously we have used JmsTemplate and any call to send or receive automatically created the queue.
| You can use a RabbitAdmin to automatically declare the exchange, queue, and binding. Check out this thread for more detail. This forum post is also somewhat related to your scenario. I have not tried Spring with AMQP myself, but I believe this would do it.
/**
 * Required for executing administration functions against an AMQP Broker
*/
@Bean
public AmqpAdmin amqpAdmin() {
return new RabbitAdmin(connectionFactory());
}
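For the runtime-generated queue names mentioned in the question, here is a hedged sketch (not from the original answer) of declaring a queue on the fly right before sending; the class, the durability flag, and the use of the default exchange are just illustrative assumptions:
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RuntimeQueueSender {

    private final AmqpAdmin amqpAdmin;
    private final RabbitTemplate rabbitTemplate;

    public RuntimeQueueSender(AmqpAdmin amqpAdmin, RabbitTemplate rabbitTemplate) {
        this.amqpAdmin = amqpAdmin;
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(String queueName, Object payload) {
        // Declaring a queue that already exists (with the same arguments) is a no-op,
        // so it is safe to call this right before every send
        amqpAdmin.declareQueue(new Queue(queueName, true));
        // Publish via the default exchange, where the routing key is the queue name
        rabbitTemplate.convertAndSend(queueName, payload);
    }
}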
Keep coding !
| RabbitMQ | 46,872,274 | 14 |
Basic.nack provides the facility to return a negative acknowledgement for one or multiple messages.
Basic.reject provides the facility to return a negative acknowledgement for only one message.
Do we have any use case where we definitely need basic reject?
| The answer by @cantSleepNow is correct, I would also like to add one more difference which is in their default behaviour.
By default, nack will put the message back in the queue for later handling. You can change the setting to not re-queue with nack.
With reject, by default, the message is not re-queued; RabbitMQ will drop the message from the queue entirely.
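For illustration (not part of the original answer), here is a small sketch with the RabbitMQ Java client, where the requeue flag is explicit for both calls and the multiple flag is what reject lacks; the queue name and the routing logic are made up:
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class NackVsReject {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("demo.queue", false, false, false, null);

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            if (body.contains("bad")) {
                // reject: exactly one message; requeue=false drops (or dead-letters) it
                channel.basicReject(tag, false);
            } else {
                // nack: multiple=true would cover every unacked delivery up to this tag;
                // requeue=true puts the message(s) back on the queue
                channel.basicNack(tag, false, true);
            }
        };
        channel.basicConsume("demo.queue", false, onMessage, consumerTag -> { });
    }
}
Note that nacking with requeue=true in a tight loop like this causes immediate redelivery; in practice you would usually dead-letter the message instead.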
| RabbitMQ | 43,406,639 | 14 |
How can I implement a queue with configurable x-message-ttl?
I have a queue with x-message-ttl set to 1 minute and I want to change it to 2 minute at runtime. How can this be achieved?
I already tried declaring the queue again with x-message-ttl = 2 minutes, but this neither changes the TTL nor publishes the message.
| If you create a queue with the x-message-ttl argument you can't change it; you have to remove and recreate the queue.
but you can use the policies:
Create queues without ttl arguments
create the policy, for example:
rabbitmqctl set_policy TTL ".*" "{""message-ttl"":120000}" --apply-to queues
In this way you can change the message TTL value at runtime (120000 ms = 2 minutes)
| RabbitMQ | 42,202,437 | 14 |
I am trying to start RabbitMQ service on my local Windows laptop but I keep getting this error:
I first downloaded erlang (OTP 19.0 Windows 64-bit Binary File) from here: http://www.erlang.org/downloads.
Then I downloaded RabbitMQ from here: https://www.rabbitmq.com/install-windows.html
Erlang seems to have installed correctly - I don't see any errors in the logs. RabbitMQ shows this message in the installation logs:
Installing RabbitMQ service...
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
C:\Program Files\erl8.0\erts-8.0\bin\erlsrv: Service RabbitMQ added to system.
Error spawning C:\Program Files\erl8.0\erts-8.0\bin\epmd -daemon (error 0)
Starting RabbitMQ service...
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
C:\Program Files\erl8.0\erts-8.0\bin\erlsrv: Failed to start service RabbitMQ.
Error: The process terminated unexpectedly.
I uninstalled both, restarted my laptop and reinstalled, but it still doesn't work.
I also added Firewall Rules but still no luck. The 2nd firewall rule is for allowing connection for these ports: 4369, 25672, 5672, 5671, 15672, 61613, 61614, 1883, 8883
| I think I had the same problem which lies in the error
The filename, directory name, or volume label syntax is incorrect.
... and that may be because, when Erlang was installed, it for some reason set HOMEDRIVE to u: or something silly.
From the command line run:
SET HOMEDRIVE=C:
Then try to run your rabbitmq-service again. You may have to stop, remove, install, start it again.
rabbitmq-service stop
rabbitmq-service remove
rabbitmq-service install
rabbitmq-service start
and please make sure that you have copied the .erlang.cookie from c:\Windows to the root of your user folder ( C:\Users\{user}\ )
| RabbitMQ | 38,900,125 | 14 |
I'm developing a distributed application with the help of MassTransit and RabbitMQ.
I have to provide the ability to generate a report on a web page, without reloading the page, by clicking a button; I also need to call a Windows service for data preparation (the service handles each request in 30 sec - 1 min).
My first try based on this sample: https://github.com/MassTransit/Sample-RequestResponse
[HttpPost]
public async Task<HttpStatusCodeResult> GenerateReport(string someJsonData)
{
var serviceAddress = new Uri(ConfigurationManager.AppSettings["BaseLineRecordService"]);
var client = this.Bus.CreateRequestClient<ICreateReportRequest, ICreateReportResponse>(serviceAddress, TimeSpan.FromHours(1));
ICreateReportResponse response = await client.Request(new CreateReportRequest());
reportHub.ShowRepordData(response); // Update data by SingleR
return new HttpStatusCodeResult(200);
}
But as I understand it, that's not the best approach, because I'm keeping the connection open during the entire data preparation.
I've read many articles and I have found three ways. Which way is preferred?
1) Like on this article http://www.maldworth.com/2015/07/19/signalrchat-with-masstransit-v3/
2) Same as the first, but calling a REST API instead of consumers on the IIS side
3) Idea from this article http://weblog.west-wind.com/posts/2013/Sep/04/SelfHosting-SignalR-in-a-Windows-Service
| I do this using hubs from SignalR, and observe events at the server using regular MassTransit consumers. When events are observed, I trigger the event handler, which dispatches using the Hub to connected clients. That way, the events are pushed down to the browser instantly without leaving an async call pending at the server in a controller.
You can see this in Fooidity which does something similar:
https://github.com/phatboyg/Fooidity/blob/develop/src/Fooidity.Management.Web/Hubs/ApplicationHubEventHandler.cs#L18
Using the GlobalHost to resolve the Hub, then raising the method on the hub. The event context can be discriminated by using groups, which are a SignalR feature that is handled per node. So as long as every node is observing the event, clients can be connected to any hub and get notified. This works nicely for load balancing, without having to use a heavy cluster backplane for SignalR -- since RabbitMQ is super lightweight for event distribution.
You can do it with non-durable queues as well, which makes it even faster -- since a server reset/connection drop is more likely than a broker crash.
Authentication is handled inside the ApplicationHub, as shown in the adjacent source file:
https://github.com/phatboyg/Fooidity/blob/develop/src/Fooidity.Management.Web/Hubs/ApplicationHub.cs
Check it out, hopefully it helps.
| RabbitMQ | 37,457,140 | 14 |
I am very new to Celery and here is the question I have:
Suppose I have a script that is constantly supposed to fetch new data from DB and send it to workers using Celery.
tasks.py
# Celery Task
from celery import Celery
app = Celery('tasks', broker='amqp://guest@localhost//')
@app.task
def process_data(x):
# Do something with x
pass
fetch_db.py
# Fetch new data from DB and dispatch to workers.
from tasks import process_data
from time import sleep

while True:
    # Run DB query here to fetch new data from DB into fetched_data
    process_data.delay(fetched_data)
    sleep(30)
Here is my concern: the data is being fetched every 30 seconds. process_data() function could take much longer and depending on the amount of workers (especially if too few) the queue might get throttled as I understand.
I cannot increase number of workers.
I can modify the code to refrain from feeding the queue when it is full.
The question is how do I set queue size and how do I know it is full? In general, how to deal with this situation?
| You can set RabbitMQ's x-max-length when pre-declaring the queue using kombu.
For example:
import time
from celery import Celery
from kombu import Queue, Exchange
class Config(object):
BROKER_URL = "amqp://guest@localhost//"
CELERY_QUEUES = (
Queue(
'important',
exchange=Exchange('important'),
routing_key="important",
queue_arguments={'x-max-length': 10}
),
)
app = Celery('tasks')
app.config_from_object(Config)
@app.task(queue='important')
def process_data(x):
pass
or using Policies
rabbitmqctl set_policy Ten "^one-meg$" '{"max-length-bytes":1000000}' --apply-to queues
| RabbitMQ | 35,231,690 | 14 |
Where should you update celery settings? On the remote worker or the sender?
For example, I have an API using Django and Celery. The API sends remote jobs to my remote workers via a broker (RabbitMQ). The workers are running a python script (not using Django); sometimes these workers spawn sub-tasks.
I've created celery settings on both sides (sender and worker) i.e. they both need the setting BROKER_URL. However, say I want to add the setting CELERY_ACKS_LATE = True, which end do I add this setting to? Each of the remote workers or the sender (API)?
Both the API and the remote workers connect to the same Broker, each start celery differently. The API creates a celery instance via Django __init__.py and the workers start celery via supervisor i.e. celery -A tasks worker -l info
| The Django Celery settings affect only workers running on the Django server itself.
If all your workers are remote workers (the way I do it), then on the sender side all you need is the configuration necessary to submit a task to the task queue.
All the other settings need to be set on the remote workers.
For the tasks, on the sender side, all I need to do is define the task signature like this:
@app.task(name='report_task', bind=True)  # bind=True so the task instance is passed in as 'self'
def reportTask(self, link):
    pass
Then on the worker side, you need to create a new Celery app with the same name, pointing to the same broker; any other Celery settings need to be declared on the remote workers.
Then implement the task logic on the remote workers (you can have different task logic per worker as long as they have the same task name and function arguments)
| RabbitMQ | 35,117,752 | 14 |
I have three clients, each with their own RabbitMQ instance, and I have an application (let's call it appA) that has its own RabbitMQ instance. The three client applications (app1, app2, app3) want to make use of a service on appA.
The service on appA requires RPC communication, app1, app2 and app3 each has a booking.request queue and a booking.response queue.
With the shovel plugin, I can forward all booking.request messages from app1-3 to appA:
Shovel1
virtualHost=appA,
name=booking-request-shovel,
sourceURI=amqp://userForApp1:password@app1-server/vhostForApp1
queue=booking.request
destinationURI=amqp://userForAppA:password@appA-server/vhostForAppA
queue=booking.request
setup another shovel to get booking requests from app2 and app3 to appA in the same way as above.
Now appA will respond to the request on the booking.response queue, I need the booking response message on rabbitMQ-appA to go back to the correct booking.response queue either on app1, app2 or app3, but not to all of them - how do I setup a shovel / federated queue on rabbitMQ-appA that will forward the response back to the correct rabbitMQ (app1, app2, app3) that is expecting a response in their own booking.response queue?
All these apps are using spring-amqp (in case that's relevant)
Alternatively, I could setup a rabbitMQ template in Spring that listens to multiple rabbitMQ queues and consumes from each of them.
From the docs, this what a typical consumer looks like:
<rabbit:listener-container connection-factory="rabbitConnectionFactory">
<rabbit:listener queues="some.queue" ref="somePojo" method="handle"/>
</rabbit:listener-container>
Is it possible to specify multiple connection factories in order to do this even if the connection factories are to the same instance of RabbitMQ, but just different vhosts:
Update:
Based on Josh's answer, I'd have multiple connection factories:
<rabbit:connection-factory
id="connectionFactory1"
port="${rabbit.port1}"
virtual-host="${rabbit.virtual1}"
host="${rabbit.host1}"
username="${rabbit.username1}"
password="${rabbit.password1}"
connection-factory="nativeConnectionFactory" />
<rabbit:connection-factory
id="connectionFactory2"
port="${rabbit.port2}"
virtual-host="${rabbit.virtual2}"
host="${rabbit.host2}"
username="${rabbit.username2}"
password="${rabbit.password2}"
connection-factory="nativeConnectionFactory" />
Then I would use the SimpleRoutingConnectionFactory to wrap both connection-factories:
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.SimpleRoutingConnectionFactory">
<property name="targetConnectionFactories">
<map>
<entry key="#{connectionFactory1.virtualHost}" ref="connectionFactory1"/>
<entry key="#{connectionFactory2.virtualHost}" ref="connectionFactory2"/>
</map>
</property>
</bean>
Now when I declare my rabbitMQ template, I would point it to the SimpleRoutingConnectionFactory instead of the individual connection factories:
<rabbit:template id="template" connection-factory="connectionFactory" />
... and then use the template as I would normally use it ...
<rabbit:listener-container
connection-factory="connectionFactory"
channel-transacted="true"
requeue-rejected="true"
concurrency="${rabbit.consumers}">
<rabbit:listener queues="${queue.booking}" ref="TransactionMessageListener" method="handle" />
</rabbit:listener-container>
// and messages are consumed from both rabbitMQ instances
... and ...
@Autowired
private AmqpTemplate template;
template.send(getExchange(), getQueue(), new Message(gson.toJson(message).getBytes(), properties));
// and message publishes to both queues
Am I correct?
| Take a look at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory. It will allow you to create multiple connection factories for different vhosts or different RabbitMQ instances. We are using it for a multi-tenant RabbitMQ application.
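As a hedged Java-config sketch of the same idea (the hosts, vhost names and lookup keys below are placeholders, and the exact API may differ between Spring AMQP versions):
import java.util.HashMap;
import java.util.Map;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.connection.SimpleResourceHolder;
import org.springframework.amqp.rabbit.connection.SimpleRoutingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RoutingFactoryExample {

    public static RabbitTemplate buildTemplate() {
        CachingConnectionFactory vhost1 = new CachingConnectionFactory("somehost");
        vhost1.setVirtualHost("vhostForApp1");
        CachingConnectionFactory vhost2 = new CachingConnectionFactory("somehost");
        vhost2.setVirtualHost("vhostForApp2");

        Map<Object, ConnectionFactory> targets = new HashMap<>();
        targets.put("vhostForApp1", vhost1);
        targets.put("vhostForApp2", vhost2);

        SimpleRoutingConnectionFactory routing = new SimpleRoutingConnectionFactory();
        routing.setTargetConnectionFactories(targets);
        routing.setDefaultTargetConnectionFactory(vhost1);
        return new RabbitTemplate(routing);
    }

    public static void sendTo(RabbitTemplate template, String vhostKey, String queue, Object payload) {
        // Bind the lookup key for the current thread so the template picks the right target factory
        SimpleResourceHolder.bind(template.getConnectionFactory(), vhostKey);
        try {
            template.convertAndSend(queue, payload);
        } finally {
            SimpleResourceHolder.unbind(template.getConnectionFactory());
        }
    }
}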
| RabbitMQ | 28,520,784 | 14 |
On my local machine I can have:
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
for both scripts (send.py and recv.py) in order to establish proper communication, but what about establishing communication from 12.23.45.67 to 132.45.23.14? I know about all the parameters that ConnectionParameters() takes, but I am not sure what to pass for the host or what to pass for the client. It would be appreciated if someone could give an example of a host script and a client script.
| The first step is to add another account to your RabbitMQ server. To do this in Windows...
open a command prompt window (windows key->cmd->enter)
navigate to the "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.2\sbin" directory ( type "cd \Program Files\RabbitMQ Server\rabbitmq_server-3.6.2\sbin" and press enter )
enable management plugin (type "rabbitmq-plugins enable rabbitmq_management" and press enter)
open a browser window to the management console & navigate to the admin section (http://localhost:15672/#/users with credentials "guest" - "guest")
add a new user (for example "the_user" with password "the_pass")
give that user permission to virtual host "/" (click user's name then click "set permission")
Now if you modify the connection info as done in the following modification of send.py you should find success:
#!/usr/bin/env python
import pika
credentials = pika.PlainCredentials('the_user', 'the_pass')
parameters = pika.ConnectionParameters('132.45.23.14',
5672,
'/',
credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
routing_key='hello',
body='Hello W0rld!')
print(" [x] Sent 'Hello World!'")
connection.close()
Hope this helps
| RabbitMQ | 27,805,086 | 14 |
I have built a WebSockets server that acts as a chat message router (i.e. receiving messages from clients and pushing them to other clients according to a client ID).
It is a requirement that the service be able to scale to handle many millions of concurrent open socket connections, and I wish to be able to horizontally scale the server.
The architecture I have had in mind is to put the websocket server nodes behind a load balancer, which will create a problem because clients connected to different nodes won't know about each other. While both clients A and B enter via the LoadBalancer, client A might have an open connection with node 1 while client B is connected to node 2 - each node holds it's own dictionary of open socket connections.
To solve this problem, I was thinking of using some MQ system like ZeroMQ or RabbitMQ. All of the websocket server nodes will be subscribers of the MQ server, and when a node gets a request to route a message to a client which is not in the local connections dictionary, it will pub-lish a message to the MQ server, which will tell all the sub-scriber nodes to look for this client and issue the message if it's connected to that node.
Q1: Does this architecture make sense?
Q2: Is the pub-sub pattern described here really what I am looking for?
| ZeroMQ would be my option - both architecture-wise & performance-wise
-- fast & low latency ( can measure your implementation performance & overheads, down to sub [usec] scale )
-- broker-less ( does not introduce another point-of-failure, while itself can have { N+1 | N+M } self-healing architecture )
-- smart Formal Communication Pattern primitives ready to be used ( PUB / SUB is the least cardinal one )
-- fair-queue & load balancing architectures built-in ( invisible for external observer )
-- many transport Classes for server-side internal multi-process / multi-threading distributed / parallel processing
-- ready for almost linear scalability
Adaptive node re-discovery
This is a bit more complex subject. Your intention to create a feasible architecture will have to drill down into more details to solve.
Node authentication vs. peer-to-peer messaging
Node (re)-discovery vs. legal & privacy issues
Node based autonomous self-organising Agents vs. needs for central policy enforcement
| RabbitMQ | 25,701,094 | 14 |
My RabbitMQ server went down and it is impossible to restart it. I tried to restart, reinstall it... I still don't understand the error.
This is what I get
BOOT FAILED
===========
Error description:
{could_not_start,rabbit,
{bad_return,
{{rabbit,start,[normal,[]]},
{'EXIT',
{rabbit,failure_during_boot,
{badmatch,
{error,
{{{function_clause,
[{rabbit_queue_index,journal_minus_segment1,
[{no_pub,del,no_ack},
{{<<115,254,171,167,171,226,110,171,251,38,217,145,3,12,215,151>>,
{message_properties,1409712663123302,false},
true},
del,ack}],
[{file,"src/rabbit_queue_index.erl"},{line,989}]},
{rabbit_queue_index,'-journal_minus_segment/2-fun-0-',4,
[{file,"src/rabbit_queue_index.erl"},{line,973}]},
{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1675}]},
{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1669}]},
{rabbit_queue_index,'-recover_journal/1-fun-0-',1,
[{file,"src/rabbit_queue_index.erl"},{line,701}]},
{lists,map,2,[{file,"lists.erl"},{line,1224}]},
{rabbit_queue_index,segment_map,2,
[{file,"src/rabbit_queue_index.erl"},{line,819}]},
{rabbit_queue_index,recover_journal,1,
[{file,"src/rabbit_queue_index.erl"},{line,693}]}]},
{gen_server2,call,[<0.186.0>,out,infinity]}},
{child,undefined,msg_store_persistent,
{rabbit_msg_store,start_link,
[msg_store_persistent,
"/var/lib/rabbitmq/mnesia/rabbit@host",[],
{#Fun<rabbit_queue_index.2.132977059>,
{start,
[{resource,<<"/">>,queue,
<<"photos_to_be_tagged_user_36">>}]}}]},
transient,4294967295,worker,
[rabbit_msg_store]}}}}}}}}}
Can anyone help with this?
Thanks a lot
| For anyone else looking for this error:
rabbit,failure_during_boot,
{badmatch,
{error,
{{{function_clause,
[{rabbit_queue_index,journal_minus_segment1, ...
I just dealt with the same issue and what helped was going to the mnesia directories and deleting the queues and msg_store_transient directories.
From what I understand, what happens is that you end up with a bad queue db (for whatever reason - e.g. a sudden power failure, or some other process touching the files) which RabbitMQ can't parse, and so it crashes. Once you clear the queue of messages, it works fine.
| RabbitMQ | 25,619,201 | 14 |
So I have a Django app that occasionally sends a task to Celery for asynchronous execution. I've found that as I work on my code in development, the Django development server knows how to automatically detect when code has changed and then restart the server so I can see my changes. However, the RabbitMQ/Celery section of my app doesn't pick up on these sorts of changes in development. If I change code that will later be run in a Celery task, Celery will still keep running the old version of the code. The only way I can get it to pick up on the change is to:
stop the Celery worker
stop RabbitMQ
reset RabbitMQ
start RabbitMQ
add the user to RabbitMQ that my Django app is configured to use
set appropriate permissions for this user
restart the Celery worker
This seems like a far more drastic approach than I should have to take, however. Is there a more lightweight approach I can use?
|
I've found that as I work on my code in development, the Django
development server knows how to automatically detect when code has
changed and then restart the server so I can see my changes. However,
the RabbitMQ/Celery section of my app doesn't pick up on these sorts
of changes in development.
What you've described here is exactly correct and expected. Keep in mind that Python will use a module cache, so you WILL need to restart the Python interpreter before you can use the new code.
The question is "Why doesn't Celery pick up the new version", but this is how most libraries will work. The Django development server, however, is an exception. It has special code that helps it automatically reload Python code as necessary. It basically restarts the web server without you needing to restart the web server.
Note that when you run Django in production, you probably WILL have to restart/reload your server (since you won't be using the development server in production, and most production servers don't try to take on the hassle of implementing a problematic feature of detecting file changes and auto-reloading the server).
Finally, you shouldn't need to restart RabbitMQ. You should only have to restart the Celery worker to use the new version of the Python code. You might have to clear the queue if the new version of the code is changing the data in the message, however. For example, the Celery worker might be receiving version 1 of the message when it is expecting to receive version 2.
| RabbitMQ | 22,103,401 | 14 |
I have a java web server and am currently using the Guava library to handle my in-memory caching, which I use heavily. I now need to expand to multiple servers (2+) for failover and load balancing. In the process, I switched from an in-process cache to Memcache (external service) instead.
I'm thinking instead of getting the data from Memcache, I could keep using a local cache on each server, and use RabbitMQ to notify the other servers when their caches need to be updated. So if one server makes a change to the underlying data, it would also broadcast a message to all other servers telling them their cache is now invalid. Every server is both broadcasting and listening for cache invalidation messages.
Does anyone know any potential pitfalls of this approach? I'm a little nervous because I can't find anyone else that is doing this in production. The only problems I see would be that each server needs more memory (in-memory cache), and it might take a little longer for any given server to get the updated data. Anything else?
| I am a little bit confused about your problem here, so I am going to restate in a way that makes sense to me, then answer my version of your question. Please feel free to comment if I am not in line with what you are thinking.
You have a web application that uses a process-local memory cache for data. You want to expand to multiple nodes and keep this same structure for your program, rather than rely upon a 3rd party tool (memcached, Couchbase, Redis) with built-in cache replication. So, you are thinking about rolling your own using RabbitMQ to publish the changes out to the various nodes so they can update the local cache accordingly.
My initial reaction is that what you want to do is best done by rolling over to one of the above-mentioned tools. In addition to the obvious development and rigorous testing involved, Couchbase, Memcached, and Redis were all designed to solve the problem that you have.
Also, in theory you would run out of available memory in your application nodes as you scale horizontally, and then you will really have a mess. Once you get to the point when this limitation makes your app infeasible, you will end up using one of the tools anyway at which point all your hard work to design a custom solution will be for naught.
The only exceptions to this I can think of are if your app is heavily compute-intensive and does not use much memory. In this case, I think a RabbitMQ-based solution is easy, but you would need to have some sort of procedure in place to synchronize the cache between the servers on occasion, should messages be missed in RMQ. You would also need a way to handle node startup and shutdown.
Edit
In consideration of your statement in the comments that you are seeing access times in the hundreds of milliseconds, I'm going to advise that you first examine your setup. Typical read times for a single item in the cache from a Memcached (or Couchbase, or Redis, etc.) instance are sub-millisecond (somewhere around .1 milliseconds if I remember correctly), so your "problem child" of a cache server is several orders of magnitude from where it should be in terms of performance. Start there, then see if you still have the same problem.
| RabbitMQ | 21,098,502 | 14 |
I'm interested in knowing how other people handle recovering from a faulty connection using the official RabbitMQ java client library. We are using it to connect our application servers to our RabbitMQ cluster and we have implemented a few different ways to recover from a connection failure, but non of them feel quite right.
Imagine this pseudo application:
public class OurClassThatStartsConsumers {
Connection conn;
public void start() {
ConnectionFactory factory = new ConnectionFactory();
factory.setUsername("someusername");
factory.setPassword("somepassword");
factory.setHost("somehost");
conn = factory.newConnection();
new Thread(new Consumer(conn.createChannel())).start();
}
}
class Consumer1 implements Runnable {
public Consumer1(Channel channel) {
this.channel = channel;
}
@Override
public void run() {
while (true) {
... consume incoming messages on the channel...
// How do we handle that the connection dies?
}
}
}
In the real world we have several hundred consumers. So what happens if the connection dies? In the above example Consumer1 cannot recover: when the connection closes, the Channel also closes, a state from which we cannot recover. So let's look at some ways to solve this:
Solution A)
Let every consumer have their own connection and register the events that trigger when the connection dies and then handle reconnecting.
Pros: It works
Cons:
Since we have a lot of consumers, we probably do not want that many
connections.
We might possibly have a lot of duplicated code for
reconnecting to rabbit and handle reconnecting
Solution B)
Have each consumer use the same connection and subscribe to it's connection failure events.
Pros: Less connections than in Solution A
Cons: Since the connection is closed we need to reopen/replace it. The java client library doesn't seem to provide a way to reopen the connection, so we would have to replace it with a new connection and then somehow notify all the consumers about this new connection and they would have to recreate the channels and the consumers. Once again, a lot of logic that I don't want to see in the consumer ends up there.
Solution C)
Wrap Connection and Channel classes is classes that handle the re-connection logic, the consumer only needs to know about the WrappedChannel class. On a connection failure the WrappedConnection will deal with re-establishing the connection and once connected the WrappedConnection will automatically create new Channels and register consumers.
Pros: It works - this is actually the solution we are using today.
Cons: It feels like a hack, I think this is something that should be handled more elegantly by the underlying library.
Maybe there is a much better way? The API documentation does not talk that much about recovering from a faulty connection. Any input is appreciated :)
| Since version 3.3.0 you can use automatic recovery, which is a new feature of the Java client. From the Java API guide (http://www.rabbitmq.com/api-guide.html#recovery)
To enable automatic connection recovery, use
factory.setAutomaticRecovery(true):
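The code snippet after that sentence appears to have been cut off; below is a minimal hedged sketch of what enabling recovery typically looks like (note that in current Java client versions the setter is named setAutomaticRecoveryEnabled, and the extra recovery settings shown are optional):
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RecoveringConnectionExample {
    public static Connection connect() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("somehost");
        factory.setUsername("someusername");
        factory.setPassword("somepassword");
        // Reconnect automatically after network failures...
        factory.setAutomaticRecoveryEnabled(true);
        // ...and re-declare exchanges, queues, bindings and consumers on recovery
        factory.setTopologyRecoveryEnabled(true);
        factory.setNetworkRecoveryInterval(5000); // retry every 5 seconds
        return factory.newConnection();
    }
}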
| RabbitMQ | 19,695,897 | 14 |
I'm writing an application which needs to run a series of tasks in parallel and then a single task with the results of all the tasks run:
@celery.task
def power(value, expo):
return value ** expo
@celery.task
def amass(values):
print str(values)
It's a very contrived and oversimplified example, but hopefully the point comes across well. Basically, I have many items which need to run through power, but I only want to run amass on the results from all of the tasks. All of this should happen asynchronously, and I don't need anything back from the amass method.
Does anyone know how to set this up in celery so that everything is executed asynchronously and a single callback with a list of the results is called after all is said and done?
I've setup this example to run with a chord as Alexander Afanasiev recommended:
from time import sleep
import random
tasks = []
for i in xrange(10):
tasks.append(power.s((i, 2)))
sleep(random.randint(10, 1000) / 1000.0) # sleep for 10-1000ms
callback = amass.s()
r = chord(tasks)(callback)
Unfortunately, in the above example, all tasks in tasks are started only when the chord method is called. Is there a way that each task can start separately and then I could add a callback to the group to run when everything has finished?
| Here's a solution which worked for my purposes:
tasks.py:
from time import sleep
import random
@celery.task
def power(value, expo):
sleep(random.randint(10, 1000) / 1000.0) # sleep for 10-1000ms
return value ** expo
@celery.task
def amass(results, tasks):
completed_tasks = []
for task in tasks:
if task.ready():
completed_tasks.append(task)
results.append(task.get())
# remove completed tasks
tasks = list(set(tasks) - set(completed_tasks))
if len(tasks) > 0:
# resend the task to execute at least 1 second from now
        amass.apply_async((results, tasks), countdown=1)
else:
# we done
print results
Use Case:
tasks = []
for i in xrange(10):
tasks.append(power.delay(i, 2))
amass.delay([], tasks)
What this should do is start all of the tasks as soon as possible asynchronously. Once they've all been posted to the queue, the amass task will also be posted to the queue. The amass task will keep reposting itself until all of the other tasks have been completed.
| RabbitMQ | 16,308,849 | 14 |
When using the HTTP API I am trying to make a call to the aliveness-test for monitoring purposes. At the moment I am testing using curl and the following command:
curl -i http://guest:guest@localhost:55672/api/aliveness-test/
And I get the following response:
HTTP/1.1 404 Object Not Found
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
Date: Mon, 05 Nov 2012 17:18:58 GMT
Content-Type: text/html
Content-Length: 193
<HTML><HEAD><TITLE>404 Not Found</TITLE></HEAD><BODY><H1>Not Found</H1>The requested document was not found on this server.<P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></BODY></HTML>
When making a request just to list the users or vhosts, the requests returns successfully:
$ curl -I http://guest:guest@localhost:55672/api/users
HTTP/1.1 200 OK
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
Date: Mon, 05 Nov 2012 17:51:44 GMT
Content-Type: application/json
Content-Length: 11210
Cache-Control: no-cache
I'm using the latest stable version (2.8.7) of RabbitMQ and obviously have the management plugin installed for the API to work with the users call (the response is left out due to it containing company data but is just regular JSON as expected).
There isn't much on the internet about this call failing so I am wondering if anyone has seen this before?
Thanks,
Kristian
| Turns out that the '/' at the beginning of the vhost name is not implicit, even when it is part of a URL. To get this to work I simply changed my request from:
curl -i http://guest:guest@localhost:55672/api/aliveness-test/
To
curl -i http://guest:guest@localhost:55672/api/aliveness-test/%2F
As %2F is '/' URL-encoded, my request now queries the vhost named '/' and returns a 200 response which looks like:
{"status":"ok"}
| RabbitMQ | 13,237,628 | 14 |
At our organization we have a SQL Server 2005 database and a fair number of database clients: web sites (php, zope, asp.net), rich clients (legacy FoxPro). Now we need to pass certain events from the core database to other systems (MongoDB, LDAP and others). The messaging paradigm seems pretty capable of solving this kind of problem. So we decided to use the RabbitMQ broker as middleware.
The problem of consuming events from the database at first seemed to have only two possible solutions:
Poll the database for outgoing messages and pass them to a message broker.
Use triggers on certain tables to pass messages to a broker on the same machine.
I disliked the first idea due to latency issues which arise when periodic execution of SQL is involved.
But event-based trigger approach has a problem which seems unsolvable to me at the moment. Consider this scenario:
A row is inserted into a table.
Trigger fires and sends a message (using a CLR Stored Procedure written in C#)
Everything is ok unless the transaction which writes the data is rolled back. In this case the data will be consistent, but the message has already been sent and cannot be rolled back, because the trigger fires at the moment of writing to the database log, not at the time of transaction commit (which is correct behaviour for an RDBMS).
I realize now that I'm asking too much of triggers and they are not suitable for tasks other than working with data.
So my questions are:
Has anyone managed to extract data events using triggers?
What other methods of consuming data events can you advise?
Is Query Notification (built on top of Service Broker) suitable in my situation?
Thanks in advance!
| Let's first cut the obvious misfit out of the equation: Query Notification is not the right technology for this, because it is designed to address cache invalidation of relatively stable data. With QN you'll only know that the table has changed, but you won't be able to know what had changed.
Kudos to you for figuring out why triggers invoking SQLCRL won't work: the consistency is broken on rollback.
So what does work? Consider this: BizTalk Server. In other words, there is an entire business built around this problem space, and solutions are far from trivial (otherwise nobody would buy such products).
You can get quite far though following a few principles:
decoupling. Event based triggers are OK, but do not send the message from the trigger. Aside from the consistency issue on rollback you also have the latency issue of having every DML operation now wait for an external API call (the RabbitMQ send) and the availability issue of the external API call failure (if RabbitMQ is unavailable, your DB is unavailable). The solution is to have the trigger use ordinary tables as queues: the trigger will enqueue a message in the local db queue (ie. will insert into this table) and an external process will service this queue by dequeueing the messages (ie. delete from the table) and forwarding them to RabbitMQ. This decouples the transaction from the RabbitMQ operation (the external process is able to see the message only if the original xact commits), but the cost is some obvious added latency (there is an extra hop involved, the local table acting as a queue). A sketch of such a relay process follows after this list.
idempotency. Since RabbitMQ cannot enroll in distributed transactions with the database you cannot guarantee atomicity of the DB operation (the dequeue from the local table acting as queue) and the RabbitMQ operation (the send). Either one can succeed when the other failed, and there is simply no way around it w/o explicit distributed transaction enrollment support. Which implies that the application will send duplicate messages every once in a while (usually when things already go bad for some reason). And a quick heads up: enrolling into the act of explicit 'acknowledge' messages and send sequence numbers is a losing battle, as you'll quickly discover that you're reinventing TCP on top of messaging; that road is paved with bodies.
tolerance. For the same reasons as the item above, every once in a while a message you believe was sent will never make it. Again, what damage this causes is entirely business specific. The issue is not how to prevent this situation (it is almost impossible...) but how to detect this situation, and what to do about it. No silver bullet, I'm afraid.
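To make the decoupling point above concrete, here is a hedged Java sketch of the external relay process; the OutgoingMessages table, its columns, and the exchange name are made-up assumptions for illustration, not something prescribed by this answer:
import com.rabbitmq.client.Channel;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OutgoingMessageRelay {

    private final Connection sql;  // JDBC connection to the SQL Server database
    private final Channel rabbit;  // already-open RabbitMQ channel

    public OutgoingMessageRelay(Connection sql, Channel rabbit) {
        this.sql = sql;
        this.rabbit = rabbit;
    }

    /** Drain the local queue table once and forward each row to RabbitMQ. */
    public void drainOnce() throws Exception {
        try (PreparedStatement select = sql.prepareStatement(
                "SELECT Id, Body FROM dbo.OutgoingMessages ORDER BY Id");
             ResultSet rows = select.executeQuery()) {
            while (rows.next()) {
                long id = rows.getLong("Id");
                String body = rows.getString("Body");
                // Consumers must be idempotent: if the delete below fails after this
                // publish, the same row (and message) will be sent again on the next pass.
                rabbit.basicPublish("db.events", "", null, body.getBytes(StandardCharsets.UTF_8));
                try (PreparedStatement delete = sql.prepareStatement(
                        "DELETE FROM dbo.OutgoingMessages WHERE Id = ?")) {
                    delete.setLong(1, id);
                    delete.executeUpdate();
                }
            }
        }
    }
}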
You do mention in passing Service Broker (the fact that it is powering Query Notification is the least interesting aspect of it...). As a messaging platform built into SQL Server which offers Exactly Once In Order delivery guarantees and is fully transacted, it would solve all the above pain points (you can SEND from triggers with impunity, you can use Activation to solve the latency issue, you'll never see a duplicate or a missing message, there are clear error semantics) and some other pain points I did not mention before (consistency of backup/restore as the data and the messages are on the same unit of storage - the database, consistency of HA/DR failover as SSB supports both database mirroring and clustering, etc). The drawback though is that SSB is only capable of talking to another SSB service, in other words it can only be used to exchange messages between two (or more) SQL Server instances. Any other use requires the parties to use a SQL Server to exchange messages. But if your endpoints are all SQL Server, then consider that there are some large scale deployments using Service Broker. Note that endpoints like php or asp.net can be considered SQL Server endpoints, as they are just programming layers on top of the DB API; a different endpoint would be, say, the need to send messages from handheld devices (phones) directly to the database (and even those 99% of the time go through a web service, which means they can reach a SQL Server ultimately). Another consideration is that SSB is geared toward throughput and reliable delivery, not toward low latency. It is definitely not the technology to use to get back the response in an HTTP web request, for instance. It is the technology to use to submit something for processing triggered by a web request.
| RabbitMQ | 13,087,058 | 14 |
I am using RabbitMQ to have worker processes encode video files. I would like to know when all of the files are complete - that is, when all of the worker processes have finished.
The only way I can think to do this is by using a database. When a video finishes encoding:
UPDATE videos SET status = 'complete' WHERE filename = 'foo.wmv'
-- etc etc etc as each worker finishes --
And then to check whether or not all of the videos have been encoded:
SELECT count(*) FROM videos WHERE status != 'complete'
But if I'm going to do this, then I feel like I am losing the benefit of RabbitMQ as a mechanism for multiple distributed worker processes, since I still have to manually maintain a database queue.
Is there a standard mechanism for RabbitMQ dependencies? That is, a way to say "wait for these 5 tasks to finish, and once they are done, then kick off a new task?"
I don't want to have a parent process add these tasks to a queue and then "wait" for each of them to return a "completed" status. Then I have to maintain a separate process for each group of videos, at which point I've lost the advantage of decoupled worker processes as compared to a single ThreadPool concept.
Am I asking for something which is impossible? Or, are there standard widely-adopted solutions to manage the overall state of tasks in a queue that I have missed?
Edit: after searching, I found this similar question: Getting result of a long running task with RabbitMQ
Are there any particular thoughts that people have about this?
| Use a "response" queue. I don't know any specifics about RabbitMQ, so this is general:
Have your parent process send out requests and keep track of how many it sent
Make the parent process also wait on a specific response queue (that the children know about)
Whenever a child finishes something (or can't finish for some reason), send a message to the response queue
Whenever numSent == numResponded, you're done
Something to keep in mind is a timeout -- What happens if a child process dies? You have to do slightly more work, but basically:
With every sent message, include some sort of ID, and add that ID and the current time to a hash table.
For every response, remove that ID from the hash table
Periodically walk the hash table and remove anything that has timed out
This is called the Request Reply Pattern.
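As a hedged illustration of the parent side only (the queue names, message bodies and the use of the RabbitMQ Java client are assumptions added here, not part of the original answer), counting replies until numSent == numResponded:
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;

public class ParentProcess {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare("work.queue", false, false, false, null);
        String responseQueue = channel.queueDeclare().getQueue(); // server-named reply queue
        int numSent = 5;
        CountDownLatch remaining = new CountDownLatch(numSent);

        // Wait on the response queue: each child sends one reply when it finishes (or fails)
        channel.basicConsume(responseQueue, true,
                (consumerTag, delivery) -> remaining.countDown(),
                consumerTag -> { });

        for (int i = 0; i < numSent; i++) {
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(UUID.randomUUID().toString()) // the per-request ID used for timeout tracking
                    .replyTo(responseQueue)
                    .build();
            channel.basicPublish("", "work.queue", props,
                    ("encode video " + i).getBytes(StandardCharsets.UTF_8));
        }

        remaining.await(); // numSent == numResponded -> we're done
        System.out.println("All children finished");
        connection.close();
    }
}
A production version would use await with a timeout and the hash-table bookkeeping described above instead of blocking forever.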
| RabbitMQ | 7,734,597 | 14 |
Pretty straightforward question. I can't find it in the docs or the spec.
| From the AMQP spec, section 1.1:
If set, the server will not respond to the method. The client should not wait for a reply method. If the
server could not complete the method it will raise a channel or connection exception.
| RabbitMQ | 6,351,698 | 14 |
I'm trying to access the RabbitMQ interface over HTTPS/SSL with nginx, and I can't figure out what I'm missing.
Here's my rabbitmq.conf file:
[
{ssl, [{versions, ['tlsv1.2', 'tlsv1.1']}]},
{rabbit, [
{reverse_dns_lookups, true},
{hipe_compile, true},
{tcp_listeners, [5672]},
{ssl_listeners, [5671]},
{ssl_options, [
{cacertfile, "/etc/ssl/certs/CA.pem"},
{certfile, "/etc/nginx/ssl/my_domain.crt"},
{keyfile, "/etc/nginx/ssl/my_domain.key"},
{versions, ['tlsv1.2', 'tlsv1.1']}
]}
]
},
{rabbitmq_management, [
{listener, [
{port, 15671},
{ssl, true},
{ssl_opts, [
{cacertfile, "/etc/ssl/certs/CA.pem"},
{certfile, "/etc/nginx/ssl/my_domain.crt"},
{keyfile, "/etc/nginx/ssl/my_domain.key"},
{versions, ['tlsv1.2', 'tlsv1.1']}
]}
]}
]}
].
All works ok when I restart rabbitmq-server
My nginx file looks like this:
location /rabbitmq/ {
if ($request_uri ~* "/rabbitmq/(.*)") {
proxy_pass https://example.com:15671/$1;
}
}
Now, I'm guessing there's something with the nginx config not being able to resolve the HTTPS URL, as I'm getting 504 timeout errors when trying to browse:
https://example.com/rabbitmq/
Obviously, this is not the correct FQDN, but the SSL cert works fine without the /rabbitmq/
Has anyone been able to use the RabbitMQ Management web interface on an external connection over a FQDN and HTTPS?
Do I need to create a new "server" block in nginx config dedicated to the 15671 port?
Any help would be much appreciated!
| I ended up reverting back to the default rabbitmq.config file, then modified my nginx config block to the below, based on another stackoverflow answer that I can't find right now.
location ~* /rabbitmq/api/(.*?)/(.*) {
proxy_pass http://127.0.0.1:15672/api/$1/%2F/$2?$query_string;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~* /rabbitmq/(.*) {
rewrite ^/rabbitmq/(.*)$ /$1 break;
proxy_pass http://127.0.0.1:15672;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
Also, I had browser caching enabled for JS files, which was causing issues, and I have disabled that.
I will try to re-enable SSL piece-by-piece but do have the example URL working for now:
https://example.com/rabbitmq/
| RabbitMQ | 49,742,269 | 13 |
We have an application that uses Kestrel to serve HTTP requests. We've had some problems in the past where high load caused the thread pool to spawn thousands of threads, at which point we would get lock convoy symptoms. Most of the time, the threads would start blocking each other at Monitor.Enter() somewhere in our code, causing delays and more contentions until the application became unresponsive with a 100% CPU usage due to context switching. The problem would not go away until we restarted the application.
However, we've eliminated most locks and implemented a throttling mechanism so that we don't allow more than 1000 threads to enter the application. We're using the System.Threading.Semaphore class to allow only a set number of threads to continue. This has solved our lock contention problems, but possible introduced a new problem:
We still get cases of 100% CPU usage and high thread count (500-1000 threads), although this time the threads are not blocked on Monitor.Enter(). Instead, when we do thread dump (using Microsoft.Diagnostics.Runtime.ClrRuntime), we see the following call stack (for hundreds of threads):
thread id = 892
GCFrame
GCFrame
HelperMethodFrame
System.Threading.TimerQueueTimer.Fire()
System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
System.Threading.ThreadPoolWorkQueue.Dispatch()
DebuggerU2MCatchHandlerFrame
In this case, the problem would cause the application to become unresponsive, but in most cases it solves itself after a few minutes. Sometimes it takes hours.
What does a call stack like this mean? Is this a known problem with Kestrel or is it some kind of combination of Kestrel and Semaphore that is causing this?
UPDATE: A memory dump reveals that the HelperMethodFrame in the call stack is probably a call to Monitor.Enter() after all. However we still cannot pinpoint whether this is in our code or in Kestrel or some other library. When we had our lock convoy problems before, we would see our code in the call stack. Now it seems to be a Monitor.Enter() call inside TimerQueueTimer instead, which we are not using in our code. The memory dump looks like this:
.NET stack trace:
Child SP IP Call Site
0000005a92b5e438 00007ff8a11c0c6a [GCFrame: 0000005a92b5e438]
0000005a92b5e660 00007ff8a11c0c6a [GCFrame: 0000005a92b5e660]
0000005a92b5e698 00007ff8a11c0c6a [HelperMethodFrame: 0000005a92b5e698] System.Threading.Monitor.Enter(System.Object)
0000005a92b5e790 00007ff88f30096b System.Threading.TimerQueueTimer.Fire()
0000005a92b5e7e0 00007ff88f2e1a1d System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
0000005a92b5e820 00007ff88f2e1f70 System.Threading.ThreadPoolWorkQueue.Dispatch()
0000005a92b5ed48 00007ff890413753 [DebuggerU2MCatchHandlerFrame: 0000005a92b5ed48]
Full stack trace:
# Child-SP RetAddr : Args to Child : Call Site
00 0000005a`9cf5e9a8 00007ff8`9e6513ed : 00000000`00000000 00000000`00000001 00000000`00000001 00000000`00000000 : ntdll!ZwWaitForMultipleObjects+0xa
01 0000005a`9cf5e9b0 00007ff8`904e92aa : 0000005a`9cf5ef48 00007ff5`fffce000 0000005a`00000000 00000000`00000000 : KERNELBASE!WaitForMultipleObjectsEx+0xe1
02 0000005a`9cf5ec90 00007ff8`904e91bf : 00000000`00000001 00000000`00000000 0000005a`66b48e20 00000000`ffffffff : clr!WaitForMultipleObjectsEx_SO_TOLERANT+0x62
03 0000005a`9cf5ecf0 00007ff8`904e8fb1 : 0000005a`66b48e20 00000000`00000001 00000000`00000018 00007ff8`00000000 : clr!Thread::DoAppropriateWaitWorker+0x243
04 0000005a`9cf5edf0 00007ff8`90731267 : 00000000`00000000 00007ff8`00000001 0000004f`a419c548 0000004f`a419c548 : clr!Thread::DoAppropriateWait+0x7d
05 0000005a`9cf5ee70 00007ff8`90834a56 : 0000005a`5aec0308 0000005a`9cf5f0d0 00000000`00000000 0000005a`66b48e20 : clr!CLREventBase::WaitEx+0x28e6b7
06 0000005a`9cf5ef00 00007ff8`9083495a : 0000005a`5aec0308 0000005a`66b48e20 00000000`00000000 00000050`22945ab8 : clr!AwareLock::EnterEpilogHelper+0xca
07 0000005a`9cf5efc0 00007ff8`90763c8c : 0000005a`66b48e20 0000005a`5aec0308 0000005a`5aec0308 00000000`002d0d01 : clr!AwareLock::EnterEpilog+0x62
08 0000005a`9cf5f020 00007ff8`908347ed : 00000000`00000000 0000005a`9cf5f0d0 0000005a`5aec0308 0000005a`5aec0301 : clr!AwareLock::Enter+0x24390c
09 0000005a`9cf5f050 00007ff8`908338a5 : 00000050`22945ab8 0000005a`9cf5f201 0000005a`66b48e20 00007ff8`90419050 : clr!AwareLock::Contention+0x2fd
0a 0000005a`9cf5f110 00007ff8`8f30096b : 0000005a`5aec0308 0000005a`9cf5f2d0 0000005a`9cf5f560 00000000`00000000 : clr!JITutil_MonContention+0xc5
0b 0000005a`9cf5f2a0 00007ff8`8f2e1a1d : 00000051`a2bb6bb0 00007ff8`90417d0e 00000050`229491d8 0000005a`9cf5f330 : mscorlib_ni+0x49096b
0c 0000005a`9cf5f2f0 00007ff8`8f2e1f70 : 00000000`00000000 0000005a`9cf5f3a8 00000000`00000001 0000005a`9cf5f370 : mscorlib_ni+0x471a1d
0d 0000005a`9cf5f330 00007ff8`90413753 : 00000000`00000004 00000000`00000000 0000005a`9cf5f600 0000005a`9cf5f688 : mscorlib_ni+0x471f70
0e 0000005a`9cf5f3d0 00007ff8`9041361c : 00000050`22945ab8 00000000`00000000 0000005a`9cf5f640 0000005a`9cf5f6c8 : clr!CallDescrWorkerInternal+0x83
0f 0000005a`9cf5f410 00007ff8`904144d3 : 00000000`00000000 00000000`00000004 0000005a`9cf5f858 0000005a`9cf5f688 : clr!CallDescrWorkerWithHandler+0x4e
10 0000005a`9cf5f450 00007ff8`9041b73d : 0000005a`9cf5fb70 0000005a`9cf5fb20 0000005a`9cf5fb70 00000000`00000001 : clr!MethodDescCallSite::CallTargetWorker+0x2af
11 0000005a`9cf5f5e0 00007ff8`90416810 : 00000000`00000007 00007ff8`00000000 ffffffff`fffffffe 0000005a`66b48e20 : clr!QueueUserWorkItemManagedCallback+0x2a
12 0000005a`9cf5f6d0 00007ff8`904167c0 : 00670061`00500064 00000000`00730065 ffffffff`fffffffe 0000005a`66b48e20 : clr!ManagedThreadBase_DispatchInner+0x29
13 0000005a`9cf5f710 00007ff8`90416705 : ffffffff`ffffffff 00007ff8`90414051 0000005a`9cf5f7b8 00000000`ffffffff : clr!ManagedThreadBase_DispatchMiddle+0x6c
14 0000005a`9cf5f810 00007ff8`90416947 : ffffffff`ffffffff 0000005a`66b48e20 0000005a`66b48e20 00000000`00000001 : clr!ManagedThreadBase_DispatchOuter+0x75
15 0000005a`9cf5f8a0 00007ff8`9041b6a2 : 0000005a`9cf5f988 00000000`00000000 00000000`00000001 00007ff8`9e651118 : clr!ManagedThreadBase_FullTransitionWithAD+0x2f
16 0000005a`9cf5f900 00007ff8`904158ba : 0000005a`9cf5fb70 0000005a`9cf5fb68 00000000`00000000 00000000`00000200 : clr!ManagedPerAppDomainTPCount::DispatchWorkItem+0x11c
17 0000005a`9cf5fa90 00007ff8`904157da : 0000010b`010b010b 0000005a`9cf5fb20 00000000`00000000 0000005a`66b48e20 : clr!ThreadpoolMgr::ExecuteWorkRequest+0x64
18 0000005a`9cf5fac0 00007ff8`90433e1e : 00000000`00000000 00000000`00000000 00000000`00000001 00000000`0000041d : clr!ThreadpoolMgr::WorkerThreadStart+0x3b5
19 0000005a`9cf5fb60 00007ff8`9e7c13d2 : 00007ff8`90433da8 0000005a`5add4db0 00000000`00000000 00000000`00000000 : clr!Thread::intermediateThreadProc+0x7d
1a 0000005a`9cf5fca0 00007ff8`a11454e4 : 00007ff8`9e7c13b0 00000000`00000000 00000000`00000000 00000000`00000000 : kernel32!BaseThreadInitThunk+0x22
1b 0000005a`9cf5fcd0 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!RtlUserThreadStart+0x34
UPDATE 2: WinDbg syncblock command gives us this:
The version of SOS does not match the version of CLR you are debugging. Please
load the matching version of SOS for the version of CLR you are debugging.
CLR Version: 4.6.1055.0
SOS Version: 4.6.1637.0
Index SyncBlock MonitorHeld Recursion Owning Thread Info SyncBlock Owner
148 0000005a5aec0308 426 0 0000000000000000 none 0000005022945ab8 System.Threading.TimerQueue
-----------------------------
Total 152
CCW 1
RCW 1
ComClassFactory 0
Free 66
UPDATE 3: More digging shows that we have about 42000 Timer objects:
00007ff8871bedd0 41728 1001472 System.Runtime.Caching.MemoryCacheEqualityComparer
00007ff88f4a0998 42394 1017456 System.Threading.TimerHolder
00007ff8871bbed0 41728 1335296 System.Runtime.Caching.UsageBucket[]
00007ff88f51ab30 41749 1335968 Microsoft.Win32.SafeHandles.SafeWaitHandle
00007ff88f519de0 42394 1356608 System.Threading.Timer
00007ff8871be870 41728 1669120 System.Runtime.Caching.CacheUsage
00007ff88f50ea80 41734 2003232 System.Threading.ManualResetEvent
00007ff8871be810 41728 2336768 System.Runtime.Caching.CacheExpires
00007ff88f519f08 42390 2712960 System.Threading.TimerCallback
00007ff8871be558 41728 3338240 System.Runtime.Caching.MemoryCacheStore
00007ff88f4a0938 42394 3730672 System.Threading.TimerQueueTimer
00007ff8871be8d0 41728 4005888 System.Runtime.Caching.UsageBucket
00007ff8871bb9c8 41728 11016192 System.Runtime.Caching.ExpiresBucket[]
Checking a few of the _methodPtr references, they all point to:
00007ff8`871b22c0 0f1f440000 nop dword ptr [rax+rax]
00007ff8`871b22c5 33d2 xor edx,edx
00007ff8`871b22c7 4533c0 xor r8d,r8d
00007ff8`871b22ca 488d055ffeffff lea rax,[System_Runtime_Caching_ni+0x32130 (00007ff8`871b2130)]
00007ff8`871b22d1 48ffe0 jmp rax
And with GC Traces looking similar to this:
0:000> !gcroot 00000055629e5ca0
The version of SOS does not match the version of CLR you are debugging. Please
load the matching version of SOS for the version of CLR you are debugging.
CLR Version: 4.6.1055.0
SOS Version: 4.6.1637.0
Thread 27a368:
0000005a61c4ed10 00007ff88f2d2490 System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
r14:
-> 0000004b6296f840 System.Threading.ThreadHelper
-> 0000004b6296f7a0 System.Threading.ThreadStart
-> 0000004b6296f750 Quartz.Simpl.SimpleThreadPool+WorkerThread
-> 0000004b6296f7e0 System.Threading.Thread
-> 0000004b62959710 System.Runtime.Remoting.Contexts.Context
-> 0000004aa29315a8 System.AppDomain
-> 0000004c22c4b368 System.EventHandler
-> 00000051e2eb5f48 System.Object[]
-> 00000050629e6180 System.EventHandler
-> 000000506298b268 System.Runtime.Caching.MemoryCache
-> 000000506298b348 System.Runtime.Caching.MemoryCacheStore[]
-> 000000506298d470 System.Runtime.Caching.MemoryCacheStore
-> 000000506298d5a0 System.Runtime.Caching.CacheExpires
-> 000000506298e868 System.Threading.Timer
-> 000000506298eaa8 System.Threading.TimerHolder
-> 000000506298e888 System.Threading.TimerQueueTimer
-> 000000506298fe78 System.Threading.TimerQueueTimer
| Just want to add this for future interwebz travellers. The root cause was that we used a System.Runtime.Caching.MemoryCache instance that we were re-creating frequently without proper disposal. The MemoryCaches created timers for function calls, and these timers were not cleared from memory when the cache was replaced. The timers would grab a thread pool thread every now and then to check whether they should fire, so as the timers built up, so did the CPU usage.
The reason it was hard to detect was that it did not appear in the stack traces, even in dump files. Instead we would see the classes (lambdas, mostly) called by the timers. We found the issue by extensive audit of all code, basically. And the documentation for MemoryCache specifically says not to do what we did.
The solution was to dispose the old cache before creating a new one, and then the problem disappeared.
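A minimal sketch of that kind of fix (class and cache names are illustrative; the key point is that the old MemoryCache instance must be disposed so its internal timers are released):
using System.Runtime.Caching;
using System.Threading;

public class CacheHolder
{
    private MemoryCache _cache = new MemoryCache("app-cache");

    // Swap in a fresh cache and dispose the old one so its timers are cleaned up.
    public void ResetCache()
    {
        MemoryCache old = Interlocked.Exchange(ref _cache, new MemoryCache("app-cache"));
        old.Dispose(); // without this, the replaced cache's timers keep firing and pile up
    }
}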
EDIT: Looking at the above stack traces, it looks like they actually gave us quite good evidence that the problem was in System.Runtime.Caching. I guess we were just blind and didn't think the problem would be inside a System namespace.
| RabbitMQ | 43,895,737 | 13 |
Rabbit MQ URL looks like :
BROKER_URL: "amqp://user:[email protected]:port//vhost"
It is not clear where we can find the URL, login and password of RabbitMQ
when we need to access it from a remote worker (outside of localhost).
In other words, how do we set the RabbitMQ IP address, login and password from Celery / RabbitMQ?
| You can create a new user for accessing your RabbitMQ broker.
Normally the port used is 5672, but you can change it in your configuration file.
So suppose your IP is 1.1.1.1, you created user test with password test, and you want to access vhost "dev" (without quotes); then it will look something like this:
amqp://test:[email protected]:5672/dev
I would recommend enabling the RabbitMQ Management Plugin to play around with RabbitMQ.
https://www.rabbitmq.com/management.html
| RabbitMQ | 40,957,599 | 13 |
I'm using Spring STOMP over WebSocket with RabbitMQ. All works fine, but simpMessagingTemplate.convertAndSend is very slow; a call can take 2-10 seconds (synchronously, blocking the thread). What could be the reason?
RabbitTemplate.convertAndSend takes < 1 s, but I need STOMP over WebSocket.
UPDATE
I tried ActiveMQ and get the same result: convertAndSend takes 2-10 seconds.
ActiveMQ has the default configuration.
Web socket config:
@Configuration
@EnableWebSocket
@EnableWebSocketMessageBroker
class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
@Override
void configureMessageBroker(MessageBrokerRegistry config) {
config.enableStompBrokerRelay("/topic", "/queue", "/exchange");
config.setApplicationDestinationPrefixes("/topic", "/queue"); // prefix in client queries
config.setUserDestinationPrefix("/user");
}
@Override
void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/board").withSockJS()
}
@Override
void configureWebSocketTransport(WebSocketTransportRegistration registration) {
registration.setMessageSizeLimit(8 * 1024);
}
}
| Problem resolved. It's a bug in the io.projectreactor library, version 2.0.4.RELEASE. I changed to 2.0.8.RELEASE and it fixed the problem. Sending a message now takes ~50 ms.
<dependency>
<groupId>io.projectreactor</groupId>
<artifactId>reactor-net</artifactId>
<version>2.0.8.RELEASE</version>
</dependency>
| RabbitMQ | 40,380,069 | 13 |
As I understand it, message brokers like RabbitMQ let different applications written in different languages/platforms communicate with each other. So since Celery can use RabbitMQ as a message broker, I believe we can queue a task to Celery from any application, even though the producer isn't written in Python.
Now I am trying to figure out how I can queue a task to Celery from an application written in C# via RabbitMQ. But I could not find any such example yet.
The only information close to this I found is this SO question
where the accepted answer suggests using the Celery message format protocol to queue messages to RabbitMQ from Java. However, the link given in the answer does not have any example, only the message format.
Also, the message format says a task id (UUID) is required to communicate in this protocol. How is my C# application supposed to know the task id of the Celery task? As I understand it, it can only know the task name, but not the task id.
| I don't know whether the question is still relevant, but hopefully the answer will help others.
Here is how I succeeded in queueing a task to the Celery example worker.
You'll need to establish a connection between your producer (client) and RabbitMQ, as described here.
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = username;
factory.Password = password;
factory.VirtualHost = virtualhost;
factory.HostName = hostname;
factory.Port = port;
IConnection connection = factory.CreateConnection();
IModel channel = connection.CreateModel();
In the default RabbitMQ configuration there is only the guest user, which can only be used for local connections (from 127.0.0.1). An answer to this question explains how to define users in RabbitMQ.
Next, create a callback to get the results. This example uses Direct reply-to, so the answer listener will look like this:
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, ea) =>
{
var ansBody = ea.Body;
var ansMessage = Encoding.UTF8.GetString(ansBody);
Console.WriteLine(" [x] Received {0}", ansMessage);
Console.WriteLine(" [x] Done");
};
channel.BasicConsume(queue: "amq.rabbitmq.reply-to", noAck: true, consumer: consumer);
Creating a task message that Celery will consume:
IDictionary<string, object> headers = new Dictionary<string, object>();
headers.Add("task", "tasks.add");
Guid id = Guid.NewGuid();
headers.Add("id", id.ToString());
IBasicProperties props = channel.CreateBasicProperties();
props.Headers = headers;
props.CorrelationId = (string)headers["id"];
props.ContentEncoding = "utf-8";
props.ContentType = "application/json";
props.ReplyTo = "amq.rabbitmq.reply-to";
object[] taskArgs = new object[] { 1, 200 };
object[] arguments = new object[] { taskArgs, new object(), new object()};
MemoryStream stream = new MemoryStream();
DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(object[]));
ser.WriteObject(stream, arguments);
stream.Position = 0;
StreamReader sr = new StreamReader(stream);
string message = sr.ReadToEnd();
var body = Encoding.UTF8.GetBytes(message);
And finally, publishing the message to RabbitMQ:
channel.BasicPublish(exchange: "",
routingKey: "celery",
basicProperties: props,
body: body);
| RabbitMQ | 40,021,066 | 13 |
I'm trying to use C# to get RabbitMQ 3.6.2 to use SSL/TLS on Windows 7 against Erlang 18.0. I'm running into errors when I'm enabling SSL in my C# code. I have gone through the steps to set up SSL/TLS here. I've also gone through the [troubleshooting steps][2], which all turn up successful (except I couldn't do the stunnel step due to lack of knowledge of stunnel).
var factory = new ConnectionFactory()
{
// NOTE: guest username ONLY works with HostName "localhost"!
//HostName = Environment.MachineName,
HostName = "localhost",
UserName = "guest",
Password = "guest",
};
// Without this line, RabbitMQ.log shows error: "SSL: hello: tls_handshake.erl:174:Fatal error: protocol version"
// When I add this line to go to TLS 1.2, .NET throws an exception: The remote certificate is invalid according to the validation procedure.
// https://stackoverflow.com/questions/9983265/the-remote-certificate-is-invalid-according-to-the-validation-procedure:
// Walked through this tutorial to add the client certificate as a Windows Trusted Root Certificate: http://www.sqlservermart.com/HowTo/Windows_Import_Certificate.aspx
factory.Ssl.Version = SslProtocols.Tls12;
factory.Ssl.ServerName = "localhost"; //System.Net.Dns.GetHostName();
factory.Ssl.CertPath = @"C:\OpenSSL-Win64\client\keycert.p12";
factory.Ssl.CertPassphrase = "Re$sp3cMyS3curi1ae!";
factory.Ssl.Enabled = true;
factory.Port = 5671;
// Error: "The remote certificate is invalid according to the validation procedure."
using (var connection = factory.CreateConnection())
{
}
There's a StackOverflow post regarding the "The remote certificate is invalid according to the validation procedure." exception, but the hack fix doesn't seem to take effect as the callback method suggested is never called. I think that I've added my certificate generated via OpenSSL to the Windows Trusted Root Certification Authorities certificates list for the local computer. So I'm at a loss here. Any ideas on how to proceed?
Edit: Here's the final working code for anyone struggling to implement SSL on Rabbit:
var factory = new ConnectionFactory();
factory.HostName = ConfigurationManager.AppSettings["rabbitmqHostName"];
factory.AuthMechanisms = new AuthMechanismFactory[] { new ExternalMechanismFactory() };
// Note: This should NEVER be "localhost"
factory.Ssl.ServerName = ConfigurationManager.AppSettings["rabbitmqServerName"];
// Path to my .p12 file.
factory.Ssl.CertPath = ConfigurationManager.AppSettings["certificateFilePath"];
// Passphrase for the certificate file - set through OpenSSL
factory.Ssl.CertPassphrase = ConfigurationManager.AppSettings["certificatePassphrase"];
factory.Ssl.Enabled = true;
// Make sure TLS 1.2 is supported & enabled by your operating system
factory.Ssl.Version = SslProtocols.Tls12;
// This is the default RabbitMQ secure port
factory.Port = 5671;
factory.VirtualHost = "/";
// Standard RabbitMQ authentication (if not using ExternalAuthenticationFactory)
//factory.UserName = ConfigurationManager.AppSettings["rabbitmqUsername"];
//factory.Password = ConfigurationManager.AppSettings["rabbitmqPassword"];
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
// publish some messages...
}
}
Thanks,
Andy
| Usual problem is mismatch between what you provide in Ssl.ServerName and host SSL certificate was issued for.
Also note that server-side SSL (encrypted connection between your client and server) and client-side authentication with certificate (you provide server with information which confirms that you have certificate it expects) are two different things. By providing Ssl.CertPath you intent to authorize at server using this certificate, which might or might not be what you want.
| RabbitMQ | 39,642,777 | 13 |
I've just installed Erlang 19.0, then RabbitMQ Server 3.6.3. OS: Windows 10. Then I installed the rabbitmq_management plugin and started rabbitmq-server. I can successfully log in to the management console. The problem is that when I go to Queues I get this error:
Got response code 500 with body {"error":"JSON encode error:
{bad_term,#{error_logger => true,kill => true,size =>
0}}","reason":"While encoding: \n[{total_count,1},\n {item_count,1},\n
{filtered_count,1},\n {page,1},\n {page_size,100},\n {page_count,1},\n
{items,\n [[{memory,22048},\n {reductions,6633},\n
{reductions_details,[{rate,0.0}]},\n {messages,0},\n
{messages_details,[{rate,0.0}]},\n {messages_ready,0},\n
{messages_ready_details,[{rate,0.0}]},\n
{messages_unacknowledged,0},\n
{messages_unacknowledged_details,[{rate,0.0}]},\n
{idle_since,<<\"2016-07-08 20:55:04\">>},\n
{consumer_utilisation,''},\n {policy,''},\n
{exclusive_consumer_tag,''},\n {consumers,1},\n
{recoverable_slaves,''},\n {state,running},\n {reductions,6633},\n
{garbage_collection,\n [{max_heap_size,#{error_logger => true,kill =>
true,size => 0}},\n {min_bin_vheap_size,46422},\n
{min_heap_size,233},\n {fullsweep_after,65535},\n {minor_gcs,3}]},\n
{messages_ram,0},\n {messages_ready_ram,0},\n
{messages_unacknowledged_ram,0},\n {messages_persistent,0},\n
{message_bytes,0},\n {message_bytes_ready,0},\n
{message_bytes_unacknowledged,0},\n {message_bytes_ram,0},\n
{message_bytes_persistent,0},\n {head_message_timestamp,''},\n
{disk_reads,0},\n {disk_writes,0},\n {backing_queue_status,\n
{struct,\n [{mode,default},\n {q1,0},\n {q2,0},\n
{delta,[delta,undefined,0,undefined]},\n {q3,0},\n {q4,0},\n
{len,0},\n {target_ram_count,infinity},\n {next_seq_id,0},\n
{avg_ingress_rate,0.0},\n {avg_egress_rate,0.0},\n
{avg_ack_ingress_rate,0.0},\n {avg_ack_egress_rate,0.0}]}},\n
{node,'rabbit@DESKTOP-330SD1I'},\n {arguments,{struct,[]}},\n
{exclusive,false},\n {auto_delete,false},\n {durable,true},\n
{vhost,<<\"/\">>},\n {name,<<\"queue1\">>}]]}]"}
If I remove my privilege to access the "/" virtual host, the error disappears, but no queues are shown, which I suppose is wrong, because I have a running application which sends and receives messages.
Here is the closest question to mine, but that solution doesn't help.
P.S. I don't really expect somebody to help me; I just wanted to post this question so at least this error can be googled.
| There are some known issues with Erlang 19, discussed in the RMQ Google Group, here.
In other words, 3.6.3 effectively isn't Erlang 19.0-compatible:
you'll need to install a prior version of Erlang, until RMQ can be re-built to support the changes in Erlang 19
| RabbitMQ | 38,275,479 | 13 |
I can easily delete queues, like this:
rabbitmqadmin delete queue name='MyQ'
However, I cannot find a way to delete exchanges. What am I missing?
| ➜ ./rabbitmqadmin delete exchange name='myexchange'
exchange deleted
| RabbitMQ | 37,867,486 | 13 |
I want to send a persistent message via the HTTP API. I'm using this command:
curl -u UN:PWD -H "content-type:application/json" -X POST -d'{"properties":{},"routing_key":"QueueName","payload":"HI","payload_encoding":"string", "deliverymode": 2}' http://url:8080/api/exchanges/%2f/amq.default/publish
My queue is durable and deliverymode is also set to 2(Persistent), but the messages published are not durable. What change needs to be done?
When I send the same via Management Console, the message is persistent but not via HTTP API.
| delivery_mode is a property, so you have to put it inside "properties", as:
curl -u guest:guest -H "content-type:application/json" -X POST -d'{"properties":{"delivery_mode":2},"routing_key":"QueueName","payload":"HI","payload_encoding":"string"}' http://localhost:15672/api/exchanges/%2f/amq.default/publish
| RabbitMQ | 37,067,467 | 13 |
I'm trying to create a simple spring boot app with spring boot that "produce" messages to a rabbitmq exchange/queue and another sample spring boot app that "consume" these messages.
So I have two apps (or microservices if you wish).
1) "producer" microservice
2) "consumer" microservice
The "producer" has 2 domain objects. Foo and Bar which should be converted to json and send to rabbitmq.
The "consumer" should receive and convert the json message into a domain Foo and Bar respectively.
For some reason I can not make this simple task. There are not much examples about this.
For the message converter I want to use org.springframework.messaging.converter.MappingJackson2MessageConverter
Here is what I have so far:
PRODUCER MICROSERVICE
package demo.producer;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.core.RabbitMessagingTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;
import org.springframework.stereotype.Service;
@SpringBootApplication
public class ProducerApplication implements CommandLineRunner {
public static void main(String[] args) {
SpringApplication.run(ProducerApplication.class, args);
}
@Bean
Queue queue() {
return new Queue("queue", false);
}
@Bean
TopicExchange exchange() {
return new TopicExchange("exchange");
}
@Bean
Binding binding(Queue queue, TopicExchange exchange) {
return BindingBuilder.bind(queue).to(exchange).with("queue");
}
@Bean
public MappingJackson2MessageConverter jackson2Converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
return converter;
}
@Autowired
private Sender sender;
@Override
public void run(String... args) throws Exception {
sender.sendToRabbitmq(new Foo(), new Bar());
}
}
@Service
class Sender {
@Autowired
private RabbitMessagingTemplate rabbitMessagingTemplate;
@Autowired
private MappingJackson2MessageConverter mappingJackson2MessageConverter;
public void sendToRabbitmq(final Foo foo, final Bar bar) {
this.rabbitMessagingTemplate.setMessageConverter(this.mappingJackson2MessageConverter);
this.rabbitMessagingTemplate.convertAndSend("exchange", "queue", foo);
this.rabbitMessagingTemplate.convertAndSend("exchange", "queue", bar);
}
}
class Bar {
public int age = 33;
}
class Foo {
public String name = "gustavo";
}
CONSUMER MICROSERVICE
package demo.consumer;
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Service;
@SpringBootApplication
@EnableRabbit
public class ConsumerApplication implements CommandLineRunner {
public static void main(String[] args) {
SpringApplication.run(ConsumerApplication.class, args);
}
@Autowired
private Receiver receiver;
@Override
public void run(String... args) throws Exception {
}
}
@Service
class Receiver {
@RabbitListener(queues = "queue")
public void receiveMessage(Foo foo) {
System.out.println("Received <" + foo.name + ">");
}
@RabbitListener(queues = "queue")
public void receiveMessage(Bar bar) {
System.out.println("Received <" + bar.age + ">");
}
}
class Foo {
public String name;
}
class Bar {
public int age;
}
And here is the exception I'm getting:
org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener method could not be invoked with the incoming message
Endpoint handler details:
Method [public void demo.consumer.Receiver.receiveMessage(demo.consumer.Bar)]
Bean [demo.consumer.Receiver@1672fe87]
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:116)
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.onMessage(MessagingMessageListenerAdapter.java:93)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:756)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:83)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:170)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1257)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:660)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1021)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:1005)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:83)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1119)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.amqp.support.converter.MessageConversionException: Cannot handle message
... 13 common frames omitted
Caused by: org.springframework.messaging.converter.MessageConversionException: No converter found to convert to class demo.consumer.Bar, message=GenericMessage [payload=byte[10], headers={amqp_receivedRoutingKey=queue, amqp_receivedExchange=exchange, amqp_deliveryTag=1, amqp_deliveryMode=PERSISTENT, amqp_consumerQueue=queue, amqp_redelivered=false, id=87cf7e06-a78a-ddc1-71f5-c55066b46b11, amqp_consumerTag=amq.ctag-msWSwB4bYGWVO2diWSAHlw, contentType=application/json;charset=UTF-8, timestamp=1433989934574}]
at org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver.resolveArgument(PayloadArgumentResolver.java:115)
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:77)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:127)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:100)
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:113)
... 12 common frames omitted
The exception says there is no converter, and that is true; my problem is that I have no idea how to set the MappingJackson2MessageConverter on the consumer side (please note that I want to use org.springframework.messaging.converter.MappingJackson2MessageConverter and not org.springframework.amqp.support.converter.JsonMessageConverter).
Any thoughts ?
Just in case, you can fork this sample project at:
https://github.com/gustavoorsi/rabbitmq-consumer-receiver
| Ok, I finally got this working.
Spring uses a PayloadArgumentResolver to extract, convert and set the converted message to the method parameter annotated with @RabbitListener. Somehow we need to set the mappingJackson2MessageConverter into this object.
So, in the CONSUMER app, we need to implement RabbitListenerConfigurer. By overriding configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) we can set a custom DefaultMessageHandlerMethodFactory; we set the message converter on this factory, and the factory will create our PayloadArgumentResolver with the correct converter.
Here is a snippet of the code, I've also updated the git project.
ConsumerApplication.java
package demo.consumer;
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.annotation.RabbitListenerConfigurer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistrar;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;
import org.springframework.messaging.handler.annotation.support.DefaultMessageHandlerMethodFactory;
import org.springframework.stereotype.Service;
@SpringBootApplication
@EnableRabbit
public class ConsumerApplication implements RabbitListenerConfigurer {
public static void main(String[] args) {
SpringApplication.run(ConsumerApplication.class, args);
}
@Bean
public MappingJackson2MessageConverter jackson2Converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
return converter;
}
@Bean
public DefaultMessageHandlerMethodFactory myHandlerMethodFactory() {
DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
factory.setMessageConverter(jackson2Converter());
return factory;
}
@Override
public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
registrar.setMessageHandlerMethodFactory(myHandlerMethodFactory());
}
@Autowired
private Receiver receiver;
}
@Service
class Receiver {
@RabbitListener(queues = "queue")
public void receiveMessage(Foo foo) {
System.out.println("Received <" + foo.name + ">");
}
@RabbitListener(queues = "queue")
public void receiveMessage(Bar bar) {
System.out.println("Received <" + bar.age + ">");
}
}
class Foo {
public String name;
}
class Bar {
public int age;
}
So, if you run the Producer microservice it will add 2 messages to the queue: one that represents a Foo object and another that represents a Bar object.
By running the consumer microservice you will see that both are consumed by the respective method in the Receiver class.
Updated issue:
There is a conceptual problem about queuing on my side, I think. What I wanted to achieve is not possible by declaring 2 methods annotated with @RabbitListener that point to the same queue. The solution above was not working properly. If you send to RabbitMQ, let's say, 6 Foo messages and 3 Bar messages, they won't be received 6 times by the listener with the Foo parameter. It seems that the listeners are invoked in parallel, so there is no way to discriminate which listener to invoke based on the method argument type.
My solution (and I'm not sure if this is the best way, I'm open to suggestions here) is to create a queue for each entity.
So now, I have queue.bar and queue.foo, and update @RabbitListener(queues = "queue.foo")
Once again, I've updated the code and you can check it out in my git repository.
| RabbitMQ | 30,770,725 | 13 |
With RabbitMQ I am doing something similar to this:
channel.QueueDeclare(QueueName, true, false, false, null);
By default RabbitMQ creates a new queue if none of the existing ones matches the name provided. I would like to have an exception thrown instead.
Is that possible?
Thanks
| You can bind to an existing queue without declaring a new one.
try
{
channel.QueueBind(queueName, exchange, routingKey);
}
catch (RabbitMQ.Client.Exceptions.OperationInterruptedException ex)
{
// Queue not found
}
An example of the exception thrown if the queue you're trying to bind does not exist:
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=404, text="NOT_FOUND - no queue 'TestQueue' in vhost '/'", classId=50, methodId=20, cause=
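Alternatively, if you only need to check that the queue exists rather than bind to it, a passive declare does the same job and throws the same 404 OperationInterruptedException when the queue is missing. A sketch using the .NET client's QueueDeclarePassive, reusing the channel and queueName from the snippet above:
try
{
    // Passive declare: succeeds only if the queue already exists and never creates it.
    QueueDeclareOk ok = channel.QueueDeclarePassive(queueName);
    Console.WriteLine($"Queue exists, {ok.MessageCount} messages ready");
}
catch (RabbitMQ.Client.Exceptions.OperationInterruptedException)
{
    // Queue not found (404). Note: the channel is closed by this failure,
    // so open a fresh channel before doing anything else.
}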
| RabbitMQ | 28,467,316 | 13 |
We're currently using RabbitMQ, where a continuously super-fast producer is paired with a consumer constrained by a limited resource (e.g. slow-ish MySQL inserts).
We don't like declaring a queue with x-max-length, since all messages will be dropped or dead-lettered once the limit is reached, and we don't want to lose messages.
Adding more consumers is easy, but they'll all be limited by the one shared resource, so that won't work. The problem still remains: How to slow down the producer?
Sure, we could put a flow control flag in Redis, memcached, MySQL or something else that the producer reads as pointed out in an answer to a similar question, or perhaps better, the producer could periodically test for queue length and throttle itself, but these seem like hacks to me.
I'm mostly questioning whether I have a fundamental misunderstanding. I had expected this to be a common scenario, and so I'm wondering:
What is best practice for throttling producers? How is this done with RabbitMQ? Or do you do this in a completely different way?
Background
Assume the producer actually knows how to slow himself down with the right input. E.g. a hardware sensor or hardware random number generator, that can generate as many events as needed.
In our particular real case, we have an API that users can use to add messages. Instead of devouring and discarding messages, we'd like to apply back-pressure by having our API return an error if the queue is "full", so the caller/user knows to back-off, or have the API block until the consumer catches up. We don't control our user, so regardless of how fast the consumer is, I can create a producer that is faster.
I was hoping for something like the API for a TCP socket, where a write() can block and where a select() can be used to determine if a handle is writable. So either having the RabbitMQ API block or have it return an error if the queue is full.
| For the x-max-length property, you said you don't want messages to be dropped or dead-lettered. I see there was an update adding some more capabilities for this, as specified in the documentation:
"Use the overflow setting to configure queue overflow behaviour. If overflow is set to reject-publish, the most recently published messages will be discarded. In addition, if publisher confirms are enabled, the publisher will be informed of the reject via a basic.nack message"
So as I understand it, you can use the queue limit to reject new messages from publishers, thus pushing some backpressure upstream.
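A sketch of what that could look like with the .NET client — declare the queue with a length cap and reject-publish overflow (argument names per the RabbitMQ docs), and turn on publisher confirms so the producer actually observes the nack; the queue name and limit are illustrative:
using System.Collections.Generic;
using System.Text;
using RabbitMQ.Client;

static class BackpressureExample
{
    public static void DeclareAndPublish(IModel channel)
    {
        var args = new Dictionary<string, object>
        {
            { "x-max-length", 100000 },          // cap the queue depth
            { "x-overflow", "reject-publish" }   // nack new publishes instead of dropping old ones
        };
        channel.QueueDeclare("work", durable: true, exclusive: false, autoDelete: false, arguments: args);

        channel.ConfirmSelect();                 // publisher confirms, so the producer sees the nack
        byte[] body = Encoding.UTF8.GetBytes("hello");
        channel.BasicPublish(exchange: "", routingKey: "work", basicProperties: null, body: body);

        if (!channel.WaitForConfirms())
        {
            // Queue is full: back off, or return an error to the API caller so it retries later.
        }
    }
}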
| RabbitMQ | 28,041,933 | 13 |
I've been working with Celery lately and I don't like it. Its configuration is messy, overcomplicated and poorly documented.
I want to send broadcast messages with Celery from a single producer to multiple consumers. What confuses me is discrepancy between Celery terms and terms of underlying transport RabbitMQ.
In RabbitMQ you can have a single fanout Exchange and multiple Queues to broadcast messages:
But in Celery the terms are all messed up: here you can have a broadcast Queue, which sends messages to multiple consumers:
I don't even understand how a Celery broadcast queue is supposed to work at all, because RabbitMQ queues with multiple consumers are meant for load balancing. So in RabbitMQ, if multiple consumers (i.e. a pool of consumers) are connected to the same queue, only one consumer will receive and process a given message, which is called round robin in the RabbitMQ docs.
Also, Celery documentation on broadcast is really insufficient. What type of RabbitMQ exchange should I specify for Broadcast queue, fanout or not? Could you supply a full example?
So, what I'm asking for is (1) clarification of concept and implementation of Broadcast queues in Celery and (2) a complete example of Broadcast queues configuration. Thank you.
| Having looked at the code (it's in the kombu.common package, not celery) and tried it out, it seems to work like this:
You define a Broadcast 'queue' named 'foo' in your celery config.
This creates an Exchange named 'foo', and an auto_delete queue with a unique id (via uuid), and with the alias 'foo' (I don't think the alias is actually used anywhere, it's just there for reference because the queue's real name is randomly generated)
The unique queue is bound to the 'foo' exchange
So, the class is named Broadcast, but it's really a uniquely named queue that is bound to a fanout exchange. Therefore when each worker is started, it creates its own unique queue and binds to the fanout exchange.
| RabbitMQ | 24,284,518 | 13 |
QueueingConsumer consumer = new QueueingConsumer(channel);
System.out.println(consumer.getConsumerTag());
channel.basicConsume("queue1", consumer);
channel.basicConsume("queue3", consumer);
Is it possible to stop consuming the messages from the queue "queue3" alone dynamically?
| Yes you can, using channel.basicCancel(consumerTag);
EDIT
For example:
String tag3 = channel.basicConsume("queue3", consumer);
channel.basicCancel(tag3);
Here you can find code that unsubscribes a consumer after 5 seconds:
String tag1 = channel.basicConsume(myQueue, autoAck, consumer);
String tag2 = channel.basicConsume(myQueue2, autoAck, consumer);
executorService.execute(new Runnable() {
@Override
public void run() {
while (true) {
Delivery delivery;
try {
delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println("Received: " + message);
} catch (Exception ex) {
Logger.getLogger(TestMng.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
});
System.out.println("Consumers Ready");
try {
Thread.sleep(5000);
} catch (InterruptedException ex) {
Logger.getLogger(TestMng.class.getName()).log(Level.SEVERE, null, ex);
}
channel.basicCancel(tag2); /// here you remove only the Myqueue2
I hope it can be useful.
| RabbitMQ | 23,333,863 | 13 |