I'm new to React, and I'd like to ask a strategy question about how best to accomplish a task where data must be communicated between sibling components.

First, I'll describe the task: Say I have multiple `<select>` components that are children of a single parent that passes down the select boxes dynamically, composed from an array. Each box has exactly the same available options in its initial state, but once a user selects a particular option in one box, it must be disabled as an option in all other boxes until it is released.

Here's an example of the same in (silly) code. (I'm using react-select as a shorthand for creating the select boxes.) In this example, I need to disable (i.e., set `disabled: true`) the options for "It's my favorite" and "It's my least favorite" when a user selects them in one select box (and release them if a user de-selects them).

```javascript
var React = require('react');
var Select = require('react-select');

var AnForm = React.createClass({
  render: function() {
    // this.props.fruits is an array passed in that looks like:
    // ['apples', 'bananas', 'cherries', 'watermelon', 'oranges']
    var selects = this.props.fruits.map(function(fruit, i) {
      var options = [
        { value: 'first', label: "It's my favorite", disabled: false },
        { value: 'second', label: "I'm OK with it", disabled: false },
        { value: 'third', label: "It's my least favorite", disabled: false }
      ];
      return (
        <AnChild fruit={fruit} key={i} options={options} />
      );
    });
    return (
      <div id="myFormThingy">
        {selects}
      </div>
    );
  }
});

var AnChild = React.createClass({
  getInitialState: function() {
    return {
      value: '',
      options: this.props.options
    };
  },
  render: function() {
    function changeValue(value) {
      this.setState({ value: value });
    }
    return (
      <div>
        <label htmlFor={this.props.fruit}>{this.props.fruit}</label>
        <Select name={this.props.fruit}
                value={this.state.value}
                options={this.state.options}
                onChange={changeValue.bind(this)}
                placeholder="Choose one" />
      </div>
    );
  }
});
```

Is updating the child options best accomplished by passing data back up to the parent through a callback? Should I use refs to access the child components in that callback? Does a redux reducer help?

I apologize for the general nature of the question, but I'm not finding a lot of direction on how to deal with these sibling-to-sibling component interactions in a unidirectional way. Thanks for any help.
TLDR: Yes, you should use a props-from-top-to-bottom and change-handlers-from-bottom-to-top approach. But this can get unwieldy in a larger application, so you can use design patterns like Flux or Redux to reduce your complexity.

Simple React approach

React components receive their "inputs" as props; and they communicate their "output" by calling functions that were passed to them as props. A canonical example:

```javascript
<input value={value} onChange={changeHandler}>
```

You pass the initial value in one prop, and a change handler in another prop.

Who can pass values and change handlers to a component? Only their parent. (Well, there is an exception: you can use the context to share information between components, but that's a more advanced concept, and it will be leveraged in the next example.)

So, in any case, it's the parent component of your selects that should manage the input for your selects. Here is an example:

```javascript
class Example extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      // keep track of what is selected in each select
      selected: [null, null, null]
    };
  }

  changeValue(index, value) {
    // update the selected option for the select at `index`
    this.setState({
      selected: this.state.selected.map((v, i) => i === index ? value : v)
    });
  }

  getOptionList(index) {
    // return a list of options, with anything selected
    // in the other controls disabled
    return this.props.options.map(({ value, label }) => {
      const selectedIndex = this.state.selected.indexOf(value);
      const disabled = selectedIndex >= 0 && selectedIndex !== index;
      return { value, label, disabled };
    });
  }

  render() {
    return (
      <div>
        <Select value={this.state.selected[0]}
                options={this.getOptionList(0)}
                onChange={v => this.changeValue(0, v)} />
        <Select value={this.state.selected[1]}
                options={this.getOptionList(1)}
                onChange={v => this.changeValue(1, v)} />
        <Select value={this.state.selected[2]}
                options={this.getOptionList(2)}
                onChange={v => this.changeValue(2, v)} />
      </div>
    );
  }
}
```

Redux

The main drawback of the above approach is that you have to pass a lot of information from the top to the bottom; as your application grows, this becomes difficult to manage. React-Redux leverages React's context feature to enable child components to access your Store directly, thus simplifying your architecture.

Example (just some key pieces of your redux application; see the react-redux documentation for how to wire these together, e.g. createStore, Provider...):

```javascript
// reducer.js
// Your Store is made of two reducers:
// 'dropdowns' manages the current state of your three dropdowns;
// 'options' manages the list of available options.
const dropdowns = (state = [null, null, null], action = {}) => {
  switch (action.type) {
    case 'CHANGE_DROPDOWN_VALUE':
      return state.map((v, i) => i === action.index ? action.value : v);
    default:
      return state;
  }
};

const options = (state = [], action = {}) => {
  // reducer code for option list omitted for sake of simplicity
};

// actionCreators.js
export const changeDropdownValue = (index, value) => ({
  type: 'CHANGE_DROPDOWN_VALUE',
  index,
  value
});

// helpers.js
export const selectOptionsForDropdown = (state, index) => {
  return state.options.map(({ value, label }) => {
    const selectedIndex = state.dropdowns.indexOf(value);
    const disabled = selectedIndex >= 0 && selectedIndex !== index;
    return { value, label, disabled };
  });
};

// components.js
import React from 'react';
import { connect } from 'react-redux';
import { changeDropdownValue } from './actionCreators';
import { selectOptionsForDropdown } from './helpers';
import { Select } from './myOtherComponents';

const mapStateToProps = (state, ownProps) => ({
  value: state.dropdowns[ownProps.index],
  options: selectOptionsForDropdown(state, ownProps.index)
});

const mapDispatchToProps = (dispatch, ownProps) => ({
  onChange: value => dispatch(changeDropdownValue(ownProps.index, value))
});

const ConnectedSelect = connect(mapStateToProps, mapDispatchToProps)(Select);

export const Example = () => (
  <div>
    <ConnectedSelect index={0} />
    <ConnectedSelect index={1} />
    <ConnectedSelect index={2} />
  </div>
);
```

As you can see, the logic in the Redux example is the same as in the vanilla React code. But it is not contained in the parent component; it lives in reducers and helper functions (selectors). And instead of top-down passing of props, React-Redux connects each individual component to the state, resulting in simpler, more modular, easier-to-maintain code.
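The core of both versions is the same pure computation: given the full option list and the current selections, derive each dropdown's options. That logic is framework-free and easy to test on its own; here is a minimal runnable sketch (the helper name is ours, but the body mirrors getOptionList above):

```javascript
// Pure helper: compute the option list for the dropdown at `index`,
// disabling any option already chosen in a *different* dropdown.
function optionListFor(options, selected, index) {
  return options.map(({ value, label }) => {
    const selectedIndex = selected.indexOf(value);
    const disabled = selectedIndex >= 0 && selectedIndex !== index;
    return { value, label, disabled };
  });
}

const options = [
  { value: 'first',  label: "It's my favorite" },
  { value: 'second', label: "I'm OK with it" },
  { value: 'third',  label: "It's my least favorite" }
];

// Dropdown 0 picked 'first'; the other dropdowns picked nothing yet.
const selected = ['first', null, null];

// In dropdown 1, 'first' is now disabled...
console.log(optionListFor(options, selected, 1)[0].disabled); // true
// ...but in dropdown 0 (the one that picked it) it stays enabled.
console.log(optionListFor(options, selected, 0)[0].disabled); // false
```

Because the function is pure, it works identically as a component method reading this.state or as a Redux selector reading the store.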
tag: Flux | question_id: 36,143,767 | score: 35
An Egghead tutorial teaches it like this:

```javascript
var React = require('react');
var Router = require('react-router');
var routes = require('./config/routes');

Router.run(routes, function(Root) {
  React.render(<Root />, document.getElementById('app'));
});
```

Yet I get this error:

Uncaught TypeError: Router.run is not a function

Note: I've already updated react-router to the most recent version.
Since the release of React Router v1.0, the run method has been removed; these breaking changes are documented in the upgrade guide. Your code would roughly translate to this:

```javascript
ReactDOM.render(<Router>{routes}</Router>, document.getElementById('app'))
```

https://github.com/rackt/react-router/blob/832c42946c874fe56ffde0066b1088054311cb98/CHANGES.md
tag: Flux | question_id: 32,682,854 | score: 33
I'm exploring React.js, and my question is simple: does my app have to be a single page app if I use React? If no, then how do I control components outside of them? By Flux? Any other standard methods? If yes, then are there any libraries to perform permissions/access validation on the client side for React? Thanks a lot!
A react application need not be a single page application. React provides you with a way to model HTML in terms of classes with specific render logic, but doesn't impose any sort of specific application logic like single vs multi page.

I'm not quite sure I understand the rest of your questions, but I think you are essentially asking how to model a react application as a multi page app. There are many ways; however, one would be to structure your files like so:

```
./app        --> main page for your app
./app/page1/ --> page 1 of your app
./app/page2/ --> page 2 of your app
...
```

In this way, each 'page' would contain a self-contained react project. Your main application page could contain hyperlinks that load these pages, or you could load them asynchronously in your javascript code.

EDIT: The question as clarified in the comment is how one makes a react component change due to some action on the page. Say react component B is contained within react component A. A user presses a button which is contained in react component B, and when pressed, it invokes a callback in react component B, and this click should trigger some action in react component A. That callback should somehow notify react component A that things have changed.

This is where you have some choice about what to do. You could have react component B emit a change that react component A listens for (and re-renders accordingly), or you could use the Flux model. In the Flux model, react component B would emit a state change to some state store, which would trigger an event to be emitted. React component A will have needed to set an event callback for this event, and when react component B emits it, react component A can react to it.
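The store-emits-change idea described above boils down to a store that components can subscribe to, with actions mutating it and notifying listeners. A framework-free sketch of that flow (all names here are illustrative, not part of any flux library):

```javascript
// Minimal flux-style store: components subscribe, updates mutate + notify.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: fn => listeners.push(fn),
    update(change) {
      state = Object.assign({}, state, change);
      listeners.forEach(fn => fn(state));
    }
  };
}

const store = createStore({ clicks: 0 });

// "Component A" listens for store changes and re-renders...
let rendered = null;
store.subscribe(state => { rendered = `clicks: ${state.clicks}`; });

// ..."component B"'s button handler updates the store.
function onButtonClick() {
  store.update({ clicks: store.getState().clicks + 1 });
}

onButtonClick();
console.log(rendered); // "clicks: 1"
```

Component A never references component B directly; both only know about the store, which is what keeps the data flow unidirectional.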
tag: Flux | question_id: 32,130,937 | score: 31
For example...

```javascript
export const user = (state = {
  id: localStorage.getItem('id'),
  name: localStorage.getItem('name'),
  loggedInAt: null
}, action) => {
  switch (action.type) {
    case types.LOGIN:
      localStorage.setItem('name', action.payload.user.name);
      localStorage.setItem('id', action.payload.user.id);
      return { ...state, ...action.payload.user };
    default:
      return { ...state, loggedInAt: Date.now() };
  }
}
```

That's a scaled-down version of what I'm doing; the default case returns the state from localStorage as expected. However, the state of my application is actually blank once I refresh the page.
Redux createStore's 2nd param is intended for store initialization:

```javascript
createStore(reducer, [initialState], [enhancer])
```

So you can do something like this:

```javascript
const initialState = {
  id: localStorage.getItem('id'),
  name: localStorage.getItem('name'),
  loggedInAt: null
};

const store = createStore(mainReducer, initialState);
```

Since reducers should be pure functions (i.e. no side effects) and localStorage.setItem is a side effect, you should avoid saving to localStorage in a reducer. Instead you can:

```javascript
store.subscribe(() => {
  const { id, name } = store.getState();
  localStorage.setItem('name', name);
  localStorage.setItem('id', id);
});
```

This will happen whenever the state changes, so it might affect performance. Another option is to save the state only when the page is closed (refresh counts) using onbeforeunload:

```javascript
window.onbeforeunload = () => {
  const { id, name } = store.getState();
  localStorage.setItem('name', name);
  localStorage.setItem('id', id);
};
```
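The subscribe approach is commonly factored into a pair of small helpers that guard against missing or malformed saved data. A runnable sketch of that pattern (the in-memory `storage` object is a stand-in for the browser's localStorage so the snippet runs anywhere; the helper names are conventional, not a Redux API):

```javascript
// Stand-in for window.localStorage so this runs outside a browser.
const storage = {
  data: {},
  getItem(k) { return k in this.data ? this.data[k] : null; },
  setItem(k, v) { this.data[k] = String(v); }
};

// Serialize the slice of state worth persisting. Best-effort: quota or
// serialization errors should never crash the app.
function saveState(state) {
  try {
    storage.setItem('appState', JSON.stringify(state));
  } catch (e) {
    // ignore write errors
  }
}

// Returning undefined tells createStore to fall back to reducer defaults.
function loadState() {
  try {
    const raw = storage.getItem('appState');
    return raw === null ? undefined : JSON.parse(raw);
  } catch (e) {
    return undefined;
  }
}

saveState({ id: 7, name: 'Ada' });
console.log(loadState()); // { id: 7, name: 'Ada' }
```

You would then call `createStore(mainReducer, loadState())` and `store.subscribe(() => saveState(store.getState()))`, keeping both the reducer and the persistence side effect pure in their own domains.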
tag: Flux | question_id: 36,580,963 | score: 31
I'm a newbie in redux and es6 syntax. I made my app with the official redux tutorial, and with this example. There is a JS snippet below. My goal is to define the REQUEST_POST_BODY and RECEIVE_POST_BODY cases in the posts reducer. The main difficulty is finding and updating the right object in the store. I tried to use code from the example:

```javascript
return Object.assign({}, state, {
  [action.subreddit]: posts(state[action.subreddit], action)
})
```

But that used a simple array of posts; it didn't need to find the right post by id. Here is my code:

```javascript
const initialState = {
  items: [
    { id: 3, title: '1984', isFetching: false },
    { id: 6, title: 'Mouse', isFetching: false }
  ]
}

// Reducer for posts store
export default function posts(state = initialState, action) {
  switch (action.type) {
    case REQUEST_POST_BODY:
      // here I need to set post.isFetching => true
    case RECEIVE_POST_BODY:
      // here I need to set post.isFetching => false and post.body => action.body
    default:
      return state;
  }
}

function requestPostBody(id) {
  return { type: REQUEST_POST_BODY, id };
}

function receivePostBody(id, body_from_server) {
  return { type: RECEIVE_POST_BODY, id, body: body_from_server };
}

dispatch(requestPostBody(3));
dispatch(receivePostBody(3, { id: 3, body: 'blablabla' }));
```
With Arrays

If you'd prefer to stick with arrays, then you can write a reducer that just tackles single post objects.

```javascript
export default function reducePost(post, action) {
  if (post.id !== action.id) return post;
  switch (action.type) {
    case REQUEST_POST_BODY:
      return Object.assign({}, post, { isFetching: true });
    case RECEIVE_POST_BODY:
      return Object.assign({}, post, { isFetching: false, body: action.body });
    default:
      return post;
  }
}
```

Your root reducer would become:

```javascript
export default function posts(state = initialState, action) {
  return {
    items: state.items.map(post => reducePost(post, action))
  };
}
```

We're just running our new reducer over each post in the list, to return an updated array of posts. In this case, the unique id will ensure that only one item will be changed.

With Objects

If each item has a unique string/number id, then you can flip your array around and use an object instead.

```javascript
const initialState = {
  items: {
    3: { id: 3, title: '1984', isFetching: false },
    6: { id: 6, title: 'Mouse', isFetching: false }
  }
};
```

Then you can simplify your reducer. (Here state refers to the id-keyed items map; in practice this reducer would manage the items slice of the store. Wrapping each case in braces gives each id binding its own block scope, so the declarations don't collide.)

```javascript
switch (action.type) {
  case REQUEST_POST_BODY: {
    const id = action.id;
    return Object.assign({}, state, {
      [id]: Object.assign({}, state[id], { isFetching: true })
    });
  }
  case RECEIVE_POST_BODY: {
    const id = action.id;
    return Object.assign({}, state, {
      [id]: Object.assign({}, state[id], { isFetching: false, body: action.body })
    });
  }
  default:
    return state;
}
```

If you're happy to experiment with some ES7 syntax too, you can enable the object spread operator with Babel and rewrite the calls to Object.assign.

```javascript
switch (action.type) {
  case REQUEST_POST_BODY: {
    const id = action.id;
    return { ...state, [id]: { ...state[id], isFetching: true } };
  }
  case RECEIVE_POST_BODY: {
    const id = action.id;
    return { ...state, [id]: { ...state[id], isFetching: false, body: action.body } };
  }
  default:
    return state;
}
```

If you're not so keen on using the spread syntax, then it's still possible to make Object.assign a bit more palatable.

```javascript
function $set(...objects) {
  return Object.assign({}, ...objects);
}

case RECEIVE_POST_BODY: {
  const id = action.id;
  return $set(state, {
    [id]: $set(state[id], { isFetching: false, body: action.body })
  });
}
```
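To see the object-keyed version behave end to end, here's a runnable sketch with no Redux dependency (the hand-rolled dispatch stands in for the store; action type names mirror the ones above):

```javascript
const REQUEST_POST_BODY = 'REQUEST_POST_BODY';
const RECEIVE_POST_BODY = 'RECEIVE_POST_BODY';

// The id-keyed items reducer described above.
function items(state, action) {
  switch (action.type) {
    case REQUEST_POST_BODY: {
      const id = action.id;
      return { ...state, [id]: { ...state[id], isFetching: true } };
    }
    case RECEIVE_POST_BODY: {
      const id = action.id;
      return { ...state, [id]: { ...state[id], isFetching: false, body: action.body } };
    }
    default:
      return state;
  }
}

let state = {
  3: { id: 3, title: '1984', isFetching: false },
  6: { id: 6, title: 'Mouse', isFetching: false }
};
const dispatch = action => { state = items(state, action); };

dispatch({ type: REQUEST_POST_BODY, id: 3 });
console.log(state[3].isFetching); // true
console.log(state[6].isFetching); // false (untouched)

dispatch({ type: RECEIVE_POST_BODY, id: 3, body: 'blablabla' });
console.log(state[3].isFetching); // false
console.log(state[3].body);       // "blablabla"
```

Note that the lookup is O(1) by id, which is the main practical win over mapping across an array.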
tag: Flux | question_id: 36,031,590 | score: 29
I am a bit confused by the statements: "Renders the whole application" and "Passing state to child components".

Example 1: I have a todos app with an AppComponent and a TodosListComponent. The AppComponent grabs the array of todos from the store and passes it as a property to the TodosListComponent.

Example 2: I have a huge application with lots of state. I have like 50 components building up my app. Do I want to pass all the state from the stores from AppComponent down through all the 50 components?

So I am wondering, what is the convention? It makes more sense to me to let individual components listen directly to the stores they care about. The advantage is that only individual components rerender, but why then the concept of "the whole application rerenders on state change"? What are the pros and cons of each? What is the common convention?
There are a few ways you can handle this. I think they're all valid and have their own trade-offs.

Get all the state and pass pieces of it to children

This is the technique you specifically asked about. Using this method, you'll have some function or method available to your top-level component that turns all the data from the stores into a "big bag of state," and then you'll selectively pass pieces of this data to child components. If those components have their own children, they'll pass it along as necessary.

The upside to this method is that it makes things generally easy to debug. If you have to change the way a piece of state is retrieved from a store, you only have to change it in the top-level component; as long as it gets passed down with the same name, the other components will "just work." If some piece of data is wrong, you should only need to look in one place to figure out why.

The downside to this technique is what I call "props explosion": you can end up passing a lot of properties around. I use this method in a medium-sized flux application, and a snippet of the top-level application component looks like this:

```javascript
<section id="col-left">
  <Filters loading={this.state.loading}
           events={this.state.events}
           playbackRate={this.state.videoPlayback.playbackRate}
           autoPlayAudio={this.state.audioPlayback.autoPlay}
           role={this.state.role} />
</section>

<section id="col-center" className={leftPaneActive ? "" : "inactive"}>
  <SessionVideo videoUuid={this.state.session.recording_uuid}
                lowQualityVideo={this.state.session.low_quality_video_exists}
                playbackRate={this.state.videoPlayback.playbackRate} />
  <section id="transcript">
    <Transcript loading={this.state.loading}
                events={this.state.events}
                currentEvents={this.state.currentEvents}
                selection={this.state.selection}
                users={this.state.session.enrolled_users}
                confirmedHcs={this.state.ui.confirmedHcs}
                currentTime={this.state.videoPlayback.position}
                playing={this.state.videoPlayback.playing} />
  </section>
</section>
```

In particular, there can be a lot of components between the top-level one and some eventual child that do nothing with the data except pass it along, more closely coupling those components to their position in the hierarchy.

Overall, I like the debuggability this technique provides, though as the application grew larger and more complex I found it was not ideal to do this with only a single top-level component.

Get all the state and pass it as one object

One of the developers at Facebook mentioned this technique. Here, you'll get a big bag of state, just as above, but you'll pass the whole thing (or entire sub-sections of it) rather than individual properties. By utilizing React.PropTypes.shape in child components, you can ensure that the right properties are getting passed. The upside is you pass way fewer properties around; the above example might look more like this:

```javascript
<section id="col-left">
  <Filters state={this.state} />
</section>

<section id="col-center" className={leftPaneActive ? "" : "inactive"}>
  <SessionVideo session={this.state.session}
                playback={this.state.videoPlayback} />
  <section id="transcript">
    <Transcript state={this.state} />
  </section>
</section>
```

The downside is that it becomes a little more difficult to deal with changes in the shape of the state; rather than just changing the top-level component, you'll have to track down everywhere that piece of data is used and change the way that component accesses the property. Also, shouldComponentUpdate can potentially become a little trickier to implement.

Allow components to get their own state

On the other end of the spectrum, you can allow application-specific (that is, non-reusable) child components to access the stores and build up their own state based on the store change events. Components that build their own state like this are sometimes called "controller-views" or, more commonly these days, "container components."

The upside, of course, is that you don't have to deal with passing properties around at all (other than change handlers and properties for more reusable components). The downside, though, is that your components are more highly coupled to the stores; changing the stores or the data they provide (or the interface via which they provide that data) may force you to revisit the code for a larger number of components.

Also, as mentioned in the comments, this can potentially make server rendering a bit more difficult. If you only use properties (especially at only the top level), you can transport them more easily to the client and re-initialize React with the same properties. By allowing the stores to determine their own data, you need to somehow inject that data into the stores to allow the components to get that data.
A common approach, and one that I typically use now, is to make every component in your application only rely on props for global application state, and then decide if it makes more sense to (1) connect them directly to flux by wrapping them in a container, or (2) allow the props to be passed from some parent container. There are abstractions that you might be able to use to make some of these techniques more viable. For example, a Facebook dev had this to say in a comment on Hacker News: Now all your data is in stores, but how do you get it into the specific component that needs it? We started with large top level components which pull all the data needed for their children, and pass it down through props. This leads to a lot of cruft and irrelevant code in the intermediate components. What we settled on, for the most part, is components declaring and fetching the data they need themselves, except for some small, more generic components. Since most of our data is fetched asynchronously and cached, we've created mixins that make it easy to declare which data your component needs, and hook the fetching and listening for updates into the lifecycle methods (componentWillMount, etc).
tag: Flux | question_id: 26,563,933 | score: 26
When reading the redux docs I found this:

"Still, you should do your best to keep the state serializable. Don't put anything inside it that you can't easily turn into JSON."

So my question is, what's the benefit of keeping state serializable? Or, what difficulties might I run into if I put non-serializable data into the store? And I believe this is not unique to redux; Flux, and even React local state, suggest the same thing.

To make it clear, here is an example. Suppose the store structure is like this:

```javascript
{
  books: {
    1: { id: 1, name: "Book 1", author_id: 4 }
  },
  authors: {
    4: { id: 4, name: "Author 4" }
  }
}
```

This all looks good. However, when I try to access "the author of Book 1", I have to write code like this:

```javascript
let book = store.getState().books[book_id];
let author = store.getState().authors[book.author_id];
```

Now, I'm going to define a class:

```javascript
class Book {
  getAuthor() {
    return store.getState().authors[this.author_id];
  }
}
```

And my store will be:

```javascript
{
  books: {
    1: Book(id=1, name="Book 1")
  },
  ...
}
```

So that I can get the author easily by using:

```javascript
let author = store.getState().books[book_id].getAuthor();
```

The 2nd approach makes the "book" object aware of how to retrieve the author data, so the caller does not need to know the relation between books and authors. Then why are we not using it, instead of keeping plain objects in the store as in approach #1? Any ideas are appreciated.
Directly from the redux FAQs: Can I put functions, promises, or other non-serializable items in my store state? It is highly recommended that you only put plain serializable objects, arrays, and primitives into your store. It's technically possible to insert non-serializable items into the store, but doing so can break the ability to persist and rehydrate the contents of a store, as well as interfere with time-travel debugging. If you are okay with things like persistence and time-travel debugging potentially not working as intended, then you are totally welcome to put non-serializable items into your Redux store. Ultimately, it's your application, and how you implement it is up to you. As with many other things about Redux, just be sure you understand what tradeoffs are involved. Further reading: What is time travel debugging?
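One concrete difficulty: persistence and rehydration typically go through JSON, and a JSON round-trip silently strips methods and prototypes, so rehydrated state loses behavior like getAuthor. A small runnable sketch of that failure mode (class and field names follow the question's example):

```javascript
// A book modeled as a class instance, as in the question's 2nd approach.
class Book {
  constructor(id, name, author_id) {
    this.id = id;
    this.name = name;
    this.author_id = author_id;
  }
  getAuthor(state) {
    return state.authors[this.author_id];
  }
}

const state = {
  books: { 1: new Book(1, 'Book 1', 4) },
  authors: { 4: { id: 4, name: 'Author 4' } }
};

// Persist and rehydrate, the way persistence or time-travel tooling would.
const revived = JSON.parse(JSON.stringify(state));

console.log(state.books[1].getAuthor(state).name); // "Author 4"
console.log(typeof revived.books[1].getAuthor);    // "undefined": method is gone
console.log(revived.books[1].author_id);           // 4: plain data survives
```

The usual remedy is to keep plain data in the store and move getAuthor-style logic into selector functions that take the state as an argument.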
tag: Flux | question_id: 40,941,079 | score: 26
I'm trying to port a Backbone.Marionette app to React and am facing difficulty thinking about query params. I think I'm missing a really simple piece in understanding this pattern, so I apologize if this question is totally nonsense. I would appreciate any support or just pointing me in some direction that I can google more specifically.

There's a /users page which lists users, and you can filter the users via a search bar. So if you want to filter the users which contain 'joe' in their username, I would make a request to the server with query params like /users?username=joe. In addition I am able to paginate by adding a page parameter, too (/users?username=joe&page=1).

If I only think about the functionality, the flow would probably be:

1. The client inserts joe to the input element and clicks Search.
2. Clicking the Search button fires an Action (like Action.getUser).
3. The Action makes a request to the server and receives the results.
4. The Dispatcher dispatches a function with the results payload to whomever (usually the Store) is interested in the Action.
5. The Store's state changes with the new result received by the Action.
6. The View (Component) re-renders by listening to the Store's change.

And it works as expected. However, I would like the client to be able to bookmark the current filtered result and be able to come back to the same page some time later. This means I will need somewhere to save explicit information about the search term the client made, which is usually the url (am I right?). So I will need to update the url with query parameters to save the search term (/users?username=joe&page=1).

What I'm confused about is where and when to update the url. What I can come up with right now are the 2 options below, and they don't seem clean at all.

Option 1

1. The client inserts joe to the input element and clicks Search.
2. Clicking the Search button fires a transition of the ReactRouter with the new query params (/users?username=joe&page=1).
3. The View (Component) receives the new params via this.props.params and this.props.query.
4. The View (Component) fires an Action like Action.getUser depending on the query params it receives, in this case username=joe&page=1.
5. After this, it is the same as above.

Option 2 (only step 6 is different from what I explained above)

1. The client inserts joe to the input element and clicks Search.
2. Clicking the Search button fires an Action (like Action.getUser).
3. The Action makes a request to the server and receives the results.
4. The Dispatcher dispatches a function with the results payload to whomever (usually the Store) is interested in the Action.
5. The Store's state changes with the new result received by the Action.
6. The View (Component) re-renders by listening to the Store's change, and somehow (I don't know how, yet) updates its url depending on its props (like this.props.searchusername and this.props.searchpage).

What is the best practice on handling query params? (Or this may not be specific to query params.) Am I completely misunderstanding the design pattern or architecture? Thanks in advance for any support.

Some articles I've read:

- Any way to get current params or current query from router (outside of component)?
- Async data and Flux stores
- Make it easier to add query parameters
- React Router and Arbitrary Query Params: Page Refreshes Unintentionally on Load?
- Add default params?
I would consider best practice to be having the submit button only set the location query (username). The rest should be taken care of by the main React component that is assigned as the router component. By this, you can be sure that anytime someone revisits or shares the url, they get the same results. And this is very generic too. Something like this:

```javascript
let myQuery = this.props.location.query;
if (myQuery.username) {
  this.setState({
    userName: myQuery.username
  });
} else {
  this.setState({
    userName: false // Show All
  });
}
```

Then use this userName state to send to the server to search with. This way, you will not need to change the code of the component that takes care of listing users, since the server already sends the relevant data.

In my experience with using location queries in React, I have been very happy with their reactivity cycles and performance. I'd highly recommend keeping every distinct app state in sync with the url.
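The underlying idea, deriving search state from the url rather than storing it separately, can be shown without React at all: both Node and browsers expose WHATWG URL parsing. A runnable sketch (the route and param names just follow the question's /users example; the helper is ours):

```javascript
// Parse the bookmarkable url the question describes.
const url = new URL('https://example.com/users?username=joe&page=1');

// Derive the "search state" from the url, not from component state.
function searchStateFrom(url) {
  return {
    userName: url.searchParams.get('username') || false, // false => show all
    page: Number(url.searchParams.get('page') || 1)
  };
}

console.log(searchStateFrom(url)); // { userName: 'joe', page: 1 }

// A bare /users url falls back to defaults, so bookmarks always work.
console.log(searchStateFrom(new URL('https://example.com/users')));
// { userName: false, page: 1 }
```

Because the url is the single source of truth, revisiting or sharing a bookmark reproduces the same filtered view with no extra state to restore.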
tag: Flux | question_id: 32,208,655 | score: 24
I'm a bit stuck thinking about how to implement a reducer whose entities can have children of the same type. Let's take reddit comments as an example: each comment can have child comments, which can themselves have comments, etc. For simplicity, a comment is a record of type {id, pageId, value, children}, with pageId being the reddit page. How would one model the reducer around that? I was thinking of having the reducer be a map of comment ids to comments, where you can filter by page using the pageId. The issue is that, for example, when we want to add a comment to a nested one, we need to create the record at the root of the map and then add its id to the parent's children property. To display all the comments we'd need to get all of them, filter out those that are at the top level (which would be kept in the page reducer as an ordered list, for example), then iterate over them, fetching from the comments object recursively whenever we encounter children. Is there a better approach than that, or is it flawed?
The official solution to this is to use normalizr to keep your state like this:

```javascript
{
  comments: {
    1: { id: 1, children: [2, 3] },
    2: { id: 2, children: [] },
    3: { id: 3, children: [42] },
    ...
  }
}
```

You're right that you'd need to connect() the Comment component so each can recursively query the children it's interested in from the Redux store:

```javascript
class Comment extends Component {
  static propTypes = {
    comment: PropTypes.object.isRequired,
    childComments: PropTypes.arrayOf(PropTypes.object.isRequired).isRequired
  };

  render() {
    return (
      <div>
        {this.props.comment.text}
        {this.props.childComments.map(child =>
          <Comment key={child.id} comment={child} />
        )}
      </div>
    );
  }
}

function mapStateToProps(state, ownProps) {
  return {
    childComments: ownProps.comment.children.map(id => state.comments[id])
  };
}

Comment = connect(mapStateToProps)(Comment);

export default Comment;
```

We think this is a good compromise. You pass comment as a prop, but the component retrieves childComments from the store.
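normalizr automates the flattening step, but the transformation itself is simple enough to sketch by hand. This runnable version (the helper name is ours, not normalizr's API) turns a nested comment tree into the id-keyed shape shown above:

```javascript
// Flatten a nested comment tree into an id-keyed map where each entry
// stores only its children's ids, the shape normalizr would produce.
function normalizeComments(comment, table = {}) {
  table[comment.id] = {
    id: comment.id,
    text: comment.text,
    children: comment.children.map(c => c.id)
  };
  comment.children.forEach(c => normalizeComments(c, table));
  return table;
}

const nested = {
  id: 1, text: 'root', children: [
    { id: 2, text: 'first reply', children: [] },
    { id: 3, text: 'second reply', children: [
      { id: 42, text: 'deep reply', children: [] }
    ] }
  ]
};

const comments = normalizeComments(nested);
console.log(comments[1].children); // [2, 3]
console.log(comments[3].children); // [42]
console.log(comments[42].text);    // "deep reply"
```

With this shape, adding a reply anywhere in the tree is two flat updates: insert the new record under its id, and append that id to the parent's children array.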
tag: Flux | question_id: 32,798,193 | score: 24
Tools: React 0.14.0, vanilla Flux

I need unique identifiers for 2 reasons:

1. Child reconciliation
2. Keeping track of what child was clicked

So let's say I have a list of messages that looks like this:

```javascript
[
  {
    id: 1241241234, // <----- The unique id is kept here
    authorName: "Nick",
    text: "Hi!"
  },
  ...
]
```

And now I use Array.prototype.map() to create the "ownee" component (MessageListItem) inside of the owner component MessageSection:

```javascript
function getMessageListItem(message) {
  return (
    <MessageListItem key={message.id} message={message} />
  );
}

var MessageSection = React.createClass({
  render: function() {
    var messageListItems = this.state.messages.map(getMessageListItem);
    return (
      <div>
        {messageListItems}
      </div>
    );
  }
});
```

But this.props.key is undefined in the MessageListItem, even though I know for a fact that it was defined when it was passed down:

```javascript
var MessageListItem = React.createClass({
  render: function() {
    console.log(this.props.key); // undefined
  }
});
```

I'm guessing there is a reason that React is not letting key be used as a prop.

Question: If I can't use key as a prop, then what is the proper way to handle the dual need of keying and setting unique identifiers on a dynamic list of child elements that contain state?
key and ref aren't really 'props'. They're used internally by React and are not passed to components as props. Consider passing the identifier as an additional prop, such as 'id'.
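To illustrate why the identifier has to be duplicated, here's a framework-free sketch: the element factory below mimics React's behavior of consuming key rather than forwarding it (the factory is a stand-in so the snippet runs without React, but real React.createElement treats key the same way):

```javascript
// Stand-in for React.createElement: `key` is pulled out for reconciliation
// and never reaches the component's props, mirroring React's behavior.
function createElement(type, { key, ...props }) {
  return { type, key, props };
}

const messages = [{ id: 1241241234, authorName: 'Nick', text: 'Hi!' }];

const items = messages.map(m =>
  // Pass the id twice: once as `key` (for React), once as `id` (for you).
  createElement('MessageListItem', { key: m.id, id: m.id, message: m })
);

console.log(items[0].key);       // 1241241234: used for reconciliation
console.log(items[0].props.key); // undefined: not visible inside the component
console.log(items[0].props.id);  // 1241241234: the prop the component can read
```

Inside the real component you'd then use this.props.id (or this.props.message.id) in click handlers, while key continues to serve reconciliation.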
tag: Flux | question_id: 33,661,511 | score: 24
I would like to pass router params into Vuex actions, without having to fetch them for every single action in a large form, like so:

```javascript
edit_sport_type({ rootState, state, commit }, event) {
  const sportName = rootState.route.params.sportName  // <-------
  const payload = { sportName, event }                // <-------
  commit(types.EDIT_SPORT_TYPE, payload)
},
```

Or like so:

```javascript
edit_sport_type({ state, commit, getters }, event) {
  const payload = { sportName: getters.getSportName, event }  // <-------
  commit(types.EDIT_SPORT_TYPE, payload)
},
```

Or even worse: grabbing params from component props and passing them to dispatch, for every dispatch. Is there a way to abstract this for a large set of actions? Or perhaps an alternative approach within mutations themselves?
To get params from a Vuex store action, import your vue-router instance, then access the params of the router instance from your Vuex store via the router.currentRoute object. Sample implementation below: router at src/router/index.js: import Vue from 'vue' import VueRouter from 'vue-router' import routes from './routes' Vue.use(VueRouter) const router = new VueRouter({ mode: 'history', routes }) export default router import the router in the Vuex store: import router from '@/router' then access the params in a Vuex action function (in this case "id") like below: router.currentRoute.params.id
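A hedged sketch of the pattern (the router instance is stubbed as a plain object here; in a real app it would be the import from `@/router`, and the param name `sportName` comes from the question):

```javascript
// Stub standing in for the imported vue-router instance.
const router = { currentRoute: { params: { sportName: 'cycling' } } };

// The action reads the param from the router directly, so callers
// no longer have to pass it along with every dispatch.
const actions = {
  edit_sport_type({ commit }, event) {
    const sportName = router.currentRoute.params.sportName;
    commit('EDIT_SPORT_TYPE', { sportName, event });
  },
};
```

Every action in the form can use the same one-liner, which removes the repeated rootState/getter plumbing.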
Flux
42,178,851
23
A common question of newcomers to React is why two-way data binding is not a built-in feature, and the usual response includes an explanation of unidirectional data flow along with the idea that two-way data binding is not always desirable for performance reasons. It's the second point that I'd like to understand in more detail. I am currently working on a form library for apollo-link-state (a new client-side state management tool from Apollo). The concept is very similar to redux-form except using apollo-link-state instead of redux as the state manager. (Note that form state is stored separately from the state of domain entities, although an entity can optionally be used to populate the initial state of a form.) When the user makes changes on the form, the library immediately updates the store via onChange handlers. I was thinking about allowing individual fields to opt-out of that behavior in case the programmer was concerned about performance, but then I started wondering when this would ever be a real performance issue. The browser is going to fire the oninput event no matter what, so the only performance consideration I can think of is whether or not the store is updated as the user types. Granted there is the additional overhead of executing a mutation rather than just calling setState(), but that essentially just amounts to a couple additional function calls. And let's suppose that I weren't using apollo but just calling a function that updates some global store directly - what would be the performance consideration then? My thinking is that if a form is going to support immediately updating the form state as the user types in one field, it might as well do so for all the fields. The user can only type in one field at a time, and I don't see the benefit of making the page sometimes faster (probably negligible) with some fields and sometimes slower with others. 
Furthermore, my library allows consumers to use whatever input components they want, so if the programmer just wants fewer state updates, they could just write a component that debounces React's onChange event or uses the browser's own change or blur event instead. Am I missing something here? Is there some other reason why a user of my library would want to ignore changes for particular fields until the user submits the form? Or maybe a more useful option would be to ignore changes to the entire form (until submit)? Here's a basic (greatly simplified) illustration of the concept behind my current approach: // defined in a globally-accessible module const formState = { // This somehow causes any dependent form components to re-render // when state changes update(formName, updatedState) { ... } } export default formState ... // UserForm.js: export default class UserForm extends PureComponent { componentDidMount() { formState.userForm = { name: '', email: '', } } handleChange(e) { const { target } = e formState.update('userForm', { [target.name]: target.value }) } //... render() { const { userForm } = formState return ( <form onSubmit={this.handleSubmit}> <label htmlFor="name">Name</label> <input id="name" name="name" type="text" onChange={this.handleChange} value={userForm.name} /> <label htmlFor="email">Email</label> <input id="email" name="email" type="email" onChange={this.handleChange} value={userForm.email} /> </form> ) } } Finally, for the sake of completeness, I should mention that there are some API design considerations involved in this as well. Individual input components could have a slightly simpler design if I did not provide an option to opt-out of the automatic 2-way binding. I can post details if anyone is interested.
2 way data binding implications Starting from the first part of your question, there are two primary reasons for react not going with two way data binding: A single source of truth for data changes in a React app, hence less chances of bugs and easier debugging Performance benefits In React, we can share state among different child components by lifting the state up to a common parent component. When a shared piece of state is updated, all the child components can update themselves. Here is a good example from the docs related to forms. Talking about the performance benefits, two way data binding in some other context (say AngularJS) works by means of watchers watching different elements. This sounds easier (and less code than React's one way data flow) for a small number of elements, however as the number of your UI components/elements grow, so does the number of watchers. A single change in this case causes a lot of watchers to fire up in order to keep things in sync. This makes the performance a bit sluggish. In case of React, since there is only one way data flow, it's easier to determine which components need to be updated. Handling state updates Coming to the second part of your question, your state library provides the data to your form components causing any dependent components to update on state change, sweet. Here are my thoughts: I was thinking about allowing individual fields to opt-out of that behavior in case the programmer was concerned about performance, but then I started wondering when this would ever be a real performance issue. The store update in itself will happen pretty quick. JavaScript runs very fast, it's the DOM updates which often times causes bottlenecks. So, unless there are hundreds of dependent form elements on the same page and all of them are getting updated, you'll be just fine. 
And let's suppose that I weren't using apollo but just calling a function that updates some global store directly - what would be the performance consideration then? I don't think it'll have significant difference. My thinking is that if a form is going to support immediately updating the form state as the user types in one field, it might as well do so for all the fields. The user can only type in one field at a time, and I don't see the benefit of making the page sometimes faster (probably negligibly) with some fields and sometimes slower with others. Agreed with this. My library allows consumers to use whatever input components they want, so if the programmer just wants fewer state updates, they could just write a component that debounces React's onChange event or uses the browser's own change or blur event instead. I think most of the use cases would be solved with a simple input. Again, I don't see a performance benefit with fewer state updates here. Debounce could be useful if for example I'm running an API call on the input (and want to wait before the user stops typing). Is there some other reason why a user of my library would want to ignore changes for particular fields until the user submits the form? Or maybe a more useful option would be to ignore changes for the entire form (until submit)? I don't see a benefit in ignoring changes for a particular field or waiting until submit. On the other hand, when using forms, a common use case I come across implementing things is data validation. For example, provide feedback to the user as and when he is creating a password check if an email is valid perform API calls to see if a username is valid, etc. These cases would need the state to be updated as the user is typing. tl;dr You should be fine with updating state as the user is typing. If you're still concerned about performance, I would suggest to profile your components to isolate bottlenecks if any :)
Flux
48,931,995
23
My code https://gist.github.com/ButuzGOL/707d1605f63eef55e4af So when I get the sign-in success callback I want to make a redirect; the redirect works through the dispatcher too. And I am getting Dispatch.dispatch(...): Cannot dispatch in the middle of a dispatch. Is there any hack to call an action in the middle?
I don't see where in the gist that you posted you are doing the redirect. I only see the AUTH_SIGNIN and AUTH_SIGNIN_SUCCESS actions, and they look pretty straightforward. But no, there is no hack to create an action in the middle of a dispatch, and this is by design. Actions are not supposed to be things that cause a change. They are supposed to be like a newspaper that informs the application of a change in the outside world, and then the application responds to that news. The stores cause changes in themselves. Actions just inform them. If you have this error, then you need to back up and look at how you're handling the original action. Most often, you can set up your application to respond to the original action, accomplish everything you need to do, and avoid trying to create a second action.
Flux
26,581,587
21
From the discussion here it seems that the state of Redux reducers should be persisted in a database. How does something like user authentication works in this instance? Wouldn't a new state object be created to replace the previous state in the database for every user (and their application state) created and edited? Would using all of this data on the front end and constantly updating the state in the database be performant? Edit: I've created an example Redux auth project that also happens to exemplify universal Redux, and realtime updating with Redux, Socket.io and RethinkDB.
From the discussion here it seems that the state of Redux reducers should be persisted in a database. Whether to persist the state or not is likely not a concern of Redux at all; it's more up to application logic. If something happens in an application, like data being uploaded to a server, obviously you need to save the state (or a slice of it) to that server. Since network calls are asynchronous but Redux is synchronous, you need to introduce additional middleware, such as redux-thunk or redux-promise. For a sign-up example, you would likely need actions like these: export function creatingAccount() { return { type: 'CREATING_ACCOUNT' }; } export function accountCreated(account) { return { type: 'ACCOUNT_CREATED', payload: account }; } export function accountCreatingFailed(error) { return { type: 'ACCOUNT_CREATING_FAILED', payload: error }; } export function createAccount(data, redirectParam) { return (dispatch) => { dispatch(creatingAccount()); const url = config.apiUrl + '/auth/signup'; fetch(url, { method: 'POST', body: data }) .then(res => res.json()) .then(account => { dispatch(accountCreated(account)); }) .catch(err => { dispatch(accountCreatingFailed(err)); }); }; } Some portion of the state, e.g. the user object after authorization, might be stored in localStorage and re-hydrated on the next application run.
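The localStorage point at the end can be sketched like this (a plain object with the same setItem/getItem shape stands in for `window.localStorage`, so the snippet is environment-agnostic):

```javascript
// Stub with the same API surface as window.localStorage.
const storage = {
  _data: {},
  setItem(key, value) { this._data[key] = String(value); },
  getItem(key) { return key in this._data ? this._data[key] : null; },
};

// Persist a slice of state (e.g. the user object after authorization)...
function persistUser(user) {
  storage.setItem('user', JSON.stringify(user));
}

// ...and re-hydrate it on the next run, e.g. as the preloaded state:
// createStore(reducer, { user: rehydrateUser() })
function rehydrateUser() {
  const raw = storage.getItem('user');
  return raw === null ? null : JSON.parse(raw);
}
```

Only a small, serializable slice should be persisted this way; the rest of the state is rebuilt from the server as usual.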
Flux
33,726,644
19
Is there a way to cancel an action or ignore it? Or rather what is the best/recommended way to ignore an action? I have the following action creator and when I input an invalid size (say 'some_string') into the action creator, in addition to getting my own warning message I also get: Uncaught Error: Actions must be plain objects. Use custom middleware for async actions. import { SET_SELECTED_PHOTOS_SIZE } from './_reducers'; export default (size=0) => { if (!isNaN(parseFloat(size))) { return { type: SET_SELECTED_PHOTOS_SIZE, size: size, }; } else { app.warn('Size is not defined or not a number'); } }; I've discussed this in the redux-channel in Discord (reactiflux) where one suggestion was to use redux-thunk like this: export default size => dispatch => { if (!isNaN(parseFloat(size))) { dispatch({ type: SET_SELECTED_PHOTOS_SIZE, size: size, }); } else { app.warn('Size is not defined or not a number'); } } The other option was to ignore the action inside the reducer. This does make the reducer "fatter" because it then has more responsibilities, but it uses less thunk-actions which makes it easier to debug. I could see the thunk-pattern getting out of hand since I would be forced to use it for almost every action, making batched actions a bit of a pain to maintain if you have lots of them.
Ignoring actions in Action Creators is basically a way of treating them as Command Handlers, not Event Creators. When the User clicks the button it’s some kind of Event though. So there are basically two ways to solve the issue: The condition is inside the action creator and thunk-middleware is used const cancelEdit = () => (dispatch, getState) => { if (!getState().isSaving) { dispatch({type: CANCEL_EDIT}); } } The condition is inside the reducer and no middleware is required function reducer(appState, action) { switch(action.type) { case CANCEL_EDIT: if (!appState.isSaving) { return {...appState, editingRecord: null } } else { return appState; } default: return appState; } } I strongly prefer treating UI interaction as Events instead of Commands, and there are two advantages: All your domain logic stays in the synchronous pure reducers which are very easy to test. Just imagine you needed to write a unit test for the functionality. const state = { isSaving: true, editingRecord: 'FOO' }; // State is not changed because Saving is in progress assert.deepEqual( reducer(state, {type: 'CANCEL_EDIT'}), state ); // State has been changed because Saving is not in progress anymore assert.deepEqual( reducer({...state, isSaving: false}, {type: 'CANCEL_EDIT'}), {isSaving: false, editingRecord: null} ); As you can see, the test is really simple when you treat the interaction as an Event. What if you decided that instead of ignoring the action you would rather show some visual indication that the action is not possible? You would need to dispatch another action or basically rebuild it. However, you can’t use hot-reload with replay here because the logic in the action creator is not re-playable. If the logic is in the reducer though, you can simply change the behaviour, the reducer will get hot-reloaded and all the events get replayed. The only event that you dispatch is that the user clicked some button, and you can’t deny that fact. So unless you drastically change the UI you can always hot-reload with replay.
When you think about any interaction with the UI as an Event then you will get the best possible replay experience, because Events can’t be denied they have just happened.
Flux
34,184,677
18
I'm using ReactJS with Redux, and in some tutorials and code I see people suggesting and using normalizr to keep the state flat. But what is the real advantage in keeping it flat? Will I encounter any problems if I don't? Is it necessary?
Three main reasons: Updating nested Javascript objects immutably generally results in uglier code that is harder to maintain, unless you use a utility library to wrap up the process Immutably updating nested data requires that you return new copies of all items in the nesting hierarchy. Since components generally do shallow-equality reference comparisons on data to see if they need to update, updating nested data usually means that more objects are updated, and more components will probably have to re-render even if the actual data isn't different. Flat data, and in particular normalized data, enables some more optimized approaches for defining components (such as a list where each list item component is itself connected, given an item ID as a prop, and is responsible for looking up its own item's data by that ID)
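A small sketch of all three points: with a normalized slice keyed by id, an immutable update touches only the changed item, and untouched items keep their references, so shallow-equality checks let their components skip re-rendering. (The `byId`/`allIds` shape below is a common convention, not normalizr's exact output.)

```javascript
// Normalized shape: a lookup table plus an ordering array.
const state = {
  byId: {
    1: { id: 1, title: 'First post' },
    2: { id: 2, title: 'Second post' },
  },
  allIds: [1, 2],
};

// Updating one item is a one-level spread per level touched —
// no deep copy of a nested tree is needed.
function updateTitle(state, id, title) {
  return {
    ...state,
    byId: { ...state.byId, [id]: { ...state.byId[id], title } },
  };
}
```

A connected list-item component given just an id as a prop can then look up its own data in `byId`, which is the optimization described in the third point.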
Flux
38,842,454
17
For the past weeks I've been trying to learn React and Redux. Now I have met a problem that I haven't found a right answer to. Suppose I have a page in React that gets props from the link. const id = this.props.params.id; Now on this page, I'd like to display an object from the STORE with this ID. const initialState = [ { title: 'Goal', author: 'admin', id: 0 }, { title: 'Goal vol2', author: 'admin', id: 1 } ] My question is: should the function to query the object from the STORE be in the page file, before the render method, or should I use action creators and include the function in reducers? I've noticed that the reducers seem to contain only actions that have an impact on the store, but mine just queries the store. Thank you in advance.
You could use the mapStateToProps function to query the store when you connect the component to redux: import React from 'react'; import { connect } from 'react-redux'; import _ from 'lodash'; const Foo = ({ item }) => <div>{JSON.stringify(item)}</div>; const mapStateToProps = (state, ownProps) => ({ item: _.find(state, 'id', ownProps.params.id) }); export default connect(mapStateToProps)(Foo); (This example uses lodash - _) The mapStateToProps function takes in the whole redux state and your component's props, and from that you can decide what to send as props to your component. So given all of our items, look for the one with the id matching our URL. https://github.com/rackt/react-redux/blob/master/docs/api.md#arguments
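Without lodash, the same query is a plain `Array.prototype.find`. Note the string coercion below: ids coming from URL params are strings while the store holds numbers, which is an easy bug to hit with this exact initialState.

```javascript
// Pure query over the store state — the same role mapStateToProps plays.
function selectItemById(state, id) {
  return state.find(item => String(item.id) === String(id));
}
```

Passed to connect as `(state, ownProps) => ({ item: selectItemById(state, ownProps.params.id) })`, this keeps the lookup out of both the component and the reducers.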
Flux
34,840,994
16
At times I have components with a large number of properties. Is there any inherent problem with this? e.g. render() { const { create, update, categories, locations, sectors, workTypes, organisation } = this.props; // eslint-disable-line no-shadow return ( <div className="job-container"> <JobForm organisationId={organisation.id} userId={user.id} action={action} create={create} update={update} categories={categories} locations={locations} sectors={sectors} workTypes={workTypes} /> </div> ); } What are the best practices?
I think you have rightly recognized a code smell. Anytime you have that many inputs (props) to a function (component), you have to question how you would test this component with all the permutations of argument combinations. Using {...this.props} to pass them down only cuts down on the typing, sort of like spraying Febreze over a decaying corpse. I'd ask why you have both a create and update method vs a submit method? How are you using the organisationId and userId? If those are only needed to be passed to the create and update (or submit) methods that are also passed in, why not NOT pass them and let the onCreate/onUpdate handlers supply them? Maybe JobForm should be rendered as: <JobForm /* props go here */> <CategoryDroplist categories={this.props.categories} /> <LocationDroplist locations={this.props.locations} /> </JobForm> Within JobForm you have props.children, but those are separate components that might be fine as separate components. I just don't have enough information to answer the question, but by breaking your components into simpler things, the number of props goes down and the smell decreases as well.
Flux
37,695,557
16
In response to a state change, I want to trigger another state change. Is that inherently a bad idea? The specific sort of scenario is that the component is modeled as a state machine that renders different information according to the value of this.state.current_state. But external events can prompt it to experience a state transition, via changes to it's state through a flux store. Here's a contrived scenario to get the idea across: I think the correct lifecycle method to do this would be shouldComponentUpdate. Something to this effect: shouldComponentUpdate: function(nextProps, nextState) { if (nextState.counter > 4 && this.state.current_state !== DISPLAY_MANY) { this.setState({ current_state: DISPLAY_MANY }); } return true; } In some child component, counter may get incremented, so rather than inferring what it would display based on the value of some counter variable, I'd like to encode the states explicitly. The real scenario is more complicated than this, but hopefully this scenario is detailed enough to get the idea across. Is it OK to do what I'm thinking? EDIT: fixed code example to avoid triggering infinite loop by adding extra state condition
shouldComponentUpdate is intended specifically to determine if the component should update at all. To do things like: if (nextState.counter == this.state.counter && nextProps.foo == this.props.foo) { return false; } componentWillReceiveProps is for responding to external (props) changes. There is no equivalent componentWillReceiveState, as pointed out in the docs. Your component (and only your component) triggers its own state changes, usually through one or more of the following events: initial rendering in getInitialState updated props in componentWillReceiveProps user interaction in <input> fields etc, e.g. in custom onChangeInput() functions in your component. in flux: responding to store changes from listeners, typically in custom functions calling getStateFromStores(), where state is updated. I guess it doesn't make sense to have one function inside a component to create a state change, and then another function inside the same component to intervene before state is updated. In your case, you could move the logic (to determine if state needs to be updated) to a getStateFromStores() function where you handle store updates. Or, you could simply leave it in state, and change your render function so that it renders differently if counter > 4.
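The last suggestion — branching in render instead of mirroring the counter into a second piece of state — reduces to a pure derivation like this (the names are illustrative, matching the question's contrived scenario):

```javascript
// Derive the display mode from existing state on every render,
// instead of setState-ing a duplicate `current_state` field.
function displayMode(state) {
  return state.counter > 4 ? 'DISPLAY_MANY' : 'DISPLAY_FEW';
}
```

render then switches on `displayMode(this.state)`, so there is no second state change to trigger and nothing can get out of sync.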
Flux
33,290,189
14
I'm just starting to use flux (with redux for now) and am wondering how relationships are supposed to be handled. For an example, we can use Trello, which has boards with columns that contain cards. One approach would be to have one store/reducer for boards and have all the data in it there, but that means some very fat stores since they would have to contain all the actions for columns and cards as well. Another approach I've seen is separating nested resources into for example BoardStore, ColumnStore and CardStore and using their ids as references. Here's an example of where I am a bit confused: you could have an action creator called addCard that does a request to the server to create a card with all the data. If you are doing an optimistic update, you would have created a card object in one of your stores beforehand, but you can't know the id it will have until you get back the response. So in short: Firing addCard addCard does a request, in the meantime you return an action of type ADD_CARD_TEMP you get the response and return an action of type ADD_CARD where the store/reducer changes the id. Is there a recommended way to deal with this case? Nested stores/reducers look a bit silly to me, but otherwise you end up with very complex stores, so it looks like a compromise really.
Yes, using ids across multiple stores much like a relational database is the way to do it right. In your example, let's say you want to optimistically put a new card in a particular column, and that a card can only be in one column (one column to many cards). The cards in your CardStore might look like this: _cards: { 'CARD_1': { id: 'CARD_1', columnID: 'COLUMN_3', title: 'Go to sleep', text: 'Be healthy and go to sleep on time.', }, 'CARD_2': { id: 'CARD_2', columnID: 'COLUMN_3', title: 'Eat green vegetables', text: 'They taste better with onions.', }, } Note that I can refer to a card by the id, and I can also retrieve the id within the object. This allows me to have methods like getCard(id) and also be able to retrieve the id of a particular card within the view layer. Thus I can have a method deleteCard(id) that is called in response to an action, because I know the id in the view. Within the card store, you would have getCardsByColumn(columnID), which would be a simple map over the card objects, and this would produce an array of cards that you could use to render the contents of the column. Regarding the mechanics of optimistic updates, and how the use of ids affects it: You can use a client-side id that is established within the same closure that will handle the XHR response, and clear the client-side id when the response comes back as successful, or instead roll back on error. The closure allows you to hold on to the client-side id until the response comes back. Many people will create a WebAPIUtils module that will contain all the methods related to the closure retaining the client-side id and the request/response. The action creator (or the store) can call this WebAPIUtils module to initiate the request. So you have three actions: initiate request, handle success, handle error. In response to the action that initiates the request, your store receives the client-side id and creates the record.
In response to success/error, your store again receives the client-side id and either modifies the record to be a confirmed record with a real id, or instead rolls back the record. You would also want to create a good UX around that error, like letting your user try again. Example code: // Within MyAppActions cardAdded: function(columnID, title, text) { var clientID = this.createUUID(); MyDispatcher.dispatch({ type: MyAppActions.types.CARD_ADDED, id: clientID, columnID: columnID, title: title, text: text, }); WebAPIUtils.getRequestFunction(clientID, "http://example.com", { columnID: columnID, title: title, text: text, })(); }, // Within WebAPIUtils getRequestFunction: function(clientID, uri, data) { var xhrOptions = { uri: uri, data: data, success: function(response) { MyAppActions.requestSucceeded(clientID, response); }, error: function(error) { MyAppActions.requestErrored(clientID, error); }, }; return function() { post(xhrOptions); }; }, // Within CardStore switch (action.type) { case MyAppActions.types.CARD_ADDED: this._cards[action.id] = { id: action.id, title: action.title, text: action.text, columnID: action.columnID, }; this._emitChange(); break; case MyAppActions.types.REQUEST_SUCCEEDED: var tempCard = this._cards[action.clientID]; this._cards[action.id] = { id: action.id, columnID: tempCard.columnID, title: tempCard.title, text: tempCard.text, }; delete this._cards[action.clientID]; break; case MyAppActions.types.REQUEST_ERRORED: // ... } Please don't get too caught up on the details of the names and the specifics of this implementation (there are probably typos or other errors). This is just example code to explain the pattern.
Flux
31,641,466
13
I have a sign in component, which should be available for unauthenticated users. And right after the authentication this component should become unavailable. var routes = ( <Route handler={App}> <Route name="signIn" handler={signIn}/> {/* redirect, if user is already authenticated */} { localStorage.userToken ? ( <Redirect from="signIn" to="/user"/> ) : null } </Route> ); Router.run(routes, (Handler, state) => { React.render(<Handler {...state}/>, document.getElementById('main')); }); This code works perfectly if the user reloaded the web app for any reason after the authentication, but of course it doesn't if the user didn't reload it. I've tried to use this.context.router.transitionTo right in the SignUp component, but it works awfully - the component gets rendered, then this script gets executed. So I want to add the redirect right into the routes variable to make the router redirect without even trying to render the component.
Instead of checking your auth-flow and conditionally rendering particular routes, I would recommend another approach: If you're using react-router 0.13.x, I would recommend using the willTransitionTo methods on your components when you need to check authentication. It is called when a handler is about to render, giving you the opportunity to abort or redirect the transition (in this case, check if the user is authenticated, redirect to another path if not). See the auth-flow example here: https://github.com/ReactTraining/react-router/blob/v0.13.6/examples/auth-flow/app.js For versions >0.13.x, it would be the onEnter hook. See the auth-flow example here: https://github.com/rackt/react-router/blob/master/examples/auth-flow/app.js Basically you move the auth-check flow away from your routes variable and into transition events/hooks. Before the route handler actually gets rendered, check the auth, and redirect the user to another route.
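A hedged sketch of such a hook: the auth check is injected as a function so the redirect logic stays testable outside a browser. In react-router ≥1.x the hook would be wired up as `<Route path="signIn" onEnter={...}>`, with `replace` supplied by the router.

```javascript
// Build an onEnter hook that redirects already-authenticated users
// away from the sign-in route before it ever renders.
function makeRedirectIfAuthed(isAuthenticated, target) {
  return function onEnter(nextState, replace) {
    if (isAuthenticated()) {
      replace(target);
    }
  };
}
```

For the question's setup, that would be something like `makeRedirectIfAuthed(() => Boolean(localStorage.userToken), '/user')`, and it runs on every transition, not just on page reload.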
Flux
32,804,269
13
I'm creating a simple CRUD app using Facebook's Flux Dispatcher to handle the creation and editing of posts for an English learning site. I currently am dealing with an api that looks like this: /posts/:post_id /posts/:post_id/sentences /sentences/:sentence_id/words /sentences/:sentence_id/grammars On the show and edit pages for the app, I'd like to be able to show all the information for a given post as well as all of it's sentences and the sentences' words and grammar details all on a single page. The issue I'm hitting is figuring out how to initiate all the async calls required to gather all this data, and then composing the data I need from all the stores into a single object that I can set as the state in my top level component. A current (terrible) example of what I've been trying to do is this: The top level PostsShowView: class PostsShow extends React.Component { componentWillMount() { // this id is populated by react-router when the app hits the /posts/:id route PostsActions.get({id: this.props.params.id}); PostsStore.addChangeListener(this._handlePostsStoreChange); SentencesStore.addChangeListener(this._handleSentencesStoreChange); GrammarsStore.addChangeListener(this._handleGrammarsStoreChange); WordsStore.addChangeListener(this._handleWordsStoreChange); } componentWillUnmount() { PostsStore.removeChangeListener(this._handlePostsStoreChange); SentencesStore.removeChangeListener(this._handleSentencesStoreChange); GrammarsStore.removeChangeListener(this._handleGrammarsStoreChange); WordsStore.removeChangeListener(this._handleWordsStoreChange); } _handlePostsStoreChange() { let posts = PostsStore.getState().posts; let post = posts[this.props.params.id]; this.setState({post: post}); SentencesActions.fetch({postId: post.id}); } _handleSentencesStoreChange() { let sentences = SentencesStore.getState().sentences; this.setState(function(state, sentences) { state.post.sentences = sentences; }); sentences.forEach((sentence) => { GrammarsActions.fetch({sentenceId: 
sentence.id}) WordsActions.fetch({sentenceId: sentence.id}) }) } _handleGrammarsStoreChange() { let grammars = GrammarsStore.getState().grammars; this.setState(function(state, grammars) { state.post.grammars = grammars; }); } _handleWordsStoreChange() { let words = WordsStore.getState().words; this.setState(function(state, words) { state.post.words = words; }); } } And here is my PostsActions.js - the other entities (sentences, grammars, words) also have similar ActionCreators that work in a similar way: let api = require('api'); class PostsActions { get(params = {}) { this._dispatcher.dispatch({ actionType: AdminAppConstants.FETCHING_POST }); api.posts.fetch(params, (err, res) => { let payload, post; if (err) { payload = { actionType: AdminAppConstants.FETCH_POST_FAILURE } } else { post = res.body; payload = { actionType: AdminAppConstants.FETCH_POST_SUCCESS, post: post } } this._dispatcher.dispatch(payload) }); } } The main issue is that the Flux dispatcher throws a "Cannot dispatch in the middle of a dispatch" invariant error when SentencesActions.fetch is called in the _handlePostsStoreChange callback because that SentencesActions method triggers a dispatch before the dispatch callback for the previous action is finished. I'm aware that I can fix this by using something like _.defer or setTimeout - however that really feels like I'm just patching the issue here. Also, I considered doing all this fetching logic in the actions itself, but that seemed not correct either, and would make error handling more difficult. I have each of my entities separated out into their own stores and actions - shouldn't there be some way in the component level to compose what I need from each entity's respective stores? Open to any advice from anyone who has accomplished something similar!
But no, there is no hack to create an action in the middle of a dispatch, and this is by design. Actions are not supposed to be things that cause a change. They are supposed to be like a newspaper that informs the application of a change in the outside world, and then the application responds to that news. The stores cause changes in themselves. Actions just inform them. Also Components should not be deciding when to fetch data. This is application logic in the view layer. Bill Fisher, creator of Flux https://stackoverflow.com/a/26581808/4258088 Your component is deciding when to fetch data. That is bad practice. What you basically should be doing is having your component state, via actions, what data it needs. The store should be responsible for accumulating/fetching all the needed data. It is important to note though, that after the store requests the data via an API call, the response should trigger an action, as opposed to the store handling/saving the response directly. Your stores could look something like this: class Posts { constructor() { this.posts = []; this.bindListeners({ handlePostNeeded: PostsAction.POST_NEEDED, handleNewPost: PostsAction.NEW_POST }); } handlePostNeeded(id) { if (postNotThereYet) { api.posts.fetch(id, (err, res) => { // ... if (success) { PostsAction.newPost(payload); } }); } } handleNewPost(post) { // code that saves post SentencesActions.needSentencesFor(post.id); } } All you need to do then is listen to the stores. Also, depending on which framework you use, you may need to emit the change event (manually).
Flux
32,240,309
12
So say you have a chat application with this component structure:

```
<ChatApp>
  <CurrentUserInfo>...</CurrentUserInfo>
  <ChatsPanel>...</ChatsPanel>
  <SelectedChatPanel>
    <MessagesList>
      <MessageBaloon>
        <MessageText></MessageText>
        <MessageUserHead></MessageUserHead>
      </MessageBaloon>
      ...
    </MessagesList>
  </SelectedChatPanel>
</ChatApp>
```

And a Redux state like:

```js
{
  currentUser: ...,
  chatsList: ...,
  selectedChatIndex: ...,
  messagesList: [ ... ]
}
```

How would you make the current user information available to the `<MessageUserHead>` component (that will render the current user thumbnail for each message) without having to pass it along all the way from the root component through all the intermediate components?

In the same way, how would you make information like current language, theme, etc. available to every presentational/dumb component in the component tree without resorting to exposing the whole state object?
(UPDATE: Having spent some time on option 4, I personally think it's the way to go. I published a library, react-redux-controller, built around this approach.)

There are a few approaches that I know of for getting data from your root component down to your leaf components, through the branches in the middle.

**Props chain**

The Redux docs, in the context of using react-redux, suggest passing the data down the whole chain of branches via props. I don't favor this idea, because it couples all the intermediate branch components to whatever today's app structure is. On the bright side, your React code would be fairly pure, and only coupled to Redux itself at the top level.

**Selectors in all components**

Alternatively, you could use `connect` to make data from your Redux store available, irrespective of where you are in the component tree. This decouples your components from one another, but it couples everything to Redux. I would note that the principal author of Redux is not necessarily opposed to this approach. And it's probably more performant, as it prevents re-renders of intermediary components due to changes in props they don't actually care about.

**React children**

I haven't thought a great deal about doing things this way, but you could describe your whole app structure at the highest level as nested components, passing in props directly to remote descendants, and using `children` to render injected components at the branch levels. However, taken to the extreme, this would make your container component really complicated, especially for intermediate components that have children of more than one type. Not sure if this is really viable at all for that reason.

**React context**

As first mentioned by @mattclemens, you can use the experimental context api to decouple your intermediate components. Yes, it's "experimental". Yes, the React team definitely doesn't seem to be in love with it.
But keep in mind that this is exactly what Redux's `connect` uses to inject `dispatch` and props from selectors. I think it strikes a nice balance:

- Components remain decoupled, because branch components don't need to care about their descendants' dependencies.
- If you only use `connect` at the root to set up the context, then all the descendants only need to couple to React's context API, rather than Redux.
- Components can freely be rearranged, as long as some ancestor is setting the required context properties. If the only component that sets context is the root component, this is trivially true.

The React team compares using context to global variables, but that feels like an exaggeration. It seems a lot more like dependency injection to me.
Flux
34,299,460
12
I don't understand why we need Flux with React, as React itself lets us maintain the state of the application. Every component has an initial state, and the state can be changed by user actions or any other asynchronous JavaScript.

Why is React called only a view library when it lets us define the state of the application and also update the view whenever the state changes? This is not what a view does... it's what a complete MVC does, right?

For example: here is a Todo app built only with React and here is a Todo app built with Flux and React. If we can build the Todo app with React only, then why do we need Flux?
In theory you don't need flux. In small applications you certainly don't need flux. But what if your application consists of hundreds of components? And one of your components is a form. A user populates this form and you send its content to the server. You get a response from the server with new data. And assume that this response data and the data from the form are needed by other components.

**Without flux:**

You can move your data to the root component and then distribute it down to all components. But what if you need to distribute data from many other components too? This makes your application very complex.

**With flux:**

You move your data to stores, and all components which are interested in this data can get it from there. You have better control over your application and your source data. I prefer redux (only one store and one source of truth).

edit:

Why is React called a view library even if it can handle application state?

MVC is a software architectural pattern. It divides a given software application into three interconnected parts (models, views, controllers). If we think about React and MVC, it fits as the View. But there is nothing wrong with that. It doesn't mean that you can use it only for views. It allows you to create normal applications. But on the other hand, you can use it as the view for other frameworks (for example, you can use it with angular). In other words, it is a very flexible library for many uses.
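The "move your data to a store and let every interested component read it from there" idea can be sketched in a few lines of plain JavaScript, without any framework (the `createStore` name and shape here are assumptions for illustration, not the redux API):

```javascript
// A tiny store: single source of truth plus change notifications.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    setState(partial) {
      state = Object.assign({}, state, partial);
      listeners.forEach((fn) => fn(state));
    },
  };
}

// Any number of "components" can read the same data without prop drilling.
const store = createStore({ user: null });

let headerText = '';
store.subscribe((s) => { headerText = 'Hello ' + s.user; });

let sidebarText = '';
store.subscribe((s) => { sidebarText = 'Logged in as ' + s.user; });

// A form-submit response updates the store once; every subscriber sees it.
store.setState({ user: 'Ada' });
```

The form component never needs to know which other components care about the response; it only updates the store.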
Flux
35,924,036
12
I've been learning React and Flux over the past few months, and one thing I've not yet dealt with is displaying error messages to users. Specifically, error messages that occur as a result of an ajax http request within a flux action creator method.

A simple example is user sign-in - if the sign-in ajax request fails due to a bad password, the server responds with the failure. At that moment, within my flux action creator method, my only option is to dispatch an action containing the error information, right?

I can dispatch the error information and keep that error in a store. I'm not sure what the best way to tie certain errors to certain components is, though. Let's say my React component tree is rendering multiple error-aware components, but an error occurs during a server-side user auth attempt and needs to be displayed on that sign-in form.

Is there a good pattern or convention for storing errors and knowing which component they're for? Is there a programmatic way of determining this, instead of passing in some identifier to every action creator function that identifies the component the action creator is called in, etc.?
Since you marked the question with Redux tag, I'm assuming you use Redux. If so, this real-world example shows error handling.

There's a reducer that reacts to any action with an `error` field:

```js
// Updates error message to notify about the failed fetches.
function errorMessage(state = null, action) {
  const { type, error } = action;

  if (type === ActionTypes.RESET_ERROR_MESSAGE) {
    return null;
  } else if (error) {
    return action.error;
  }

  return state;
}
```

The custom API middleware puts any error message into the `error` field on the action:

```js
return callApi(endpoint, schema).then(
  response => next(actionWith({
    response,
    type: successType
  })),
  error => next(actionWith({
    type: failureType,
    error: error.message || 'Something bad happened'
  }))
);
```

Finally, the component reads the error message from the Redux store:

```js
function mapStateToProps(state) {
  return {
    errorMessage: state.errorMessage
  };
}
```

As you can see, this is not any different from how you'd handle displaying fetched data.
Flux
31,822,706
11
I am using reactjs and the flux architecture in a project I'm working on. I am a bit puzzled by how to break up nested data correctly into stores and why I should split up my data into multiple stores. To explain the problem I'll use this example:

Imagine a Todo application where you have Projects. Each project has tasks and each task can have notes. The application uses a REST api to retrieve the data, returning the following response:

```js
{
  projects: [
    {
      id: 1,
      name: "Action Required",
      tasks: [
        {
          id: 1,
          name: "Go grocery shopping",
          notes: [
            { id: 1, name: "Check shop 1" },
            { id: 2, name: "Also check shop 2" }
          ]
        }
      ]
    },
  ]
}
```

The fictive application's interface displays a list of projects on the left and when you select a project, that project becomes active and its tasks are displayed on the right. When you click a task you can see its notes in a popup.

What I would do is use 1 single store, the "Project Store". An action does the request to the server, fetches the data and instructs the store to fill itself with the new data. The store internally saves this tree of entities (Projects -> Tasks -> Notes).

To be able to show and hide tasks based on which project is selected I'd also keep a variable in the store, "activeProjectId". Based on that the view can get the active project, its tasks and render them. Problem solved.

However: after searching a bit online to see if this is a good solution I see a lot of people stating that you should use a separate store per entity. This would mean: a ProjectStore, TaskStore and NoteStore. To be able to manage associations I would possibly also need a "TasksByProjectStore" and a "NotesByTaskStore".

Can someone please explain why this would be better? The only thing I see is a lot of overhead in managing the stores and the data flow.
There are pros and cons to using one store or different stores. Some implementations of flux specifically favour one store to rule them all, so to speak, while others also facilitate multiple stores.

Whether one store or multiple stores suit your needs depends on a) what your app does, and b) which future developments or changes you expect. In a nutshell:

- **One store** is better if your key concern is the dependencies between your nested entities, and if you are less worried about dependencies between single entity relations between server-store-component. One store is great if e.g. you want to manage stats on project level about underlying tasks and notes. Many parent-child-like relationships and all-in-one data fetching from the server favour a one-store solution.
- **Multiple stores** are better if your key concern is dependencies between single entity connections between server-store-component. Weak entity-to-entity relationships and independent single entity server fetches and updates favour multiple stores.

In your case: my bet would be that one store is better. You have an evident parent-child relationship, and get all project data at once from the server.

The somewhat longer answer:

**One store:** Great to minimize the overhead of managing multiple stores. It works well if your top view component is the only stateful component, gets all data from the store, and distributes details to stateless children. However, the need to manage dependencies between your entities does not simply go away: instead of managing them between different stores, you need to manage them inside the single store. Which therefore gets bigger (more lines of code). Also, in a typical flux setup, each store emits a single 'I have changed' event, and leaves it up to the component(s) to figure out what changed and if they need to re-render.
So if you have many nested entities, and one of the entities receives many updates from the server, then your superstore emits many changes, which might trigger a lot of unnecessary updates of the entire structure and all components. Flux-react can handle a lot, and the detail-changed-update-everything approach is what it handles well, but it may not suit everyone's needs (I had to abandon this approach when it screwed up my transitions between states in one project).

**Multiple stores:** more overhead, yes, but for some projects you get returns as well. If you have a close coupling between server data and components, with the flux store in between, it is easier to separate concerns in separate stores. If e.g. you are expecting many changes and updates to your notes data structure, then it is easier to have a stateful component for notes, which listens to the notes store, which in turn handles notes data updates from the server. When processing changes in your notes structure, you can focus on the notes store only, without figuring out how notes are handled inside some big superstore.
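The "one change event per store" point is easy to demonstrate with a toy sketch (the `makeStore` helper and entity names are assumptions for illustration): with a store per entity, a burst of note updates never wakes up project-level listeners.

```javascript
// Each entity gets its own store with its own change event,
// so a notes update does not wake up project-level components.
function makeStore(name) {
  const listeners = [];
  let data = [];
  return {
    name,
    subscribe: (fn) => listeners.push(fn),
    replace(next) {
      data = next;
      listeners.forEach((fn) => fn(data));
    },
    all: () => data,
  };
}

const projectsStore = makeStore('projects');
const notesStore = makeStore('notes');

let projectRenders = 0;
let noteRenders = 0;
projectsStore.subscribe(() => { projectRenders += 1; });
notesStore.subscribe(() => { noteRenders += 1; });

// A burst of note updates from the server touches only the notes listener.
notesStore.replace([{ id: 1, name: 'Check shop 1' }]);
notesStore.replace([
  { id: 1, name: 'Check shop 1' },
  { id: 2, name: 'Also check shop 2' },
]);
```

With a single superstore, both updates would have fired the one global change event, and every component would have had to decide for itself whether to re-render.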
Flux
33,107,081
11
There is interesting article which describes 4 main classes exposed in Flux Utils. Store ReduceStore MapStore (removed from 3.0.0) Container But it's not super clear what should be used for certain situations. There are only 2 examples for ReduceStore and Container, but no samples for others unfortunately. Could you please explain basic usage for these 4 components: when and where they should be used in real life? Extended answers and code examples would be really appreciated! UPDATE: MapStore has been removed starting from 3.0.0
By poking through the code and reading through the method documentation, here's what I can work out (I have not used these classes myself, as I use other Flux frameworks). It's actually useful to go through these in almost reverse order.

**Container**

This is not a subclass of FluxStore because it is, unsurprisingly, not a store. The Container is a wrapper class for your React UI components that automatically pulls state from specified stores.

For example, if I have a React-driven chat app with a component that lists all my logged-in friends, I probably want to have it pull state from a LoggedInUsersStore, which would hypothetically be an array of these users. My component would look something like this (derived from the code example they provide):

```js
import {Component} from 'react';
import {Container} from 'flux/utils';
import {LoggedInUsersStore} from /* somewhere */;
import {UserListUI} from /* somewhere */;

class UserListContainer extends Component {
  static getStores() {
    return [UsersStore];
  }

  static calculateState(prevState) {
    return {
      loggedInUsers: LoggedInUsersStore.getState(),
    };
  }

  render() {
    return <UserListUI counter={this.state.counter} />;
  }
}

const container = Container.create(UserListContainer);
```

This wrapper automatically updates the component's state if its registered stores change state, and it does so efficiently by ignoring any other changes (i.e. it assumes that the component does not depend on other parts of the application state). I believe this is a fairly direct extension of Facebook's React coding principles, in which every bit of UI lives in a high-level "Container." Hence the name.

When to use:

- If a given React component is entirely dependent on the state of a few explicit stores.
- If it does not depend on props from above. Containers cannot accept props.
**ReduceStore**

A ReduceStore is a store based entirely on pure functions---functions that are deterministic on their inputs (so the same function always returns the same thing for the same input) and produce no observable side effects (so they don't affect other parts of the code). For example:

```js
(a) => { return a * a; }          // pure: deterministic, no side effects
(a) => { echo a; return a; }      // impure: has a side effect (printing a)
(a) => { return Math.random(); }  // impure: nondeterministic
```

The goal with a ReduceStore is simplification: by making your store pure, you can make certain assumptions. Because the reductions are deterministic, anyone can perform the reductions at any time and get the same result, so sending a stream of actions is all but identical to sending raw data. Likewise, sending the raw data is perfectly reasonable because you were guaranteed no side effects: if my entire program is made of ReduceStores, and I overwrite the state of one client with the state of another (calling the required redraws), I am guaranteed perfect functionality. Nothing in my program can change because of the actions rather than the data.

Anyway, a ReduceStore should only implement the methods explicitly listed in its documentation. `getInitialState()` should determine the initial state, `reduce(state, action)` should transform `state` given `action` (and not use `this` at all: that would be non-deterministic/have side effects), and `getState()` & `areEqual(one, two)` should handle separating the raw state from the returned state (so that the user can't accidentally modify it).
For example, a counter would be a sensible ReduceStore:

```js
class TodoStore extends ReduceStore {
  getInitialState() {
    return 0;
  }

  reduce(state, action) {
    switch (action.type) {
      case 'increment':
        return state + 1;
      case 'decrement':
        return state - 1;
      case 'reset':
        return 0;
      default:
        return state;
    }
  }

  getState() {
    // return `this._state`, which is that one number, in a way that doesn't
    // let the user modify it through something like `store.getState() = 5`
    // my offhand JS knowledge doesn't let me answer that with certainty, but maybe:
    var a = this._state + 1;
    return a - 1;
  }
}
```

Notice that none of the transforms explicitly depended on the current state of the object: they only operated on the state variable they were passed. This means that an instance of the store can calculate state for another instance of the same store. Not so useful in the current implementation of FB Flux, but still.

When to use:

- If you like pure-functional programming (yay!)
- and if you don't like it enough to use a framework explicitly built with that assumption (redux, NuclearJS)
- and you can sensibly write a store that is purely-functional (most stores can, and if they can't it might make sense to think about architecture a little more)

Note: this class does not ensure that your code is purely-functional. My guess is that it will break if you don't check that yourself.

I would always use this store. Unless I could use a...

**FluxMapStore [DEPRECATED]**

This class is no longer part of Flux!

This is a subclass of ReduceStore. It is for such pure-functional stores that happen to be Maps internally. Specifically, Immutable.JS maps (another FB thing!). They have convenience methods to get keys and values from the state: `WarrantiesStore.at('extended')` rather than `WarrantiesStore.getState().get('extended')`.

When to use: as above, but also if I can represent this store using a Map.

**FluxStore**

This brings us to FluxStore: the catch-all Store class and generic implementation of the Flux Store concept.
The other two stores are its descendants. The documentation seems to me to be fairly clear on its usage, so I'll leave it at that.

When to use:

- If you cannot use the other two Store util classes to hold your data and you don't want to roll your own store.

In my case, that would be never: I prefer immutable frameworks like redux and NuclearJS because they are easier for me to reason about. I take care to structure my stores in a purely functional way. But if you don't, this class is good.
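The "a stream of actions is as good as the raw data" claim can be checked with a plain pure reducer, no `flux/utils` needed (the standalone `counterReduce` function here is an assumption standing in for the ReduceStore's `reduce` method):

```javascript
// A pure reducer: same (state, action) in, same state out, no side effects.
function counterReduce(state, action) {
  switch (action.type) {
    case 'increment': return state + 1;
    case 'decrement': return state - 1;
    case 'reset':     return 0;
    default:          return state;
  }
}

// Because it is pure, anyone can replay the same action stream at any time
// and arrive at the identical state.
const actions = [
  { type: 'increment' },
  { type: 'increment' },
  { type: 'decrement' },
];
const replayOnce = actions.reduce(counterReduce, 0);
const replayTwice = actions.reduce(counterReduce, 0);
```

Note that `Array.prototype.reduce` and the store's `reduce(state, action)` have exactly the same shape, which is where the ReduceStore name comes from.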
Flux
35,071,384
11
I have an Action type defined like this:

```ts
type Action = {
  type: 'DO_X'
} | {
  type: 'DO_Y',
  payload: string
} | {
  type: 'DO_Z',
  payload: number
}
```

It's a union type where each member is a valid action.

Now I'd like to create a function `createAction` that accepts `type` and returns a new function that accepts `payload`.

```ts
const doZ = createAction('DO_Z')
console.log(doZ(42)) // { type: 'DO_Z', payload: 42 }
```

Here's my current implementation:

```ts
const createAction = (type: Action['type']) =>
  (payload?: any) => ({ type, payload })
```

It typechecks `type` like I want. How can I also typecheck `payload`? I want `payload` to match the type of the correct action based on `type`. For example, `doZ` should fail when called with a string because its payload says that it accepts only `number`.
The canonical answer to this question depends on your exact use case. I'm going to assume that you need `Action` to evaluate exactly to the type you wrote; that is, an object of type `"DO_X"` does not have a `payload` property of any kind. This implies that `createAction("DO_X")` should be a function of zero arguments, while `createAction("DO_Y")` should be a function of a single `string` argument. I'm also going to assume that you want any type parameters on `createAction()` to be automatically inferred, so that you don't, for example, need to specify `createAction<Blah>("DO_Z")` for any value of `Blah`. If either of these restrictions is lifted, you can simplify the solution to something like the one given by @Arnavion.

TypeScript doesn't like mapping types from property values, but it's happy to do so from property keys. So let's build the `Action` type in a way that provides us with types the compiler can use to help us. First we describe the payloads for each action type like this:

```ts
type ActionPayloads = {
  DO_Y: string;
  DO_Z: number;
}
```

Let's also introduce any `Action` types without a payload:

```ts
type PayloadlessActionTypes = "DO_X" | "DO_W";
```

(I've added a `'DO_W'` type just to show how it works, but you can remove it).

Now we're finally able to express `Action`:

```ts
type ActionMap = {[K in keyof ActionPayloads]: {
  type: K;
  payload: ActionPayloads[K]
}} & {[K in PayloadlessActionTypes]: {
  type: K
}};

type Action = ActionMap[keyof ActionMap];
```

The `ActionMap` type is an object whose keys are the `type` of each `Action`, and whose values are the corresponding elements of the `Action` union. It is the intersection of the `Action`s with payloads, and the `Action`s without payloads. And `Action` is just the value type of `ActionMap`. Verify that `Action` is what you expect.

We can use `ActionMap` to help us with typing the `createAction()` function.
Here it is:

```ts
function createAction<T extends PayloadlessActionTypes>(type: T): () => ActionMap[T];
function createAction<T extends keyof ActionPayloads>(type: T): (payload: ActionPayloads[T]) => ActionMap[T];
function createAction(type: string) {
  return (payload?: any) =>
    (typeof payload === 'undefined' ? { type } : { type, payload });
}
```

It's an overloaded function with a type parameter `T` corresponding to the `type` of `Action` you are creating. The top two declarations describe the two cases: if `T` is the type of an `Action` with no payload, the return type is a zero-argument function returning the right type of `Action`. Otherwise, it's a one-argument function that takes the right type of payload and returns the right type of `Action`. The implementation (the third signature and body) is similar to yours, except that it doesn't add `payload` to the result if there is no payload passed in.

All done! We can see that it works as desired:

```ts
var x = createAction("DO_X")();      // x: { type: "DO_X"; }
var y = createAction("DO_Y")("foo"); // y: { type: "DO_Y"; payload: string; }
var z = createAction("DO_Z")(5);     // z: { type: "DO_Z"; payload: number; }

createAction("DO_X")('foo');     // too many arguments
createAction("DO_X")(undefined); // still too many arguments
createAction("DO_Y")(5);         // 5 is not a string
createAction("DO_Z")();          // too few arguments
createAction("DO_Z")(5, 5);      // too many arguments
```

You can see all this in action on the TypeScript Playground. Hope it works for you. Good luck!
Flux
45,464,815
11
I've been using Vuex, and its adherence to only altering state through its mutators or actions makes me think your store should only include as flat an object as you can, with only primitive types. Some threads even prescribe normalising your data (so instead of nested object trees you have objects with arrays of ids to indicate tree relationships). This probably matches closely to your JSON api.

This makes me think that storing classes (that may have methods to alter themselves) in your flux store is an anti-pattern. Indeed, even hydrating your store's data into a class seems like you're moving against the tide unless your class performs no modifications to its internal data.

Which then got me thinking: is using any class in a Vue/Vuex/Reactive/Flux setup an anti-pattern?

The libraries seem explicitly designed to work with plain JS objects, and the general interactions you have with the API (data in, data out) make me feel like a more functional approach (sans immutability) is what the original designers were thinking about. It also seems to be easier to write code that runs from function => test => state mutator wrapper around function.

I understand that JS objects and JS classes behave very similarly (and are basically the same thing), but my logic is that if you don't design with classes in mind, then you're less likely to pollute your state with non-flux state changes.

Is there a general consensus in the community that your flux code should be more functional and less object-oriented?
Yes. You are absolutely right in what you are thinking. State containers like Redux and Vuex are supposed to hold your data constructs, not functions. It is true that functions in JavaScript are simply objects which are callable. You can store static data on functions too. But that still doesn't qualify as pure data. It is for this same reason that we don't put Symbols in our state containers.

Coming back to ES classes: as long as you are using classes as POJOs, i.e. only to store data, then you are free to use those. But why have classes if you can have simple plain objects?

Separating data from UI components and moving it into state containers has fundamental roots in functional programming. Most of the strict functional languages like Haskell, Elm, OCaml or even Elixir/Erlang work this way. This provides strong reasoning about your data flows in your application. Additionally, this is complemented by the fact that, in these languages, data is immutable. Thus, there is no place for a stateful Class-like construct. With JavaScript, since things are inherently mutable, the boundaries are a bit blurred and it is hard to define good practices.

Finally, as a community, there is no definite consensus about using the functional way, but it seems that the community is heading towards more functional, stateless component approaches. Some of the great examples are:

- Elm
- ReasonML
- Hybrids
- swiss-element
- Cycle.js

And now we even have functional components in both Vue and React.
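The class-vs-plain-data point can be made concrete with a small sketch (the `CartItem` example is an assumption for illustration): a class method mutates the instance in place, while a pure function over a plain object leaves the original untouched, which is exactly what state containers rely on.

```javascript
// Anti-pattern for a store: a class instance that mutates itself.
class CartItem {
  constructor(price) { this.price = price; }
  applyDiscountInPlace(pct) { this.price = this.price * (1 - pct); } // hidden mutation
}

// Store-friendly alternative: plain data plus a pure function.
function applyDiscount(item, pct) {
  return Object.assign({}, item, { price: item.price * (1 - pct) });
}

const item = { price: 100 };
const discounted = applyDiscount(item, 0.25);
// The original object is untouched, so state history and change
// detection (which compare old vs new references) stay trustworthy.
```

With the class version, a time-travel debugger or a `===` reference check would see nothing change, even though the price did.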
Flux
54,345,327
11
I want to use some abstraction in the creation of my React components. For example:

```js
class AbstractButton extends React.Component {
  render() {
    return (
      <button
        onClick={this.props.onClick}
        className={this.definitions.className}>
        {this.props.text}
      </button>
    );
  }
}

class PrimaryButton extends AbstractButton {
  constructor(options) {
    super(options);
    this.definitions = {
      className: 'btn btn-primary'
    };
  }
}

class SuccessButton extends AbstractButton {
  constructor(options) {
    super(options);
    this.definitions = {
      className: 'btn btn-success'
    };
  }
}
```

I don't want to pass these definitions via props because I know that these definitions (in this case the class) will never change.

Is it an anti-pattern in React? Or is it OK?

My question refers to this altjs issue: this kind of abstraction isn't compatible with @connectToStores.
Generally speaking, there's no reason not to use composition here instead of deep inheritance:

```js
class Button extends React.Component {
  render() {
    return (
      <button
        onClick={this.props.onClick}
        className={this.props.className}
      >
        {this.props.text}
      </button>
    );
  }

  static propTypes = {
    className: React.PropTypes.string.isRequired,
    onClick: React.PropTypes.func
  }
}

class PrimaryButton extends React.Component {
  render() {
    return <Button {...this.props} className="btn btn-primary" />;
  }
}
```

This is just as functional as what you propose, but is a lot simpler and easier to reason about. It makes it very clear what information your `Button` actually needs to do its work.

Once you make this leap, you can eliminate the classes altogether and use stateless components:

```js
const Button = (props) => (
  <button
    onClick={props.onClick}
    className={props.className}
  >
    {props.text}
  </button>
);

Button.propTypes = {
  className: React.PropTypes.string.isRequired,
  onClick: React.PropTypes.func
};

const PrimaryButton = (props) =>
  <Button {...props} className="btn btn-primary" />;

const SuccessButton = (props) =>
  <Button {...props} className="btn btn-success" />;
```

This will allow React to apply more optimizations since your components do not need any special lifecycle event support or state management. It is also even easier to reason about, since now you are just working with pure functions.

As an aside, if you are trying to make some React components that wrap Bootstrap, then perhaps you should take a look at React-Bootstrap.
Flux
33,894,609
10
How can I update redux's state from a text input?

I'm trying to do a very simple "Hello World" with a text input. When someone types into the text input, it should update my store's "searchTerm" value.

I can't figure out these things:

1. How can I get and pass the input's value into its "onChange" handler?
2. The "search" action seems to be called correctly, but my reducer function is never used (no console.log).

SearchForm.js (component):

```js
import React, {Component, PropTypes} from 'react';
import {bindActionCreators} from 'redux';
import {connect} from 'react-redux';
import {search} from 'redux/modules/search-term';

@connect(
  null,
  dispatch => bindActionCreators({ search }, dispatch)
)
export default class SearchForm extends Component {
  static propTypes = {
    search: PropTypes.func.isRequired,
  }

  render() {
    return (
      <input type="text" placeholder="Search" onChange={search} />
    );
  }
}
```

search-term.js (action & reducer):

```js
const SEARCH = 'redux-example/repo-filter/SEARCH';

const initialState = {
  searchTerm: null
};

export default function reducer(state = initialState, action = {}) {
  console.log("reducing");
  switch (action.type) {
    case SEARCH:
      return {
        searchTerm: action.term
      };
    default:
      return state;
  }
}

export function search(term) {
  return {
    type: SEARCH,
    term
  };
}
```

reducer.js:

```js
import { combineReducers } from 'redux';
import multireducer from 'multireducer';
import { routerStateReducer } from 'redux-router';
import search from './search-term';

export default combineReducers({
  router: routerStateReducer,
  search
});
```
You should use `this.props.search` when binding the action creator to the change event:

```js
<input
  type="text"
  placeholder="Search"
  onChange={(event) => this.props.search(event.target.value)}
/>
```
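Stripped of React and Redux, what that `onChange` handler ultimately causes is `dispatch(search(event.target.value))` flowing through the reducer. A minimal runnable check, using plain functions modeled on the question's module (no store or component, just the action creator and reducer):

```javascript
// Plain-function versions of the question's action creator and reducer.
const SEARCH = 'redux-example/repo-filter/SEARCH';

function search(term) {
  return { type: SEARCH, term };
}

function reducer(state = { searchTerm: null }, action = {}) {
  switch (action.type) {
    case SEARCH:
      return { searchTerm: action.term };
    default:
      return state;
  }
}

// What the onChange handler ultimately produces:
// dispatching search(event.target.value) through the reducer.
const next = reducer(undefined, search('redux'));
```

Passing the raw `search` function to `onChange` (as in the question) dispatches the DOM event object as the term, which is why wrapping it in `(event) => this.props.search(event.target.value)` matters.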
Flux
34,474,272
10
I am a bit lost on what to keep in the state tree of Redux. I saw two conflicting statements on what to store in the state tree(s).

The React docs tell us that only user input should be stored in state trees:

> The original list of products is passed in as props, so that's not state. The search text and the checkbox seem to be state since they change over time and can't be computed from anything. And finally, the filtered list of products isn't state because it can be computed by combining the original list of products with the search text and value of the checkbox.

The Redux docs tell us that we often should store UI state and data in the single state tree:

> For our todo app, we want to store two different things: the currently selected visibility filter; the actual list of todos. You'll often find that you need to store some data, as well as some UI state, in the state tree. This is fine, but try to keep the data separate from the UI state.

So React says that we should not store data (I am talking about the todos' data) while, to me, Redux says the opposite.

In my understanding I would lean toward the React side because both React and Redux aim to predict a UI state by storing:

All that can't be computed (e.g. all human inputs) and is part of the UI:

- checkbox value
- input value
- radio value
- ...

All minimal data that could be used to build a query and send it to the API/database that will return the complete user profile, friends lists, whatever...:

- user Id
- creation dates range
- items Ids
- ...

For me that excludes all database/API results because:

- they live at the data level
- they could be computed by sending the right query (composed by pure reducers).

So what is your opinion here?
The React documentation talks about the view component state, but the Redux documentation talks about the application state. So there is no conflict between the definitions. If we talk about Redux - you make all your components stateless (and transform the stateless root component into a stateful one with the help of react-redux's `connect` function).

If you have a large response from the server and you show your data with pagination / filters, you can treat your application state as what you see on screen, and not put all the data in the Redux store - only what you need to render (for example, 100 rows to show the page and the total number of rows to show pagination). There is no restriction against this. You can put the whole dataset somewhere else. For example, in another data container in a web worker (I make the full request in a web worker and fetch from there only the data needed for display).

Added after question edited:

> The original list of products is passed in as props, so that's not state.

In that example, the reason why the list of products isn't state is that it's already in props. It means that one of the parent components has it as state.
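The "store only what the screen needs" idea reduces to a small selector over data that lives outside the UI store. A minimal sketch (the `selectPage` name and the in-memory `allRows` array are assumptions; in the answer's setup the full dataset would sit in a web worker or cache):

```javascript
// Full dataset lives outside the UI store (e.g. a web worker or cache).
const allRows = Array.from({ length: 1000 }, (_, i) => ({ id: i }));

// The UI store holds only what the screen needs:
// one page of rows plus the total count for the pagination widget.
function selectPage(rows, page, pageSize) {
  return {
    total: rows.length,
    rows: rows.slice(page * pageSize, (page + 1) * pageSize),
  };
}

const uiState = selectPage(allRows, 2, 100); // third page of ten
```

Changing the page then means recomputing this small slice, not copying the whole dataset into the application state.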
Flux
34,969,754
10
React uses the Flux architecture, and it is said in https://reactjs.org/docs/thinking-in-react.html that React has two models - state and props. There are also some suggestions for model management in React at https://reactjs.org/community/model-management.html - but all of these seem to be additional layers on top of Flux. The big questions to which I am seeking answers are: Should I define model classes in React? I.e. if I have the notion of a Customer class, then I can: 1) define the attributes of Customer directly as the attributes of state/props; 2) define the attributes of Customer as the attributes of state.customer/props.customer; 3) define some JavaScript template/class Customer separately and simply say that state.customer/props.customer is of type Customer, and not repeat the attributes in state/props. I feel that 3) is the right approach, isn't it? If the 3rd option (of the previous point) is the right approach, then how can I define the Customer template, and how can I declare that state.customer/props.customer is of this template? I could use such a template in serialization and model-validation tasks as well, and I could use it in a ReactNative project too.
The most basic way is shown in the following snippet: const Customer = ({ name, age }) => ( <div> <p>Name: {name}</p> <p>Age: {age}</p> </div> ); const App = () => [{ name: "Bert", age: 22 }, { name: "Alfons", age: 45 }].map( ({ name, age }, i) => ( <React.Fragment key={i}> <Customer name={name} age={age} /> <hr /> </React.Fragment> ) ); (Note that the key has to go on the outermost element returned from map; the <> fragment shorthand cannot take a key, so the long form React.Fragment is used here.) Where you define these props depends on where you need them. If only one component needs the props, you define them in that component's state. But often you need the props in several components, so you lift them up in your hierarchy. That often results in a very "smart" component (a component that has a large state). If your app becomes large and confusing, I suggest you store your state externally. To do that, you can use React context. It allows you to inject props into components that need them, rather than passing them several layers down your hierarchy. If you don't want to write your own context, you may use state management solutions like redux or mobx. They use context too, but provide convenient functions to easily connect a component to your external state.
Flux
56,622,793
10
As indicated in the official loadimpact/k6 documentation, we are able to execute a single k6 script as follows: k6 run ../tests/http_get.js How would I go about executing multiple script files in a single run? Specifically all scripts that reside in a given local directory. Something like: k6 run ../tests/ Is this supported out of the box by k6?
Depending on your setup there are a couple of different ways you can solve this. A straightforward way is to fork the k6 run command inside bash. #!/bin/sh k6 run test1_spec.js & k6 run test2_spec.js & k6 run test3_spec.js You could easily write some more complicated bash scripting to read in everything from the /tests/ directory and run them like that. I chose to do it like this though because I had some custom input params to give to each specific test. Another way would be to write a docker-compose script to do pretty much the same thing. This would start up a docker container for each test and run it inside there. The k6 docker image is nothing more than a tiny linux image with the k6 binary added to it. version: '3' services: k6_test: image: loadimpact/k6 container_name: test_k6 volumes: - ./:/tests command: run /tests/test_spec.js ports: - "6565:6565" k6_test2: image: loadimpact/k6 container_name: test2_k6 volumes: - ./:/tests command: run /tests/test2_spec.js ports: - "6566:6566" (Note that the volume mount and the command have to agree on the path - here the current directory is mounted at /tests inside the container.) Both of these methods should allow you to run multiple tests at the same time in a CI environment as well as on your local machine.
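The bash approach generalizes to "run everything in a directory" with a glob loop. A sketch (the K6_BIN override and the dummy spec files are illustrative scaffolding so the script can be dry-run without k6 installed):

```shell
#!/bin/sh
# Demo setup: a throwaway directory with two dummy spec files
# standing in for real k6 scripts.
TESTS_DIR="$(mktemp -d)"
touch "$TESTS_DIR/test1_spec.js" "$TESTS_DIR/test2_spec.js"

# The pattern itself: launch one k6 process per spec, all in parallel.
# K6_BIN defaults to "echo" so this sketch runs without k6 installed;
# set K6_BIN=k6 (and point TESTS_DIR at your real tests) for a real run.
K6_BIN="${K6_BIN:-echo}"
for spec in "$TESTS_DIR"/*.js; do
  "$K6_BIN" run "$spec" >>"$TESTS_DIR/run.log" &
done
wait   # block until every background run has finished
```

With the real binary you would normally drop the log redirection and let each k6 process write its summary to the console or to per-test output files.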
k6
49,113,558
13
Having an application that runs with an insecure certificate results in an error from k6. time="2017-11-29T14:15:16Z" level=warning msg="Request Failed" error="Put https://xxxxxxx: x509: certificate signed by unknown authority"
You need to add the insecureSkipTLSVerify: true in options or add the --insecure-skip-tls-verify flag. https://docs.k6.io/docs/options
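In script form (the per-script alternative to the CLI flag), the option goes in the exported options object. A minimal k6 script sketch (the URL is illustrative):

```javascript
import http from "k6/http";

// Script-level equivalent of --insecure-skip-tls-verify:
// skip TLS certificate verification for the whole test run.
export const options = {
  insecureSkipTLSVerify: true,
};

export default function () {
  // Any endpoint with a self-signed certificate.
  http.get("https://self-signed.example.test/");
}
```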
k6
47,555,244
10
I would like to leverage Celery (with RabbitMQ as backend MQ) to execute tasks of varying flavors via different Queues. One requirement is that consumption (by the workers) from a particular Queue should have the capability to be paused and resumed. Celery, seems to have this capability via calling add_consumer and cancel_consumer. While I was able to cancel the consumption of tasks from a queue for a particular worker, I cannot get the worker to resume consumption by calling add_consumer. The code to reproduce this issue is provided here. My guess is likely I'm missing some sort of a parameter to be provided either in the celeryconfig or via the arguments when starting the workers? Would be great to get some fresh pairs of eyes on this. There is not much discussion on Stackoverflow regarding add_consumer nor in Github. So I'm hoping there's some experts here willing to share their thoughts/experience. -- I am running the below: Windows OS, RabbitMQ 3.5.6, Erlang 18.1, Python 3.3.5, celery 3.1.15
To resume from queue, you need to specify queue name as well as target workers. Here is how to do it. app.control.add_consumer(queue='high', destination=['celery@asus']) Here is add_consumer signature def add_consumer(state, queue, exchange=None, exchange_type=None, routing_key=None, **options): In your case, you are calling with app.control.add_consumer('high', destination=['celery@high1woka']) So high is getting passed to state and queue is empty. So it is not able to resume.
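The mix-up described above is plain Python argument binding, and can be reproduced with a stand-in function (a sketch only - unlike celery's real internal signature, `queue` is given a default here so the snippet runs standalone):

```python
# Stand-in mirroring the control signature quoted above:
#   def add_consumer(state, queue, exchange=None, ...)
def add_consumer(state, queue=None, exchange=None, exchange_type=None,
                 routing_key=None, **options):
    return {"state": state, "queue": queue}

# Queue name passed positionally: it lands in `state`, and `queue`
# stays empty -- which is why consumption never resumes.
wrong = add_consumer("high", destination=["celery@high1woka"])
print(wrong)

# Naming the argument puts the queue name where it belongs.
right = add_consumer(state=None, queue="high", destination=["celery@asus"])
print(right)
```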
RabbitMQ
45,784,824
13
I want to run some acceptance tests for my services that are using rabbitMq but I want to ignore all that require inter-service communication (amqp). The problem however is that Spring tries to connect to the (non-exisiting) rabbit host on startup so it can register its consumers. It does that for each method that is annotated with @RabbitListener which can get quite annoying with the long timeout this has if I have more than one listener in my service. How can I reduce this timeout or even prevent @RabbitListener connection all together? Our (simplified) Rabbit Config: @Configuration @EnableRabbit public class RabbitMqConfig { public RabbitMqConfig( @Value("${rabbitmq.host}") String rabbitHost, @Value("${rabbitmq.port}") int rabbitPort, @Value("${exchange.name}") String exchange) { this.rabbitHost = rabbitHost; this.rabbitPort = rabbitPort; this.exchange= exchange; } @Bean DirectExchange directExchangeBean() { return new DirectExchange(this.exchange, true, false); } @Bean public ConnectionFactory connectionFactory() { CachingConnectionFactory connectionFactory = new CachingConnectionFactory(rabbitHost); connectionFactory.setPort(rabbitPort); return connectionFactory; } @Bean public RabbitTemplate rabbitTemplate() { return new RabbitTemplate(connectionFactory()); } @Bean public Queue itemDoneQueue() { return new Queue(ITEM_DONE_QUEUENAME, true); } @Bean Binding itemDoneBinding() { return BindingBuilder.bind(itemDoneQueue()).to(directExchangeBean()).with(ITEM_DONE_KEY); } } Properties rabbitmq.host=192.168.42.100 rabbitmq.port=5672 exchange.name=myExchange The Listener: @RabbitListener(queues = ITEM_DONE_QUEUENAME) public void receiveMessageFromItemDoneQueue(String message) { // do the work } The Test: @RunWith(SpringRunner.class) @SpringBootTest(classes = {Application.class}) public abstract class RabbitTest { Really nothing special here. Obviously during testing the rabbit host is unavailable. That is fine. I want to ignore the fact. And quickly. 
I've tried spring.rabbitmq.connection-timeout=1 But that didn't change anything. Using spring.rabbitmq.listener.simple.auto-startup=false neither does anything. Using spring.autoconfigure.exclude:org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration just kills my application context loading with spring complaining about a NoSuchBeanDefinitionException: No bean named 'rabbitListenerContainerFactory' available Any ideas? Thanks!
I've had a similar problem, but solved it with spring.rabbitmq.listener.direct.auto-startup=false SpringBoot version 2.2.4.RELEASE Spring framework version 5.2.3.RELEASE
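Pulled together, a test profile that keeps listener containers from starting might look like the fragment below - assuming you rely on Spring Boot's auto-configured container factories. With a hand-rolled SimpleRabbitListenerContainerFactory like the one in the question, these properties are not read, and you would call factory.setAutoStartup(false) on the factory bean instead.

```properties
# src/test/resources/application-test.properties
# Don't auto-start @RabbitListener containers during tests.
spring.rabbitmq.listener.simple.auto-startup=false
spring.rabbitmq.listener.direct.auto-startup=false
# Fail fast (ms) if something does try to connect.
spring.rabbitmq.connection-timeout=1000
```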
RabbitMQ
44,927,085
13
I'm using postwait/node-amqp (link) to connect to a variety of RabbitMQ exchanges and queues in our organization. As my project has moved from dev to production I've encountered several issues with queues not being setup correctly or passwords being incorrect etc. In the latter case, it's obvious, I'll get a ECONNREFUSED error. In the first case though, I don't get any errors, just a timeout on the connect. Given a URI like amqp://USER:[email protected] how can I determine if a queue called "FooWorkItems.Work' is accepting connections for listening? What's the bare minimum code for this, the equivalent of checking if an API is listening or a server is up and listening on the ping port? Code: if (this.amqpLib == null) { this.amqpLib = require('amqp'); } this.connection = this.amqpLib.createConnection({ url: this.endpoint }); this.connection.on('ready', (function(_this) { return function() { var evt, _fn, _fn1, _i, _j, _len, _len1, _ref, _ref1; _this.logger.info("" + _this.stepInfo + " connected to " + _this.endpoint + "; connecting to " + queueName + " now."); if (_this.fullLogging) { _ref = ['connect', 'heartbeat', 'data']; _fn = function(evt) { return _this.connection.on(evt, function() { _this.logger.trace("" + _this.stepInfo + " AMQP event: " + evt); if (arguments != null) { return _this.logger.trace({ args: arguments }); } }); }; for (_i = 0, _len = _ref.length; _i < _len; _i++) { evt = _ref[_i]; _fn(evt); } _ref1 = ['error', 'close', 'blocked', 'unblocked']; _fn1 = function(evt) { return _this.connection.on(evt, function() { if (evt !== 'close') { return _this.logger.error("" + _this.stepInfo + " AMQP event: " + evt); } else { return _this.logger.warn("" + _this.stepInfo + " AMQP event: " + evt); } }); }; for (_j = 0, _len1 = _ref1.length; _j < _len1; _j++) { evt = _ref1[_j]; _fn1(evt); } } return _this.connection.queue(_this.queueName, { passive: true }, function(q) { logger.debug("" + stepInfo + " connected to queue " + queueName + ". 
Init complete."); return q.subscribe(function(message, headers, deliveryInfo, messageObject) { logger.trace("" + stepInfo + " recvd message"); return logger.trace({ headers: headers }); }); }); };
In amqp, queues and exchanges are concepts unrelated to a connection, they don't listen or broadcast, and you can't connect to those, only to a broker. The RabbitMQ server does of course accept network connections, and the protocol defines a logical Connection on top of the transport; this connection includes a heartbeat, configurable with the heartbeat option in this library. Like you said, connection errors, including timeouts, need to be taken care of at startup; for the rest you can rely on the heartbeat, analogous to a "ping" mechanism. If the connection is interrupted and your heartbeat parameter is set, your library will simply throw an error, so there is no need for you to re-implement this. You should also take a look at the reconnect setting in postwait/node-amqp, as it might automatically deal with some of the network failure modes.
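Concretely, the heartbeat and reconnect knobs mentioned above are passed to createConnection. A connection-configuration sketch (option names as documented in postwait/node-amqp's README - verify against the version you use; the URI is illustrative):

```javascript
var amqp = require('amqp');

var connection = amqp.createConnection(
  // Connection options: heartbeat is in seconds; 0 disables it.
  { url: 'amqp://USER:PASSWORD@apihost', heartbeat: 30 },
  // Implementation options: automatic reconnect with linear backoff.
  { reconnect: true, reconnectBackoffStrategy: 'linear', reconnectBackoffTime: 500 }
);

connection.on('error', function (err) {
  // Heartbeat loss and broken sockets surface here.
  console.error('amqp connection error:', err);
});
```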
RabbitMQ
36,137,343
13
I am a newbie to RabbitMQ, hence need guidance on a basic question: Does RabbitMQ send messages to consumer as they arrive? OR Does RabbitMQ send messages to consumer as they become available? At message consumption endpoint, I am using com.rabbitmq.client.QueueingConsumer. Looking at the sprint client source code, I could figure out that QueueingConsumer keeps listening on socket for any messages the broker sends to it Any message that is received is parsed and stored as Delivery in a LinkedBlockingQueue encapsulated inside the QueueingConsumer. This implies that even if the message processing endpoint is busy, messages will be pushed to QueueingConsumer Is this understanding right?
TLDR: you poll messages from RabbitMQ till the prefetch count is exceeded, in which case you will block and only receive heartbeat frames till the fetched messages are ACKed. So you can poll, but you will only get new messages if the number of non-acked messages is less than the prefetch count. New messages are put on the QueueingConsumer, and in theory you should never really have much more than the prefetch count in that QueueingConsumer's internal queue. Details: Low-level wise (I'm probably going to get some of this wrong), RabbitMQ itself doesn't actually push messages. The client has to continuously read the connection for frames based on the AMQP protocol. It's hard to classify this as push or pull, but just know the client has to continuously read the connection, and because the Java client is sadly BIO it is a blocking/polling operation. The blocking/polling is based on the AMQP heartbeat frames, regular frames, and the socket timeout configuration. What happens in the Java RabbitMQ client is that there is a thread for each channel (or maybe it's per connection), and that thread loops gathering frames from RabbitMQ which eventually become commands that are put in a blocking queue (I believe it's like a SynchronousQueue, aka handoff queue, but Rabbit has its own special one). The QueueingConsumer is a higher-level API and will pull commands off of that handoff queue mentioned earlier, because if commands are left on the handoff queue it will block the channel's frame-gathering loop. This can be bad because it can time out the connection. Also the QueueingConsumer allows work to be done on a separate thread instead of in the same thread as the frame-gathering loop mentioned earlier. Now if you look at most Consumer implementations, you will probably notice that they are almost always unbounded blocking queues.
I'm not entirely sure why the bounding of these queues can't be a multiplier of the prefetch but if they are less than the prefetch it will certainly cause problems with the connection timing out.
RabbitMQ
24,309,230
13
TL;DR How to create Spring Boot AMQP connection factory programatically? Hey, In order to connect to my RabbitMQ I added these to my application.properties file of my Spring Boot app: spring.rabbitmq.host=host spring.rabbitmq.port=5672 spring.rabbitmq.username=myapp spring.rabbitmq.password=mypass And according to my understanding, these values are then used to create Spring Boot's auto configured ConnectionFactory, which I then use in: @Bean @Conditional(RabbitCondition.class) SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter completedOrderListenerAdapter) { SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(); container.setConnectionFactory(connectionFactory); container.setQueueNames(completedOrderQueueName); container.setMessageListener(completedOrderListenerAdapter); return container; } I would like to be able to use rabbitMQ credentials from different environment files which are not application.properties, so I would like to create ConnectionFactory bean programatically. How do I achieve this? Thanks.
@Bean public ConnectionFactory connectionFactory() { CachingConnectionFactory connectionFactory = new CachingConnectionFactory(); connectionFactory.setAddresses(address); connectionFactory.setUsername(username); connectionFactory.setPassword(password); return connectionFactory; }
RabbitMQ
54,002,827
12
I am trying to connect to a remote rabbitmq server. I have the correct credentials and vhost exists on the remove server, but I cannot connect. I get the error pika.exceptions.ProbableAccessDeniedError: (530, 'NOT_ALLOWED - vhost test_vhost not found') I have struggled with this for a while but I can't seem to get what the problem is.
For me, the fix was removing the trailing path segment from the URL. Before: AMQP_URL = 'amqp://guest:guest@localhost:5672/hostname' After: AMQP_URL = 'amqp://guest:guest@localhost:5672' it works. The path component of an AMQP URL is the virtual host name, so the first form tries to connect to a vhost literally named hostname; leaving the path off (or using %2F) selects the default vhost /.
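The vhost-in-the-path convention is easy to check with the standard library (pika's URLParameters follows the same rule); a small sketch:

```python
from urllib.parse import urlsplit, unquote

def amqp_vhost(url):
    """Return the virtual host encoded in an AMQP URL's path,
    or the default vhost '/' when the path is empty."""
    path = urlsplit(url).path
    if path in ("", "/"):
        return "/"
    return unquote(path[1:])  # strip the leading slash, decode %2F etc.

print(amqp_vhost("amqp://guest:guest@localhost:5672/hostname"))  # hostname
print(amqp_vhost("amqp://guest:guest@localhost:5672"))           # /
print(amqp_vhost("amqp://guest:guest@localhost:5672/%2F"))       # /
```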
RabbitMQ
53,102,929
12
We are planning to use Kafka for queueing in our application. I have some bit of experience in RabbitMQ and Spring. With RabbitMQ and Spring, we used to manage queue creation while starting up the spring service. With Kafka, I'm not sure what could be the best way to create the topics? Is there a way to manage the topics with Spring. Or, should we write a separate script which helps in creating topics? Maintaining a separate script for creating topics seems a bit weird for me. Any suggestions will be appreciated.
In Spring it is possible to create topics during the start of the application using beans: @Bean public KafkaAdmin admin() { Map<String, Object> configs = new HashMap<>(); configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses())); return new KafkaAdmin(configs); } @Bean public NewTopic topic1() { return new NewTopic("foo", 10, (short) 2); } (The kafkaEmbedded() call here comes from an embedded-broker test setup; in a real application you would put your actual bootstrap servers into BOOTSTRAP_SERVERS_CONFIG.) Alternatively you can write your own topic-creation logic by autowiring the AdminClient, for instance reading the list from an input file or specifying advanced properties such as partition numbers: @Autowired private KafkaAdmin admin; //...your implementation Also note that since Kafka 1.1.0 auto.create.topics.enable is enabled by default (see Broker configs). For more information refer to the spring-kafka docs
RabbitMQ
50,909,458
12
I'm trying to setup RabbitMQ in a model where there is only one producer and one consumer, and where messages sent by the producer are delivered to the consumer only if the consumer is connected, but dropped if the consumer is not present. Basically I want the queue to drop all the messages it receives when no consumer is connected to it. An additional constraint is that the queue must be declared on the RabbitMQ server side, and must not be explicitly created by the consumer or the producer. Is that possible? I've looked at a few things, but I can't seem to make it work: durable vs non-durable does not work, because it is only useful when the broker restarts. I need the same effect but on a connection. setting auto_delete to true on the queue means that my client can never connect to this queue again. x-message-ttl and max-length make it possible to lose message even when there is a consumer connected. I've looked at topic exchanges, but as far as I can tell, these only affect the routing of messages between the exchange and the queue based on the message content, and can't take into account whether or not a queue has connected consumers. The effect that I'm looking for would be something like auto_delete on disconnect, and auto_create on connect. Is there a mechanism in rabbitmq that lets me do that?
After a bit more research, I discovered that one of the assumptions in my question regarding x-message-ttl was wrong. I overlooked a single sentence from the RabbitMQ documentation: Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately https://www.rabbitmq.com/ttl.html It turns out that the simplest solution is to set x-message-ttl to 0 on my queue.
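In declaration terms this is a single queue argument. A sketch (queue and policy names are illustrative; the pika and rabbitmqctl parts are shown as comments since they need a live broker):

```python
# A message TTL of 0 means "expire on arrival unless a consumer can
# take the message immediately" -- exactly the drop-if-no-consumer
# behaviour asked about.
queue_arguments = {"x-message-ttl": 0}

# Declared from a client with pika:
#   import pika
#   conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
#   conn.channel().queue_declare(queue="drop.if.unconsumed",
#                                arguments=queue_arguments)
#
# Or purely server-side, matching the question's constraint, as a policy:
#   rabbitmqctl set_policy DropUnconsumed "^drop\." '{"message-ttl":0}' --apply-to queues

print(queue_arguments)
```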
RabbitMQ
45,327,819
12
I am trying to use RabbitMQ HTTP REST client to publish messages into the queue. I am using the following url and request http://xxxx/api/exchanges/xxxx/exc.notif/publish { "routing_key":"routing.key", "payload":{ }, "payload_encoding":"string", "properties":{ "headers":{ "notif_d":"TEST", "notif_k": ["example1", "example2"], "userModTime":"timestamp" } } } And getting back from the rabbit the following response: {"error":"bad_request","reason":"payload_not_string"} I have just one header set: Content-Type:application/json I was trying to set the "payload_encoding":"base64", but it didn't help. I am new to rabbit any response is welcome.
Try with { "properties": { "content-type": "application/json" }, "routing_key": "testKey", "payload": "1234", "payload_encoding": "string" } The important part is that payload is a string: with payload_encoding set to string, the management API expects the payload field to be a JSON string, not a nested JSON object, hence the payload_not_string error.
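So a JSON message body has to be encoded twice: once for the payload itself, once for the HTTP request. A stdlib sketch of building the request body (field values are illustrative):

```python
import json

inner = {"notif_d": "TEST", "notif_k": ["example1", "example2"]}  # your message

body = {
    "routing_key": "routing.key",
    # payload must itself be a STRING when payload_encoding is "string",
    # so the inner JSON is serialized *before* it goes in here.
    "payload": json.dumps(inner),
    "payload_encoding": "string",
    "properties": {},  # AMQP properties/headers would go here
}

request_body = json.dumps(body)  # this is what gets POSTed to /api/exchanges/.../publish
print(request_body)
```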
RabbitMQ
44,905,126
12
I'm running a Django app in an EC2 instance, which uses RabbitMQ + Celery for task queuing. Are there any drawbacks to running my RabbitMQ node from the same EC2 instance as my production app?
The answer to this question really depends on the context of your application. When you're faced with scenarios like this you should always consider a few things. Separation of concerns Here, we want to make sure that none of the systems is responsible for the running of the other systems. This includes things like: if the EC2 instance running all the stuff goes down, will the remaining tasks in the queue continue running; if my RAM is full, will all systems remain functioning; can I scale just one segment of my app without having to redesign the infrastructure. By having rabbit and django (with some kind of service, wsgi, gunicorn, waitress etc) all on one box, you lose a lot of resource contingency. Although RAM and CPU may be abundant, there is a limit to IO, disk writes, network writes etc. This means that if for some reason you have a heavy write function, all other systems may suffer as a result. If you have a heavy write to RAM function, the same applies. So really the downsides of keeping things in one system that I can see from your question and my own experience are as follows. Multiple points of failure. If your one instance of rabbit fails, your queues and tasks stop working. If your app starts generating big traffic, other systems start to contend for resources. If any component goes down, that could mean downtime of other services. System downtime means complete downtime of all components. Lots of headaches when your application demands more resources with minimal downtime. Lots of web traffic will slow down task running. Lots of task running will slow down web requests. Lots of IO will slow down all the things. The rule of thumb that I usually follow is: keep single points of failure far from each other - that way you only need to manage those components. A good use case for this would be to use an EC2 instance for your app, another for your workers and another for your rabbit.
That way you can apply smaller/bigger instances for just those components if you need to. You can even create AMIs and autoscaling groups - if that is your use case. Here are some articles for reference: Separation of concerns; Modern design architectures; Single points of failure
RabbitMQ
44,196,151
12
We have two servers, Server A and Server B. Server A is dedicated for running django web app. Due to large number of data we decided to run the celery tasks in server B. Server A and B uses a common database. Tasks are initiated after post save in models from Server A,webapp. How to implement this idea using rabbitmq in my django project
You have 2 servers, 1 project and 2 settings (1 per server). server A (web server + rabbit) server B (only celery for workers) Then you set up the broker url in both settings. Something like this: BROKER_URL = 'amqp://user:password@IP_SERVER_A:5672//' pointing at the IP of server A in server B's settings. With this, any task is sent to rabbit on server A, to the virtual host /. On server B, you just start the celery worker, something like this: python manage.py celery worker -Q queue_name -l info and that's it. Explanation: django sends messages to rabbit to queue a task, then celery workers request messages and execute the tasks. Note: it is not required that rabbitMQ be installed on server A; you can install rabbit on a server C and reference it in the BROKER_URL in both settings (A and B) like this: BROKER_URL='amqp://user:password@IP_SERVER_C:5672//'.
RabbitMQ
44,113,578
12
I'm using RabbitMQ's round robin feature to dispatch messages between multiple consumers but having only one of them receive the actual message at a time. My problem is that my messages represent tasks and I would like to have local sessions (state) on my consumers. I know beforehand which messages belong to which session but I don't know what is the best way (or is there a way?) to make RabbitMQ dispatch to consumers using an algorithm I specify. I don't want to write my own orchestration service because it will become a bottleneck and I don't want my producers to know which consumer will take their messages because I'll lose the decoupling I get using Rabbit. Is there a way to make RabbitMQ dispatch my messages to consumers based on a pre-defined algorithm/rule instead of round robin? Clarification: I use several microservices written in different languages and each service has its own job. I communicate between them using protobuf messages. I give each new message a UUID. If a consumer receives a message it can create a response message from it (this might not be the correct terminology since the producers and consumers are decoupled and they don't know about each other) and this UUID is copied to the new message. This forms a data transformation pipeline and this "process" is identified by the UUID (the processId). My problem is that it is possible that I have multiple worker consumers and I need a worker to stick to an UUID if it has seen it before. I have this need because there might be local state for each process After the process is finished I want to clean up the local state A microservice might receive multiple messages for the same process and I need to differentiate which message belongs to which process Since RabbitMQ distributes tasks between workers using round robin I can't force my processes to stick to a worker. 
I have several caveats: The producers are decoupled from the consumers so direct messaging is not an option The number of workers is not constant (there is a load balancer which might start new instances of a worker) If there is a workaround which does not involve changing the round robin algorithm and does not break my constraints it is also OK!
If you don't want to go for an orchestration service, you can try a topology like this instead: For simplicity's sake I assume that your processId is used as the routing key (in the real world you may want to store it in a header and use a headers exchange instead). An incoming message will be accepted by the Incoming Exchange (type: direct), which has an alternate-exchange attribute set to point to the No Session Exchange (fanout). Here is what the RabbitMQ docs say on 'Alternate Exchanges': It is sometimes desirable to let clients handle messages that an exchange was unable to route (i.e. either because there were no bound queues or no matching bindings). Typical examples of this are detecting when clients accidentally or maliciously publish messages that cannot be routed; "or else" routing semantics, where some messages are handled specially and the rest by a generic handler. RabbitMQ's Alternate Exchange ("AE") feature addresses these use cases. (we are particularly interested in the or else use case here) Each consumer will create its own queue and bind it to the Incoming Exchange, using the processId(s) for the session(s) it is aware of so far as the binding's routing key. This way it will only get messages for the sessions it is interested in. In addition, all the consumers will bind to the shared No Session Queue. If a message with a previously unknown processId comes in, there will be no specific binding for it registered with the Incoming Exchange, so it will get re-routed to the No Session Exchange => No Session Queue and get dispatched to one of the Consumers in the usual (round-robin) manner. A consumer will then register a new binding for it with the Incoming Exchange (i.e. start a new "session"), so that it will then get all the subsequent messages with this processId. Once the "session" is over, it will have to remove the corresponding binding (i.e. close the "session").
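In declaration terms, the topology boils down to one exchange argument plus per-session bindings. A sketch (all names are illustrative; the pika calls are shown as comments since they need a live broker):

```python
# The incoming exchange routes by processId; anything unroutable falls
# through to the no-session exchange via the alternate-exchange argument.
incoming_exchange_args = {"alternate-exchange": "no.session.exchange"}

# With a live broker, roughly:
#   import pika
#   ch = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()
#   ch.exchange_declare("incoming.exchange", exchange_type="direct",
#                       arguments=incoming_exchange_args)
#   ch.exchange_declare("no.session.exchange", exchange_type="fanout")
#   ch.queue_declare("no.session.queue")
#   ch.queue_bind("no.session.queue", "no.session.exchange")
#
#   # When a consumer adopts a session, it starts receiving that processId:
#   ch.queue_bind("consumer-1-queue", "incoming.exchange",
#                 routing_key="<processId>")
#   # ...and on session end it removes the binding again:
#   ch.queue_unbind("consumer-1-queue", "incoming.exchange",
#                   routing_key="<processId>")

print(incoming_exchange_args)
```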
RabbitMQ
43,001,689
12
I'm new to RabbitMQ, and I'm somewhat lost in the documentation. Currently, as an example, I'm trying to build a small mailer-service that listens to a queue, but I'm somewhat stuck on where I should put the parameters that my service has (destination, subject, ...) Should I put them inside some encoded format (json), inside my messages, or should I use the header-construction, like the following example: string message = "Hello World!"; var body = Encoding.UTF8.GetBytes(message); var properties = new BasicProperties(); properties.Headers = new Dictionary<string, object>(); properties.Headers.Add("destination", "matthias123@localhost"); channel.BasicPublish(exchange: "", routingKey: "sendmail", basicProperties: properties,body: body); Does using the headers offer additional benefits? Like, for example, would it be possible to filter messages that are sent to a specific destination?
I wouldn't use headers for what you are trying to do. That information belongs in the body of the message, in my opinion. Look at it this way: The body of the message should contain everything you need to complete the work requested. In this case, it would be the sender, subject, email content, etc. Headers on the other hand, are bits of data about the AMQP message, not the message contents. There's a lot of potential confusion here with your work to be done being "email". Too much overlap in terminology between the AMQP message, and email message. That being said, I'll pick a different example of work to do: calculate the fibonacci sequence. In this case, the message you send across rabbitmq would contain something like how many places of fibonacci to calculate up front and then how many to send back, after that. For example you might send a message like this (as json in this case): { start: 1, take: 3 } This should produce a result of 1, 1, 2 because it starts at the first position and returns 3 items from the sequence. Using your specific question and logic: should I put the start and take attributes into headers of the message? No. If I did, that would mean my message is empty as all of the information about the work to be done would be contained in the headers. It doesn't make sense when I look at it this way because now there is no message to send... only headers. On the other hand, if I keep these two points of data in the message body, the headers become more useful as a way to send metadata about the AMQP message itself... Not information about the content of the message, but information about the idea of the message. In this case I'm saying that I want to return items from the fibonacci sequence. In other words, I'm engaging in RPC (remote procedure call) and am expecting a return value. AMQP doesn't support return values directly. What I can do, however, is stuff a queue name into the headers and send the result to that queue. 
Then the code that requested the fibonacci numbers can listen to that queue and get the results. So I might do something like this when sending the message: var properties = new BasicProperties(); properties.Headers = new Dictionary<string, object>(); properties.Headers.Add("return-queue", "fibreturn"); Here I'm setting up a "return-queue" header - information about the message, or request for information in this case - inside of the headers. The code that handles the fibonacci sequence will read this header and send the response back to this queue. This is a better use of headers, as it makes the header store information about the message... In this case, where the response should be sent. The headers don't contain information about the actual work to be done, though. That is all stored in the message body directly. P.S. I am purposely not using the "reply-to" property like you normally should, to do RPC. I'm using this as an example of why you shouldn't put your "destination" in the headers. For a better implementation of the fibonacci sequence idea, see the RMQ docs and how it uses "reply-to" correctly https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html
RabbitMQ
42,593,804
12
I am facing an issue receiving a message from RabbitMQ. I am sending a message like below HashMap<Object, Object> senderMap=new HashMap<>(); senderMap.put("STATUS", "SUCCESS"); senderMap.put("EXECUTION_START_TIME", new Date()); rabbitTemplate.convertAndSend(Constants.ADAPTOR_OP_QUEUE,senderMap); If we look in RabbitMQ, we will see a fully qualified type. In the current scenario, we have n number of producers for the same consumer. If I use any mapper, it leads to an exception. How will I send a message so that it doesn't contain any type id, and I can receive the message as a Message object and later bind it to my custom object in the receiver? I am receiving the message like below. Could you please let me know how to use Jackson2JsonMessageConverter so that the message binds directly to my Object/HashMap on the receiver end? Also, I have now removed the type id from the sender. How the message looks in RabbitMQ priority: 0 delivery_mode: 2 headers: ContentTypeId: java.lang.Object KeyTypeId: java.lang.Object content_encoding: UTF-8 content_type: application/json {"Execution_start_time":1473747183636,"status":"SUCCESS"} @Component public class AdapterOutputHandler { private static Logger logger = Logger.getLogger(AdapterOutputHandler.class); @RabbitListener(containerFactory="adapterOPListenerContainerFactory",queues=Constants.ADAPTOR_OP_QUEUE) public void handleAdapterQueueMessage(HashMap<String,Object> message){ System.out.println("Receiver:::::::::::"+message.toString()); } } Connection @Bean(name="adapterOPListenerContainerFactory") public SimpleRabbitListenerContainerFactory adapterOPListenerContainerFactory() { SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory(); factory.setConnectionFactory(connectionFactory()); Jackson2JsonMessageConverter messageConverter = new Jackson2JsonMessageConverter(); DefaultClassMapper classMapper = new DefaultClassMapper(); messageConverter.setClassMapper(classMapper);
factory.setMessageConverter(messageConverter); return factory; } Exception Caused by: org.springframework.amqp.support.converter.MessageConversionException: failed to convert Message content. Could not resolve __TypeId__ in header and no defaultType provided at org.springframework.amqp.support.converter.DefaultClassMapper.toClass(DefaultClassMapper.java:139) I don't want to use __TypeId__ from the sender because there are multiple senders for the same queue and only one consumer.
it leads to an exception What exception? TypeId: com.diff.approach.JobListenerDTO That means you are sending a DTO, not a hash map as you describe in the question. If you want to remove the typeId header, you can use a message post processor... rabbitTemplate.convertAndSend(Constants.INPUT_QUEUE, dto, m -> { m.getMessageProperties().getHeaders().remove("__TypeId__"); return m; }); (or, new MessagePostProcessor() {...} if you're not using Java 8). EDIT What version of Spring AMQP are you using? With 1.6 you don't even have to remove the __TypeId__ header - the framework looks at the listener parameter type and tells the Jackson converter the type so it automatically converts to that (if it can). As you can see here, it works fine without removing the type id... package com.example; import java.util.HashMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.springframework.amqp.core.Queue; import org.springframework.amqp.rabbit.annotation.RabbitListener; import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory; import org.springframework.amqp.rabbit.connection.ConnectionFactory; import org.springframework.amqp.rabbit.core.RabbitAdmin; import org.springframework.amqp.rabbit.core.RabbitTemplate; import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.ConfigurableApplicationContext; import org.springframework.context.annotation.Bean; @SpringBootApplication public class So39443850Application { private static final String QUEUE = "so39443850"; public static void main(String[] args) throws Exception { ConfigurableApplicationContext context = SpringApplication.run(So39443850Application.class, args); context.getBean(RabbitTemplate.class).convertAndSend(QUEUE, new DTO("baz", "qux"));
context.getBean(So39443850Application.class).latch.await(10, TimeUnit.SECONDS); context.getBean(RabbitAdmin.class).deleteQueue(QUEUE); context.close(); } private final CountDownLatch latch = new CountDownLatch(1); @RabbitListener(queues = QUEUE, containerFactory = "adapterOPListenerContainerFactory") public void listen(HashMap<String, Object> message) { System.out.println(message.getClass() + ":" + message); latch.countDown(); } @Bean public Queue queue() { return new Queue(QUEUE); } @Bean public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) { RabbitTemplate template = new RabbitTemplate(connectionFactory); template.setMessageConverter(new Jackson2JsonMessageConverter()); return template; } @Bean public SimpleRabbitListenerContainerFactory adapterOPListenerContainerFactory(ConnectionFactory connectionFactory) { SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory(); factory.setConnectionFactory(connectionFactory); factory.setMessageConverter(new Jackson2JsonMessageConverter()); return factory; } public static class DTO { private String foo; private String baz; public DTO(String foo, String baz) { this.foo = foo; this.baz = baz; } public String getFoo() { return this.foo; } public void setFoo(String foo) { this.foo = foo; } public String getBaz() { return this.baz; } public void setBaz(String baz) { this.baz = baz; } } } Result: class java.util.HashMap:{foo=baz, baz=qux} This is described in the documentation... In versions prior to 1.6, the type information to convert the JSON had to be provided in message headers, or a custom ClassMapper was required. Starting with version 1.6, if there are no type information headers, the type can be inferred from the target method arguments. You can also configure a custom ClassMapper to always return HashMap.
RabbitMQ
39,443,850
12
I am writing an automated test to test a consumer. So far I did not need to include a header when publishing messages, but now I do, and it seems like the documentation is lacking here. This is my publisher: class RMQProducer(object): def __init__(self, host, exchange, routing_key): self.host = host self.exchange = exchange self.routing_key = routing_key def publish_message(self, message): connection = pika.BlockingConnection(pika.ConnectionParameters(self.host)) channel = connection.channel() message = json.dumps(message) channel.basic_publish(exchange=self.exchange, routing_key=self.routing_key, body=message) I want to do something like: channel.basic_publish(exchange=self.exchange, routing_key=self.routing_key, body=message, headers={"key": "value"}) What's the correct way to add headers to this message?
You would use pika.BasicProperties to add headers. channel.basic_publish(exchange=self.exchange, routing_key=self.routing_key, properties=pika.BasicProperties( headers={'key': 'value'} # Add a key/value header ), body=message) The official documentation for pika does indeed not cover this scenario exactly, but the documentation does have the specifications listed. I would strongly recommend that you bookmark this page if you are going to continue using pika.
RabbitMQ
37,682,184
12
Just want to know the meaning of the parameters in worker.py file: def callback(ch, method, properties, body): print " [x] Received %r" % (body,) What do ch, method, and properties mean?
ch "ch" is the "channel" over which the communication is happening. Think of a RabbitMQ connection in two parts: the TCP/IP connection, and channels within the connection. The actual TCP/IP connection is expensive to create, so you only want one connection per process instance. A channel is where the work is done with RabbitMQ. A channel exists within a connection, and you need to have the channel reference so you can ack/nack messages, etc. method I think "method" is meta information regarding the message delivery. When you want to acknowledge the message - tell RabbitMQ that you are done processing it - you need both the channel and the delivery tag. The delivery tag comes from the method parameter. I'm not sure why this is called "method" - perhaps it is related to the AMQP spec, where the "method" is meta-data about which AMQP method was executed? properties The "properties" of the message are user-defined properties on the message. You can set any arbitrary key / value pair that you want in these properties, and possibly get things like the routing key used (though this may come from "method"). Properties are often used for bits of data that your code needs to have, but that aren't part of the actual message body. For example, if you had a re-sequencer process to make sure messages are processed in order, the "properties" would probably contain the message's sequence number.
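A minimal sketch of how the three parameters are typically used together. The Method and Properties namedtuples below are hypothetical stand-ins for pika's real delivery and properties objects, so the shape can be shown without a broker:

```python
from collections import namedtuple

# Stand-ins for the pika objects described above; the real ones are
# pika.spec.Basic.Deliver (method) and pika.spec.BasicProperties (properties).
Method = namedtuple("Method", ["delivery_tag", "routing_key"])
Properties = namedtuple("Properties", ["headers"])

def callback(ack, method, properties, body):
    """Shape of a consumer callback: ack via method.delivery_tag, and read
    app-defined metadata (like a sequence number) from properties.headers."""
    sequence = (properties.headers or {}).get("sequence")
    ack(method.delivery_tag)  # in pika: ch.basic_ack(delivery_tag=method.delivery_tag)
    return sequence, body
```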
RabbitMQ
34,202,345
12
I have an existing queue created in RabbitMQ. It can be created with or without x-dead-letter-exchange parameter. I am creating a consumer of this queue in Spring using the RabbitTemplate. When I declare the queue, I don't want to specify the x-dead-letter-exchange parameter. I would like the template to somehow figure it itself or not care. I am throwing AmqpRejectAndDontRequeueException from my consumer to indicate bad messages, but I want the creator of the queue to be responsible for the decision whether or not to create an exchange and queue for the rejected messages. Here is my bean that declares the queue in Spring: @Bean Queue queue() { Map<String, Object> args = new HashMap<>(); // set the queue with a dead letter feature args.put("x-dead-letter-exchange", REJECTED_EXCHANGE); args.put("x-dead-letter-routing-key", REJECTED_ROUTING_KEY); Queue queue = new Queue(Constants.QUEUE_NAME, false, false, false, args); return queue; } This works fine, but when the creator of the queue decides not to use the dead letter feature, I see the following error: Channel shutdown: channel error; protocol method: #method<channel.close> (reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-dead-letter-exchange' for queue 'queueName' The message is a bit longer, it continues telling me which side has which x-dead-letter-exchange (none or a name of the exchange). I've tried different combinations (e.g. creating the queue with exchange and not specifying it in the Spring or creating the queue without the exchange and specifying it in the Spring), only to see different variants of this message. How do I declare the queue so it simply accepts whatever parameters are already set in the queue?
Yes. The likely cause is this: if you declare some queues manually and later your program (the client code) tries to declare one with different settings, you get this error. When your client application tries to declare the queue, the server replies with a channel error because the declared arguments are not equivalent to those of the existing queue. To solve this problem, delete all the queues that you created manually, and let the client program create them by itself. If you have trouble deleting the queues because they still contain data you want to keep, create one queue manually and move all the data from the queues to be deleted into it through the "Move" tab of the queue.
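As a sketch of the rule involved: RabbitMQ only accepts a redeclaration whose arguments are exactly equivalent to the existing queue's. The helper below is a toy predicate illustrating that; the passive-declare call mentioned in the docstring is the usual way for a consumer to opt out of asserting arguments at all.

```python
def declaration_compatible(existing_args, redeclared_args):
    """RabbitMQ accepts a redeclaration only when every argument matches the
    existing queue exactly; any difference (e.g. one side setting
    x-dead-letter-exchange and the other not) yields PRECONDITION_FAILED (406).
    A consumer that should not care about the arguments can declare passively
    instead, e.g. in pika: channel.queue_declare(queue=name, passive=True),
    which checks existence without asserting any arguments."""
    return existing_args == redeclared_args
```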
RabbitMQ
31,938,638
12
I am sending a normal message through a producer to RabbitMQ and then I send a second message with the expiration attribute assigned to a value. Then using the rabbitmqctl list_queues command I monitor the status of the messages. I found that if I send a normal message first and then a message with expiration, the rabbitmqctl list_queues is always showing me 2 messages pending on the queue. When I consume them, I get only one. On the other hand if I send just 1 message with expiration, in the beginning I see the message and then after the correct expiration time, I find it deleted. My question is, on the first situation is actually the message taking space? Or it is an interface bug? My rabbitMQ version is: rabbitmq-server.noarch -> 3.1.5-1.el6
Looks like you missed some of the documentation on this feature. If you read the RabbitMQ documentation on per-message TTL (expiration), you will notice the following warning for exactly the behavior you are seeing (emphasis added): Caveats While consumers never see expired messages, only when expired messages reach the head of a queue will they actually be discarded (or dead-lettered). When setting a per-queue TTL this is not a problem, since expired messages are always at the head of the queue. When setting per-message TTL however, expired messages can queue up behind non-expired ones until the latter are consumed or expired. Hence resources used by such expired messages will not be freed, and they will be counted in queue statistics (e.g. the number of messages in the queue).
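The caveat can be modeled in a few lines of plain Python. This toy queue (names are mine, not RabbitMQ's) checks expiry only when a message reaches the head, which is why an expired message still shows up in the count until everything in front of it is consumed:

```python
import collections
import time

class TtlQueue:
    """Toy model of the caveat above: per-message TTL is only enforced at the
    head of the queue, so an expired message parked behind a live one still
    occupies space and is still counted, as rabbitmqctl list_queues would."""
    def __init__(self):
        self._q = collections.deque()

    def publish(self, body, expiration=None):
        deadline = None if expiration is None else time.monotonic() + expiration
        self._q.append((body, deadline))

    def __len__(self):
        return len(self._q)  # expired-but-buried messages are included

    def get(self):
        # Discard expired messages only as they surface at the head.
        while self._q:
            body, deadline = self._q.popleft()
            if deadline is None or time.monotonic() < deadline:
                return body
        return None
```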
RabbitMQ
27,991,503
12
I'm new to rabbitmq and pika, and am having trouble with stopping consuming. channel and queue setting: connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) channel = connection.channel() channel.queue_declare(queue=new_task_id, durable=True, auto_delete=True) Basically, consumer and producer are like this: consumer: def task(task_id): def callback(channel, method, properties, body): if body != "quit": print(body) else: print(body) channel.stop_consuming(task_id) channel.basic_consume(callback, queue=task_id, no_ack=True) channel.start_consuming() print("finish") return "finish" producer: proc = Popen(['app/sample.sh'], shell=True, stdout=PIPE) while proc.returncode is None: # running line = proc.stdout.readline() if line: channel.basic_publish( exchange='', routing_key=self.request.id, body=line ) else: channel.basic_publish( exchange='', routing_key=self.request.id, body="quit" ) break consumer task gave me output: # ... output from sample.sh, as expected quit �}q(UstatusqUSUCCESSqU tracebackqNUresultqNUtask_idqU 1419350416qUchildrenq]u. However, "finish" didn't get printed, so I'm guessing it's because channel.stop_consuming(task_id) didn't stop consuming. If so, what is the correct way to do it? Thank you.
I had the same problem. It seems to be caused by the fact that internally, start_consuming calls self.connection.process_data_events(time_limit=None). This time_limit=None makes it hang. I managed to work around this problem by replacing the call to channel.start_consuming() with its implementation, hacked: while channel._consumer_infos: channel.connection.process_data_events(time_limit=1) # 1 second
RabbitMQ
27,624,166
12
Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node? From what I read, it seems that all actions other than publishes go only to the master and the master then broadcasts the effect of the actions to the slaves (this is from the documentation). From my understanding this means a consumer will always consume messages from the master queue. Also, if I send a request to a slave for consuming a message, that slave will do an extra hop by going to the master to fetch that message. But what happens when I publish to a slave node? Will this node do the same thing and send the message to the master first? It seems there are so many extra hops when dealing with slaves that you could get better performance if you knew only the master. But how do you handle master failure? Then one of the slaves will be elected master, so you have to know where to connect to? Asking all of this because we are using a RabbitMQ cluster with HAProxy in front, so we can decouple the cluster structure from our apps. This way, whenever a node goes down, HAProxy will redirect to living nodes. But we have problems when we kill one of the rabbit nodes. The connection to rabbit is permanent, so if it fails, you have to recreate it. Also, you have to resend the messages in these cases, otherwise you will lose them. Even with all of this, messages can still be lost, because they may be in transit when I kill a node (in some buffers, somewhere on the network etc). So you have to use transactions or publisher confirms, which guarantee the delivery after all the mirrors have been filled up with the message. But here is another issue. You may have duplicate messages, because the broker might have sent a confirmation that never reached the producer (due to network failures, etc).
Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner. Is there a way of avoiding this? Or I have to decide whether I can lose couple of messages versus duplication of some messages?
Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node? This blog outlines exactly what happens. But what happens when I publish to a slave node? Will this node do the same thing of sending first the message to the master? The message will be redirected to the master Queue - that is, the node on which the Queue was created. But how do you handle master failure? Then one of the slaves will be elected master, so you have to know where to connect to? Again, this is covered here. Essentially, you need a separate service that polls RabbitMQ and determines whether nodes are alive or not. RabbitMQ provides a management API for this. Your publishing and consuming applications need to refer to this service either directly, or through a mutual data-store in order to determine that correct node to publish to or consume from. The connection to rabbit is permanent, so if it fails, you have to recreate it. Also, you have to resend the messages in this cases, otherwise you will lose them. You need to subscribe to connection-interrupted events to react to severed connections. You will need to build in some level of redundancy on the client in order to ensure that messages are not lost. I suggest, as above, that you introduce a service specifically designed to interrogate RabbitMQ. You client can attempt to publish a message to the last known active connection, and should this fail, the client might ask the monitor service for an up-to-date listing of the RabbitMQ cluster. Assuming that there is at least one active node, the client may then establish a connection to it and publish the message successfully. Even with all of this, messages can still be lost, because they may be in transit when I kill a node There are certain edge-cases that you can't cover with redundancy, and neither can RabbitMQ. 
For example, consider a message that lands in a Queue just as the HA policy invokes a background process to copy it to a backup node. During this process there is potential for the message to be lost before it is persisted to the backup node. Should the active node immediately fail, the message will be lost for good. There is nothing that can be done about this. Unfortunately, when we get down to the level of actual bytes travelling across the wire, there's a limit to the amount of safeguards that we can build. Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner. You can handle this a number of ways. For example, setting the message-ttl to a relatively low value will ensure that duplicated messages don't remain on the Queue for extended periods of time. You can also tag each message with a unique reference, and check that reference at the consumer level. Of course, this would require storing a cache of processed messages to compare incoming messages against; the idea being that if a previously processed message arrives, its tag will have been cached by the consumer, and the message can be ignored. One thing that I'd stress with AMQP and Queue-based solutions in general is that your infrastructure provides the tools, but not the entire solution. You have to bridge those gaps based on your business needs. Often, the best solution is derived through trial and error. I hope my suggestions are of use. I blog about a number of RabbitMQ design solutions, including the issues you mentioned, here if you're interested.
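The unique-reference idea can be sketched as follows. This assumes the publisher stamps each message with an id (the AMQP message_id property is a natural carrier); the in-memory set stands in for whatever cache you actually use, and a production version would also expire old entries:

```python
class IdempotentConsumer:
    """Sketch of consumer-side de-duplication: skip any message whose id has
    already been processed. The `seen` set is a stand-in for a real cache
    (e.g. Redis with TTLs) so duplicates are dropped instead of re-processed."""
    def __init__(self, process):
        self.process = process
        self.seen = set()

    def on_message(self, message_id, body):
        if message_id in self.seen:
            return False  # duplicate delivery: ack it, but do no work
        self.seen.add(message_id)
        self.process(body)
        return True
```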
RabbitMQ
27,104,726
12
This is a long one. I have a list of usernames and passwords. For each one I want to log in to the accounts and do some things. I want to use several machines to do this faster. The way I was thinking of doing this is to have a main machine whose job is just having a cron which from time to time checks if the rabbitmq queue is empty. If it is, read the list of usernames and passwords from a file and send it to the rabbitmq queue. Then have a bunch of machines subscribed to that queue whose job is receiving a user/pass, doing stuff with it, acknowledging it, and moving on to the next one, until the queue is empty, at which point the main machine fills it up again. So far I think I have everything down. Now comes my problem. I have checked that the things to be done with each user/pass aren't so intensive, so I could have each machine doing three of them simultaneously using python's threading. In fact for a single machine I have implemented this where I load the user/passes into a python Queue() and then have three threads consume that Queue(). Now I want to do something similar, but instead of consuming from a python Queue(), each thread of each machine should consume from a rabbitmq queue. This is where I'm stuck. To run tests I started by using rabbitmq's tutorial.
send.py: import pika, sys connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) channel = connection.channel() channel.queue_declare(queue='hello') message = ' '.join(sys.argv[1:]) channel.basic_publish(exchange='', routing_key='hello', body=message) connection.close() worker.py import time, pika connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) channel = connection.channel() channel.queue_declare(queue='hello') def callback(ch, method, properties, body): print ' [x] received %r' % (body,) time.sleep( body.count('.') ) ch.basic_ack(delivery_tag = method.delivery_tag) channel.basic_qos(prefetch_count=1) channel.basic_consume(callback, queue='hello', no_ack=False) channel.start_consuming() For the above you can run two worker.py which will subscribe to the rabbitmq queue and consume as expected. My threading without rabbitmq is something like this: runit.py class Threaded_do_stuff(threading.Thread): def __init__(self, user_queue): threading.Thread.__init__(self) self.user_queue = user_queue def run(self): while True: login = self.user_queue.get() do_stuff(user=login[0], pass=login[1]) self.user_queue.task_done() user_queue = Queue.Queue() for i in range(3): td = Threaded_do_stuff(user_queue) td.setDaemon(True) td.start() ## fill up the queue for user in list_users: user_queue.put(user) ## go! user_queue.join() This also works as expected: you fill up the queue and have 3 threads subscribe to it. Now what I want to do is something like runit.py but instead of using a python Queue(), using something like worker.py where the queue is actually a rabbitmq queue. 
Here's something which I tried and didn't work (and I don't understand why) rabbitmq_runit.py import time, threading, pika class Threaded_worker(threading.Thread): def callback(self, ch, method, properties, body): print ' [x] received %r' % (body,) time.sleep( body.count('.') ) ch.basic_ack(delivery_tag = method.delivery_tag) def __init__(self): threading.Thread.__init__(self) self.connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) self.channel = self.connection.channel() self.channel.queue_declare(queue='hello') self.channel.basic_qos(prefetch_count=1) self.channel.basic_consume(self.callback, queue='hello') def run(self): print 'start consuming' self.channel.start_consuming() for _ in range(3): print 'launch thread' td = Threaded_worker() td.setDaemon(True) td.start() I would expect that this launches three threads, each of which is blocked by .start_consuming() which just stays there waiting for the rabbitmq queue to send them something. Instead, this program starts, does some prints, and exits. The pattern of the exits is weird too: launch thread launch thread start consuming launch thread start consuming In particular notice there is one "start consuming" missing. What's going on? EDIT: One answer I found to a similar question is here Consuming a rabbitmq message queue with multiple threads (Python Kombu) and the answer is to "use celery", whatever that means. I don't buy it, I shouldn't need anything remotely as sophisticated as celery. In particular, I'm not trying to set up an RPC and I don't need to read replies from the do_stuff routines. EDIT 2: The print pattern that I expected would be the following. I do python send.py first message...... python send.py second message. python send.py third message. python send.py fourth message. and the print pattern would be launch thread start consuming [x] received 'first message......' launch thread start consuming [x] received 'second message.'
launch thread start consuming [x] received 'third message.' [x] received 'fourth message.'
The problem is that you're making the thread daemonic: td = Threaded_worker() td.setDaemon(True) # Shouldn't do that. td.start() Daemonic threads will be terminated as soon as the main thread exits: A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property. Leave out setDaemon(True) and you should see it behave the way you expect. Also, the pika FAQ has a note about how to use it with threads: Pika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads. This suggests you should move everything you're doing in __init__() into run(), so that the connection is created in the same thread you're actually consuming from the queue in.
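Putting both points together, the intended shape is roughly the following, shown here with a plain queue.Queue standing in for the broker so only the threading is in play. In the pika version each thread would build its own BlockingConnection and channel inside the worker function, per the FAQ note above:

```python
import queue
import threading

def worker(jobs, results):
    """Each worker owns its own consumption loop. In the pika version this is
    where the per-thread connection and channel would be created; here a
    queue.Queue stands in for the broker."""
    while True:
        body = jobs.get()
        if body is None:  # sentinel: shut down cleanly instead of daemonizing
            break
        results.append(body.upper())

jobs, results = queue.Queue(), []
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(3)]
for t in threads:
    t.start()  # note: not marked daemon, so the program waits for the workers
for message in ("first.", "second.", "third."):
    jobs.put(message)
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()
```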
RabbitMQ
25,489,292
12
I am trying to connect to RabbitMQ with EasyNetQ. RabbitMQ is on a remote VM. _rabbitBus = RabbitHutch.CreateBus( string.Format("host={0};virtualhost={1}", _hostSettings.Host, _hostSettings.VHost), x => x.Register<IEasyNetQLogger>(l => _logger)); _rabbitBus.Subscribe<Message>(_topic, ReceiveMessage, m => m.WithTopic(_topic)); I get a TimeoutException The operation requested on PersistentChannel timed out.. The remote VM is replying to pings, ports 5672 and 15672 are open (checked with nmap). RabbitMQ management can be accessed from my host. Also, if RabbitMQ is run on my local machine, it works fine. I've tried connecting to RabbitMQ installed on my computer from other PCs in the LAN, and it also works. I've come to the assumption that it's related to the fact that it's on a virtual machine, and maybe there's something wrong with the connection. But again, Rabbit's web management works fine. Also tested with the EasyNetQ Test application - works on localhost, but not on the remote host. Output as follows: DEBUG: Trying to connect ERROR: Failed to connect to Broker: '192.168.0.13', Port: 5672 VHost: '/'. ExceptionMessage: 'None of the specified endpoints were reachable' ERROR: Failed to connected to any Broker. Retrying in 5000 ms EasyNetQ v0.28.4.242
As Mike suggested, I had this and then checked the permissions. The "guest" user can only connect via localhost (see RabbitMQ Access Control). Try adding a user with permissions using the management interface and then connect as below var _bus = RabbitHutch.CreateBus(string.Format("host={0};virtualhost={1};username={2};password={3}", _hostSettings.Host, _hostSettings.VHost, _hostSettings.UserName, _hostSettings.Password));
RabbitMQ
22,882,318
12
I have a web application that uses the jquery autocomplete plugin, which essentially sends via ajax a request containing text that has been typed into a textbox to our web server, once the web server receives this request, it is then handed off to rabbitmq. I know that we do get benefits from using messaging, but it seems like using it for blocking rpc calls is a misuse and that something like WCF is far more appropriate in this instance, is this the case or is it considered acceptable architecture?
It's possible to perform synchronous RPC requests with RabbitMQ. Here it's explained very well, with its drawbacks included! So it's considered an acceptable architecture. Discouraged, but acceptable whenever a synchronous response is mandatory. A possible side effect is that adding RabbitMQ in the middle adds some latency to the solution. However, you have the possibility to gain in terms of reliability, flexibility, scalability, ...
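One practical consequence of that latency: a synchronous caller should never wait for a reply without a timeout. A minimal sketch of that guard, with illustrative names rather than a real client API:

```python
import threading

class RpcCall:
    """Minimal sketch of the blocking side of broker-based RPC: the caller
    waits on an event that the reply-queue consumer sets. A timeout is the
    usual safeguard so a lost reply cannot block the web request forever."""
    def __init__(self):
        self._done = threading.Event()
        self.response = None

    def complete(self, body):
        # Invoked by the reply-queue consumer (another thread) on arrival.
        self.response = body
        self._done.set()

    def wait(self, timeout):
        if not self._done.wait(timeout):
            raise TimeoutError("no RPC reply within %.1f s" % timeout)
        return self.response
```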
RabbitMQ
22,797,961
12
Do RabbitMQ queues have a AWS SQS-like - "message visibility timeout" ? From the AWS SQS documentation : "The visibility timeout clock starts ticking once Amazon SQS returns the message. During that time, the component processes and deletes the message. But what happens if the component fails before deleting the message? If your system doesn't call DeleteMessage for that message before the visibility timeout expires, the message again becomes visible to the ReceiveMessage calls placed by the components in your system and it will be received again" Thanks!
I believe you are looking for the RabbitMQ manual acknowledgment feature. This feature lets you get messages from the queue and, once you have processed them, ack them. If something happens in the middle of this process, the message will become available again in the queue after a certain amount of time. Also, in the meantime - from when you get the message until you ack it - the message is not available for other consumers to consume. I think this is the same behavior as the Message Visibility Timeout of SQS.
RabbitMQ
19,410,762
12
I installed rabbitmq using homebrew. I am trying to start rabbitmq server but I always get this error which I am unable to figure out why! I have erlang installed and there is no other application running on the same port. $ rabbitmq-server {error_logger,{{2013,2,11},{22,37,49}},"Can't set short node name!\nPlease check your configuration\n",[]} {error_logger,{{2013,2,11},{22,37,49}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}},{ancestors,[net_sup,kernel_sup,]},{messages,[]},{links,[]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,24},{reductions,249}],[]]} {error_logger,{{2013,2,11},{22,37,49}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch1593,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2013,2,11},{22,37,49}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2013,2,11},{22,37,49}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}) btw, erl -sname abc gives the same output Update: This is what I have in /etc/hosts 127.0.0.1 localhost 255.255.255.255 broadcasthost
Check that your computer name and your short host name or alias name in /etc/hosts match. Check your computer name: [wendy@nyc123]$ -- nyc123 is your computer name. Check your short hostname: [wendy@nyc123]$ hostname -s nyc456 This error can happen because your computer name and short host name don't match. To make them match, you can change the computer hostname or the alias name. Change the computer host name: [wendy@nyc123]$ hostname nyc456 Close your terminal and open it again: [wendy@nyc456]$ -- the computer name has changed. Or change the alias name in /etc/hosts: 127.0.0.1 nyc123.com nyc123 Save and check again: [wendy@nyc123]$ hostname -s nyc123 Restart your rabbitmq! [root@nyc123]$ rabbitmq-server start RabbitMQ 3.6.0. Copyright (C) 2007-2015 Pivotal Software, Inc. Licensed under the MPL. See http://www.rabbitmq.com/ Logs: /var/log/rabbitmq/[email protected] /var/log/rabbitmq/[email protected] Starting broker... completed with 6 plugins.
RabbitMQ
14,821,675
12
I have an existing RabbitMQ deployment that a few Java applications are using to send out log messages as JSON strings on various channels. I would like to use Celery to consume these messages and write them to various places (e.g. a DB, Hadoop, etc.). I can see that Celery is designed to be both the producer and consumer of RabbitMQ messages, since it tries to hide the mechanism by which those messages are delivered. Is there any way to get Celery to consume messages created by another app and run jobs when they arrive?
It's currently hard to add custom consumers to the celery workers, but this is changing in the development version (to become 3.1) where I've added support for Consumer boot-steps. There's no documentation yet as I've just finished implementing it, but here's an example: from celery import Celery from celery.bin import Option from celery.bootsteps import ConsumerStep from kombu import Consumer, Exchange, Queue class CustomConsumer(ConsumerStep): queue = Queue('custom', Exchange('custom'), routing_key='custom') def __init__(self, c, enable_custom_consumer=False, **kwargs): self.enable = self.enable_custom_consumer def get_consumers(self, connection): return [ Consumer(connection.channel(), queues=[self.queue], callbacks=[self.on_message]), ] def on_message(self, body, message): print('GOT MESSAGE: %r' % (body, )) message.ack() celery = Celery(broker='amqp://localhost//') celery.steps['consumer'].add(CustomConsumer) celery.user_options['worker'].add( Option('--enable-custom-consumer', action='store_true', help='Enable our custom consumer.'), ) Note that the API may change in the final version, one thing that I'm not yet sure about is how channels are handled after get_consumer(connection). Currently the channel of the consumer is closed when connection is lost, and at shutdown, but people may want to handle channels manually. In that case there's always the possibility of customizing ConsumerStep, or writing a new StartStopStep.
RabbitMQ
12,681,802
12
I have a long-running process that must run every five minutes, but more than one instance of the process should never run at the same time. The process should not normally run past five minutes, but I want to be sure that a second instance does not start up if it runs over. Per a previous recommendation, I'm using Django Celery to schedule this long-running task. I don't think a periodic task will work, because if I have a five-minute period, I don't want a second task to execute if another instance of the task is already running. My current experiment is as follows: at 8:55, an instance of the task starts to run. When the task is finishing up, it will trigger another instance of itself to run at the next five-minute mark. So if the first task finished at 8:57, the second task would run at 9:00. If the first task happens to run long and finish at 9:01, it would schedule the next instance to run at 9:05. I've been struggling with a variety of cryptic errors when doing anything more than the simple example below, and I haven't found any other examples of people scheduling tasks from a previous instance of itself. I'm wondering if there is maybe a better approach to doing what I am trying to do. I know there's a way to name one's tasks; perhaps there's a way to search for running or scheduled instances with the same name? Does anyone have any advice to offer regarding running a task every five minutes, but ensuring that only one task runs at a time?
Thank you, Joe In mymodule/tasks.py: import datetime from celery.decorators import task @task def test(run_periodically, frequency): run_long_process() now = datetime.datetime.now() # Run this task every x minutes, where x is an integer specified by frequency eta = ( now - datetime.timedelta( minutes = now.minute % frequency , seconds = now.second, microseconds = now.microsecond ) ) + datetime.timedelta(minutes=frequency) task = test.apply_async(args=[run_periodically, frequency,], eta=eta) From a ./manage.py shell: from mymodule import tasks result = tasks.test.apply_async(args=[True, 5])
You can use periodic tasks paired with a special lock which ensures the tasks are executed one at a time. Here is a sample implementation from Celery documentation: http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time Your described method with scheduling task from the previous execution can stop the execution of tasks if there will be failure in one of them.
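The cookbook recipe linked above acquires a cache key with an atomic add() before running and releases it afterwards, so overlapping instances back off. A rough sketch of that pattern, with a plain dict standing in for the shared cache backend (a real deployment would use memcached or similar, plus an expiry so a crashed worker cannot hold the lock forever):

```python
CACHE = {}  # stand-in for a shared cache such as memcached

def cache_add(key, value):
    """Atomic set-if-absent, like memcached's add()."""
    if key in CACHE:
        return False
    CACHE[key] = value
    return True

def run_exclusive(lock_id, work):
    """Run work() only if no other instance holds lock_id."""
    if not cache_add(lock_id, "locked"):
        return "skipped: another instance is running"
    try:
        return work()
    finally:
        CACHE.pop(lock_id, None)  # always release the lock

print(run_exclusive("my-task-lock", lambda: "ran"))  # → ran
```

In the periodic task itself, the "skipped" branch would simply return without doing the long-running work.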
RabbitMQ
8,902,986
12
"Durable" and "persistent mode" appear to relate to reboots rather than to there being no subscribers to receive the message. I'd like RabbitMQ to keep messages on the queue when there are no subscribers. When a subscriber does come online, the message should be received by that subscriber. Is this possible with RabbitMQ? Code sample: Server: namespace RabbitEg { class Program { private const string EXCHANGE_NAME = "helloworld"; static void Main(string[] args) { ConnectionFactory cnFactory = new RabbitMQ.Client.ConnectionFactory() { HostName = "localhost" }; using (IConnection cn = cnFactory.CreateConnection()) { using (IModel channel = cn.CreateModel()) { //channel.ExchangeDelete(EXCHANGE_NAME); channel.ExchangeDeclare(EXCHANGE_NAME, "direct", true); //channel.BasicReturn += new BasicReturnEventHandler(channel_BasicReturn); for (int i = 0; i < 100; i++) { byte[] payLoad = Encoding.ASCII.GetBytes("hello world _ " + i); IBasicProperties channelProps = channel.CreateBasicProperties(); channelProps.SetPersistent(true); channel.BasicPublish(EXCHANGE_NAME, "routekey_helloworld", false, false, channelProps, payLoad); Console.WriteLine("Sent Message " + i); System.Threading.Thread.Sleep(25); } Console.ReadLine(); } } } } } Client: namespace RabbitListener { class Program { private const string EXCHANGE_NAME = "helloworld"; static void Main(string[] args) { ConnectionFactory cnFactory = new ConnectionFactory() { HostName = "localhost" }; using (IConnection cn = cnFactory.CreateConnection()) { using (IModel channel = cn.CreateModel()) { channel.ExchangeDeclare(EXCHANGE_NAME, "direct", true); string queueName = channel.QueueDeclare("myQueue", true, false, false, null); channel.QueueBind(queueName, EXCHANGE_NAME, "routekey_helloworld"); Console.WriteLine("Waiting for messages"); QueueingBasicConsumer consumer = new QueueingBasicConsumer(channel); channel.BasicConsume(queueName, true, consumer); while (true) { BasicDeliverEventArgs e =
(BasicDeliverEventArgs)consumer.Queue.Dequeue(); Console.WriteLine(Encoding.ASCII.GetString(e.Body)); } } } } } }
See the AMQP Reference for an explanation of what durable and persistent mean. Basically, queues are either durable or non-durable. The former survive broker restarts, the latter do not. Messages are published as either transient or persistent. The idea is that persistent messages on durable queues should also survive broker restarts. So, to get what you want, you need to 1) declare the queue as durable and 2) publish the messages as persistent. In addition, you may also want to enable publisher confirms on the channel; that way, you'll know when the broker has assumed responsibility for the message.
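The rules in this answer can be summarised as: a non-durable queue disappears on restart, and even on a durable queue only persistent messages survive. A toy stdlib model of just those semantics (a simulation, not RabbitMQ client code):

```python
class FakeBroker:
    """Toy model of AMQP durability semantics, nothing more."""
    def __init__(self):
        self.queues = {}  # name -> (durable, [(body, persistent)])

    def queue_declare(self, name, durable):
        self.queues.setdefault(name, (durable, []))

    def publish(self, queue, body, persistent):
        self.queues[queue][1].append((body, persistent))

    def restart(self):
        survived = {}
        for name, (durable, msgs) in self.queues.items():
            if durable:  # non-durable queues vanish entirely
                survived[name] = (durable, [m for m in msgs if m[1]])
        self.queues = survived

broker = FakeBroker()
broker.queue_declare("myQueue", durable=True)
broker.publish("myQueue", "hello 1", persistent=True)
broker.publish("myQueue", "hello 2", persistent=False)
broker.restart()
print(broker.queues["myQueue"][1])  # → [('hello 1', True)]
```

Only the persistent message is left after the simulated restart, which is exactly the combination the answer asks for: durable queue plus persistent publish.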
RabbitMQ
7,952,000
12
I am new to RabbitMQ. I want to be able to handle reading messages without blocking when there are multiple queues (to read from). Any inputs on how I can do that? //Edit 1 public class Rabbit : IMessageBus { private List<string> publishQ = new List<string>(); private List<string> subscribeQ = new List<string>(); ConnectionFactory factory = null; IConnection connection = null; IModel channel = null; Subscription sub = null; public void writeMessage( Measurement m1 ) { byte[] body = Measurement.AltSerialize( m1 ); int msgCount = 1; Console.WriteLine("Sending message to queue {1} via the amq.direct exchange.", m1.id); string finalQueue = publishToQueue( m1.id ); while (msgCount --> 0) { channel.BasicPublish("amq.direct", finalQueue, null, body); } Console.WriteLine("Done. Wrote the message to queue {0}.\n", m1.id); } public string publishToQueue(string firstQueueName) { Console.WriteLine("Creating a queue and binding it to amq.direct"); string queueName = channel.QueueDeclare(firstQueueName, true, false, false, null); channel.QueueBind(queueName, "amq.direct", queueName, null); Console.WriteLine("Done. 
Created queue {0} and bound it to amq.direct.\n", queueName); return queueName; } public Measurement readMessage() { Console.WriteLine("Receiving message..."); Measurement m = new Measurement(); int i = 0; foreach (BasicDeliverEventArgs ev in sub) { m = Measurement.AltDeSerialize(ev.Body); //m.id = //get the id here, from sub if (++i == 1) break; sub.Ack(); } Console.WriteLine("Done.\n"); return m; } public void subscribeToQueue(string queueName ) { sub = new Subscription(channel, queueName); } public static string MsgSysName; public string MsgSys { get { return MsgSysName; } set { MsgSysName = value; } } public Rabbit(string _msgSys) //Constructor { factory = new ConnectionFactory(); factory.HostName = "localhost"; connection = factory.CreateConnection(); channel = connection.CreateModel(); //consumer = new QueueingBasicConsumer(channel); System.Console.WriteLine("\nMsgSys: RabbitMQ"); MsgSys = _msgSys; } ~Rabbit() { //observer?? connection.Dispose(); //channel.Dispose(); System.Console.WriteLine("\nDestroying RABBIT"); } } //Edit 2 private List<Subscription> subscriptions = new List<Subscription>(); Subscription sub = null; public Measurement readMessage() { Measurement m = new Measurement(); foreach(Subscription element in subscriptions) { foreach (BasicDeliverEventArgs ev in element) { //ev = element.Next(); if( ev != null) { m = Measurement.AltDeSerialize( ev.Body ); return m; } m = null; } } System.Console.WriteLine("No message in the queue(s) at this time."); return m; } public void subscribeToQueue(string queueName) { sub = new Subscription(channel, queueName); subscriptions.Add(sub); } //Edit 3 //MessageHandler.cs public class MessageHandler { // Implementation of methods for Rabbit class go here private List<string> publishQ = new List<string>(); private List<string> subscribeQ = new List<string>(); ConnectionFactory factory = null; IConnection connection = null; IModel channel = null; QueueingBasicConsumer consumer = null; private List<Subscription> 
subscriptions = new List<Subscription>(); Subscription sub = null; public void writeMessage ( Measurement m1 ) { byte[] body = Measurement.AltSerialize( m1 ); //declare a queue if it doesn't exist publishToQueue(m1.id); channel.BasicPublish("amq.direct", m1.id, null, body); Console.WriteLine("\n [x] Sent to queue {0}.", m1.id); } public void publishToQueue(string queueName) { string finalQueueName = channel.QueueDeclare(queueName, true, false, false, null); channel.QueueBind(finalQueueName, "amq.direct", "", null); } public Measurement readMessage() { Measurement m = new Measurement(); foreach(Subscription element in subscriptions) { if( element.QueueName == null) { m = null; } else { BasicDeliverEventArgs ev = element.Next(); if( ev != null) { m = Measurement.AltDeSerialize( ev.Body ); m.id = element.QueueName; element.Ack(); return m; } m = null; } element.Ack(); } System.Console.WriteLine("No message in the queue(s) at this time."); return m; } public void subscribeToQueue(string queueName) { sub = new Subscription(channel, queueName); subscriptions.Add(sub); } public static string MsgSysName; public string MsgSys { get { return MsgSysName; } set { MsgSysName = value; } } public MessageHandler(string _msgSys) //Constructor { factory = new ConnectionFactory(); factory.HostName = "localhost"; connection = factory.CreateConnection(); channel = connection.CreateModel(); consumer = new QueueingBasicConsumer(channel); System.Console.WriteLine("\nMsgSys: RabbitMQ"); MsgSys = _msgSys; } public void disposeAll() { connection.Dispose(); channel.Dispose(); foreach(Subscription element in subscriptions) { element.Close(); } System.Console.WriteLine("\nDestroying RABBIT"); } } //App1.cs using System; using System.IO; using UtilityMeasurement; using UtilityMessageBus; public class MainClass { public static void Main() { MessageHandler obj1 = MessageHandler("Rabbit"); System.Console.WriteLine("\nA {0} object is now created.", MsgSysName); //Create new Measurement messages 
Measurement m1 = new Measurement("q1", 2345, 23.456); Measurement m2 = new Measurement("q2", 222, 33.33); System.Console.WriteLine("Test message 1:\n ID: {0}", m1.id); System.Console.WriteLine(" Time: {0}", m1.time); System.Console.WriteLine(" Value: {0}", m1.value); System.Console.WriteLine("Test message 2:\n ID: {0}", m2.id); System.Console.WriteLine(" Time: {0}", m2.time); System.Console.WriteLine(" Value: {0}", m2.value); // Ask queue name and store it System.Console.WriteLine("\nName of queue to publish to: "); string queueName = (System.Console.ReadLine()).ToString(); obj1.publishToQueue( queueName ); // Write message to the queue obj1.writeMessage( m1 ); System.Console.WriteLine("\nName of queue to publish to: "); string queueName2 = (System.Console.ReadLine()).ToString(); obj1.publishToQueue( queueName2 ); obj1.writeMessage( m2 ); obj1.disposeAll(); } } //App2.cs using System; using System.IO; using UtilityMeasurement; using UtilityMessageBus; public class MainClass { public static void Main() { //Asks for the message system System.Console.WriteLine("\nEnter name of messageing system: "); System.Console.WriteLine("Usage: [Rabbit] [Zmq]"); string MsgSysName = (System.Console.ReadLine()).ToString(); //Declare an IMessageBus instance: //Here, an object of the corresponding Message System // (ex. 
Rabbit, Zmq, etc) is instantiated IMessageBus obj1 = MessageBusFactory.GetMessageBus(MsgSysName); System.Console.WriteLine("\nA {0} object is now created.", MsgSysName); //Create a new Measurement object m Measurement m = new Measurement(); System.Console.WriteLine("Queue name to subscribe to: "); string QueueName1 = (System.Console.ReadLine()).ToString(); obj1.subscribeToQueue( QueueName1 ); //Read message into m m = obj1.readMessage(); if (m != null ) { System.Console.WriteLine("\nMessage received from queue {0}:\n ID: {1}", m.id, m.id); System.Console.WriteLine(" Time: {0}", m.time); System.Console.WriteLine(" Value: {0}", m.value); } System.Console.WriteLine("Another queue name to subscribe to: "); string QueueName2 = (System.Console.ReadLine()).ToString(); obj1.subscribeToQueue( QueueName2 ); m = obj1.readMessage(); if (m != null ) { System.Console.WriteLine("\nMessage received from queue {0}:\n ID: {1}", m.id, m.id); System.Console.WriteLine(" Time: {0}", m.time); System.Console.WriteLine(" Value: {0}", m.value); } obj1.disposeAll(); } }
two sources of info: http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss You should really try to understand the examples first. %Program Files%\RabbitMQ\DotNetClient\examples\src (basic examples) get full working examples from their Mercurial repository (c# projects). Useful operations to understand: Declare / Assert / Listen / Subscribe / Publish Re: your question -- there's no reason why you can't have multiple listeners. Or you could subscribe to n routing paths with one listener on an "exchange". ** re: non-blocking ** A typical listener consumes messages one at a time. You can pull them off the queue, or they will automatically be placed close to the consumer in a 'windowed' fashion (defined through quality of service qos parameters). The beauty of the approach is that a lot of hard work is done for you (re: reliability, guaranteed delivery, etc.). A key feature of RabbitMQ is that if there is an error in processing, then the message is re-added back into the queue (a fault tolerance feature). Need to know more about your situation. Often if you post to the list I mentioned above, you can get hold of someone on staff at RabbitMQ. They're very helpful. Hope that helps a little. It's a lot to get your head around at first, but it is worth persisting with. Q&A see: http://www.rabbitmq.com/faq.html Q. Can you subscribe to multiple queues using new Subscription(channel, queueName) ? Yes. You either use a binding key e.g. abc.*.hij, or abc.#.hij, or you attach multiple bindings. The former assumes that you have designed your routing keys around some kind of principle that makes sense for you (see routing keys in the FAQ). For the latter, you need to bind to more than one queue. Implementing n-bindings manually.
see: http://hg.rabbitmq.com/rabbitmq-dotnet-client/file/default/projects/client/RabbitMQ.Client/src/client/messagepatterns/Subscription.cs there's not much code behind this pattern, so you could roll your own subscription pattern if wildcards are not enough. you could inherit from this class and add another method for additional bindings... probably this will work or something close to this (untested). The AQMP spec says that multiple manual binding are possible: http://www.rabbitmq.com/amqp-0-9-1-reference.html#queue.bind Q. And if so, how can I go through all the subscribed queues and return a message (null when no messages)? With a subscriber you are notified when a message is available. Otherwise what you are describing is a pull interface where you pull the message down on request. If no messages available, you'll get a null as you'd like. btw: the Notify method is probably more convenient. Q. Oh, and mind you that that I have all this operations in different methods. I will edit my post to reflect the code Live code: this version must use wild cards to subscribe to more than one routing key n manual routing keys using subscription is left as an exercise for the reader. ;-) I think you were leaning towards a pull interface anyway. btw: pull interfaces are less efficient than notify ones. using (Subscription sub = new Subscription(ch, QueueNme)) { foreach (BasicDeliverEventArgs ev in sub) { Process(ev.Body); ... Note: the foreach uses IEnumerable, and IEnumerable wraps the event that a new message has arrived through the "yield" statement. Effectively it is an infinite loop. --- UPDATE AMQP was designed with the idea of keeping the number of TCP connections as low as the number of applications, so that means you can have many channels per connection. the code in this question (edit 3) tries to use two subscribers with one channel, whereas it should (I believe), be one subscriber per channel per thread to avoid locking issues. 
Suggestion: use a routing key "wildcard". It is possible to subscribe to more than one distinct queue name with the java client, but the .net client does not to my knowledge have this implemented in the Subscriber helper class. If you really do need two distinct queue names on the same subscription thread, then the following pull sequence is suggested for .net: using (IModel ch = conn.CreateModel()) { // btw: no reason to close the channel afterwards IMO conn.AutoClose = true; // no reason to close the connection either. Here for completeness. ch.QueueDeclare(queueName); BasicGetResult result = ch.BasicGet(queueName, false); if (result == null) { Console.WriteLine("No message available."); } else { ch.BasicAck(result.DeliveryTag, false); Console.WriteLine("Message:"); } return 0; } -- UPDATE 2: from RabbitMQ list: "assume that element.Next() is blocking on one of the subscriptions. You could retrieve deliveries from each subscription with a timeout to read past it. Alternatively you could set up a single queue to receive all measurements and retrieve messages from it with a single subscription." (Emile) What that means is that when the first queue is empty, .Next() blocks waiting for the next message to appear. i.e. the subscriber has a wait-for-next-message built in. -- UPDATE 3: under .net, use the QueueingBasicConsumer for consumption from multiple queues. Actually here's a thread about it to get a feel for usage: Wait for a single RabbitMQ message with a timeout -- UPDATE 4: some more info on the QueueingBasicConsumer There's example code here. http://www.rabbitmq.com/releases/rabbitmq-dotnet-client/v1.4.0/rabbitmq-dotnet-client-1.4.0-net-2.0-htmldoc/type-RabbitMQ.Client.QueueingBasicConsumer.html example copied into the answer with a few modifications (see //<-----).
IModel channel = ...; QueueingBasicConsumer consumer = new QueueingBasicConsumer(channel); channel.BasicConsume(queueName, false, null, consumer); //<----- channel.BasicConsume(queueName2, false, null, consumer); //<----- // etc. channel.BasicConsume(queueNameN, false, null, consumer); //<----- // At this point, messages will be being asynchronously delivered, // and will be queueing up in consumer.Queue. while (true) { try { BasicDeliverEventArgs e = (BasicDeliverEventArgs) consumer.Queue.Dequeue(); // ... handle the delivery ... channel.BasicAck(e.DeliveryTag, false); } catch (EndOfStreamException ex) { // The consumer was cancelled, the model closed, or the // connection went away. break; } } -- UPDATE 5 : a simple get that will act on any queue (a slower, but sometimes more convenient method). ch.QueueDeclare(queueName); BasicGetResult result = ch.BasicGet(queueName, false); if (result == null) { Console.WriteLine("No message available."); } else { ch.BasicAck(result.DeliveryTag, false); Console.WriteLine("Message:"); // deserialize body and display extra info here. }
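Updates 3 and 4 above rest on one idea: every BasicConsume call feeds the same in-memory queue, and the application blocks on a single Dequeue. The same shape in stdlib Python, with threads standing in for the client library delivering from two AMQP queues (queue names are illustrative):

```python
import queue
import threading

shared = queue.Queue()  # plays the role of QueueingBasicConsumer.Queue

def fake_delivery(queue_name, body):
    # the client library would do this for each BasicConsume'd queue
    shared.put((queue_name, body))

# two "AMQP queues" delivering into the one consumer
threading.Thread(target=fake_delivery, args=("q1", b"m1")).start()
threading.Thread(target=fake_delivery, args=("q2", b"m2")).start()

# a single blocking dequeue serves both sources, no polling loop needed
received = sorted(shared.get(timeout=1) for _ in range(2))
print(received)  # → [('q1', b'm1'), ('q2', b'm2')]
```

This is why the one-consumer-many-BasicConsume pattern avoids the blocking problem in the question: the consumer never waits on one empty queue while another has messages.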
RabbitMQ
6,696,694
12
I've built this sample: Getting Started With RabbitMQ in .net, but made 2 programs: one publisher, one subscriber. I'm using BasicPublish to publish and BasicAck to listen, as in the example. If I run one publisher and several subscribers, then on every "send message" from the publisher only one subscriber gets it. So there is some order (as subscribers were started) in which the publisher sends messages to subscribers, but I want to send one message to all subscribers. What is wrong with that sample? Maybe you can provide a working sample of publisher/subscribers message exchange via RabbitMQ?
The example you link to uses simple queueing without an exchange, which ensures that only a single consumer will handle the message. To support pub/sub in RabbitMQ, you need to first create an Exchange, and then have each subscriber bind a Queue on that Exchange. The producer then sends messages to the Exchange, which will publish the message to each Queue that has been bound to it (at least with the simple Fanout exchange type. Routing can be achieved with Direct and Topic exchanges.) For a Java sample (which could be converted to C# pretty easily) please see here. Edit: Updated .Net version can be found here
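The fanout behaviour described here, where every queue bound to the exchange receives its own copy of each message, can be sketched in a few lines (a model of the semantics only, not RabbitMQ client code):

```python
class FanoutExchange:
    """Toy fanout exchange: copies each message to every bound queue."""
    def __init__(self):
        self.bound = []

    def bind(self, q):
        self.bound.append(q)

    def publish(self, message):
        for q in self.bound:  # every subscriber's queue gets its own copy
            q.append(message)

sub1, sub2 = [], []
ex = FanoutExchange()
ex.bind(sub1)
ex.bind(sub2)
ex.publish("hello")
print(sub1, sub2)  # → ['hello'] ['hello']
```

Contrast this with the linked sample, where all consumers pull from one shared queue, so each message goes to exactly one of them.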
RabbitMQ
5,951,477
12
I am using RabbitMQ with Django through Celery. I am using the most basic setup: # RabbitMQ connection settings BROKER_HOST = 'localhost' BROKER_PORT = '5672' BROKER_USER = 'guest' BROKER_PASSWORD = 'guest' BROKER_VHOST = '/' I imported a Celery task and queued it to run one year later. From the iPython shell: In [1]: from apps.test_app.tasks import add In [2]: dt=datetime.datetime(2012, 2, 18, 10, 00) In [3]: add.apply_async((10, 6), eta=dt) DEBUG:amqplib:Start from server, version: 8.0, properties: {u'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': 'RabbitMQ', u'version': '2.2.0', u'copyright': 'Copyright (C) 2007-2010 LShift Ltd., Cohesive Financial Technologies LLC., and Rabbit Technologies Ltd.', u'platform': 'Erlang/OTP'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US'] DEBUG:amqplib:Open OK! known_hosts [] DEBUG:amqplib:using channel_id: 1 DEBUG:amqplib:Channel open DEBUG:amqplib:Closed channel #1 Out[3]: <AsyncResult: cfc507a1-175f-438e-acea-8c989a120ab3> RabbitMQ received this message in the celery queue: $ rabbitmqctl list_queues name messages durable Listing queues ... KTMacBook.local.celeryd.pidbox 0 false celery 1 true celeryctl_KTMacBook.local 0 true ...done. I then killed RabbitMQ by hitting control-C followed by 'a' to abort. When I start the server again and check it with rabbitmqctl, it says that there are no messages in the celery queue: $ rabbitmqctl list_queues name messages durable Listing queues ... celery 0 true celeryctl_KTMacBook.local 0 true ...done. The celery queue was durable. Why were the messages not persisted? What do I need to do to make the messages persistent?
Making a queue durable is not the same as making the messages on it persistent. Durable queues mean they come up again automatically when the server has restarted - which has obviously happened in your case. But this doesn't affect the messages themselves. To make messages persistent, you have to also mark the message's delivery_mode property to 2. See the classic write-up Rabbits and Warrens for a full explanation. Edit: Full link is broken, but as of Dec 2013 you could still find the blog post from the main URL: http://blogs.digitar.com/jjww/
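To make the two independent switches explicit, here is a tiny sketch that just builds the relevant declare/publish arguments as plain data; with a real client such as pika these would map onto channel.queue_declare(queue='celery', durable=True) and pika.BasicProperties(delivery_mode=2) passed to basic_publish (the queue name is taken from the question):

```python
# The two separate settings the answer describes, as plain data.
def persistence_args(queue_name):
    queue_declare = {"queue": queue_name, "durable": True}  # queue survives restart
    properties = {"delivery_mode": 2}  # 2 = persistent message; 1 = transient
    return queue_declare, properties

declare, props = persistence_args("celery")
print(declare["durable"], props["delivery_mode"])  # → True 2
```

The question's setup got the first half right (the celery queue was durable) but not the second, which is why the messages vanished on restart.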
RabbitMQ
5,046,944
12
I'd like to send a message to a RabbitMQ server and then wait for a reply message (on a "reply-to" queue). Of course, I don't want to wait forever in case the application processing these messages is down - there needs to be a timeout. It sounds like a very basic task, yet I can't find a way to do this. I've now run into this problem with both py-amqplib and the RabbitMQ .NET client. The best solution I've got so far is to poll using basic_get with sleep in-between, but this is pretty ugly: def _wait_for_message_with_timeout(channel, queue_name, timeout): slept = 0 sleep_interval = 0.1 while slept < timeout: reply = channel.basic_get(queue_name) if reply is not None: return reply time.sleep(sleep_interval) slept += sleep_interval raise Exception('Timeout (%g seconds) expired while waiting for an MQ response.' % timeout) Surely there is some better way?
Here's what I ended up doing in the .NET client: protected byte[] WaitForMessageWithTimeout(string queueName, int timeoutMs) { var consumer = new QueueingBasicConsumer(Channel); var tag = Channel.BasicConsume(queueName, true, null, consumer); try { object result; if (!consumer.Queue.Dequeue(timeoutMs, out result)) throw new ApplicationException(string.Format("Timeout ({0} seconds) expired while waiting for an MQ response.", timeoutMs / 1000.0)); return ((BasicDeliverEventArgs)result).Body; } finally { Channel.BasicCancel(tag); } } Unfortunately, I cannot do the same with py-amqplib, because its basic_consume method does not call the callback unless you call channel.wait() and channel.wait() doesn't support timeouts! This silly limitation (which I keep running into) means that if you never receive another message your thread is frozen forever.
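The Dequeue(timeoutMs, out result) call above is just a blocking pop with a timeout, and Python's stdlib queue module offers the same primitive. A sketch of what the py-amqplib version would block on if the consumer callback pushed replies into such a queue (the callback wiring itself is assumed, since the library lacks a wait timeout):

```python
import queue

replies = queue.Queue()  # a consumer callback would .put() incoming replies here

def wait_for_message_with_timeout(q, timeout):
    """Block until a message arrives or the timeout expires; no sleep loop."""
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError(
            "Timeout (%g seconds) expired while waiting for an MQ response." % timeout)

replies.put(b"pong")
print(wait_for_message_with_timeout(replies, 0.5))  # → b'pong'
```

Unlike the polling loop in the question, this wakes up the instant a reply arrives and pays no fixed sleep_interval latency.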
RabbitMQ
2,799,731
12
I have RabbitMQ setup with two queues called: low and high. I want my celery workers to consume from the high priority queue before consuming tasks for the low priority queue. I get this following error when trying to push a message into RabbitMQ >>> import tasks >>> tasks.high.apply_async() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/task.py", line 570, in apply_async **options File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/base.py", line 756, in send_task amqp.send_task_message(P, name, message, **options) File "/home/vagrant/.local/lib/python3.6/site-packages/celery/app/amqp.py", line 552, in send_task_message **properties File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 181, in publish exchange_name, declare, File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/connection.py", line 510, in _ensured return fun(*args, **kwargs) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 194, in _publish [maybe_declare(entity) for entity in declare] File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 194, in <listcomp> [maybe_declare(entity) for entity in declare] File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/messaging.py", line 102, in maybe_declare return maybe_declare(entity, self.channel, retry, **retry_policy) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/common.py", line 121, in maybe_declare return _maybe_declare(entity, channel) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/common.py", line 145, in _maybe_declare entity.declare(channel=channel) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 609, in declare self._create_queue(nowait=nowait, channel=channel) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 618, in _create_queue 
self.queue_declare(nowait=nowait, passive=False, channel=channel) File "/home/vagrant/.local/lib/python3.6/site-packages/kombu/entity.py", line 653, in queue_declare nowait=nowait, File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/channel.py", line 1154, in queue_declare spec.Queue.DeclareOk, returns_tuple=True, File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/abstract_channel.py", line 80, in wait self.connection.drain_events(timeout=timeout) File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 500, in drain_events while not self.blocking_read(timeout): File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 506, in blocking_read return self.on_inbound_frame(frame) File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame callback(channel, method_sig, buf, None) File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/connection.py", line 510, in on_inbound_method method_sig, payload, content, File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/abstract_channel.py", line 126, in dispatch_method listener(*args) File "/home/vagrant/.local/lib/python3.6/site-packages/amqp/channel.py", line 282, in _on_close reply_code, reply_text, (class_id, method_id), ChannelError, amqp.exceptions.PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'high' in vhost '/': received none but current is the value '10' of type 'signedint' Here is my celery configuration import ssl broker_url="amqps://" result_backend="amqp://" include=["tasks"] task_acks_late=True task_default_rate_limit="150/m" task_time_limit=300 worker_prefetch_multiplier=1 worker_max_tasks_per_child=2 timezone="UTC" broker_use_ssl = {'keyfile': '/usr/local/share/private/my_key.key', 'certfile': '/usr/local/share/ca-certificates/my_cert.crt', 'ca_certs': '/usr/local/share/ca-certificates/rootca.crt', 'cert_reqs': ssl.CERT_REQUIRED, 
'ssl_version': ssl.PROTOCOL_TLSv1_2} from kombu import Exchange, Queue task_default_priority=5 task_queue_max_priority = 10 task_queues = [Queue('high', Exchange('high'), routing_key='high', queue_arguments={'x-max-priority': 10}),] task_routes = {'tasks.high': {'queue': 'high'}} I have a tasks.py script with the following tasks defined from __future__ import absolute_import, unicode_literals from celery_app import celery_app @celery_app.task def low(queue='low'): print("Low Priority") @celery_app.task(queue='high') def high(): print("HIGH PRIORITY") And my celery_app.py script: from __future__ import absolute_import, unicode_literals from celery import Celery from celery_once import QueueOnce import celeryconfig celery_app = Celery("test") if __name__ == '__main__': celery_app.start() I am starting the celery workers with this command celery -A celery_app worker -l info --config celeryconfig --concurrency=16 -n "%h:celery" -O fair -Q high,low I'm using: RabbitMQ: 3.7.17 Celery: 4.3.0 Python: 3.6.7 OS: Ubuntu 18.04.3 LTS bionic
Recently I got stuck with the same issue and found this question. I decided to post a possible solution for anyone else who finds it in the future. The current error message means that the queue had been declared with a priority of 10, but now its declaration contains a priority of none. For example, here is a similar issue with x-expires with a good explanation: Celery insists that every client know in advance how a queue was created. In order to fix such an issue you may vary the following things: change task_queue_max_priority (which defines the default value of a queue's x-max-priority) or get rid of it. declare queue low with queue_arguments={'x-max-priority': 10} as you did for queue high. For me the problem was solved when all queue declarations matched the previously created queues.
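For the second fix, both queues can be declared with identical arguments so that every client agrees on the queue signature. A sketch against the question's own celeryconfig (this assumes you want priority support on both queues; if not, delete the existing queues in the broker and recreate them without the argument):

```python
from kombu import Exchange, Queue

# Declare *both* queues with the same x-max-priority, so no client ever
# sends a queue.declare that disagrees with the broker's existing queue.
task_queues = [
    Queue('high', Exchange('high'), routing_key='high',
          queue_arguments={'x-max-priority': 10}),
    Queue('low', Exchange('low'), routing_key='low',
          queue_arguments={'x-max-priority': 10}),
]
task_routes = {'tasks.high': {'queue': 'high'}}
```

With this, the global task_queue_max_priority setting from the question becomes redundant and can be removed, avoiding the mismatch entirely.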
RabbitMQ
63,607,314
11
I created a microservice application whose microservices use MassTransit and RabbitMQ for communication. Each microservice is developed using clean architecture, so we have MediatR inside each microservice. Is it possible to use MassTransit for inside communication as well? Then I could use the same signature for all services, and when I want to expose a service for inter-microservice use, it would be doable with ease. So MediatR is used for intra-communication and RabbitMQ for inter-communication, and the whole universe is on the MassTransit system. [Update] My question is how we can configure consumers so some can be used for inside communication (via MediatR) and some can be used for external communication (via RabbitMQ), and easily change them from inside to outside. [Update 2] For example, here is my MassTransit registration: services.AddMassTransit(x => { x.AddConsumers(Assembly.GetExecutingAssembly()); x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg => { cfg.Host(new Uri(config.RabbitMQ.Address), h => { h.Username(config.RabbitMQ.Username); h.Password(config.RabbitMQ.Password); }); cfg.ReceiveEndpoint("my-queue", ep => { ep.ConfigureConsumers(provider); }); })); x.AddMediator((provider, cfg) => { cfg.ConfigureConsumers(provider); }); }); How can I distinguish internal communication from external communication? In other words, how can I register some consumers with MediatR and some with RabbitMQ?
They can be used together, and MassTransit has its own Mediator implementation as well so you can write your handlers once and use them either via the mediator or via a durable transport such as RabbitMQ. There are videos available that take you through the capabilities, starting with mediator and moving to RabbitMQ.
RabbitMQ
62,084,208
11
I currently have a small server running in a docker container, the server uses RabbitMQ which is being run by docker-compose using the DockerHub image. It is running nicely, but I'm worried that it may not be properly configured for production (production being a simple server, without clustering or anything fancy). In particular, I'm worried about the disk space limit described at RabbitMQ production checklist. I'm not sure how to configure these things through docker-compose, as the env variables defined by the image seem to be quite limited. My docker-compose file: version: '3.4' services: rabbitmq: image: rabbitmq:3-management-alpine ports: - "5672:5672" - "15672:15672" volumes: - rabbitmq:/var/lib/rabbitmq restart: always environment: - RABBITMQ_DEFAULT_USER=user - RABBITMQ_DEFAULT_PASS=secretpassword my-server: # server config here volumes: rabbitmq: networks: server-network: driver: bridge
disk_free_limit is set in /etc/rabbitmq/rabbitmq.conf, and there seems to be no environment variable for it. So you need to override rabbitmq.conf with your own via a Docker bind-mount volume. For your case, if you enter the rabbitmq container, you can see: shubuntu1@shubuntu1:~$ docker exec some-rabbit cat /etc/rabbitmq/rabbitmq.conf loopback_users.guest = false listeners.tcp.default = 5672 So you just need to add disk_free_limit.absolute = 1GB to a local rabbitmq.conf and mount it into the container to override the default configuration. Full example: rabbitmq.conf: loopback_users.guest = false listeners.tcp.default = 5672 disk_free_limit.absolute = 1GB docker-compose.yaml: version: '3.4' services: rabbitmq: image: rabbitmq:3-management-alpine ports: - "5672:5672" - "15672:15672" volumes: - rabbitmq:/var/lib/rabbitmq - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf volumes: rabbitmq: networks: server-network: driver: bridge Check that it takes effect: $ docker-compose up -d $ docker-compose logs rabbitmq | grep "Disk free limit" rabbitmq_1 | 2019-07-30 04:51:40.609 [info] <0.241.0> Disk free limit set to 1000MB You can see the disk free limit is now set to 1GB.
RabbitMQ
57,262,128
11
I have some images in my queue and I pass each image to my Flask server, where processing on the images is done and a response is received in my RabbitMQ server. After receiving the response, I get this error: "pika.exceptions.StreamLostError: Stream connection lost(104,'Connection reset by peer')". This happens when the RabbitMQ channel starts consuming again. I don't understand why this happens. I would also like to restart the server automatically if this error persists. Is there any way to do that?
Your consume process is probably taking too much time to complete and send the Ack/Nack to the server. Therefore, the server does not receive a heartbeat from your client and stops serving it. Then, on the client side, you receive: pika.exceptions.StreamLostError: Stream connection lost(104,'Connection reset by peer') You should check the server logs as well. They probably contain something like: missed heartbeats from client, timeout: 60s See this issue for more information.
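For the second part of the question (restarting automatically), a common pattern is to wrap the consume call in a retry loop with backoff. A rough sketch, assuming pika's blocking consumer; the backoff schedule, restart limit, and heartbeat value are illustrative choices:

```python
import time

def backoff_delays(base=1, cap=60):
    """Yield exponential backoff delays: 1, 2, 4, ... capped at `cap` seconds."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

def run_with_retries(consume, max_restarts=5, sleep=time.sleep):
    """Re-run `consume()` after each failure, up to `max_restarts` restarts.
    Returns the number of restarts needed before a clean run.
    In real code, catch pika.exceptions.StreamLostError instead of Exception,
    and consider a longer heartbeat, e.g.
    pika.ConnectionParameters(host, heartbeat=600, blocked_connection_timeout=300).
    """
    restarts = 0
    for delay in backoff_delays():
        try:
            consume()  # e.g. channel.start_consuming()
            return restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise
            sleep(delay)
```

Another option, when a single message takes minutes to process, is to do the heavy work in a separate thread and acknowledge from the connection's thread, so heartbeats keep flowing while processing runs.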
RabbitMQ
56,859,006
11
I've got a project where we are going to have hundreds (potentially thousands) of queues in Rabbit, and each of these queues will need to be consumed by a pool of consumers. In Rabbit (using spring-amqp), you have the @RabbitListener annotation, which allows me to statically assign the queues this particular consumer(s) will handle. My question is: with Rabbit and Spring, is there a clean way for me to grab a section of queues (let's say queues that start with a-c) and also listen for any queues that are created while the consumer is running? Example (at start): ant-queue apple-queue cat-queue While consumer is running: Add bat-queue Here is the (very simple) code I currently have: @Component public class MessageConsumer { public MessageConsumer() { // ideally grab a section of queues here, initialize a parameter and give to the rabbitlistener annotation } @RabbitListener(queues= {"ant-queue", "apple-queue", "cat-queue"}) public void processQueues(String messageAsJson) { < how do I update the queues declared in rabbit listener above ? > } } Edit: I should add - I've gone through the Spring AMQP documentation I found online and I haven't found anything beyond statically (either hardcoded or via properties) declaring the queues.
Inject (@Autowired or otherwise) the RabbitListenerEndpointRegistry. Get a reference to the listener container (use the id attribute on the annotation to give it a known id) (registry.getListenerContainer(id)). Cast the container to an AbstractMessageListenerContainer and call addQueues() or addQueueNames(). Note that it is more efficient to use a DirectMessageListenerContainer when adding queues dynamically; with a SimpleMessageListenerContainer the consumer(s) are stopped and restarted. With the direct container, each queue gets its own consumer(s). See Choosing a Container.
RabbitMQ
54,094,994
11
I have a few microservices which are exposed through an API gateway. The gateway handles authentication and routing into the system. The services behind the gateway are mostly simple CRUD services. Each service exposes its own API and they communicate synchronously via HTTP. All of these services, including the API gateway, are "default" NestJS applications. Let's stick with the Cats example. Whenever the Cat service updates or creates a new Cat, I want a CatCreatedEvent or CatUpdatedEvent to be emitted. The event should be pushed into some message broker like RabbitMQ, and another service should listen for this event and process it asynchronously. I am not sure how to achieve this, in terms of how to "inject" RabbitMQ the right way, and I am wondering if this approach makes sense in general. I have seen the CQRS module for NestJS, but I think CQRS is a bit too much for this domain, especially because there is no benefit here in splitting read and write models. Maybe I am totally on the wrong track, so I hope you can give me some advice.
RabbitMQ is supported in nestjs as a microservice. If you want your application to support both http requests and a message broker, you can create a hybrid application. // Create your regular nest application. const app = await NestFactory.create(ApplicationModule); // Then combine it with a RabbitMQ microservice const microservice = app.connectMicroservice({ transport: Transport.RMQ, options: { urls: [`amqp://localhost:5672`], queue: 'my_queue', queueOptions: { durable: false }, }, }); await app.startAllMicroservices(); await app.listen(3001);
RabbitMQ
53,995,130
11
I have the following problem: I need to test the connection to a RabbitMQ server, which uses the AMQP protocol, and I need to do it from CMD or something similar so I can execute the command from a script. I don't know if it's possible; the only thing I found on the internet was testing the connection through HTTP, and that doesn't work for me. In short, I need a cmd command that tests the connection to a RabbitMQ server via AMQP. I hope someone understands my problem; maybe I didn't describe it well. Thanks in advance.
I found another way to verify basic tcp connectivity using just netcat/telnet. nc hostname 5672 OR telnet hostname 5672 Type HELO and hit enter 4 times. You should see a response of AMQP. example: > nc rabbitserver 5672 HELO AMQP The other tools mentioned here would verify deeper compatibility between the client and server as well as validate other protocols. If you simply need to make sure that port 5672 is open in the firewall between the client and server then this basic test should be enough.
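The same probe can be scripted without netcat, e.g. with Python's standard library. A sketch (host and port are assumptions for your environment); the banner check relies on the broker replying with an AMQP protocol header when it receives an invalid one, just as in the netcat session above:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def amqp_banner(host, port=5672, timeout=3.0):
    """Send garbage; RabbitMQ answers b'AMQP' plus version bytes, then closes."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(b"HELO\r\n\r\n\r\n\r\n")
        return s.recv(8)

# Hypothetical usage against a broker:
#   port_open("rabbitserver", 5672)    # True if reachable
#   amqp_banner("rabbitserver")[:4]    # b"AMQP" if the broker answered
```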
RabbitMQ
52,494,492
11
I've had a RabbitMQ server running for months. This morning I was unable to connect to it, my applications was timing out and the Management client was unresponsive. Rebooted the machine. Applications are still timing out. I'm able to login to the Management client but I see this message: Virtual host / experienced an error on node rabbit@MQT01 and may be inaccessible All my queues are there but can't see any exchanges. I hope someone can help me figure out what going on. I've looked at the logs but can't find any good hint. Here a part of the log: 2018-09-11 09:39:42 =ERROR REPORT==== ** Generic server <0.281.0> terminating ** Last message in was {'$gen_cast',{submit_async,#Fun<rabbit_queue_index.36.122888644>}} ** When Server state == undefined ** Reason for termination == ** {function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]} 2018-09-11 09:39:42 =CRASH REPORT==== crasher: initial call: worker_pool_worker:init/1 pid: <0.281.0> registered_name: [] exception exit: 
{{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]},[{gen_server2,terminate,3,[{file,"src/gen_server2.erl"},{line,1161}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]} ancestors: [worker_pool_sup,rabbit_sup,<0.262.0>] message_queue_len: 0 messages: [] links: [<0.276.0>,<0.336.0>,#Port<0.31196>] dictionary: [{fhc_age_tree,{1,{{10352640,#Ref<0.1077581647.1695285251.67028>},true,nil,nil}}},{worker_pool_worker,true},{rand_seed,{#{jump => #Fun<rand.16.15449617>,max => 288230376151711743,next => #Fun<rand.15.15449617>,type => 
exsplus},[257570830250844431|246837015578235662]}},{worker_pool_name,worker_pool},{{"c:/Users/dfpsb/AppData/Roaming/RabbitMQ/db/RABBIT~1/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/queues/9GD33C2I2PKZ7A8QHZ4MWWCKE/journal.jif",fhc_file},{file,1,true}},{{#Ref<0.1077581647.1695285251.67028>,fhc_handle},{handle,{file_descriptor,prim_file,{#Port<0.31196>,1808}},#Ref<0.1077581647.1695285251.67028>,240,false,0,infinity,[],<<>>,0,0,0,0,0,false,"c:/Users/dfpsb/AppData/Roaming/RabbitMQ/db/RABBIT~1/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/queues/9GD33C2I2PKZ7A8QHZ4MWWCKE/journal.jif",[write,binary,raw,read],[{write_buffer,infinity}],true,true,10352640}}] trap_exit: false status: running heap_size: 10958 stack_size: 27 reductions: 104391 neighbours: neighbour: [{pid,<0.279.0>},{registered_name,[]},{initial_call,{worker_pool_worker,init,['Argument__1']}},{current_function,{gen,do_call,4}},{ancestors,[worker_pool_sup,rabbit_sup,<0.262.0>]},{message_queue_len,0},{links,[<0.276.0>,<0.336.0>]},{trap_exit,false},{status,waiting},{heap_size,4185},{stack_size,42},{reductions,21548},{current_stacktrace,[{gen,do_call,4,[{file,"gen.erl"},{line,169}]},{gen_server,call,3,[{file,"gen_server.erl"},{line,210}]},{file,call,2,[{file,"file.erl"},{line,1499}]},{rabbit_queue_index,get_journal_handle,1,[{file,"src/rabbit_queue_index.erl"},{line,881}]},{rabbit_queue_index,load_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,894}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,904}]},{rabbit_queue_index,scan_queue_segments,3,[{file,"src/rabbit_queue_index.erl"},{line,724}]},{rabbit_queue_index,queue_index_walker_reader,2,[{file,"src/rabbit_queue_index.erl"},{line,712}]}]}] neighbour: 
[{pid,<0.278.0>},{registered_name,[]},{initial_call,{worker_pool_worker,init,['Argument__1']}},{current_function,{gen,do_call,4}},{ancestors,[worker_pool_sup,rabbit_sup,<0.262.0>]},{message_queue_len,0},{links,[<0.276.0>,<0.336.0>,#Port<0.31157>]},{trap_exit,false},{status,waiting},{heap_size,6772},{stack_size,102},{reductions,129623},{current_stacktrace,[{gen,do_call,4,[{file,"gen.erl"},{line,169}]},{gen_server2,call,3,[{file,"src/gen_server2.erl"},{line,323}]},{array,sparse_foldr_3,6,[{file,"array.erl"},{line,1848}]},{array,sparse_foldr_2,8,[{file,"array.erl"},{line,1837}]},{lists,foldr,3,[{file,"lists.erl"},{line,1276}]},{rabbit_queue_index,scan_queue_segments,3,[{file,"src/rabbit_queue_index.erl"},{line,725}]},{rabbit_queue_index,queue_index_walker_reader,2,[{file,"src/rabbit_queue_index.erl"},{line,712}]},{rabbit_queue_index,'-queue_index_walker/1-fun-0-',2,[{file,"src/rabbit_queue_index.erl"},{line,694}]}]}] neighbour: [{pid,<0.280.0>},{registered_name,[]},{initial_call,{worker_pool_worker,init,['Argument__1']}},{current_function,{array,set_1,4}},{ancestors,[worker_pool_sup,rabbit_sup,<0.262.0>]},{message_queue_len,0},{links,[<0.276.0>,<0.336.0>,#Port<0.31170>]},{trap_exit,false},{status,runnable},{heap_size,121536},{stack_size,44},{reductions,122988},{current_stacktrace,[{array,set_1,4,[{file,"array.erl"},{line,590}]},{array,set_1,4,[{file,"array.erl"},{line,592}]},{array,set_1,4,[{file,"array.erl"},{line,592}]},{array,set,3,[{file,"array.erl"},{line,574}]},{rabbit_queue_index,parse_segment_publish_entry,5,[{file,"src/rabbit_queue_index.erl"},{line,1135}]},{rabbit_queue_index,segment_entries_foldr,3,[{file,"src/rabbit_queue_index.erl"},{line,1091}]},{lists,foldr,3,[{file,"lists.erl"},{line,1276}]},{rabbit_queue_index,scan_queue_segments,3,[{file,"src/rabbit_queue_index.erl"},{line,725}]}]}] neighbour: 
[{pid,<0.336.0>},{registered_name,[]},{initial_call,{gatherer,init,['Argument__1']}},{current_function,{gen_server2,process_next_msg,1}},{ancestors,[<0.332.0>,<0.324.0>,<0.323.0>,rabbit_vhost_sup_sup,rabbit_sup,<0.262.0>]},{message_queue_len,2},{links,[<0.280.0>,<0.332.0>,<0.281.0>,<0.278.0>,<0.279.0>]},{trap_exit,false},{status,runnable},{heap_size,987},{stack_size,8},{reductions,73223},{current_stacktrace,[{gen_server2,process_next_msg,1,[{file,"src/gen_server2.erl"},{line,666}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}] 2018-09-11 09:39:42 =CRASH REPORT==== crasher: initial call: rabbit_msg_store:init/1 pid: <0.332.0> registered_name: [] exception exit: {{{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]},{gen_server2,call,[<0.336.0>,out,infinity]}},[{gen_server2,init_it,6,[{file,"src/gen_server2.erl"},{line,589}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]} ancestors: [<0.324.0>,<0.323.0>,rabbit_vhost_sup_sup,rabbit_sup,<0.262.0>] message_queue_len: 1 messages: 
[{'EXIT',<0.336.0>,{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]}}] links: [<0.335.0>,<0.324.0>] dictionary: [] trap_exit: true status: running heap_size: 2586 stack_size: 27 reductions: 57377 neighbours: neighbour: [{pid,<0.335.0>},{registered_name,[]},{initial_call,{rabbit_msg_store_gc,init,['Argument__1']}},{current_function,{gen_server2,process_next_msg,1}},{ancestors,[<0.332.0>,<0.324.0>,<0.323.0>,rabbit_vhost_sup_sup,rabbit_sup,<0.262.0>]},{message_queue_len,0},{links,[<0.332.0>]},{trap_exit,false},{status,waiting},{heap_size,987},{stack_size,8},{reductions,174},{current_stacktrace,[{gen_server2,process_next_msg,1,[{file,"src/gen_server2.erl"},{line,666}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}] 2018-09-11 09:39:42 =SUPERVISOR REPORT==== Supervisor: {local,worker_pool_sup} Context: child_terminated Reason: 
{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]} Offender: [{pid,<0.281.0>},{id,4},{mfargs,{worker_pool_worker,start_link,[worker_pool]}},{restart_type,transient},{shutdown,4294967295},{child_type,worker}] 2018-09-11 09:39:42 =CRASH REPORT==== crasher: initial call: rabbit_vhost_process:init/1 pid: <0.325.0> registered_name: [] exception exit: 
{{error,{{{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]},{gen_server2,call,[<0.336.0>,out,infinity]}},{child,undefined,msg_store_persistent,{rabbit_msg_store,start_link,[msg_store_persistent,"c:/Users/dfpsb/AppData/Roaming/RabbitMQ/db/RABBIT~1/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L",[],{#Fun<rabbit_queue_index.2.122888644>,{start,[{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeFragtbrevEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Test1.InterchangeFragtbrevEnvelope.RET">>},{resource,<<"/">>,queue,<<"Test2.DfLoggingEvent.Debug">>},{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Paw.DfLoggingEvent.Debug">>},{resource,<<"/">>,queue,<<"DevUnitTest.TruckLoadingEnvelope.UnitTest">>},{resource,<<"/">>,queue,<<"Test1.InterchangeFragtbrevEnvelope.RET_error">>},{resource,<<"/">>,queue,<<"Paw.InterchangeFragtbrevEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"DevUnitTest.TestMsg.UnitTest_error">>},{resource,<<"/">>,queue,<<"DevUnitTest.TestMsg.UnitTest">>},{resource,<<"/">>,queue,<<"Paw.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Test2.InterchangeFragtbrevEnve
lope.TurPlan">>},{resource,<<"/">>,queue,<<"Paw.InterchangeFragtbrevEnvelope.TurPlan_error">>},{resource,<<"/">>,queue,<<"Paw.TruckLoadingEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Test2.InterchangeFragtbrevEnvelope.TurPlan_error">>},{resource,<<"/">>,queue,<<"Test2.TruckLoadingEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Paw.DfLoggingEvent.Warning">>},{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeFragtbrevEnvelope.RET">>},{resource,<<"/">>,queue,<<"Test2.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Test2.DfLoggingEvent.Warning">>}]}}]},transient,30000,worker,[rabbit_msg_store]}}},[{gen_server2,init_it,6,[{file,"src/gen_server2.erl"},{line,581}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]} ancestors: [<0.323.0>,rabbit_vhost_sup_sup,rabbit_sup,<0.262.0>] message_queue_len: 0 messages: [] links: [<0.323.0>] dictionary: [] trap_exit: true status: running heap_size: 10958 stack_size: 27 reductions: 63314 neighbours: 2018-09-11 09:39:42 =SUPERVISOR REPORT==== Supervisor: {<0.323.0>,rabbit_vhost_sup_wrapper} Context: start_error Reason: 
{error,{{{function_clause,[{rabbit_queue_index,journal_minus_segment1,[{{true,<<172,190,166,92,192,205,125,125,36,223,114,188,53,139,128,108,0,0,0,0,0,0,0,0,0,0,26,151>>,<<>>},no_del,no_ack},{{true,<<89,173,78,227,188,37,119,171,231,189,220,236,244,79,138,177,0,0,0,0,0,0,0,0,0,0,23,40>>,<<>>},no_del,no_ack}],[{file,"src/rabbit_queue_index.erl"},{line,1231}]},{rabbit_queue_index,'-journal_minus_segment/3-fun-0-',4,[{file,"src/rabbit_queue_index.erl"},{line,1208}]},{array,sparse_foldl_3,7,[{file,"array.erl"},{line,1684}]},{array,sparse_foldl_2,9,[{file,"array.erl"},{line,1678}]},{rabbit_queue_index,'-recover_journal/1-fun-0-',1,[{file,"src/rabbit_queue_index.erl"},{line,915}]},{lists,map,2,[{file,"lists.erl"},{line,1239}]},{rabbit_queue_index,segment_map,2,[{file,"src/rabbit_queue_index.erl"},{line,1039}]},{rabbit_queue_index,recover_journal,1,[{file,"src/rabbit_queue_index.erl"},{line,906}]}]},{gen_server2,call,[<0.336.0>,out,infinity]}},{child,undefined,msg_store_persistent,{rabbit_msg_store,start_link,[msg_store_persistent,"c:/Users/dfpsb/AppData/Roaming/RabbitMQ/db/RABBIT~1/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L",[],{#Fun<rabbit_queue_index.2.122888644>,{start,[{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeFragtbrevEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Test1.InterchangeFragtbrevEnvelope.RET">>},{resource,<<"/">>,queue,<<"Test2.DfLoggingEvent.Debug">>},{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Paw.DfLoggingEvent.Debug">>},{resource,<<"/">>,queue,<<"DevUnitTest.TruckLoadingEnvelope.UnitTest">>},{resource,<<"/">>,queue,<<"Test1.InterchangeFragtbrevEnvelope.RET_error">>},{resource,<<"/">>,queue,<<"Paw.InterchangeFragtbrevEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"DevUnitTest.TestMsg.UnitTest_error">>},{resource,<<"/">>,queue,<<"DevUnitTest.TestMsg.UnitTest">>},{resource,<<"/">>,queue,<<"Paw.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Test2.InterchangeFragtbrevEnvel
ope.TurPlan">>},{resource,<<"/">>,queue,<<"Paw.InterchangeFragtbrevEnvelope.TurPlan_error">>},{resource,<<"/">>,queue,<<"Paw.TruckLoadingEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Test2.InterchangeFragtbrevEnvelope.TurPlan_error">>},{resource,<<"/">>,queue,<<"Test2.TruckLoadingEnvelope.TurPlan">>},{resource,<<"/">>,queue,<<"Paw.DfLoggingEvent.Warning">>},{resource,<<"/">>,queue,<<"DF-9ID59RK-WS.InterchangeFragtbrevEnvelope.RET">>},{resource,<<"/">>,queue,<<"Test2.InterchangeTurEnvelope.DFMobil">>},{resource,<<"/">>,queue,<<"Test2.DfLoggingEvent.Warning">>}]}}]},transient,30000,worker,[rabbit_msg_store]}}} Offender: [{pid,undefined},{id,rabbit_vhost_process},{mfargs,{rabbit_vhost_process,start_link,[<<"/">>]}},{restart_type,permanent},{shutdown,30000},{child_type,worker}]
I figured out what was going on. Someone on my team (me) was creating an unprecedented number of connections to RabbitMQ. For each connection a file handle is created for one or more files, I'm not sure which. The OS (Windows in my case) has a file-handle limit; I'm not sure what the limit is, but when it is reached, an error is thrown. This corrupted the virtual host (/) and I had to delete it and create it again. Good thing this wasn't production, because then all the messages would have been gone. Edit (June 2020): There might be a fix so this can't happen again. A virtual host can be limited in how many connections it allows, so Windows won't falter on the many file handles. Look in the management portal -> Admin -> Limits, and set max-connections to a number you think plausible for your setup.
RabbitMQ
52,271,432
11
I'd like to use queue names using a specific pattern, like project.{queue-name}.queue. And to keep this pattern solid, I wrote a helper class to generate this name from a simple identifier. So, foo would generate a queue called project.foo.queue. Simple. But, the annotation RabbitListener demands a constant string and gives me an error using my helper class. How can I achieve this (or maybe another approach) using RabbitListener annotation? @Component public class FooListener { // it doesn't work @RabbitListener(queues = QueueName.for("foo")) // it works @RabbitListener(queues = "project.foo.queue") void receive(final FooMessage message) { // ... } }
To create and listen to a queue name constructed from a dynamic UUID, you could use random.uuid. The problem is that this must be captured to a Java variable in only one place because a new random value would be generated each time the property is referenced. The solution is to use Spring Expression Language (SpEL) to call a function that provides the configured value, something like: @RabbitListener(queues = "#{configureAMQP.getControlQueueName()}") void receive(final FooMessage message) { // ... } Create the queue with something like this: @Configuration public class ConfigureAMQP { @Value("${controlQueuePrefix}-${random.uuid}") private String controlQueueName; public String getControlQueueName() { return controlQueueName; } @Bean public Queue controlQueue() { System.out.println("controlQueue(): controlQueueName=" + controlQueueName); return new Queue(controlQueueName, true, true, true); } } Notice that the necessary bean used in the SpEL was created implicitly based on the @Configuration class (with a slight alteration of the spelling ConfigureAMQP -> configureAMQP).
RabbitMQ
49,909,859
11
I am well experienced with the RabbitMQ and AMQP protocol, and have built a system with patterns for Commands, Requests and Events. Now I am going to build a system running on AWS Lambda and therefore use SNS, SQS etc. I want to understand the "mapping" between these things. What are the equivalent to an exchange in AMQP? What are the equivalent to a routing key? How to set up queue bindings for fanout, direct and topic exchanges (or similar) in SNS and SQS? How did other people handle this? To me it looks like RabbitMQ is a tool built to fit the usual needs of a message bus, where AWS provides blocks and you have to setup/build the functionality yourself. Am I right?
What are the equivalent to an exchange in AMQP? The closest concept might be SNS, as you can configure a SNS topic to publish to n SQS queues. Then when you write to that topic, each subscribed queue gets a message. You can also write messages directly to SQS queues if you like. What are the equivalent to a routing key? There's no real equivalent for this. The SNS-to-SQS bindings don't allow for any additional filtering/control beyond topic-to-queue bindings. You could approximate routing by having multiple SNS topics i.e. each topic is a "routing key". How to set up queue bindings for fanout, direct and topic exchanges (or similar) in SNS and SQS? Fanout: write to SNS topic and every subscribed queue will receive the same message. Direct: write directly to SQS queue(s), or SNS topic that has only those queues subscribed. Topic: create SNS topic and subscribe queues accordingly. How did other people handle this? I used RabbitMQ before AWS messaging, so I went through the same learning curve. AWS doesn't provide as many exchange/routing bells & whistles, but in my experience you can get close enough with some combination of SNS topics and SQS queues.
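The "one topic per routing key" approximation above can be captured in a small naming helper. A sketch; the naming convention is an assumption, the only hard constraint being that SNS topic names allow alphanumerics, hyphens, and underscores (so dots in AMQP-style keys must be replaced):

```python
def topic_for(routing_key, prefix="app"):
    """Map an AMQP-style routing key (dot-separated) to an SNS topic name."""
    return prefix + "-" + routing_key.replace(".", "-")

# Hypothetical usage with boto3 (requires AWS credentials):
#   import boto3
#   sns = boto3.client("sns")
#   topic = sns.create_topic(Name=topic_for("order.created"))
#   sns.publish(TopicArn=topic["TopicArn"], Message="...")
```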
RabbitMQ
46,880,229
11
Say that I have this task: def do_stuff_for_some_time(some_id): e = Model.objects.get(id=some_id) e.domanystuff() and I'm using it like so: do_stuff_for_some_time.apply_async(args=[some_id], queue='some_queue') The problem I'm facing is that there are a lot of repetitive tasks with the same arg param and it's bogging down the queue. Is it possible to apply async only if the same task with the same args is not already in the queue?
celery-singleton solves this requirement Caveat: requires redis broker (for distributed locks) pip install celery-singleton Use the Singleton task base class: from celery_singleton import Singleton @celery_app.task(base=Singleton) def do_stuff_for_some_time(some_id): e = Model.objects.get(id=some_id) e.domanystuff() from the docs: calls to do_stuff.delay() will either queue a new task or return an AsyncResult for the currently queued/running instance of the task
RabbitMQ
45,107,418
11
I am trying to get a few messages from a queue using the HTTP API of rabbitmq. I am following the documentation in here I have no vhost configured. I tried the following curl command: curl -i -u guest:guest -H "content-type:application/json" -X POST http://127.0.0.1:15672/api/queues/foo/get -d'{"count":5,"requeue":true,"encoding":"auto","truncate":50000}' RabbitMQ then answers: HTTP/1.1 405 Method Not Allowed vary: origin Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact) Date: Thu, 20 Apr 2017 08:03:28 GMT Content-Length: 66 Allow: HEAD, GET, PUT, DELETE, OPTIONS {"error":"Method Not Allowed","reason":"\"Method Not Allowed\"\n"} Can you point out my mistake? How can I get these messages?
you are missing the queue name: curl -i -u guest:guest -H "content-type:application/json" -X POST http://127.0.0.1:15672/api/queues/foo/my_queue/get -d'{"count":5,"requeue":true,"encoding":"auto","truncate":50000}' where foo is the virtual host, and my_queue is the queue name. as result: [ { "payload_bytes":4, "redelivered":true, "exchange":"", "routing_key":"my_queue", "message_count":5, "properties":{ "delivery_mode":1, "headers":{ } }, "payload":"test", "payload_encoding":"string" }, { "payload_bytes":4, "redelivered":true, "exchange":"", "routing_key":"my_queue", "message_count":4, "properties":{ "delivery_mode":1, "headers":{ } }, "payload":"test", "payload_encoding":"string" }, { "payload_bytes":4, "redelivered":true, "exchange":"", "routing_key":"my_queue", "message_count":3, "properties":{ "delivery_mode":1, "headers":{ } }, "payload":"test", "payload_encoding":"string" }, { "payload_bytes":4, "redelivered":true, "exchange":"", "routing_key":"my_queue", "message_count":2, "properties":{ "delivery_mode":1, "headers":{ } }, "payload":"test", "payload_encoding":"string" }, { "payload_bytes":4, "redelivered":true, "exchange":"", "routing_key":"my_queue", "message_count":1, "properties":{ "delivery_mode":1, "headers":{ } }, "payload":"test", "payload_encoding":"string" } ] EDIT In case you are using the default vhost: curl -i -u guest:guest -H "content-type:application/json" -X POST http://127.0.0.1:15672/api/queues/%2f/my_queue/get -d'{"count":5,"requeue":true,"encoding":"auto","truncate":50000}'
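The same request can be issued from Python's standard library; the important detail is percent-encoding the vhost, so the default vhost "/" becomes %2F. A sketch with illustrative host, port, and credentials:

```python
import base64
import json
import urllib.parse
import urllib.request

def get_messages_url(vhost, queue, host="127.0.0.1", port=15672):
    """Build the management-API URL; the vhost must be percent-encoded."""
    return "http://%s:%d/api/queues/%s/%s/get" % (
        host, port, urllib.parse.quote(vhost, safe=""), queue)

def fetch_messages(url, user="guest", password="guest", count=5):
    """POST the same body as the curl example; needs a running broker."""
    body = json.dumps({"count": count, "requeue": True,
                       "encoding": "auto", "truncate": 50000}).encode()
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req = urllib.request.Request(url, data=body, headers={
        "content-type": "application/json",
        "Authorization": "Basic " + token})  # same as `curl -u guest:guest`
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```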
RabbitMQ
43,513,681
11
I have implemented the example from the RabbitMQ website: RabbitMQ Example. I have expanded it into an application with a button to send a message. Then I started two consumers on two different computers. When I send messages, the first message is sent to computer1, the second message is sent to computer2, the third to computer1, and so on. Why is this, and how can I change the behavior so that each message is sent to every consumer?
Why is this?

As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.

How can I change the behavior to send each message to each consumer?

To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue. The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key. If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys. Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
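A minimal in-memory model of that fanout behavior (the class and names here are purely illustrative, not a RabbitMQ client; with pika the equivalent wiring would be channel.exchange_declare(exchange='x', exchange_type='fanout') plus one queue_bind per consumer's queue):

```python
class FanoutExchange:
    """Toy model: every bound queue receives a copy of every message."""

    def __init__(self):
        self.queues = []          # one queue per consumer

    def bind(self, queue):
        self.queues.append(queue)

    def publish(self, message, routing_key=""):
        # A fanout exchange ignores the routing key entirely:
        # each bound queue gets its own copy of the message.
        for q in self.queues:
            q.append(message)

consumer1, consumer2 = [], []     # stand-ins for the two computers' queues
ex = FanoutExchange()
ex.bind(consumer1)
ex.bind(consumer2)
ex.publish("order 42")
print(consumer1, consumer2)       # both queues hold 'order 42'
```

Contrast this with the tutorial's setup, where both consumers share one queue and the broker alternates deliveries between them.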
RabbitMQ
41,160,585
11
I'm looking for a solution for scheduled messages with RabbitMQ: not just delaying messages as described in several sources, but scheduling them so that a message is produced, e.g., every day. If not RabbitMQ, what other solutions would you suggest for a microservices environment using a message bus? It's really about combining the concept of a task scheduler and a message bus. Or is it better to use a job scheduler just to push messages to the message queue, e.g. using rundeck in combination with RabbitMQ?
Or is it better to use a job scheduler just to push messages to the message queue, e.g. using rundeck in combination with RabbitMQ?

Yes. RabbitMQ is not designed to handle scheduling, and attempting to use it for that will just be painful (at best). It is best to use another scheduling system, like cron jobs or rundeck or any of the other numerous scheduling tools available. From that tool, you can execute code that will push messages across RabbitMQ, triggering work in other parts of your system.
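The shape such a setup takes is a tiny one-shot publisher script that the external scheduler invokes; all the "every day" logic lives in cron/rundeck, none of it in RabbitMQ. A hedged sketch (the cron line, routing key, and the stubbed transport are invented for illustration; in practice send would be a pika channel's basic_publish):

```python
import json
import time

# Invoked by the scheduler, e.g. a crontab entry like:
#   0 6 * * *  /usr/bin/python publish_daily.py
# The script publishes one message and exits.

def publish(send, routing_key, body):
    # `send` is the real transport in production (e.g. basic_publish bound
    # to an exchange); it is stubbed here so the sketch runs on its own.
    send(routing_key, json.dumps(body))

outbox = []  # stand-in for the broker
publish(lambda rk, b: outbox.append((rk, b)),
        "reports.daily",
        {"job": "daily-report", "ts": int(time.time())})
print(outbox[0][0])   # reports.daily
```

Consumers elsewhere in the system then pick the message up like any other, with no scheduling awareness at all.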
RabbitMQ
40,807,355
11
I'm required to create a simple queue manager to pass a number from a sender to a consumer. The Hello World tutorial provided by RabbitMQ covers almost 70% of it, but I need to change the queue so that it does not wait forever for incoming messages, or stops waiting after a certain number of messages. I read and tried a few solutions from other posts, but they don't work: rabbitmq AMQP::consume() - undefined method; there's another method, wait_frame, but it is protected; and the other post is in Python, which I don't understand.

<?php
require_once __DIR__ . '/vendor/autoload.php';
require 'config.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

function recieveQueue($queueName){
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    // try{
    //     $connection->wait_frame(10);
    // }catch(AMQPConnectionException $e){
    //     echo "asdasd";
    // }
    $channel = $connection->channel();
    $channel->queue_declare($queueName, false, false, false, false);
    echo ' [*] Waiting for messages. To exit press CTRL+C', "\n";

    $callback = function($msg) {
        echo " [x] Received ", $msg->body, "\n";
    };

    // $tag = uniqid() . microtime(true);
    // $queue->consume($callback, $flags, $tag);
    $channel->basic_consume($queueName, '', false, true, false, false, $callback);
    // $channel->cancel($tag);

    while(count($channel->callbacks)) {
        $channel->wait();
    }
    echo "\nfinish";
}

recieveQueue('vtiger');
?>
Modify the wait() call in the while loop to pass a timeout:

$timeout = 55;
while(count($channel->callbacks)) {
    $channel->wait(null, false, $timeout);
}
RabbitMQ
33,930,923
11
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ) -- sometimes it might be necessary when interfacing with legacy systems. In case of RPC over RabbitMQ, clients send a message to the broker, broker routes the message to a worker, worker returns the result through the broker to the client. However, if a worker implements more than one remote method, then somehow the different calls need to be routed to different listeners. What is the general practice in this case? All RPC over MQ examples show only one remote method. It would be nice and easy to just set the method name as the routing rule/queue name, but I don't know whether this is the right way to do it.
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ)

It's not horrible at all! It's common, and recommended in many situations - not just legacy integration.

... ok, to your actual question now :)

From a very high level perspective, here is what you need to do. Your request and response need to have two key pieces of information:

- a correlation-id
- a reply-to queue

These bits of information will allow you to correlate the original request and the response.

Before you send the request

- Have your requesting code create an exclusive queue for itself. This queue will be used to receive the replies.
- Create a new correlation id - typically a GUID or UUID to guarantee uniqueness.

When sending the request

- Attach the correlation id that you generated to the message properties. There is a correlationId property that you should use for this.
- Store the correlation id with the associated callback function (reply handler) for the request, somewhere inside of the code that is making the request. You will need this when the reply comes in.
- Attach the name of the exclusive queue that you created to the replyTo property of the message, as well.

With all this done, you can send the message across RabbitMQ.

When replying

- The reply code needs to use both the correlationId and the replyTo fields from the original message, so be sure to grab those.
- The reply should be sent directly to the replyTo queue. Don't use standard publishing through an exchange; instead, send the reply message directly to the queue using the "send to queue" feature of whatever library you're using.
- Be sure to include the correlationId in the response as well. This is the critical part to answer your question.

When handling the reply

The code that made the original request will receive the message from the replyTo queue. It will then pull the correlationId out of the message properties, use the correlation id to look up the callback method for the request (the code that handles the response), and pass the message to that callback method. At that point you're pretty much done.

The implementation details

This works, from a high level perspective. When you get down into the code, the implementation details will vary depending on the language and driver/library you are using. Most of the good RabbitMQ libraries for any given language will have request/response built in to them. If yours doesn't, you might want to look for a different library: unless you are writing a patterns-based library on top of the AMQP protocol, you should look for a library that has common patterns implemented for you.

If you need more information on the request/reply pattern, including all of the details that I've provided here (and more), check out these resources:

- My own RabbitMQ Patterns email course / ebook
- RabbitMQ Tutorials
- Enterprise Integration Patterns - be sure to buy the book for the complete description / implementation pattern. It's worth having.

If you're working in Node.js, I recommend using the wascally library, which includes the request/reply feature you need. For Ruby, check out bunny. For Java or .NET, look at some of the many service bus implementations around. In .NET, I recommend NServiceBus or MassTransit.
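The bookkeeping described above - mapping correlation ids to reply handlers - fits in a few lines. A hedged sketch (the class, the stubbed publish callable, and the fake reply-queue name are all illustrative; with pika, publish would be channel.basic_publish with pika.BasicProperties(correlation_id=..., reply_to=...)):

```python
import uuid

class RpcClient:
    def __init__(self, publish):
        self.publish = publish    # real transport in production, stubbed here
        self.pending = {}         # correlation_id -> reply handler callback
        # Stand-in name for the exclusive, auto-named reply queue.
        self.reply_to = "amq.gen-" + uuid.uuid4().hex

    def call(self, body, on_reply):
        corr_id = str(uuid.uuid4())
        self.pending[corr_id] = on_reply
        self.publish(body, correlation_id=corr_id, reply_to=self.reply_to)
        return corr_id

    def handle_reply(self, body, correlation_id):
        # Drop replies we are no longer waiting for (e.g. after a timeout).
        callback = self.pending.pop(correlation_id, None)
        if callback is not None:
            callback(body)

sent = []
client = RpcClient(lambda body, **props: sent.append((body, props)))
results = []
cid = client.call("add 2 2", results.append)   # request goes out with corr id
client.handle_reply("4", cid)                  # worker's reply comes back
print(results)   # ['4']
```

Each remote method on the worker side can then be a separate handler keyed by routing key or queue name, while the correlation machinery stays identical for all of them.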
RabbitMQ
31,687,652
11
I am wondering why my RabbitMQ RPC-Client always processes the dead messages after a restart. _channel.QueueDeclare(queue, false, false, false, null); should disable buffers. If I overload the QueueDeclare inside the RPC-Client I can't connect to the server. Is something wrong here? Any idea how to fix this problem?

RPC-Server

new Thread(() =>
{
    var factory = new ConnectionFactory { HostName = _hostname };
    if (_port > 0)
        factory.Port = _port;
    _connection = factory.CreateConnection();
    _channel = _connection.CreateModel();
    _channel.QueueDeclare(queue, false, false, false, null);
    _channel.BasicQos(0, 1, false);
    var consumer = new QueueingBasicConsumer(_channel);
    _channel.BasicConsume(queue, false, consumer);
    IsRunning = true;
    while (IsRunning)
    {
        BasicDeliverEventArgs ea;
        try
        {
            ea = consumer.Queue.Dequeue();
        }
        catch (Exception ex)
        {
            IsRunning = false;
        }
        var body = ea.Body;
        var props = ea.BasicProperties;
        var replyProps = _channel.CreateBasicProperties();
        replyProps.CorrelationId = props.CorrelationId;
        var xmlRequest = Encoding.UTF8.GetString(body);
        var messageRequest = XmlSerializer.DeserializeObject(xmlRequest, typeof(Message)) as Message;
        var messageResponse = handler(messageRequest);
        _channel.BasicPublish("", props.ReplyTo, replyProps, messageResponse);
        _channel.BasicAck(ea.DeliveryTag, false);
    }
}).Start();

RPC-Client

public void Start()
{
    if (IsRunning)
        return;
    var factory = new ConnectionFactory
    {
        HostName = _hostname,
        Endpoint = _port <= 0 ? new AmqpTcpEndpoint(_endpoint) : new AmqpTcpEndpoint(_endpoint, _port)
    };
    _connection = factory.CreateConnection();
    _channel = _connection.CreateModel();
    _replyQueueName = _channel.QueueDeclare(); // Do not connect any more
    _consumer = new QueueingBasicConsumer(_channel);
    _channel.BasicConsume(_replyQueueName, true, _consumer);
    IsRunning = true;
}

public Message Call(Message message)
{
    if (!IsRunning)
        throw new Exception("Connection is not open.");
    var corrId = Guid.NewGuid().ToString().Replace("-", "");
    var props = _channel.CreateBasicProperties();
    props.ReplyTo = _replyQueueName;
    props.CorrelationId = corrId;
    if (!String.IsNullOrEmpty(_application))
        props.AppId = _application;
    message.InitializeProperties(_hostname, _nodeId, _uniqueId, props);
    var messageBytes = Encoding.UTF8.GetBytes(XmlSerializer.ConvertToString(message));
    _channel.BasicPublish("", _queue, props, messageBytes);
    try
    {
        while (IsRunning)
        {
            var ea = _consumer.Queue.Dequeue();
            if (ea.BasicProperties.CorrelationId == corrId)
            {
                var xmlResponse = Encoding.UTF8.GetString(ea.Body);
                try
                {
                    return XmlSerializer.DeserializeObject(xmlResponse, typeof(Message)) as Message;
                }
                catch (Exception ex)
                {
                    IsRunning = false;
                    return null;
                }
            }
        }
    }
    catch (EndOfStreamException ex)
    {
        IsRunning = false;
        return null;
    }
    return null;
}
Try setting the DeliveryMode property to non-persistent (1) in your RPC-Client code like this: public Message Call(Message message) { ... var props = _channel.CreateBasicProperties(); props.DeliveryMode = 1; //you might want to do this in your RPC-Server as well ... } AMQP Model Explained contains very useful resources, like explaining how to handle messages that end up in the dead letter queue. Another useful note from the documentation with regards to queue durability: Durable queues are persisted to disk and thus survive broker restarts. Queues that are not durable are called transient. Not all scenarios and use cases mandate queues to be durable. Durability of a queue does not make messages that are routed to that queue durable. If broker is taken down and then brought back up, durable queue will be re-declared during broker startup, however, only persistent messages will be recovered. Note that it talks about broker restart not publisher or consumer restart.
RabbitMQ
31,369,854
11
At first glance I liked the "Batches" feature in Celery very much, because I need to group an amount of IDs before calling an API (otherwise I may be kicked out). Unfortunately, when testing a little bit, batch tasks don't seem to play well with the rest of the Canvas primitives, in this case, chains. For example:

@a.task(base=Batches, flush_every=10, flush_interval=5)
def get_price(requests):
    for request in requests:
        a.backend.mark_as_done(request.id, 42, request=request)
    print "filter_by_price " + str([r.args[0] for r in requests])

@a.task
def completed():
    print("complete")

So, with this simple workflow:

chain(get_price.s("ID_1"), completed.si()).delay()

I see this output:

[2015-07-11 16:16:20,348: INFO/MainProcess] Connected to redis://localhost:6379/0
[2015-07-11 16:16:20,376: INFO/MainProcess] mingle: searching for neighbors
[2015-07-11 16:16:21,406: INFO/MainProcess] mingle: all alone
[2015-07-11 16:16:21,449: WARNING/MainProcess] celery@ultra ready.
[2015-07-11 16:16:34,093: WARNING/Worker-4] filter_by_price ['ID_1']

After 5 seconds, filter_by_price() gets triggered just like expected. The problem is that completed() never gets invoked. Any ideas of what could be going on here? If not using batches, what could be a decent approach to solve this problem?

PS: I have set CELERYD_PREFETCH_MULTIPLIER=0 like the docs say.
Looks like the behaviour of batch tasks is significantly different from normal tasks. Batch tasks do not even emit signals like task_success. Since you need to call the completed task after get_price, you can call it directly from get_price itself:

@a.task(base=Batches, flush_every=10, flush_interval=5)
def get_price(requests):
    for request in requests:
        # do something
        completed.delay()
RabbitMQ
31,360,918
11
I'm trying to move away from SQS to RabbitMQ for our messaging service. I'm looking to build a stable, highly available queuing service. For now I'm going with a cluster.

Current implementation: I have three EC2 machines with RabbitMQ (with the management plugin) installed in an AMI, and then I explicitly go to each of the machines and add

sudo rabbitmqctl join_cluster rabbit@<hostnameOfParentMachine>

With the HA property set to all, the synchronization works, and there is a load balancer on top of it with a DNS name assigned. So far this setup works.

Expected implementation: Create an autoscaling clustered environment where the machines that go up/down have to join/leave the cluster dynamically. What is the best way to achieve this? Please help.
I had a similar configuration 2 years ago. I decided to use Amazon VPC. By default my design had two RabbitMQ instances always running and configured in a cluster (called master nodes). The RabbitMQ cluster was behind an internal Amazon load balancer.

I created an AMI with RabbitMQ and the management plugin configured (called "master-AMI"), and then I configured the autoscaling rules. If an autoscaling alarm is raised, a new master-AMI is launched. This AMI executes the following script the first time it runs:

#!/usr/bin/env python
import json
import urllib2, base64

if __name__ == '__main__':
    prefix = ''
    from subprocess import call
    call(["rabbitmqctl", "stop_app"])
    call(["rabbitmqctl", "reset"])
    try:
        _url = 'http://internal-myloadbalamcer-xxx.com:15672/api/nodes'
        print prefix + 'Get json info from ..' + _url
        request = urllib2.Request(_url)
        base64string = base64.encodestring('%s:%s' % ('guest', 'guest')).replace('\n', '')
        request.add_header("Authorization", "Basic %s" % base64string)
        data = json.load(urllib2.urlopen(request))
        ## if the script got an error here you can assume that it's the first machine, and then
        ## exit without handling the error. Remember to add the new machine to the balancer.
        print prefix + 'request ok... finding for running node'
        for r in data:
            if r.get('running'):
                print prefix + 'found running node to bind..'
                print prefix + 'node name: ' + r.get('name') + ' - running: ' + str(r.get('running'))
                from subprocess import call
                call(["rabbitmqctl", "join_cluster", r.get('name')])
                break
    except Exception, e:
        print prefix + 'error during add node'
    finally:
        from subprocess import call
        call(["rabbitmqctl", "start_app"])

The script uses the HTTP API "http://internal-myloadbalamcer-xxx.com:15672/api/nodes" to find nodes, then chooses one and binds the new AMI to the cluster.

As HA policy I decided to use this:

rabbitmqctl set_policy ha-two "^two\." "{\"ha-mode\":\"exactly\",\"ha-params\":2,\"ha-sync-mode\":\"automatic\"}"

Well, the join is "quite" easy; the problem is deciding when you can remove the node from the cluster. You can't remove a node based on an autoscaling rule, because you can still have messages in the queues that have to be consumed. I decided to execute a script periodically, running on the two master-node instances, that:

- checks the message count through the API http://node:15672/api/queues
- if the message count for all queues is zero, removes the instance from the load balancer and then from the RabbitMQ cluster

This is broadly what I did, hope it helps.

[EDIT] I edited the answer, since there is this plugin that can help: I suggest to see this: https://github.com/rabbitmq/rabbitmq-autocluster The plugin has been moved to the official RabbitMQ repository, and can easily solve this kind of problem.
RabbitMQ
31,340,413
11
Please, imagine you have a method like the following:

public void PlaceOrder(Order order)
{
    this.SaveOrderToDataBase(order);
    this.bus.Publish(new OrderPlaced(Order));
}

After the order is saved to the database, an event is published to the message queuing system, so other subsystems on the same or another machine can process it. But what happens if the this.bus.Publish(new OrderPlaced(Order)) call fails? Or the machine crashes just after saving the order to the database? The event is not published and other subsystems cannot process it. This is unacceptable. If this happens I need to ensure that the event is eventually published. What are acceptable strategies? Which is the best one?

NOTE: I don't want to use distributed transactions.

EDIT: Paul Sasik is very close, and I think I can achieve 100%. This is what I thought: first create an Events table in the database like the following:

CREATE TABLE Events (EventId int PRIMARY KEY)

You may want to use guids instead of int, or you may use sequences or identities. Then do the following pseudocode:

1. open transaction
2. save order and event via a single transaction
   (in case of failure, report error and return)
3. place order in message queue
   (in case of failure, report error, roll back transaction and return)
4. commit transaction

All events must include EventId. When event subscribers receive an event, they first check for the EventId's existence in the database. This way you get 100% reliability, not only 99.999%.
The correct way to ensure the event is eventually published to the message queuing system is explained in this video and on this blog post. Basically you need to store the message to be sent in the database, in the same transaction in which you perform the business logic operation, then send the message to the bus asynchronously and delete the message from the database in another transaction:

public void PlaceOrder(Order order)
{
    BeginTransaction();
    try
    {
        SaveOrderToDataBase(order);
        ev = new OrderPlaced(Order);
        SaveEventToDataBase(ev);
        CommitTransaction();
    }
    catch
    {
        RollbackTransaction();
        return;
    }
    PublishEventAsync(ev);
}

async Task PublishEventAsync(BusinessEvent ev)
{
    BeginTransaction();
    try
    {
        await DeleteEventAsync(ev);
        await bus.PublishAsync(ev);
        CommitTransaction();
    }
    catch
    {
        RollbackTransaction();
    }
}

Because PublishEventAsync may fail, you have to retry later, so you need a background process for retrying failed sendings, something like this:

foreach (ev in eventsThatNeedToBeSent)
{
    await PublishEventAsync(ev);
}
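This outbox flow can be sketched end-to-end with an in-memory SQLite database (a hedged illustration: the table names and the stubbed publish callable are invented for the demo; in the answer's setup, publish would be the bus client and the drain would run in a background process):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT)")

def place_order(order):
    # The order row and its event row commit (or roll back) atomically,
    # so an event can never exist without its order, or vice versa.
    with db:
        db.execute("INSERT INTO orders (body) VALUES (?)", (json.dumps(order),))
        db.execute("INSERT INTO outbox (event) VALUES (?)",
                   (json.dumps({"type": "OrderPlaced", "order": order}),))

def drain_outbox(publish):
    # Retried until empty by a background process; a crash between publish
    # and delete only causes a duplicate, which subscribers de-duplicate
    # by event id (at-least-once delivery).
    rows = db.execute("SELECT id, event FROM outbox").fetchall()
    for event_id, event in rows:
        publish(event)
        with db:
            db.execute("DELETE FROM outbox WHERE id = ?", (event_id,))

place_order({"sku": "ABC", "qty": 1})
published = []
drain_outbox(published.append)
print(len(published), db.execute("SELECT COUNT(*) FROM outbox").fetchone()[0])
# 1 0
```

If publish raises, the row stays in the outbox and the next drain pass retries it, which is exactly the "eventually published" guarantee the question asks for.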
RabbitMQ
30,780,979
11
I am using Unity App Block as my IoC container for the service layer of a WCF project. This works quite well using the Unity.WCF library to plug it into each WCF service. I recently introduced RabbitMQ into my service layer and I am currently using "using" blocks to connect and add to the queue. I don't like this, though, and am looking to use the HierarchicalLifetimeManager to create and destroy my connection to RabbitMQ as I need it. Does this sound correct? I'm looking for a sample of this, or at least some guidance on the best approach. (e.g. Should I encapsulate the connection and inject it into each service as needed? How would I encapsulate a RabbitMQ consumer, etc.?)
I would advise registering the IConnection as a singleton. To register the IConnection as a singleton in Unity you would use a ContainerControlledLifetimeManager, e.g. var connectionFactory = new ConnectionFactory { // Configure the connection factory }; unityContainer.RegisterInstance(connectionFactory); unityContainer.RegisterType<IConnection, AutorecoveringConnection>(new ContainerControlledLifetimeManager(), new InjectionMethod("init")); The AutorecoveringConnection instance, once resolved for the first time will stay alive until the owning UnityContainer is disposed. Because we have registered the ConnectionFactory with Unity, this will automatically be injected into the constructor of AutorecoveringConnection. The InjectionMethod ensures that the first time the AutorecoveringConnection is resolved, the init method is invoked. As for your question about whether you should abstract away RabbitMQ from your services, my answer would be yes, however I would not simply create an IMessageQueue abstraction. Think about what purpose you are using your message queue for, is it to push statuses? If so, have an IStatusNotifier interface with a concrete implementation for RabbitMQ. If it's to fetch updates, have an IUpdateSource interface with a concrete implementation for RabbitMQ. You can see where I am going with this. If you create an abstraction for a Message Queue, you are limiting yourself to features only available across all Message Queue implementations. By having a different implementation of IStatusNotifier for different Message Queue implementations, you are able to take advantage of features which are unique to different technologies while also remaining flexible in case completely different technologies are employed in future (e.g. Writing to a SQL database or outputting to a console).
RabbitMQ
29,985,065
11
I'm using the amqp.node library to integrate RabbitMQ into my system. But in the consumer I want to process just one message at a time: process the message, acknowledge it, then consume the next message from the queue. The current code is:

// Consumer
open.then(function(conn) {
    var ok = conn.createChannel();
    ok = ok.then(function(ch) {
        ch.assertQueue(q);
        ch.consume(q, function(msg) {
            if (msg !== null) {
                othermodule.processMessage(msg, function(error, response) {
                    console.log(msg.content.toString());
                    ch.ack(msg);
                });
            }
        });
    });
    return ok;
}).then(null, console.warn);

ch.consume will deliver all the messages in the queue at once, and the asynchronous othermodule.processMessage callback will not have finished before the next message arrives. I want to wait for the othermodule function to finish before consuming the next message in the queue.
As of now (2018), the RabbitMQ tutorial covers exactly this: https://www.rabbitmq.com/tutorials/tutorial-two-javascript.html

ch.prefetch(1);

"In order to defeat that we can use the prefetch method with the value of 1. This tells RabbitMQ not to give more than one message to a worker at a time. Or, in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, it will dispatch it to the next worker that is not still busy."
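The effect of prefetch(1) can be modeled in a few lines (an illustrative toy model, not the amqp.node or broker implementation): the broker refuses to dispatch a new message to a consumer that already holds an unacknowledged one.

```python
class Channel:
    """Toy model of per-consumer prefetch: no new delivery while the
    consumer already holds `prefetch` unacked messages."""

    def __init__(self, prefetch):
        self.prefetch = prefetch
        self.unacked = 0
        self.queue = []

    def publish(self, msg):
        self.queue.append(msg)
        self._dispatch()

    def ack(self):
        self.unacked -= 1
        self._dispatch()          # the ack frees a delivery slot

    def _dispatch(self):
        while self.queue and self.unacked < self.prefetch:
            self.unacked += 1
            self.on_message(self.queue.pop(0))

ch = Channel(prefetch=1)
delivered = []
ch.on_message = delivered.append
for m in ("m1", "m2", "m3"):
    ch.publish(m)
print(delivered)   # ['m1'] -- 'm2' is held back until the ack
ch.ack()
print(delivered)   # ['m1', 'm2']
```

In the question's code this means calling ch.ack(msg) only inside the othermodule.processMessage callback, exactly as written, so the next delivery waits for the previous one to finish.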
RabbitMQ
28,747,192
11
All the examples in pika tutorial end with the client invoking start_consuming(), which starts an infinite loop. These examples work for me. However, I do not want my client to run forever. Instead, I need my client to consume messages for some time, such as 15 minutes, then stop. How do I accomplish that?
You can consume messages one at a time with your own loop, assuming you have a channel and queue set up. The following checks whether the queue is empty, and if not, pops a single message off of it:

queue_state = channel.queue_declare(queue, durable=True, passive=True)
queue_empty = queue_state.method.message_count == 0

Declaring a queue that already exists with the passive flag set allows you to query its state. Next we process a message:

if not queue_empty:
    method, properties, body = channel.basic_get(queue, no_ack=True)
    callback_func(channel, method, properties, body)

Here callback_func is our normal callback. Make sure not to register the callback with the queue when you want to process this way:

# DO NOT
channel.basic_consume(callback_func, queue, no_ack=True)

This will make the manual consume do weird things. I have seen the queue_declare code actually process a message if I have made this call beforehand.
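Putting that together for the original "consume for 15 minutes, then stop" requirement, a deadline-bounded polling loop looks like this (a hedged sketch: get_message is stubbed so the example is self-contained; with pika it would wrap the channel.basic_get call shown above, and duration would be 900 seconds):

```python
import time

def consume_for(get_message, handle, duration):
    # Poll until the deadline passes. get_message returns None when the
    # queue is empty (like a basic_get that yielded no message).
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        msg = get_message()
        if msg is None:
            time.sleep(0.01)      # brief idle wait instead of busy-spinning
            continue
        handle(msg)

backlog = ["a", "b", "c"]         # stand-in for the queue's contents
seen = []
consume_for(lambda: backlog.pop(0) if backlog else None, seen.append, 0.05)
print(seen)   # ['a', 'b', 'c']
```

Unlike start_consuming(), the loop returns control to you when the deadline passes, at which point you can close the channel and connection cleanly.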
RabbitMQ
26,977,708
11
I create a vhost:

rabbitmqctl add_vhost test

Then a user:

rabbitmqctl add_user user 123456

Then I set permissions for that user:

rabbitmqctl set_permissions -p test user "test" "test" "test"

I use Celery; in tasks.py:

app = Celery('tasks', broker='amqp://user:123456@localhost/test',
             backend='amqp://user:123456@localhost/test')

Then I run:

celery -A tasks worker --loglevel=info

I get this error:

amqp.exceptions.AccessRefused: Exchange.declare: (403) ACCESS_REFUSED - access to exchange 'celeryev' in vhost 'test' refused for user 'user'

How do I fix that?
Take a look at set_permissions here: https://www.rabbitmq.com/rabbitmqctl.8.html#Access_control

When you call set_permissions you are passing "test" for configure, read and write, so your user will only be able to use a queue/exchange named "test". That is why declaring the 'celeryev' exchange is refused. Also take a look at this link: https://www.rabbitmq.com/access-control.html
RabbitMQ
26,471,231
11
I'm learning how to use RabbitMQ. I'm running the RabbitMQ server on my MacBook and trying to connect with a Python client. I followed the installation instructions here, and now I'm performing the tutorial shown here.

The tutorial says to run this client:

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

However, when I do, I get the following error while trying to establish the connection:

WARNING:pika.adapters.base_connection:Connection to ::1:5672 failed: [Errno 61] Connection refused

As you can see, rabbitmq-server seems to be running fine in a different window:

% rabbitmq-server
              RabbitMQ 3.3.1. Copyright (C) 2007-2014 GoPivotal, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /usr/local/var/log/rabbitmq/[email protected]
  ######  ##        /usr/local/var/log/rabbitmq/[email protected]
  ##########
              Starting broker... completed with 10 plugins.

% ps -ef | grep -i rabbit
973025343 37253     1   0  2:47AM ??      0:00.00 /usr/local/Cellar/rabbitmq/3.3.1/erts-5.10.3/bin/../../erts-5.10.3/bin/epmd -daemon
973025343 37347   262   0  2:49AM ttys001 0:02.66 /usr/local/Cellar/rabbitmq/3.3.1/erts-5.10.3/bin/../../erts-5.10.3/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/local/Cellar/rabbitmq/3.3.1/erts-5.10.3/bin/../.. -progname erl -- -home /Users/myUser -- -pa /usr/local/Cellar/rabbitmq/3.3.1/ebin -noshell -noinput -s rabbit boot -sname rabbit@localhost -boot /usr/local/Cellar/rabbitmq/3.3.1/releases/3.3.1/start_sasl -kernel inet_default_connect_options [{nodelay,true}] -rabbit tcp_listeners [{"127.0.0.1",5672}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/usr/local/var/log/rabbitmq/[email protected]"} -rabbit sasl_error_logger {file,"/usr/local/var/log/rabbitmq/[email protected]"} -rabbit enabled_plugins_file "/usr/local/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/local/Cellar/rabbitmq/3.3.1/plugins" -rabbit plugins_expand_dir "/usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/usr/local/var/lib/rabbitmq/mnesia/rabbit@localhost" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672

How can I establish this connection? What is the problem?
The client is trying to connect using IPv6 localhost (::1:5672), while the server is listening to IPv4 localhost ({"127.0.0.1",5672}). Try changing the client to connect to the IPv4 localhost instead; connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1'))
RabbitMQ
24,103,758
11
Looking for some code samples to solve this problem: I would like to write some code (Python or JavaScript) that would act as a subscriber to a RabbitMQ queue, so that on receiving a message it would broadcast the message via websockets to any connected client. I've looked at Autobahn and node.js (using "amqp" and "ws") but cannot get things to work as needed. Here's the server code in JavaScript using node.js:

var amqp = require('amqp');
var WebSocketServer = require('ws').Server
var connection = amqp.createConnection({host: 'localhost'});
var wss = new WebSocketServer({port:8000});

wss.on('connection',function(ws){
    ws.on('open', function() {
        console.log('connected');
        ws.send(Date.now().toString());
    });
    ws.on('message',function(message){
        console.log('Received: %s',message);
        ws.send(Date.now().toString());
    });
});

connection.on('ready', function(){
    connection.queue('MYQUEUE', {durable:true,autoDelete:false},function(queue){
        console.log(' [*] Waiting for messages. To exit press CTRL+C')
        queue.subscribe(function(msg){
            console.log(" [x] Received from MYQUEUE %s",msg.data.toString('utf-8'));
            payload = msg.data.toString('utf-8');
            // HOW DOES THIS NOW GET SENT VIA WEBSOCKETS ??
        });
    });
});

Using this code, I can successfully subscribe to a queue in Rabbit and receive any messages that are sent to the queue. Similarly, I can connect a websocket client (e.g. a browser) to the server and send/receive messages. BUT ... how can I send the payload of the Rabbit queue message as a websocket message at the point indicated ("HOW DOES THIS NOW GET SENT VIA WEBSOCKETS")? I think it's something to do with being stuck in the wrong callback, or they need to be nested somehow ...?

Alternatively, if this can be done more easily in Python (via Autobahn and pika) that would be great. Thanks!
One way to implement your system is to use Python with tornado. Here's the server:

import tornado.ioloop
import tornado.web
import tornado.websocket
import os
import pika
from threading import Thread

clients = []

def threaded_rmq():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    print 'Connected:localhost'
    channel = connection.channel()
    channel.queue_declare(queue="my_queue")
    print 'Consumer ready, on my_queue'
    channel.basic_consume(consumer_callback, queue="my_queue", no_ack=True)
    channel.start_consuming()

def consumer_callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    for itm in clients:
        itm.write_message(body)

class SocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print "WebSocket opened"
        clients.append(self)

    def on_message(self, message):
        self.write_message(u"You said: " + message)

    def on_close(self):
        print "WebSocket closed"
        clients.remove(self)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        print "get page"
        self.render("websocket.html")

application = tornado.web.Application([
    (r'/ws', SocketHandler),
    (r"/", MainHandler),
])

if __name__ == "__main__":
    thread = Thread(target = threaded_rmq)
    thread.start()
    application.listen(8889)
    tornado.ioloop.IOLoop.instance().start()

and here's the HTML page:

<html>
<head>
<script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
<script>
$(document).ready(function() {
    var ws;
    if ('WebSocket' in window) {
        ws = new WebSocket('ws://localhost:8889/ws');
    } else if ('MozWebSocket' in window) {
        ws = new MozWebSocket('ws://localhost:8889/ws');
    } else {
        alert("<tr><td> your browser doesn't support web socket </td></tr>");
        return;
    }
    ws.onopen = function(evt) { alert("Connection open ...") };
    ws.onmessage = function(evt) { alert(evt.data); };

    function closeConnect() {
        ws.close();
    }
});
</script>
</head>
</html>

So when you publish a message to "my_queue", the message is delivered to all connected web pages. I hope it can be useful.

EDIT: Here https://github.com/Gsantomaggio/rabbitmqexample you can find the complete example.
RabbitMQ
22,862,970
11
I've been watching Rick Branson's PyCon video: Messaging at Scale at Instagram. You might want to watch the video in order to answer this question. Rick Branson uses Celery, Redis and RabbitMQ. To get you up to speed, each user has a Redis list for their homefeed. Each list contains media IDs of photos posted by the people they follow. Justin Bieber, for example, has 1.5 million followers. When he posts a photo, the ID of that photo needs to be inserted into each individual Redis list for each of his followers. This is called the Fanout-On-Write approach. However, there are a few reliability problems with this approach. It can work, but for someone like Justin Bieber or Lady Gaga who have millions of followers, doing this in the web request (where you have 0-500ms to complete the request) can be a problem. By then, the request will time out. So Rick Branson decided to use Celery, an asynchronous task queue/job queue based on distributed message passing. Any heavy lifting such as inserting media IDs into followers' lists can be done asynchronously, outside of the web request. The request will complete and Celery will continue to insert the IDs into all of the lists. This approach works wonders. But again, you don't want to deliver all of Justin's followers to Celery in one huge chunk because it would tie up a Celery worker. Why not have multiple workers work on it at the same time so it finishes faster? Brilliant idea! You'd want to break up this chunk into smaller chunks and have different workers working on each batch. Rick Branson does a batch of 10,000 followers, and he uses something called a cursor to keep inserting media IDs for all of Justin Bieber's followers until it is completed. In the video, he talks about this at 3:56. I was wondering if anyone could explain this more and show examples of how it can be done. I'm currently trying to attempt the same setup. I use Andy McCurdy's redis-py Python client library to communicate with my Redis server.
For every user on my service, I create a Redis followers list. So a user with an ID of 343 would have a list at the following key:

    followers:343

I also create a homefeed list for each user. Every user has their own list. So a user with an ID of 1990 would have a list at the following key:

    homefeed:1990

The "followers:343" Redis list contains all the IDs of the people who follow user 343. User 343 has 20,007 followers. Below, I am retrieving all the IDs in the list starting from index 0 all the way to the end (-1) just to show you what it looks like:

    >>> r_server.lrange("followers:343", 0, -1)
    ['8', '7', '5', '3', '65', '342', '42', etc...]
    ---> for the sake of example, assume this list has another 20,000 IDs.

What you see is a list of all the IDs of users who follow user 343. Here is my proj/mydjangoapp/tasks.py which contains my insert_into_homefeed function:

    from __future__ import absolute_import
    from celery import shared_task
    import redis

    pool = redis.ConnectionPool(host='XX.XXX.XXX.X', port=6379, db=0, password='XXXXX')

    @shared_task
    def insert_into_homefeed(photo_id, user_id):
        # Grab the list of all follower IDs from Redis for user_id.
        r_server = redis.Redis(connection_pool=pool)
        followers_list = r_server.lrange("followers:%s" % (user_id), 0, -1)

        # Now for each follower_id in followers_list, find their homefeed key
        # in Redis and insert the photo_id into that homefeed list.
        for follower_id in followers_list:
            homefeed_list = r_server.lpush("homefeed:%s" % (follower_id), photo_id)

        return "Fan Out Completed for %s" % (user_id)

In this task, when called from the Django view, it will grab all the IDs of the people who follow user 343 and then insert the photo ID into all of their homefeed lists. Here is my upload view in my proj/mydjangoapp/views.py.
I basically call Celery's delay method and pass on the necessary variables so that the request ends quickly:

    # Import the Celery Task Here
    from mydjangoapp.tasks import insert_into_homefeed

    @csrf_exempt
    def Upload(request):
        if request.method == 'POST':
            data = json.loads(request.body)
            newPhoto = Photo.objects.create(user_id=data['user_id'],
                                            description=data['description'],
                                            photo_url=data['photo_url'])
            newPhoto_ID = newPhoto.pk
            insert_into_homefeed.delay(newPhoto_ID, data['user_id'])
        return HttpResponse("Request Completed")

How can I do this in such a way that it will be batched by 10,000?
The approach described in the video is task "chaining".

To get your task method up and running as a chain, you want to add an extra parameter that represents the index into the list of followers. Instead of working on the full list of followers, the task only works on a fixed batch size, starting from the index argument it was handed. At completion, the task should create a new task and pass the new index:

    INSERT_INTO_HOMEFEED_BATCH = 10000

    @shared_task
    def insert_into_homefeed(photo_id, user_id, index=0):
        # Grab one batch of follower IDs from Redis for user_id.
        r_server = redis.Redis(connection_pool=pool)
        range_limit = index + INSERT_INTO_HOMEFEED_BATCH - 1  # adjust for zero-index
        followers_list_batch = r_server.lrange("followers:%s" % (user_id), index, range_limit)

        if not followers_list_batch:
            return  # zero followers or no more batches

        # Now for each follower_id in followers_list_batch, find their homefeed key
        # in Redis and insert the photo_id into that homefeed list.
        for follower_id in followers_list_batch:
            r_server.lpush("homefeed:%s" % (follower_id), photo_id)

        # Queue up the next batch, starting just past this one.
        insert_into_homefeed.delay(photo_id, user_id, range_limit + 1)

This works well because Redis lists are ordered and the lrange command doesn't return an error on out-of-range inputs.
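The batch arithmetic above can be checked in plain Python. The sketch below is a standalone stand-in, not the real task: an in-memory list replaces the Redis lrange call, a recursive call replaces the Celery delay, and the batch size is shrunk from 10,000 to 3 for illustration.

```python
# Standalone sketch of the chained batching above: a plain list stands in
# for Redis, recursion stands in for insert_into_homefeed.delay, and the
# batch size is shrunk from 10000 to 3 for illustration.
BATCH = 3

def fan_out(followers, photo_id, index=0, homefeeds=None):
    if homefeeds is None:
        homefeeds = {}
    # lrange(key, index, range_limit) is inclusive on both ends,
    # so a batch of 3 spans index .. index + 2.
    range_limit = index + BATCH - 1
    batch = followers[index:range_limit + 1]  # Python slice end is exclusive
    if not batch:
        return homefeeds  # zero followers or no more batches
    for follower_id in batch:
        homefeeds.setdefault(follower_id, []).insert(0, photo_id)  # like lpush
    # In Celery this would be insert_into_homefeed.delay(...); here we recurse.
    return fan_out(followers, photo_id, range_limit + 1, homefeeds)

feeds = fan_out(["a", "b", "c", "d", "e", "f", "g"], 42)
print(len(feeds))  # -> 7, every follower got the photo
```

The out-of-range behavior the answer relies on shows up in the last step: the slice past the end of the list comes back empty, which terminates the chain.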
RabbitMQ
21,007,096
11
I want my email service that I wrote to be completely decoupled from my Flask application. I am using Celery with RabbitMQ. So I am wondering: is there a way I can configure Celery so that in one project I have the Flask application that sends the message to the queue (producer), and in another project I have the Celery instance running that listens for the message and executes the task (consumer)? I am still confused by how the communication will exactly work. Do I put the API (that sends the email) in my Flask application OR in the Celery project? Ultimately I would like to have the Flask application and the Celery instance on different EC2 instances, with RabbitMQ acting as the message broker. Thanks for your help!
You can use Celery's send_task function to send the task through RabbitMQ to the worker using the task name. You still need to import the module that you have the Celery app in:

    If the task is not registered in the current process you can use send_task() to call the task by name instead.

Example:

    from yourmodule.yourapp import celery

    celery.send_task("yourtasksmodule.yourtask", args=["Hello World"])
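The point of send_task is that the producer dispatches purely by the task's string name, so it never imports the task function itself. That indirection can be sketched in plain Python (this is a toy model, not Celery: the registry dict stands in for the broker plus worker, and the names are illustrative):

```python
# Toy model of name-based dispatch: the registry dict stands in for
# "broker + worker"; the producer side only knows the task's string name.
registry = {}

def task(name):
    # Worker-side decorator: register a function under an explicit name.
    def register(fn):
        registry[name] = fn
        return fn
    return register

def send_task(name, args=()):
    # Producer side: no import of the function, just its name.
    return registry[name](*args)

# This part would live in the worker project, never imported by the producer.
@task("yourtasksmodule.yourtask")
def send_email(greeting):
    return "emailed: %s" % greeting

print(send_task("yourtasksmodule.yourtask", args=["Hello World"]))
# -> emailed: Hello World
```

In real Celery the two sides live in separate projects (even separate EC2 instances), and only the shared task name plus the broker URL ties them together.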
RabbitMQ
19,643,774
11
I have converted a standalone batch job to use Celery for dispatching the work to be done. I'm using RabbitMQ. Everything is running on a single machine and no other processes are using the RabbitMQ instance. My script just creates a bunch of tasks which are processed by workers. Is there a simple way to measure the time from the start of my script until all tasks are finished? I know that this is a bit complicated by design when using message queues. But I don't want to do it in production, just for testing and getting a performance estimation.
You could use Celery signals: functions registered with them will be called before and after a task is executed, so it is trivial to measure elapsed time:

    from time import time
    from celery.signals import task_prerun, task_postrun

    d = {}

    @task_prerun.connect
    def task_prerun_handler(signal, sender, task_id, task, args, kwargs, **extras):
        d[task_id] = time()

    @task_postrun.connect
    def task_postrun_handler(signal, sender, task_id, task, args, kwargs, retval, state, **extras):
        try:
            cost = time() - d.pop(task_id)
        except KeyError:
            cost = -1
        print task.__name__, cost
RabbitMQ
19,481,470
11
I'm in a phase of learning RabbitMQ/AMQP from the RabbitMQ documentation. Something that is not clear to me that I wanted to ask those who have hands-on experience. I want to have multiple consumers listening to the same queue in order to balance the work load. What I need is pretty much close to the "Work Queues" example in the RabbitMQ tutorial. I want the consumer to acknowledge message explicitly after it finishes handling it to preserve the message and delegate it to another consumer in case of crash. Handling a message may take a while. My question is whether AMQP postpones next message processing until the previous message is ack'ed? If so how do I achieve load balancing between multiple workers and guarantee no messages get lost?
No, the other consumers don't get blocked. Other messages will get delivered even if they have unacknowledged but delivered predecessors. If a channel closes while holding unacknowledged messages, those messages get returned to the queue.

See RabbitMQ Broker Semantics:

    Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages.

EDIT

In response to your comment, time to dive a little deeper into the AMQP specification then:

    3.1.4 Message Queues

    A message queue is a named FIFO buffer that holds messages on behalf of a set of consumer applications. Applications can freely create, share, use, and destroy message queues, within the limits of their authority. Note that in the presence of multiple readers from a queue, or client transactions, or use of priority fields, or use of message selectors, or implementation-specific delivery optimisations the queue MAY NOT exhibit true FIFO characteristics. The only way to guarantee FIFO is to have just one consumer connected to a queue. The queue may be described as "weak-FIFO" in these cases. [...]

    3.1.8 Acknowledgements

    An acknowledgement is a formal signal from the client application to a message queue that it has successfully processed a message. [...]

So acknowledgement confirms processing, not receipt. The broker will hold on to the message until it's been acknowledged, so that it can redeliver it. But it is free to deliver more messages to consumers even if the preceding messages have not yet been acknowledged. The consumers will not be blocked.
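The two guarantees described above can be modeled in a few lines of plain Python. This is a toy in-memory model (not a RabbitMQ client): delivery is not blocked by another consumer's outstanding acks, and a closing channel requeues its unacknowledged messages.

```python
# Toy model of the semantics above: delivery proceeds regardless of other
# consumers' unacked messages, and a closing channel requeues its unacked ones.
from collections import deque

class ToyQueue:
    def __init__(self, messages):
        self.pending = deque(messages)
        self.unacked = {}  # consumer -> delivered-but-unacknowledged messages

    def deliver(self, consumer):
        # Delivery happens even while other consumers hold unacked messages.
        msg = self.pending.popleft()
        self.unacked.setdefault(consumer, []).append(msg)
        return msg

    def ack(self, consumer, msg):
        # Acknowledgement confirms processing; the broker may now drop the message.
        self.unacked[consumer].remove(msg)

    def close_channel(self, consumer):
        # A closing channel returns its unacked messages to the queue.
        for msg in self.unacked.pop(consumer, []):
            self.pending.append(msg)

q = ToyQueue(["m1", "m2", "m3"])
q.deliver("alice")        # alice holds m1, unacked
q.deliver("bob")          # bob still gets m2 -- alice's pending ack doesn't block him
q.ack("bob", "m2")
q.close_channel("alice")  # alice crashes: m1 goes back for redelivery
print(list(q.pending))    # -> ['m3', 'm1']
```

In real RabbitMQ you would additionally bound each worker's in-flight messages with basic.qos (prefetch count) to get fair load balancing, but the no-blocking and requeue-on-close behavior is as sketched.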
RabbitMQ
17,841,843
11
I am using Python 3 and I want to use RabbitMQ. I have already tried Pika and txAMQP, but they do not support Python 3. Does anybody have an idea how I can use RabbitMQ from Python 3?
Check this page: https://github.com/hollobon/pika-python3

Maybe it can help you.
RabbitMQ
15,655,189
11
I am able to create a fanout exchange using the Publish/Subscribe RabbitMQ Java tutorial, and any connected consumer will receive a copy of a message. Instead of declaring an exchange and binding dynamically/programmatically, I would like to create the exchange and the binding prior to connecting any consumers. I have done this through the RabbitMQ Management Console. For some reason, however, my consumers are receiving messages in a round-robin fashion, rather than all receiving copies of the message. What am I missing? Here are some code snippets:

Publisher:

    channel.basicPublish("public", "", null, rowId.getBytes("UTF-8"));

Consumer:

    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume("myqueue", false, consumer);

...And in the RabbitMQ Management Console, I created an exchange "public" of type "fanout", and I set a binding from that exchange to "myqueue". I'd appreciate any help!
It sounds like all of your consumers are subscribing to the same queue. When multiple consumers are subscribing to the same queue, the default behavior of RabbitMQ is to round-robin the messages between all the subscribed consumers. See "Round-robin dispatching" in the RabbitMQ Tutorial #2: Work Queues. The fanout exchange is for ensuring that each queue bound to it gets a copy of the message, not each consumer. If you want each consumer to get a copy of the message, typically you would have each consumer create their own queue and then bind to the exchange. I'm not sure why you're trying to avoid programmatically creating/binding a queue, but if you know ahead of time the number of subscribers and create a queue for each one, you can get the same effect.
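The distinction can be sketched in plain Python: a fanout exchange copies each published message into every bound queue, so each consumer only sees every message if it has its own queue. This is a toy model of the broker, not a RabbitMQ client; the queue names are illustrative.

```python
# Toy model of a fanout exchange: publishing copies the message into every
# bound queue. One queue shared by all consumers -> round-robin; one queue
# per consumer -> every consumer gets a copy.
class FanoutExchange:
    def __init__(self):
        self.queues = {}  # queue name -> list of messages

    def bind(self, queue_name):
        self.queues[queue_name] = []

    def publish(self, message):
        # Every bound queue receives its own copy of the message.
        for q in self.queues.values():
            q.append(message)

ex = FanoutExchange()
# One queue per consumer, both bound to the same fanout exchange.
ex.bind("consumer-a-queue")
ex.bind("consumer-b-queue")
ex.publish("row-42")
print(ex.queues["consumer-a-queue"], ex.queues["consumer-b-queue"])
# -> ['row-42'] ['row-42']
```

With the setup in the question, both consumers instead call basicConsume on the single "myqueue", so they are splitting that one queue's copies between them, which is exactly the round-robin behavior observed.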
RabbitMQ
15,342,340
11
I want to upload images to an S3 server, but before uploading I want to generate thumbnails of 3 different sizes, and I want it to be done out of the request/response cycle, hence I am using Celery. I have read the docs; here is what I have understood. Please correct me if I am wrong. Celery helps you manage your task queues outside the request/response cycle. Then there is something called carrot/kombu; it's a Django middleware that packages tasks that get created via Celery. Then the third layer, PyAMQP, facilitates the communication of carrot to a broker, e.g. RabbitMQ, Amazon SQS, IronMQ etc. The broker sits on a different server and does stuff for you. Now my understanding is: if multiple users upload images at the same time, Celery will queue the resizing, and the resizing will actually happen at the IronMQ server, since it offers a cool add-on on Heroku. Now the doubts: But after the image is resized, will IronMQ push it to the S3 server, or will it notify once the process is completed? I am not clear about it. What is the difference between Celery and kombu/carrot? Could you explain vividly?
IronMQ does not process your tasks for you; it simply serves as the backend for Celery to keep track of what jobs need to be performed.

So, here's what happens. Assume you have two servers, your web server and your Celery server. Your web server is responsible for handling requests; your Celery server creates the thumbnails and uploads them to S3. Here's what a typical request looks like:

1. Your user uploads the image to your web server.
2. You store that image somewhere--I'd recommend putting it on S3 right then, personally, but you could also store it in, for example, IronCache, base64-encoded. The point is to put it somewhere your Celery server can access it.
3. You queue up a job on Celery, passing the location of the image to your Celery server.
4. Your Celery server downloads the image, generates your thumbnails, and uploads them to S3. It then stores the S3 URLs in the job results.
5. Your web server waits until the job finishes, then has access to the results. Alternatively, you could have your Celery server store the results in the database itself.

The point is that the Celery server does the heavy lifting (generating the thumbnails) and does not hold up the request loop while it does.

I wrote an example for using IronMQ on Heroku. You can see it here: http://iron-celery-demo.herokuapp.com. You can see the source for the example on Github and read the tutorial, which explains pretty thoroughly and step-by-step how to deploy Celery on Heroku.

To clear up the AMQP stuff:

- IronMQ is a cloud-based message queue service developed by Iron.io.
- AMQP is an open messaging specification.
- RabbitMQ is the most popular implementation (that I know of) of the AMQP specification.
- PyAMQP is a Python library that lets Python clients communicate with any implementation of AMQP, including RabbitMQ.

One of the biggest differences between IronMQ and RabbitMQ/AMQP is that IronMQ is hosted and managed, so you don't have to host the server yourself and worry about uptime.
The spec offers a bunch more in terms of differentiation, and there are underlying differences, but Celery abstracts most of those away. Because you're using Celery, the only difference you're liable to notice is that IronMQ is hosted, so you don't have to stand up and manage your own server. Full disclosure: I am employed by Iron.io, the company behind IronMQ.
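The five-step flow above can be condensed into a sketch. This is an illustrative outline, not the demo's code: the function names are made up, store/load are injected stand-ins for S3 calls, enqueue stands in for Celery's delay, and resize stands in for an imaging library.

```python
# Illustrative sketch of the request flow: the web tier stores the original
# and enqueues a job; the worker tier loads it, resizes, and records URLs.
# store/load/enqueue/resize are injected stand-ins, not real S3/Celery/PIL calls.
THUMB_SIZES = [(64, 64), (128, 128), (256, 256)]

def handle_upload(image_bytes, store, enqueue):
    # Steps 1-3: receive the upload, put the original somewhere the
    # worker can reach it, then queue the job by location.
    original_url = store("original.jpg", image_bytes)
    return enqueue(generate_thumbnails, original_url)

def generate_thumbnails(original_url, load, store, resize):
    # Steps 4-5: the worker fetches the original, makes each thumbnail,
    # uploads it, and returns the URLs as the job result.
    image = load(original_url)
    urls = []
    for w, h in THUMB_SIZES:
        thumb = resize(image, w, h)
        urls.append(store("thumb_%dx%d.jpg" % (w, h), thumb))
    return urls
```

The web tier never touches the image again after step 3; it only reads the URL list back out of the job result (or the database) once the worker finishes.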
RabbitMQ
15,121,519
11