Dataset schema (one record per row below; fields separated by "|"):
status            stringclasses (1 value)
repo_name         stringlengths 9–24
repo_url          stringlengths 28–43
issue_id          int64 (1–104k)
updated_files     stringlengths 8–1.76k
title             stringlengths 4–369
body              stringlengths 0–254k (nullable)
issue_url         stringlengths 37–56
pull_url          stringlengths 37–54
before_fix_sha    stringlengths 40
after_fix_sha     stringlengths 40
report_datetime   timestamp[ns, tz=UTC]
language          stringclasses (5 values)
commit_datetime   timestamp[us, tz=UTC]
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,252 |
["lib/core/dispatchRequest.js", "test/specs/headers.spec.js"]
|
Strange behavior when header name matches HTTP request method
|
#### Summary
Axios appears to treat header names that correspond to lowercase HTTP methods incorrectly.
If the header name matches the current HTTP method, the header name is transformed to `0`:
```js
axios.post('http://localhost:8888/', null, {headers: {'post': 'test'}});
```
```
POST / HTTP/1.1
0: test
Accept: application/json, text/plain, */*
Get: test
User-Agent: axios/0.17.1
Host: localhost:8888
Connection: close
Content-Length: 0
```
If the header name matches a different HTTP method, it is removed:
```js
axios.post('http://localhost:8888/', null, {headers: {'get': 'test'}});
```
```
POST / HTTP/1.1
Accept: application/json, text/plain, */*
Content-Type: application/x-www-form-urlencoded
User-Agent: axios/0.17.1
Host: localhost:8888
Connection: close
Content-Length: 0
```
This only seems to happen when the header name is lowercase; a header named `Get` is handled correctly:
```
axios.post('http://localhost:8888/', null, {headers: {'Get': 'test'}});
```
```
POST / HTTP/1.1
Accept: application/json, text/plain, */*
Content-Type: application/x-www-form-urlencoded
Get: test
User-Agent: axios/0.17.1
Host: localhost:8888
Connection: close
Content-Length: 0
```
I didn't see anything in the docs about special meaning for these header names when passed directly. `axios.defaults.headers` has keys that correspond to HTTP methods, but it doesn't seem to make sense to interpret the headers set directly in the same way.
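For what it's worth, here is a minimal standalone sketch (my own approximation of the merge behaviour, not the axios source) that reproduces both symptoms; `mergeLike`/`forEachLike` are hypothetical stand-ins for the real utilities:
```js
// Sketch only: per-method default headers are merged with the user headers and
// the method-named keys are then deleted. A forEach that wraps non-objects in
// an array turns the string 'test' into { '0': 'test' }, and headers named
// after other HTTP methods get deleted outright.
function forEachLike(obj, fn) {
  if (obj === null || typeof obj === 'undefined') return;
  if (typeof obj !== 'object') obj = [obj];          // 'test' -> ['test']
  if (Array.isArray(obj)) {
    obj.forEach((val, i) => fn(val, i));
  } else {
    Object.keys(obj).forEach((key) => fn(obj[key], key));
  }
}

function mergeLike(...sources) {
  const result = {};
  sources.forEach((src) => forEachLike(src, (val, key) => { result[key] = val; }));
  return result;
}

const userHeaders = { post: 'test' };  // headers passed to axios.post
const method = 'post';

const headers = mergeLike({}, userHeaders[method], userHeaders); // { '0': 'test', post: 'test' }
['delete', 'get', 'head', 'post', 'put', 'patch', 'common']
  .forEach((m) => { delete headers[m]; });

console.log(headers); // { '0': 'test' } -- the stray "0" header from the report
```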
#### Context
- axios version: 0.17.1
- Environment: node v8.9.1
|
https://github.com/axios/axios/issues/1252
|
https://github.com/axios/axios/pull/1258
|
1cdf9e4039ede6dd5e033112f3ff6bbaffb66970
|
920510b3a6fecdeb2ba2eb472b0de77ec3cbdd06
| 2017-12-20T17:26:26Z |
javascript
| 2020-05-22T19:26:10Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,212 |
["lib/helpers/buildURL.js", "test/specs/helpers/buildURL.spec.js"]
|
Bug: Unescaping reserved character @
|
Summary
----
According to the [spec](https://tools.ietf.org/html/rfc3986#section-2.2), `@` is a reserved character and thus should be encoded. But axios un-encodes this: https://github.com/axios/axios/blob/master/lib/helpers/buildURL.js#L7
```js
function encode(val) {
return encodeURIComponent(val).
replace(/%40/gi, '@').
replace(/%3A/gi, ':').
replace(/%24/g, '$').
replace(/%2C/gi, ',').
replace(/%20/g, '+').
replace(/%5B/gi, '[').
replace(/%5D/gi, ']');
}
```
This seems like a bug. What's the reason?
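To illustrate the effect (my own example): with the `replace` calls above, values containing reserved characters come out partially un-encoded compared to plain `encodeURIComponent`:
```js
// encode() as quoted above, reproduced here so the example runs standalone.
function encode(val) {
  return encodeURIComponent(val).
    replace(/%40/gi, '@').
    replace(/%3A/gi, ':').
    replace(/%24/g, '$').
    replace(/%2C/gi, ',').
    replace(/%20/g, '+').
    replace(/%5B/gi, '[').
    replace(/%5D/gi, ']');
}

console.log(encode('user@example.com&x=1'));
// 'user@example.com%26x%3D1'   -- '@' is restored while '&' and '=' stay escaped

console.log(encodeURIComponent('user@example.com&x=1'));
// 'user%40example.com%26x%3D1' -- '@' stays percent-encoded
```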
Context
----
- axios version: *e.g.: v0.17.0*
- Environment: *e.g.: node v8.4.0, windows 7 x64*
|
https://github.com/axios/axios/issues/1212
|
https://github.com/axios/axios/pull/1671
|
5effc0827e2134744d27529cb36970994768263b
|
8a8c534a609cefb10824dec2f6a4b3ca1aa99171
| 2017-12-01T16:38:34Z |
javascript
| 2020-05-27T12:37:39Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,158 |
["lib/adapters/http.js", "test/unit/adapters/http.js"]
|
Request method changed on redirect
|
Hi,
I'm using axios v0.17.0 with HEAD method like this:
`axios.head('http://example.com'); `
(let's suppose http://example.com is set to redirect 301 to http://www.example.com)
For some reason the request method changes to GET on the first redirect instead of keeping HEAD all the way down. The logs read like this:
```
example.com:80 10.0.0.1 - - "HEAD / HTTP/1.1" 301 237 "-" "axios/0.17.0"
example.com:80 10.0.0.1 - - "GET / HTTP/1.1" 200 246 "-" "axios/0.17.0"
```
instead of this:
```
example.com:80 10.0.0.1 - - "HEAD / HTTP/1.1" 301 237 "-" "axios/0.17.0"
example.com:80 10.0.0.1 - - "HEAD / HTTP/1.1" 200 246 "-" "axios/0.17.0"
```
When I run the same test in Firefox, it keeps the original request method (HEAD) all the way down.
Any idea why this is?
Thanks,
Milos
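PS: a possible interim workaround (a hedged sketch; it assumes the redirect target is exposed in the `Location` header): disable automatic redirect following and reissue the HEAD request manually:
```js
const axios = require('axios');

// Accept the 301/302 instead of following it, then repeat the HEAD request
// yourself so the method is preserved.
axios.head('http://example.com', {
  maxRedirects: 0,
  validateStatus: (status) => status < 400,   // don't reject on 3xx
})
  .then((res) => {
    if (res.status >= 300 && res.headers.location) {
      return axios.head(res.headers.location);
    }
    return res;
  })
  .then((res) => console.log(res.status, res.headers));
```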
|
https://github.com/axios/axios/issues/1158
|
https://github.com/axios/axios/pull/1758
|
9005a54a8b42be41ca49a31dcfda915d1a91c388
|
21ae22dbd3ae3d3a55d9efd4eead3dd7fb6d8e6e
| 2017-11-02T14:01:29Z |
javascript
| 2018-08-27T15:26:38Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,121 |
["lib/core/Axios.js", "lib/defaults.js"]
|
Default method for an instance always overwritten by get
|
In [./lib/core/Axios.js](https://github.com/axios/axios/blob/master/lib/core/Axios.js#L35)
```js
config = utils.merge(defaults, this.defaults, { method: 'get' }, config);
```
The method is always overwritten with the ` { method: 'get' }`, so you are forced to pass the method in each request, like the following:
```js
const myAlwaysPostAPI = axios.create({
  baseURL: 'http://localhost',
  url: '/myResource',
});
myAlwaysPostAPI({
method: 'post', //always forced to pass this in each API call
data : {key: 'value'}
});
```
this also affects
```
myAPI.defaults.method = 'post'; //does NOT work
```
### Expected Behavior
```js
const myAlwaysPostAPI = axios.create({
  baseURL: 'http://localhost',
url : '/myResource',
method: 'post'
});
myAlwaysPostAPI({
data: {key: 'value'}
});
```
I think the solution is to move the default method to the defaults file; I will follow this issue with a PR for that.
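To make the intended precedence concrete, a small standalone sketch (hedged; `resolveMethod` is an illustrative helper, not axios code):
```js
// Intended precedence: request config > instance defaults > library default 'get'.
function resolveMethod(requestConfig, instanceDefaults) {
  return (requestConfig.method || instanceDefaults.method || 'get').toLowerCase();
}

console.log(resolveMethod({}, { method: 'post' }));                // 'post'
console.log(resolveMethod({ method: 'PUT' }, { method: 'post' })); // 'put'
console.log(resolveMethod({}, {}));                                // 'get'
```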
|
https://github.com/axios/axios/issues/1121
|
https://github.com/axios/axios/pull/1122
|
d59c70fdfd35106130e9f783d0dbdcddd145b58f
|
a105872c1ee2ce3f71ff84a07d9fc27e161a37b0
| 2017-10-10T19:47:39Z |
javascript
| 2018-02-20T06:39:06Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,100 |
["ECOSYSTEM.md"]
|
redux-saga-requests - Axios and Redux Saga
|
Hi, I just published new library [redux-saga-requests](https://github.com/klis87/redux-saga-requests), which simplifies AJAX requests. If you find it useful, we could add it to resources > ecosystem in axios docs.
|
https://github.com/axios/axios/issues/1100
|
https://github.com/axios/axios/pull/1279
|
6e605016f03c59b5e0c9d2855deb3c5e6ec5bbfc
|
138108ee56bd689305ae505a66b48d5e9c8aa494
| 2017-09-24T20:30:24Z |
javascript
| 2018-01-11T05:09:16Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,077 |
["lib/core/Axios.js", "lib/core/dispatchRequest.js", "lib/core/mergeConfig.js", "lib/utils.js", "test/specs/core/mergeConfig.spec.js", "test/specs/headers.spec.js", "test/specs/requests.spec.js", "test/specs/utils/deepMerge.spec.js", "test/specs/utils/isX.spec.js", "test/specs/utils/merge.spec.js"]
|
utils.merge not considering/handling the prototype chain
|
#### Summary
Trying to use Axios on Node with your own http(s)Agent set via the global default triggers the error: `TypeError: self.agent.addRequest is not a function`
For minimal example see: https://runkit.com/rupesh/59b1bcc1efc2f800128b7d54
It seems to be related to utils.merge not considering/handling the prototype chain:
<img width="814" alt="developer_tools_-_node_js" src="https://user-images.githubusercontent.com/3313870/30188235-58af3bd8-9425-11e7-9f99-4bc01ff4bf96.png">
Stack trace:
```
TypeError: self.agent.addRequest is not a function
at new ClientRequest in core _http_client.js — line 160
at http.Agent in core http.js — line 31
at RedirectableRequest._performRequest in follow-redirects/index.js — line 73
at Writable.RedirectableRequest in follow-redirects/index.js — line 57
at wrappedProtocol.request in follow-redirects/index.js — line 229
at dispatchHttpRequest in axios/lib/adapters/http.js — line 131
at httpAdapter in axios/lib/adapters/http.js — line 18
```
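A minimal standalone illustration of the suspected cause, using `Object.assign` as a stand-in for a merge that only copies own enumerable properties:
```js
const https = require('https');

const agent = new https.Agent({ keepAlive: true });

// A plain property copy loses everything inherited from Agent.prototype,
// including addRequest -- which is exactly what the http core module calls.
const copied = Object.assign({}, agent);

console.log(typeof agent.addRequest);   // 'function' (inherited from the prototype)
console.log(typeof copied.addRequest);  // 'undefined' -> "addRequest is not a function"
```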
#### Context
- axios version: 0.16.2
- Environment: node v4, 6, 7 and 8
|
https://github.com/axios/axios/issues/1077
|
https://github.com/axios/axios/pull/2844
|
487941663b791a4a5273d456aab24c6ddd10eb0e
|
0d69a79c81a475f1cca6d83d824eed1e5b0b045d
| 2017-09-07T22:37:44Z |
javascript
| 2020-06-08T18:52:45Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 1,004 |
["lib/adapters/http.js"]
|
Axios waits for timeout (when configured) even if nock throws an error
|
#### Summary
When using nock and axios on the server side (node environment) to mock requests and responses, I ran into an interesting issue when I have axios configured with a timeout value.
If nock is not configured for a particular request, it throws an error: `Error: Nock: No match for request`.
If axios does NOT have a timeout value set, no thread is left open and hanging and the test suite closes as normal.
If a timeout value IS set, axios waits the entire duration of the timeout before closing. What's weird is that the promise axios returns is rejected by nock (which is how I found that error above), so I have no idea why axios stays open for the full timeout.
The issue is that test runners (specifically Jest) will remain open until all handles are done. With axios hanging open here, our test times ballooned without any visible reason, and it took me several hours to trace it to this change.
I have a simple workaround: in the test environment, don't set a timeout value (or make sure we nock all requests). But it feels like a bug that the axios promise rejects and yet the request keeps running.
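A reduced reproduction sketch of what I mean (hedged; it relies on the jest/nock/axios versions listed below, and `example.test` is a placeholder host):
```js
const nock = require('nock');
const axios = require('axios');

nock.disableNetConnect();   // any request nock doesn't know about now fails

test('rejects quickly even with a timeout configured', async () => {
  // The promise rejects almost immediately with nock's "no match" / disallowed
  // net connect error, yet with a timeout set the process reportedly stays
  // open until the full 30s timer fires.
  await expect(
    axios.get('http://example.test/unmocked', { timeout: 30000 })
  ).rejects.toThrow();
});
```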
#### Context
- axios version: *v0.16.2*
- nock version: *v9.0.13*
- jest version: *v20.0.4*
- Environment: *node v6.10.2*
|
https://github.com/axios/axios/issues/1004
|
https://github.com/axios/axios/pull/1040
|
b14cdd842561b583b30b5bca5c209536c5cb8028
|
4fbf08467459845b0551dd60e3fd2086b1d19c4a
| 2017-07-19T16:35:08Z |
javascript
| 2018-03-08T17:35:58Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 975 |
["README.md", "lib/adapters/http.js"]
|
Support unix domain socket
|
In `adapters/http.js`, the request uses the `https` or `http` module with `options`.
```javascript
var options = {
hostname: parsed.hostname,
port: parsed.port,
path: buildURL(parsed.path, config.params, config.paramsSerializer).replace(/^\?/, ''),
method: config.method,
headers: headers,
agent: agent,
auth: auth
};
```
We can add a key to support unix domain socket
```javascript
var options = {
hostname: parsed.hostname,
port: parsed.port,
path: buildURL(parsed.path, config.params, config.paramsSerializer).replace(/^\?/, ''),
socketPath: config.socketPath, // just for test
method: config.method,
headers: headers,
agent: agent,
auth: auth
};
```
using axios like
```javascript
axios({
socketPath: '/var/run/docker.sock',
url: '/images/json'
})
.then(resp => console.log(resp.data))
```
I don't know if it fits your design concept, so I'm raising this issue first.
|
https://github.com/axios/axios/issues/975
|
https://github.com/axios/axios/pull/1070
|
40b829994c2e407109a38a4cf82703261aa3c22c
|
ccc78899bb6f595e3c44ec7ad6af610455859d78
| 2017-06-26T06:21:50Z |
javascript
| 2018-02-17T00:05:48Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 949 |
["lib/core/Axios.js", "lib/core/dispatchRequest.js", "test/specs/interceptors.spec.js"]
|
Set baseURL in interceptors is not working.
|
#### Summary
Setting baseURL in an interceptor is not working.
```
const service = axios.create({
baseURL: 'http://localhost/'
});
service.interceptors.request.use(config => {
config.baseURL = 'dd'
console.log(config.url) // output : http://localhost/././../
return config;
}, error => {
// Do something with request error
console.error(error); // for debug
Promise.reject(error);
})
```
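A standalone sketch of why this happens, assuming (my guess) that baseURL is combined with the URL before request interceptors run; `combineURLs` below is my own reconstruction of the helper:
```js
// Not the axios source -- just the ordering problem in isolation.
function combineURLs(baseURL, relativeURL) {
  return relativeURL
    ? baseURL.replace(/\/+$/, '') + '/' + relativeURL.replace(/^\/+/, '')
    : baseURL;
}

const config = { baseURL: 'http://localhost/', url: '/users' };

// Step 1 (before interceptors, by assumption): the URL is already absolute.
config.url = combineURLs(config.baseURL, config.url); // 'http://localhost/users'

// Step 2 (request interceptor): changing baseURL now has no effect.
config.baseURL = 'dd';
console.log(config.url); // still 'http://localhost/users'
```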
#### Context
- axios version: *e.g.: v0.16.2*
- Environment: *e.g.: node vv8.0.0, chrome 59.0.3071.86, macOS 10.12*
|
https://github.com/axios/axios/issues/949
|
https://github.com/axios/axios/pull/950
|
6508280bbfa83a731a33aa99394f7f6cdeb0ea0b
|
2b8562694ec4322392cb0cf0d27fe69bd290fcb2
| 2017-06-09T07:37:56Z |
javascript
| 2017-08-12T12:15:27Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 846 |
["lib/adapters/http.js", "lib/utils.js", "package.json"]
|
Axios 0.16.1 adds buffer.js to the browser bundle, tripling its size
|
#### Summary
It looks like the latest version includes `buffer.js` in the browser bundle even though it is not actually used.
This makes axios bundle size **3 times bigger**, adding 22kB to the minified size (0.16.0 is 11.9 KB while 0.16.1 is 33.9kB).
#### Context
- axios version: v0.16.1
|
https://github.com/axios/axios/issues/846
|
https://github.com/axios/axios/pull/887
|
1beb245f3a9cdc6da333c054ba5776a2697911dd
|
d1278dfe353d772c689a7884913a46f122538cd2
| 2017-04-18T09:46:43Z |
javascript
| 2017-05-31T02:31:42Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 807 |
[".github/workflows/stale.yml"]
|
CORS doesn't work on IE, header isn't sent
|
My custom header is set correctly on Chrome and Safari, but on IE (9) my custom header isn't sent to the server:
no 'Content-Type' is sent to the server,
and 'Accept' is still '*/*'.
The XHR fails with a 415 error when the header isn't set correctly;
there is no problem in other browsers.
`import "babel-polyfill";
import axios from 'axios/dist/axios.min' //"axios": "^0.15.3",
let baseUrl = 'http://192.168.51.********/';
const axiosInstance = axios.create({
baseURL: baseUrl,
timeout: 30000,
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
})
axiosInstance.interceptors.request.use(function (config) {
let temp = config.data;
config.data = {};
config.data.data = temp;
config.data.token = window.sessionStorage.getItem("token") || "";
return config;
})
export default axiosInstance;`
|
https://github.com/axios/axios/issues/807
|
https://github.com/axios/axios/pull/4980
|
60e85533b3c38687b862d5a77bdf9614d1e3f6d0
|
659eeaf67cc0d54e86d0e38b90bd6f8174f56fca
| 2017-03-31T03:37:11Z |
javascript
| 2022-09-29T06:27:44Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 775 |
[".github/workflows/stale.yml"]
|
How to reproduce the algorithm ON DUPLICATE KEY UPDATE?
|
I have a table with users options:
```
------------------------------------------------------
| id | option_id | option_value |
------------------------------------------------------
```
I want to update a row in a table with one command or create a new one if it does not already exist.
```mysql
INSERT INTO
options
SET
option_id = 'enabled',
value = 1
ON DUPLICATE KEY UPDATE
value = 1
```
How to achieve this result using Lucid or Query Builder?
|
https://github.com/axios/axios/issues/775
|
https://github.com/axios/axios/pull/4980
|
60e85533b3c38687b862d5a77bdf9614d1e3f6d0
|
659eeaf67cc0d54e86d0e38b90bd6f8174f56fca
| 2017-03-19T15:32:30Z |
javascript
| 2022-09-29T06:27:44Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 768 |
["lib/adapters/http.js"]
|
Protocol "https:" not supported. Expected "http:" Error
|
```
const agent = new http.Agent({family: 4});
axios.get("http://mywebsite.com", {
httpAgent: agent
})
.then((response) => {
console.log(response);
})
```
With the above code, I get the error below:
```
http_client.js:55
    throw new Error('Protocol "' + protocol + '" not supported. ' +
    ^
Error: Protocol "https:" not supported. Expected "http:"
    at new ClientRequest (_http_client.js:55:11)
```
I am using axios 0.15.3 and Node 6.10.0.
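A hedged guess plus workaround sketch: if the request ends up on https (for example via a redirect), the `http.Agent` passed as `httpAgent` cannot be used for it; supplying matching agents for both protocols avoids the mismatch:
```js
const http = require('http');
const https = require('https');
const axios = require('axios');

axios.get('http://mywebsite.com', {
  httpAgent: new http.Agent({ family: 4 }),    // used for http:// requests
  httpsAgent: new https.Agent({ family: 4 }),  // used if the request ends up on https://
})
  .then((response) => console.log(response.status))
  .catch((err) => console.error(err.message));
```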
|
https://github.com/axios/axios/issues/768
|
https://github.com/axios/axios/pull/1904
|
dc4bc49673943e35280e5df831f5c3d0347a9393
|
03e6f4bf4c1eced613cf60d59ef50b0e18b31907
| 2017-03-17T14:02:26Z |
javascript
| 2019-12-25T20:55:36Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 744 |
[".github/workflows/ci.yml"]
|
Axios not parsing JSON but instead returning all the markup on page using React & Redux
|
I have a local .json file that I am trying to fetch but instead all the markup for my page is being returned. I've scoured online looking for a solution, but coming up short.
flickrActions.js:
```
export const fetchFlickr = () => {
return function(dispatch, getState) {
axios.get("/data/output.json", {
headers: { 'Content-Type': 'application/json' }
})
.then((response) => {
dispatch({type: "FETCH_TWEETS_FULFILLED", payload: response.data})
})
.catch((err) => {
dispatch({type: "FETCH_TWEETS_REJECTED", payload: err})
})
}
}
```
output.json:
```
[
{
"code":"BAcyDyQwcXX",
"id":"1161022966406956503",
"display_src":"https://scontent.cdninstagram.com/hphotos-xap1/t51.2885-15/e35/12552326_495932673919321_1443393332_n.jpg"
},
{
"code":"BAcyDyQwcXX",
"id":"1161022966406956503",
"display_src":"https://scontent.cdninstagram.com/hphotos-xap1/t51.2885-15/e35/12552326_495932673919321_1443393332_n.jpg"
},
{
"code":"BAcyDyQwcXX",
"id":"1161022966406956503",
"display_src":"https://scontent.cdninstagram.com/hphotos-xap1/t51.2885-15/e35/12552326_495932673919321_1443393332_n.jpg"
}
]
```
What is being returned:
<img width="901" alt="screen shot 2017-03-08 at 7 46 01 am" src="https://cloud.githubusercontent.com/assets/1817084/23677524/97c55a46-03d4-11e7-9283-6ab76da33782.png">
|
https://github.com/axios/axios/issues/744
|
https://github.com/axios/axios/pull/4796
|
6ac313a01ee3c3daccd6d7f0f9b5005fb714c811
|
de7f4c6c393acab30e27b5b35f3846518aabc28d
| 2017-03-07T20:53:31Z |
javascript
| 2022-06-18T09:11:11Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 728 |
["lib/utils.js"]
|
isStandardBrowserEnv not working properly
|
Hello we are using axios with react native and react-native-signalr lib.
The signalr library does something with `document`, due to which axios's `isStandardBrowserEnv` does not properly determine where it's running; it then tries to read cookies in the React Native app and fails.
I propose we detect React Native from `navigator.product`, as it was introduced in this commit for React Native:
https://github.com/facebook/react-native/commit/3c65e62183ce05893be0822da217cb803b121c61
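A rough sketch of that check (my wording, hedged, not a finished patch):
```js
// Treat React Native as a non-standard browser environment so axios skips
// cookie/XSRF handling there.
function isStandardBrowserEnv() {
  if (typeof navigator !== 'undefined' && navigator.product === 'ReactNative') {
    return false;
  }
  return typeof window !== 'undefined' && typeof document !== 'undefined';
}
```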
|
https://github.com/axios/axios/issues/728
|
https://github.com/axios/axios/pull/731
|
161c616211bf5a0b01219ae6aa2b7941192ef2c1
|
2f98d8fcdba89ebd4cc0b098ee2f1af4cba69d8d
| 2017-02-24T09:51:38Z |
javascript
| 2017-03-02T06:04:54Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 723 |
["lib/core/Axios.js"]
|
cannot send post method in "request" function from Axios instance
|
When I try to send POST method using this code:
```
const axios = require('axios')
let config = {url: 'http://{{hostname}}', method: 'post'}
let instance = axios.create(config)
instance.request()
```
It sends a GET request instead of POST.
I think this line https://github.com/mzabriskie/axios/blob/master/lib/core/Axios.js#L37 should be: `config = utils.merge(defaults, this.defaults, config);`
|
https://github.com/axios/axios/issues/723
|
https://github.com/axios/axios/pull/1342
|
b6b0865352743c4c61d7e80d9708f98fa876a253
|
23ba29602cf941d943772cbccee1fd260f5e0d02
| 2017-02-23T06:38:20Z |
javascript
| 2018-02-17T02:58:48Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 718 |
["index.d.ts", "test/typescript/axios.ts"]
|
Generic get for TypeScript
|
Hello,
As axios automatically converts JSON responses, could we get something like this:
```ts
get<T>(url: string, config?: AxiosRequestConfig): AxiosPromise<T>;
export interface AxiosPromise<T> extends Promise<AxiosResponse<T>> {
}
export interface AxiosResponse<T> {
data: T;
status: number;
statusText: string;
headers: any;
config: AxiosRequestConfig;
}
```
|
https://github.com/axios/axios/issues/718
|
https://github.com/axios/axios/pull/1061
|
638804aa2c16e1dfaa5e96e68368c0981048c4c4
|
7133141cb9472f88220bdaf6e6ccef898786298d
| 2017-02-21T20:43:37Z |
javascript
| 2017-10-20T14:16:23Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 717 |
[".github/workflows/stale.yml"]
|
HTML response is not considered an error?
|
Without setting an `Accept` header, responses in HTML with content type `text/html` are not considered an error by axios by default?
It seems to wrap the response, despite being 200 - OK, into `{data: <html>blabla</html>, status: 200}`. How can I enforce JSON responses and have everything else considered an error?
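One way to approximate this today, as a hedged sketch using only documented options (the content-type check is manual, not something axios does for you):
```js
const axios = require('axios');

const api = axios.create({ headers: { Accept: 'application/json' } });

api.get('/endpoint').then((res) => {
  const type = res.headers['content-type'] || '';
  if (!type.includes('application/json')) {
    // A 200 text/html page (e.g. a login or error page) is treated as a failure.
    throw new Error('Expected JSON but received ' + type);
  }
  return res.data;
});
```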
|
https://github.com/axios/axios/issues/717
|
https://github.com/axios/axios/pull/4980
|
60e85533b3c38687b862d5a77bdf9614d1e3f6d0
|
659eeaf67cc0d54e86d0e38b90bd6f8174f56fca
| 2017-02-21T20:26:28Z |
javascript
| 2022-09-29T06:27:44Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 662 |
[".github/workflows/stale.yml"]
|
HTTPS request works using the https module but not in Axios
|
This request works with the https library:
```
const options = {
hostname: 'baseurl.com',
path: '/accounts',
method: 'GET',
ca: fs.readFileSync(`${path}CA.pem`),
cert: fs.readFileSync(`${path}CERT.pem`),
key: fs.readFileSync(`${path}KEY.pem`),
auth: 'user:password',
rejectUnauthorized: false
};
const req = https.request(options, (res) => {
res.on('data', (data) => {
process.stdout.write(data);
});
});
req.end();
req.on('error', (e) => {
console.error(e);
});
```
but this seemingly equivalent request does not work using Axios:
```
const instance = axios.create({
baseURL: 'https://baseurl.com',
httpsAgent: new https.Agent({
ca: fs.readFileSync(`${path}CA.pem`),
cert: fs.readFileSync(`${path}CERT.pem`),
key: fs.readFileSync(`${path}KEY.pem`),
auth: 'user:password',
rejectUnauthorized: false
})
});
instance.get('/accounts')
.then(_ => console.log(`response: ${_}`))
.catch(err => console.log(`error: ${err.stack}`));
```
Instead it throws this error:
```
error: Error: write EPROTO
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at WriteWrap.afterWrite (net.js:763:14)
```
I've tried these variations of base url but no luck:
- 'https://baseurl.com:443'
- 'baseurl.com'
- 'baseurl.com:443'
any help much appreciated.
|
https://github.com/axios/axios/issues/662
|
https://github.com/axios/axios/pull/4797
|
de7f4c6c393acab30e27b5b35f3846518aabc28d
|
68723fc38923f99d34a78efb0f68de97890d0bec
| 2017-01-23T11:08:26Z |
javascript
| 2022-06-18T09:13:19Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 658 |
["package-lock.json"]
|
Error when trying to connect to HTTPS site through proxy
|
I get this error when connecting to an HTTPS site through a proxy. It works fine when I connect to that same site using HTTP (the site is available in both HTTP and HTTPS). To test it you can try with any HTTPS proxies (test ones available at https://free-proxy-list.net/, I've tried only the ones with a 'Yes' in the 'Https' column.)
```
(node:11581) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2):
Error: write EPROTO 140378207070016:error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol:../deps/openssl/openssl/ssl/s23_clnt.c:794:
```
The code
```js
async function download() {
const response = await axios('https://api.ipify.org', {
timeout: 5000,
proxy: {
host: '104.154.142.106',
port: 3128
}
})
console.log(response.headers)
}
```
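A possible workaround sketch (hedged; `https-proxy-agent` is a third-party package and its export/constructor shape varies by version): tunnel the HTTPS request through the proxy with an agent and disable axios' own proxy handling:
```js
const axios = require('axios');
const HttpsProxyAgent = require('https-proxy-agent'); // npm package, not part of axios

async function download() {
  const agent = new HttpsProxyAgent('http://104.154.142.106:3128');
  const response = await axios('https://api.ipify.org', {
    timeout: 5000,
    httpsAgent: agent,
    proxy: false,   // let the agent do the CONNECT tunnelling instead of axios
  });
  console.log(response.headers);
}

download().catch(console.error);
```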
|
https://github.com/axios/axios/issues/658
|
https://github.com/axios/axios/pull/5294
|
3a7c363e540e388481346e0c0a3c80e8318dbf5d
|
e1989e91de13f2d6cd6732745ae64dbc41e288de
| 2017-01-18T12:53:02Z |
javascript
| 2022-11-22T17:56:32Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 635 |
["README.md", "lib/adapters/http.js", "test/unit/adapters/http.js"]
|
Is there a way to force not to use proxy?
|
In my case I have an `http_proxy` env variable on my development computer which passes all traffic through my VPN. However, this proxy fails when the hostname is `localhost`, so when I'm testing my application it always fails because my server is set up at localhost. I want a way to force axios to receive a `none` value for the proxy config; currently, if the `proxy` property is `null` or `undefined`, it falls back to the `http_proxy` env variable.
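The shape I'd hope for, sketched with a hypothetical `proxy: false` (hedged; axios may not support this today, which is exactly the request):
```js
const axios = require('axios');

// Requests made through this instance should ignore http_proxy entirely.
const local = axios.create({
  baseURL: 'http://localhost:3000',
  proxy: false,
});

local.get('/health').then((res) => console.log(res.status));
```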
|
https://github.com/axios/axios/issues/635
|
https://github.com/axios/axios/pull/691
|
62db26b58854f53beed0d9513b5cf18615c64a2d
|
07a7b7c84c3ea82ea3f624330be9e0d3f738ac70
| 2017-01-06T06:34:50Z |
javascript
| 2017-08-14T11:38:44Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 615 |
["README.md"]
|
Change the browser support logo links in Readme file
|
Change the logos of different browsers in the Readme section by updating the links of the original repository.
|
https://github.com/axios/axios/issues/615
|
https://github.com/axios/axios/pull/616
|
322be107301c5c725b13e3c0c00108e55655f540
|
253131c31ae1269099efb865eb0469a218e1ab2d
| 2016-12-25T19:56:48Z |
javascript
| 2017-01-08T13:55:01Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 605 |
["package-lock.json"]
|
https request localhost api contradiction for React server rendering
|
Let's say I have an Express API server alongside a web server doing the rendering.
For React server-side rendering, axios needs the full origin, like `localhost:3001/getUser`.
But if the website is served over https and requests the API on the same server via `localhost`, that request is http, which causes an untrusted-request error (https page calling http).
So I changed `axios.post('localhost:3001/getUser')` to `axios.post('/getUser')`, which solved the untrusted-origin problem.
But another error appears: React server-side rendering needs axios to be given the full domain.
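One way to reconcile both constraints, as a hedged sketch (the port comes from the setup above):
```js
const axios = require('axios');

// Server-side rendering needs an absolute origin; the browser can stay
// same-origin (https) with a relative path.
const isServer = typeof window === 'undefined';

const api = axios.create({
  baseURL: isServer ? 'http://localhost:3001' : '/',
});

api.post('/getUser').then((res) => console.log(res.data));
```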
|
https://github.com/axios/axios/issues/605
|
https://github.com/axios/axios/pull/5493
|
366161e5e48f818fa42c906e91b71f7876aadabb
|
a105feb7b5f8abca95a30d476707288268123892
| 2016-12-17T01:51:05Z |
javascript
| 2023-01-26T18:19:48Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 597 |
["lib/utils.js"]
|
IE8 is determined wrong as non standard browser environment.
|
In IE8, the value of `typeof document.createElement` is `object`, not `function`, so `isStandardBrowserEnv` returns `false`.
```
function isStandardBrowserEnv() {
return (
typeof window !== 'undefined' &&
typeof document !== 'undefined' &&
typeof document.createElement === 'function'
);
}
```
Because of this wrong determination, XDomainRequest does not work in IE8. I made a pull request to fix this, but it failed to pass CI.
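One possible fix (a hedged sketch): also accept IE8's host-object `typeof` result:
```js
function isStandardBrowserEnv() {
  return (
    typeof window !== 'undefined' &&
    typeof document !== 'undefined' &&
    (typeof document.createElement === 'function' ||
      // IE8 reports host objects as 'object' instead of 'function'
      typeof document.createElement === 'object')
  );
}
```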
|
https://github.com/axios/axios/issues/597
|
https://github.com/axios/axios/pull/731
|
161c616211bf5a0b01219ae6aa2b7941192ef2c1
|
2f98d8fcdba89ebd4cc0b098ee2f1af4cba69d8d
| 2016-12-15T11:18:10Z |
javascript
| 2017-03-02T06:04:54Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 574 |
["lib/helpers/combineURLs.js", "test/specs/helpers/combineURLs.spec.js"]
|
API endpoints with file extensions
|
Hello! Great project.
To jump right in, it is a hacky experience to create an axios client for a single-endpoint API that also uses a file extension, i.e. `https://coolapi.com/api.php`
All URL requests make their way through `combineURLs.js` and get a `/` appended, regardless of whether a relative URL was attached to the client's request config.
The current hack (it feels like a hack...) to get around this is to add a request interceptor that removes the trailing slash. I'm currently doing this in a project.
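A quick illustration of the behaviour (my own reconstruction of the current helper, so treat it as approximate):
```js
// Approximation of combineURLs as it behaves today: a slash is appended even
// when there is no relative URL to attach.
function combineURLs(baseURL, relativeURL) {
  return baseURL.replace(/\/+$/, '') + '/' + (relativeURL || '').replace(/^\/+/, '');
}

console.log(combineURLs('https://coolapi.com/api.php', ''));
// -> 'https://coolapi.com/api.php/'   (the trailing slash breaks the endpoint)
```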
Happy to PR this issue with a fix to combineURLs - but I want to run it past owners first for validity.
Cheers! ✨
|
https://github.com/axios/axios/issues/574
|
https://github.com/axios/axios/pull/581
|
cfe33d4fd391288158a3d14b9366b17c779b19e3
|
fe7d09bb08fa1c0e414956b7fc760c80459b0a43
| 2016-12-05T03:57:09Z |
javascript
| 2016-12-08T05:23:45Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 537 |
["lib/adapters/xhr.js", "test/specs/requests.spec.js"]
|
Axios 0.15.2 doesn't reject promise if request is cancelled in Chrome
|
I have axios 0.15.2.
If this HTML is run, some requests will be cancelled by Chrome:
```html
<html>
  <head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.15.2/axios.js"></script>
  </head>
  <script>
    var promises = [];
    window.onload = function() {
      for (var i = 0; i < 4; i++) {
        var promise = axios.get('some url which causes ERR_CONNECTION_CLOSED');
        promises.push(promise);
      }
    }
  </script>
</html>
```
(screenshots of the cancelled network requests omitted)
But the promises for the requests that were cancelled are never settled; they neither resolve nor reject.
|
https://github.com/axios/axios/issues/537
|
https://github.com/axios/axios/pull/1399
|
d4dc124f15c8dc69861b9cf92c059dbda46ae565
|
0499a970d7064d15c9361430b40729d9b8e6f6bb
| 2016-11-16T14:01:23Z |
javascript
| 2018-03-07T19:50:16Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 490 |
["lib/adapters/http.js"]
|
better handling of missing protocol in url
|
If the requested url does not have a protocol (http/https) specified, the error given is a bit cryptic. While it is a bit of an edge case, it has actually happened to me on numerous occasions.
Currently, the following is thrown:
```
error checking auth TypeError: Cannot read property 'slice' of null
at dispatchHttpRequest (.../node_modules/axios/lib/adapters/http.js:84:37)
at httpAdapter (.../node_modules/axios/lib/adapters/http.js:19:10)
at dispatchRequest (.../node_modules/axios/lib/core/dispatchRequest.js:52:10)
at process._tickCallback (internal/process/next_tick.js:103:7)
```
The reason I'm opening an issue rather than a pull request is that I see two different ways of handling it and would like @mzabriskie to chime in.
1. Check explicitly for `parsed.protocol` in the httpAdapter and throw an error if it doesn't exist (see the sketch after this list)
2. Default to using `http://` and continue on as usual
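Here is the sketch for option 1 (hedged; the names are illustrative, not the actual adapter code):
```js
const url = require('url');

// Fail fast with a descriptive message instead of the null .slice() TypeError.
function assertProtocol(requestUrl) {
  const parsed = url.parse(requestUrl);
  if (!parsed.protocol) {
    throw new Error('Unsupported URL "' + requestUrl + '": missing http:// or https://');
  }
  return parsed;
}

assertProtocol('http://example.com'); // ok
assertProtocol('example.com');        // throws the descriptive error above
```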
Love axios, long time user, let me know what you think and I can make a PR.
|
https://github.com/axios/axios/issues/490
|
https://github.com/axios/axios/pull/493
|
b21a280df0475c89b8cd0ca7ac698a16eca46ec0
|
b78f3fe79298a000f056ff40bbd1447c2d667cc5
| 2016-10-17T01:50:39Z |
javascript
| 2016-10-18T17:09:51Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 476 |
["README.md", "lib/adapters/http.js", "test/unit/adapters/http.js"]
|
HTTP proxy authentication
|
HTTP proxy support was added in #366, but the proxy configuration only takes into account the proxy host and port. We are running in an environment that requires authenticating to the proxy, and as far as I understood, axios doesn't support that. We are using the standard `http_proxy` and `https_proxy` environment variables with the credentials in the URL, like `http://proxy-user:proxy-password@proxy-host:proxy-port`, and passing that works as expected with e.g. [superagent-proxy](https://github.com/TooTallNate/superagent-proxy).
So I'm requesting to have support for HTTP proxy authentication, preferably even taking the credentials automatically from HTTP proxy environment variables.
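For concreteness, the configuration shape we would like to be able to use (hedged; host and credentials are placeholders):
```js
const axios = require('axios');

axios.get('https://internal.example.com/resource', {
  proxy: {
    host: 'proxy-host',
    port: 8080,
    auth: { username: 'proxy-user', password: 'proxy-password' },
  },
}).then((res) => console.log(res.status));
```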
|
https://github.com/axios/axios/issues/476
|
https://github.com/axios/axios/pull/483
|
b78f3fe79298a000f056ff40bbd1447c2d667cc5
|
df6d3ce6cf10432b7920d8c3ac0efb7254989bc4
| 2016-10-10T15:25:28Z |
javascript
| 2016-10-19T09:02:42Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 460 |
[".github/workflows/ci.yml"]
|
Converting circular structure to JSON
|
I try to send a request with this data (screenshot of the `setting` object omitted):
```
console.log({ setting });
const request = axios.post(SETTINGS_URL, { setting });
```
Error:
defaults.js:37 Uncaught (in promise) TypeError: Converting circular structure to JSON(…)
|
https://github.com/axios/axios/issues/460
|
https://github.com/axios/axios/pull/4798
|
68723fc38923f99d34a78efb0f68de97890d0bec
|
1db715dd3b67c2b6dd7bdaa39bb0aa9d013d9fd2
| 2016-09-29T12:19:36Z |
javascript
| 2022-06-18T09:15:53Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 434 |
["package-lock.json"]
|
Need to bypass proxy
|
I have some endpoints that my proxy server doesn't handle well, and in my NodeJS environment the http_proxy environment variable is set.
Can you honor the no_proxy environmental variable and/or allow a config option to disable proxy usage?
I'm using [email protected] for now.
|
https://github.com/axios/axios/issues/434
|
https://github.com/axios/axios/pull/4506
|
170588f3d78f855450d1ce50968651a54cda7386
|
2396fcd7e9b27853670759ee95d8f64156730159
| 2016-09-02T00:47:37Z |
javascript
| 2022-03-07T07:13:50Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 395 |
["lib/adapters/xhr.js", "test/specs/xsrf.spec.js"]
|
Fails on same origin request running inside a sandboxed iframe
|
When I do run a GET request within an `<iframe sandbox="allow-scripts" />` to an URI on the same origin of the iframe, the request fails with the error:
`DOMException: Failed to read the 'cookie' property from 'Document': The document is sandboxed and lacks the 'allow-same-origin' flag.`
The issue is actually caused by the XHR adapter, while reading the cookies, since you can't access cookies in a sandboxed iframe:
```
// Add xsrf header
// This is only done if running in a standard browser environment.
// Specifically not if we're in a web worker, or react-native.
if (utils.isStandardBrowserEnv()) {
var cookies = require('./../helpers/cookies');
// Add xsrf header
var xsrfValue = config.withCredentials || isURLSameOrigin(config.url) ?
cookies.read(config.xsrfCookieName) :
undefined;
if (xsrfValue) {
requestHeaders[config.xsrfHeaderName] = xsrfValue;
}
}
```
I'm willing to submit a PR to fix it, but I'm actually wondering what's the best approach in your opinion (according to the axios design):
1. Add a try/catch in the `helpers/cookies.js` `read()` function and `return null` on error (see the sketch after this list)
2. Add an option to disable the XSRF header, or implicitly disable it if `config.xsrfHeaderName === null`
3. ... any other suggestion is welcome
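Here is the sketch for option 1 (hedged; the regexp approximates a typical cookie read, not necessarily the exact axios helper):
```js
// Fail soft when document.cookie access throws, as it does in a sandboxed
// iframe without the 'allow-same-origin' flag.
function readCookie(name) {
  try {
    const match = document.cookie.match(
      new RegExp('(^|;\\s*)(' + name + ')=([^;]*)')
    );
    return match ? decodeURIComponent(match[3]) : null;
  } catch (e) {
    return null; // behave as if the cookie simply does not exist
  }
}
```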
Thank you,
Marco
|
https://github.com/axios/axios/issues/395
|
https://github.com/axios/axios/pull/406
|
8abe0d4007dd6b3fae5a1c4e019587b7a7f50930
|
6132d9630d641d253f268c8aa2e128aef94ed44f
| 2016-08-01T11:10:35Z |
javascript
| 2016-08-13T00:02:59Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 382 |
["lib/core/dispatchRequest.js", "lib/utils.js", "test/specs/headers.spec.js", "test/specs/utils/deepMerge.spec.js"]
|
Don't send default header
|
If a header has been set as a default, there does not appear to be any way to skip it on an individual request. Setting `null` or `undefined` doesn't do anything.
|
https://github.com/axios/axios/issues/382
|
https://github.com/axios/axios/pull/1845
|
4b3947aa59aaa3c0a6187ef20d1b9dddb9bbf066
|
487941663b791a4a5273d456aab24c6ddd10eb0e
| 2016-07-19T21:34:53Z |
javascript
| 2020-06-04T18:57:54Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 379 |
[".gitignore", ".npmignore", "Gruntfile.js", "axios.d.ts", "lib/axios.js", "package.json", "test/specs/instance.spec.js", "test/typescript/axios.ts", "typings.json"]
|
Update Typings
|
This is a catch all for the multiple requests to update the type definition. Needs to be done for the next release.
|
https://github.com/axios/axios/issues/379
|
https://github.com/axios/axios/pull/419
|
fa5ce95fdcee5c3a6c0ffac0164f148351ae3081
|
59080e68d983782445eded3a39f426161611e749
| 2016-07-16T17:47:27Z |
javascript
| 2016-08-19T03:42:03Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 358 |
[".github/workflows/ci.yml"]
|
`progress` config option not working
|
I'm attempting to use the `progress` config option as shown below in my code. It appears to mimic exactly what is in the example, but it never actually enters the function.
```
var axios = require('axios')
var fs = require('fs')
var FormData = require('form-data') // assumed: the form-data package, for Node usage
var form = new FormData()
form.append('content', fs.createReadStream('./myfile.mkv'))
axios.post("http://example.com", form, {
progress: (progressEvent) => {
console.log('progress!');
}
})
.then(response => {
console.log('done');
})
.catch(err => {
console.log('failed');
});
```
It outputs `done`, but nothing is outputted on progress. Also, it's worth noting I attempted this with very large files (1GB) and no progress function was ever entered.
|
https://github.com/axios/axios/issues/358
|
https://github.com/axios/axios/pull/4798
|
68723fc38923f99d34a78efb0f68de97890d0bec
|
1db715dd3b67c2b6dd7bdaa39bb0aa9d013d9fd2
| 2016-06-25T00:43:11Z |
javascript
| 2022-06-18T09:15:53Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 298 |
["package-lock.json"]
|
Client side support for protecting against XSRF
|
hi,
- I am new to JS.
- I am trying to learn how you implemented the Promise API and the client-side support for protecting against XSRF.
- Can you tell me which files in axios I need to look at for that code?
|
https://github.com/axios/axios/issues/298
|
https://github.com/axios/axios/pull/5438
|
b4b5b360ec9c9be80f69e99b436ef543072b8b43
|
ebb9e814436d2f6c7cc65ffecb6ff013539ce961
| 2016-04-18T16:20:22Z |
javascript
| 2023-01-07T16:15:58Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 295 |
["package-lock.json"]
|
How can i get set-cookie header ?
|
Hello.
"Set-Cookie" header exist in HTTP response header. But axios response didn't have "set-cookie" header.
Response header have only "content-type".
How can i get "set-cookie" header ?
|
https://github.com/axios/axios/issues/295
|
https://github.com/axios/axios/pull/5438
|
b4b5b360ec9c9be80f69e99b436ef543072b8b43
|
ebb9e814436d2f6c7cc65ffecb6ff013539ce961
| 2016-04-14T05:26:00Z |
javascript
| 2023-01-07T16:15:58Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 287 |
["lib/adapters/http.js"]
|
Zlib crashes app
|
Heya
I'm getting errors like this in production - my code is loading pages from wild http servers, so this is most likely related to them being buggy.
```
{ [Error: unexpected end of file] errno: -5, code: 'Z_BUF_ERROR' }
Error: unexpected end of file
at Zlib._handle.onerror (zlib.js:363:17)
```
This is on node 4x and v5.7.0
Would it be possible to add a `stream.on('error')` handler in `axios/lib/adapters/http.js` and surface this as a rejection of the axios response promise?
|
https://github.com/axios/axios/issues/287
|
https://github.com/axios/axios/pull/303
|
8a60a4eb8b0e12993875a944e3db6f75c1318975
|
716e487038fca364648f61c6c432063f1d2c41c0
| 2016-04-08T23:07:16Z |
javascript
| 2016-04-19T23:26:45Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 273 |
[".travis.yml", "karma.conf.js"]
|
All PR builds fail due to Travis security restrictions
|
From https://docs.travis-ci.com/user/encryption-keys/ :
"Please note that encrypted environment variables are not available for pull requests from forks."
Also, from https://docs.travis-ci.com/user/pull-requests#Security-Restrictions-when-testing-Pull-Requests :
"If your build relies on these to run, for instance to run Selenium tests with Sauce Labs, your build needs to take this into account. You won’t be able to run these tests for pull requests from external contributors."
|
https://github.com/axios/axios/issues/273
|
https://github.com/axios/axios/pull/274
|
e22cbae49447f1c88680cc3d97888b6949f6a41f
|
104276ffa774760d0b00cb1312621d5b1993e483
| 2016-03-22T02:07:15Z |
javascript
| 2016-03-24T06:02:45Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 268 |
["lib/adapters/xhr.js", "test/specs/requests.spec.js"]
|
Uint8Array becomes [Object object] when POST'ing in IE 11
|
IE 11 is giving me a hard time...
I'm trying to send some binary data as a `Uint8Array` in a POST request, but `[Object object]` gets sent instead. In IE's dev tools, inspecting the request body shows that the `Uint8Array` is being transformed incorrectly into `[Object object]`.
I tried commenting out some code to see if the request will send the correct data, and it does. I commented out these lines:
```
if (utils.isArrayBuffer(requestData)) {
requestData = new DataView(requestData);
}
```
https://github.com/mzabriskie/axios/blob/master/lib/adapters/xhr.js#L162
... and the resulting request sent binary data as expected:
<img width="463" alt="screen shot 2016-03-15 at 7 14 07 pm" src="https://cloud.githubusercontent.com/assets/440299/13800038/27c28680-eae2-11e5-9718-512fcc078cd7.png">
What is the purpose of the `new DataView()` call? What is happening that turns the binary data into the `[Object object]` string? Why does this break only in IE 11 (works in Chrome, Firefox, Edge)?
|
https://github.com/axios/axios/issues/268
|
https://github.com/axios/axios/pull/299
|
fa9444e0babdbf87cc6e04eb72da420c0f2ffbc5
|
aeac3e132ee3de1eca15af9bd964face570e729c
| 2016-03-15T20:57:29Z |
javascript
| 2016-04-26T20:25:19Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 266 |
["package-lock.json"]
|
Interceptors - how to prevent intercepted messages from resolving as error
|
I'm trying to make an interceptor for 401 responses that result from expired token. Upon interception I want to login and retry the requests with the new token. My problem is that login is also done asynchronously, so by the time the retry happens, the original promises reject. Is there a way around that? Here's my code:
```
axios.interceptors.response.use(undefined, err => {
if (err.status === 401 && err.config && !err.config.__isRetryRequest) {
refreshLogin(getRefreshToken(),
success => {
setTokens(success.access_token, success.refresh_token)
err.config.__isRetryRequest = true
err.config.headers.Authorization = 'Bearer ' + getAccessToken()
axios(err.config)
},
error => { console.log('Refresh login error: ', error) }
)
}
})
```
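One way to keep the original promise pending is to return a promise from the interceptor and resolve it with the retried request; here is a hedged sketch reusing the names from the snippet above (which are assumed to exist):
```js
axios.interceptors.response.use(undefined, err => {
  if (err.status === 401 && err.config && !err.config.__isRetryRequest) {
    return new Promise((resolve, reject) => {
      refreshLogin(getRefreshToken(),
        success => {
          setTokens(success.access_token, success.refresh_token)
          err.config.__isRetryRequest = true
          err.config.headers.Authorization = 'Bearer ' + getAccessToken()
          resolve(axios(err.config))   // the retry settles the original promise
        },
        error => reject(error)
      )
    })
  }
  return Promise.reject(err)           // unrelated errors still reject
})
```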
|
https://github.com/axios/axios/issues/266
|
https://github.com/axios/axios/pull/5438
|
b4b5b360ec9c9be80f69e99b436ef543072b8b43
|
ebb9e814436d2f6c7cc65ffecb6ff013539ce961
| 2016-03-14T21:09:11Z |
javascript
| 2023-01-07T16:15:58Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 226 |
["package-lock.json"]
|
XHR timeout does not trigger reject on the Promise
|
Axios [sets the timeout](https://github.com/mzabriskie/axios/blob/master/lib/adapters/xhr.js#L36) for the XHR request, but doesn't listen for `ontimeout` events to reject the promise, so my `then` and `catch` are never called.
PR #227 fixes that.
|
https://github.com/axios/axios/issues/226
|
https://github.com/axios/axios/pull/5295
|
f79cf7bfa9fadd647ac8e22f1a3ff491d6c37e13
|
2c83d47e37554a49b8df4899fd0d9ee0ff48f95d
| 2016-02-05T17:52:06Z |
javascript
| 2022-11-22T18:23:57Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 220 |
["package-lock.json"]
|
Custom instance defaults can't be set as documented
|
https://github.com/mzabriskie/axios#custom-instance-defaults
Because the defaults from `defaults.js` are not merged in here: https://github.com/mzabriskie/axios/blob/master/lib/axios.js#L88, `instance.defaults.headers` will be `undefined`.
I'll try to make a PR some time soon.
|
https://github.com/axios/axios/issues/220
|
https://github.com/axios/axios/pull/5245
|
0da6db79956aef9e8b5951123bab4dd5decd8c4c
|
7f0fc695693dbc9309fe86acbdf9f84614138011
| 2016-02-02T10:56:56Z |
javascript
| 2022-11-10T12:02:26Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 199 |
["package-lock.json"]
|
Both then and catch executing in same call
|
I'm doing this call:
```
axios.get('/groups', {
baseURL: 'http://localhost:5000',
headers: { Authorization: 'Bearer ' + getAccessToken(), 'Content-Type': 'application/json; charset=utf-8' }
})
.then((response) => {
console.log('then response',response)
success(response.data)} )
.catch((response) => {
console.log('catch response',response)
error(response.status, response.data.description)})
```
I'm calling this exactly once. I get a 200 OK response, `then` block executes, with `response` being an object I would normally expect. Right after that `catch` executes, response being a string:
```
TypeError: Cannot read property 'groups' of null(…)
```
Closing: my bad, it was an error somewhere very far in the code, still trying to figure out how it made it into the promise...
|
https://github.com/axios/axios/issues/199
|
https://github.com/axios/axios/pull/5438
|
b4b5b360ec9c9be80f69e99b436ef543072b8b43
|
ebb9e814436d2f6c7cc65ffecb6ff013539ce961
| 2016-01-20T18:30:08Z |
javascript
| 2023-01-07T16:15:58Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 107 |
["lib/core/Axios.js", "lib/core/dispatchRequest.js", "lib/core/mergeConfig.js", "lib/utils.js", "test/specs/core/mergeConfig.spec.js", "test/specs/headers.spec.js", "test/specs/requests.spec.js", "test/specs/utils/deepMerge.spec.js", "test/specs/utils/isX.spec.js", "test/specs/utils/merge.spec.js"]
|
support the standard $http_proxy env var, like cURL and many programs do
|
This should "just work", and can build on the resolution to #68 .
https://www.google.com/search?q=http_proxy
This would allow proxying behind a corporate firewall without any/every 3rd party module using axios requiring code changes to support proxying.
|
https://github.com/axios/axios/issues/107
|
https://github.com/axios/axios/pull/2844
|
487941663b791a4a5273d456aab24c6ddd10eb0e
|
0d69a79c81a475f1cca6d83d824eed1e5b0b045d
| 2015-09-09T18:49:14Z |
javascript
| 2020-06-08T18:52:45Z |
closed
|
axios/axios
|
https://github.com/axios/axios
| 38 |
["lib/core/Axios.js", "lib/core/dispatchRequest.js", "lib/core/mergeConfig.js", "lib/utils.js", "test/specs/core/mergeConfig.spec.js", "test/specs/headers.spec.js", "test/specs/requests.spec.js", "test/specs/utils/deepMerge.spec.js", "test/specs/utils/isX.spec.js", "test/specs/utils/merge.spec.js"]
|
corrupted multibyte characters in node
|
Multibyte characters on chunk boundaries get corrupted.
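A standalone illustration of the failure mode (hedged, not the axios code path itself): decoding each chunk separately can split a multibyte UTF-8 character, while concatenating the raw Buffers first keeps it intact:
```js
const euro = Buffer.from('€', 'utf8');            // 3 bytes: e2 82 ac
const chunks = [euro.slice(0, 2), euro.slice(2)]; // split across a "chunk boundary"

const corrupted = chunks.map((c) => c.toString('utf8')).join('');
const correct = Buffer.concat(chunks).toString('utf8');

console.log(corrupted); // replacement characters instead of '€'
console.log(correct);   // '€'
```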
|
https://github.com/axios/axios/issues/38
|
https://github.com/axios/axios/pull/2844
|
487941663b791a4a5273d456aab24c6ddd10eb0e
|
0d69a79c81a475f1cca6d83d824eed1e5b0b045d
| 2015-01-27T14:52:54Z |
javascript
| 2020-06-08T18:52:45Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,187 |
["server/gateway.go", "server/gateway_test.go"]
|
Using queue-groups when subscribing to $SYS.ACCOUNT.> across routes
|
### Observed behavior
I am currently using two superclusters of nats, one in test and one in production.
When subscribing to $SYS.ACCOUNT.> without any queue, I can receive account CONNECT/DISCONNECT events from all clusters across routes.
When subscribing with a queue group, I can only receive events from the cluster I am currently connected to.
Very similar to https://github.com/nats-io/nats-server/issues/3177
### Expected behavior
Client within a queue group should be able to receive account events across a super cluster.
### Server and client version
nats-server: v2.10.11 (Both test & production)
~ # nats --version
v0.1.1
### Host environment
Official docker image in kubernetes using official helm charts.
### Steps to reproduce
``nats -s <redacted>--creds=<SYS_PATH>/sys.creds sub '$SYS.ACCOUNT.*.*'``
``nats -s <redacted>--creds=<SYS_PATH>/sys.creds sub '$SYS.ACCOUNT.*.*' --queue=foo``
This behaviour has been verified both in test and production.
|
https://github.com/nats-io/nats-server/issues/5187
|
https://github.com/nats-io/nats-server/pull/5192
|
f30c7e1211a5445661e32af748d74cdd22d72c25
|
b246331e711ab5df91954ac2802642cfe830c10f
| 2024-03-07T14:29:43Z |
go
| 2024-03-08T19:31:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,163 |
["server/jetstream_api.go", "server/jetstream_test.go"]
|
Consumer resume does not unpause the consumer
|
### Observed behavior
When calling `nats consumer resume` it returns that the consumer is unpaused, but calling the consumer info indicates the consumer is still paused.
### Expected behavior
When calling `nats consumer resume` the consumer should be unpaused, and the consumer info should reflect that.
### Server and client version
- natscli @ main
- nats-server @ main / 2.11.0-dev
### Host environment
_No response_
### Steps to reproduce
1. Add stream: `nats str add stream --subjects stream --defaults`
2. Add consumer: `nats con add stream consumer --pull --defaults`
3. Consumer info, should not be paused: `nats con info stream consumer -j`
4. Pause consumer: `nats con pause stream consumer 7d -f`
5. Consumer info, should be paused: `nats con info stream consumer -j`
```js
{
"stream_name": "stream",
"name": "consumer",
"config": {
...
"pause_until": "2024-03-09T15:47:17.897365637Z"
},
"created": "2024-03-02T15:47:15.718156438Z",
...
"paused": true,
"pause_remaining": 604778793202318,
"ts": "2024-03-02T15:47:39.104162517Z"
}
```
6. Resume consumer: `nats con resume stream consumer -f`
CLI logs:
```
Consumer stream > consumer was resumed while previously paused until 2024-03-09 16:47:17
```
But, when requesting the consumer info again, it is not resumed.
|
https://github.com/nats-io/nats-server/issues/5163
|
https://github.com/nats-io/nats-server/pull/5164
|
2cbbc070f33492af13c8322c2885849929fea0ed
|
20c2a5f0a2978aa0ccd02e971a87899997addf7c
| 2024-03-02T15:49:43Z |
go
| 2024-03-02T23:13:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,117 |
["server/accounts.go", "server/client.go", "server/errors.go", "server/leafnode.go", "server/leafnode_test.go", "server/opts.go"]
|
improvements in leaf loop detection logic
|
### Observed behavior
loop detection logic in leaf nodes can be defeated in simple setups
Setup:
B --[leaf]--> A
C --[leaf]--> A
C --[leaf] --> B
with the right permissions and startup sequence, the loop is not detected
### Expected behavior
loop is detected
### Server and client version
2.10.11
### Host environment
docker compose
### Steps to reproduce
see attached docker compose files and instructions to reproduce the issue
[leaf-loops.tgz](https://github.com/nats-io/nats-server/files/14368289/leaf-loops.tgz)
|
https://github.com/nats-io/nats-server/issues/5117
|
https://github.com/nats-io/nats-server/pull/5126
|
155bb69cfdd4dfa8805b619f230231b6ab9acf38
|
f4aecd6e41fc157d80b880969a187e6e3276fe77
| 2024-02-22T03:34:14Z |
go
| 2024-02-26T21:51:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,098 |
["server/memstore.go", "server/norace_test.go"]
|
NATS JetStream slowing down over time
|
### Observed behavior
We see a constant degradation of E2E latency over time (~24 test period) when having a constant load on a single JetStream.
The degradation can mainly be seen in outliers, so the average latency is not much affected yet, but it will continue to degrade over time as well.
We see, based on our latency Grafana dashboards, that the degradation comes from the producer side. The producer waits until a publish call is confirmed by the NATS cluster and then goes on. This duration is also monitored as a histogram as follows:
```java
lastPublish = System.nanoTime();
PublishAck pa = js.publish(msg, opts);
long deltaNano = System.nanoTime() - lastPublish;
```
The streamTimeout for the publish options is set to 2 seconds. deltaNano is monitored as a Prometheus histogram.
Grafana dashboards as proof:
E2E latency (screenshot)
Producer latency (screenshot)
Note: In the middle (~18:00) we stopped producers & consumers to verify whether the JetStream recovers when the load is gone. Apparently it did not, as the latency outliers remained where they were when restarting producers/consumers.
Note 2: We then started the exact same test again with another JetStream without restarting the NATS cluster. The latency was again fine in the beginning and then followed the same pattern as can be seen above.
Note 3: Unfortunately the producer latency histogram is capped in that test, which is why the increase is no longer visible in the +Inf bucket.
### Expected behavior
There is no continuous degration of E2E Latency over time.
### Server and client version
Server. = 2.10.10
Client = Java v2.17.3
### Host environment
Openshift using NATS provided HELM charts
### Steps to reproduce
Single Jetstream on a 3 node cluster with R3.
10 Producers, 10 Consumers and 10 subjects (like test.1, test.2, test.10). Every producer/consumer form a pair (1:1 relation via subject).
Producers provide messages in a constant rate of 100msg/sec.
JetStream settings:
```java
StreamConfiguration sc = StreamConfiguration.builder()
    .name(streamName)
    .storageType(StorageType.Memory)
    .subjects(subjects)
    .retentionPolicy(RetentionPolicy.Limits)
    .discardPolicy(DiscardPolicy.Old)
    .replicas(3)
    .maxAge(Duration.ofMinutes(10))
    .maxBytes(500_000_000) // 500MB
    .maxMessagesPerSubject(100_000)
    .build();
```
Consumer Configuration:
```java
ConsumerConfiguration.Builder consumerBuilder = ConsumerConfiguration.builder()
    .numReplicas(3)
    .ackPolicy(AckPolicy.Explicit)
    .name(exArgs.consumerName)
    .ackWait(Duration.ofSeconds(5))
    .inactiveThreshold(Duration.ofSeconds(60))
    .filterSubject(exArgs.subject)
    .deliverPolicy(DeliverPolicy.All)
    .memStorage(true);
```
Consumer is a pull consumer using "fetch"
`try (FetchConsumer consumer = consumerContext.fetch(consumeOptions)) {`
The payload of each message is random and has the format:
SequenceNr + space + CurrentTimeMillis + space + a string of 100 random characters
UTF-encoded to a byte array.
|
https://github.com/nats-io/nats-server/issues/5098
|
https://github.com/nats-io/nats-server/pull/5116
|
eedaef4c184f43c5fa96e473743c42ce449ea1ee
|
f81a0d4110ff2f56e80ed8399310c2b3d8025647
| 2024-02-16T12:55:53Z |
go
| 2024-02-23T21:43:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,071 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
JS stream count not balanced among the cluster nodes
|
### Observed behavior
When creating multiple streams (with replicas = 3) on a JetStream cluster (with more nodes than the stream replica count), I have been observing that the streams are not evenly distributed among the servers.
Some of the server instances end up getting a large chunk of the stream replicas.
```
skohli@macos-JQWR9T560R ~ % nats --context east-sys-ac server report jetstream
╭──────────────────────────────────────────────────────────────────────────────────────────────────╮
│ JetStream Summary │
├────────┬─────────┬─────────┬───────────┬──────────┬───────┬────────┬──────┬─────────┬────────────┤
│ Server │ Cluster │ Streams │ Consumers │ Messages │ Bytes │ Memory │ File │ API Req │ API Err │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼────────────┤
│ n1-c1 │ C1 │ 28 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 55 │ 0 │
│ n2-c1* │ C1 │ 1 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 33 │ 2 / 6.060% │
│ n3-c1 │ C1 │ 28 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 30 │ 0 │
│ n4-c1 │ C1 │ 0 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 0 │ 0 │
│ n5-c1 │ C1 │ 27 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 63 │ 0 │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼────────────┤
│ │ │ 84 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 181 │ 2 │
╰────────┴─────────┴─────────┴───────────┴──────────┴───────┴────────┴──────┴─────────┴────────────╯
```
On some testing, if we **wait** for some time (sleep = 3s) before creating consecutive streams, we end up seeing a far more balanced distribution.
```
skohli@macos-JQWR9T560R ~ % nats --context east-sys-ac server report jetstream
╭─────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ JetStream Summary │
├────────┬─────────┬─────────┬───────────┬──────────┬───────┬────────┬──────┬─────────┬───────────────┤
│ Server │ Cluster │ Streams │ Consumers │ Messages │ Bytes │ Memory │ File │ API Req │ API Err │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼───────────────┤
│ n1-c1 │ C1 │ 18 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 696 │ 381 / 54.741% │
│ n2-c1* │ C1 │ 14 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 496 │ 253 / 51.008% │
│ n3-c1 │ C1 │ 17 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 145 │ 0 │
│ n4-c1 │ C1 │ 18 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 64 │ 0 │
│ n5-c1 │ C1 │ 17 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 227 │ 42 / 18.502% │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼───────────────┤
│ │ │ 84 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 1,628 │ 676 │
╰────────┴─────────┴─────────┴───────────┴──────────┴───────┴────────┴──────┴─────────┴───────────────╯
```
### Expected behavior
The expectation was to see a balanced distribution even without the wait between the create calls.
In my use case I need to create multiple streams on a Jetstream cluster and such a behaviour might cause performance issues.
I could add a wait to help with the issue but that creates a long delay in the init process when creating a large number of streams.
It would be great if you could highlight if this is the expected behaviour or if there is some other way in which the issue can be remediated?
### Server and client version
Nats Server Version: `nats-server: v2.10.9`
Client version: `nats --version 0.1.1`
### Host environment
```
uname -a
Darwin 22.3.0 Darwin Kernel Version 22.3.0: Mon Jan 30 20:38:37 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6000 arm64
```
CPU: `Apple M1 Pro arm64`
### Steps to reproduce
1) Create a Jetstream cluster with 5 server nodes, I'm using the following config for the nodes and starting the servers individually using `nats-server -js -c node.conf`.
Each server has a unique name, and its port is added to the cluster routes.
```
server_name=n1-c1
listen=4222
include sys.conf
jetstream {
store_dir=nats/storage
}
cluster {
name: C1
listen: 0.0.0.0:6222
routes: [
nats://0.0.0.0:6222
nats://0.0.0.0:6223
nats://0.0.0.0:6224
nats://0.0.0.0:6225
nats://0.0.0.0:6226
]
}
```
2) Once all the servers are up and running, Create multiple JS streams. All streams here have an identical configuration apart from having a unique name and subject.
I'm using the nats-cli to create 30 streams
```
for i in {1..30}; do nats --context east-sys stream create bar$i --subjects="test$i.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --no-allow-rollup --max-msgs-per-subject=-1 --no-deny-delete --no-deny-purge --replicas 3; done
```
The configuration of the streams is as follows:
```
Information for Stream bar1 created 2024-02-12 20:15:57
Subjects: test1.*
Replicas: 3
Storage: File
Options:
Retention: Limits
Acknowledgments: true
Discard Policy: Old
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: 1y0d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: C1
Leader: n3-c1
Replica: n1-c1, current, seen 344ms ago
Replica: n5-c1, current, seen 344ms ago
State:
Messages: 0
Bytes: 0 B
First Sequence: 0
Last Sequence: 0
Active Consumers: 0
```
3) Once all streams are created, check the JetStream server report to find the stream count on each server node:
```
nats --context east-sys-ac server report jetstream
╭─────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ JetStream Summary │
├────────┬─────────┬─────────┬───────────┬──────────┬───────┬────────┬──────┬─────────┬───────────────┤
│ Server │ Cluster │ Streams │ Consumers │ Messages │ Bytes │ Memory │ File │ API Req │ API Err │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼───────────────┤
│ n1-c1* │ C1 │ 28 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 443 │ 226 / 51.015% │
│ n2-c1 │ C1 │ 0 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 442 │ 251 / 56.787% │
│ n3-c1 │ C1 │ 28 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 87 │ 0 │
│ n4-c1 │ C1 │ 0 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 52 │ 0 │
│ n5-c1 │ C1 │ 28 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 146 │ 0 │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼───────────────┤
│ │ │ 84 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 1,170 │ 477 │
╰────────┴─────────┴─────────┴───────────┴──────────┴───────┴────────┴──────┴─────────┴───────────────╯
```
4) If the same steps are followed with the slight modification of adding a sleep interval between the stream creation calls, we see a well-balanced system:
```
for i in {1..30}; do sleep 3; nats --context east-sys stream create bar$i --subjects="test$i.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --no-allow-rollup --max-msgs-per-subject=-1 --no-deny-delete --no-deny-purge --replicas 3; done
```
```
nats --context east-sys-ac server report jetstream
╭────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ JetStream Summary │
├────────┬─────────┬─────────┬───────────┬──────────┬───────┬────────┬──────┬─────────┬──────────────┤
│ Server │ Cluster │ Streams │ Consumers │ Messages │ Bytes │ Memory │ File │ API Req │ API Err │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼──────────────┤
│ n1-c1 │ C1 │ 18 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 82 │ 0 │
│ n2-c1* │ C1 │ 15 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 191 │ 87 / 45.549% │
│ n3-c1 │ C1 │ 17 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 49 │ 0 │
│ n4-c1 │ C1 │ 19 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 27 │ 0 │
│ n5-c1 │ C1 │ 18 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 92 │ 0 │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼──────────────┤
│ │ │ 87 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 441 │ 87 │
╰────────┴─────────┴─────────┴───────────┴──────────┴───────┴────────┴──────┴─────────┴──────────────╯
```
[node_configs.zip](https://github.com/nats-io/nats-server/files/14257867/node_configs.zip)
|
https://github.com/nats-io/nats-server/issues/5071
|
https://github.com/nats-io/nats-server/pull/5079
|
b96c9bcefd17b40072f3c26aee0ebd94eb92d12c
|
056ad1899b592814b976baefee3ea92923ac25d3
| 2024-02-13T08:07:47Z |
go
| 2024-02-14T05:08:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,052 |
["server/accounts.go", "server/accounts_test.go", "server/client.go", "server/config_check_test.go", "server/msgtrace.go", "server/msgtrace_test.go", "server/opts.go", "server/server.go", "server/stream.go"]
|
Otel activation for message traces
|
### Proposed change
In https://github.com/nats-io/nats-server/pull/5014 a basic capability to do on-demand message tracing was introduced for core pub-sub messages.
This capability is activated using a NATS-specific header. The bigger picture is that users want to trigger traces based on trace headers from the likes of OpenTelemetry.
There is prior art in the server around [ADR-3](https://github.com/nats-io/nats-architecture-and-design/blob/main/adr/ADR-3.md) that already supports triggering based on the headers we need, and there are already functions for parsing the headers and determining whether a message is a continuation of a trace that started outside of NATS. See the `shouldSample()` function.
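As background for readers, the W3C `traceparent` header has the form `version-traceid-parentid-flags`, with bit 0 of the flags meaning "sampled". A minimal, standalone Go sketch of that check is below; it is only an illustration of the decision `shouldSample()` has to make, not the server's actual implementation.
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// sampledFromTraceparent reports whether a W3C traceparent header has the
// sampled bit set. Format: "00-<32 hex trace-id>-<16 hex parent-id>-<2 hex flags>".
func sampledFromTraceparent(tp string) bool {
	parts := strings.Split(strings.TrimSpace(tp), "-")
	if len(parts) != 4 {
		return false
	}
	flags, err := strconv.ParseUint(parts[3], 16, 8)
	if err != nil {
		return false
	}
	return flags&0x01 == 0x01
}

func main() {
	fmt.Println(sampledFromTraceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")) // true
	fmt.Println(sampledFromTraceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00")) // false
}
```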
This leaves the question of where traces should be delivered when headers like `traceparent` are present; we don't know, since previously the delivery destination was configured per message. The only option I can think of is to add an account-scoped config item that configures a per-account subject to which these externally-activated traces are delivered.
Some other cases to consider, there might be more:
* If the configuration is not set, the account does not support this style of trace activation.
* If `Nats-Trace-Only` is present AND the otel headers are set, we honor `Nats-Trace-Only` and do not deliver the message.
* If `Nats-Trace-Dest` is set AND the otel headers are set, we honor `Nats-Trace-Dest` and deliver to that subject.
/cc @kozlovic
### Use case
Integration with external tracing systems on account level.
### Contribution
_No response_
|
https://github.com/nats-io/nats-server/issues/5052
|
https://github.com/nats-io/nats-server/pull/5057
|
de3aedbebe8d80da5ee78a1147fa8127bfeecb3f
|
5cf4ae534318a9c487aeb85192e4b844e2efc130
| 2024-02-09T09:48:10Z |
go
| 2024-02-13T17:47:59Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 5,002 |
["server/auth_callout.go", "server/auth_callout_test.go"]
|
External authorization (AuthCallout) not working with Scoped accounts
|
### Observed behavior
I'm building an app that implements decentralized auth and I'm closely following this example: https://natsbyexample.com/examples/auth/callout-decentralized/cli
I tried to make a small modification to enforce scoped users for AUTH, APP1, APP2 accounts.
But something related to NATS limits is not working; here is the server log from my modified example:
```
[25] 2024/01/25 18:04:34.236241 [DBG] 127.0.0.1:45638 - cid:15 - Client connection created
[25] 2024/01/25 18:04:34.238418 [INF] 127.0.0.1:45638 - cid:15 - Connected Client has JetStream denied on pub: [$JSC.> $NRG.> $JS.API.> $KV.> $OBJ.>] sub: [$JSC.> $NRG.> $JS.API.> $KV.> $OBJ.>]
[25] 2024/01/25 18:04:34.238481 [DBG] 127.0.0.1:45638 - cid:15 - Authenticated JWT: Client "UBI4WTDSZIE73AGIPSG44AAM4VVCCSN4G5PO4OSBMMWJZNGVRTITTQ3F" (claim-name: "sentinel", claim-tags: []) signed with "AAREKTYNSYEEPZSBLUEA76SBZT7TE2JOA6T2QDMTEMBZELR2OG6LGLJE" by Account "AD6S5TEQBC2SABK5724DB7V7PFJIFHJA4FP7MEJOOKOYEUICA7ZPTHKB" (claim-name: "AUTH", claim-tags: []) signed with "OBNJ5SVK7NDOOKUOUZSNPX7LS2SXZEJICAY4JT2L5CJ4WRMOHUU4K6NN" has mappings false accused 0xc0002ea280
[25] 2024/01/25 18:04:34.241261 [ERR] 127.0.0.1:45638 - cid:15 - maximum subscriptions exceeded
[25] 2024/01/25 18:04:34.241801 [DBG] 127.0.0.1:45638 - cid:15 - Client connection closed: Client Closed
```
### Expected behavior
No "maximum subscriptions exceeded" messages in the server logs; the modified example should complete without errors.
### Server and client version
```
# nats-server --version
nats-server: v2.10.4
# nats --version
v0.1.1
```
### Host environment
docker
### Steps to reproduce
I made some modifications to the callout-decentralized example that:
* enforce using scoped signing keys for the AUTH, APP1, APP2 accounts
* make the auth service issue a scoped user JWT (`uc.SetScoped(true)`)
Here is a diff:
https://github.com/ConnectEverything/nats-by-example/commit/85a733a984b21b3e3cf2a8acb28fe7f4eb5a5a7f
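For context, the key change in that diff is that the callout service issues a scoped user claim. A minimal sketch of that step with `github.com/nats-io/jwt/v2` and `nkeys` is shown below; the throwaway keys are generated only so the sketch runs standalone, whereas the real service signs with the APP account's scoped signing key and uses the user nkey from the authorization request.
```go
package main

import (
	"fmt"

	"github.com/nats-io/jwt/v2"
	"github.com/nats-io/nkeys"
)

func main() {
	// Throwaway keys so this sketch is self-contained. In the real service the
	// signing key is the APP account's scoped signing key, and when signing with
	// a signing key (rather than the account identity key) uc.IssuerAccount must
	// be set to the account's public key.
	accountKP, _ := nkeys.CreateAccount()
	userKP, _ := nkeys.CreateUser()
	userPub, _ := userKP.PublicKey()

	uc := jwt.NewUserClaims(userPub)
	uc.Name = "sentinel"
	// Scoped users inherit all permissions and limits from the signing key's
	// scope, so the claim itself must not carry its own limits.
	uc.SetScoped(true)

	token, err := uc.Encode(accountKP)
	if err != nil {
		panic(err)
	}
	fmt.Println(token)
}
```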
To reproduce the modified example:
1. `git clone https://github.com/dpotapov/nats-by-example.git`
2. `cd nats-by-example/examples/auth/callout-decentralized/cli`
3. `docker build -t callout-decentralized .`
4. `docker run -it --rm callout-decentralized -c bash`
5. `bash main.sh`
6. `cat /nats-server.log` # to examine server logs
|
https://github.com/nats-io/nats-server/issues/5002
|
https://github.com/nats-io/nats-server/pull/5013
|
d413e33187dced42bb45dde646e87fd645d60778
|
3bb480727a46769f3ea7c0dfc3d517bd0361256c
| 2024-01-25T18:07:47Z |
go
| 2024-01-30T16:45:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,934 |
["server/accounts.go", "server/leafnode.go", "server/leafnode_test.go", "server/server.go"]
|
Changes to subjects of interest due to subject mappings amendments are not propagated to leaf nodes
|
### Observed behavior
Subjects of interest which are communicated to leaf nodes are not updated accordingly when subject mappings are updated on the main NATS server.
Assume we have a main NATS node and a leaf node. Assume we have a publisher connected to the leaf node, who is publishing to 'source1'. Assume we also have a subscriber who is connected to the main NATS node and subscribing to 'target'.
e.g. if the mapping is set up to be `source1` -> `target`, and we then change the mapping to `source2` -> `target`.
A subscriber (connected to the main NATS node) on 'target' should start receiving data from a publisher (connected to the leaf node) that is publishing to 'source2'.
Starting up a subscriber (connected to the main NATS node) on the subject 'source2' (instead of 'target') forces data to start streaming from the publisher on the leaf node to the subscriber who is listening on the 'target' subject. As expected, no data is received by the subscriber to 'source2', because the NATS server maps the messages from source2 to target. However, it appears that because there is a subscriber to 'source2', the data on the leaf node is actually published out (due to an interested party in this subject). When we remove the subscriber to 'source2', the subscriber to 'target' stops receiving data.
### Expected behavior
When the subject mapping in the server config is changed and a reload signal is issued, the expected behaviour is that the leaf nodes are informed of the updated subjects of interest.
e.g. if the mapping is set up to be `source1` -> `target`, and we change the mapping to `source2` -> `target`.
A subscriber (connected to the main NATS node) on target should start receiving data from a publisher (connected to the leaf node) that is publishing to source2.
### Server and client version
Issue exists on v2.9.24, v2.10.7.
### Host environment
Container optimised OS
Repeated test locally on Mac OS 13.2.1.
### Steps to reproduce
1. Create a `server.conf` file:
```
port: 4222
server_name: test-server
debug: true
trace: true
leafnodes {
port: 7422
}
mappings = {
"source1": "target",
}
```
2. Create a `leaf.conf` file:
```
port: 12543
server_name: test-leaf
debug: true
trace: true
leafnodes {
remotes = [
{
url: "nats://localhost:7422"
},
]
}
```
3. Start up a nats server: `nats-server -c server.conf`
4. Start up a nats leaf node: `nats-server -c leaf.conf`
5. Verify existing subject mapping works end to end by
a. Starting up a publisher on the leaf node: `nats publish --server=nats://localhost:12543 "source1" "" --count=1000 --sleep=1s`
b. Starting up a subscriber on the main NATS node: `nats sub --server=nats://localhost:4222 'target'`
c. Expect to see messages arrive on the subscriber at a rate of once a second.
6. Update the `server.conf` file to amend the subject mappings on the main NATS server:
```
port: 4222
server_name: test-server
leafnodes {
port: 7422
}
mappings = {
"source2": "target",
}
```
7. Issue a reload signal so that the new config is picked up, ensuring the correct PID for the main server is used (not the leaf node): `nats-server --signal reload=<PID>`
8. Expect to see `Reloaded server configuration` in main NATS server log.
9. Expect to see publisher continue to publish data out (via progress bar).
10. Expect to see no data received by subscriber to `target` subject.
11. Stop publisher on `source1` subject, and change it to publish to `source2`: `nats publish --server=nats://localhost:12543 "source2" "" --count=1000 --sleep=1s`
12. Should see issue now where subscriber on `target` subject does not receive the published data that is being published to `source2`.
13. Further observation is that starting a new and separate subscriber on `source2` (on main NATS node) forces the other subscriber (on `target` subject) to start receiving data, even though this new subscriber on `source2` will not receive any data. Stopping this new subscriber on `source2` will again stop data being received by the other subscriber on `target`.
14. Restarting the leaf node forces the subject interest to propagate properly, and the subscriber on `target` will start seeing data from the publisher on `source2`, which is the intended behaviour.
----------------------------------------------------
Diving into the NATS server code, it seems that upon issuing a config reload signal the global account is recreated, and it therefore loses track of the currently connected leaf nodes. As a result, it doesn't think it needs to inform any leaf nodes of the changes in subjects of interest caused by the new subject mappings. I would expect [this block](https://github.com/nats-io/nats-server/blob/main/server/accounts.go#L704-L714) of code to execute; however, nleafs is always 0, since the account object has been re-initialised as part of the config reload.
If I run the NATS server in embedded mode, I am able to remove the old mapping and add the new one correctly. I call AddMapping() on the global account, and since the account object is not recreated and the mapping is applied directly to the existing account object, the change in subject interest propagates properly to the leaf node, as expected.
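A minimal sketch of that embedded-mode approach (which does propagate correctly) is below; it assumes the server is embedded via the `server` package and, as in the report, that `AddMapping()` is called on the live global account. Option values are kept to the bare minimum from the configs above.
```go
package main

import (
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	opts := &server.Options{
		ServerName: "test-server",
		Port:       4222,
		LeafNode:   server.LeafNodeOpts{Port: 7422},
	}
	ns, err := server.NewServer(opts)
	if err != nil {
		panic(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(5 * time.Second) {
		panic("server not ready")
	}

	acc := ns.GlobalAccount()
	// Applying the mapping on the live account object (rather than via a config
	// reload) propagates the changed subject interest to connected leaf nodes.
	// The report also removes the old "source1" mapping at this point.
	if err := acc.AddMapping("source2", "target"); err != nil {
		panic(err)
	}

	ns.WaitForShutdown() // keep the embedded server running
}
```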
|
https://github.com/nats-io/nats-server/issues/4934
|
https://github.com/nats-io/nats-server/pull/4937
|
ddb262a3e3289745adbaba80e03885eddb4aa108
|
5f003be38fc49a545b0fc820eddff69e1c68a8bd
| 2024-01-09T22:06:40Z |
go
| 2024-01-10T04:35:33Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,893 |
["server/accounts.go"]
|
NATS server panic on [server.(*Account).selectMappedSubject] method call
|
### Observed behavior
With a high push rate, servers sometimes fail with a panic. This appears at push rates of approximately 40-60K messages per second.
A short description of the account configuration follows:
```
...
accounts: {
SERVICE: {
users: [
{user: "user1", password: "password1"}
]
# subjects mapping
mappings = {
"requests.*": [
{destination: "requests.$1.experimental", weight: 1%}
]
}
}
}
...
```
Subscribers for `requests.$1.experimental` may or may not be present.
Panic raw:
```
panic: runtime error: index out of range [-2]
goroutine 7340 [running]:
math/rand.(*rngSource).Uint64(...)
math/rand/rng.go:249
math/rand.(*rngSource).Int63(0x0?)
math/rand/rng.go:234 +0x85
math/rand.(*Rand).Int63(...)
math/rand/rand.go:96
math/rand.(*Rand).Int31(...)
math/rand/rand.go:110
math/rand.(*Rand).Int31n(0xc00021e5a0, 0x64)
math/rand/rand.go:145 +0x53
github.com/nats-io/nats-server/v2/server.(*Account).selectMappedSubject(0xc0001fd900, {0xc000616540, 0x15})
github.com/nats-io/nats-server/v2/server/accounts.go:812 +0x327
github.com/nats-io/nats-server/v2/server.(*client).selectMappedSubject(0xc0003f0c80)
github.com/nats-io/nats-server/v2/server/client.go:3748 +0x45
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc0003f0c80, {0xc0005d5680, 0x80, 0x80})
github.com/nats-io/nats-server/v2/server/parser.go:488 +0x1ec6
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc0003f0c80, {0x0, 0x0, 0x0})
github.com/nats-io/nats-server/v2/server/client.go:1371 +0x12fa
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx.func1()
github.com/nats-io/nats-server/v2/server/server.go:3217 +0x25
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
github.com/nats-io/nats-server/v2/server/server.go:3700 +0x32
```
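For what it's worth, this exact panic signature — a negative index inside math/rand's rngSource — is the classic symptom of unsynchronized concurrent use of a single `*rand.Rand`, which is not goroutine-safe. The standalone sketch below reproduces that class of panic under load; it is only an illustration of the failure mode, not a confirmed diagnosis of the server code.
```go
package main

import (
	"math/rand"
	"sync"
)

func main() {
	// One Rand shared by many goroutines without any locking.
	r := rand.New(rand.NewSource(1))

	var wg sync.WaitGroup
	for i := 0; i < 64; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1_000_000; j++ {
				// Data race on the shared source; under load this typically
				// panics with "index out of range [-N]" inside rngSource.
				_ = r.Int31n(100)
			}
		}()
	}
	wg.Wait()
}
```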
### Expected behavior
The NATS server works properly :) without panics.
Approximately **1%** of messages are mapped to `requests.$1.experimental`.
### Server and client version
Server version: **v2.10.5**
Client version: **github.com/nats-io/nats.go v1.31.0**
### Host environment
GCP K8S runtime with NATS Helm chart https://github.com/nats-io/k8s modified only to add requested resources.
### Steps to reproduce
1. Create a cluster with 5 nodes.
2. Create subject mapping configuration.
3. Push up to 60K requests per second through the mapped subject.
4. Wait for panic...
|
https://github.com/nats-io/nats-server/issues/4893
|
https://github.com/nats-io/nats-server/pull/4894
|
6ed27feb2058d9d2c0304a6ec14455cddd3cb084
|
2c4b65d3793265d35e8cb8a167d6e7eb85d5edf0
| 2023-12-18T10:33:28Z |
go
| 2023-12-19T13:07:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,850 |
["server/consumer.go", "server/memstore.go", "server/memstore_test.go"]
|
NATS Server nil pointer when sourcing from one memory stream to another
|
### Observed behavior
Exiting the application after NATS has apparently shut down results in a NATS server panic.
``` console
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0xc9767c]
goroutine 611 [running]:
github.com/nats-io/nats-server/v2/server.(*memStore).GetSeqFromTime(0xc0005d4000, {0x0?, 0xc00199b998?, 0x0?})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/memstore.go:287 +0x15c
github.com/nats-io/nats-server/v2/server.(*consumer).selectStartingSeqNo(0xc001d9d400)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:4742 +0x25d
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAssignment(0xc000305880, 0xc00196b8d0, {0x0, 0x0}, 0x0, 0x0, 0x0)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:947 +0x1b5c
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAction(...)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:694
github.com/nats-io/nats-server/v2/server.(*Server).jsConsumerCreateRequest(0xc000016d80, 0x0?, 0xc000c33300, 0x3d?, {0xc00196d100, 0x40}, {0xc000c27ed8, 0x11}, {0xc0019685a0, 0x1d6, ...})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:3901 +0x14b4
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch(0xc000070700, 0xc0000cf2c0, 0xc000c33300, 0xc00028c780, {0xc00196d100, 0x40}, {0xc000c27ed8, 0x11}, {0xc0019685a0, 0x1d6, ...})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:768 +0x25e
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000c33300, 0x0, 0xc0000cf2c0, 0x0?, {0xc00196d0c0, 0x40, 0x40}, {0xc000c27ec0, 0x11, 0x18}, ...)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3430 +0xa90
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000c33300, 0xc00028c780, 0xc00197cf30, {0xc0019685a0, 0x1d8, 0x1e0}, {0xc000599d00, 0x0, 0x0?}, {0xc00196d0c0, ...}, ...)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4483 +0xbe8
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000c33300, 0xc0001ecab0, 0xc00028c500, {0xc000b6c000, 0x191, 0x1a0})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4268 +0x122b
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0x280?, 0x0?, 0x0?, {0x0?, 0x0?}, {0x0?, 0x0?}, {0xc000b6c000, 0x191, 0x1a0})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:1997 +0x2c
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000c33300, 0x0, 0xc0002a8840, 0xc00028c500?, {0xc000599e00, 0x40, 0x100}, {0xc000599f00, 0xf, 0x100}, ...)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3428 +0xb34
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000c33300, 0xc00028c500, 0xc00197ced0, {0xc000b6c000, 0x191, 0x1a0}, {0xc000599d00, 0x0, 0xc001de7c40?}, {0xc000599e00, ...}, ...)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4483 +0xbe8
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc000c33300, {0xc000b6c000, 0x191, 0x1a0})
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3903 +0xca8
github.com/nats-io/nats-server/v2/server.(*stream).internalLoop(0xc000718380)
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/stream.go:4819 +0xdc9
created by github.com/nats-io/nats-server/v2/server.(*stream).setupSendCapabilities in goroutine 461
/home/andreib/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/stream.go:4720 +0x1aa
```
### Expected behavior
`exit 0`
### Server and client version
Server: `v2.10.6`
Client: `v1.31.0`
### Host environment
CPU: `32 × AMD Ryzen 9 5950X 16-Core Processor`
Memory: `128 GB DDR4`
GPU: `NVIDIA GeForce RTX 4090/PCIe/SSE2`
### Steps to reproduce
1. Create an in-process NATS instance in golang.
1. Create 2 memory-storage streams:
   - `WORKFLOW` - a typical stream
   - `WORKFLOW_TELEMETRY` - a stream with a stream source {stream: `WORKFLOW`, filter_subject: `WORKFLOW.*.State.>`}
1. The application generates some messages
1. `WORKFLOW_TELEMETRY` correctly receives the filtered messages from `WORKFLOW` stream.
1. Shutdown() and WaitForShutdown() are called to ensure clean exit.
1. The application terminates.
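A compact Go sketch of these steps (embedded server, two memory streams, one sourcing a filtered subset of the other) is shown below; the stream names and filter subject match the report, everything else is a minimal assumption.
```go
package main

import (
	"time"

	"github.com/nats-io/nats-server/v2/server"
	"github.com/nats-io/nats.go"
)

func main() {
	ns, err := server.NewServer(&server.Options{JetStream: true, Port: 4222})
	if err != nil {
		panic(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(5 * time.Second) {
		panic("server not ready")
	}

	nc, err := nats.Connect(ns.ClientURL())
	if err != nil {
		panic(err)
	}
	js, _ := nc.JetStream()

	// A typical memory stream.
	js.AddStream(&nats.StreamConfig{
		Name:     "WORKFLOW",
		Subjects: []string{"WORKFLOW.>"},
		Storage:  nats.MemoryStorage,
	})
	// A memory stream sourcing a filtered subset of WORKFLOW.
	js.AddStream(&nats.StreamConfig{
		Name:    "WORKFLOW_TELEMETRY",
		Storage: nats.MemoryStorage,
		Sources: []*nats.StreamSource{{
			Name:          "WORKFLOW",
			FilterSubject: "WORKFLOW.*.State.>",
		}},
	})

	// ... publish some messages, then shut down cleanly as in the report.
	nc.Drain()
	ns.Shutdown()
	ns.WaitForShutdown()
}
```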
|
https://github.com/nats-io/nats-server/issues/4850
|
https://github.com/nats-io/nats-server/pull/4853
|
a0ca33eccaffd707c46187aed4086e878649b4ad
|
4ab691939f1c2bd2f6361f048ffb22806aaca57e
| 2023-12-05T12:40:00Z |
go
| 2023-12-06T05:44:58Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,842 |
["server/filestore.go", "server/filestore_test.go"]
|
Corrupted Jetstream subjects when regenerating stream index.db
|
### Observed behavior
Some subjects stored in a JetStream filestore are corrupted when a stream's index.db is regenerated. This was initially observed when updating to 2.10.6 and restoring a stream, which logged:
```
Stream state encountered internal inconsistency on recover
```
It can be reproduced with a fresh stream by manually deleting index.db.
I have only reproduced this with a stream that contains large messages (close to 1 MB).
### Expected behavior
No corruption.
### Server and client version
nats-server 2.10.6, still occurs on `main`
Bisected to 66b0b5a566d95497ef28e75e161ec919910612a0
Observed via nats-cli 0.1.1
### Host environment
Debian 12 x86_64
### Steps to reproduce
<details>
<summary>
Stream config
</summary>
```
{
"name": "test",
"subjects": [
"test.*"
],
"retention": "limits",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 604800000000000,
"max_msg_size": -1,
"storage": "file",
"discard": "old",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
}
```
</details>
```
# publish large messages to a bunch of subjects
for i in $(seq 1000); do yes test | head -c 900000 | nats publish "test.$i"; done
# observe correct subject names
nats stream subjects test
╭─────────────────────────────────────────────────────────╮
│ 1000 Subjects in stream test │
├──────────┬───────┬───────────┬───────┬──────────┬───────┤
│ Subject │ Count │ Subject │ Count │ Subject │ Count │
├──────────┼───────┼───────────┼───────┼──────────┼───────┤
│ test.52 │ 1 │ test.981 │ 1 │ test.459 │ 1 │
│ test.838 │ 1 │ test.409 │ 1 │ test.48 │ 1 │
│ test.440 │ 1 │ test.672 │ 1 │ test.483 │ 1 │
│ test.315 │ 1 │ test.750 │ 1 │ test.303 │ 1 │
│ test.703 │ 1 │ test.171 │ 1 │ test.695 │ 1 │
│ test.552 │ 1 │ test.774 │ 1 │ test.348 │ 1 │
│ test.663 │ 1 │ test.209 │ 1 │ test.65 │ 1 │
...
# stop nats-server
# delete index.db
rm jetstream/\$G/streams/test/msgs/index.db
# start nats-server
# observe corrupted/missing subject names
nats stream subjects test
╭─────────────────────────────────────────────────────────╮
│ 28 Subjects in stream test │
├──────────┬───────┬───────────┬───────┬──────────┬───────┤
│ Subject │ Count │ Subject │ Count │ Subject │ Count │
├──────────┼───────┼───────────┼───────┼──────────┼───────┤
│ test.999 │ 1 │test │ 1 │ te │ 1 │
│ │ 1 │ test.1000 │ 1 │ test.997 │ 1 │
│ �P │ 1 │test │ 1 │test. │ 1 │
│ ŋt │ 1 │ test.998 │ 1 │ test.9 │ 1 │
│ test.99 │ 1 │ Sl� │ 1 │ test.995 │ 1 │
│ (ŋ │ 1 │ tes │ 1 │ test.996 │ 1 │
...
```
|
https://github.com/nats-io/nats-server/issues/4842
|
https://github.com/nats-io/nats-server/pull/4851
|
8f0427cb7457ae83ec1b13c394fde278d28f2f62
|
a0ca33eccaffd707c46187aed4086e878649b4ad
| 2023-12-04T06:08:44Z |
go
| 2023-12-06T00:52:15Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,826 |
["server/consumer.go", "server/jetstream_consumer_test.go", "server/stream.go"]
|
Implementation of `stream.purge()` seems wrong for consumers with multiple filter subjects
|
### Observed behavior
While reading the code of `stream.purge()` I found what looks like an oversight:
https://github.com/nats-io/nats-server/blob/main/server/stream.go#L2005 and the lines below check whether the subject of the purge request matches the consumer's filter subject. A consumer can, however, have multiple filter subjects, and this is not taken into account here.
### Expected behavior
The code should probably not use the consumer's config, but instead check whether the purge subject is a subset match for any of the consumer's subject filters by accessing `o.subj` (https://github.com/nats-io/nats-server/blob/main/server/consumer.go#L311).
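To make the suggested check concrete, here is a small, self-contained Go sketch that decides whether a purge subject overlaps *any* of a consumer's filter subjects. The subset matching is a deliberately simplified stand-in for the server's own matching code, and the function names are invented for illustration.
```go
package main

import (
	"fmt"
	"strings"
)

// subjectsOverlap is a simplified stand-in for NATS subject subset matching:
// "*" matches exactly one token and ">" matches the remainder of the subject.
func subjectsOverlap(a, b string) bool {
	at, bt := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(at) && i < len(bt); i++ {
		if at[i] == ">" || bt[i] == ">" {
			return true
		}
		if at[i] != "*" && bt[i] != "*" && at[i] != bt[i] {
			return false
		}
	}
	return len(at) == len(bt)
}

// purgeAffectsConsumer checks every filter subject, not just a single FilterSubject.
func purgeAffectsConsumer(purgeSubject string, filterSubjects []string) bool {
	if len(filterSubjects) == 0 {
		return true // an unfiltered consumer sees every subject
	}
	for _, f := range filterSubjects {
		if subjectsOverlap(purgeSubject, f) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(purgeAffectsConsumer("orders.eu.>", []string{"orders.us.new", "orders.eu.new"})) // true
	fmt.Println(purgeAffectsConsumer("orders.eu.>", []string{"orders.us.new"}))                  // false
}
```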
### Server and client version
server version 2.10.5
### Host environment
_No response_
### Steps to reproduce
_No response_
|
https://github.com/nats-io/nats-server/issues/4826
|
https://github.com/nats-io/nats-server/pull/4873
|
1342aa1371b81f0bfa0f324b42b4b85cbfe759fd
|
3ba8e175fa7bfc754e73b132bc9f93dc158e2c46
| 2023-11-28T17:46:22Z |
go
| 2023-12-12T17:06:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,809 |
["server/consumer.go", "server/stream.go"]
|
Panic when updating consumer config
|
### Observed behavior
I'm updating a consumer with the nats.go client (using `UpdateConsumer`) to change the `FilterSubjects` field, and it sometimes causes the server to crash with a panic:
```
fatal error: sync: Unlock of unlocked RWMutex
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x9e4749]
goroutine 579496 [running]:
sync.fatal({0xb6bdfd?, 0x0?})
runtime/panic.go:1061 +0x18
sync.(*RWMutex).Unlock(0xc00169c00c)
sync/rwmutex.go:209 +0x45
panic({0xaa8e80?, 0x1041fb0?})
runtime/panic.go:914 +0x21f
github.com/nats-io/nats-server/v2/server.(*stream).swapSigSubs(0x0, 0xc00169c000, {0xc001dcc000, 0xf0, 0x0?})
github.com/nats-io/nats-server/v2/server/stream.go:5190 +0x49
github.com/nats-io/nats-server/v2/server.(*consumer).updateConfig(0xc00169c000, 0xc001ebcdd0)
github.com/nats-io/nats-server/v2/server/consumer.go:1871 +0xb8d
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAssignment(0xc00017d180, 0xc001ebcdd0, {0x0, 0x0}, 0x0, 0x0, 0x0)
github.com/nats-io/nats-server/v2/server/consumer.go:782 +0x1225
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAction(...)
github.com/nats-io/nats-server/v2/server/consumer.go:694
github.com/nats-io/nats-server/v2/server.(*Server).jsConsumerCreateRequest(0xc000198d80, 0x0?, 0xc000ccd980, 0x3d?, {0xc002688090, 0x2a}, {0xc000620810, 0x11}, {0xc002086000, 0x3092, ...})
github.com/nats-io/nats-server/v2/server/jetstream_api.go:3901 +0x14b4
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch(0xc00021c100, 0xc000218840, 0xc000ccd980, 0xc0001a3900, {0xc002688090, 0x2a}, {0xc000620810, 0x11}, {0xc002086000, 0x3092, ...})
github.com/nats-io/nats-server/v2/server/jetstream_api.go:768 +0x25e
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000ccd980, 0x0, 0xc000218840, 0x414705?, {0xc002688060, 0x2a, 0x30}, {0xc0006207f8, 0x11, 0x18}, ...)
github.com/nats-io/nats-server/v2/server/client.go:3425 +0xa90
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000ccd980, 0xc0001a3900, 0xc000f9a120, {0xc002086000, 0x3094, 0x3500}, {0x0, 0x0, 0xc00035a540?}, {0xc002688060, ...}, ...)
github.com/nats-io/nats-server/v2/server/client.go:4477 +0xb70
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000ccd980, 0xc000240090, 0xc0001a3680, {0xc000c6c05d, 0x3043, 0x3fa3})
github.com/nats-io/nats-server/v2/server/client.go:4262 +0x1194
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0x2ad2?, 0x3000?, 0x11?, {0xc000956690?, 0xc000d86268?}, {0xc000b5ad20?, 0x1cb?}, {0xc000c6c05d, 0x3043, 0x3fa3})
github.com/nats-io/nats-server/v2/server/accounts.go:1997 +0x2c
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000ccd980, 0x0, 0xc000219e00, 0xc001a43350?, {0xc000c6c004, 0x2a, 0x3ffc}, {0xc000c6c02f, 0x26, 0x3fd1}, ...)
github.com/nats-io/nats-server/v2/server/client.go:3423 +0xb34
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000ccd980, 0xc0001a3680, 0xc002370330, {0xc000c6c05d, 0x3043, 0x3fa3}, {0x0, 0x0, 0x5?}, {0xc000c6c004, ...}, ...)
github.com/nats-io/nats-server/v2/server/client.go:4477 +0xb70
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc000ccd980, {0xc000c6c05d, 0x3043, 0x3fa3})
github.com/nats-io/nats-server/v2/server/client.go:3897 +0xc88
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc000ccd980?, {0xc000c6c05d?, 0x57?, 0x3ffc?})
github.com/nats-io/nats-server/v2/server/client.go:3736 +0x37
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc000ccd980, {0xc000c6c000, 0x30a0, 0x4000})
github.com/nats-io/nats-server/v2/server/parser.go:497 +0x204f
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc000ccd980, {0x0, 0x0, 0x0})
github.com/nats-io/nats-server/v2/server/client.go:1371 +0x12fa
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx.func1()
github.com/nats-io/nats-server/v2/server/server.go:3217 +0x25
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
github.com/nats-io/nats-server/v2/server/server.go:3700 +0x32
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine in goroutine 579507
github.com/nats-io/nats-server/v2/server/server.go:3698 +0x145
```
This also causes the nats.go client to panic on [line 439](https://github.com/nats-io/nats.go/blob/v1.31.0/jsm.go#L439), because `info.Config` is nil.
### Expected behavior
No panic and updated consumer config.
### Server and client version
server version: 2.10.5
nats.go version: 1.31.0
### Host environment
linux, docker, amd64
### Steps to reproduce
Update the same consumer concurrently with different `FilterSubjects` values (see the sketch below).
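A minimal Go sketch of that reproduction — concurrent `UpdateConsumer` calls on the same durable with different `FilterSubjects` — might look like this; the stream and consumer names are placeholders.
```go
package main

import (
	"fmt"
	"sync"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()
	js, _ := nc.JetStream()

	// Placeholder stream and durable names.
	js.AddStream(&nats.StreamConfig{Name: "EVENTS", Subjects: []string{"events.>"}})
	js.AddConsumer("EVENTS", &nats.ConsumerConfig{
		Durable:        "worker",
		AckPolicy:      nats.AckExplicitPolicy,
		FilterSubjects: []string{"events.a"},
	})

	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Every goroutine updates the same consumer with a different filter set.
			_, err := js.UpdateConsumer("EVENTS", &nats.ConsumerConfig{
				Durable:        "worker",
				AckPolicy:      nats.AckExplicitPolicy,
				FilterSubjects: []string{fmt.Sprintf("events.%d", i)},
			})
			if err != nil {
				fmt.Println("update error:", err)
			}
		}(i)
	}
	wg.Wait()
}
```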
|
https://github.com/nats-io/nats-server/issues/4809
|
https://github.com/nats-io/nats-server/pull/4818
|
399590f0160dc3c3cb6e51eb799cceae6174d0f8
|
41918d686eae3d18771cbc4f3958ca2b7a3b06bc
| 2023-11-21T15:21:23Z |
go
| 2023-11-26T04:55:46Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,775 |
["server/filestore.go", "server/memstore.go"]
|
Silent data loss: enforceMsgLimit() and enforceBytesLimit() should never modify streams configured with DiscardPolicy.NEW
|
### Observed behavior
In https://github.com/nats-io/nats-server/issues/4771, Maurice and I had stumbled across an issue where JetStream was underestimating the size of an incoming message.
The expected behavior was that DiscardPolicy.NEW would reject the incoming message for [violating the stream's max_bytes configuration](https://github.com/MauriceVanVeen/nats-server/blob/bf8ac377f1e549c599be7b2a627e312c24777805/server/filestore.go#L2984). Instead, the DiscardPolicy.NEW conditions admitted the message into the queue.
Later, a call to [enforceBytesLimit()](https://github.com/MauriceVanVeen/nats-server/blob/bf8ac377f1e549c599be7b2a627e312c24777805/server/filestore.go#L3064) made a separate calculation and determined that the stream exceeded its limits, and reacted by pruning messages from the front of the stream in the manner of DiscardPolicy.OLD.
Falling back from DiscardPolicy.NEW behavior to DiscardPolicy.OLD behavior risks silent data loss.
### Expected behavior
Clients using DiscardPolicy.NEW streams MUST be able to depend on delivery confirmation being a reliable indicator that their message has been durably persisted. Deletion of the message should occur only as explicit delete operations by a client, or by the stream's RetentionPolicy.
[filestore.enforceMsgLimit() and filestore.enforceBytesLimit()](https://github.com/MauriceVanVeen/nats-server/blob/bf8ac377f1e549c599be7b2a627e312c24777805/server/filestore.go#L3233C1-L3257C2)
and [memstore.enforceMsgLimit() and memstore.enforceBytesLimit()](https://github.com/MauriceVanVeen/nats-server/blob/bf8ac377f1e549c599be7b2a627e312c24777805/server/memstore.go#L552-L570) should not discard messages from a stream if it is configured with DiscardPolicy.NEW even if that stream has somehow gotten into a state where it is exceeding its configured limits.
### Server and client version
nats-server v2.10.4
### Host environment
MacBook Pro, 2.6GHz 6-Core Intel Core i7, 16GB RAM
### Steps to reproduce
This behavior can be induced via the reproduction procedure in https://github.com/nats-io/nats-server/issues/4771 on v2.10.4 OR simply by editing a DiscardPolicy.NEW stream's configuration with lower limits.
**1/ Start NATS server with JetStream**
```
% nats-server -js
[28834] 2023/11/08 18:15:51.624371 [INF] Starting nats-server
[28834] 2023/11/08 18:15:51.624608 [INF] Version: 2.10.4
```
**2/ Create and populate an unlimited workqueue with DiscardPolicy.NEW**
```
% nats stream create MyTinyStream
? Subjects tiny
? Storage memory
? Replication 1
? Retention Policy Work Queue
? Discard Policy New
? Stream Messages Limit -1
? Per Subject Messages Limit -1
? Total Stream Size -1
? Message TTL -1
? Max Message Size -1
? Duplicate tracking time window 2m0s
? Allow message Roll-ups No
? Allow message deletion Yes
? Allow purging subjects or the entire stream Yes
Stream MyTinyStream was created
Information for Stream MyTinyStream created 2023-11-08 18:17:16
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
First Sequence: 0
Last Sequence: 0
Active Consumers: 0
% nats stream info --json MyTinyStream
{
"config": {
"name": "MyTinyStream",
"subjects": [
"tiny"
],
"retention": "workqueue",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "memory",
"discard": "new",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": true,
"mirror_direct": false
},
"created": "2023-11-08T23:17:16.589945Z",
"state": {
"messages": 0,
"bytes": 0,
"first_seq": 0,
"first_ts": "0001-01-01T00:00:00Z",
"last_seq": 0,
"last_ts": "0001-01-01T00:00:00Z",
"consumer_count": 0
},
"cluster": {
"leader": "NAVJAZ5TUGLOX5EF4OWQKQVPBEEFX3R24UENTUNV3UQQVC3AMASDJEZO"
},
"ts": "2023-11-08T23:17:43.401458Z"
}
% nats req --count 1000 tiny payload
1000 / 1000 [===========================================================================================================================================] 0s
% nats stream info MyTinyStream
Information for Stream MyTinyStream created 2023-11-08 18:17:16
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 1,000
Bytes: 26 KiB
First Sequence: 1 @ 2023-11-08 18:18:04 UTC
Last Sequence: 1,000 @ 2023-11-08 18:18:05 UTC
Active Consumers: 0
Number of Subjects: 1
```
**3/ Reduce the queue's max_msgs or max_bytes limits.**
```
% nats stream edit MyTinyStream --max-msgs 100
Differences (-old +new):
api.StreamConfig{
... // 4 identical fields
MaxConsumers: -1,
MaxMsgsPer: -1,
- MaxMsgs: -1,
+ MaxMsgs: 100,
MaxBytes: -1,
MaxAge: s"0s",
... // 22 identical fields
}
? Really edit Stream MyTinyStream Yes
Stream MyTinyStream was updated
Information for Stream MyTinyStream created 2023-11-08 18:17:16
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgments: true
Discard Policy: New
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: 100
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 100
Bytes: 2.6 KiB
First Sequence: 901 @ 2023-11-08 18:18:05 UTC
Last Sequence: 1,000 @ 2023-11-08 18:18:05 UTC
Active Consumers: 0
Number of Subjects: 1
```
**4/ Observe that the DiscardPolicy.NEW queue has eliminated messages from the head of the stream in the style of DiscardPolicy.OLD even though applications think the message was durably persisted.**
```
Messages: 100
Bytes: 2.6 KiB
First Sequence: 901 @ 2023-11-08 18:18:05 UTC
Last Sequence: 1,000 @ 2023-11-08 18:18:05 UTC
```
This was a contrived manual reproduction procedure, but it risks occurring due to bugs like https://github.com/nats-io/nats-server/issues/4771 which cause DiscardPolicy.OLD behavior on DiscardPolicy.NEW streams.
|
https://github.com/nats-io/nats-server/issues/4775
|
https://github.com/nats-io/nats-server/pull/4802
|
108f76ca3b10a2cf80fbe275537c5a475b98749e
|
fe7394e58a9bf6edb7e76b42ef294d11c9ff7f33
| 2023-11-08T23:23:10Z |
go
| 2023-11-19T20:33:46Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,771 |
["server/filestore.go", "server/filestore_test.go", "server/memstore.go", "server/memstore_test.go"]
|
Silent data-loss when combining max_msgs, max_bytes, and DiscardPolicy.NEW.
|
### Observed behavior
Work queues with DiscardPolicy.NEW and low values of max_bytes will successfully acknowledge messages that they have not persisted, silently losing data without informing the client of the failure.
### Expected behavior
Work queues with DiscardPolicy.NEW and low values of max_bytes and max_msgs should reject clients' attempts to publish messages that would breach the stream's configured limits.
### Server and client version
Reproduced on two versions of NATS server:
```
[73342] 2023/11/08 14:19:20.642846 [INF] Version: 2.9.21
[73342] 2023/11/08 14:19:20.642849 [INF] Git: [not set]
[73342] 2023/11/08 14:19:20.642852 [INF] Name: NDYXOCAKZ5Q6GRYV7DVFSBGPLDWNJEDX5CHMHZQMDQZFDCIFR4PAMZRM
[73342] 2023/11/08 14:19:20.642856 [INF] Node: IdiAg2WV
[73342] 2023/11/08 14:19:20.642858 [INF] ID: NDYXOCAKZ5Q6GRYV7DVFSBGPLDWNJEDX5CHMHZQMDQZFDCIFR4PAMZRM
```
And
```
[76874] 2023/11/08 14:40:51.494791 [INF] Version: 2.10.4
[76874] 2023/11/08 14:40:51.494795 [INF] Git: [not set]
[76874] 2023/11/08 14:40:51.494798 [INF] Name: NB7AT3TC4PCRBMLBLXLLFKOJWNCCCERB2V7KBT4GNMWBLDYLVQCORAYW
[76874] 2023/11/08 14:40:51.494801 [INF] Node: O0yFs2Zy
[76874] 2023/11/08 14:40:51.494803 [INF] ID: NB7AT3TC4PCRBMLBLXLLFKOJWNCCCERB2V7KBT4GNMWBLDYLVQCORAYW
```
NATSCLI:
```
% nats --version
0.0.35
```
### Host environment
MacBook Pro, 2.6GHz 6-Core Intel Core i7, 16GB RAM
### Steps to reproduce
**1/ Launch NATS server w/ Jetstream.**
```
% nats-server -js
[72908] 2023/11/08 14:15:41.350218 [INF] Starting nats-server
[72908] 2023/11/08 14:15:41.350396 [INF] Version: 2.9.21
```
**2/ Provision a small message-limited workqueue with DiscardPolicy.NEW.**
```
% nats stream add
? Stream Name MyTinyStream
? Subjects tiny
? Storage memory
? Replication 1
? Retention Policy Work Queue
? Discard Policy New
? Stream Messages Limit 10
? Per Subject Messages Limit -1
? Total Stream Size -1
? Message TTL -1
? Max Message Size -1
? Duplicate tracking time window 2m0s
? Allow message Roll-ups No
? Allow message deletion Yes
? Allow purging subjects or the entire stream Yes
Stream MyTinyStream was created
Information for Stream MyTinyStream created 2023-11-08 14:19:52
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgements: true
Discard Policy: New
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: 10
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
% nats stream info --json MyTinyStream
{
"config": {
"name": "MyTinyStream",
"subjects": [
"tiny"
],
"retention": "workqueue",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": 10,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "memory",
"discard": "new",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
},
"created": "2023-11-08T19:19:52.186033Z",
"state": {
"messages": 0,
"bytes": 0,
"first_seq": 0,
"first_ts": "0001-01-01T00:00:00Z",
"last_seq": 0,
"last_ts": "0001-01-01T00:00:00Z",
"consumer_count": 0
},
"cluster": {
"leader": "NDYXOCAKZ5Q6GRYV7DVFSBGPLDWNJEDX5CHMHZQMDQZFDCIFR4PAMZRM"
}
}
```
**3/ As expected, Messages beyond max_msgs are rejected with an error:**
```
% nats req --count 11 tiny 'payload'
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 320.368µs
{"stream":"MyTinyStream", "seq":1}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 125.79µs
{"stream":"MyTinyStream", "seq":2}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 1.678709ms
{"stream":"MyTinyStream", "seq":3}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 115.791µs
{"stream":"MyTinyStream", "seq":4}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 118.04µs
{"stream":"MyTinyStream", "seq":5}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 159.818µs
{"stream":"MyTinyStream", "seq":6}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 129.754µs
{"stream":"MyTinyStream", "seq":7}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 110.903µs
{"stream":"MyTinyStream", "seq":8}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 123.284µs
{"stream":"MyTinyStream", "seq":9}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 102.729µs
{"stream":"MyTinyStream", "seq":10}
14:20:24 Sending request on "tiny"
14:20:24 Received with rtt 153.29µs
{"error":{"code":503,"err_code":10077,"description":"maximum messages exceeded"},"stream":"MyTinyStream","seq":0}
```
**4/ Delete and recreate the stream with both max_msgs=10 and max_bytes=100, DiscardPolicy=NEW:**
```
% nats stream add
? Stream Name MyTinyStream
? Subjects tiny
? Storage memory
? Replication 1
? Retention Policy Work Queue
? Discard Policy New
? Stream Messages Limit 10
? Per Subject Messages Limit -1
? Total Stream Size 100
? Message TTL -1
? Max Message Size -1
? Duplicate tracking time window 2m0s
? Allow message Roll-ups No
? Allow message deletion Yes
? Allow purging subjects or the entire stream Yes
Stream MyTinyStream was created
Information for Stream MyTinyStream created 2023-11-08 14:26:22
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgements: true
Discard Policy: New
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: 10
Maximum Per Subject: unlimited
Maximum Bytes: 100 B
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
% nats stream info --json MyTinyStream
{
"config": {
"name": "MyTinyStream",
"subjects": [
"tiny"
],
"retention": "workqueue",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": 10,
"max_bytes": 100,
"max_age": 0,
"max_msg_size": -1,
"storage": "memory",
"discard": "new",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
},
"created": "2023-11-08T19:26:22.556543Z",
"state": {
"messages": 0,
"bytes": 0,
"first_seq": 0,
"first_ts": "0001-01-01T00:00:00Z",
"last_seq": 0,
"last_ts": "0001-01-01T00:00:00Z",
"consumer_count": 0
},
"cluster": {
"leader": "NDYXOCAKZ5Q6GRYV7DVFSBGPLDWNJEDX5CHMHZQMDQZFDCIFR4PAMZRM"
}
}
```
**5/ NATS acknowledges messages well beyond the stream’s configured limits.**
```
% nats req --count 1000 tiny 'payload'
1000 / 1000 [=======================================================================================================================================================] 0s
davcote@3c22fbc65918 ~ % nats req --count 10 tiny 'payload'
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 204.398µs
{"stream":"MyTinyStream", "seq":1001}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 201.685µs
{"stream":"MyTinyStream", "seq":1002}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 364.434µs
{"stream":"MyTinyStream", "seq":1003}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 184.403µs
{"stream":"MyTinyStream", "seq":1004}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 125.532µs
{"stream":"MyTinyStream", "seq":1005}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 123.309µs
{"stream":"MyTinyStream", "seq":1006}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 122.677µs
{"stream":"MyTinyStream", "seq":1007}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 292.402µs
{"stream":"MyTinyStream", "seq":1008}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 124.354µs
{"stream":"MyTinyStream", "seq":1009}
14:29:15 Sending request on "tiny"
14:29:15 Received with rtt 110.642µs
{"stream":"MyTinyStream", "seq":1010}
```
**6/ The stream has silently discarded all but three messages, despite positively acknowledging their storage.**
```
% nats stream info MyTinyStream
Information for Stream MyTinyStream created 2023-11-08 14:29:02
Subjects: tiny
Replicas: 1
Storage: Memory
Options:
Retention: WorkQueue
Acknowledgements: true
Discard Policy: New
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: 10
Maximum Per Subject: unlimited
Maximum Bytes: 100 B
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 3
Bytes: 81 B
FirstSeq: 1,008 @ 2023-11-08T19:29:15 UTC
LastSeq: 1,010 @ 2023-11-08T19:29:15 UTC
Active Consumers: 0
Number of Subjects: 1
```
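The same check can be done programmatically; a compact Go sketch that counts positive publish acks and compares them with the stream's reported message count is below. It assumes a local `nats-server -js` and the nats.go client, mirroring the CLI configuration above.
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()
	js, _ := nc.JetStream()

	js.AddStream(&nats.StreamConfig{
		Name:      "MyTinyStream",
		Subjects:  []string{"tiny"},
		Storage:   nats.MemoryStorage,
		Retention: nats.WorkQueuePolicy,
		Discard:   nats.DiscardNew,
		MaxMsgs:   10,
		MaxBytes:  100,
	})

	acked := 0
	for i := 0; i < 1000; i++ {
		if _, err := js.Publish("tiny", []byte("payload")); err == nil {
			acked++
		}
	}

	si, _ := js.StreamInfo("MyTinyStream")
	// With DiscardNew these two numbers should match; the report above shows they do not.
	fmt.Printf("acked=%d stored=%d\n", acked, si.State.Msgs)
}
```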
|
https://github.com/nats-io/nats-server/issues/4771
|
https://github.com/nats-io/nats-server/pull/4772
|
57f42d5085830756c5c097bf4c9ca72b42962948
|
cef4c46ef8086b3a61f58ab8b32482049daf17cb
| 2023-11-08T19:50:50Z |
go
| 2023-11-08T21:06:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,765 |
["server/leafnode.go", "server/leafnode_test.go"]
|
Leafnode websocket connections: subpath in wrong order
|
### Observed behavior
When connecting leafnodes over websockets via a reverse proxy (e.g. in Kubernetes), the websocket subpath is appended after the static `/leafnode` path instead of being prepended before it.
Currently, if in a leafnode configuration the following config is given:
```
leafnodes {
remotes = [
{
url: "wss://user:[email protected]:443/nats-leafnode/",
account: "ACC"
},
]
}
```
The websocket tries to connect to the following URL:
`server.host.org:443/leafnodenats-leafnode/` instead of `server.host.org:443/nats-leafnode/leafnode`
### Expected behavior
The websocket should connect to
`server.host.org:443/nats-leafnode/leafnode` instead of `server.host.org:443/leafnodenats-leafnode/`
### Server and client version
Both at 2.10.4
### Host environment
_No response_
### Steps to reproduce
Deploy a NATS server behind an NGINX with websocket support, which rewrites the websocket connection URL from somehost.org/nats/leafnode to somehost.org/leafnode.
Have a NATS server with leafnode support running behind the proxy, and a NATS server acting as a leafnode connecting through the reverse proxy with the following config:
```
leafnodes {
remotes = [
{
url: "ws://user:[email protected]:443/nats/",
account: "ACC"
},
]
}
```
# Additional Info
The following line probably causes this issue:
https://github.com/nats-io/nats-server/blob/9c2e109f9f770a2c74e33d2c2a675e120b0065c4/server/leafnode.go#L2792
and should probably be changed to something like:
```
path = curPath + path
```
|
https://github.com/nats-io/nats-server/issues/4765
|
https://github.com/nats-io/nats-server/pull/4770
|
cef4c46ef8086b3a61f58ab8b32482049daf17cb
|
8d3253d907d8c3b4a09f714eb4a7b1695e8608dd
| 2023-11-08T09:32:54Z |
go
| 2023-11-08T21:30:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,732 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
server should not allow stream edit to exceed limits
|
### Observed behavior
On a nats system with these tiered limits:
```
$ nats account info
...
Account Limits:
Max Message Payload: 1.0 MiB
Tier: R1:
Configuration Requirements:
Stream Requires Max Bytes Set: true
Consumer Maximum Ack Pending: Unlimited
Stream Resource Usage Limits:
Memory: 0 B of 0 B
Memory Per Stream: Unlimited
Storage: 0 B of 2.5 GiB
Storage Per Stream: 5.0 GiB
Streams: 0 of 5
Consumers: 0 of 50
```
(Note R3 allows 0 streams)
I can not create a R3 stream:
```
$ nats s add test --subjects js.in.test --defaults
nats: error: could not create Stream: no JetStream default or applicable tiered limit present (10120))
```
So I add an R1 stream and then edit it to R3:
```
$ nats s add test --subjects js.in.test --defaults
...
Replicas: 1
...
```
I can then edit the R1 stream to an R3:
```
$ nats s edit --replicas 3 test -f
...
Replicas: 3
...
Cluster Information:
Name: ngstest-aws-useast2
Leader:
Replica: aws-useast2-natscj1-1, current, not seen
Replica: aws-useast2-natscj1-2, current, seen 154ms ago
Replica: aws-useast2-natscj1-3, current, not seen
```
But it's completely inoperable: it can never get quorum, cannot be scaled down, and can only be deleted.
Account info now shows:
```
Account Limits:
Max Message Payload: 1.0 MiB
Tier: R1:
Configuration Requirements:
Stream Requires Max Bytes Set: true
Consumer Maximum Ack Pending: Unlimited
Stream Resource Usage Limits:
Memory: 0 B of 0 B
Memory Per Stream: Unlimited
Storage: 0 B of 2.5 GiB
Storage Per Stream: 5.0 GiB
Streams: 0 of 5
Consumers: 0 of 50
Tier: R3:
Configuration Requirements:
Stream Requires Max Bytes Set: false
Consumer Maximum Ack Pending: Unlimited
Stream Resource Usage Limits:
Memory: 0 B of 0 B
Memory Per Stream: Unlimited
Storage: 0 B of 0 B
Storage Per Stream: Unlimited
Streams: 1 of 0
Consumers: 0 of 0
```
Note that I now have a Tier: R3, which I didn't have before, and it shows 1 of 0 streams.
### Expected behavior
The server should not allow the stream to be edited into an impossible scenario.
### Server and client version
Server 2.10.4 cli main
### Host environment
internal test env
|
https://github.com/nats-io/nats-server/issues/4732
|
https://github.com/nats-io/nats-server/pull/4738
|
f4046401a3bef11b22c6ea358fb42cd5c5944051
|
7d0e27a5d57ae2de4ffeb651fd2c852f3836be9e
| 2023-11-01T16:03:51Z |
go
| 2023-11-02T17:16:17Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,707 |
["server/jetstream_cluster.go", "server/jetstream_consumer_test.go"]
|
Different behavior between single node and clustered nats-servers regarding 'DeliverLastPerSubjectPolicy' with multiple subjects.
|
### Observed behavior
We have Go code that runs an ordered consumer with two subjects and the 'DeliverLastPerSubjectPolicy'. It works great against a single-node nats-server 2.10.3, but if we use the same code against the three-node cluster, we get a strange error.
I checked out 'nats.go' and added a line to debug what is sent to nats-server. See image for reference:
<img width="1285" alt="image" src="https://github.com/nats-io/nats-server/assets/719156/a90fb3b2-b3b7-4712-997c-976e82953e8b">
This is the local server, single-server with no auth setup:
```
"$JS.API.CONSUMER.CREATE.kcl-orderlist.faQrdj1mbBnQdTMzO7DpLJ_1" / {"stream_name":"kcl-orderlist","config":{"name":"faQrdj1mbBnQdTMzO7DpLJ_1","deliver_policy":"last_per_subject","ack_policy":"none","replay_policy":"instant","inactive_threshold":300000000000,"num_replicas":1,"mem_storage":true,"filter_subjects":["kcl.v1.orderlist.*.*.*.*.data","kcl.v1.orderlist.*.*.info"]},"action":""} / (*jetstream.APIError)(nil)
```
This is the same code against our 3-node cluster, running with an admin user (full access):
```
"$JS.API.CONSUMER.CREATE.kcl-orderlist.b0bjYmCvgD8ZONdSKx3CuF_1" / {"stream_name":"kcl-orderlist","config":{"name":"b0bjYmCvgD8ZONdSKx3CuF_1","deliver_policy":"last_per_subject","ack_policy":"none","replay_policy":"instant","inactive_threshold":300000000000,"num_replicas":1,"mem_storage":true,"filter_subjects":["kcl.v1.orderlist.*.*.*.*.data","kcl.v1.orderlist.*.*.info"]},"action":""} / &jetstream.APIError{Code:400, ErrorCode:0x276e, Description:"consumer delivery policy is deliver last per subject, but FilterSubject is not set"}
```
### Expected behavior
Considering that both servers are the same version (2.10.3) and everything else works fine between the test and the staging system, I expected the same behavior.
In particular, getting an error saying the filter subjects are not set, while they are clearly present in the JSON sent to the server, is what makes me unhappy.
Here are the binary messages as another reference:
<img width="513" alt="image" src="https://github.com/nats-io/nats-server/assets/719156/3a3b6563-6bd2-4fb9-98f6-efacf1c5b0c2">
<img width="522" alt="image" src="https://github.com/nats-io/nats-server/assets/719156/3baeefae-d9c4-43ee-b67e-e1b2f80f9c5d">
### Server and client version
Both nats-server versions are 2.10.3, but I also tried 2.10.1 and current main on the cluster.
The data (stream) in that account on the cluster should be identical to the test system; we delete the stream and recreate all the data beforehand.
### Host environment
We develop the code in Go with a WASM target, which is why I dug into the raw messages to rule out problems caused by that.
The cluster is a three-node cluster running on Debian Linux, with the nats-server compiled using go1.21.1 on that same machine; I also downloaded the official release v2.10.3 Linux amd64 binary and used that instead.
### Steps to reproduce
Hard to tell. I would be grateful for ideas on what could be the culprit and how to test what may be the problem here. A wild guess of mine would be that the storage on the cluster was not created fresh like on the test system, but was created some months ago.
When I change the filter to just one of the two subjects, there is no error. When I use something like `kcl.v1.orderlist.>` there is no error and the app works (kind of, because it then also gets some other, unexpected messages). We could rearrange the subjects to work around this problem, but I wonder what is going on, why the behavior differs, and how this error can happen in the first place.
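For completeness, the client-side call that produces the captured `CONSUMER.CREATE` request looks roughly like this with the new `jetstream` package; the stream name and filter subjects match the report, while the connection details are placeholders.
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Ordered consumer with two filter subjects and last-per-subject delivery,
	// exactly as in the captured request above.
	cons, err := js.OrderedConsumer(ctx, "kcl-orderlist", jetstream.OrderedConsumerConfig{
		FilterSubjects: []string{
			"kcl.v1.orderlist.*.*.*.*.data",
			"kcl.v1.orderlist.*.*.info",
		},
		DeliverPolicy: jetstream.DeliverLastPerSubjectPolicy,
	})
	if err != nil {
		// Against the clustered server this is where the 400 "FilterSubject is
		// not set" error from the report shows up.
		panic(err)
	}

	info, _ := cons.Info(ctx)
	fmt.Println("ordered consumer created:", info.Name)
}
```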
|
https://github.com/nats-io/nats-server/issues/4707
|
https://github.com/nats-io/nats-server/pull/4721
|
a923096b221eafcbe5176733d97d21c809a00b23
|
d59f0c72ba30f002886882442f7729a4031da5c7
| 2023-10-25T17:49:26Z |
go
| 2023-10-30T11:01:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,696 |
["server/filestore.go", "server/memstore.go"]
|
`purge` command purges entire stream with `sequenceId=1`
|
### Observed behavior
When purging a stream (either via CLI or node JSM client), a `sequenceId` parameter value of 1 will cause the entire stream to be purged.
State prior to purge:
```
State:
Messages: 2
Bytes: 249 B
FirstSeq: 1 @ 2023-10-24T12:49:25 UTC
LastSeq: 2 @ 2023-10-24T12:49:25 UTC
Active Consumers: 0
Number of Subjects: 2
```
Either of the following will produce the behavior:
- CLI Command: `nats stream purge stream_name --seq=1`
- Node Client: `await jetstreamManager.streams.purge(stream, {seq: 1})`
State after purge:
```
State:
Messages: 0
Bytes: 0 B
FirstSeq: 3 @ 0001-01-01T00:00:00 UTC
LastSeq: 2 @ 2023-10-24T12:49:25 UTC
Active Consumers: 0
```
### Expected behavior
The node client docs state that the `sequenceId` parameter has the following behavior:
> Purge all messages up to but not including the message with this sequence.
So a value of 1 should leave the stream unchanged
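For reference, here is a minimal Go sketch of the same purge expressed as a raw JetStream API request (connection URL and stream name are placeholders matching the example above):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// Purge request with "seq": 1. Per the documented semantics this should
	// purge everything *up to but not including* sequence 1, i.e. nothing.
	resp, err := nc.Request("$JS.API.STREAM.PURGE.stream_name", []byte(`{"seq": 1}`), 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(resp.Data)) // shows how many messages were actually purged
}
```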
### Server and client version
Server: 2.9.23
Node client: 2.13.1
CLI client: 0.0.35
### Host environment
_No response_
### Steps to reproduce
- Create a limits-retention stream
- Publish a few messages to it
- Run `nats stream purge stream_name --seq=1`
|
https://github.com/nats-io/nats-server/issues/4696
|
https://github.com/nats-io/nats-server/pull/4698
|
2e71381c15abc55557b43ee2b1640f55c1420b06
|
68458643a3827d0bb5a57937be3182f1fb0aff3d
| 2023-10-24T13:00:11Z |
go
| 2023-10-24T16:04:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,685 |
["server/jetstream_test.go", "server/memstore.go"]
|
For K/V with history, WatchAll does not return all keys if jetstream.MemoryStorage is used
|
### What version were you using?
github.com/nats-io/nats-server/v2 v2.10.3
github.com/nats-io/nats.go v1.31.0
### What environment was the server running in?
Linux, x86-64
### Is this defect reproducible?
Yes, I've created a full reproduction here.
https://github.com/a-h/natswatchalltest
### Given the capability you are leveraging, describe your expectation?
I expect that if I...
Put 1 item into a K/V bucket that has history enabled, update it, then add 4 more items... then `WatchAll` should return a watcher whose updates channel yields 5 items (error handling elided for brevity).
```go
// Add 5 items to table.
_, err = kv.Put(ctx, "user1", []byte("value: a"))
// If I comment this update out, I get the expected results...
// But if I leave it in, the iteration stops at user 4.
_, err = kv.Update(ctx, "user1", []byte("value: a"), 1)
_, err = kv.Put(ctx, "user2", []byte("value: b"))
_, err = kv.Put(ctx, "user3", []byte("value: c"))
_, err = kv.Put(ctx, "user4", []byte("value: d"))
_, err = kv.Put(ctx, "user5", []byte("value: e"))
watcher, err := kv.WatchAll(ctx)
for update := range watcher.Updates() {
if update == nil {
break
}
log.Printf("Got %s\n", update.Key())
}
watcher.Stop()
```
### Given the expectation, what is the defect you are observing?
I only receive an output of 4 items.
```
2023/10/20 15:18:19 Got user1
2023/10/20 15:18:19 Got user2
2023/10/20 15:18:19 Got user3
2023/10/20 15:18:19 Got user4
```
|
https://github.com/nats-io/nats-server/issues/4685
|
https://github.com/nats-io/nats-server/pull/4693
|
1f2c5aed28ce148958aac3c4a5adf0767f693ccf
|
2e71381c15abc55557b43ee2b1640f55c1420b06
| 2023-10-20T16:49:30Z |
go
| 2023-10-24T14:24:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,680 |
["server/consumer.go", "server/filestore.go"]
|
Object store - List bucket's content is very slow
|
### Observed behavior
Listing items in a specific bucket of the object store is very slow, even if there is only one item.
### Expected behavior
Expected to get a list of the items faster.
### Server and client version
nats-server 2.10.3
natscli 0.1.1 (that is recommended here https://docs.nats.io/nats-concepts/what-is-nats/walkthrough_setup)
NATS.Client for .NET 1.1.0
### Host environment
Microsoft Windows NT 10.0.19045.0
CPU: AMD Ryzen 9 5950X
NVMe: ADATA SX8200PNP
```
[68776] 2023/10/19 13:47:51.238565 [INF] ---------------- JETSTREAM ----------------
[68776] 2023/10/19 13:47:51.238565 [INF] Max Memory: 95.94 GB
[68776] 2023/10/19 13:47:51.238565 [INF] Max Storage: 1.00 TB
[68776] 2023/10/19 13:47:51.238565 [INF] Store Directory: "e:\Temp\nats\jetstream"
[68776] 2023/10/19 13:47:51.238565 [INF] -------------------------------------------
```
### Steps to reproduce
I created a bucket as in the docs: `nats obj add testbucket`, and added a big file to it (~18G): `nats obj add testbucket somefile.mp4`. When trying to list the objects in the bucket with `nats obj ls testbucket`, I received the message "nats: error: context deadline exceeded" after some delay.
In the server log I got this message:
`[68776] 2023/10/19 14:48:10.303340 [WRN] Internal subscription on "$JS.API.CONSUMER.CREATE.OBJ_testbucket.32td3LZE.$O.testbucket.M.>" took too long: 6.8698428s`
When I increased the timeout to 10s (`nats obj ls testbucket --timeout=10s`), it worked.
I suspected a client-side problem and created a simple app using the latest NATS.Client for .NET 1.1.0:
```
using NATS.Client;
using NATS.Client.JetStream;
using NATS.Client.ObjectStore;
ConnectionFactory factory = new();
IConnection connection = factory.CreateConnection();
var jso = JetStreamOptions
.Builder()
.WithOptOut290ConsumerCreate(true)
.Build()
;
IObjectStore os = connection.CreateObjectStoreContext("testbucket", ObjectStoreOptions.Builder(jso).Build());
DateTime start = DateTime.Now;
var response = os.GetList();
Console.WriteLine(DateTime.Now - start);
```
It was also failing with the default timeout. After some tuning it started working, and over 10 consecutive runs the time to list the single object varied from 3.5s to 5.0s. A bit better, but still very slow. Adding a second 14G file pushed this delay to 7.5+ seconds, another 8G file added one more second, and so on.
At the same time, getting files by name is quick; the download starts immediately.
As far as I can see, the name of the file is added to the end of the stream (the last block file). I don't know yet how the storage is organized internally, but if it is using streams, maybe it would make sense to separate the blob stream from the stream with meta-information, to remove this functional dependency on the blob size? Small objects work much faster.
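For comparison, a rough Go sketch of the same listing call, using nats.go's legacy ObjectStore API (method names may differ slightly across client versions; bucket name as above):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// Generous API timeout, equivalent to --timeout=10s on the CLI.
	js, err := nc.JetStream(nats.MaxWait(10 * time.Second))
	if err != nil {
		panic(err)
	}

	obs, err := js.ObjectStore("testbucket")
	if err != nil {
		panic(err)
	}

	start := time.Now()
	entries, err := obs.List()
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d objects in %s\n", len(entries), time.Since(start))
}
```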
|
https://github.com/nats-io/nats-server/issues/4680
|
https://github.com/nats-io/nats-server/pull/4712
|
2fb1b1b6021ae70ebf727b1b97f816f38966a01f
|
51b6a8ed228edfe290fbf7a21d3d0a617eaa6091
| 2023-10-19T12:27:54Z |
go
| 2023-10-26T22:10:55Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,674 |
["server/accounts_test.go", "server/client.go"]
|
Account service import duplicates MSG as HMSG with remapping subject
|
### What version were you using?
nats-server: v2.10.3
configuration:
```
accounts: {
tenant1: {
jetstream: enabled
users: [
{ user: "one", password: "one" }
]
exports: [
{ stream: "DW.>" }
{ service: "DW.>" }
]
}
global: {
jetstream: enabled
users: [
{ user: "global", password: "global" }
]
imports: [
{ stream: { account: tenant1, subject: "DW.>" }, prefix: tenant1 }
{ service: { account: tenant1, subject: "DW.>" }, to: "tenant1.DW.>" }
]
}
}
no_auth_user: global
```
### What environment was the server running in?
- Darwin Kernel Version 22.6.0 arm64
- docker.io/bitnami/nats:2.9.17-debian-11-r1
### Is this defect reproducible?
Subscribe in one terminal:
`nats --user=one --password=one sub '>'`
Publish in another terminal:
`nats --user=one --password=one pub DW.test.123 'test'`
### Given the capability you are leveraging, describe your expectation?
Receive only one MSG:
```
[#1] Received on "DW.test.123"
test
```
### Given the expectation, what is the defect you are observing?
I receive one MSG and one duplicate HMSG with a strange subject that is missing one token (DW.test.123 -> DW.123):
```
[#1] Received on "DW.test.123"
test
[#2] Received on "DW.123"
Nats-Request-Info: {"acc":"tenant1","rtt":2675625}
test
```
Part of nats-server trace log:
```
[17346] 2023/10/18 08:52:54.543902 [TRC] 127.0.0.1:58872 - cid:8 - <<- [SUB > 1]
[17346] 2023/10/18 08:53:04.042908 [TRC] 127.0.0.1:58910 - cid:9 - <<- [PUB DW.test.123 4]
[17346] 2023/10/18 08:53:04.042914 [TRC] 127.0.0.1:58910 - cid:9 - <<- MSG_PAYLOAD: ["test"]
[17346] 2023/10/18 08:53:04.042922 [TRC] 127.0.0.1:58872 - cid:8 - ->> [MSG DW.test.123 1 4]
[17346] 2023/10/18 08:53:04.043297 [TRC] 127.0.0.1:58872 - cid:8 - ->> [HMSG DW.123 1 64 68]
```
If I disable this service import for the `global` account, there is no HMSG duplication in the `tenant1` account:
`{ service: { account: tenant1, subject: "DW.>" }, to: "tenant1.DW.>" }`
Thus importing the service into the `global` account causes HMSG duplication in the `tenant1` account. Moreover, it remaps the subject, cutting out the second token.
|
https://github.com/nats-io/nats-server/issues/4674
|
https://github.com/nats-io/nats-server/pull/4678
|
8fd0efd2af1d9a1c30b2e7e92d24737611ed32a0
|
44b0221160e67cb1decf91822de46437da812906
| 2023-10-18T06:32:40Z |
go
| 2023-10-19T05:34:29Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,664 |
["server/mqtt.go", "server/mqtt_test.go"]
|
MQTT - memory leak when server handle a retained message publish in handleRetainedMsg
|
### What version were you using?
2.10.2
### What environment was the server running in?
All environments affected
### Is this defect reproducible?
STR:
- publish some messages over the MQTT protocol with the retain flag enabled, incrementing the `sseq` of the messages
- keep the same subject for each message
- the heap grows indefinitely as messages are published and eventually fills the RAM completely
### Given the capability you are leveraging, describe your expectation?
I set up a test to demonstrate the leak in the function `handleRetainedMsg`. I expect the handler to create an internal subscription only once per subject (if a new message arrives on the same subject, the existing subscription is reused). I also expect that when `handleRetainedMsgDel` is called, the subscription related to that subject is deleted.
```
func TestMQTTRetainedMsgDel(t *testing.T) {
o := testMQTTDefaultOptions()
s := testMQTTRunServer(t, o)
defer testMQTTShutdownServer(s)
mc, _ := testMQTTConnect(t, &mqttConnInfo{clientID: "sub", cleanSess: true}, o.MQTT.Host, o.MQTT.Port)
defer mc.Close()
c := testMQTTGetClient(t, s, "sub")
asm := c.mqtt.asm
var i uint64
// 3 messages with increasing sseq are published
for i = 0; i < 3; i++ {
rf := &mqttRetainedMsgRef{sseq: i}
//
asm.handleRetainedMsg("subject", rf)
}
asm.handleRetainedMsgDel("subject", 2)
if asm.sl.count > 0 {
t.Fatalf("all retained messages subs should be removed, but %d still present", asm.sl.count)
}
fmt.Printf("a")
}
```
### Given the expectation, what is the defect you are observing?
On each `handleRetainedMsg` invocation, an additional subscription is created even if the message subject is the same as that of an already mapped message. The result is that
- the subscription list grows with a lot of subscriptions containing the same subject
- the heap is filled
Example: handling 1000 messages with the same subject and increasing `sseq`:

|
https://github.com/nats-io/nats-server/issues/4664
|
https://github.com/nats-io/nats-server/pull/4665
|
e2a44d3e2d6406238630d20d1bb3543991c5cf87
|
2ed0ecb0056d1d3e55e7a5f72968df35fcbf9d19
| 2023-10-16T19:44:20Z |
go
| 2023-10-18T08:34:40Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,653 |
["server/consumer.go", "server/jetstream_consumer_test.go", "server/stream.go"]
|
Updating a consumer on a work queue stream with overlapping subjects passes without an error
|
### What version were you using?
nats-server: `v2.10.2`
nats.go: `v1.30.2`
### What environment was the server running in?
Linux amd64
### Is this defect reproducible?
On a work queue stream, consumers with overlapping subjects are not allowed and on consumer creation we get the error `10100`:
```
filtered consumer not unique on workqueue stream
```
But if we try to update an existing consumer with overlapping subjects, we don't get the same error: the update passes, and the consumers start receiving the same messages on the overlapping subjects.
I've created some tests that show this bug in this [repo](https://github.com/mdawar/jetstream-api-demos/tree/main/consumer-subjects).
Steps to reproduce:
1. Clone the repo:
```sh
git clone [email protected]:mdawar/jetstream-api-demos.git
```
2. Change directory:
```sh
cd jetstream-api-demos/consumer-subjects
```
3. Install dependencies:
```
go mod download
```
4. Run the tests:
```
go test
```
### Given the capability you are leveraging, describe your expectation?
The expected behavior is to get an error if we try to update an existing consumer with overlapping subjects on a work queue stream.
### Given the expectation, what is the defect you are observing?
For work queue streams, overlapping consumer subjects are checked on create and not on update.
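A minimal sketch of the scenario with the legacy nats.go JetStream API (stream and consumer names here are illustrative, not taken from the linked repo):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Drain()
	js, _ := nc.JetStream()

	// Work queue stream.
	_, _ = js.AddStream(&nats.StreamConfig{
		Name:      "WQ",
		Subjects:  []string{"jobs.>"},
		Retention: nats.WorkQueuePolicy,
	})

	// Two consumers on disjoint filter subjects (allowed).
	_, _ = js.AddConsumer("WQ", &nats.ConsumerConfig{
		Durable: "c1", FilterSubject: "jobs.a", AckPolicy: nats.AckExplicitPolicy,
	})
	_, _ = js.AddConsumer("WQ", &nats.ConsumerConfig{
		Durable: "c2", FilterSubject: "jobs.b", AckPolicy: nats.AckExplicitPolicy,
	})

	// Updating c2 to overlap with c1 should be rejected with error 10100, but passes.
	_, err := js.UpdateConsumer("WQ", &nats.ConsumerConfig{
		Durable: "c2", FilterSubject: "jobs.a", AckPolicy: nats.AckExplicitPolicy,
	})
	fmt.Println("update error:", err) // nil on affected versions
}
```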
|
https://github.com/nats-io/nats-server/issues/4653
|
https://github.com/nats-io/nats-server/pull/4654
|
ea0843fe26671a72a19a70367bbb05612be7d990
|
1e8f6bf1e1d088daa04b5b4050304787beb19c5d
| 2023-10-12T08:21:31Z |
go
| 2023-10-12T14:27:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,643 |
["server/filestore.go", "server/filestore_test.go", "server/jetstream_test.go"]
|
KV Get/Watch fails even though key/value is present
|
### What version were you using?
`2.10.1`
### What environment was the server running in?
Docker image `2.10.1` was where we originally saw it but able to reproduce it running `nats-server` binary directly on mac M2
### Is this defect reproducible?
Yes, the overall issue is reproducible, but the specific keys that error out are not the same every time.
1. nats-server `2.9.21` running in production (for months)
2. `nats stream backup` (or `nats account backup`, both appeared to cause the same problem)
3. Spin up brand new nats-server `2.10.1`
4. `nats stream restore` (or `nats account restore`) with the data from the old server to the new server
5. Change number of history values in the `kv` with `nats stream edit {bucket} --max-msgs-per-subject=1`
6. We have one KV that is pretty large (~3GB) and we were getting `nats key not found` errors for keys that we know are in the KV
7. The following were ways that we _could_ access the data:
* `nats kv ls {bucket}` would include the "missing" key
* `nats kv history {bucket} {key}` would show the "missing" key where the last entry was a `PUT` (see screenshot)
* `nats kv get {bucket} {key} --revision={revision}`
8. The following did **not** work:
* `nats kv get {bucket} {key}` would give `nats key not found` errors
* `nats kv watch {bucket} {key}`
Some notes:
* Once the server was in this "error" state, `nats kv get` would fail every time; it wasn't intermittent
* Unable to provide the backups that caused this due to data contents (redacted some info in screenshot)
* This only happened to _some_ of the keys, it wasn't _every_ key

### Given the capability you are leveraging, describe your expectation?
We expect that a `KV Get` would return the value stored in that key.
### Given the expectation, what is the defect you are observing?
Some of the keys were showing `nats key not found` error even though the following were true:
* `nats kv ls` showed the key in list
* `nats kv history {bucket} {key}` showed the last entry was a `PUT` (so it should exist)
* `nats kv get {bucket} {key} --revision={revision}` returned correct value
|
https://github.com/nats-io/nats-server/issues/4643
|
https://github.com/nats-io/nats-server/pull/4656
|
1e8f6bf1e1d088daa04b5b4050304787beb19c5d
|
444a47e97c8514a352915e566600588bf3bded31
| 2023-10-10T00:18:41Z |
go
| 2023-10-12T19:30:06Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,560 |
["server/reload.go"]
|
Race condition in reloading config at reload.go:2012
|
### What version were you using?
2.10.0-beta.56 (latest main branch)
### What environment was the server running in?
Ubuntu 23.04 on intel.
### Is this defect reproducible?
Sometimes, using go test -race on my app
### Given the capability you are leveraging, describe your expectation?
No race conditions occurs
### Given the expectation, what is the defect you are observing?
The race condition occurs during reload of account info in clientHasMovedToDifferentAccount().
The read at reload.go:2012 is outside the mutex lock, while the write at client.go:1974 is within the mutex locked area.
The section of code shows that a read of `c.opts.Nkey` and `c.opts.Username` occurs before the lock on line 2024:
```go
if c.opts.Nkey != "" {
if s.nkeys != nil {
nu = s.nkeys[c.opts.Nkey]
}
} else if c.opts.Username != "" {
if s.users != nil {
u = s.users[c.opts.Username]
}
} else {
return false
}
// Get the current account name
c.mu.Lock()
```
race report:
```
[307924] 2023/09/19 09:24:01.680756 [INF] Reloaded: authorization users
[307924] 2023/09/19 09:24:01.680787 [INF] Reloaded: accounts
==================
WARNING: DATA RACE
Write at 0x00c0001fbed0 by goroutine 279:
reflect.Value.SetString()
/snap/go/10339/src/reflect/value.go:2463 +0x87
encoding/json.(*decodeState).literalStore()
/snap/go/10339/src/encoding/json/decode.go:947 +0x1024
encoding/json.(*decodeState).value()
/snap/go/10339/src/encoding/json/decode.go:388 +0x224
encoding/json.(*decodeState).object()
/snap/go/10339/src/encoding/json/decode.go:755 +0x14ab
encoding/json.(*decodeState).value()
/snap/go/10339/src/encoding/json/decode.go:374 +0xae
encoding/json.(*decodeState).unmarshal()
/snap/go/10339/src/encoding/json/decode.go:181 +0x38f
encoding/json.Unmarshal()
/snap/go/10339/src/encoding/json/decode.go:108 +0x22b
github.com/nats-io/nats-server/v2/server.(*client).processConnect()
/home/henk/dev/hiveot/nats-server/server/client.go:1974 +0x3e7
github.com/nats-io/nats-server/v2/server.(*client).parse()
/home/henk/dev/hiveot/nats-server/server/parser.go:926 +0x1409
github.com/nats-io/nats-server/v2/server.(*client).readLoop()
/home/henk/dev/hiveot/nats-server/server/client.go:1369 +0x1c98
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3132 +0x54
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3609 +0x277
Previous read at 0x00c0001fbed0 by goroutine 276:
github.com/nats-io/nats-server/v2/server.(*Server).clientHasMovedToDifferentAccount()
/home/henk/dev/hiveot/nats-server/server/reload.go:2012 +0x57
github.com/nats-io/nats-server/v2/server.(*Server).reloadAuthorization()
/home/henk/dev/hiveot/nats-server/server/reload.go:1907 +0x58c
github.com/nats-io/nats-server/v2/server.(*Server).applyOptions()
/home/henk/dev/hiveot/nats-server/server/reload.go:1704 +0x351
github.com/nats-io/nats-server/v2/server.(*Server).reloadOptions()
/home/henk/dev/hiveot/nats-server/server/reload.go:1089 +0x217
github.com/nats-io/nats-server/v2/server.(*Server).ReloadOptions()
/home/henk/dev/hiveot/nats-server/server/reload.go:1028 +0x4f0
github.com/hiveot/hub/core/natsmsgserver.(*NatsMsgServer).ApplyAuth()
/home/henk/dev/hiveot/hub/core/natsmsgserver/NatsAuth.go:80 +0xf9c
github.com/hiveot/hub/core/auth/authservice.(*AuthManageProfile).onChange.func1()
/home/henk/dev/hiveot/hub/core/auth/authservice/AuthManageProfile.go:133 +0x6f
```
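For illustration only (not the actual patch), one way to avoid the race would be to copy the fields under the client lock before consulting the server maps:
```go
// Sketch: read c.opts under c.mu instead of before taking the lock.
c.mu.Lock()
nkey, user := c.opts.Nkey, c.opts.Username
c.mu.Unlock()

if nkey != "" {
	if s.nkeys != nil {
		nu = s.nkeys[nkey]
	}
} else if user != "" {
	if s.users != nil {
		u = s.users[user]
	}
} else {
	return false
}
```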
|
https://github.com/nats-io/nats-server/issues/4560
|
https://github.com/nats-io/nats-server/pull/4561
|
ecbfac862ccaf652d2e23a251fbbb1e5fa1212cb
|
271b648fc7f3613636f904787afe6f7fea2ee709
| 2023-09-19T17:32:38Z |
go
| 2023-09-19T18:22:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,549 |
["server/events.go"]
|
Race condition reading c.acc.Name in events.go accountConnectEvent
|
### What version were you using?
2.10.0-beta.56 (latest main branch)
### What environment was the server running in?
Ubuntu 23.04 on intel
### Is this defect reproducible?
Yes, however only in my app at the moment.
That said, after a code inspection this is quite evident.
### Given the capability you are leveraging, describe your expectation?
No race condition appears.
### Given the expectation, what is the defect you are observing?
Data race when testing using go test -race ./...
At server/events.go:2162:
```go
c.mu.Unlock()
subj := fmt.Sprintf(connectEventSubj, c.acc.Name)
```
Cause: The mutex unlock comes before access to c.acc.Name.
Solution: swap the two lines so that c.mu.Unlock() comes after the fmt.Sprintf statement.
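For illustration, the swapped order would look like:
```go
subj := fmt.Sprintf(connectEventSubj, c.acc.Name)
c.mu.Unlock()
```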
events.go:2164
```
==================
WARNING: DATA RACE
Write at 0x00c000454648 by goroutine 200:
github.com/nats-io/nats-server/v2/server.(*client).registerWithAccount()
/home/henk/dev/hiveot/nats-server/server/client.go:786 +0x184
github.com/nats-io/nats-server/v2/server.(*client).RegisterNkeyUser()
/home/henk/dev/hiveot/nats-server/server/client.go:923 +0x66
github.com/nats-io/nats-server/v2/server.(*Server).processClientOrLeafCallout.func5()
/home/henk/dev/hiveot/nats-server/server/auth_callout.go:275 +0x8d5
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg()
/home/henk/dev/hiveot/nats-server/server/client.go:3415 +0xd0f
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults()
/home/henk/dev/hiveot/nats-server/server/client.go:4469 +0x118a
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg()
/home/henk/dev/hiveot/nats-server/server/client.go:3889 +0x16fd
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg()
/home/henk/dev/hiveot/nats-server/server/client.go:3728 +0x88
github.com/nats-io/nats-server/v2/server.(*client).parse()
/home/henk/dev/hiveot/nats-server/server/parser.go:497 +0x3526
github.com/nats-io/nats-server/v2/server.(*client).readLoop()
/home/henk/dev/hiveot/nats-server/server/client.go:1369 +0x1c98
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3132 +0x54
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3609 +0x277
Previous read at 0x00c000454648 by goroutine 215:
github.com/nats-io/nats-server/v2/server.(*Server).accountConnectEvent()
/home/henk/dev/hiveot/nats-server/server/events.go:2164 +0xc3e
github.com/nats-io/nats-server/v2/server.(*Server).isClientAuthorized()
/home/henk/dev/hiveot/nats-server/server/auth.go:383 +0x104
github.com/nats-io/nats-server/v2/server.(*Server).checkAuthentication()
/home/henk/dev/hiveot/nats-server/server/auth.go:353 +0x7a
github.com/nats-io/nats-server/v2/server.(*client).processConnect()
/home/henk/dev/hiveot/nats-server/server/client.go:2040 +0xc70
github.com/nats-io/nats-server/v2/server.(*client).parse()
/home/henk/dev/hiveot/nats-server/server/parser.go:926 +0x1409
github.com/nats-io/nats-server/v2/server.(*client).readLoop()
/home/henk/dev/hiveot/nats-server/server/client.go:1369 +0x1c98
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3132 +0x54
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3609 +0x277
Goroutine 200 (running) created at:
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine()
/home/henk/dev/hiveot/nats-server/server/server.go:3605 +0x2cb
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx()
/home/henk/dev/hiveot/nats-server/server/server.go:3132 +0x16d5
github.com/nats-io/nats-server/v2/server.(*Server).createClientInProcess()
/home/henk/dev/hiveot/nats-server/server/server.go:2963 +0x58
github.com/nats-io/nats-server/v2/server.(*Server).InProcessConn.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:2607 +0x44
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3609 +0x277
Goroutine 215 (running) created at:
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine()
/home/henk/dev/hiveot/nats-server/server/server.go:3605 +0x2cb
github.com/nats-io/nats-server/v2/server.(*Server).createClientEx()
/home/henk/dev/hiveot/nats-server/server/server.go:3132 +0x16d5
github.com/nats-io/nats-server/v2/server.(*Server).createClient()
/home/henk/dev/hiveot/nats-server/server/server.go:2959 +0x44
github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop.func2()
/home/henk/dev/hiveot/nats-server/server/server.go:2582 +0xe
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:2633 +0x4f
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine.func1()
/home/henk/dev/hiveot/nats-server/server/server.go:3609 +0x277
==================
```
|
https://github.com/nats-io/nats-server/issues/4549
|
https://github.com/nats-io/nats-server/pull/4550
|
0d9328027f1cae08d2c196a5395e8fd722fb5a51
|
6f3805650b4d43ee6200fbe68e9249de715d63ca
| 2023-09-17T03:40:50Z |
go
| 2023-09-17T17:35:34Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,479 |
["server/auth.go"]
|
Authorization callout user not valid (when having users in nats options)
|
### What version were you using?
Latest from main as of 2023-09-02.
### What environment was the server running in?
dev box Ubuntu
### Is this defect reproducible?
Yes. Take a look at auth.go:293
```go
for _, u := range opts.AuthCallout.AuthUsers {
// Check for user in users and nkeys since this is server config.
var found bool
if len(s.users) > 0 {
_, found = s.users[u]
} else if len(s.nkeys) > 0 && !found {
_, found = s.nkeys[u]
}
if !found {
s.Errorf("Authorization callout user %q not valid: %v", u, err)
}
}
```
The statement:
> else if len(s.nkeys) > 0 && !found {
is never run when len(s.users) > 0.
This should simply be an "if len()..." without the else.
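For illustration, the check could be written as two independent `if` statements so the nkeys map is always consulted:
```go
for _, u := range opts.AuthCallout.AuthUsers {
	var found bool
	if len(s.users) > 0 {
		_, found = s.users[u]
	}
	if !found && len(s.nkeys) > 0 {
		_, found = s.nkeys[u]
	}
	if !found {
		s.Errorf("Authorization callout user %q not valid: %v", u, err)
	}
}
```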
### Given the capability you are leveraging, describe your expectation?
No errors should be reported if the callout user exists in nkeys
### Given the expectation, what is the defect you are observing?
I'm observing an incorrect error in the log. The callout nkey does exist, but the check doesn't check the nkeys.
|
https://github.com/nats-io/nats-server/issues/4479
|
https://github.com/nats-io/nats-server/pull/4501
|
d07e8eb2109c2bd079863d6f70423a297fedca6a
|
0b35767307ccf21bd5ede127211dc3a966be8406
| 2023-09-03T03:44:52Z |
go
| 2023-09-07T23:19:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,462 |
["server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go"]
|
Monitoring server not sorting connections by idle time correctly
|
### What version were you using?
nats-server: v2.9.21
### What environment was the server running in?
Linux amd64
### Is this defect reproducible?
When sorting connections by idle time, you can see from the results of the API that they're not correctly sorted.
For example, this is the current response of the demo server, for https://demo.nats.io:8222/connz?sort=idle:
<details>
```json
{
"server_id": "NC5BFXEYWVQUHQQMG7FVARRY3AYVOKP7H2FIUI2ZTH55Z5FWHWJ2LOXY",
"now": "2023-08-31T15:19:01.894405231Z",
"num_connections": 34,
"total": 34,
"offset": 0,
"limit": 1024,
"connections": [
{
"cid": 232484,
"kind": "Client",
"type": "nats",
"ip": "46.105.118.190",
"port": 61513,
"start": "2023-08-14T13:24:33.914546665Z",
"last_activity": "2023-08-31T15:10:57.288873282Z",
"rtt": "1m47.922127283s",
"uptime": "17d1h54m27s",
"idle": "8m4s",
"pending_bytes": 0,
"in_msgs": 2647,
"out_msgs": 573,
"in_bytes": 5304731,
"out_bytes": 65807,
"subscriptions": 3,
"lang": ".NET",
"version": "1.0.7.0"
},
{
"cid": 302292,
"kind": "Client",
"type": "nats",
"ip": "35.203.20.39",
"port": 60212,
"start": "2023-08-18T00:28:49.563009727Z",
"last_activity": "2023-08-28T16:47:37.780172094Z",
"rtt": "41.986973ms",
"uptime": "13d14h50m12s",
"idle": "2d22h31m24s",
"pending_bytes": 0,
"in_msgs": 9,
"out_msgs": 9,
"in_bytes": 201841,
"out_bytes": 1354,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.12.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 510887,
"kind": "Client",
"type": "nats",
"ip": "213.21.176.71",
"port": 51602,
"start": "2023-08-31T12:02:42.586895287Z",
"last_activity": "2023-08-31T15:09:09.959922587Z",
"rtt": "170.829052ms",
"uptime": "3h16m19s",
"idle": "9m51s",
"pending_bytes": 0,
"in_msgs": 2322,
"out_msgs": 2322,
"in_bytes": 4179400,
"out_bytes": 3589812,
"subscriptions": 1,
"lang": "python3",
"version": "2.3.1"
},
{
"cid": 512755,
"kind": "Client",
"type": "nats",
"ip": "3.67.185.186",
"port": 2889,
"start": "2023-08-31T13:56:16.917563807Z",
"last_activity": "2023-08-31T15:19:01.853396005Z",
"rtt": "122.629797ms",
"uptime": "1h22m44s",
"idle": "0s",
"pending_bytes": 0,
"in_msgs": 19835,
"out_msgs": 19834,
"in_bytes": 783400,
"out_bytes": 1627544,
"subscriptions": 2,
"lang": "go",
"version": "1.28.0"
},
{
"cid": 512696,
"kind": "Client",
"type": "nats",
"ip": "158.181.140.134",
"port": 60166,
"start": "2023-08-31T12:35:36.434240579Z",
"last_activity": "2023-08-31T13:16:59.400240469Z",
"rtt": "215.60588ms",
"uptime": "2h43m25s",
"idle": "2h2m2s",
"pending_bytes": 0,
"in_msgs": 8,
"out_msgs": 0,
"in_bytes": 523,
"out_bytes": 0,
"subscriptions": 0,
"lang": "go",
"version": "1.28.0"
},
{
"cid": 512738,
"kind": "Client",
"type": "nats",
"ip": "103.228.147.188",
"port": 25359,
"start": "2023-08-31T13:29:32.108773632Z",
"last_activity": "2023-08-31T13:29:35.330791161Z",
"rtt": "329.856582ms",
"uptime": "1h49m29s",
"idle": "1h49m26s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 3,
"name": "products-jetstream",
"lang": "nats.js",
"version": "2.13.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 512737,
"kind": "Client",
"type": "nats",
"ip": "103.228.147.188",
"port": 25214,
"start": "2023-08-31T13:29:32.045875267Z",
"last_activity": "2023-08-31T13:29:35.131993396Z",
"rtt": "271.093209ms",
"uptime": "1h49m29s",
"idle": "1h49m26s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.13.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 512736,
"kind": "Client",
"type": "nats",
"ip": "103.228.147.188",
"port": 25375,
"start": "2023-08-31T13:29:30.564841942Z",
"last_activity": "2023-08-31T13:29:31.609500016Z",
"rtt": "265.152418ms",
"uptime": "1h49m31s",
"idle": "1h49m30s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.13.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 512735,
"kind": "Client",
"type": "nats",
"ip": "103.228.147.188",
"port": 25120,
"start": "2023-08-31T13:29:30.550593092Z",
"last_activity": "2023-08-31T13:29:31.488087925Z",
"rtt": "266.701161ms",
"uptime": "1h49m31s",
"idle": "1h49m30s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.13.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 512757,
"kind": "Client",
"type": "nats",
"ip": "5.172.234.44",
"port": 15761,
"start": "2023-08-31T14:06:38.521316193Z",
"last_activity": "2023-08-31T14:06:39.014367484Z",
"rtt": "177.211815ms",
"uptime": "1h12m23s",
"idle": "1h12m22s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 3,
"lang": "C",
"version": "3.7.0-beta"
},
{
"cid": 190,
"kind": "Client",
"type": "nats",
"ip": "20.198.152.10",
"port": 3529,
"start": "2023-08-04T15:04:18.092808785Z",
"last_activity": "2023-08-04T15:04:18.528062581Z",
"rtt": "209.00094ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m43s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"name": "NATS CLI Version 0.0.18",
"lang": "go",
"version": "1.11.0"
},
{
"cid": 189,
"kind": "Client",
"type": "nats",
"ip": "20.198.152.10",
"port": 3545,
"start": "2023-08-04T15:04:18.063247078Z",
"last_activity": "2023-08-04T15:04:18.490762948Z",
"rtt": "213.378956ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m43s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"name": "NATS CLI Version 0.0.18",
"lang": "go",
"version": "1.11.0"
},
{
"cid": 512793,
"kind": "Client",
"type": "nats",
"ip": "149.100.31.93",
"port": 37007,
"start": "2023-08-31T15:18:59.790010071Z",
"last_activity": "2023-08-31T15:19:00.153539702Z",
"rtt": "179.817393ms",
"uptime": "2s",
"idle": "1s",
"pending_bytes": 0,
"in_msgs": 3,
"out_msgs": 0,
"in_bytes": 159,
"out_bytes": 0,
"subscriptions": 2,
"lang": "python3",
"version": "2.3.1"
},
{
"cid": 510990,
"kind": "Client",
"type": "nats",
"ip": "24.203.20.164",
"port": 38663,
"start": "2023-08-31T12:20:23.537600632Z",
"last_activity": "2023-08-31T12:20:23.878404944Z",
"rtt": "65.69959ms",
"uptime": "2h58m38s",
"idle": "2h58m38s",
"pending_bytes": 0,
"in_msgs": 2,
"out_msgs": 2,
"in_bytes": 71,
"out_bytes": 1088,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.12.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 302267,
"kind": "Client",
"type": "nats",
"ip": "35.203.112.31",
"port": 2568,
"start": "2023-08-17T23:05:24.227520133Z",
"last_activity": "2023-08-17T23:05:24.53277908Z",
"rtt": "40.589101ms",
"uptime": "13d16h13m37s",
"idle": "13d16h13m37s",
"pending_bytes": 0,
"in_msgs": 2,
"out_msgs": 2,
"in_bytes": 71,
"out_bytes": 1046,
"subscriptions": 1,
"lang": "nats.js",
"version": "2.12.1",
"tls_version": "1.3",
"tls_cipher_suite": "TLS_AES_128_GCM_SHA256"
},
{
"cid": 206,
"kind": "Client",
"type": "nats",
"ip": "139.162.174.82",
"port": 55136,
"start": "2023-08-04T15:04:18.859582308Z",
"last_activity": "2023-08-04T15:04:19.145791177Z",
"rtt": "123.871791ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m42s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "go",
"version": "1.25.0"
},
{
"cid": 208,
"kind": "Client",
"type": "nats",
"ip": "172.104.228.102",
"port": 61866,
"start": "2023-08-04T15:04:19.039808071Z",
"last_activity": "2023-08-04T15:04:19.299904845Z",
"rtt": "124.063004ms",
"uptime": "27d0h14m42s",
"idle": "27d0h14m42s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "go",
"version": "1.27.1"
},
{
"cid": 205,
"kind": "Client",
"type": "nats",
"ip": "202.61.207.151",
"port": 12569,
"start": "2023-08-04T15:04:18.826788417Z",
"last_activity": "2023-08-04T15:04:19.084226953Z",
"rtt": "127.8401ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m42s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "go",
"version": "1.27.1"
},
{
"cid": 209,
"kind": "Client",
"type": "nats",
"ip": "34.32.242.155",
"port": 45304,
"start": "2023-08-04T15:04:19.075154418Z",
"last_activity": "2023-08-04T15:04:19.321253743Z",
"rtt": "112.891846ms",
"uptime": "27d0h14m42s",
"idle": "27d0h14m42s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "python3",
"version": "2.2.0"
},
{
"cid": 506461,
"kind": "Client",
"type": "nats",
"ip": "165.22.78.170",
"port": 52462,
"start": "2023-08-31T06:40:38.883766916Z",
"last_activity": "2023-08-31T06:40:39.125471256Z",
"rtt": "119.635873ms",
"uptime": "8h38m23s",
"idle": "8h38m22s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"lang": "go",
"version": "1.28.0"
},
{
"cid": 498304,
"kind": "Client",
"type": "nats",
"ip": "46.235.233.44",
"port": 57392,
"start": "2023-08-29T11:46:13.041284421Z",
"last_activity": "2023-08-29T11:46:13.186823869Z",
"rtt": "137.574244ms",
"uptime": "2d3h32m48s",
"idle": "2d3h32m48s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"lang": "nats.js",
"version": "2.7.0-1"
},
{
"cid": 499881,
"kind": "Client",
"type": "nats",
"ip": "174.168.189.80",
"port": 59688,
"start": "2023-08-30T07:08:02.088854695Z",
"last_activity": "2023-08-30T07:08:02.218662261Z",
"rtt": "57.951947ms",
"uptime": "1d8h10m59s",
"idle": "1d8h10m59s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"name": "NATS CLI Version 0.0.35",
"lang": "go",
"version": "1.19.0"
},
{
"cid": 191,
"kind": "Client",
"type": "nats",
"ip": "18.157.168.211",
"port": 46488,
"start": "2023-08-04T15:04:18.127140273Z",
"last_activity": "2023-08-04T15:04:18.250155296Z",
"rtt": "118.742415ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m43s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"lang": "rust",
"version": "0.20.1"
},
{
"cid": 342794,
"kind": "Client",
"type": "nats",
"ip": "65.49.52.68",
"port": 13309,
"start": "2023-08-22T06:44:24.779488834Z",
"last_activity": "2023-08-22T06:44:24.869589645Z",
"rtt": "45.812053ms",
"uptime": "9d8h34m37s",
"idle": "9d8h34m37s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 3,
"name": "ecommerce-ad-revenue",
"lang": "go",
"version": "1.24.0"
},
{
"cid": 512753,
"kind": "Client",
"type": "nats",
"ip": "20.99.129.96",
"port": 1024,
"start": "2023-08-31T13:52:23.95523995Z",
"last_activity": "2023-08-31T13:52:24.034193931Z",
"rtt": "48.041543ms",
"uptime": "1h26m37s",
"idle": "1h26m37s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "TB NATS connection for Tenant 12c3b560-4096-11eb-a8a0-effd4ecfe4e7",
"lang": "java",
"version": "2.8.0"
},
{
"cid": 430605,
"kind": "Client",
"type": "nats",
"ip": "20.99.129.96",
"port": 4544,
"start": "2023-08-24T20:13:50.202295858Z",
"last_activity": "2023-08-24T20:13:50.26299438Z",
"rtt": "47.997872ms",
"uptime": "6d19h5m11s",
"idle": "6d19h5m11s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "TB NATS connection for Tenant 12c3b560-4096-11eb-a8a0-effd4ecfe4e7",
"lang": "java",
"version": "2.8.0"
},
{
"cid": 188,
"kind": "Client",
"type": "nats",
"ip": "18.144.6.11",
"port": 38406,
"start": "2023-08-04T15:04:17.558775202Z",
"last_activity": "2023-08-04T15:04:17.608147766Z",
"rtt": "42.178401ms",
"uptime": "27d0h14m44s",
"idle": "27d0h14m44s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"lang": "rust",
"version": "0.20.1"
},
{
"cid": 342793,
"kind": "Client",
"type": "nats",
"ip": "65.49.52.92",
"port": 57323,
"start": "2023-08-22T06:44:23.949148968Z",
"last_activity": "2023-08-22T06:44:23.994467709Z",
"rtt": "45.896184ms",
"uptime": "9d8h34m37s",
"idle": "9d8h34m37s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "ecommerce-payment",
"lang": "go",
"version": "1.16.0"
},
{
"cid": 342795,
"kind": "Client",
"type": "nats",
"ip": "65.49.52.92",
"port": 21787,
"start": "2023-08-22T06:44:26.036225687Z",
"last_activity": "2023-08-22T06:44:26.081219652Z",
"rtt": "44.357775ms",
"uptime": "9d8h34m35s",
"idle": "9d8h34m35s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "ecommerce-catalog",
"lang": "go",
"version": "1.16.0"
},
{
"cid": 342796,
"kind": "Client",
"type": "nats",
"ip": "65.49.52.92",
"port": 47070,
"start": "2023-08-22T06:44:26.735661559Z",
"last_activity": "2023-08-22T06:44:26.780439263Z",
"rtt": "44.049149ms",
"uptime": "9d8h34m35s",
"idle": "9d8h34m35s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "ecommerce-payment",
"lang": "go",
"version": "1.16.0"
},
{
"cid": 183,
"kind": "Client",
"type": "nats",
"ip": "52.53.163.133",
"port": 54468,
"start": "2023-08-04T15:04:17.161658177Z",
"last_activity": "2023-08-04T15:04:17.206095417Z",
"rtt": "42.054901ms",
"uptime": "27d0h14m44s",
"idle": "27d0h14m44s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"lang": "rust",
"version": "0.20.1"
},
{
"cid": 495533,
"kind": "Client",
"type": "nats",
"ip": "65.49.52.68",
"port": 13099,
"start": "2023-08-29T07:57:32.623812529Z",
"last_activity": "2023-08-29T07:57:32.665044853Z",
"rtt": "44.234068ms",
"uptime": "2d7h21m29s",
"idle": "2d7h21m29s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "ecommerce-payment",
"lang": "go",
"version": "1.24.0"
},
{
"cid": 512792,
"kind": "Client",
"type": "nats",
"ip": "151.181.215.50",
"port": 46494,
"start": "2023-08-31T15:17:51.748456938Z",
"last_activity": "2023-08-31T15:17:51.788927665Z",
"rtt": "41.857148ms",
"uptime": "1m10s",
"idle": "1m10s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"name": "service",
"lang": "go",
"version": "1.28.0"
},
{
"cid": 202,
"kind": "Client",
"type": "nats",
"ip": "54.159.64.169",
"port": 1318,
"start": "2023-08-04T15:04:18.79136239Z",
"last_activity": "2023-08-04T15:04:18.823379501Z",
"rtt": "32.079352ms",
"uptime": "27d0h14m43s",
"idle": "27d0h14m43s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0,
"lang": "go",
"version": "1.10.0"
}
]
}
```
</details>
These are the idle times in order of their appearance in the response:
```
8m4s
2d22h31m24s
9m51s
0s
2h2m2s
1h49m26s
1h49m26s
1h49m30s
1h49m30s
1h12m22s
27d0h14m43s
27d0h14m43s
1s
2h58m38s
13d16h13m37s
27d0h14m42s
27d0h14m42s
27d0h14m42s
27d0h14m42s
8h38m22s
2d3h32m48s
1d8h10m59s
27d0h14m43s
9d8h34m37s
1h26m37s
6d19h5m11s
27d0h14m44s
9d8h34m37s
9d8h34m35s
9d8h34m35s
27d0h14m44s
2d7h21m29s
1m10s
27d0h14m43s
```
They are clearly not sorted.
### Given the capability you are leveraging, describe your expectation?
I expected the connections to be sorted by their idle time in descending order.
### Given the expectation, what is the defect you are observing?
The issue seems to be in [`byIdle.Less`](https://github.com/nats-io/nats-server/blob/887a4ae692292c4efd95539993b7d665fea78ec5/server/monitor_sort_opts.go#L97).
The function is subtracting the connection's start time from the last activity time.
```go
type byIdle struct{ ConnInfos }
func (l byIdle) Less(i, j int) bool {
ii := l.ConnInfos[i].LastActivity.Sub(l.ConnInfos[i].Start)
ij := l.ConnInfos[j].LastActivity.Sub(l.ConnInfos[j].Start)
return ii < ij
}
```
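A corrected comparison would need to measure idle time relative to the current time rather than the connection start. A sketch, assuming a reference time `now` is captured once before sorting (that field is not in the original struct):
```go
func (l byIdle) Less(i, j int) bool {
	ii := now.Sub(l.ConnInfos[i].LastActivity) // idle duration of connection i
	ij := now.Sub(l.ConnInfos[j].LastActivity) // idle duration of connection j
	return ii < ij
}
```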
|
https://github.com/nats-io/nats-server/issues/4462
|
https://github.com/nats-io/nats-server/pull/4463
|
f6aaea195e7ace8b1b8d019b4dd134f8b4b48061
|
ed8b50d943f3a53eb8384d0657849cc9a319dad9
| 2023-08-31T15:34:35Z |
go
| 2023-09-01T16:44:19Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,445 |
["server/filestore.go", "server/jetstream_test.go"]
|
A KeyValue stream's max messages per subject isn't being respected after updating the stream's max messages per subject to be smaller than before
|
### What version were you using?
`2.9.21`
### What environment was the server running in?
Mac M1
### Is this defect reproducible?
Yes, this is reproducible. Run a nats server with jetstream enabled and then run the following test:
```
package history_test
import (
"encoding/json"
"strconv"
"testing"
"github.com/nats-io/nats.go"
"github.com/stretchr/testify/require"
)
func TestHistory(t *testing.T) {
// Start server with `nats-server --js` to make sure jetstream is turned on
nc, err := nats.Connect(nats.DefaultURL)
require.Nil(t, err)
// Create jetstream and KeyValue
js, err := nc.JetStream()
require.Nil(t, err)
bucketName := "HistoryTest"
kv, err := js.CreateKeyValue(&nats.KeyValueConfig{
Bucket: bucketName,
Description: "Test for number of historical values",
History: 10,
})
require.Nil(t, err)
key := "mykey"
// Write to the same key 50 times
for j := 0; j < 50; j++ {
value := strconv.Itoa(j)
marshalledBytes, err := json.Marshal(value)
require.Nil(t, err)
_, err = kv.Put(key, marshalledBytes)
require.Nil(t, err)
}
// Verify we have the correct amount of history
info, err := js.StreamInfo("KV_" + bucketName)
require.Nil(t, err)
histories, err := kv.History(key)
require.Nil(t, err)
require.Len(t, histories, int(info.Config.MaxMsgsPerSubject))
// Update the stream to store half the amount of history as before
newConfig := info.Config
newConfig.MaxMsgsPerSubject = newConfig.MaxMsgsPerSubject / 2
info, err = js.UpdateStream(&newConfig)
require.Nil(t, err)
// Write 50 more values
for j := 0; j < 50; j++ {
value := strconv.Itoa(50 + j)
marshalledBytes, err := json.Marshal(value)
require.Nil(t, err)
_, err = kv.Put(key, marshalledBytes)
require.Nil(t, err)
}
// Verify that the history has the updated number of max messages per subject
histories, err = kv.History(key)
require.Nil(t, err)
require.Len(t, histories, int(info.Config.MaxMsgsPerSubject)) // We fail here...
}
```
### Given the capability you are leveraging, describe your expectation?
My expectation is that after updating the max messages per subject for the stream, the corresponding KeyValue should keep only the updated amount of history.
### Given the expectation, what is the defect you are observing?
After updating the stream associated with the KV, it is storing more history than it should be storing.
|
https://github.com/nats-io/nats-server/issues/4445
|
https://github.com/nats-io/nats-server/pull/4446
|
f9a2efdc5ce6279690e2f7b162217577c1863fe1
|
abf5e0bc0fca524914201928efde59a6c88ede17
| 2023-08-29T03:10:08Z |
go
| 2023-08-29T23:47:05Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,436 |
["server/monitor.go", "server/monitor_test.go"]
|
HTTP headers are not set for the monitoring server when the status is not 200 OK
|
### What version were you using?
nats-server: v2.9.21
### What environment was the server running in?
Linux amd64
### Is this defect reproducible?
All the monitoring server endpoints return `http.StatusOK` except the `/healthz` endpoint when the health check fails.
Requests made to the `/healthz` endpoint when the health check fails will receive responses with the default HTTP headers, for example the `Content-Type` header will be set to the value `text/plain; charset=utf-8` instead of `application/json` and `application/javascript` for JSONP.
This is happening because setting the headers on an `http.ResponseWriter` has no effect after setting the status code with `ResponseWriter.WriteHeader`.
The function [`ResponseHandler()`](https://github.com/mdawar/nats-server/blob/5b18e80d424926eca48091c6eb2e2a56fdffdc5d/server/monitor.go#L2305) sets the headers, but when the health check fails, the function [`HandleHealthz()`](https://github.com/mdawar/nats-server/blob/5b18e80d424926eca48091c6eb2e2a56fdffdc5d/server/monitor.go#L3080) calls `WriteHeader()` before calling `ResponseHandler()` which causes this issue.
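This is standard `net/http` behavior: headers must be added before the status code is written. For example:
```go
func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json") // must be set first
	w.WriteHeader(http.StatusServiceUnavailable)       // status line and headers are written out here
	w.Header().Set("X-Too-Late", "ignored")            // has no effect on the response anymore
	w.Write([]byte(`{"status":"unavailable"}`))
}
```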
I will add tests that reproduce this issue in a pull request.
### Given the capability you are leveraging, describe your expectation?
The HTTP headers for the `/healthz` endpoint should be set properly when the health check fails.
### Given the expectation, what is the defect you are observing?
Setting the headers has no effect after calling `ResponseWriter.WriteHeader()`.
|
https://github.com/nats-io/nats-server/issues/4436
|
https://github.com/nats-io/nats-server/pull/4437
|
76c394260963fa56f68af8ddc6495014987ff9e5
|
6d6d3cfa5597d8dd377e497cdcb7bd638b34cd2a
| 2023-08-26T10:25:50Z |
go
| 2023-08-31T20:55:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,422 |
["server/monitor.go", "server/monitor_test.go"]
|
CORS requests are not supported by the monitoring server
|
### What version were you using?
nats-server: v2.9.21
### What environment was the server running in?
Linux amd64
### Is this defect reproducible?
Any request from the browser using the [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) will fail with an error: `blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource`
Only [JSONP](https://en.wikipedia.org/wiki/JSONP) requests are supported; this method is old and rarely used in modern frontend development, and it requires a `<script>` tag to be inserted into the DOM to load the resource.
The example from the [monitoring docs](https://docs.nats.io/running-a-nats-service/nats_admin/monitoring#creating-monitoring-applications) uses jQuery's `getJSON` function which loads the URL in a script tag.
```js
$.getJSON('https://demo.nats.io:8222/connz?callback=?', function(data) {
console.log(data);
});
```
Here's a demo that makes requests to the [demo](https://demo.nats.io:8222/) NATS server and shows that only JSONP is supported:
https://codepen.io/mdawar/pen/gOZpXrx
The `serverURL` variable can be swapped to local to make the requests to a local NATS server.
### Given the capability you are leveraging, describe your expectation?
I expect requests to the monitoring server using the browser's Fetch API to succeed just like JSONP requests.
I'm creating an open source monitoring web app that runs in the browser, and I had to implement a custom fetch function that loads the monitoring data by inserting a `<script>` tag into the DOM and then removing it when it's done; this is inconvenient, and we can't handle errors the way we can with the Fetch API.
### Given the expectation, what is the defect you are observing?
The monitoring server needs to set the `Access-Control-Allow-Origin` header to allow CORS requests.
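In its simplest form that would be something like the following in the handler serving the monitoring endpoints (a sketch; the exact policy and allowed origins are up to the server):
```go
// Allow browser (Fetch/XHR) clients to read the monitoring responses.
w.Header().Set("Access-Control-Allow-Origin", "*")

// Optionally answer CORS preflight requests as well.
if r.Method == http.MethodOptions {
	w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
	w.WriteHeader(http.StatusNoContent)
	return
}
```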
|
https://github.com/nats-io/nats-server/issues/4422
|
https://github.com/nats-io/nats-server/pull/4423
|
0d135d416134f116296f6bf06e65009339d1437d
|
5b18e80d424926eca48091c6eb2e2a56fdffdc5d
| 2023-08-23T13:49:00Z |
go
| 2023-08-25T21:49:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,396 |
["server/jetstream_cluster.go", "server/jetstream_super_cluster_test.go"]
|
`nats stream cluster peer-remove` puts R1 stream in non-recoverable state
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
```bash
$ nats-server --version
nats-server: v2.9.21
$ nats --version
0.0.35
```
#### OS/Container environment:
macOS 13.5 (22G74)
#### Steps or code to reproduce the issue:
1. Setup a simple 3-cluster super cluster, each cluster with 1 server.
Use steps from here: https://natsbyexample.com/examples/topologies/supercluster-jetstream/cli
```
$ nats --context east-sys server report jetstream
╭───────────────────────────────────────────────────────────────────────────────────────────────╮
│ JetStream Summary │
├────────┬─────────┬─────────┬───────────┬──────────┬───────┬────────┬──────┬─────────┬─────────┤
│ Server │ Cluster │ Streams │ Consumers │ Messages │ Bytes │ Memory │ File │ API Req │ API Err │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼─────────┤
│ n2 │ central │ 1 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 3 │ 0 │
│ n1* │ east │ 0 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 22 │ 1 │
│ n3 │ west │ 0 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 0 │ 0 │
├────────┼─────────┼─────────┼───────────┼──────────┼───────┼────────┼──────┼─────────┼─────────┤
│ │ │ 1 │ 0 │ 0 │ 0 B │ 0 B │ 0 B │ 25 │ 1 │
╰────────┴─────────┴─────────┴───────────┴──────────┴───────┴────────┴──────┴─────────┴─────────╯
╭────────────────────────────────────────────────────────────╮
│ RAFT Meta Group Information │
├──────┬──────────┬────────┬─────────┬────────┬────────┬─────┤
│ Name │ ID │ Leader │ Current │ Online │ Active │ Lag │
├──────┼──────────┼────────┼─────────┼────────┼────────┼─────┤
│ n1 │ fjFyEjc1 │ yes │ true │ true │ 0.00s │ 0 │
│ n2 │ 44jzkV9D │ │ true │ true │ 0.44s │ 0 │
│ n3 │ BXScrY9i │ │ true │ true │ 0.44s │ 0 │
╰──────┴──────────┴────────┴─────────┴────────┴────────┴─────╯
```
2. Create a simple R1 stream
```
$ nats --context east stream add \
--subjects test \
--storage file \
--replicas 1 \
--retention limits \
--discard old \
--max-age 1m \
--max-msgs=100 \
--max-msgs-per-subject=-1 \
--max-msg-size=-1 \
--max-bytes=-1 \
--dupe-window=1m \
--no-allow-rollup \
--no-deny-delete \
--no-deny-purge \
test
```
3. Verify that stream is created and landed on one of the cluster
```
$ nats --context east stream report
Obtaining Stream stats
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ Stream Report │
├────────┬─────────┬───────────┬───────────┬──────────┬───────┬──────┬─────────┬──────────┤
│ Stream │ Storage │ Placement │ Consumers │ Messages │ Bytes │ Lost │ Deleted │ Replicas │
├────────┼─────────┼───────────┼───────────┼──────────┼───────┼──────┼─────────┼──────────┤
│ test │ File │ │ 0 │ 0 │ 0 B │ 0 │ 0 │ n2* │
╰────────┴─────────┴───────────┴───────────┴──────────┴───────┴──────┴─────────┴──────────╯
```
4. use `peer-remove` command on the newly created stream
```
$ nats --context east stream cluster peer-remove test
? Select a Peer n2
11:33:19 Removing peer "n2"
nats: error: peer remap failed (10075)
```
#### Expected result:
The `peer-remove` command either
- fails with an error message saying the stream cannot be relocated to another server of the same cluster (since all clusters in this super cluster are single-node)
- succeeds and relocates the stream to another cluster.
#### Actual result:
- The `peer-remove` command fails and leaves the stream in a broken intermediate state.
- The stream does not have any replicas
```
$ nats --context east stream report
Obtaining Stream stats
╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ Stream Report │
├────────┬─────────┬───────────┬───────────┬──────────┬───────┬──────┬─────────┬──────────┤
│ Stream │ Storage │ Placement │ Consumers │ Messages │ Bytes │ Lost │ Deleted │ Replicas │
├────────┼─────────┼───────────┼───────────┼──────────┼───────┼──────┼─────────┼──────────┤
│ test │ File │ │ 0 │ 0 │ 0 B │ 0 │ 0 │ │
╰────────┴─────────┴───────────┴───────────┴──────────┴───────┴──────┴─────────┴──────────╯
```
- Any command to manage or inspect the stream returns an error. __There's no way to unblock the stream, or to remove it from the cluster.__
```
$ nats --context east stream edit test
nats: error: could not request Stream test configuration: stream is offline (10118)
$ nats --context east stream rm test
? Really delete Stream test Yes
nats: error: could not remove Stream: stream is offline (10118)
$ nats --context east stream info test
nats: error: could not request Stream info: stream is offline (10118)
```
- It is impossible to create another stream that subscribes to the same subject(s). So when this issue happens, the cluster is in really bad shape: certain subjects can no longer be consumed by JetStream.
|
https://github.com/nats-io/nats-server/issues/4396
|
https://github.com/nats-io/nats-server/pull/4420
|
dc09bb764a59008d288882f2e3500e5228c3005e
|
5a926f1911cf24880d8d66fd3da6429f46920400
| 2023-08-14T18:40:18Z |
go
| 2023-08-23T03:01:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,367 |
["server/client.go", "server/leafnode_test.go"]
|
Leafnode cluster receiving message 3 times?
|
I'm trying to establish communication between two NATS clusters (A and B).
**Cluster A** has multiple isolated customer accounts (`AccountA`, `AccountB`, `AccountC`). These accounts represent multiple instances of the same service running for different customers. So, for example, subject `mysubj` in `AccountA` is completely isolated from `mysubj` in `AccountB`; they are both handled by separate instances of an application, for example.
**Cluster B** has an account W from which I want to be able to send requests to those multiple accounts in cluster A. The idea is to use prefixes in the subject name to differentiate between accounts. For example, to send a request to `AccountA`'s `mysubj` in cluster A, I would make a request to `a.mysubj` from account W in cluster B. Likewise, a request to `b.mysubj` should reach `mysubj` in `AccountB`, and so on.
I'm trying to accomplish this using a leafnode connection where cluster A is the leafnode and cluster B is the hub.
However, I can't seem to get it right. With my current attempt, requests made from W to `a.mysubj` do reach `mysubj` in AccountA of cluster A, but messages are received 3 times rather than once.
In the example below, I included the configuration files and a test script, all of which should be in the same directory. Cluster A (the leafnode) has 3 nodes, but cluster B (the remote) has only 1 node to make it shorter.
What am I missing?
Thank you.
## Reproducible example
### Cluster A
#### cluster-a-common.conf
```
accounts: {
$SYS: {
users: [
{user: admin, password: pwd}
]
},
AccountA: {
jetstream: enabled
mappings: {
a.>: ">"
}
exports: [
{service: a.>}
]
users: [
{user: a, password: a}
]
},
AccountB: {
jetstream: enabled
mappings: {
b.>: ">"
}
exports: [
{service: b.>}
]
users: [
{user: b, password: b}
]
},
AccountC: {
jetstream: enabled
mappings: {
c.>: ">"
}
exports: [
{service: c.>}
]
users: [
{user: c, password: c}
]
},
AGG: {
imports: [
{service: {subject: a.>, account: AccountA}},
{service: {subject: b.>, account: AccountB}},
{service: {subject: c.>, account: AccountC}}
]
users: [
{user: agg, password: agg}
]
},
}
leafnodes {
remotes: [
{
urls: [
nats-leaf://agg:[email protected]:7422,
]
account: AGG
},
]
}
```
#### cluster-a-0.conf
```
port: 4222
http: 8222
server_name:cluster-a-0
jetstream {
max_mem: 1Gi
store_dir: /tmp/cluster-a-0
}
lame_duck_grace_period: 10s
lame_duck_duration: 30s
cluster {
listen: 0.0.0.0:6222
name: cluster-a
routes = [
nats://127.0.0.1:6222,
nats://127.0.0.1:6223,
nats://127.0.0.1:6224,
]
connect_retries: 120
}
include cluster-a-common.conf
```
#### cluster-a-1.conf
```
port: 4223
http: 8223
server_name:cluster-a-1
jetstream {
max_mem: 1Gi
store_dir: /tmp/cluster-a-1
}
lame_duck_grace_period: 10s
lame_duck_duration: 30s
cluster {
listen: 0.0.0.0:6223
name: cluster-a
routes = [
nats://127.0.0.1:6222,
nats://127.0.0.1:6223,
nats://127.0.0.1:6224,
]
connect_retries: 120
}
include cluster-a-common.conf
```
#### cluster-a-2.conf
```
port: 4224
http: 8224
server_name:cluster-a-2
jetstream {
max_mem: 1Gi
store_dir: /tmp/cluster-a-2
}
lame_duck_grace_period: 10s
lame_duck_duration: 30s
cluster {
listen: 0.0.0.0:6224
name: cluster-a
routes = [
nats://127.0.0.1:6222,
nats://127.0.0.1:6223,
nats://127.0.0.1:6224,
]
connect_retries: 120
}
include cluster-a-common.conf
```
### Cluster B
#### cluster-b-0.conf
```
port: 4225
http: 8225
server_name:cluster-b-0
jetstream {
max_mem: 1Gi
store_dir: /tmp/cluster-b-0
}
lame_duck_grace_period: 10s
lame_duck_duration: 30s
accounts {
$SYS: {
users: [
{user: admin, password: pwd}
]
},
AGG: {
jetstream: enabled
exports: [
{service: >}
]
users: [
{user: agg, password: agg}
]
},
W: {
imports: [
{service: {subject: >, account: AGG}}
]
users: [
{user: w, password: w}
]
},
}
leafnodes {
port: 7422
}
```
### test.sh
This script exemplifies the behavior that I'm seeing. When I run a responder for `mysubj` in `AccountA` and then send a request to `a.mysubj` from account `W` in cluster B, the request is received 3 times.
```
#!/bin/bash
set -euo pipefail
nats-server -c cluster-a-0.conf > /dev/null 2>&1 &
nats-server -c cluster-a-1.conf > /dev/null 2>&1 &
nats-server -c cluster-a-2.conf > /dev/null 2>&1 &
nats-server -c cluster-b-0.conf > /dev/null 2>&1 &
sleep 3
curl --fail --silent --retry 5 --retry-delay 1 http://localhost:8222/healthz > /dev/null
curl --fail --silent --retry 5 --retry-delay 1 http://localhost:8223/healthz > /dev/null
curl --fail --silent --retry 5 --retry-delay 1 http://localhost:8224/healthz > /dev/null
curl --fail --silent --retry 5 --retry-delay 1 http://localhost:8225/healthz > /dev/null
# Create a responder for mysubj in AccountA in cluster A
nats -s localhost:4222 reply mysubj 'replyA' --user a --password a &
sleep 1
# Send a request to a.mysubj from W in cluster B
nats -s localhost:4225 request a.mysubj 'requestA' --user w --password w
pkill -P $$
```
### Example output
```
12:37:33 Listening on "mysubj" in group "NATS-RPLY-22"
12:37:34 Sending request on "a.mysubj"
12:37:34 [#0] Received on subject "mysubj":
12:37:34 Nats-Request-Info: {"acc":"W","rtt":2060300}
requestA
12:37:34 [#1] Received on subject "mysubj":
12:37:34 Nats-Request-Info: {"acc":"W","rtt":2060300}
requestA
12:37:34 [#2] Received on subject "mysubj":
12:37:34 Nats-Request-Info: {"acc":"W","rtt":2060300}
requestA
12:37:34 Received with rtt 4.1203ms
replyA
```
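For completeness, the same behavior can be checked without the CLI. Below is a minimal sketch using the nats.go client (my own illustration; the URLs and credentials are taken from the configs above). If the duplication is server-side, this should also report 3 deliveries for a single request.
```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Illustration only: URLs/credentials taken from the configs above.
	// Responder in AccountA on cluster A (node cluster-a-0).
	ncA, err := nats.Connect("nats://127.0.0.1:4222", nats.UserInfo("a", "a"))
	if err != nil {
		panic(err)
	}
	defer ncA.Close()

	var received int32
	if _, err := ncA.Subscribe("mysubj", func(m *nats.Msg) {
		n := atomic.AddInt32(&received, 1)
		fmt.Printf("delivery #%d on %q\n", n, m.Subject)
		_ = m.Respond([]byte("replyA"))
	}); err != nil {
		panic(err)
	}

	// Requester in account W on cluster B.
	ncW, err := nats.Connect("nats://127.0.0.1:4225", nats.UserInfo("w", "w"))
	if err != nil {
		panic(err)
	}
	defer ncW.Close()

	if _, err := ncW.Request("a.mysubj", []byte("requestA"), 2*time.Second); err != nil {
		fmt.Println("request error:", err)
	}

	time.Sleep(time.Second) // allow any duplicate deliveries to arrive
	fmt.Println("total deliveries:", atomic.LoadInt32(&received))
}
```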
|
https://github.com/nats-io/nats-server/issues/4367
|
https://github.com/nats-io/nats-server/pull/4578
|
637d8f292144d8e1b385fb09fa94c26e886987e2
|
fe2c116a6b3c38d1dbaea0dc84fc7cfce0628df3
| 2023-08-04T12:02:06Z |
go
| 2023-09-24T20:54:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,363 |
["server/raft.go"]
|
Did not receive all consumer info results for 'USERS > stream_t3' JetStream cluster consumer 'USERS > stream_t1 > stream_t1_2' has NO quorum
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
`nats-server-v2.9.20-linux-amd64.zip`
#### OS/Container environment:
k8s container, CentOS Compatibility
#### Steps or code to reproduce the issue:
Environment description:
- 3 nodes: jetstream: enabled
- 5+ streams: Retention: Interest, Replicas: 3, Storage: File, ...
- 1+ consumers for each stream: Pull Mode: true, Ack Policy: Explicit, ...
```
nats stream info stream_t3
Information for Stream stream_t3 created 2023-08-03 18:56:03
Subjects: stream_t3.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: 20,000,000
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: 7d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats_cluster
Leader: nats1
Replica: nats0, current, seen 0.62s ago
Replica: nats2, current, seen 0.62s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 1
```
#### How to reproduce the issue:
1. Use cgroups to limit the nats-server write speed to 10 KB/s to simulate a busy disk.
ex:
```
mkdir /sys/fs/cgroup/blkio/writelimit
echo $(pidof nats-server) >> /sys/fs/cgroup/blkio/writelimit/cgroup.procs
echo "253:0 10000 " > /sys/fs/cgroup/blkio/writelimit/blkio.throttle.write_bps_device
```
`253:0` from here:
```
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 400G 0 disk
├─vda1 253:1 0 1G 0 part /boot
└─vda2 253:2 0 399G 0 part /
vdb 253:16 0 8G 0 disk
```
2. Publish and subscribe some messages, then run `nats consumer report somestream`; it sometimes times out:
```
[root@localhost ~]# nats consumer info stream_t3
? Select a Consumer stream_t3_2
nats: error: could not load Consumer stream_t3 > stream_t3_2: context deadline exceeded
```
nats-server.log
```
[2661477] 2023/08/03 19:11:44.295584 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
...
[2661477] 2023/08/03 19:11:50.565116 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
```
nats-server.log on another node:
```
[2025315] 2023/08/03 14:29:13.771080 [WRN] JetStream cluster consumer 'USERS > stream_t1 > stream_t1_2' has NO quorum, stalled.
...
[2025315] 2023/08/03 14:29:41.448070 [WRN] JetStream cluster consumer 'USERS > stream_t1 > stream_t1_2' has NO quorum, stalled.
```
`nats consumer report XXX --trace`:
```
[root@localhost ~]# nats consumer report stream_t3 --trace
20:41:24 >>> $JS.API.STREAM.INFO.stream_t3
null
20:41:24 <<< $JS.API.STREAM.INFO.stream_t3
{"type":"io.nats.jetstream.api.v1.stream_info_response","total":0,"offset":0,"limit":0,"config":{"name":"stream_t3","subjects":["stream_t3.\u003e"],"retention":"interest","max_consumers":-1,"max_msgs":20000000,"max_bytes":-1,"max_age":604800000000000,"max_msgs_per_subject":-1,"max_msg_size":-1,"discard":"old","storage":"file","num_replicas":3,"duplicate_window":120000000000,"allow_direct":false,"mirror_direct":false,"sealed":false,"deny_delete":false,"deny_purge":false,"allow_rollup_hdrs":false},"created":"2023-08-03T10:56:03.111435473Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consumer_count":1},"cluster":{"name":"nats_cluster","leader":"nats1","replicas":[{"name":"nats0","current":true,"active":768639229,"peer":"LWhxtZZD"},{"name":"nats2","current":true,"active":768602295,"peer":"SRLRpmYS"}]}}
20:41:24 >>> $JS.API.STREAM.INFO.stream_t3
null
20:41:24 <<< $JS.API.STREAM.INFO.stream_t3
{"type":"io.nats.jetstream.api.v1.stream_info_response","total":0,"offset":0,"limit":0,"config":{"name":"stream_t3","subjects":["stream_t3.\u003e"],"retention":"interest","max_consumers":-1,"max_msgs":20000000,"max_bytes":-1,"max_age":604800000000000,"max_msgs_per_subject":-1,"max_msg_size":-1,"discard":"old","storage":"file","num_replicas":3,"duplicate_window":120000000000,"allow_direct":false,"mirror_direct":false,"sealed":false,"deny_delete":false,"deny_purge":false,"allow_rollup_hdrs":false},"created":"2023-08-03T10:56:03.111435473Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consumer_count":1},"cluster":{"name":"nats_cluster","leader":"nats1","replicas":[{"name":"nats0","current":true,"active":774041524,"peer":"LWhxtZZD"},{"name":"nats2","current":true,"active":774004590,"peer":"SRLRpmYS"}]}}
20:41:24 >>> $JS.API.CONSUMER.LIST.stream_t3
{"offset":0}
20:41:28 <<< $JS.API.CONSUMER.LIST.stream_t3
{"type":"io.nats.jetstream.api.v1.consumer_list_response","total":0,"offset":0,"limit":256,"consumers":[],"missing":["stream_t3_2"]}
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Consumer report for stream_t3 with **1** consumers │
├──────────┬──────┬────────────┬──────────┬─────────────┬─────────────┬─────────────┬───────────┬─────────┤
│ Consumer │ Mode │ Ack Policy │ Ack Wait │ Ack Pending │ Redelivered │ Unprocessed │ Ack Floor │ Cluster │
├──────────┼──────┼────────────┼──────────┼─────────────┼─────────────┼─────────────┼───────────┼─────────┤
╰──────────┴──────┴────────────┴──────────┴─────────────┴─────────────┴─────────────┴───────────┴─────────╯
```
4. Set the disk write speed limit to 10 MB/s, but the problem still exists.
ex:
```
echo "253:0 10240000" > /sys/fs/cgroup/blkio/writelimit/blkio.throttle.write_bps_device
```
#### Expected result:
`nats consumer info` and `nats consumer report` return consumer information even when the disk is slow.
#### Actual result:
```
[root@localhost ~]# nats consumer info stream_t3
? Select a Consumer stream_t3_2
nats: error: could not load Consumer stream_t3 > stream_t3_2: context deadline exceeded
```
nats0 log:
```
[2661477] 2023/08/03 18:56:03.113500 [DBG] Starting stream monitor for 'USERS > stream_t3' [S-R3F-P0t3poBg]
[2661477] 2023/08/03 18:56:03.568867 [INF] JetStream cluster new stream leader for 'USERS > stream_t3'
[2661477] 2023/08/03 18:56:27.128860 [DBG] JetStream cluster creating raft group:&{Name:C-R3F-P00IVKYA Peers:[SRLRpmYS LWhxtZZD RztkeQup] Storage:File Cluster:nats_cluster Preferred:LWhxtZZD node:<nil>}
[2661477] 2023/08/03 18:56:27.129503 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Started
[2661477] 2023/08/03 18:56:27.129531 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Starting campaign
[2661477] 2023/08/03 18:56:27.129880 [DBG] Starting consumer monitor for 'USERS > stream_t3 > stream_t3_2' [C-R3F-P00IVKYA]
[2661477] 2023/08/03 18:56:27.744702 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to candidate
[2661477] 2023/08/03 18:56:27.744756 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending out voteRequest {term:1 lastTerm:0 lastIndex:0 candidate:LWhxtZZD reply:}
[2661477] 2023/08/03 18:56:27.745092 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:0 peer:RztkeQup granted:true}
[2661477] 2023/08/03 18:56:27.745132 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to leader
[2661477] 2023/08/03 18:56:27.745209 [INF] JetStream cluster new consumer leader for 'USERS > stream_t3 > stream_t3_2'
[2661477] 2023/08/03 18:56:27.745252 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:0 peer:SRLRpmYS granted:true}
[2661477] 2023/08/03 18:58:37.320705 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Installing snapshot of 8 bytes
[2661477] 2023/08/03 19:01:55.183674 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:01:55.183814 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry updating leader to "SRLRpmYS"
[2661477] 2023/08/03 19:01:55.183833 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry detected pindex less than ours: 1:99 vs 1:101
[2661477] 2023/08/03 19:01:55.183884 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Truncating and repairing WAL to Term 1 Index 99
[2661477] 2023/08/03 19:02:03.183529 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Canceling catchup subscription since we are now up to date
[2661477] 2023/08/03 19:02:03.183764 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry did not match 2 100 with 2 99
[2661477] 2023/08/03 19:02:03.183844 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Proposal ignored, not leader
[2661477] 2023/08/03 19:02:03.183899 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Proposal ignored, not leader
[2661477] 2023/08/03 19:02:03.183863 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:2 lastTerm:1 lastIndex:99 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:02:03.183991 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:2 peer:LWhxtZZD granted:false} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:02:03.184010 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:3 lastTerm:1 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:02:03.184021 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:2 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:02:03.184028 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:4 lastTerm:1 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:02:03.184036 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:3 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:02:03.184044 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:5 lastTerm:2 lastIndex:101 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:02:03.184054 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:4 peer:LWhxtZZD granted:true} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:02:03.185692 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Canceling catchup subscription since we are now up to date
[2661477] 2023/08/03 19:02:03.185815 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:02:03.185839 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:03:29.031822 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to candidate
[2661477] 2023/08/03 19:03:29.031911 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending out voteRequest {term:6 lastTerm:5 lastIndex:132 candidate:LWhxtZZD reply:}
[2661477] 2023/08/03 19:03:29.368966 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:6 lastTerm:5 lastIndex:132 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:03:29.369012 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:6 peer:LWhxtZZD granted:false} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:03:35.151255 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:6 peer:SRLRpmYS granted:false}
[2661477] 2023/08/03 19:03:38.080137 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending out voteRequest {term:7 lastTerm:5 lastIndex:132 candidate:LWhxtZZD reply:}
[2661477] 2023/08/03 19:03:38.080677 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:6 peer:SRLRpmYS granted:true}
[2661477] 2023/08/03 19:03:38.080730 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to leader
[2661477] 2023/08/03 19:03:38.080872 [INF] JetStream cluster new consumer leader for 'USERS > stream_t3 > stream_t3_2'
[2661477] 2023/08/03 19:04:39.155298 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:6 peer:RztkeQup granted:false}
[2661477] 2023/08/03 19:04:39.155871 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:7 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:04:39.155921 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:7 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:04:40.416268 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:8 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:04:40.416313 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Stepping down from leader, detected higher term: 8 vs 7
[2661477] 2023/08/03 19:04:40.416342 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:7 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:04:40.416349 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:04:40.728858 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:9 lastTerm:7 lastIndex:149 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:04:40.728908 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:8 peer:LWhxtZZD granted:true} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:04:40.729297 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry updating leader to "SRLRpmYS"
[2661477] 2023/08/03 19:04:40.729564 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:04:41.076240 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Proposal ignored, not leader
[2661477] 2023/08/03 19:04:41.086351 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Proposal ignored, not leader
[2661477] 2023/08/03 19:04:46.376735 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:9 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:04:46.376786 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:9 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:04:51.086795 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Proposal ignored, not leader
[2661477] 2023/08/03 19:04:55.251238 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:10 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:04:55.251282 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:9 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:04:55.499154 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to candidate
[2661477] 2023/08/03 19:04:55.499200 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending out voteRequest {term:11 lastTerm:9 lastIndex:151 candidate:LWhxtZZD reply:}
[2661477] 2023/08/03 19:04:55.499716 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:10 peer:SRLRpmYS granted:true}
[2661477] 2023/08/03 19:04:55.499770 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to leader
[2661477] 2023/08/03 19:05:02.282570 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:11 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:05:02.282633 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:11 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:05:07.188392 [INF] JetStream cluster new consumer leader for 'USERS > stream_t3 > stream_t3_2'
[2661477] 2023/08/03 19:05:10.175371 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:12 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:05:10.175404 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Stepping down from leader, detected higher term: 12 vs 11
[2661477] 2023/08/03 19:05:10.175448 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:11 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:05:10.175458 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:05:10.359017 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:13 lastTerm:11 lastIndex:154 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:05:10.359077 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:12 peer:LWhxtZZD granted:true} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:05:10.359541 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry updating leader to "SRLRpmYS"
[2661477] 2023/08/03 19:05:10.359889 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:05:15.562706 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:13 lastTerm:5 lastIndex:99 candidate:RztkeQup reply:$NRG.R.AaDo9quq}
[2661477] 2023/08/03 19:05:15.562759 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:13 peer:LWhxtZZD granted:false} -> "$NRG.R.AaDo9quq"
[2661477] 2023/08/03 19:05:19.142944 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:13 peer:RztkeQup granted:false}
[2661477] 2023/08/03 19:05:19.142974 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Ignoring old vote response, we have stepped down
[2661477] 2023/08/03 19:05:19.146444 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:9 peer:RztkeQup granted:true}
[2661477] 2023/08/03 19:05:19.146450 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Ignoring old vote response, we have stepped down
[2661477] 2023/08/03 19:05:55.180426 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:05:55.180562 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:07:14.781886 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to candidate
[2661477] 2023/08/03 19:07:14.781938 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending out voteRequest {term:14 lastTerm:13 lastIndex:176 candidate:LWhxtZZD reply:}
[2661477] 2023/08/03 19:07:14.782365 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:13 peer:RztkeQup granted:true}
[2661477] 2023/08/03 19:07:14.782421 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to leader
[2661477] 2023/08/03 19:07:55.185806 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteResponse &{term:14 peer:SRLRpmYS granted:false}
[2661477] 2023/08/03 19:08:03.184125 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received a voteRequest &{term:14 lastTerm:13 lastIndex:176 candidate:SRLRpmYS reply:$NRG.R.tXejSJ6I}
[2661477] 2023/08/03 19:08:03.184164 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:08:03.184485 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Sending a voteResponse &{term:14 peer:LWhxtZZD granted:false} -> "$NRG.R.tXejSJ6I"
[2661477] 2023/08/03 19:08:03.184496 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184515 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184520 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184528 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184533 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184538 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184543 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184665 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184671 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184676 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184681 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Received append entry from another leader, stepping down to "SRLRpmYS"
[2661477] 2023/08/03 19:08:03.184687 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184715 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Ignoring old vote response, we have stepped down
[2661477] 2023/08/03 19:08:03.184823 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184828 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184837 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184851 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184864 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184870 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184876 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184952 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184970 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:03.184975 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Switching to follower
[2661477] 2023/08/03 19:08:04.181402 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] AppendEntry did not match 14 180 with 14 179
[2661477] 2023/08/03 19:08:06.181626 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Catchup may be stalled, will request again
[2661477] 2023/08/03 19:08:09.181520 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Catchup may be stalled, will request again
[2661477] 2023/08/03 19:08:12.181359 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Catchup may be stalled, will request again
[2661477] 2023/08/03 19:08:12.991936 [INF] JetStream cluster new stream leader for 'USERS > stream_t3'
[2661477] 2023/08/03 19:08:12.992042 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:08:12.992061 [INF] JetStream cluster new consumer leader for 'USERS > stream_t3 > stream_t3_2'
[2661477] 2023/08/03 19:08:18.060313 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:08:18.060369 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Update peers from leader to map[LWhxtZZD:0xc00002ab88 RztkeQup:0xc00002abd0 SRLRpmYS:0xc00002abb8]
[2661477] 2023/08/03 19:08:20.658660 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Not switching to candidate, catching up
[2661477] 2023/08/03 19:08:20.658723 [DBG] RAFT [LWhxtZZD - C-R3F-P00IVKYA] Canceling catchup subscription since we are now up to date
[2661477] 2023/08/03 19:11:44.295584 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
[2661477] 2023/08/03 19:11:50.565116 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
[2661477] 2023/08/03 19:12:42.288628 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
[2661477] 2023/08/03 19:13:01.912683 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
[2661477] 2023/08/03 19:17:14.857760 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
[2661477] 2023/08/03 19:20:42.943194 [WRN] Did not receive all consumer info results for 'USERS > stream_t3'
```
nats1:
```
[2025315] 2023/08/03 14:13:54.888952 [DBG] JetStream cluster creating raft group:&{Name:C-R3F-yBaTkf7T Peers:[SRLRpmYS RztkeQup LWhxtZZD] Storage:File Cluster:nats_cluster Preferred: node:<nil>}
[2025315] 2023/08/03 14:13:54.889074 [DBG] Starting consumer monitor for 'USERS > stream_t1 > stream_t1_1' [C-R3F-LcnOKyov]
[2025315] 2023/08/03 14:13:54.889254 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Started
[2025315] 2023/08/03 14:13:54.889512 [DBG] Starting consumer monitor for 'USERS > stream_t1 > stream_t1_2' [C-R3F-yBaTkf7T]
[2025315] 2023/08/03 14:13:59.822237 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc0005a0c78 RztkeQup:0xc0005a0c48 SRLRpmYS:0xc0005a0c60]
[2025315] 2023/08/03 14:13:59.822410 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc0005a0c78 RztkeQup:0xc0005a0c48 SRLRpmYS:0xc0005a0c60]
[2025315] 2023/08/03 14:14:03.034362 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteRequest &{term:4 lastTerm:3 lastIndex:213 candidate:LWhxtZZD reply:$NRG.R.naLLNjR2}
[2025315] 2023/08/03 14:14:03.034393 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending a voteResponse &{term:3 peer:RztkeQup granted:true} -> "$NRG.R.naLLNjR2"
[2025315] 2023/08/03 14:14:03.034737 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] AppendEntry updating leader to "LWhxtZZD"
[2025315] 2023/08/03 14:14:03.035273 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc0005a0c78 RztkeQup:0xc0005a0c48 SRLRpmYS:0xc0005a0c60]
[2025315] 2023/08/03 14:29:03.465223 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteRequest &{term:5 lastTerm:4 lastIndex:527 candidate:SRLRpmYS reply:$NRG.R.ntuQEKs2}
[2025315] 2023/08/03 14:29:03.465294 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending a voteResponse &{term:4 peer:RztkeQup granted:true} -> "$NRG.R.ntuQEKs2"
[2025315] 2023/08/03 14:29:07.997860 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Switching to candidate
[2025315] 2023/08/03 14:29:07.997982 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:6 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:13.771080 [WRN] JetStream cluster consumer 'USERS > stream_t1 > stream_t1_2' has NO quorum, stalled.
[2025315] 2023/08/03 14:29:13.771111 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:7 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:21.331280 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:8 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:28.630468 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:9 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:32.985565 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:10 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:41.447991 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:11 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:41.448070 [WRN] JetStream cluster consumer 'USERS > stream_t1 > stream_t1_2' has NO quorum, stalled.
[2025315] 2023/08/03 14:29:48.805144 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:12 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:29:55.278697 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Sending out voteRequest {term:13 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:}
[2025315] 2023/08/03 14:30:02.173176 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received append entry in candidate state from "LWhxtZZD", converting to follower
[2025315] 2023/08/03 14:30:02.178127 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received append entry in candidate state from "LWhxtZZD", converting to follower
[2025315] 2023/08/03 14:30:02.178590 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Switching to follower
[2025315] 2023/08/03 14:30:02.178601 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Switching to follower
[2025315] 2023/08/03 14:30:02.183026 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:5 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.183031 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.183209 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:6 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.183214 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.183639 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:7 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.183644 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.183867 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:8 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.183872 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.184034 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:9 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.184039 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.184481 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:10 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.184486 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.184493 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:11 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.184497 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.184503 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:12 peer:LWhxtZZD granted:false}
[2025315] 2023/08/03 14:30:02.184508 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.184926 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] AppendEntry updating leader to "SRLRpmYS"
[2025315] 2023/08/03 14:30:02.184963 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] AppendEntry detected pindex less than ours: 4:527 vs 4:529
[2025315] 2023/08/03 14:30:02.185064 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Truncating and repairing WAL to Term 4 Index 527
[2025315] 2023/08/03 14:30:02.188903 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Canceling catchup subscription since we are now up to date
[2025315] 2023/08/03 14:30:02.192344 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] AppendEntry did not match 5 528 with 5 527
[2025315] 2023/08/03 14:30:02.193196 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:5 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.193203 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.193899 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:6 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.193905 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194105 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:7 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194109 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194244 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:8 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194249 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194255 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:9 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194259 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194266 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:10 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194272 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194287 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:11 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194291 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:02.194723 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Received a voteResponse &{term:12 peer:SRLRpmYS granted:false}
[2025315] 2023/08/03 14:30:02.194728 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[2025315] 2023/08/03 14:30:09.867057 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Not switching to candidate, catching up
[2025315] 2023/08/03 14:30:09.867095 [DBG] RAFT [RztkeQup - C-R3F-yBaTkf7T] Canceling catchup subscription since we are now up to date
```
nats2:
```
[3295844] 2023/08/03 14:13:54.938550 [DBG] JetStream cluster creating raft group:&{Name:C-R3F-yBaTkf7T Peers:[SRLRpmYS RztkeQup LWhxtZZD] Storage:File Cluster:nats_cluster Preferred: node:<nil>}
[3295844] 2023/08/03 14:13:54.939064 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Started
[3295844] 2023/08/03 14:13:54.939271 [DBG] Starting consumer monitor for 'USERS > stream_t1 > stream_t1_2' [C-R3F-yBaTkf7T]
[3295844] 2023/08/03 14:13:54.939798 [DBG] Starting consumer monitor for 'USERS > stream_t1 > stream_t1_1' [C-R3F-LcnOKyov]
[3295844] 2023/08/03 14:13:59.823838 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc00030ade0 RztkeQup:0xc00030adc8 SRLRpmYS:0xc00030adb0]
[3295844] 2023/08/03 14:13:59.823868 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc00030ade0 RztkeQup:0xc00030adc8 SRLRpmYS:0xc00030adb0]
[3295844] 2023/08/03 14:14:03.034308 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:4 lastTerm:3 lastIndex:213 candidate:LWhxtZZD reply:$NRG.R.naLLNjR2}
[3295844] 2023/08/03 14:14:03.034337 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:3 peer:SRLRpmYS granted:true} -> "$NRG.R.naLLNjR2"
[3295844] 2023/08/03 14:14:03.034754 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] AppendEntry updating leader to "LWhxtZZD"
[3295844] 2023/08/03 14:14:03.035297 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Update peers from leader to map[LWhxtZZD:0xc00030ade0 RztkeQup:0xc00030adc8 SRLRpmYS:0xc00030adb0]
[3295844] 2023/08/03 14:29:03.464882 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to candidate
[3295844] 2023/08/03 14:29:03.464939 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending out voteRequest {term:5 lastTerm:4 lastIndex:527 candidate:SRLRpmYS reply:}
[3295844] 2023/08/03 14:29:03.465436 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteResponse &{term:4 peer:RztkeQup granted:true}
[3295844] 2023/08/03 14:29:03.465479 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to leader
[3295844] 2023/08/03 14:30:02.184565 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:6 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.184933 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 6 vs 5
[3295844] 2023/08/03 14:30:02.184949 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:5 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.184959 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:7 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.184811 [INF] JetStream cluster new consumer leader for 'USERS > stream_t1 > stream_t1_2'
[3295844] 2023/08/03 14:30:02.184966 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 7 vs 6
[3295844] 2023/08/03 14:30:02.185696 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:6 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.185709 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:8 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.186481 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 8 vs 7
[3295844] 2023/08/03 14:30:02.186503 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:7 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.186515 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:9 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.186665 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 9 vs 8
[3295844] 2023/08/03 14:30:02.186698 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:8 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.186712 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:10 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.186719 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 10 vs 9
[3295844] 2023/08/03 14:30:02.186728 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:9 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.186736 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:11 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.186742 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Stepping down from leader, detected higher term: 11 vs 10
[3295844] 2023/08/03 14:30:02.186750 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:10 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.186756 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.186776 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.186784 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:12 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.186820 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Installing snapshot of 8 bytes
[3295844] 2023/08/03 14:30:02.186823 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:11 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.188478 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteRequest &{term:13 lastTerm:4 lastIndex:527 candidate:RztkeQup reply:$NRG.R.uOdHLCNw}
[3295844] 2023/08/03 14:30:02.188494 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Sending a voteResponse &{term:12 peer:SRLRpmYS granted:false} -> "$NRG.R.uOdHLCNw"
[3295844] 2023/08/03 14:30:02.188499 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.188505 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.188510 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.188516 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Switching to follower
[3295844] 2023/08/03 14:30:02.192904 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] AppendEntry updating leader to "LWhxtZZD"
[3295844] 2023/08/03 14:30:02.192944 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] AppendEntry detected pindex less than ours: 4:527 vs 5:529
[3295844] 2023/08/03 14:30:02.192955 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Truncating and repairing WAL to Term 0 Index 0
[3295844] 2023/08/03 14:30:02.192968 [WRN] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Resetting WAL state
[3295844] 2023/08/03 14:30:02.193199 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Canceling catchup subscription since we are now up to date
[3295844] 2023/08/03 14:30:02.192949 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Received a voteResponse &{term:4 peer:LWhxtZZD granted:false}
[3295844] 2023/08/03 14:30:02.193584 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Ignoring old vote response, we have stepped down
[3295844] 2023/08/03 14:30:02.193629 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] AppendEntry did not match 4 528 with 4 0
[3295844] 2023/08/03 14:30:11.175340 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not switching to candidate, catching up
[3295844] 2023/08/03 14:30:11.175396 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Canceling catchup subscription since we are now up to date
[3295844] 2023/08/03 14:31:54.932502 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:31:54.932580 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:33:54.932700 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:33:54.932783 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:35:54.933117 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:35:54.933196 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:37:54.932739 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:37:54.932751 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:43:54.933217 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:43:54.933229 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:47:54.933274 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:47:54.933287 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
[3295844] 2023/08/03 14:59:54.932959 [DBG] RAFT [SRLRpmYS - C-R3F-yBaTkf7T] Not current, no commits
[3295844] 2023/08/03 14:59:54.932971 [WRN] JetStream consumer 'USERS > stream_t1 > stream_t1_2' is not current
```
Thank you!
|
https://github.com/nats-io/nats-server/issues/4363
|
https://github.com/nats-io/nats-server/pull/4428
|
c9b5b329a49bf833c3380aa06500c06811d92e70
|
5a497272c3725bea8fe8cf1f9d959c4d9a95b450
| 2023-08-03T13:32:18Z |
go
| 2023-08-25T04:07:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,343 |
["conf/parse.go", "conf/parse_test.go"]
|
Binary file accepted as valid configuration
|
## Defect
I mistakenly fed `nats-server` a binary rather than a configuration file, and `-t` tells me it is a valid configuration.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Noticed this on `2.9.21-RC.1`, but this has not changed recently.
#### OS/Container environment:
Not relevant
#### Steps or code to reproduce the issue:
Pass the path of a binary (or other non-textfile, like PDF) as configuration argument:
```
./nats-server -t -c ./nats-server
```
#### Expected result:
Non zero-exit code, and a message telling me I'm doing it wrong
#### Actual result:
`nats-server: configuration file ./nats-server is valid`
#### Details
Stepping through the configuration parsing steps, it seems like the entire binary is treated as a single `itemKey` and accepted.
Even with `pedantic` set, there are no errors and no warnings.
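For reference, the same check can be reproduced programmatically. A minimal sketch follows; my assumption is that `server.ProcessConfigFile` is essentially the code path the `-t` flag exercises.
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	// Assumption: ProcessConfigFile is the code path behind -t.
	// Point this at any non-config file, e.g. the nats-server binary itself.
	opts, err := server.ProcessConfigFile("./nats-server")
	if err != nil {
		fmt.Println("invalid configuration:", err)
		return
	}
	fmt.Printf("accepted as valid configuration: %+v\n", opts)
}
```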
|
https://github.com/nats-io/nats-server/issues/4343
|
https://github.com/nats-io/nats-server/pull/4358
|
aa6ac2d665de95d3d9f17519d27c546912d9fd21
|
7c9a91fc91c6a25dc4f588006c46dd27bf6764ef
| 2023-07-27T22:05:35Z |
go
| 2023-08-02T05:33:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,341 |
["server/client.go", "server/sublist.go"]
|
1M topics
|
While doing performance research recently, we found that after the NATS server reaches a large number of topics, its consuming performance degrades. Even though the hardware has spare capacity, the NATS server does not scale. Is there any configuration or patch we can apply to improve performance with a large number of topics? Details below:
Hardware: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz (32 cores) . 126GB RAM.
Publishing is done in a synthetic test: 1M msg/sec, spread across 1M topics with the pattern `X.X.X`.
Expected: 1M msg/sec
Actual: msg/sec is around 700k.
NATS server version: 2.9.19
nats-top:

htop:

Test code (compiled with Go 1.20) follows below.
Start it with:
```bash
for ((i=0; i<=40; i++))
do
nohup ./publish --app_num $i &
done
```
```golang
package main
import (
"context"
"fmt"
"github.com/nats-io/nats.go"
"github.com/urfave/cli/v2"
"os"
"os/signal"
"sync/atomic"
"time"
)
var payload = []byte(`J534V53qJr4zs756XKEjlZ5dXpCbSN8LdaLtujtgBOWJ4bgFuVwJLQWMAa6yjrIbxcnYbsodBSXXLr6LvA4YbYcOtlgWnJ52mhAyf8Px9D5t4OQUTUHcaTCjMcX8f0UYfOrvBnEwY8oKyeVFTLQLXpSs4rUPc5Gt4xU3oKbBZF4WKjgikxn3tPLoIkHZhPSG58RyAJD6U7A9DF4COuemClBq6WIe68ZeI41OiOQyV0ChhEUiyXz5PgI3oKJWlW30glZg06DGk024rJVCMDG9nRZt3hIEgFHANoLyJd9lqWyQ`)
func worker(ctx context.Context, interval time.Duration, nc *nats.Conn, topic string, batchSize int, publishedCount *uint32) {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
for n := 0; n < batchSize; n++ {
if err := nc.Publish(fmt.Sprintf("%s.%d", topic, n), payload); err != nil {
fmt.Println("error publish: ", err)
} else {
atomic.AddUint32(publishedCount, 1)
}
}
case <-ctx.Done():
return
}
}
}
func work(c *cli.Context) error {
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel()
nc, err := nats.Connect(c.String("url"))
if err != nil {
return err
}
defer nc.Close()
appNum := c.Int("app_num")
parallel := c.Int("parallel")
interval := c.Duration("interval")
totalMessages := c.Int("tags")
batchSize := totalMessages / parallel
var publishedCount uint32
go func() {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
count := atomic.SwapUint32(&publishedCount, 0)
fmt.Printf("app %d published %d\n", appNum, count)
case <-ctx.Done():
return
}
}
}()
for i := 0; i < parallel; i++ {
go worker(ctx, interval, nc, fmt.Sprintf("%d.%d", appNum, i), batchSize, &publishedCount)
}
<-ctx.Done()
fmt.Println("Benchmarking completed.")
return nil
}
func main() {
app := &cli.App{
Flags: []cli.Flag{
&cli.IntFlag{
Name: "app_num",
Value: 0,
},
&cli.StringFlag{
Name: "url",
Value: nats.DefaultURL,
},
&cli.IntFlag{
Name: "parallel",
Value: 100,
},
&cli.IntFlag{
Name: "tags",
Value: 25000,
},
&cli.DurationFlag{
Name: "interval",
Value: time.Second,
},
},
Action: work,
}
if err := app.Run(os.Args); err != nil {
fmt.Println("Error:", err)
}
}
```
|
https://github.com/nats-io/nats-server/issues/4341
|
https://github.com/nats-io/nats-server/pull/4359
|
09e78a33490b50648462f76bb36439eac9b12511
|
09ab23c929a15790f94e29e7a50dd603dd55b350
| 2023-07-27T09:07:39Z |
go
| 2023-08-02T04:43:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,320 |
["server/jetstream_test.go", "server/stream.go"]
|
Serializability of Expected-Last-Subject-Sequence is not guaranteed in clustered stream
|
## Defect
- [ ] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Tested 2.9.15 - 2.9.20
#### OS/Container environment:
All
#### Steps or code to reproduce the issue:
[Draft PR](https://github.com/nats-io/nats-server/pull/4319) with a failing server test case.
#### Expected result:
Concurrent publishes carrying the same `Nats-Expected-Last-Subject-Sequence` should result in only one publish being accepted.
#### Actual result:
Multiple publishes may be accepted depending on timing.
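To make the expectation concrete, here is a minimal sketch of the race using nats.go (my own illustration; it assumes a replicated stream named `TEST` capturing subject `foo` already exists): two concurrent publishes carry the same expected last sequence for the subject, so at most one of them should be accepted.
```go
package main

import (
	"fmt"
	"sync"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}

	// Assumes a replicated stream TEST capturing subject "foo" already exists.
	// Seed the subject so the current last sequence for "foo" is known.
	ack, err := js.Publish("foo", []byte("seed"))
	if err != nil {
		panic(err)
	}

	var (
		wg                 sync.WaitGroup
		mu                 sync.Mutex
		accepted, rejected int
	)
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Both publishes claim the same expected last sequence for this subject.
			_, err := js.Publish("foo", []byte(fmt.Sprintf("update-%d", i)),
				nats.ExpectLastSequencePerSubject(ack.Sequence))
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				rejected++
			} else {
				accepted++
			}
		}(i)
	}
	wg.Wait()
	// Serializability requires accepted == 1; the reported bug allows accepted == 2.
	fmt.Printf("accepted=%d rejected=%d\n", accepted, rejected)
}
```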
|
https://github.com/nats-io/nats-server/issues/4320
|
https://github.com/nats-io/nats-server/pull/4319
|
75ad503ddc7aa5a2bcfb32f99dae1adb1e6b5cad
|
80fb29f9e30cf7477ef0f2a1244c27be1ec3d49f
| 2023-07-18T17:18:30Z |
go
| 2023-07-18T19:37:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,291 |
["server/mqtt.go", "server/mqtt_test.go"]
|
KV/stream republished messages do not propagate to MQTT subscriptions
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.9.19, main, and dev
#### OS/Container environment:
macOS
#### Steps or code to reproduce the issue:
Start the server with MQTT enabled:
```
server_name: test
jetstream: enabled
mqtt: {
port: 1883
}
```
Define a KV bucket with republish configured:
```
nats kv add config --republish-source='$KV.config.>' --republish-destination='mqtt.config.>'
```
Start an MQTT client subscription on `mqtt/config/#` (I used the [HiveMQ MQTT CLI](https://github.com/hivemq/mqtt-cli#getting-started)), e.g.
```
mqtt sub -V 3 -t "mqtt/config/#"
```
Put a key-value pair into the bucket.
```
nats kv put config "device.1" '{"state": "on"}'
```
#### Expected result:
The MQTT client receives a message.
#### Actual result:
It does not.
#### Note
Starting a NATS subscription on `mqtt.config.>` results in the message being received, so the republish itself works, but for some reason it does not appear to propagate to the MQTT subsystem.
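The same setup can be expressed programmatically. A minimal sketch with nats.go (my own illustration; it assumes the `RePublish` field of `KeyValueConfig`, which mirrors the CLI flags used above):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}

	// Assumption: KeyValueConfig.RePublish mirrors the CLI republish flags above.
	kv, err := js.CreateKeyValue(&nats.KeyValueConfig{
		Bucket: "config",
		RePublish: &nats.RePublish{
			Source:      "$KV.config.>",
			Destination: "mqtt.config.>",
		},
	})
	if err != nil {
		panic(err)
	}

	// A plain NATS subscription on the republish destination does receive the message...
	sub, err := nc.SubscribeSync("mqtt.config.>")
	if err != nil {
		panic(err)
	}

	if _, err := kv.Put("device.1", []byte(`{"state": "on"}`)); err != nil {
		panic(err)
	}

	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("republished on:", msg.Subject)
	// ...while an MQTT subscription on mqtt/config/# receives nothing.
}
```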
|
https://github.com/nats-io/nats-server/issues/4291
|
https://github.com/nats-io/nats-server/pull/4303
|
0c8552cd345264951fb10cbc8e9f4a034d75906f
|
77189b09c73a65cf64e18b2e1950d841988f9950
| 2023-07-06T19:14:23Z |
go
| 2023-07-13T14:26:26Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,289 |
["server/events.go"]
|
Memory leak when creating a consumer name with JetStream
|
I have a memory leak in `sync.(*Map).Store`, called from `github.com/nats-io/nats-server/v2/server.getHashSize`.
I found this by debugging with `go tool pprof`. Help me please!
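A minimal reproduction sketch (my own illustration, not part of the original report): create and delete many uniquely named consumers against an embedded server, then dump a heap profile and inspect it with `go tool pprof`.
```go
package main

import (
	"fmt"
	"os"
	"runtime/pprof"
	"time"

	"github.com/nats-io/nats-server/v2/server"
	"github.com/nats-io/nats.go"
)

func main() {
	// Illustration only: embedded server options are placeholders.
	s, err := server.NewServer(&server.Options{Port: -1, JetStream: true, StoreDir: os.TempDir()})
	if err != nil {
		panic(err)
	}
	go s.Start()
	if !s.ReadyForConnections(5 * time.Second) {
		panic("server not ready")
	}
	defer s.Shutdown()

	nc, err := nats.Connect(s.ClientURL())
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}
	if _, err := js.AddStream(&nats.StreamConfig{Name: "TEST", Subjects: []string{"test.>"}}); err != nil {
		panic(err)
	}

	// Create (and delete) consumers with unique names in a loop.
	for i := 0; i < 10000; i++ {
		name := fmt.Sprintf("consumer-%d", i)
		if _, err := js.AddConsumer("TEST", &nats.ConsumerConfig{Durable: name, AckPolicy: nats.AckExplicitPolicy}); err != nil {
			panic(err)
		}
		if err := js.DeleteConsumer("TEST", name); err != nil {
			panic(err)
		}
	}

	// Write a heap profile for inspection with `go tool pprof heap.prof`.
	f, err := os.Create("heap.prof")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := pprof.WriteHeapProfile(f); err != nil {
		panic(err)
	}
}
```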
|
https://github.com/nats-io/nats-server/issues/4289
|
https://github.com/nats-io/nats-server/pull/4329
|
da60f2ab32d1fd6f3294df21be4c2253f5a32aa5
|
ba517e4bfbba5bc6ca657a9b60a7d429f53b6791
| 2023-07-06T08:34:58Z |
go
| 2023-07-20T23:04:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,275 |
["server/client.go", "server/jetstream_test.go"]
|
Allow messages published on $JS.EVENT.ADVISORY.API to be exported via a service
|
### Use case:
I want to aggregate JetStream advisory events from multiple accounts in one aggregate account. This would allow `nats-surveyor` to maintain a single connection with the server to gather JetStream advisory metrics from multiple accounts.
I have the following config:
```
jetstream: {max_mem_store: 64GB, max_file_store: 10TB}
accounts {
A {
jetstream: enabled
users: [ {user: pp, password: foo} ]
imports [
{ service: { account: AGG, subject: '$JS.EVENT.ADVISORY.ACC.A.>' }, to: '$JS.EVENT.ADVISORY.>' }
]
}
AGG {
users: [ {user: agg, password: foo} ]
exports: [
{ service: '$JS.EVENT.ADVISORY.ACC.*.>', response: Singleton, account_token_position: 5 }
]
}
}
```
**Intended behavior**
1. On JetStream events, server publishes messages on `$JS.EVENT.ADVISORY.>` in account `A`.
2. Account `AGG` exports a service with `account_token_position` on which the advisory events should be forwarded
3. All advosory messages from account `A` should be available on `$JS.EVENT.ADVISORY.ACC.A.>`
**Actual behavior**
Advisory events published on subjects other than `$JS.EVENT.ADVISORY.API` (e.g. `$JS.EVENT.ADVISORY.STREAM.CREATED.ORDERS`) are mapped correctly and available on account `AGG`. Messages published on `$JS.EVENT.ADVISORY.API` (the audit messages) are not available on `AGG`.
### Additional information
- Behavior is the same whether or not mapping is used, e.g. exporting `$JS.EVENT.>` -> `$JS.EVENT.>` does not fix the issue
- Intended behavior **can** be achieved using a stream export from account `A` to account `AGG`, and in that case it works fine (I can subscribe on `$JS.EVENT.ADVISORY.ACC.A.>` and all events are available). However, using service exports would be the desirable solution, since with services I can utilize `account_token_position` and have a single export on account `AGG` instead of an import per account.
- This seems to be caused by some advisory events being published using `Server.sendInternalAccountMsg()` (those are not forwarded) - https://github.com/nats-io/nats-server/blob/main/server/jetstream_api.go#L4332C29-L4332C29
- The same behavior is present when attempting to export service latency subjects (which also uses `sendInternalAccountMsg()`)
- I am not sure whether this behavior is intentional or not, but seems to be inconsistent given it differs depending on whether stream export or service export is used.
I added a test to show the issue: https://github.com/nats-io/nats-server/commit/3b19e16ac804192ef919d04d6c4e6be6ceebb25b
The first test (`TestJetStreamAccountImportJSAdvisoriesAsService`) uses service exports (and fails) and the second uses stream exports (which succeeds). Both tests have the same scenario:
1. On account `JS`, subscribe to `$JS.EVENT.ADVISORY.>`
2. On account `AGG`, subscribe to `$JS.EVENT.ADVISORY.ACC.JS.>`
3. Create a stream on account `JS`
4. Expect two messages on both subscriptions:
- an action event on `$JS.EVENT.ADVISORY.STREAM.CREATED.ORDERS`
- an api audit event on `$JS.EVENT.ADVISORY.API`
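For anyone wanting to double-check which advisories actually make it across the export, a minimal subscriber sketch on the aggregate account (URL and credentials are placeholders matching the config above; this is illustrative, not part of the linked test):
```go
// agg_sub.go: subscribe on the AGG account and print whatever JetStream
// advisories arrive via the service export. User/URL are placeholders.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222", nats.UserInfo("agg", "foo"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Matches advisories from every account thanks to account_token_position.
	sub, err := nc.SubscribeSync("$JS.EVENT.ADVISORY.ACC.*.>")
	if err != nil {
		log.Fatal(err)
	}

	for {
		msg, err := sub.NextMsg(10 * time.Second)
		if err != nil {
			log.Fatal(err) // times out if the API audit advisories never arrive
		}
		fmt.Println(msg.Subject)
	}
}
```
With the service export, the `$JS.EVENT.ADVISORY.API` audit messages are the ones expected to be missing from this subscription.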
|
https://github.com/nats-io/nats-server/issues/4275
|
https://github.com/nats-io/nats-server/pull/4302
|
9cfe8b8f75b5498237aecf62073619ccb2174892
|
0c8552cd345264951fb10cbc8e9f4a034d75906f
| 2023-06-28T10:01:56Z |
go
| 2023-07-12T19:24:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,270 |
["server/jetstream_cluster_3_test.go", "server/monitor.go"]
|
R3 stream restore incorrectly fails with stream name already in use, cannot restore (10130)
|
## Defect
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server: 2.9.19
#### OS/Container environment:
**host:** CentOS Stream 8
**running container:** nats:2.9.19-alpine
**cli:** v0.0.35
#### Steps or code to reproduce the issue:
1. backup a "large" R3 stream (in our case 53M AVRO messages totaling 25GiB, with a 5.4GB backup; not small, but definitely not _large_)
2. restore into empty cluster
3. will fail with message `nats: error: restore failed: stream name already in use, cannot restore (10130)` even though the cluster was empty
Note that restore of smaller streams does not exhibit this behavior (not sure where the actual threshold is).
#### Expected result:
Stream restored.
#### Actual result:
Fails with the error message and leaves behind an empty stream with the corresponding name. At the very least, this error message is confusing.
This error was consistent across multiple data wipes, until we manually changed the replication factor in the backup config from 3 to 1, at which point the restore was successful.
It seems that during restore the servers can't keep up syncing; also note that this setup is using HDDs.
Could increasing `write_deadline` help with this issue?
Will try to provide additional info if needed later.
|
https://github.com/nats-io/nats-server/issues/4270
|
https://github.com/nats-io/nats-server/pull/4277
|
68a17a48468df0319722efda2c25822e2f3e56f6
|
503de4593d68e7d70ad5b61127c458ce75ed2cb5
| 2023-06-23T23:39:54Z |
go
| 2023-06-29T00:48:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,252 |
["server/client.go"]
|
No `tls_required` in INFO for TLS-only WebSocket connections
|
It is a pretty minor counter-intuitive behavior that bugged me while I was working on adding websocket support to the nats-pure.rb client.
It seems that `INFO` doesn't really account for how the client is connected, as it always returns info about the standard protocol.
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.9.18
#### Steps or code to reproduce the issue:
1. Set up a NATS server that accepts only plaintext connections over the standard protocol, but only secure connections over the WebSocket protocol.
```
# nats.conf
net: "127.0.0.1"
port: 4222
websocket {
port: 8443
tls {
cert_file: "./spec/configs/certs/server.pem"
key_file: "./spec/configs/certs/key.pem"
}
}
```
E.g. demo.nats.io seems to be configured like this.
2. Connect over websocket to this server, e.g. using [`websocat`](https://github.com/vi/websocat)
```
$ websocat --binary --insecure wss://127.0.0.1:8443
INFO {"server_id":"…","server_name":"…","version":"2.9.16","proto":1,"git_commit":"f84ca24","go":"go1.19.8","host":"127.0.0.1","port":4222,"headers":true,"max_payload":1048576,"client_id":6,"client_ip":"127.0.0.1"}
```
#### Expected result:
`INFO` contains `tls_required` equals to `true`.
#### Actual result:
`INFO` doesn't contain `tls_required`.
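The same check can be reproduced without websocat; a sketch in Go using gorilla/websocket, where the insecure TLS setting mirrors `--insecure` and the URL comes from the config above:
```go
// ws_info.go: open a WebSocket connection to the server and print the first
// frame, which carries the INFO protocol line. TLS verification is disabled
// only because the test config uses a self-signed certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	dialer := websocket.Dialer{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	conn, _, err := dialer.Dial("wss://127.0.0.1:8443", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	_, payload, err := conn.ReadMessage()
	if err != nil {
		log.Fatal(err)
	}
	// Expect something like: INFO {...,"tls_required":true,...}
	fmt.Printf("%s", payload)
}
```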
|
https://github.com/nats-io/nats-server/issues/4252
|
https://github.com/nats-io/nats-server/pull/4255
|
f7896b496983c0015ea38dc455cd65435125ac49
|
04a79b9b1eaaf966cc93246e6c4622c036b31a76
| 2023-06-19T03:32:04Z |
go
| 2023-06-20T13:33:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,234 |
["server/events.go", "server/events_test.go", "server/gateway.go"]
|
How to configure a single nats-server with JetStream in the most resource-effective way
|
### Discussed in https://github.com/nats-io/nats-server/discussions/4227
Originally posted by **tpihl**, June 8, 2023
Hi,
I am making a PoC using nats-server (and go-based nats client) without network access on a battery-powered SBC.
However, the nats-server seems to keep itself rather busy (a relative term, but significant for battery time) and I hope for help in calming it down when the rest of the world is calm.
Changing the PING interval was easy, but it seems to have a very small effect; most activity seems to be related to:
$SYS.>
$JSC.>
Are these really necessary in a single-instance nats-server with a single client on the same machine? If they are, can we reduce the frequency of gathering/sending that info when there is no other activity?
I haven't tried to run in MQTT mode, even though you can probably guess that it's an MQTT server this PoC competes with. My argument is borrowed from how I remember a nats.io recommendation, something like "use real NATS/JetStream if possible, and it should prove as effective in a comparable scenario as MQTT".
Any ideas/suggestions?
/T
|
https://github.com/nats-io/nats-server/issues/4234
|
https://github.com/nats-io/nats-server/pull/4235
|
aae218fe778618fd4453c8e890844a2d75504ce9
|
860c481f0f19f9d39fe744d53502c6f7d0746f59
| 2023-06-11T02:17:06Z |
go
| 2023-06-11T03:55:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,221 |
["server/filestore.go", "server/memstore.go"]
|
KV Get message rate increases with message size
|
Please see [my NATS KV benchmark script](https://gist.github.com/jjthiessen/5d55da45ea4a73ff3ec5d4394b9f31a5#file-bench-sh) (just a wrapper around `nats bench`) along with transcripts/console output for two runs (one with [file storage on SSD](https://gist.github.com/jjthiessen/5d55da45ea4a73ff3ec5d4394b9f31a5#file-bench-ssd-log), the other with [file storage on a ramfs mount](https://gist.github.com/jjthiessen/5d55da45ea4a73ff3ec5d4394b9f31a5#file-bench-ram-log)) (all part of [the same gist](https://gist.github.com/jjthiessen/5d55da45ea4a73ff3ec5d4394b9f31a5)). I only included file storage results here, as memory backed KV appears to be fairly uniformly slow (much slower than file storage) at the moment.
The `Put`/publish rates and trends more-or-less make sense to me/aren't surprising.
However, the `Get`/subscribe message throughput rates seem to increase with message size.
This is rather surprising (for me, at least) — I would expect the data throughput rate to increase with message size (as it does), but for the message throughput rate to decrease.
The effect of this is that in order to get better KV read performance, one only needs to pad values to produce larger messages (even if the extra content is otherwise useless).
Does anybody know i) why this might be the case; and ii) whether I'm holding it wrong?
For additional context see [this thread](https://natsio.slack.com/archives/CM3T6T7JQ/p1684950034097139) on Slack.
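For a crude cross-check outside of `nats bench`, a single-threaded Get loop can be timed directly; the bucket and key names below are placeholders and the loop ignores batching, so absolute numbers will not match the benchmark:
```go
// kv_get_rate.go: crude measurement of the KV Get message rate for one key.
// Bucket/key names are placeholders; create and fill the bucket first.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	kv, err := js.KeyValue("bench")
	if err != nil {
		log.Fatal(err)
	}

	const n = 100_000
	start := time.Now()
	for i := 0; i < n; i++ {
		if _, err := kv.Get("foo"); err != nil {
			log.Fatal(err)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("%d gets in %v (%.0f msgs/sec)\n", n, elapsed, float64(n)/elapsed.Seconds())
}
```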
|
https://github.com/nats-io/nats-server/issues/4221
|
https://github.com/nats-io/nats-server/pull/4232
|
8c513ad2c201788ed85c0ba76fd0cc13e9be8818
|
783e9491b131538100038d20badc835ecb3bd09f
| 2023-06-07T22:10:28Z |
go
| 2023-06-10T22:23:11Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,196 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
Clustered stream recovery, full stream purge is executed instead of PurgeEx
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- nats-server 2.9.17
- natsio/nats-box 0.13.8
```
nats [7] 2023/05/28 21:18:44.413964 [INF] Starting nats-server
nats [7] 2023/05/28 21:18:44.414008 [INF] Version: 2.9.17
nats [7] 2023/05/28 21:18:44.414009 [INF] Git: [4f2c9a5]
nats [7] 2023/05/28 21:18:44.414011 [INF] Cluster: nats
nats [7] 2023/05/28 21:18:44.414012 [INF] Name: nats-0
nats [7] 2023/05/28 21:18:44.414015 [INF] Node: S1Nunr6R
nats [7] 2023/05/28 21:18:44.414016 [INF] ID: NDRHBNGCROHHJJKLFJH3KZELDQCDBOGIOZN4R2W6K2K3YQCKZV2H7Z4K
nats [7] 2023/05/28 21:18:44.414025 [INF] Using configuration file: /etc/nats-config/nats.conf
nats [7] 2023/05/28 21:18:44.414645 [INF] Starting http monitor on 0.0.0.0:8222
nats [7] 2023/05/28 21:18:44.414704 [INF] Starting JetStream
nats [7] 2023/05/28 21:18:44.414893 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
nats [7] 2023/05/28 21:18:44.414898 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
nats [7] 2023/05/28 21:18:44.414899 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
nats [7] 2023/05/28 21:18:44.414901 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
nats [7] 2023/05/28 21:18:44.414902 [INF]
nats [7] 2023/05/28 21:18:44.414903 [INF] https://docs.nats.io/jetstream
nats [7] 2023/05/28 21:18:44.414904 [INF]
nats [7] 2023/05/28 21:18:44.414905 [INF] ---------------- JETSTREAM ----------------
nats [7] 2023/05/28 21:18:44.414914 [INF] Max Memory: 1.00 GB
nats [7] 2023/05/28 21:18:44.414916 [INF] Max Storage: 10.00 GB
nats [7] 2023/05/28 21:18:44.414918 [INF] Store Directory: "/data/jetstream"
nats [7] 2023/05/28 21:18:44.414920 [INF] -------------------------------------------
nats [7] 2023/05/28 21:18:44.415198 [INF] Starting JetStream cluster
nats [7] 2023/05/28 21:18:44.415208 [INF] Creating JetStream metadata controller
nats [7] 2023/05/28 21:18:44.415608 [INF] JetStream cluster bootstrapping
nats [7] 2023/05/28 21:18:44.415854 [INF] Listening for client connections on 0.0.0.0:4222
nats [7] 2023/05/28 21:18:44.415979 [INF] Server is ready
nats [7] 2023/05/28 21:18:44.416087 [INF] Cluster name is nats
nats [7] 2023/05/28 21:18:44.416129 [INF] Listening for route connections on 0.0.0.0:6222
```
#### OS/Container environment:
k8s / reproduced locally using kind
#### Steps or code to reproduce the issue:
```
kind create cluster
helm install nats nats/nats --set cluster.enabled=true --set cluster.name=nats --set nats.jetstream.enabled=true --set nats.image.tag=2.9.17-alpine
# commands to execute in nats-box
nats str add TEST --subjects="TEST.>" --storage=file --replicas=3 # defaults for the rest
nats bench TEST --js --stream TEST --pub 1 --msgs 20000 --multisubject
# run at least once, might need to be done multiple times (note that it doesn't fully clear the stream)
nats str purge --subject=TEST.0 -f TEST
# perform a rolling restart
kubectl rollout restart statefulset/nats
# it's a bit inconsistent sadly.. since it relies on when a snapshot is taken, but can try stepping down multiple times, and you should notice the state is different per leader
nats str cluster step-down
# if not reproduced yet, can try repeating the purge command, restart again, and re-check
# per leader
nats str state
```
```
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 0.87s ago
Replica: nats-2, current, seen 0.87s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 20,001 @ 0001-01-01T00:00:00 UTC
LastSeq: 20,000 @ 2023-05-28T21:24:20 UTC
Active Consumers: 0
```
```
Cluster Information:
Name: nats
Leader: nats-1
Replica: nats-0, current, seen 0.65s ago
Replica: nats-2, current, seen 0.65s ago
State:
Messages: 19,999
Bytes: 3.2 MiB
FirstSeq: 2 @ 2023-05-28T21:24:19 UTC
LastSeq: 20,000 @ 2023-05-28T21:24:20 UTC
Active Consumers: 0
Number of Subjects: 19,999
```
```
Cluster Information:
Name: nats
Leader: nats-2
Replica: nats-0, current, seen 0.01s ago
Replica: nats-1, current, seen 0.01s ago
State:
Messages: 19,999
Bytes: 3.2 MiB
FirstSeq: 2 @ 2023-05-28T21:24:19 UTC
LastSeq: 20,000 @ 2023-05-28T21:24:20 UTC
Active Consumers: 0
Number of Subjects: 19,999
```
#### Expected result:
Stream state stays consistent even after restarts.
#### Actual result:
Due to the partial purge requests, a full stream purge happens when restarting/recovering. This results in data being lost after restarts/deploys.
|
https://github.com/nats-io/nats-server/issues/4196
|
https://github.com/nats-io/nats-server/pull/4212
|
eb09ddd73ad14f0b35bfcacad4f239461573f92f
|
e1f8064e9ed773c348ff532e7cc3088f94ccc98e
| 2023-05-28T21:47:54Z |
go
| 2023-06-04T01:12:22Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,195 |
["server/jetstream.go"]
|
NATS Server corrupts the meta.inf file when it gets killed/closed/exited while loading streams during startup.
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
Logs are included.
**nats-server.exe -DV**
[2684] 2023/05/26 01:08:32.952160 [INF] Starting nats-server
[2684] 2023/05/26 01:08:32.953198 [INF] Version: 2.9.16
[2684] 2023/05/26 01:08:32.953720 [INF] Git: [f84ca24]
[2684] 2023/05/26 01:08:32.954231 [DBG] Go build: go1.19.8
[2684] 2023/05/26 01:08:32.954257 [INF] Name: NAHUTXJA2HWFEV45HQ534LEZ54ZKESIX5NZ6VJKMYKY5XZ5H767XTS2G
[2684] 2023/05/26 01:08:32.954257 [INF] ID: NAHUTXJA2HWFEV45HQ534LEZ54ZKESIX5NZ6VJKMYKY5XZ5H767XTS2G
[2684] 2023/05/26 01:08:32.954257 [DBG] Created system account: "$SYS"
[2684] 2023/05/26 01:08:32.955842 [INF] Listening for client connections on 0.0.0.0:4222
[2684] 2023/05/26 01:08:32.955842 [DBG] Get non local IPs for "0.0.0.0"
[2684] 2023/05/26 01:08:32.960573 [DBG] ip=1.7.19.7
[2684] 2023/05/26 01:08:32.962145 [INF] Server is ready
[2684] 2023/05/26 01:08:32.963720 [DBG] maxprocs: Leaving GOMAXPROCS=4: CPU quota u
#### Versions of `nats-server` and affected client libraries used:
**Version: 2.9.16**
#### OS/Container environment:
**Windows Server 2019 Version 1809 (OS Build 17763.4010)**
#### Steps or code to reproduce the issue:
1. Start the nats-server, enable JetStream, create a stream, and add many messages (possibly of big size); this is to increase the startup time of the nats-server so that step 5 can be performed.
2. Now Stop the nats-server process.
3. Have a script to kill/stop the nats-server service upon demand.
4. Start the nats-server service. (having the logs enabled)
5. When the server starts logging "Starting restore for stream '$G > streamname'", kill nats-server.exe.
6. Start the nats-server process newly again.
7. We see the error " **Error unmarshalling stream metafile** "C:\\\DataStore\\EventData\\jetstream\\$G\\streams\\STREAM_EI\\meta.inf": invalid character 'h' looking for beginning of value". This invalid character can change every time.
8. The stream could not be recovered at all.
9. This kill can also occur at other times before the server says it is ready, with the same result.
10. The same issue also happens when the system time changes while the server is performing step 5.
#### Expected result:
When we start the service, the stream should be recoverable.
#### Actual result:
When we start the service and follow the steps, the stream meta file is updated/corrupted, leaving the stream unusable.
#### Actual Logs
[LogToSahre.txt](https://github.com/nats-io/nats-server/files/11564431/LogToSahre.txt)
Some path and stream names are redacted, please do not mind those.
|
https://github.com/nats-io/nats-server/issues/4195
|
https://github.com/nats-io/nats-server/pull/4210
|
449b429b5886c66850841ca5603925bf3af852c4
|
eb09ddd73ad14f0b35bfcacad4f239461573f92f
| 2023-05-25T11:53:16Z |
go
| 2023-06-04T00:36:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,191 |
["server/leafnode.go", "server/leafnode_test.go"]
|
Data race in `validateLeafNode`
|
On `dev`:
```
==================
WARNING: DATA RACE
Read at 0x00c0019021c0 by goroutine 150889:
github.com/nats-io/nats-server/v2/server.validateLeafNode()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:226 +0xf70
github.com/nats-io/nats-server/v2/server.validateOptions()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:989 +0x147
github.com/nats-io/nats-server/v2/server.NewServer()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:605 +0x24a
github.com/nats-io/nats-server/v2/server.RunServer()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server_test.go:79 +0x1f6
github.com/nats-io/nats-server/v2/server.TestLeafNodeLMsgSplit()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode_test.go:2389 +0x10d0
testing.tRunner()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1493 +0x47
Previous write at 0x00c0019021c0 by goroutine 150944:
github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:940 +0x924
github.com/nats-io/nats-server/v2/server.(*Server).connectToRemoteLeafNode()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:549 +0xe88
github.com/nats-io/nats-server/v2/server.(*Server).solicitLeafNodeRemotes.func2()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:174 +0x44
Goroutine 150889 (running) created at:
testing.(*T).Run()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1493 +0x75d
testing.runTests.func1()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1846 +0x99
testing.tRunner()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1446 +0x216
testing.runTests()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1844 +0x7ec
testing.(*M).Run()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1726 +0xa84
github.com/nats-io/nats-server/v2/server.TestMain()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/sublist_test.go:1603 +0x392
main.main()
_testmain.go:2333 +0x324
Goroutine 150944 (finished) created at:
github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:3509 +0x88
github.com/nats-io/nats-server/v2/server.(*Server).solicitLeafNodeRemotes()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:174 +0xab
github.com/nats-io/nats-server/v2/server.(*Server).Start()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:2248 +0x1d9b
github.com/nats-io/nats-server/v2/server.RunServer()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server_test.go:89 +0x24e
github.com/nats-io/nats-server/v2/server.TestLeafNodeLMsgSplit()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode_test.go:2382 +0xd16
testing.tRunner()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
/home/travis/.gimme/versions/go1.19.9.linux.amd64/src/testing/testing.go:1493 +0x47
==================
```
|
https://github.com/nats-io/nats-server/issues/4191
|
https://github.com/nats-io/nats-server/pull/4194
|
0db9d203838c04866b316d6cc2b0075bdf753dfa
|
24d4bd603926e0b3237ceffd217a8bab85141180
| 2023-05-24T13:50:10Z |
go
| 2023-05-25T00:42:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,162 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
One JetStream cluster node restart, KeyValue client may get expired value
|
## One JetStream cluster node restart, KeyValue client may get expired value
- [yes] Included `nats-server -DV` output
- [yes] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
### server info:
``` DV
Starting nats-server
Version: 2.9.16
Git: [f84ca24]
Go build: go1.19.8
Name: NCK6Q7PPK4US5CDFZXMWSP57HLELDR77LGY5SPNBE4GCTHMNYZQQGFLR
ID: NCK6Q7PPK4US5CDFZXMWSP57HLELDR77LGY5SPNBE4GCTHMNYZQQGFLR
Created system account: "$SYS"
Listening for client connections on 0.0.0.0:4222
Get non local IPs for "0.0.0.0"
ip=172.16.67.134
Server is ready
maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined
```
``` Start server s1 log
Starting nats-server
Version: 2.9.16
Git: [f84ca24]
Cluster: test-cluster
Name: s1
Node: 3ahZoO2Q
ID: NDH3IQQ74UTRYE3H7YY4HU5MBTHGDODQOPSRGIJIZ377EAOR7YF4JN5E
Plaintext passwords detected, use nkeys or bcrypt
Using configuration file: .\s1\config.conf
Starting http monitor on 0.0.0.0:8001
Starting JetStream
_ ___ _____ ___ _____ ___ ___ _ __ __
_ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
| || | _| | | \__ \ | | | / _| / _ \| |\/| |
\__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
https://docs.nats.io/jetstream
---------------- JETSTREAM ----------------
Max Memory: 6.00 GB
Max Storage: 1.00 TB
```
- s1
``` conf
server_name=s1
listen=4001
http_port=8001
accounts {
$SYS {
users = [
{
user: "admin",
pass: "$2asdasfasfas"
}
]
}
}
jetstream {
store_dir=C:/Users/xx/nats/s1/data
}
cluster {
name: test-cluster
listen: 0.0.0.0:6001
routes:[
nats-route://localhost:6002
nats-route://localhost:6003
]
}
```
- s2
``` conf
server_name=s2
listen=4002
http_port=8002
accounts {
$SYS {
users = [
{
user: "admin",
pass: "$2asdasfasfas"
}
]
}
}
jetstream {
store_dir=C:/Users/xx/nats/s2/data
}
cluster {
name: test-cluster
listen: 0.0.0.0:6002
routes:[
nats-route://localhost:6001
nats-route://localhost:6003
]
}
```
- s3
``` conf
server_name=s3
listen=4003
http_port=8003
accounts {
$SYS {
users = [
{
user: "admin",
pass: "$2asdasfasfas"
}
]
}
}
jetstream {
store_dir=C:/Users/xx/nats/s3/data
}
cluster {
name: test-cluster
listen: 0.0.0.0:6003
routes:[
nats-route://localhost:6002
nats-route://localhost:6001
]
}
```
#### Versions of `nats-server` and affected client libraries used:
nats-server -version: 2.9.16
github.com/nats-io/nats.go v1.25.0
#### OS/Container environment:
Windows 11 22H2 22621.1702
#### Steps or code to reproduce the issue:
- start s1,s2,s3
- connect
``` go
var servers = []string{"nats://localhost:4001", "nats://localhost:4002", "nats://localhost:4003"}
nc, err := nats.Connect(strings.Join(servers, ","))
```
- create KeyValue
``` go
kv, err := js.CreateKeyValue(&nats.KeyValueConfig{
Bucket: "test",
TTL: time.Second * 60,
Replicas: 3,
Storage: nats.FileStorage,
History: 1,
})
```
- put key1 = val1
```go
ver, err := kv.Put("key1", []byte("val1"))
if err != nil {
return err
}
fmt.Println("put[1] ver =", ver)
```
- start a goroutine
``` go
var ctx, cancel = context.WithCancel(context.Background())
defer cancel()
go func() {
for {
select {
case <-ctx.Done():
return
default:
item, err := kv.Get("key1")
if err != nil {
fmt.Println(err)
return
}
ver := item.Revision()
fmt.Println("get key1 = ", string(item.Value()), "ver =", ver)
}
}
}()
```
- shutdown server s1
- put key1 = val2
``` go
ver, err := kv.Put("key1", []byte("val2"))
if err != nil {
return err
}
fmt.Println("put[2] ver =", ver)
```
- start server s1
#### Expected result:
put[1] ver = 1
get key1 = val1 ver 1
get key1 = val1 ver 1
... more [get key1 = val1 ver 1]
put[2] ver = 2
get key1 = val2 ver 2
get key1 = val2 ver 2
get key1 = val2 ver 2
... more [get key1 = val2 ver 2]
#### Actual result:
put[1] ver = 1
get key1 = val1 ver 1
get key1 = val1 ver 1
... more [get key1 = val1 ver 1]
put[2] ver = 2
get key1 = val2 ver 2
get key1 = val2 ver 2
get key1 = val1 ver 1 // ----- unexpected value
get key1 = val1 ver 1 // ----- unexpected value
get key1 = val2 ver 2
get key1 = val2 ver 2
get key1 = val1 ver 1 // ----- unexpected value
get key1 = val2 ver 2
get key1 = val2 ver 2
get key1 = val2 ver 2
... more [get key1 = val2 ver 2]
|
https://github.com/nats-io/nats-server/issues/4162
|
https://github.com/nats-io/nats-server/pull/4171
|
4feb7b95a30242408d9eda75e1abe77940baae09
|
87f17fcff49984eb85e0316578c3365982fac271
| 2023-05-15T02:38:33Z |
go
| 2023-05-16T18:29:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,150 |
["server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go"]
|
Allow sorting Connz by RTT
|
It would be nice to be able to sort `Connz` results by their RTT, descending, since when filtering over larger numbers of connections it is hard to discover the connections that have a high RTT to the server.
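A sketch of how this would likely be consumed once available; the monitor port and the assumption that the sort key would be exposed as `rtt` are both placeholders here:
```go
// connz_by_rtt.go: query the HTTP monitor and ask it to sort by RTT.
// The value of the sort parameter ("rtt") and the port are assumptions.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8222/connz?sort=rtt")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", body)
}
```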
|
https://github.com/nats-io/nats-server/issues/4150
|
https://github.com/nats-io/nats-server/pull/4157
|
c31e710d9eb8a5744bc78abaf681221f03502fb3
|
a982bbcb73b3e32966afd5a61f9661549bd267f6
| 2023-05-11T21:19:56Z |
go
| 2023-05-13T03:47:17Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,149 |
["server/client.go", "server/monitor.go", "server/monitor_test.go", "server/server.go"]
|
Connz options doesn't support filtering by NKEY
|
Currently `ConnzOptions` only supports filtering by username (via the `user` field). It would be great if this field could also filter by NKey, or if we could have another field to filter by user NKey.
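Until such a field exists, one workaround when embedding the server is to fetch `Connz` with auth details enabled and filter client-side; a rough sketch (the matched value is a placeholder, and whether the monitor reports the public NKey as the authorized user should be verified for your setup):
```go
// connz_filter.go: embed the server, list connections with their authorized
// user, and filter client-side until a dedicated NKey filter is available.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	s, err := server.NewServer(&server.Options{Port: 4222, HTTPPort: 8222})
	if err != nil {
		log.Fatal(err)
	}
	go s.Start()
	if !s.ReadyForConnections(5 * time.Second) {
		log.Fatal("server not ready")
	}
	defer s.Shutdown()

	connz, err := s.Connz(&server.ConnzOptions{Username: true})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range connz.Conns {
		if c.AuthorizedUser == "UAPLACEHOLDERNKEY" { // placeholder target value
			fmt.Printf("cid=%d user=%s rtt=%s\n", c.Cid, c.AuthorizedUser, c.RTT)
		}
	}
}
```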
|
https://github.com/nats-io/nats-server/issues/4149
|
https://github.com/nats-io/nats-server/pull/4156
|
fc64c6119dfe56c5f4796031ff502d5d2cfd5125
|
c31e710d9eb8a5744bc78abaf681221f03502fb3
| 2023-05-11T21:17:06Z |
go
| 2023-05-12T22:38:46Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,144 |
["server/monitor.go", "server/monitor_test.go"]
|
HTTP Monitor accountz is returning mapped account imports backwards
|
## Defect
The `accountz` monitor is displaying account imports with a `to:` mapping backwards. Note the example below, where APP imports `downlink.>` from CLOUD and maps it to `event.>` in its own namespace. The monitor swaps the origin and `to` subjects in its query output.
server.conf
```text
port: 4222
http_port: 8222
accounts: {
CLOUD: {
exports: [
{ service: "downlink.>" }
]
}
APP: {
imports: [
{ service: { account: "CLOUD", subject: "downlink.>" }, to: "event.>" }
]
}
}
```
curl 'localhost:8222/accountz?acc=APP'
```json
{
"server_id": "NBYEWXTFQ35FLVGIIBBSBF3E6XTTMVRW5SBY3ZVUVYXD7RHO47TU3DMG",
"now": "2023-05-10T16:42:43.234101423Z",
"system_account": "$SYS",
"account_detail": {
"account_name": "APP",
"update_time": "2023-05-10T16:41:09.454493125Z",
"expired": false,
"complete": true,
"jetstream_enabled": false,
"leafnode_connections": 0,
"client_connections": 0,
"subscriptions": 4,
"imports": [
{
"subject": "$SYS.REQ.ACCOUNT.PING.STATZ",
"account": "$SYS",
"to": "$SYS.REQ.ACCOUNT.APP.STATZ",
"type": "service",
"invalid": false,
"share": false,
"tracking": false
},
{
"subject": "event.\u003e",
"account": "CLOUD",
"to": "downlink.\u003e",
"type": "service",
"invalid": false,
"share": false,
"tracking": false
},
{
"subject": "$SYS.REQ.ACCOUNT.PING.CONNZ",
"account": "$SYS",
"to": "$SYS.REQ.ACCOUNT.APP.CONNZ",
"type": "service",
"invalid": false,
"share": false,
"tracking": false
},
{
"subject": "$SYS.REQ.SERVER.PING.CONNZ",
"account": "$SYS",
"to": "$SYS.REQ.ACCOUNT.APP.CONNZ",
"type": "service",
"invalid": false,
"share": false,
"tracking": false
}
],
"sublist_stats": {
"num_subscriptions": 4,
"num_cache": 0,
"num_inserts": 4,
"num_removes": 0,
"num_matches": 0,
"cache_hit_rate": 0,
"max_fanout": 0,
"avg_fanout": 0
}
}
}
```
#### Versions of `nats-server` and affected client libraries used:
nats-server v2.9.16
#### OS/Container environment:
linux
#### Steps or code to reproduce the issue:
Run server with above conf:
`nats-server --config ./server.conf`
Query the HTTP monitor for accountz and specific acc details:
`curl 'localhost:8222/accountz?acc=APP'`
#### Expected result:
The monitor correctly displays that APP imports `downlink.>` from CLOUD and maps it to `event.>` in its own namespace.
#### Actual result:
The origin subject and the mapped subject are reversed in the query output.
|
https://github.com/nats-io/nats-server/issues/4144
|
https://github.com/nats-io/nats-server/pull/4158
|
ea75beaeb1c03cc76170535371c6c4ef12d21b27
|
fe71ef524ce8413edc5c6e3ad3140694d2755254
| 2023-05-10T16:56:11Z |
go
| 2023-05-15T21:04:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,141 |
["server/errors.json", "server/jetstream_errors_generated.go", "server/jetstream_super_cluster_test.go", "server/jetstream_test.go", "server/stream.go"]
|
JetStream Create API fails to reject some invalid source configurations
|
## Defect
An invalid stream configuration (missing names for some of the sources) is detected if using the NATS CLI to add the stream asset, but is not detected by the stream create API, e.g. when creating a stream with NACK or through a custom NATS program.
CLI detects:
```bash
$ nats --server "nats://UserB1:[email protected]:4222" str add 'BOGUS-SOURCES' --config ./bogus-sources.json
nats: error: could not create Stream: configuration validation failed: /sources/1/name: , /sources/1/name: length must be >= 1, but got 0, /sources/1/name: does not match pattern '^[^.*>]+$', /sources/2/name: , /sources/2/name: length must be >= 1, but got 0, /sources/2/name: does not match pattern '^[^.*>]+$'
```
Server allows:
```bash
$ cat ./bogus-sources.json | nats --server "nats://UserB1:[email protected]:4222" req '$JS.API.STREAM.CREATE.BOGUS-SOURCES' --replies 1
16:33:58 Reading payload from STDIN
16:33:58 Sending request on "$JS.API.STREAM.CREATE.BOGUS-SOURCES"
16:33:58 Received with rtt 8.268718ms
{"type":"io.nats.jetstream.api.v1.stream_create_response","config":{"name":"BOGUS-SOURCES","retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"max_age":0,"max_msgs_per_subject":-1,"max_msg_size":-1,"discard":"old","storage":"file","num_replicas":1,"duplicate_window":120000000000,"sources":[{"name":"LACR-STREAM"},{"name":"","external":{"api":"lacrJS.API","deliver":""}},{"name":"","external":{"api":"","deliver":"retail-lacr"}}],"allow_direct":false,"mirror_direct":false,"sealed":false,"deny_delete":false,"deny_purge":false,"allow_rollup_hdrs":false},"created":"2023-05-09T23:33:58.817532152Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consumer_count":0},"cluster":{"name":"nats","leader":"mybasic-nats-2"},"sources":[{"name":"LACR-STREAM","lag":0,"active":-1},{"name":"","external":{"api":"lacrJS.API","deliver":""},"lag":0,"active":-1},{"name":"","external":{"api":"","deliver":"retail-lacr"},"lag":0,"active":-1}],"did_create":true}
```
#### Versions of `nats-server` and affected client libraries used:
2.9.x (main)
#### Steps or code to reproduce the issue:
Create a stream configuration that has incomplete source definitions (here 2 of 3 sources missing name attribute):
```text
{
"name": "BOGUS-SOURCES",
"sources": [
{
"name": "LACR-STREAM"
},
{
"external": {
"api": "lacrJS.API"
}
},
{
"external": {
"deliver": "retail-lacr"
}
}
],
"retention": "limits",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "file",
"discard": "old",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
}
```
Note: this is equivalent to a NACK stream asset YAML with two erroneous hyphens (externalApiPrefix, externalDeliverPrefix):
```yaml
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
name: bogus-sources
spec:
name: "BOGUS-SOURCES"
sources:
- name: "LACR-STREAM"
- externalApiPrefix: "lacrJS.API"
- externalDeliverPrefix: "retail-lacr"
retention: limits
maxConsumers: -1
maxMsgsPerSubject: -1
maxMsgs: -1
maxBytes: -1
maxAge: ""
maxMsgSize: -1
storage: file
discard: old
replicas: 1
duplicateWindow: 120s
denyDelete: false
allowRollup: false
allowDirect: false
```
#### Expected result:
NATS server will reject the stream creation request due to malformed request payload.
#### Actual result:
NATS server creates the asset, but later attempts to interact with the stream through the NATS CLI, e.g. `nats stream info BOGUS-SOURCES`, fail with schema validation errors.
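The same malformed configuration can be submitted programmatically; a hedged sketch with nats.go (server URL and credentials are placeholders), where the expectation is that `AddStream` returns a validation error instead of creating the asset:
```go
// bogus_sources.go: attempt to create a stream whose sources are missing
// names, mirroring the JSON payload above. Ideally the server rejects this.
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Placeholder URL/credentials; point this at the account used above.
	nc, err := nats.Connect("nats://UserB1:password@localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	_, err = js.AddStream(&nats.StreamConfig{
		Name: "BOGUS-SOURCES",
		Sources: []*nats.StreamSource{
			{Name: "LACR-STREAM"},
			{External: &nats.ExternalStream{APIPrefix: "lacrJS.API"}},      // no name
			{External: &nats.ExternalStream{DeliverPrefix: "retail-lacr"}}, // no name
		},
	})
	fmt.Println("AddStream error:", err) // expected: a validation error, not nil
}
```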
|
https://github.com/nats-io/nats-server/issues/4141
|
https://github.com/nats-io/nats-server/pull/4222
|
19eba1b8c84df5c5ef02d6e9b0295e0da5091211
|
40619659d53bffef148edbb44a629b0f163d3a49
| 2023-05-09T23:56:34Z |
go
| 2023-06-08T22:14:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,110 |
["server/accounts_test.go", "server/mqtt_test.go", "server/routes_test.go", "server/server.go", "server/server_test.go", "server/websocket_test.go", "test/cluster_test.go"]
|
server.Start() documentation indicates the method blocks; however, it does not
|
Hey,
The server.Start() method documentation seems a bit misleading. The documentation indicates that calling the method blocks, and thus that it should be called in a goroutine. However, the current implementation does not block, and the common pattern is to call server.WaitForShutdown() to block/wait for the server to exit.
This is something one runs into when trying to embed the NATS server. It is pretty trivial to figure out how to call the method, but it would be even quicker if the docs were better.
Current docs in server/server.go
``` go
// Start up the server, this will block.
// Start via a Go routine if needed.
func (s *Server) Start() {
s.Noticef("Starting nats-server")
...
```
Something like this might be better
``` go
// Start up the server, this will not block.
//
// WaitForShutdown can be used to block and wait for the server to shutdown if needed.
func (s *Server) Start() {
s.Noticef("Starting nats-server")
...
```
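For reference, the embedding pattern implied by the proposed wording, sketched with placeholder options:
```go
// embed.go: minimal embedded-server usage showing that Start() returns
// immediately and that WaitForShutdown() is what actually blocks.
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	s, err := server.NewServer(&server.Options{Port: 4222}) // placeholder options
	if err != nil {
		log.Fatal(err)
	}

	s.Start() // returns almost immediately; listeners run in their own goroutines

	if !s.ReadyForConnections(5 * time.Second) {
		log.Fatal("server did not become ready in time")
	}
	log.Println("server ready, blocking until shutdown...")

	s.WaitForShutdown() // blocks until s.Shutdown() is called from elsewhere
}
```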
|
https://github.com/nats-io/nats-server/issues/4110
|
https://github.com/nats-io/nats-server/pull/4111
|
3feb9f73b9bf9fcf719378aaadfffee2c03672ff
|
c3b07df86f16e1ae1e772a9483a9e31c029444d0
| 2023-04-27T11:13:35Z |
go
| 2023-04-27T16:50:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,043 |
["server/filestore.go", "server/memstore.go", "server/store.go"]
|
Extreme JetStream write performance degradation when subject counts are limited
|
## Defect
Writes to JetStream seem to slow down drastically when there is a constrained number of subjects _and_ there is a size/count/ttl limit applied.
For example, with a stream limited to 100k messages and writing 1m messages
| test | MsgsPerSec |
|------|------------|
| single subject | 52414 |
| all unique subjects | 30191 |
| 16 subjects | 37574 |
| 256 subjects | 8984 |
| 1024 subjects | 2417 |
#### Versions of `nats-server` and affected client libraries used:
Tested with 2.9.6 and 2.10.0-beta.30 (via nats cli)
#### OS/Container environment:
* OS: Linux Mint 21.1
* Kernel: 6.2.7-x64v3-xanmod1
* CPU: i7-8565U
#### Steps or code to reproduce the issue:
```
$ export NATS_CONTEXT=nats_development
$ rm -rf ~/.local/share/nats/nats_development/jetstream
$ nats server run --jetstream --verbose
$ nats stream add --defaults --max-msgs=100000 --subjects 'foo.>' --max-bytes=1GB TEST
```
```
$ nats bench --pub=1 --js foo.bar --stream TEST --msgs=1000000 --size=200 --pubbatch=1024 --csv /dev/stdout --multisubject
```
#### Expected result:
Running with different `--multisubjectmax` values should result in similar performance
#### Actual result:
```csv
RunID,ClientID,MsgCount,MsgBytes,MsgsPerSec,BytesPerSec,DurationSecs
single subject,P0,2000000,200000000,52414,10482974.154080,19.078555
--multisubjectmax=0,P0,2000000,200000000,30191,6038270.572961,33.122067
--multisubjectmax=16,JoCx5tctLCR9yhX1yC7zzt,P0,2000000,200000000,37574,7514918.051133,26.613730
--multisubjectmax=256,P0,2000000,200000000,8984,1796898.424830,111.302897
--multisubjectmax=1024,P0,2000000,200000000,2417,483465.501359,413.679982
```
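A stripped-down Go reproduction of the publish pattern (the subject prefix follows the stream config above; publishes are synchronous rather than batched, so absolute rates will differ from `nats bench`, but the per-subject-count trend should still show):
```go
// multisubject_pub.go: publish messages across a bounded set of subjects on a
// limits-based stream to observe the per-subject slowdown described above.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	const (
		totalMsgs   = 100_000
		numSubjects = 1024 // vary this: 1, 16, 256, 1024...
	)

	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	payload := make([]byte, 200)
	start := time.Now()
	for i := 0; i < totalMsgs; i++ {
		subj := fmt.Sprintf("foo.bar.%d", i%numSubjects)
		if _, err := js.Publish(subj, payload); err != nil {
			log.Fatal(err)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("%d msgs in %v (%.0f msgs/sec)\n", totalMsgs, elapsed, float64(totalMsgs)/elapsed.Seconds())
}
```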
|
https://github.com/nats-io/nats-server/issues/4043
|
https://github.com/nats-io/nats-server/pull/4048
|
3602b3882a7fc8558cc5ca5bf1d7809f8cf8274f
|
3f8fc5c5b15ead6bd9e9b5267d633c84c743f755
| 2023-04-12T12:07:53Z |
go
| 2023-04-14T00:31:49Z |