status (stringclasses, 1 value) | repo_name (stringlengths 9–24) | repo_url (stringlengths 28–43) | issue_id (int64, 1–104k) | updated_files (stringlengths 8–1.76k) | title (stringlengths 4–369) | body (stringlengths 0–254k, nullable ⌀) | issue_url (stringlengths 37–56) | pull_url (stringlengths 37–54) | before_fix_sha (stringlengths 40) | after_fix_sha (stringlengths 40) | report_datetime (timestamp[ns, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC])
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,041 |
["server/filestore.go", "server/filestore_test.go"]
|
Purging subjects using file storage doesn't update the first seq correctly
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
```
[1] 2023/04/12 08:26:57.409824 [INF] Starting nats-server
[1] 2023/04/12 08:26:57.409898 [INF] Version: 2.9.15
[1] 2023/04/12 08:26:57.409900 [INF] Git: [b91fa85]
[1] 2023/04/12 08:26:57.409901 [INF] Name: NDPQFSNXRAAYAREAM4CLTMZAKMLPMI4GHOS4XPQEU46NEBPZEPS4QSZD
[1] 2023/04/12 08:26:57.409905 [INF] Node: gpgKsVwa
[1] 2023/04/12 08:26:57.409907 [INF] ID: NDPQFSNXRAAYAREAM4CLTMZAKMLPMI4GHOS4XPQEU46NEBPZEPS4QSZD
[1] 2023/04/12 08:26:57.412407 [INF] Starting JetStream
[1] 2023/04/12 08:26:57.412692 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2023/04/12 08:26:57.412696 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2023/04/12 08:26:57.412697 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2023/04/12 08:26:57.412698 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2023/04/12 08:26:57.412699 [INF]
[1] 2023/04/12 08:26:57.412700 [INF] https://docs.nats.io/jetstream
[1] 2023/04/12 08:26:57.412701 [INF]
[1] 2023/04/12 08:26:57.412702 [INF] ---------------- JETSTREAM ----------------
[1] 2023/04/12 08:26:57.412705 [INF] Max Memory: 11.25 GB
[1] 2023/04/12 08:26:57.412706 [INF] Max Storage: 125.81 GB
[1] 2023/04/12 08:26:57.412707 [INF] Store Directory: "/tmp/nats/jetstream"
[1] 2023/04/12 08:26:57.412708 [INF] -------------------------------------------
[1] 2023/04/12 08:26:57.413328 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2023/04/12 08:26:57.413648 [INF] Server is ready
```
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.9.15
natsio/nats-box latest
#### OS/Container environment:
Using docker:
```
docker run --network host nats -js
docker run --rm --network host -it --name nats-cli natsio/nats-box
```
#### Steps or code to reproduce the issue:
1. Create an OBJ bucket:
`nats obj add bucket --storage=file`
(file storage is used to demonstrate the issue when many messages span multiple message blocks)
2. Create a file with random data:
`head -c 100000000 </dev/urandom > bucket_data`
3. Put the data into the bucket:
`nats obj put bucket bucket_data -f`
4. `nats str info -a OBJ_bucket`
Expected: `state.FirstSeq = 1, state.LastSeq = 764, state.NumDeleted = 0`
Actual: matches the expected values
5. Put the data into the bucket again, to start purging previous data:
`nats obj put bucket bucket_data -f`
6. `nats str info -a OBJ_bucket`
Expected: `state.FirstSeq = 765, state.LastSeq = 1528, state.NumDeleted = 0`
Actual: `state.FirstSeq = 63, state.LastSeq = 1528, state.NumDeleted = 702`
#### Expected result:
A stream's first seq should be correct after purging from a subject.
This only occurs when using file storage; memory storage works as expected.
#### Actual result:
A stream's first seq stops being updated.
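For reference, the same reproduction can be scripted with the nats.go client; a minimal sketch, assuming a local server (the `OBJ_bucket` stream name follows the object store's backing-stream naming convention):
```go
package main

import (
	"crypto/rand"
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("localhost:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}

	// Step 1: file-backed object store bucket.
	obs, err := js.CreateObjectStore(&nats.ObjectStoreConfig{
		Bucket:  "bucket",
		Storage: nats.FileStorage,
	})
	if err != nil {
		panic(err)
	}

	// Steps 2-3 and 5: ~100 MB of random data, put twice so the second
	// put purges the first object's messages by subject.
	data := make([]byte, 100_000_000)
	rand.Read(data)
	for i := 0; i < 2; i++ {
		if _, err := obs.PutBytes("bucket_data", data); err != nil {
			panic(err)
		}
		// Steps 4 and 6: inspect the backing stream's state.
		si, err := js.StreamInfo("OBJ_bucket")
		if err != nil {
			panic(err)
		}
		fmt.Printf("FirstSeq=%d LastSeq=%d NumDeleted=%d\n",
			si.State.FirstSeq, si.State.LastSeq, si.State.NumDeleted)
	}
}
```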
|
https://github.com/nats-io/nats-server/issues/4041
|
https://github.com/nats-io/nats-server/pull/4042
|
2a03d9dff766293e499f1893f7783615cc260641
|
4fcc2ff418891fcc4fa2cb3d88b9e1b057b204fb
| 2023-04-12T08:41:22Z |
go
| 2023-04-12T18:43:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,038 |
["server/consumer.go", "server/jetstream_test.go"]
|
PullSubscribe with '%' character results in disconnect and no messages on fetch
|
Hi,
We are using a NATS server with JetStream configured.
When we want to do a PullSubscribe on a wildcard subject, we encode the subject to become the durable name of the consumer. The result is a string that no longer contains the wildcard characters but does introduce '%'. This should be valid, since '%' is in the printable ASCII range and differs from the wildcard characters.
However, when we create the subscription and fetch the messages, we get no results. If we use a consumer name that doesn't include any '%', we get the correct results.
You can reproduce this with the following procedure:
- Enable JetStream on the NATS server.
- I created a JetStream with the following configuration (yellow highlighted items differ from default):

- I published a message: `nats pub "test.1" "someData"`
- I created a small test program to simulate this. Commenting out one of the two lines marked `// <-- SWAP BETWEEN THESE TWO LINES` lets you switch between the working and the broken scenario.
```csharp
using NATS.Client;
using NATS.Client.JetStream;

class Program
{
    static void Main(string[] args)
    {
        // Connect
        var connectionFactory = new ConnectionFactory();
        var options = ConnectionFactory.GetDefaultOptions();
        options.DisconnectedEventHandler += (sender, eventArgs) =>
        {
            Console.WriteLine("ERROR");
        };
        options.Url = "localhost:4222";
        var connection = connectionFactory.CreateConnection(options);
        while (connection.State != ConnState.CONNECTED)
        {
            Thread.Sleep(100); // the original Task.Delay(100) was never awaited, so it did not wait at all
        }

        // Subscribe
        var context = connection.CreateJetStreamContext();
        var consumerName = "Test%123"; // <-- SWAP BETWEEN THESE TWO LINES
        //var consumerName = "Test123"; // <-- SWAP BETWEEN THESE TWO LINES
        var subOptions = PullSubscribeOptions.Builder().WithDurable(consumerName).Build();
        var sub = context.PullSubscribe("test.*", subOptions);
        IList<Msg>? messages = sub.Fetch(10, 1000);

        // Cleanup
        sub.Unsubscribe();
        IJetStreamManagement? jsm = connection.CreateJetStreamManagementContext();
        jsm?.DeleteConsumer("test", consumerName);

        Console.WriteLine($"Number of received messages: {messages.Count}");
    }
}
```
No error is seen except for a disconnect and reconnect that happens when a '%' is present in the consumer name.
I collected these logs:
- Consumer name "Test123":
```
[48464] 2023/04/11 15:59:33.199889 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- MSG_PAYLOAD: ["{\"offset\":0,\"subject\":\"test.*\"}"]
[48464] 2023/04/11 15:59:33.200409 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG _INBOX.pU59zTePf8ImM5SwaBMIeN.1 1 110]
[48464] 2023/04/11 15:59:33.201444 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 517]
[48464] 2023/04/11 15:59:33.201444 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzIQ\",\"timestamp\":\"2023-04-11T13:59:33.2004095Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":29941500,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.STREAM.NAMES\",\"request\":\"{\\\"offset\\\":0,\\\"subject\\\":\\\"test.*\\\"}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_names_response\\\",\\\"total\\\":1,\\\"offset\\\":0,\\\"limit\\\":1024,\\\"streams\\\":[\\\"test\\\"]}\"}"]
[48464] 2023/04/11 15:59:33.209996 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [PUB $JS.API.CONSUMER.INFO.test.Test123 _INBOX.pU59zTePf8ImM5SwaBMIeN.2 0]
[48464] 2023/04/11 15:59:33.210561 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- MSG_PAYLOAD: [""]
[48464] 2023/04/11 15:59:33.211602 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG _INBOX.pU59zTePf8ImM5SwaBMIeN.2 1 131]
[48464] 2023/04/11 15:59:33.212125 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 502]
[48464] 2023/04/11 15:59:33.212648 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzKs\",\"timestamp\":\"2023-04-11T13:59:33.2116026Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":29941500,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.INFO.test.Test123\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_info_response\\\",\\\"error\\\":{\\\"code\\\":404,\\\"err_code\\\":10014,\\\"description\\\":\\\"consumer not found\\\"}}\"}"]
[48464] 2023/04/11 15:59:33.222200 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [SUB _INBOX.pU59zTePf8ImW6SwaBMIeN 2]
[48464] 2023/04/11 15:59:33.229764 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [PUB $JS.API.CONSUMER.CREATE.test.Test123.test.* _INBOX.pU59zTePf8ImM5SwaBMIeN.3 194]
[48464] 2023/04/11 15:59:33.229764 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- MSG_PAYLOAD: ["{\"stream_name\":\"test\",\"config\":{\"durable_name\":\"Test123\",\"deliver_policy\":\"all\",\"opt_start_seq\":0,\"ack_policy\":\"explicit\",\"filter_subject\":\"test.*\",\"replay_policy\":\"instant\",\"rate_limit_bps\":0}}"]
[48464] 2023/04/11 15:59:33.233425 [←[33mTRC←[0m] JETSTREAM - <<- [SUB $JSC.CI.$G.test.Test123 45]
[48464] 2023/04/11 15:59:33.233950 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG _INBOX.pU59zTePf8ImM5SwaBMIeN.3 1 553]
[48464] 2023/04/11 15:59:33.234470 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1228]
[48464] 2023/04/11 15:59:33.236156 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzPm\",\"timestamp\":\"2023-04-11T13:59:33.2339506Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":29941500,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.CREATE.test.Test123.test.*\",\"request\":\"{\\\"stream_name\\\":\\\"test\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"Test123\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"opt_start_seq\\\":0,\\\"ack_policy\\\":\\\"explicit\\\",\\\"filter_subject\\\":\\\"test.*\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"rate_limit_bps\\\":0}}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_create_response\\\",\\\"stream_name\\\":\\\"test\\\",\\\"name\\\":\\\"Test123\\\",\\\"created\\\":\\\"2023-04-11T13:59:33.2307959Z\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"Test123\\\",\\\"name\\\":\\\"Test123\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"explicit\\\",\\\"ack_wait\\\":30000000000,\\\"max_deliver\\\":-1,\\\"filter_subject\\\":\\\"test.*\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"max_waiting\\\":512,\\\"max_ack_pending\\\":1000,\\\"num_replicas\\\":0},\\\"delivered\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"ack_floor\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"num_ack_pending\\\":0,\\\"num_redelivered\\\":0,\\\"num_waiting\\\":0,\\\"num_pending\\\":1}\"}"]
[48464] 2023/04/11 15:59:33.271801 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [PUB $JS.API.CONSUMER.MSG.NEXT.test.Test123 _INBOX.pU59zTePf8ImW6SwaBMIeN 32]
[48464] 2023/04/11 15:59:33.271961 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- MSG_PAYLOAD: ["{\"batch\":10,\"expires\":990000000}"]
[48464] 2023/04/11 15:59:33.274079 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG test.1 2 $JS.ACK.test.Test123.1.1.1.1681221465875194400.0 8]
[48464] 2023/04/11 15:59:34.269651 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [HMSG _INBOX.pU59zTePf8ImW6SwaBMIeN 2 81 81]
[48464] 2023/04/11 15:59:34.290161 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [UNSUB 2 0]
[48464] 2023/04/11 15:59:34.291194 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <-> [DELSUB 2]
[48464] 2023/04/11 15:59:34.291194 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- [PUB $JS.API.CONSUMER.DELETE.test.Test123 _INBOX.pU59zTePf8ImM5SwaBMIeN.4 0]
[48464] 2023/04/11 15:59:34.291717 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <<- MSG_PAYLOAD: [""]
[48464] 2023/04/11 15:59:34.291717 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 1]
[48464] 2023/04/11 15:59:34.291717 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 2]
[48464] 2023/04/11 15:59:34.292237 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 45]
[48464] 2023/04/11 15:59:34.294852 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG _INBOX.pU59zTePf8ImM5SwaBMIeN.4 1 75]
[48464] 2023/04/11 15:59:34.294852 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 440]
[48464] 2023/04/11 15:59:34.294852 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzUg\",\"timestamp\":\"2023-04-11T13:59:34.2948523Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":29941500,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.DELETE.test.Test123\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_delete_response\\\",\\\"success\\\":true}\"}"]
[48464] 2023/04/11 15:59:34.306878 [←[33mTRC←[0m] [::1]:53879 - cid:8 - <-> [DELSUB 1]
```
- Consumer name "Test%123":
```
[48464] 2023/04/11 16:00:03.603246 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- MSG_PAYLOAD: ["{\"offset\":0,\"subject\":\"test.*\"}"]
[48464] 2023/04/11 16:00:03.603246 [←[33mTRC←[0m] [::1]:53987 - cid:12 - ->> [MSG _INBOX.FJeU2ND3YLF47JaShLTIYr.1 1 110]
[48464] 2023/04/11 16:00:03.604298 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 517]
[48464] 2023/04/11 16:00:03.604298 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzX8\",\"timestamp\":\"2023-04-11T14:00:03.6032468Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":25338400,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.STREAM.NAMES\",\"request\":\"{\\\"offset\\\":0,\\\"subject\\\":\\\"test.*\\\"}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_names_response\\\",\\\"total\\\":1,\\\"offset\\\":0,\\\"limit\\\":1024,\\\"streams\\\":[\\\"test\\\"]}\"}"]
[48464] 2023/04/11 16:00:03.612980 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- [PUB $JS.API.CONSUMER.INFO.test.Test%123 _INBOX.FJeU2ND3YLF47JaShLTIYr.2 0]
[48464] 2023/04/11 16:00:03.613655 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- MSG_PAYLOAD: [""]
[48464] 2023/04/11 16:00:03.614184 [←[33mTRC←[0m] [::1]:53987 - cid:12 - ->> [MSG _INBOX.FJeU2ND3YLF47JaShLTIYr.2 1 131]
[48464] 2023/04/11 16:00:03.615251 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 503]
[48464] 2023/04/11 16:00:03.615251 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzZa\",\"timestamp\":\"2023-04-11T14:00:03.6141843Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":25338400,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.INFO.test.Test%123\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_info_response\\\",\\\"error\\\":{\\\"code\\\":404,\\\"err_code\\\":10014,\\\"description\\\":\\\"consumer not found\\\"}}\"}"]
[48464] 2023/04/11 16:00:03.624851 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- [SUB _INBOX.FJeU2ND3YLF4ANaShLTIYr 2]
[48464] 2023/04/11 16:00:03.631254 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- [PUB $JS.API.CONSUMER.CREATE.test.Test%123.test.* _INBOX.FJeU2ND3YLF47JaShLTIYr.3 195]
[48464] 2023/04/11 16:00:03.631820 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- MSG_PAYLOAD: ["{\"stream_name\":\"test\",\"config\":{\"durable_name\":\"Test%123\",\"deliver_policy\":\"all\",\"opt_start_seq\":0,\"ack_policy\":\"explicit\",\"filter_subject\":\"test.*\",\"replay_policy\":\"instant\",\"rate_limit_bps\":0}}"]
[48464] 2023/04/11 16:00:03.634463 [←[33mTRC←[0m] JETSTREAM - <<- [SUB $JSC.CI.$G.test.Test%123 46]
[48464] 2023/04/11 16:00:03.634463 [←[33mTRC←[0m] [::1]:53987 - cid:12 - ->> [MSG _INBOX.FJeU2ND3YLF47JaShLTIYr.3 1 556]
[48464] 2023/04/11 16:00:03.634463 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1233]
[48464] 2023/04/11 16:00:03.635524 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzeU\",\"timestamp\":\"2023-04-11T14:00:03.6344634Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":25338400,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.CREATE.test.Test%123.test.*\",\"request\":\"{\\\"stream_name\\\":\\\"test\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"Test%123\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"opt_start_seq\\\":0,\\\"ack_policy\\\":\\\"explicit\\\",\\\"filter_subject\\\":\\\"test.*\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"rate_limit_bps\\\":0}}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_create_response\\\",\\\"stream_name\\\":\\\"test\\\",\\\"name\\\":\\\"Test%123\\\",\\\"created\\\":\\\"2023-04-11T14:00:03.6318205Z\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"Test%123\\\",\\\"name\\\":\\\"Test%123\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"explicit\\\",\\\"ack_wait\\\":30000000000,\\\"max_deliver\\\":-1,\\\"filter_subject\\\":\\\"test.*\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"max_waiting\\\":512,\\\"max_ack_pending\\\":1000,\\\"num_replicas\\\":0},\\\"delivered\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"ack_floor\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"num_ack_pending\\\":0,\\\"num_redelivered\\\":0,\\\"num_waiting\\\":0,\\\"num_pending\\\":1}\"}"]
[48464] 2023/04/11 16:00:03.666037 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- [PUB $JS.API.CONSUMER.MSG.NEXT.test.Test%123 _INBOX.FJeU2ND3YLF4ANaShLTIYr 32]
[48464] 2023/04/11 16:00:03.666344 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <<- MSG_PAYLOAD: ["{\"batch\":10,\"expires\":990000000}"]
[48464] 2023/04/11 16:00:03.667887 [←[33mTRC←[0m] [::1]:53987 - cid:12 - ->> [MSG test.1 2 $JS.ACK.test.Test%d.1.1.1.1681221465875194400%!(EXTRA uint64=0) 8]
[48464] 2023/04/11 16:00:03.679056 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <-> [DELSUB 1]
[48464] 2023/04/11 16:00:03.679907 [←[33mTRC←[0m] [::1]:53987 - cid:12 - <-> [DELSUB 2]
[48464] 2023/04/11 16:00:06.725543 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [CONNECT {"verbose":false,"pedantic":false,"ssl_required":false,"lang":".NET","version":"1.0.3.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
[48464] 2023/04/11 16:00:06.726061 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [PING]
[48464] 2023/04/11 16:00:06.727080 [←[33mTRC←[0m] [::1]:54019 - cid:15 - ->> [PONG]
[48464] 2023/04/11 16:00:06.732643 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [SUB _INBOX.FJeU2ND3YLF47JaShLTIYr.* 1]
[48464] 2023/04/11 16:00:06.733410 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [SUB _INBOX.FJeU2ND3YLF4ANaShLTIYr 2]
[48464] 2023/04/11 16:00:06.740087 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [UNSUB 2 0]
[48464] 2023/04/11 16:00:06.740139 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <-> [DELSUB 2]
[48464] 2023/04/11 16:00:06.740139 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [PING]
[48464] 2023/04/11 16:00:06.740643 [←[33mTRC←[0m] [::1]:54019 - cid:15 - ->> [PONG]
[48464] 2023/04/11 16:00:06.741240 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- [PUB $JS.API.CONSUMER.DELETE.test.Test%123 _INBOX.FJeU2ND3YLF47JaShLTIYr.4 0]
[48464] 2023/04/11 16:00:06.744399 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <<- MSG_PAYLOAD: [""]
[48464] 2023/04/11 16:00:06.744933 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 1]
[48464] 2023/04/11 16:00:06.745451 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 2]
[48464] 2023/04/11 16:00:06.745965 [←[33mTRC←[0m] JETSTREAM - <-> [DELSUB 46]
[48464] 2023/04/11 16:00:06.747524 [←[33mTRC←[0m] [::1]:54019 - cid:15 - ->> [MSG _INBOX.FJeU2ND3YLF47JaShLTIYr.4 1 75]
[48464] 2023/04/11 16:00:06.747524 [←[33mTRC←[0m] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 434]
[48464] 2023/04/11 16:00:06.747524 [←[33mTRC←[0m] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"uW4fSa7rf1gFbbjcJiCzjO\",\"timestamp\":\"2023-04-11T14:00:06.7475249Z\",\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\",\"client\":{\"acc\":\"$G\",\"rtt\":1,\"server\":\"NDGUJBY2XGSUJJU2XWVIPKQHHGLPPGTFHH3Y2ITLDMUXQDPKOT4UD536\"},\"subject\":\"$JS.API.CONSUMER.DELETE.test.Test%123\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_delete_response\\\",\\\"success\\\":true}\"}"]
[48464] 2023/04/11 16:00:06.753389 [←[33mTRC←[0m] [::1]:54019 - cid:15 - <-> [DELSUB 1]
```
This line in the log above looks suspicious if you compare it to the log of the working situation (note the `%d`):
```
[48464] 2023/04/11 16:00:03.667887 [←[33mTRC←[0m] [::1]:53987 - cid:12 - ->> [MSG test.1 2 $JS.ACK.test.Test%d.1.1.1.1681221465875194400%!(EXTRA uint64=0) 8]
```
VS
```
[48464] 2023/04/11 15:59:33.274079 [←[33mTRC←[0m] [::1]:53879 - cid:8 - ->> [MSG test.1 2 $JS.ACK.test.Test123.1.1.1.1681221465875194400.0 8]
```
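That `%!(EXTRA ...)` marker is what Go's `fmt` emits when format verbs and arguments get out of sync, which happens when a string containing '%' is spliced directly into a format string. A minimal sketch of that failure mode (the ack-subject template below is illustrative, not the actual server code):
```go
package main

import "fmt"

func main() {
	consumer := "Test%123"

	// Splicing the consumer name into a template that still carries %d verbs:
	// the "%123.%" run is parsed as width/precision plus a '%' verb (which
	// prints a literal '%' and consumes no argument), so the verbs and
	// arguments no longer line up and the last one is reported as %!(EXTRA ...).
	tmpl := "$JS.ACK.test." + consumer + ".%d.%d.%d.%d.%d"
	fmt.Println(fmt.Sprintf(tmpl, 1, 1, 1, 1681221465875194400, 0))

	// Passing the name as an argument keeps it out of the format string.
	fmt.Println(fmt.Sprintf("$JS.ACK.test.%s.%d.%d.%d.%d.%d",
		consumer, 1, 1, 1, 1681221465875194400, 0))
}
```
The first print reproduces the suspicious `Test%d...%!(EXTRA ...)` shape from the log above; the second shows the well-formed subject.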
Is this expected behavior, are we doing something wrong, or is this a bug?
Thanks in advance for the support!
Greetings,
Frederic
|
https://github.com/nats-io/nats-server/issues/4038
|
https://github.com/nats-io/nats-server/pull/4040
|
31a2269710b1bbd3b7e5a1b2da4a1e328dd831aa
|
2a03d9dff766293e499f1893f7783615cc260641
| 2023-04-11T14:25:07Z |
go
| 2023-04-12T11:27:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,014 |
["server/client.go"]
|
Disconnection of a NATS server in a cluster is not detected by ping-pong mechanism
|
**Defect**
This is more or less a follow-on to #3682.
We are using three NATS servers in a cluster, and we see a bug which prevents the loss of a particular server from being detected by the other two servers. This matters because we use queue group subscriptions: when a server goes offline, we need the queue group subscriptions for services on the dead server to be removed. Otherwise, NATS can end up selecting entities on the dead server as the nominated responder for pub messages to the queue group.
In our system, there is continual traffic exchanged between all three servers during normal operation. In short, this continuous traffic results in all outbound Pings from one server to another being held off/delayed indefinitely. And because the Pings are held off, the servers don't detect a stale connection, and hence don't close the connection and attempt to reconnect.
I believe I understand the issue.
In the NATS server's `client.go::processPingTimer()` there is a check to test whether to delay sending an outgoing ping, in two cases:
- we recently (within specified pingInterval) received traffic (a message or sub/unsub) from the remote end
- we recently received a ping from the remote end (Remote ping)
This makes perfect sense: incoming receive messages mean we still have a link and don't need to send a ping.
However, the first test above is derived from the `client.last` field ("last packet" time), which is set for both incoming and outgoing traffic on this link.
I think the solution is to store a separate `client.ping.lastRxMsg` field which is updated for receive traffic only, and then use that field in the test of whether to hold off sending a ping; a sketch follows below.
Also, with this change, I think the same logic can be applied to all connection types (CLIENT, ROUTER, GATEWAY, LEAF), so we can remove the special-case tests that always send a ping for a GATEWAY.
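A minimal sketch of that proposal (field and method names here are illustrative, not the actual nats-server code): track receive-side activity separately, and hold off the outgoing ping only when something was recently received.
```go
package main

import (
	"fmt"
	"time"
)

// pingState tracks inbound activity only; outbound writes must not bump
// these timestamps, otherwise continuous traffic suppresses pings forever.
type pingState struct {
	lastRxMsg  time.Time // last inbound msg/sub/unsub
	lastRxPing time.Time // last inbound PING
}

// shouldDelayPing reports whether the outgoing PING can be held off because
// the remote end has demonstrably been alive within the ping interval.
func (p *pingState) shouldDelayPing(now time.Time, interval time.Duration) bool {
	return now.Sub(p.lastRxMsg) < interval || now.Sub(p.lastRxPing) < interval
}

func main() {
	p := &pingState{lastRxMsg: time.Now().Add(-3 * time.Minute)}
	// Nothing received for 3 minutes with a 2 minute interval: send the PING.
	fmt.Println(p.shouldDelayPing(time.Now(), 2*time.Minute)) // false
}
```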
**OS/Container environment**
Linux ARM64 - but not a factor
**Steps or code to reproduce the issue**
Configure 3 server cluster, and then break the connection to one of the servers
**Expected result**
The remaining servers should automatically detect loss of connection, with the detection time as configured by ping_interval and ping_max parameters.
**Actual result**
The remaining servers do eventually detect the loss of connection via a WriteBuffer deadline being exceeded, but it takes a very long time, presumably depending on the volume of traffic and the size of the buffer.
|
https://github.com/nats-io/nats-server/issues/4014
|
https://github.com/nats-io/nats-server/pull/4016
|
d14968cb4face7aba66b225a172b2bc6f6784ffb
|
81541361dcc21b2ab9915120a41cd6a0e225b4d6
| 2023-04-03T10:51:04Z |
go
| 2023-04-06T03:16:22Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 4,000 |
["server/stream.go"]
|
JetStream RePublish should work with SOURCES and MIRROR message ingest paths
|
A stream may ingest messages via subscriptions on one or more defined SUBJECT filters, but it may also ingest messages replicated from an upstream via a SOURCES or MIRROR configuration.
In the current release (2.9.x), if RePublish is configured on a stream, it acts only on messages ingested via the SUBJECT filter(s). It would be desirable and useful for RePublish to act on any message ingested into the stream.
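For illustration, a sketch with the nats.go client of the kind of stream this is about (stream and subject names are hypothetical): the stream ingests only via SOURCES, so under current 2.9.x behavior its RePublish block never fires.
```go
package main

import "github.com/nats-io/nats.go"

func main() {
	nc, err := nats.Connect("localhost:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}

	// "downstream" has no subject filters of its own; every message arrives
	// replicated from "upstream". The request is for RePublish to cover this
	// ingest path (and MIRROR) as well, not just subject-filter ingest.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:    "downstream",
		Sources: []*nats.StreamSource{{Name: "upstream"}},
		RePublish: &nats.RePublish{
			Source:      ">",
			Destination: "repub.>",
		},
	}); err != nil {
		panic(err)
	}
}
```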
|
https://github.com/nats-io/nats-server/issues/4000
|
https://github.com/nats-io/nats-server/pull/4010
|
87d7263026af42da03a376cff9be1cd4e1b2f6a1
|
ce115ab1e6fc008a94d020df191643fccfbeab11
| 2023-03-29T19:06:44Z |
go
| 2023-04-05T01:38:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,992 |
[".goreleaser.yml"]
|
critical CVEs shown in container vulnerability scan with nats:2.9 image
|
Hi there,
We've seen some weird output when running a vulnerability scan on the nats:2.9 container image. We do not see such behavior with nats:2.8 and earlier. Does anyone have an idea what could have caused this? Thanks!
When run with Twistlock, it reports the following:
```
root@ubuntuguest:~# docker pull nats:2.9 && ./tw.sh nats:2.9
2.9: Pulling from library/nats
Digest: sha256:58e483681983f87fdc738e980d582d4cc7218914c4f0056d36bab0c5acfc5a8b
Status: Downloaded newer image for nats:2.9
docker.io/library/nats:2.9
Scan results for: image nats:2.9 sha256:74665a2673c052e85d3417cec864d1aabab3263f3328051037a133d9b85765ce
Vulnerabilities
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| CVE | SEVERITY | CVSS | PACKAGE | VERSION | STATUS | PUBLISHED | DISCOVERED | DESCRIPTION |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| GHSA-j756-f273-xhp4 | critical | 9.00 | github.com/nats-io/nats-server/v2 | (devel) | fixed in 2.2.0 | > 1 years | < 1 hour | (This advisory is canonically |
| | | | | | > 1 years ago | | | <https://advisories.nats.io/CVE/CVE-2021-3127.txt>) |
| | | | | | | | | ## Problem Description The NATS server provides |
| | | | | | | | | for Subjects which... |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| CVE-2020-26892 | critical | 9.00 | github.com/nats-io/nats-server/v2 | (devel) | fixed in 2.1.9 | > 2 years | < 1 hour | The JWT library in NATS nats-server before 2.1.9 |
| | | | | | > 2 years ago | | | has Incorrect Access Control because of how |
| | | | | | | | | expired credentials are handled. |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| GHSA-2c64-vj8g-vwrq | high | 7.00 | github.com/nats-io/nats-server/v2 | (devel) | fixed in 2.1.9 | > 1 years | < 1 hour | (This advisory is canonically |
| | | | | | > 1 years ago | | | https://advisories.nats.io/CVE/CVE-2020-26892.txt |
| | | | | | | | | ) ## Problem Description NATS nats-server |
| | | | | | | | | through 2020-10-07 has Inc... |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| CVE-2021-3127 | high | 7.00 | github.com/nats-io/nats-server/v2/server | (devel) | fixed in 2.2.0 | > 2 years | < 1 hour | NATS Server 2.x before 2.2.0 and JWT library |
| | | | | | > 2 years ago | | | before 2.0.1 have Incorrect Access Control because |
| | | | | | | | | Import Token bindings are mishandled. |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| CVE-2020-28466 | high | 7.00 | github.com/nats-io/nats-server/v2/server | (devel) | fixed in 2.2.0 | > 2 years | < 1 hour | This affects all versions of package |
| | | | | | > 2 years ago | | | github.com/nats-io/nats-server/server. Untrusted |
| | | | | | | | | accounts are able to crash the server using |
| | | | | | | | | configs that represe... |
+---------------------+----------+------+------------------------------------------+---------+-------------------------+-----------+------------+------------------------------------------------------+
| CVE-2020-26521 | high | 7.00 | github.com/nats-io/nats-server/v2 | (devel) | fixed in 2.1.9 | > 2 years | < 1 hour | The JWT library in NATS nats-server before 2.1.9 |
| | | | | | > 2 years ago | | | allows a denial of service (a nil dereference in |
| | | | | | | | | Go code). |
```
When run with trivy, it reports version errors and then doesn't give any output:
```
root@ubuntuguest:~/code/nats-server# trivy image nats:2.9
2023-03-24T17:47:58.829Z INFO Vulnerability scanning is enabled
2023-03-24T17:47:58.829Z INFO Secret scanning is enabled
2023-03-24T17:47:58.829Z INFO If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2023-03-24T17:47:58.829Z INFO Please see also https://aquasecurity.github.io/trivy/v0.37/docs/secret/scanning/#recommendation for faster secret detection
2023-03-24T17:47:58.843Z INFO Number of language-specific files: 1
2023-03-24T17:47:58.843Z INFO Detecting gobinary vulnerabilities...
2023-03-24T17:47:58.843Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.843Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.843Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.843Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.843Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
2023-03-24T17:47:58.844Z WARN version error ((devel)): malformed version: (devel)
```
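The `(devel)` shown in the version columns (and rejected by trivy) is the module version Go embeds into binaries that are not built as a released module version; scanners read that embedded metadata. A small sketch of where the string comes from:
```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Go embeds build metadata into every module-aware binary, and scanners
	// read the same data. A binary built outside a tagged module version
	// (e.g. from a copied source tree) reports its main module as "(devel)".
	if info, ok := debug.ReadBuildInfo(); ok {
		fmt.Println(info.Main.Path, info.Main.Version)
	}
}
```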
|
https://github.com/nats-io/nats-server/issues/3992
|
https://github.com/nats-io/nats-server/pull/3993
|
57daedafa849a371ee89362edd4fe78157340507
|
9cc66c0f32e733deb93de4e48ec41ceaf79acb33
| 2023-03-24T17:57:04Z |
go
| 2023-03-28T16:00:59Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,959 |
["server/ocsp.go", "test/ocsp_test.go"]
|
When OCSP is enabled, if leaf mTLS configured, mTLS handshake succeeds but leaf connection fails
|
## When OCSP is enabled and leaf mTLS is configured, the mTLS handshake succeeds but the leaf connection fails
In servers up to 2.9.15 where OCSP is enabled, an operator is able to configure a leaf listener with mTLS enabled (client auth as well as server auth), but the resulting leaf connection fails after a successful mTLS handshake.
#### Steps or code to reproduce the issue:
1. Configure a NATS server with the OCSP staple feature enabled
2. Add a leaf configuration with a TLS block with `verify: true` (a sketch of such a configuration follows these steps)
3. Attempt NATS leaf remote connection to the NATS Server with mTLS (client sending valid cert)
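A sketch of the leaf listener configuration from step 2 (the port and file paths are placeholders):
```
leafnodes {
  port: 7422
  tls {
    cert_file: "./server-cert.pem"
    key_file:  "./server-key.pem"
    ca_file:   "./ca.pem"
    # Require and verify a client certificate (mTLS) from leaf remotes.
    verify: true
  }
}
```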
#### Expected result:
mTLS handshake succeeds and Leaf connection established
#### Actual result:
mTLS handshake succeeds, but Leaf connection is rejected by the OCSP-enabled "hub". Debug entry observed in hub logs:
`[1] 2023/03/13 20:40:24.564561 [ERR] 10.xx.xx.xx:42480 - lid:11392 - TLS leafnode handshake error: Leafnode client missing OCSP Staple`
If OCSP is disabled at hub, mTLS handshake and Leaf connection succeed.
|
https://github.com/nats-io/nats-server/issues/3959
|
https://github.com/nats-io/nats-server/pull/3964
|
84de2a3b72516024085c73ea66f0d0cbc06a96b1
|
c1373d6666590b0c8381b57a44ddcc73f22223b2
| 2023-03-14T00:09:25Z |
go
| 2023-03-15T00:30:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,953 |
["server/consumer.go", "server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
Work Queue stream does not remove consumed messages when scaled from R1 -> R3
|
nats-server 2.9.15
3 node cluster
## Steps to reproduce: overview
1. Create R1 Work Queue stream
2. Create durable push consumer
3. Subscribe to consumer
4. Edit stream to R3
5. Publish to stream
6. Consumed message is not removed from stream <----- NOT EXPECTED
#### "Fix"
7. Request consumer cluster step down
8. Publish to stream
9. Consumed message is removed from stream <----- EXPECTED
#### "Unfix"
10. Edit stream to R1
11. Edit stream to R3
12. Publish to stream
13. Consumed message is not removed from stream <----- NOT EXPECTED
## Steps to reproduce: detailed
1. Create R1 Work Queue stream
`nats stream add --config=stream.conf`
where `stream.conf` is
```
{
"name": "stream1",
"subjects": [
"subject1"
],
"retention": "workqueue",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "file",
"discard": "new",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
}
```
2. Create durable push consumer
`nats consumer add stream1 --config=consumer.conf`
where `consumer.conf` is
```
{
"ack_policy": "explicit",
"deliver_policy": "all",
"deliver_subject": "deliver1",
"deliver_group": "group1",
"durable_name": "consumer1",
"max_deliver": -1,
"replay_policy": "instant",
"num_replicas": 0
}
```
3. Subscribe to consumer
`nats consumer sub stream1 consumer1`
4. Edit stream to R3
`nats stream edit stream1 --replicas=3`
5. Publish to stream
`nats publish subject1 "test"`
6. Consumed message is not removed from stream <----- NOT EXPECTED
`nats stream state stream1`
...
Messages: 1
...
7. Request consumer cluster step down
`nats consumer cluster step-down stream1 consumer1`
8. Publish to stream
`nats publish subject1 "test"`
9. Consumed message is removed from stream <----- EXPECTED
`nats stream state stream1`
...
Messages: 0
...
10. Edit stream to R1
`nats stream edit stream1 --replicas=1`
11. Edit stream to R3
`nats stream edit stream1 --replicas=3`
12. Publish to stream
`nats publish subject1 "test"`
13. Consumed message is not removed from stream <----- NOT EXPECTED
`nats stream state stream1`
...
Messages: 1
...
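For reference, the stream-side steps (1, 4 and 5) can also be driven with the nats.go client; a minimal sketch, with the consumer and subscription (steps 2-3) still created via the CLI commands above:
```go
package main

import "github.com/nats-io/nats.go"

func main() {
	nc, err := nats.Connect("localhost:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}

	cfg := &nats.StreamConfig{ // step 1: R1 work queue stream
		Name:      "stream1",
		Subjects:  []string{"subject1"},
		Retention: nats.WorkQueuePolicy,
		Storage:   nats.FileStorage,
		Discard:   nats.DiscardNew,
		Replicas:  1,
	}
	if _, err := js.AddStream(cfg); err != nil {
		panic(err)
	}

	cfg.Replicas = 3 // step 4: edit stream to R3
	if _, err := js.UpdateStream(cfg); err != nil {
		panic(err)
	}

	// step 5: publish; with an active consumer the message should be
	// removed once acked, but per this report it remains in the stream.
	if _, err := js.Publish("subject1", []byte("test")); err != nil {
		panic(err)
	}
}
```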
|
https://github.com/nats-io/nats-server/issues/3953
|
https://github.com/nats-io/nats-server/pull/3960
|
07bc964d5145e6f385bdb7092a00f80955074ddf
|
3ecf55bcf39de4664b738e0175c8094b5258ee4e
| 2023-03-10T13:36:24Z |
go
| 2023-03-14T13:52:11Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,924 |
["server/mqtt.go", "server/mqtt_test.go"]
|
MQTT: leading wildcards shouldn't match $ topics
|
## Defect
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
The MQTT 3.1.1 specification says that
> The Server MUST NOT match Topic Filters starting with a wildcard character (# or +) with Topic Names beginning with a $ character [MQTT-4.7.2-1].
However, when I subscribe to `#`, I get messages for `$JS` and `$MQTT`
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.9.14, using mosquitto_{pub,sub} to test MQTT functionality
#### OS/Container environment:
#### Steps or code to reproduce the issue:
<details>
<summary>configuration file</summary>
<pre><code>
{
  "authorization": {
    "users": [
      {
        "user": "mqtt"
      }
    ]
  },
  "jetstream": {
    "store_dir": "/var/lib/nats"
  },
  "mqtt": {
    "no_auth_user": "mqtt",
    "port": 1883
  },
  "port": 4222,
  "server_name": "sateda-nats"
}
</code></pre></details>
- Run NATS with MQTT enabled
- subscribe to the wildcard `#` with an MQTT client
- publish to some topic with an MQTT client
```
shell1 $ mosquitto_sub -h sateda.home -t \# -v
shell2 $ mosquitto_pub -h sateda.home -t dom/test -m foobar
```
#### Expected result:
```
dom/test foobar
```
#### Actual result:
```
$JS/API/STREAM/MSG/GET/$MQTT_sess {"last_by_subj":"$MQTT.sess.KUn6Syw3"}
$MQTT/JSA/lo8Xni4O/ML/pDM6qkBkttZLBMC7UDVGXF {"type":"io.nats.jetstream.api.v1.stream_msg_get_response","error":{"code":404,"err_code":10037,"description":"no message found"}}
$JS/EVENT/ADVISORY/API {"type":"io.nats.jetstream.advisory.v1.api_audit","id":"AwzjXbTgkO7m9lyjA5noEU","timestamp":"2023-02-28T23:53:36.161665974Z","server":"sateda-nats","client":{"acc":"$G","server":"sateda-nats"},"subject":"$JS.API.STREAM.MSG.GET.$MQTT_sess","request":"{\"last_by_subj\":\"$MQTT.sess.KUn6Syw3\"}","response":"{\"type\":\"io.nats.jetstream.api.v1.stream_msg_get_response\",\"error\":{\"code\":404,\"err_code\":10037,\"description\":\"no message found\"}}"}
$MQTT/sess/KUn6Syw3 {"origin":"lo8Xni4O","id":"AwzjXbTgkO7m9lyjA5no9D","clean":true}
dom/test foobar
$JS/API/STREAM/MSG/DELETE/$MQTT_sess {"seq":59,"no_erase":true}
$MQTT/JSA/lo8Xni4O/MD/pDM6qkBkttZLBMC7UDVGdn {"type":"io.nats.jetstream.api.v1.stream_msg_delete_response","success":true}
$JS/EVENT/ADVISORY/API {"type":"io.nats.jetstream.advisory.v1.api_audit","id":"AwzjXbTgkO7m9lyjA5noJl","timestamp":"2023-02-28T23:53:36.16346853Z","server":"sateda-nats","client":{"acc":"$G","server":"sateda-nats"},"subject":"$JS.API.STREAM.MSG.DELETE.$MQTT_sess","request":"{\"seq\":59,\"no_erase\":true}","response":"{\"type\":\"io.nats.jetstream.api.v1.stream_msg_delete_response\",\"success\":true}"}
```
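For clarity, a standalone sketch of the spec rule (illustrative topic matching only, not the server's implementation): filters whose first character is a wildcard must never match `$`-prefixed topic names.
```go
package main

import (
	"fmt"
	"strings"
)

// matches applies MQTT 3.1.1 topic-filter matching for a well-formed filter,
// including [MQTT-4.7.2-1]: a filter starting with '#' or '+' must not match
// topic names starting with '$' (such as $JS/... and $MQTT/...).
func matches(filter, topic string) bool {
	if (strings.HasPrefix(filter, "#") || strings.HasPrefix(filter, "+")) &&
		strings.HasPrefix(topic, "$") {
		return false
	}
	f := strings.Split(filter, "/")
	t := strings.Split(topic, "/")
	for i, level := range f {
		if level == "#" {
			return true // '#' is always last and matches the remainder
		}
		if i >= len(t) {
			return false
		}
		if level != "+" && level != t[i] {
			return false
		}
	}
	return len(f) == len(t)
}

func main() {
	fmt.Println(matches("#", "dom/test"))                 // true
	fmt.Println(matches("#", "$JS/API/STREAM/MSG/GET/x")) // false per the spec
}
```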
<details>
<summary>nats-server -DV output</summary>
<pre><code>
[2459215] 2023/02/28 23:57:23.092987 [INF] Starting nats-server
[2459215] 2023/02/28 23:57:23.093072 [INF] Version: 2.9.14
[2459215] 2023/02/28 23:57:23.093084 [INF] Git: [not set]
[2459215] 2023/02/28 23:57:23.093094 [DBG] Go build: go1.19.5
[2459215] 2023/02/28 23:57:23.093104 [INF] Name: sateda-nats
[2459215] 2023/02/28 23:57:23.093122 [INF] Node: lo8Xni4O
[2459215] 2023/02/28 23:57:23.093136 [INF] ID: NBFTNOA3AE4RX5C445IPUDA2XZZTXXKN2ZCBAHLZYKXT2FFD7XMZRXEG
[2459215] 2023/02/28 23:57:23.093160 [WRN] Plaintext passwords detected, use nkeys or bcrypt
[2459215] 2023/02/28 23:57:23.093186 [INF] Using configuration file: /nix/store/6ycmlzwqwjs9k6lrd7kimcf6vfg8kwzn-nats.conf
[2459215] 2023/02/28 23:57:23.093269 [DBG] Created system account: "$SYS"
[2459215] 2023/02/28 23:57:23.094252 [INF] Starting JetStream
[2459215] 2023/02/28 23:57:23.094365 [DBG] JetStream creating dynamic configuration - 94.40 GB memory, 43.60 GB disk
[2459215] 2023/02/28 23:57:23.094835 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[2459215] 2023/02/28 23:57:23.094851 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[2459215] 2023/02/28 23:57:23.094858 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[2459215] 2023/02/28 23:57:23.094865 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[2459215] 2023/02/28 23:57:23.094872 [INF]
[2459215] 2023/02/28 23:57:23.094879 [INF] https://docs.nats.io/jetstream
[2459215] 2023/02/28 23:57:23.094885 [INF]
[2459215] 2023/02/28 23:57:23.094893 [INF] ---------------- JETSTREAM ----------------
[2459215] 2023/02/28 23:57:23.094908 [INF] Max Memory: 94.40 GB
[2459215] 2023/02/28 23:57:23.094920 [INF] Max Storage: 43.60 GB
[2459215] 2023/02/28 23:57:23.094931 [INF] Store Directory: "/var/lib/nats/jetstream"
[2459215] 2023/02/28 23:57:23.094940 [INF] -------------------------------------------
[2459215] 2023/02/28 23:57:23.095130 [DBG] Exports:
[2459215] 2023/02/28 23:57:23.095145 [DBG] $JS.API.>
[2459215] 2023/02/28 23:57:23.095438 [DBG] Enabled JetStream for account "$G"
[2459215] 2023/02/28 23:57:23.095463 [DBG] Max Memory: -1 B
[2459215] 2023/02/28 23:57:23.095472 [DBG] Max Storage: -1 B
[2459215] 2023/02/28 23:57:23.095517 [DBG] Recovering JetStream state for account "$G"
[2459215] 2023/02/28 23:57:23.096306 [INF] Starting restore for stream '$G > $MQTT_msgs'
[2459215] 2023/02/28 23:57:23.097633 [INF] Restored 0 messages for stream '$G > $MQTT_msgs'
[2459215] 2023/02/28 23:57:23.097958 [INF] Starting restore for stream '$G > $MQTT_rmsgs'
[2459215] 2023/02/28 23:57:23.099166 [INF] Restored 10 messages for stream '$G > $MQTT_rmsgs'
[2459215] 2023/02/28 23:57:23.099388 [INF] Starting restore for stream '$G > $MQTT_sess'
[2459215] 2023/02/28 23:57:23.100457 [INF] Restored 4 messages for stream '$G > $MQTT_sess'
[2459215] 2023/02/28 23:57:23.100547 [INF] Recovering 1 consumers for stream - '$G > $MQTT_msgs'
[2459215] 2023/02/28 23:57:23.101283 [TRC] JETSTREAM - <<- [SUB $JSC.CI.$G.$MQTT_msgs.WOIJts90_3pjLcRDJWc3qRxiFggtPhQ 45]
[2459215] 2023/02/28 23:57:23.101887 [INF] Recovering 1 consumers for stream - '$G > $MQTT_rmsgs'
[2459215] 2023/02/28 23:57:23.102389 [TRC] JETSTREAM - <<- [SUB $JSC.CI.$G.$MQTT_rmsgs.$MQTT_rmsgs_lo8Xni4O 46]
[2459215] 2023/02/28 23:57:23.102748 [ERR] consumer '$G > $MQTT_msgs > ' MUST match replication (1 vs 0) of stream with interest policy
[2459215] 2023/02/28 23:57:23.102768 [DBG] JetStream state for account "$G" recovered
[2459215] 2023/02/28 23:57:23.103387 [INF] Listening for MQTT clients on mqtt://0.0.0.0:1883
[2459215] 2023/02/28 23:57:23.103477 [INF] Listening for client connections on 0.0.0.0:4222
[2459215] 2023/02/28 23:57:23.103496 [DBG] Get non local IPs for "0.0.0.0"
[2459215] 2023/02/28 23:57:23.104625 [DBG] ip=192.168.1.148
[2459215] 2023/02/28 23:57:23.104730 [DBG] ip=192.168.2.1
[2459215] 2023/02/28 23:57:23.104762 [INF] Server is ready
[2459215] 2023/02/28 23:57:23.105375 [DBG] maxprocs: Leaving GOMAXPROCS=32: CPU quota undefined
[2459215] 2023/02/28 23:57:23.341653 [DBG] 192.168.1.183:54456 - mid:17 - Client connection created
[2459215] 2023/02/28 23:57:23.342041 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - <<- [CONNECT clientID=Yz3NSzVD9dKZIFVd7SlOlH keepAlive=1m30s]
[2459215] 2023/02/28 23:57:23.342185 [INF] Creating MQTT streams/consumers with replicas 1 for account "$G"
[2459215] 2023/02/28 23:57:23.344383 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1067]
[2459215] 2023/02/28 23:57:23.344484 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlOyR\",\"timestamp\":\"2023-02-28T23:57:23.343978193Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.INFO.$MQTT_sess\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"total\\\":0,\\\"offset\\\":0,\\\"limit\\\":0,\\\"config\\\":{\\\"name\\\":\\\"$MQTT_sess\\\",\\\"subjects\\\":[\\\"$MQTT.sess.\\\\u003e\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"allow_direct\\\":false,\\\"mirror_direct\\\":false,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2023-02-28T22:47:15.120279295Z\\\",\\\"state\\\":{\\\"messages\\\":4,\\\"bytes\\\":1608,\\\"first_seq\\\":20,\\\"first_ts\\\":\\\"2023-02-28T22:48:05.166904324Z\\\",\\\"last_seq\\\":95,\\\"last_ts\\\":\\\"2023-02-28T23:57:16.965285234Z\\\",\\\"num_subjects\\\":4,\\\"num_deleted\\\":72,\\\"consumer_count\\\":0}}\"}"]
[2459215] 2023/02/28 23:57:23.345538 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1007]
[2459215] 2023/02/28 23:57:23.345630 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlP2p\",\"timestamp\":\"2023-02-28T23:57:23.344717572Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.INFO.$MQTT_msgs\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"total\\\":0,\\\"offset\\\":0,\\\"limit\\\":0,\\\"config\\\":{\\\"name\\\":\\\"$MQTT_msgs\\\",\\\"subjects\\\":[\\\"$MQTT.msgs.\\\\u003e\\\"],\\\"retention\\\":\\\"interest\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"allow_direct\\\":false,\\\"mirror_direct\\\":false,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2023-02-28T22:47:15.123275463Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"1970-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":1}}\"}"]
[2459215] 2023/02/28 23:57:23.346044 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1046]
[2459215] 2023/02/28 23:57:23.346147 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlP7D\",\"timestamp\":\"2023-02-28T23:57:23.345864919Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.INFO.$MQTT_rmsgs\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"total\\\":0,\\\"offset\\\":0,\\\"limit\\\":0,\\\"config\\\":{\\\"name\\\":\\\"$MQTT_rmsgs\\\",\\\"subjects\\\":[\\\"$MQTT.rmsgs\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"allow_direct\\\":false,\\\"mirror_direct\\\":false,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2023-02-28T22:47:15.12527423Z\\\",\\\"state\\\":{\\\"messages\\\":10,\\\"bytes\\\":3680,\\\"first_seq\\\":948,\\\"first_ts\\\":\\\"2023-02-28T23:57:16.039944921Z\\\",\\\"last_seq\\\":957,\\\"last_ts\\\":\\\"2023-02-28T23:57:16.048564687Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":1}}\"}"]
[2459215] 2023/02/28 23:57:23.346272 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:23.346362 [TRC] JETSTREAM - <-> [DELSUB 2]
[2459215] 2023/02/28 23:57:23.346392 [TRC] JETSTREAM - <-> [DELSUB 46]
[2459215] 2023/02/28 23:57:23.346467 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:23.346491 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:23.347288 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 357]
[2459215] 2023/02/28 23:57:23.347369 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlPFz\",\"timestamp\":\"2023-02-28T23:57:23.347163853Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.CONSUMER.DELETE.$MQTT_rmsgs.$MQTT_rmsgs_lo8Xni4O\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_delete_response\\\",\\\"success\\\":true}\"}"]
[2459215] 2023/02/28 23:57:23.348800 [TRC] JETSTREAM - <<- [SUB $JSC.CI.$G.$MQTT_rmsgs.$MQTT_rmsgs_lo8Xni4O 47]
[2459215] 2023/02/28 23:57:23.349433 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1227]
[2459215] 2023/02/28 23:57:23.349527 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlPOl\",\"timestamp\":\"2023-02-28T23:57:23.349209029Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.CONSUMER.DURABLE.CREATE.$MQTT_rmsgs.$MQTT_rmsgs_lo8Xni4O\",\"request\":\"{\\\"stream_name\\\":\\\"$MQTT_rmsgs\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"$MQTT_rmsgs_lo8Xni4O\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"none\\\",\\\"filter_subject\\\":\\\"$MQTT.rmsgs\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"deliver_subject\\\":\\\"$MQTT.sub.Yz3NSzVD9dKZIFVd7SlOpf\\\",\\\"num_replicas\\\":0}}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_create_response\\\",\\\"stream_name\\\":\\\"$MQTT_rmsgs\\\",\\\"name\\\":\\\"$MQTT_rmsgs_lo8Xni4O\\\",\\\"created\\\":\\\"2023-02-28T23:57:23.347930888Z\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"$MQTT_rmsgs_lo8Xni4O\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"none\\\",\\\"max_deliver\\\":-1,\\\"filter_subject\\\":\\\"$MQTT.rmsgs\\\",\\\"replay_policy\\\":\\\"instant\\\",\\\"deliver_subject\\\":\\\"$MQTT.sub.Yz3NSzVD9dKZIFVd7SlOpf\\\",\\\"num_replicas\\\":0},\\\"delivered\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":947},\\\"ack_floor\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"num_ack_pending\\\":0,\\\"num_redelivered\\\":0,\\\"num_waiting\\\":0,\\\"num_pending\\\":10,\\\"push_bound\\\":true}\"}"]
[2459215] 2023/02/28 23:57:23.350662 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 450]
[2459215] 2023/02/28 23:57:23.350723 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlPT9\",\"timestamp\":\"2023-02-28T23:57:23.3505513Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.MSG.GET.$MQTT_sess\",\"request\":\"{\\\"last_by_subj\\\":\\\"$MQTT.sess.Y0hZqM8U\\\"}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_msg_get_response\\\",\\\"error\\\":{\\\"code\\\":404,\\\"err_code\\\":10037,\\\"description\\\":\\\"no message found\\\"}}\"}"]
[2459215] 2023/02/28 23:57:23.351434 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [CONNACK sp=false rc=0]
[2459215] 2023/02/28 23:57:23.351471 [DBG] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - Client connected
[2459215] 2023/02/28 23:57:23.351803 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - <<- [SUBSCRIBE [# (>) QoS=0] pi=1]
[2459215] 2023/02/28 23:57:23.351920 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/temperature dup=false QoS=0 retain=true size=5]
[2459215] 2023/02/28 23:57:23.351947 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/temperature_f dup=false QoS=0 retain=true size=5]
[2459215] 2023/02/28 23:57:23.351968 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/overtemperature dup=false QoS=0 retain=true size=1]
[2459215] 2023/02/28 23:57:23.351988 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/online dup=false QoS=0 retain=true size=4]
[2459215] 2023/02/28 23:57:23.352008 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/announce dup=false QoS=0 retain=true size=152]
[2459215] 2023/02/28 23:57:23.352029 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/info dup=false QoS=0 retain=true size=875]
[2459215] 2023/02/28 23:57:23.352083 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/relay/0 dup=false QoS=0 retain=true size=2]
[2459215] 2023/02/28 23:57:23.354986 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/relay/0/power dup=false QoS=0 retain=true size=5]
[2459215] 2023/02/28 23:57:23.355109 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/shellyplug-s-7C87CEB4A100/relay/0/energy dup=false QoS=0 retain=true size=5]
[2459215] 2023/02/28 23:57:23.356463 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH shellies/announce dup=false QoS=0 retain=true size=152]
[2459215] 2023/02/28 23:57:23.356737 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $MQTT/sess/Y0hZqM8U dup=false QoS=0 retain=false size=84]
[2459215] 2023/02/28 23:57:23.357150 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [SUBACK pi=1]
[2459215] 2023/02/28 23:57:23.607119 [DBG] 192.168.1.183:54468 - mid:22 - Client connection created
[2459215] 2023/02/28 23:57:23.607297 [TRC] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - <<- [CONNECT clientID=Yz3NSzVD9dKZIFVd7SlPXX keepAlive=1m30s]
[2459215] 2023/02/28 23:57:23.607744 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $JS/API/STREAM/MSG/GET/$MQTT_sess dup=false QoS=0 retain=false size=38]
[2459215] 2023/02/28 23:57:23.607913 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $MQTT/JSA/lo8Xni4O/ML/yZYm5DA3HArTOvb4OVBuKd dup=false QoS=0 retain=false size=130]
[2459215] 2023/02/28 23:57:23.608020 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 452]
[2459215] 2023/02/28 23:57:23.608082 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlPbv\",\"timestamp\":\"2023-02-28T23:57:23.607653721Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.MSG.GET.$MQTT_sess\",\"request\":\"{\\\"last_by_subj\\\":\\\"$MQTT.sess.nk9uejrm\\\"}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_msg_get_response\\\",\\\"error\\\":{\\\"code\\\":404,\\\"err_code\\\":10037,\\\"description\\\":\\\"no message found\\\"}}\"}"]
[2459215] 2023/02/28 23:57:23.608134 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $JS/EVENT/ADVISORY/API dup=false QoS=0 retain=false size=452]
[2459215] 2023/02/28 23:57:23.608210 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $MQTT/sess/nk9uejrm dup=false QoS=0 retain=false size=64]
[2459215] 2023/02/28 23:57:23.608507 [TRC] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - ->> [CONNACK sp=false rc=0]
[2459215] 2023/02/28 23:57:23.608538 [DBG] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - Client connected
[2459215] 2023/02/28 23:57:23.608913 [TRC] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - <<- [PUBLISH dom/test dup=false QoS=0 retain=false size=6]
[2459215] 2023/02/28 23:57:23.608945 [TRC] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - <<- MSG_PAYLOAD: ["foobar"]
[2459215] 2023/02/28 23:57:23.609067 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH dom/test dup=false QoS=0 retain=false size=6]
[2459215] 2023/02/28 23:57:23.609158 [TRC] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - <<- [DISCONNECT]
[2459215] 2023/02/28 23:57:23.609289 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $JS/API/STREAM/MSG/DELETE/$MQTT_sess dup=false QoS=0 retain=false size=26]
[2459215] 2023/02/28 23:57:23.609739 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $MQTT/JSA/lo8Xni4O/MD/yZYm5DA3HArTOvb4OVBuSR dup=false QoS=0 retain=false size=77]
[2459215] 2023/02/28 23:57:23.609849 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 381]
[2459215] 2023/02/28 23:57:23.609884 [DBG] 192.168.1.183:54468 - mid:22 - "Yz3NSzVD9dKZIFVd7SlPXX" - Client connection closed: Client Closed
[2459215] 2023/02/28 23:57:23.609905 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"Yz3NSzVD9dKZIFVd7SlPgJ\",\"timestamp\":\"2023-02-28T23:57:23.60953733Z\",\"server\":\"sateda-nats\",\"client\":{\"acc\":\"$G\",\"server\":\"sateda-nats\"},\"subject\":\"$JS.API.STREAM.MSG.DELETE.$MQTT_sess\",\"request\":\"{\\\"seq\\\":98,\\\"no_erase\\\":true}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_msg_delete_response\\\",\\\"success\\\":true}\"}"]
[2459215] 2023/02/28 23:57:23.610048 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - ->> [PUBLISH $JS/EVENT/ADVISORY/API dup=false QoS=0 retain=false size=381]
^C[2459215] 2023/02/28 23:57:24.232569 [DBG] Trapped "interrupt" signal
[2459215] 2023/02/28 23:57:24.232733 [INF] Initiating Shutdown...
[2459215] 2023/02/28 23:57:24.232781 [INF] Initiating JetStream Shutdown...
[2459215] 2023/02/28 23:57:24.232853 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:24.232892 [TRC] JETSTREAM - <-> [DELSUB 2]
[2459215] 2023/02/28 23:57:24.232921 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.232939 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.232960 [TRC] JETSTREAM - <-> [DELSUB 45]
[2459215] 2023/02/28 23:57:24.232997 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:24.233022 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233037 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233061 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:24.233085 [TRC] JETSTREAM - <-> [DELSUB 2]
[2459215] 2023/02/28 23:57:24.233105 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233118 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233137 [TRC] JETSTREAM - <-> [DELSUB 47]
[2459215] 2023/02/28 23:57:24.233156 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:24.233173 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233196 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233174 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233200 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233572 [TRC] JETSTREAM - <-> [DELSUB 1]
[2459215] 2023/02/28 23:57:24.233617 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233632 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233665 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.233895 [INF] JetStream Shutdown
[2459215] 2023/02/28 23:57:24.234028 [DBG] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - Client connection closed: Server Shutdown
[2459215] 2023/02/28 23:57:24.234041 [DBG] MQTT accept loop exiting..
[2459215] 2023/02/28 23:57:24.234033 [DBG] Client accept loop exiting..
[2459215] 2023/02/28 23:57:24.234063 [TRC] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - <-> [DELSUB >]
[2459215] 2023/02/28 23:57:24.234107 [DBG] SYSTEM - System connection closed: Client Closed
[2459215] 2023/02/28 23:57:24.234345 [ERR] 192.168.1.183:54456 - mid:17 - "Yz3NSzVD9dKZIFVd7SlOlH" - unable to delete session "Yz3NSzVD9dKZIFVd7SlOlH" record at sequence 97
[2459215] 2023/02/28 23:57:24.234378 [INF] Server Exiting..
</code></pre></details>
|
https://github.com/nats-io/nats-server/issues/3924
|
https://github.com/nats-io/nats-server/pull/3926
|
4a7d73b8f7c030f7300398038ef82ef11a224709
|
8f7a88103b5a8deb75def7764885daab2628bf57
| 2023-03-01T00:00:07Z |
go
| 2023-03-01T05:58:24Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,921 |
["server/leafnode.go", "server/monitor.go", "server/monitor_test.go"]
|
Add Server Name to Leafz Monitoring Endpoint
|
## Feature Request
In the monitoring endpoint, for /leafz, add a serverName attribute identifying the leafnode that connected.
#### Use Case:
The IP doesn't give us all of the information we need to uniquely identify which of the many leafnodes is connecting to the core NATS server. For example, we have connections coming from outside our network that get DNATed to a different IP than the one the original leafnode server was using. Also, deployments within a K8S environment use the internal K8S IP, and tracking that down can be difficult.
#### Proposed Change:
Change the LeafInfo struct to include the server name of the leafnode, and then, within the loop that builds the LeafInfo entries, read ln.leaf.remoteServer to obtain the remote server name.
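A minimal sketch of what that could look like, using a stand-alone, abbreviated `LeafInfo` (the real struct in `server/monitor.go` has more fields, and the exact JSON tag here is an assumption):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// LeafInfo mirrors, in abbreviated form, the per-connection struct returned
// by /leafz; only Name is the proposed addition, the other fields are a
// subset of the real ones.
type LeafInfo struct {
	Name    string `json:"name,omitempty"` // server name of the remote leafnode
	Account string `json:"account"`
	IP      string `json:"ip"`
	Port    int    `json:"port"`
}

func main() {
	li := LeafInfo{Name: "edge-leaf-1", Account: "$G", IP: "10.0.0.5", Port: 7422}
	b, _ := json.MarshalIndent(li, "", "  ")
	fmt.Println(string(b))
}
```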
#### Who Benefits From The Change(s)?
Anyone that uses K8S or bridges networks together where DNAT is involved.
#### Alternative Approaches
Build out a better way of showing the entire nats / leafnodes topology.
|
https://github.com/nats-io/nats-server/issues/3921
|
https://github.com/nats-io/nats-server/pull/3923
|
d920ca631921e087125f3cb4a5ee468593224646
|
321afe6aee12428114d420e0f88483b9660bfa5e
| 2023-02-28T21:23:38Z |
go
| 2023-02-28T22:54:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,917 |
["server/accounts.go"]
|
panic on `nats server report connections` with non-`$SYS` account
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
- nats-server 2.9.14
- nats-box 0.13.4
#### OS/Container environment:
- Kubernetes with official Helm chart
#### Steps or code to reproduce the issue:
Tried reproducing this in a similar 3-node setup on Kubernetes, but sadly wasn't able to.
Normally, when doing a `nats server report connections` with a user under `$SYS`, this command always successfully returns a report in the CLI. However, when running this with a non-`$SYS` account, the command may sometimes succeed, and other times it will lead to an error message in the CLI:
```
nats: error: server request failed, ensure the account used has system privileges and appropriate permissions
```
When running this command in quick succession, it sometimes results in a panic. (Although I'm not quite sure whether it's related to running it quickly, or if it's a race condition.)
The following panic happened:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x76acd4]
goroutine 178036893 [running]:
github.com/nats-io/nats-server/v2/server.(*serviceExport).setResponseThresholdTimer(...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:2227
github.com/nats-io/nats-server/v2/server.(*Account).addRespServiceImport(0xc000ba5440, 0xc000ba58c0, {0xc006c1abd0, 0x26}, 0xc003292090, 0x0, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:2347 +0x2f4
github.com/nats-io/nats-server/v2/server.(*client).setupResponseServiceImport(0xc0042f6600, 0xc002456d68?, 0xc003292090, 0x80?, 0xc002456df0?)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3724 +0x7f
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc0042f6600, 0xc003292090, 0xc000ba58c0, {0xc00043004f, 0x100, 0x1b1})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3896 +0x44a
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0x0?, 0x0?, 0x0?, {0x0?, 0x0?}, {0x0?, 0x0?}, {0xc00043004f, 0x100, 0x1b1})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:1965 +0x32
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc0042f6600, 0x0, 0xc002783140, 0x0?, {0xc000430005, 0x1a, 0x1fb}, {0xc000430020, 0x26, 0x1e0}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3192 +0xb89
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc0042f6600, 0xc000ba58c0, 0xc001cf2330, {0xc00043004f, 0x100, 0x1b1}, {0x0, 0x0, 0x0?}, {0xc000430005, ...}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4239 +0xb10
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc0042f6600, {0xc00043004f, 0x100, 0x1b1})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3660 +0xaa8
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc0042f6600?, {0xc00043004f?, 0x48?, 0x1fb?})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3507 +0x3d
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc0042f6600, {0xc000430000, 0x14f, 0x200})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/parser.go:497 +0x210a
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc0042f6600, {0x0, 0x0, 0x0})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:1238 +0xf36
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func1()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:2654 +0x29
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:3083 +0x85
```
#### Expected result:
Consistently return data that can be shown in the CLI report.
#### Actual result:
A server may panic, causing it to restart.
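For illustration, a self-contained sketch of the nil-guard pattern that avoids this class of panic; the `serviceExport` stand-in and its fields are simplifications of the real type in `server/accounts.go`, and the actual fix in the linked PR may differ:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// serviceExport is a stand-in for the server-internal type from the stack
// trace above; the real struct has many more fields.
type serviceExport struct {
	mu         sync.Mutex
	rtmr       *time.Timer
	respThresh time.Duration
}

// setResponseThresholdTimer shows the nil-guard that prevents the reported
// dereference when the export has been removed concurrently.
func setResponseThresholdTimer(se *serviceExport) {
	if se == nil {
		return // export already gone; nothing to arm
	}
	se.mu.Lock()
	defer se.mu.Unlock()
	if se.rtmr == nil {
		se.rtmr = time.AfterFunc(se.respThresh, func() { fmt.Println("response threshold hit") })
	}
}

func main() {
	setResponseThresholdTimer(nil) // safe: no panic
	setResponseThresholdTimer(&serviceExport{respThresh: time.Millisecond})
	time.Sleep(5 * time.Millisecond)
}
```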
|
https://github.com/nats-io/nats-server/issues/3917
|
https://github.com/nats-io/nats-server/pull/3919
|
bca45c28aa3a89bb7e8da55b505c1b403e319b50
|
daadbc07cb5330ff25ded2ad3a86352aaad2e343
| 2023-02-27T21:13:04Z |
go
| 2023-02-28T15:51:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,868 |
["server/accounts.go", "server/accounts_test.go"]
|
nats subject mapping does not work with jetstream
|
## Feature Request
I am trying to use NATS subject mapping where the destination subjects are used to create a JetStream stream. If I send a request to one of the source subjects, I get `nats: no responders available for request`.
I would expect the stream to be able to ACK the request even when subject mapping is used.
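For reference, a minimal nats.go sketch of the failing flow, assuming a local server started with JetStream and a mapping like `mappings { "orders.src": "orders.dst" }` in its config (subject and stream names are illustrative):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	// The stream captures only the mapped (destination) subject.
	js.AddStream(&nats.StreamConfig{Name: "ORDERS", Subjects: []string{"orders.dst"}})

	// Publishing to the source subject should be remapped and acked by the
	// stream; per this issue it instead fails with "no responders available".
	_, err = js.Publish("orders.src", []byte("hello"))
	fmt.Println("publish err:", err)
}
```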
#### Use Case:
#### Proposed Change:
#### Who Benefits From The Change(s)?
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/3868
|
https://github.com/nats-io/nats-server/pull/3887
|
5c6b3b620a4cc77750828056637559d936c2ee62
|
ad8aa7c0c5518733c543ba17e5b43f938db3d26e
| 2023-02-14T21:30:29Z |
go
| 2023-02-20T20:59:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,848 |
["server/filestore.go", "server/jetstream_cluster_3_test.go", "server/memstore.go"]
|
After changing the replica of a stream: The non-leaders have more messages
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
```
[124] 2023/02/06 08:14:04.050458 [INF] Starting nats-server
[124] 2023/02/06 08:14:04.050483 [INF] Version: 2.9.11
[124] 2023/02/06 08:14:04.050485 [INF] Git: [23ffc16]
[124] 2023/02/06 08:14:04.050491 [DBG] Go build: go1.19.4
[124] 2023/02/06 08:14:04.050496 [INF] Name: NBS3DHEIAJA2NGY2ZQ6IL2MK4YLIDGIZNF3IKORWUNMVAQEJOGJHY3EZ
[124] 2023/02/06 08:14:04.050498 [INF] ID: NBS3DHEIAJA2NGY2ZQ6IL2MK4YLIDGIZNF3IKORWUNMVAQEJOGJHY3EZ
```
- [x] Included a Minimal, Complete, and Verifiable example: Please see section _"Steps or code to reproduce the issue"_
#### Versions of `nats-server` and affected client libraries used:
Reproducible in different versions and setups.
#### OS/Container environment:
Reproducible for various operating systems and environments (k8s, locally mac, locally linux, …).
#### Steps or code to reproduce the issue:
<details>
<summary><code>docker-compose.yml</code></summary>
```yml
version: '3'
services:
nats-1:
container_name: nats-1
image: nats:2.9.11
entrypoint: /nats-server
command: --name nats-1 --cluster_name NATS --js --sd /data --cluster nats://0.0.0.0:4245 --routes nats://nats-1:4245,nats://nats-2:4245,nats://nats-3:4245 -p 4222 --http_port 8222
networks:
- nats
ports:
- 14222:4222
- 18222:8222
nats-2:
container_name: nats-2
image: nats:2.9.11
entrypoint: /nats-server
command: --name nats-2 --cluster_name NATS --js --sd /data --cluster nats://0.0.0.0:4245 --routes nats://nats-1:4245,nats://nats-2:4245,nats://nats-3:4245 -p 4222 --http_port 8222
networks:
- nats
ports:
- 24222:4222
- 28222:8222
nats-3:
container_name: nats-3
image: nats:2.9.11
entrypoint: /nats-server
command: --name nats-3 --cluster_name NATS --js --sd /data --cluster nats://0.0.0.0:4245 --routes nats://nats-1:4245,nats://nats-2:4245,nats://nats-3:4245 -p 4222 --http_port 8222
networks:
- nats
ports:
- 34222:4222
- 38222:8222
nats-setup:
container_name: nats_setup
depends_on:
- nats-1
- nats-2
- nats-3
build:
context: .
environment:
- NATS_URL=nats://host.docker.internal:14222
networks:
- nats
networks:
nats: {}
```
</details>
<details>
<summary><code>Dockerfile</code></summary>
```dockerfile
FROM alpine:3.17
RUN apk add --no-cache wget rclone curl jq && \
wget -P ~ https://github.com/nats-io/natscli/releases/download/v0.0.35/nats-0.0.35-linux-arm64.zip && \
unzip -d ~ ~/nats-0.0.35-linux-arm64.zip && \
cp ~/nats-0.0.35-linux-arm64/nats /usr/local/bin && \
adduser -D user
WORKDIR /workdir
COPY stream.conf .
COPY script.sh .
RUN chmod +x script.sh
USER user
ENTRYPOINT ["/workdir/script.sh"]
```
</details>
<details>
<summary><code>script.sh</code></summary>
```sh
#!/bin/sh
fetch_stream_information() {
N1=$(curl --silent "http://nats-1:8222/jsz?consumers=1" | jq ".messages")
N2=$(curl --silent "http://nats-2:8222/jsz?consumers=1" | jq ".messages")
N3=$(curl --silent "http://nats-3:8222/jsz?consumers=1" | jq ".messages")
echo "=================="
echo "nats-1: $N1"
echo "nats-2: $N2"
echo "nats-3: $N3"
echo "=================="
}
nats stream ls
nats stream create --config=stream.conf
nats stream info test
nats pub test.a --count 10 "Message {{Count}} @ {{Time}}"
sleep 1
nats pub test.a --count 10 "Message {{Count}} @ {{Time}}"
sleep 1
nats pub test.a --count 10 "Message {{Count}} @ {{Time}}"
sleep 1
nats pub test.a --count 10 "Message {{Count}} @ {{Time}}"
sleep 1
nats pub test.a --count 10 "Message {{Count}} @ {{Time}}"
sleep 1
fetch_stream_information
nats stream edit --replicas=3 --force test
sleep 2
fetch_stream_information
sleep 2
fetch_stream_information
sleep 2
fetch_stream_information
nats pub test.a --count 4 "Message {{Count}} @ {{Time}}"
sleep 1
fetch_stream_information
sleep 1
fetch_stream_information
sleep 1
fetch_stream_information
sleep 1
fetch_stream_information
sleep 1
fetch_stream_information
```
</details>
<details>
<summary><code>stream.conf</code></summary>
```json
{
"config":{
"name":"test",
"subjects":[
"test.*"
],
"retention":"limits",
"max_consumers":-1,
"max_msgs":100,
"max_bytes":-1,
"max_age":10000000000,
"max_msg_size":-1,
"storage":"file",
"discard":"old",
"num_replicas":1,
"duplicate_window":5000000000,
"deny_delete":false,
"deny_purge":false,
"allow_rollup_hdrs":true
},
"created":"2021-02-27T23:49:36.700424Z",
"state":{
"messages":0,
"bytes":0,
"first_seq":0,
"first_ts":"0001-01-01T00:00:00Z",
"last_seq":0,
"last_ts":"0001-01-01T00:00:00Z",
"consumer_count":0
}
}
```
</details>
1. download all files
2. run `docker compose up --build --force-recreate --remove-orphans --renew-anon-volumes`
3. observe logs
A short explanation of what this scenario does:
1. Creation of a stream with `replica: 1` and a time based retention
2. Publish 50 messages
3. Change stream configuration to `replica: 3`
4. Wait
5. Publish 4 messages
6. ⚠️ The replicas have more messages than the leader
#### Expected result:
After changing the replica count of a stream, once the first message has been removed by retention, the messages on all replicas should be the same.
#### Actual result:
After changing the replica count of a stream, once the first message has been removed by retention, the messages on the leader are the correct ones, but the replicas still hold old messages.
**Thanks in advance! 🙏**
Cheers
Stefan
|
https://github.com/nats-io/nats-server/issues/3848
|
https://github.com/nats-io/nats-server/pull/3861
|
79df099c441473bea8cca67db7d7af53484968e7
|
8169a117e4b2c01d1487548ff059835ec84c4512
| 2023-02-06T19:51:21Z |
go
| 2023-02-24T00:41:26Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,826 |
["logger/log.go", "logger/log_test.go", "server/configs/reload/reload.conf", "server/log.go", "server/log_test.go", "server/opts.go", "server/reload.go", "server/reload_test.go", "server/signal_test.go"]
|
Configure log timestamp as UTC
|
## Feature Request
The log timestamp can be enabled or disabled (enabled by default), but the timestamp is always in the host's currently configured timezone (e.g. PST), and the timezone is not recorded in the timestamp format.
Users would benefit from an option to log timestamps as UTC regardless of the host's currently configured timezone.
#### Use Case:
Logs from multiple servers are scraped into a central log aggregator service where timezone normalization is needed or required.
#### Proposed Change:
Add a server configuration option (off by default, to preserve backward compatibility) to always log timestamps (if enabled) as normalized UTC.
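As a rough illustration of the desired behavior, here is a sketch using the stdlib `log.LUTC` flag; the nats-server logger is custom, so this only shows what the option would enable, not how it would be implemented:
```go
package main

import (
	"log"
	"os"
)

// newLogger builds a logger whose timestamps are rendered in UTC when the
// (hypothetical) utc option is set, regardless of the host timezone.
func newLogger(utc bool) *log.Logger {
	flags := log.LstdFlags | log.Lmicroseconds
	if utc {
		flags |= log.LUTC // timestamps in UTC regardless of host TZ
	}
	return log.New(os.Stdout, "", flags)
}

func main() {
	newLogger(false).Println("host-local timestamp")
	newLogger(true).Println("UTC timestamp")
}
```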
#### Who Benefits From The Change(s)?
Users who aggregate server logs from multiple hosts that may not always be uniformly configured as UTC timezone at host level.
#### Alternative Approaches
Output timestamp in format that preserves timezone information which could allow log aggregators to normalize downstream as required.
|
https://github.com/nats-io/nats-server/issues/3826
|
https://github.com/nats-io/nats-server/pull/3833
|
5072404ed22a3c17bd59bc27bfad6aa088dc1885
|
0551c2f9141b00b465300bedc02ede1ce1e9ccba
| 2023-01-28T00:33:24Z |
go
| 2023-02-13T09:42:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,803 |
["server/jetstream_cluster_2_test.go", "server/stream.go"]
|
Allow RePublish stream option to be editable
|
## Feature Request
Currently, the NATS server doesn't allow clients to change the `RePublish` stream option after it has been defined. Allowing clients to remove or modify `RePublish` would let users disable republishing when it is no longer necessary and make it easier to modify the subject namespace over time.
In addition, it would be useful for the RePublish option to be a list so you can republish messages to multiple subjects.
#### Use Case:
The most immediate use cases are 1) disabling republishing when it is no longer necessary and 2) using the feature to make modifications to the subject namespace.
#### Proposed Change:
Allow the `RePublish` stream option to be editable.
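A sketch of the desired client-side flow with nats.go, assuming the server accepts the change on `UpdateStream` (which is exactly what this issue requests); stream and subject names are illustrative:
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	cfg := &nats.StreamConfig{
		Name:      "EVENTS",
		Subjects:  []string{"events.>"},
		RePublish: &nats.RePublish{Source: "events.>", Destination: "repub.events.>"},
	}
	js.AddStream(cfg)

	// Disable republishing after the fact; servers without this change reject it.
	cfg.RePublish = nil
	_, err = js.UpdateStream(cfg)
	fmt.Println("update err:", err)
}
```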
#### Who Benefits From The Change(s)?
Users who want to disable RePublishing or change a destination subject.
#### Alternative Approaches
It should be possible to implement the same behavior by creating a new stream that uses the old stream as a source. Once the streams are in sync, you can set the desired `RePublish` option on the new stream and delete the old stream. To prevent data loss between deleting the old stream and subscribing the new stream to the old subjects, you can use subject mapping to redirect messages from the old subjects to the new stream.
|
https://github.com/nats-io/nats-server/issues/3803
|
https://github.com/nats-io/nats-server/pull/3811
|
9e1a25dc94ae1d2cf72f813d971e63250dc7bc34
|
a929f86fae0eb64ca966c9df8164d2816746b849
| 2023-01-23T07:10:21Z |
go
| 2023-01-25T17:11:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,793 |
["main.go", "server/monitor.go"]
|
Reported number of gomaxprocs in varz not updated after automaxprocs runs
|
## Defect
When creating the Varz struct, the server picks up the initial GOMAXPROCS value before automaxprocs runs, and the value is not updated later. As a result, the value reported in `/varz` will remain the initial one, without considering the CPU limits:
https://github.com/nats-io/nats-server/blob/a9a9b92f6d98cae92c299744ad6b602d4981c13e/server/monitor.go#L45
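A stand-alone illustration of the staleness: a value captured once at startup versus one re-read per request (the `runtime.GOMAXPROCS(2)` call stands in for automaxprocs applying a CPU quota):
```go
package main

import (
	"fmt"
	"runtime"
)

// cachedMaxProcs is captured once at package init, before any later
// adjustment, which is analogous to how the Varz field goes stale.
var cachedMaxProcs = runtime.GOMAXPROCS(0)

// currentMaxProcs re-reads the live value on every call.
func currentMaxProcs() int {
	return runtime.GOMAXPROCS(0)
}

func main() {
	runtime.GOMAXPROCS(2) // stand-in for automaxprocs applying a CPU limit
	fmt.Println("cached:", cachedMaxProcs, "current:", currentMaxProcs())
}
```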
|
https://github.com/nats-io/nats-server/issues/3793
|
https://github.com/nats-io/nats-server/pull/3796
|
f2b087c5672aef0e1826ed932b27b349cd80b801
|
f836b75efe669c9fdc0b67335a910e4f711af526
| 2023-01-19T17:22:36Z |
go
| 2023-01-20T21:28:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,791 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
Regular KV watch error when 1 server is down in a 3 servers cluster
|
## Defect
With the following setup:
- cluster of 3 NATS servers
- a KV store created with "replicas=3" (when all servers are up)
- 1 of the 3 servers is going down
Every NATS operation works fine except the KV `watch`, which sometimes produces the error `nats: error: context deadline exceeded` when 1 server is down.
Please find more details below:
- [x] `nats-server -DV` output (the servers' clocks are UTC and my client's is CET, so there is exactly 1 hour of offset between server and client logs)
1. [1st server logs](https://github.com/nats-io/nats-server/files/10447389/cluster-1-log.txt) (probably nothing to observe here)
2. [2nd server logs](https://github.com/nats-io/nats-server/files/10447386/cluster-2-log.txt) (the nats-cli connects to this server, look for the `Client connection created` line)
3. [nats-cli logs](https://github.com/nats-io/nats-server/files/10447391/nats-cli-log.txt) (this output is the result of the command `strace -tt -f -e trace='%network' nats --trace --server nats://163.172.167.179:4222,nats://163.172.189.87:4222 --creds /path/to/the/file.creds kv watch mykv`)
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
See section "Steps or code to reproduce the issue".
#### Versions of `nats-server` and affected client libraries used:
- nats-server version is v2.9.11
- nats-cli version is 0.0.35
#### OS/Container environment:
Ubuntu 20.04.5 LTS
#### Steps or code to reproduce the issue:
1. install a NATS cluster with 3 servers and create an account / user with JetStream enabled
2. create a KV with `replicas=3`: `nats kv add mykv --replicas=3`
3. put 2 keys/values: `nats kv put mykv key1 value1` && `nats kv put mykv key2 value2`
4. shutdown 1 of the 3 nats servers
5. run a watch: `nats kv watch mykv`
This last command fails sometimes with a `nats: error: context deadline exceeded`.
This error also happens even when `nats` is run with the `--server` option pointing at a server that is up.
#### Expected result:
The `nats kv watch` should work because 2 servers are still up.
All other operations, like `nats kv get` and `nats kv put`, work very well.
#### Actual result:
Watching a KV sometimes fails with `nats: error: context deadline exceeded`.
Notes:
1. the issue can also be reproduced with the nats.go Golang client (see the sketch below).
2. after a `watch` failure, we can see some consumers that were not deleted when running `nats consumer ls KV_mykv`
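A minimal nats.go version of the watch, mirroring the CLI steps above (same URLs, creds path, and bucket name as in this report):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://163.172.167.179:4222,nats://163.172.189.87:4222",
		nats.UserCredentials("/path/to/the/file.creds"))
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	js, _ := nc.JetStream()
	kv, err := js.KeyValue("mykv")
	if err != nil {
		panic(err) // with one server down this can fail with a context deadline
	}
	w, err := kv.WatchAll()
	if err != nil {
		panic(err) // ...as can creating the watcher itself
	}
	defer w.Stop()
	for entry := range w.Updates() {
		if entry == nil {
			fmt.Println("initial values delivered")
			continue
		}
		fmt.Printf("%s = %s\n", entry.Key(), entry.Value())
	}
}
```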
|
https://github.com/nats-io/nats-server/issues/3791
|
https://github.com/nats-io/nats-server/pull/3820
|
b238323b0c5e274643db54d52f7f498521f361c4
|
461aad17a53b8d6c4f400697bdd2f02ea12b2e29
| 2023-01-18T15:08:33Z |
go
| 2023-01-26T16:27:11Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,782 |
["server/accounts.go", "server/opts.go"]
|
Add option to the nats resolver to hard delete the jwt instead of renaming them to `.deleted`.
|
## Feature Request
Add option to the nats resolver to hard delete the jwt instead of renaming them to `.deleted`.
I'll propose an MR to implement the option listed below.
#### Use Case:
We manage the accounts in our database, so if we want to recover them we can restore from the database; we don't need the copy on disk.
We can do some manual (or cron) cleanup, but since the nats code already has a HardDelete option internally, why not expose it in the configuration.
#### Proposed Change:
Add an option in the resolver config, for instance: `hard_delete: bool`.
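A sketch of the two deletion behaviors the option would toggle; the function shape and the plain `.deleted` suffix are assumptions for illustration, not the server's actual resolver code:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// deleteJWT removes a resolver JWT file either outright (the proposed
// hard_delete behavior) or by renaming it to keep a recoverable copy
// (the current behavior described above).
func deleteJWT(path string, hardDelete bool) error {
	if hardDelete {
		return os.Remove(path) // remove the file outright
	}
	return os.Rename(path, path+".deleted") // keep a recoverable copy on disk
}

func main() {
	dir, _ := os.MkdirTemp("", "jwt")
	p := filepath.Join(dir, "ACCOUNT.jwt")
	os.WriteFile(p, []byte("eyJ..."), 0o600)
	fmt.Println(deleteJWT(p, true))
}
```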
#### Who Benefits From The Change(s)?
Any user that do not need the `.deleted` files.
#### Alternative Approaches
Overloading `allow_delete: hard_delete`, but that may be problematic for the parser and for backward compatibility.
|
https://github.com/nats-io/nats-server/issues/3782
|
https://github.com/nats-io/nats-server/pull/3783
|
a5326c97ef5629e1bc4fd77196d55266aca1cbc3
|
7b82384fd75eab3008f3aa419cbc26bb81cea9b0
| 2023-01-13T12:38:32Z |
go
| 2023-04-06T14:04:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,773 |
["server/certstore/certstore.go", "server/client.go", "server/monitor.go", "server/ocsp.go", "test/configs/certs/ocsp_peer/mini-ca/misc/misconfig_TestServer1_bundle.pem", "test/configs/certs/ocsp_peer/mini-ca/misc/trust_config1_bundle.pem", "test/configs/certs/ocsp_peer/mini-ca/misc/trust_config2_bundle.pem", "test/configs/certs/ocsp_peer/mini-ca/misc/trust_config3_bundle.pem", "test/ocsp_peer_test.go", "test/ocsp_test.go"]
|
OCSP mode assumes only one CA cert, for the current issuer
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.9.11, nats-cli current HEAD
#### OS/Container environment:
FreeBSD Jail, but reproduces anywhere
#### Steps or code to reproduce the issue:
Have a TLS block with a `ca_file` containing more than one cert, and set up an OCSP block.
```
tls {
ca_file: /etc/ssl/cert.pem
cert_file: /etc/nats/pkix/nats.home.pennock.cloud/tls.crt
key_file: /etc/nats/pkix/nats.home.pennock.cloud/tls.key
}
ocsp {
mode: always
url: "http://r3.o.lencr.org"
}
```
#### Expected result:
The server starts up, requests and obtains an OCSP staple, and I can query it.
#### Actual result:
If the `ca_file` bundle contains just one certificate, the intermediate used for issuing the current certificate, then things work. This is problematic because, at a bare minimum with a CA such as Let's Encrypt, you should configure both R3 and R4 (or E1 and E2) together, so that LE can fail over to their standby. The `ca_file` should be a list of all certificate authorities trusted for verifying peer identities, not a statement of which specific intermediate issued the current server certificate.
If the `ca_file` contains a list of all the normal system CA certs, then with OCSP enabled the nats-server start-up errors out with:
```
% /usr/local/sbin/nats-server -DV -c /etc/nats/nats.conf -l /dev/stdout
nats-server: CN=Thawte Universal CA Root,OU=Thawte Universal CA Root,O=Thawte invalid ca basic constraints: is not ca
```
If the `ca_file` contains `/etc/ssl/bundle-lets-encrypt.pem` which is a bundle of _just_ the Let's Encrypt certs (E1, E2, R3, R4), then nats-server errors out with:
```
% /usr/local/sbin/nats-server -DV -c /etc/nats/nats.conf -l /dev/stdout
nats-server: bad OCSP status update for certificate at '/etc/nats/pkix/nats.home.pennock.cloud/tls.crt': failed to get remote status: ocsp: error from server: unauthorized
```
which is OCSP-speak in this scenario for "sent the wrong cert details".
If I strip `ca_file` down to contain _only_ the R3 certificate, then I get working OCSP; running the new natscli command:
```
nats account tls --ocsp --no-pem
```
correctly yields:
```
# OCSP: GOOD status=0 sn=295101430811608931834342633074769021965446 producedAt=(2023-01-09 22:46:00 +0000 UTC) thisUpdate=(2023-01-09 22:00:00 +0000 UTC) nextUpdate=(2023-01-16 21:59:58 +0000 UTC)
```
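For illustration, a stand-alone sketch of picking the actual issuer out of a multi-certificate bundle by matching the leaf's issuer, instead of assuming the bundle holds exactly one CA (file paths are the ones from this report; the server's eventual fix may differ):
```go
package main

import (
	"bytes"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// findIssuer scans a CA bundle for the certificate that actually issued
// leaf, rather than assuming the bundle contains exactly one certificate.
func findIssuer(leaf *x509.Certificate, bundlePEM []byte) *x509.Certificate {
	for block, rest := pem.Decode(bundlePEM); block != nil; block, rest = pem.Decode(rest) {
		ca, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			continue
		}
		if bytes.Equal(leaf.RawIssuer, ca.RawSubject) && leaf.CheckSignatureFrom(ca) == nil {
			return ca
		}
	}
	return nil
}

func main() {
	leafPEM, _ := os.ReadFile("/etc/nats/pkix/nats.home.pennock.cloud/tls.crt")
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		panic("no PEM block in leaf file")
	}
	leaf, _ := x509.ParseCertificate(block.Bytes)
	bundle, _ := os.ReadFile("/etc/ssl/bundle-lets-encrypt.pem")
	if ca := findIssuer(leaf, bundle); ca != nil {
		fmt.Println("issuer:", ca.Subject.CommonName)
	}
}
```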
|
https://github.com/nats-io/nats-server/issues/3773
|
https://github.com/nats-io/nats-server/pull/4355
|
33d1f852b2d5a34a8e82cbd508c8e44411619bd6
|
971c61692ac6f4f85a8d90461a1e1715b1906db5
| 2023-01-09T23:01:52Z |
go
| 2023-08-01T23:13:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,734 |
["server/filestore.go", "server/jetstream_test.go"]
|
Nats Server is ignoring Jetstreams limits in case of 0 byte files and grows uncontrolled
|
## Defect
We discovered that some NATS brokers are running over their JetStream limits in our edge-computing scenarios. We cannot always guarantee that these devices are shut down properly, so 0-byte files can sometimes occur.
The NATS broker seems unable to clean up the message directory when one of the .blk files is a 0-byte file, as shown in this screenshot:

We have a 1 GB limit for our stream and a 5 GB JetStream maximum. The metrics show a steady rise of data, with no data being removed from the stream.
<img width="945" alt="CleanShot 2022-12-21 at 13 12 16@2x" src="https://user-images.githubusercontent.com/12080409/208902653-1beb135c-6dcc-4f00-ad5b-6781696f2bb6.png">
NATS does nothing unusual until the 5 GB limit is reached:
<img width="1192" alt="CleanShot 2022-12-21 at 13 16 57@2x" src="https://user-images.githubusercontent.com/12080409/208903377-2467550c-a31c-44d0-9c60-7cf54ec76389.png">
#### Versions of `nats-server` and affected client libraries used:
#### OS/Container environment:
containerd - nats server in version: 2.9.10-alpine
```
nats.conf: |-
pid_file: "/var/run/nats/nats.pid"
http: 8222
jetstream {
store_dir: "/data/jetstream/store"
max_file_store: 5G
}
```
Stream:
```yaml
---
apiVersion: jetstream.nats.io/v1beta2
kind: Stream
metadata:
name: telemetry
namespace: cecu-broker
spec:
maxBytes: 1073741824
name: telemetry
storage: file
subjects:
- telemetry.*
- telemetry.command.*
```
#### Steps or code to reproduce the issue:
Create a 0-byte file in the messages directory and wait until JetStream overruns its limits.
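For illustration, a stand-alone sketch of a recovery-time guard that removes empty block files; the directory layout is taken from the screenshot above, and this is an assumption about a possible mitigation, not the actual fix:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pruneEmptyBlocks treats empty .blk files as corrupt and removes them so
// retention accounting is not thrown off by them.
func pruneEmptyBlocks(msgDir string) error {
	blks, err := filepath.Glob(filepath.Join(msgDir, "*.blk"))
	if err != nil {
		return err
	}
	for _, blk := range blks {
		fi, err := os.Stat(blk)
		if err != nil {
			continue
		}
		if fi.Size() == 0 {
			fmt.Println("removing empty block:", blk)
			if err := os.Remove(blk); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	if err := pruneEmptyBlocks("/data/jetstream/store/$G/streams/telemetry/msgs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```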
#### Expected result:
Jetstream keeps the 1 GB limit for the stream as configured.
#### Actual result:
The file system keeps filling until NATS runs into the max limit (in our case 5 GB) and then fails to accept any data.
|
https://github.com/nats-io/nats-server/issues/3734
|
https://github.com/nats-io/nats-server/pull/4166
|
ee38f8bbc550467c988affe99c4e7d85442b2a1e
|
9434110c050f64a3902c5da90309acb47b1022bd
| 2022-12-21T12:18:08Z |
go
| 2023-05-15T22:40:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,727 |
["server/consumer.go", "server/errors.json", "server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go", "server/jetstream_errors_generated.go", "server/jetstream_test.go", "server/stream.go"]
|
Add stream configuration option to set a default inactive threshold for consumers
| null |
https://github.com/nats-io/nats-server/issues/3727
|
https://github.com/nats-io/nats-server/pull/4105
|
0fadaf211f9ef5a2144a00805b72aef124a678fa
|
752d35015c05640e26c562289e5b9996e3c98b34
| 2022-12-20T18:30:50Z |
go
| 2023-09-01T14:36:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,712 |
["server/jetstream_cluster_3_test.go", "server/raft.go"]
|
R3 Memory Stream cannot recover when 1 server restarts in 2.9.9
|
An R3 memory stream hits RAFT errors in `2.9.9`. This does not happen in `2.9.8`.
1. Create a cluster of 3
2. Create an R3 memory stream `nats bench js.bench --js --pub 2 --msgs 1000000 --storage=memory --replicas=3 --size=1024 --purge`
```
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 0.63s ago
Replica: nats-2, current, seen 0.63s ago
```
3. Restart one of the servers; in this case I restarted the replica `nats-2`
```
Cluster Information:
Name: nats
Leader:
Replica: nats-0, outdated, seen 2m57s ago, 160,069 operations behind
Replica: nats-1, current, seen 4.61s ago
Replica: nats-2, outdated, seen 4.61s ago, 1 operation behind
```
Logs from all 3 servers:
`nats-0`:
```
[7] 2022/12/14 16:01:56.024985 [INF] Starting nats-server
[7] 2022/12/14 16:01:56.025015 [INF] Version: 2.9.9
[7] 2022/12/14 16:01:56.025017 [INF] Git: [825949b]
[7] 2022/12/14 16:01:56.025018 [INF] Cluster: nats
[7] 2022/12/14 16:01:56.025019 [INF] Name: nats-0
[7] 2022/12/14 16:01:56.025020 [INF] Node: S1Nunr6R
[7] 2022/12/14 16:01:56.025021 [INF] ID: NCQWINNGATEV3LXMT2FT2WBY6GQYVGA2W6FMTFBE5KWBCO2ACTOSQ7UG
[7] 2022/12/14 16:01:56.025026 [INF] Using configuration file: /etc/nats-config/nats.conf
[7] 2022/12/14 16:01:56.025471 [INF] Starting http monitor on 0.0.0.0:8222
[7] 2022/12/14 16:01:56.025495 [INF] Starting JetStream
[7] 2022/12/14 16:01:56.025566 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[7] 2022/12/14 16:01:56.025569 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[7] 2022/12/14 16:01:56.025570 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[7] 2022/12/14 16:01:56.025571 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[7] 2022/12/14 16:01:56.025572 [INF]
[7] 2022/12/14 16:01:56.025574 [INF] https://docs.nats.io/jetstream
[7] 2022/12/14 16:01:56.025574 [INF]
[7] 2022/12/14 16:01:56.025576 [INF] ---------------- JETSTREAM ----------------
[7] 2022/12/14 16:01:56.025580 [INF] Max Memory: 2.00 GB
[7] 2022/12/14 16:01:56.025583 [INF] Max Storage: 10.00 GB
[7] 2022/12/14 16:01:56.025584 [INF] Store Directory: "/data/jetstream"
[7] 2022/12/14 16:01:56.025585 [INF] -------------------------------------------
[7] 2022/12/14 16:01:56.025784 [INF] Starting JetStream cluster
[7] 2022/12/14 16:01:56.025788 [INF] Creating JetStream metadata controller
[7] 2022/12/14 16:01:56.025899 [INF] JetStream cluster recovering state
[7] 2022/12/14 16:01:56.026156 [INF] Listening for client connections on 0.0.0.0:4222
[7] 2022/12/14 16:01:56.026253 [INF] Server is ready
[7] 2022/12/14 16:01:56.026265 [INF] Cluster name is nats
[7] 2022/12/14 16:01:56.026284 [INF] Listening for route connections on 0.0.0.0:6222
[7] 2022/12/14 16:01:56.027126 [INF] 10.1.61.168:6222 - rid:10 - Route connection created
[7] 2022/12/14 16:01:56.027135 [INF] 10.1.61.141:6222 - rid:11 - Route connection created
[7] 2022/12/14 16:01:56.126792 [WRN] Waiting for routing to be established...
[7] 2022/12/14 16:01:57.297676 [INF] JetStream cluster new metadata leader: nats-2/nats
[7] 2022/12/14 16:02:15.550951 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:02:39.508589 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:03:00.562885 [INF] 10.1.61.141:33714 - rid:21 - Route connection created
[7] 2022/12/14 16:03:00.563048 [INF] 10.1.61.141:33714 - rid:21 - Router connection closed: Duplicate Route
[7] 2022/12/14 16:03:08.414232 [INF] 10.1.61.168:36646 - rid:22 - Route connection created
[7] 2022/12/14 16:03:08.414457 [INF] 10.1.61.168:36646 - rid:22 - Router connection closed: Duplicate Route
[7] 2022/12/14 16:05:28.316290 [INF] 10.1.61.188:53078 - rid:23 - Route connection created
[7] 2022/12/14 16:05:28.508930 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:05:32.003783 [INF] Self is new JetStream cluster metadata leader
[7] 2022/12/14 16:05:32.544640 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:32.544844 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:05:44.649502 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:44.649820 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:05:53.519192 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:53.519473 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:04.008716 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:04.008974 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:12.236093 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:12.236375 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:16.367603 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:16.367798 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:22.798241 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:22.798546 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:32.808144 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:32.808418 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:44.336013 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:44.336189 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:53.427105 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:53.427400 [ERR] RAFT [S1Nunr6R - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
```
`nats-1`:
```
[7] 2022/12/14 16:01:42.997362 [INF] Starting nats-server
[7] 2022/12/14 16:01:42.997395 [INF] Version: 2.9.9
[7] 2022/12/14 16:01:42.997396 [INF] Git: [825949b]
[7] 2022/12/14 16:01:42.997398 [INF] Cluster: nats
[7] 2022/12/14 16:01:42.997399 [INF] Name: nats-1
[7] 2022/12/14 16:01:42.997401 [INF] Node: yrzKKRBu
[7] 2022/12/14 16:01:42.997404 [INF] ID: NBX5D6W3EL7VGPVH6BHSXOGCVH77TX5FQUAIZUPAM4CM7JJK7KYRWRSP
[7] 2022/12/14 16:01:42.997414 [INF] Using configuration file: /etc/nats-config/nats.conf
[7] 2022/12/14 16:01:42.997856 [INF] Starting http monitor on 0.0.0.0:8222
[7] 2022/12/14 16:01:42.997877 [INF] Starting JetStream
[7] 2022/12/14 16:01:42.997940 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[7] 2022/12/14 16:01:42.997942 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[7] 2022/12/14 16:01:42.997943 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[7] 2022/12/14 16:01:42.997944 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[7] 2022/12/14 16:01:42.997945 [INF]
[7] 2022/12/14 16:01:42.997947 [INF] https://docs.nats.io/jetstream
[7] 2022/12/14 16:01:42.997948 [INF]
[7] 2022/12/14 16:01:42.997949 [INF] ---------------- JETSTREAM ----------------
[7] 2022/12/14 16:01:42.997955 [INF] Max Memory: 2.00 GB
[7] 2022/12/14 16:01:42.997958 [INF] Max Storage: 10.00 GB
[7] 2022/12/14 16:01:42.997960 [INF] Store Directory: "/data/jetstream"
[7] 2022/12/14 16:01:42.997961 [INF] -------------------------------------------
[7] 2022/12/14 16:01:42.998108 [INF] Starting JetStream cluster
[7] 2022/12/14 16:01:42.998110 [INF] Creating JetStream metadata controller
[7] 2022/12/14 16:01:42.998226 [INF] JetStream cluster recovering state
[7] 2022/12/14 16:01:42.998484 [INF] Listening for client connections on 0.0.0.0:4222
[7] 2022/12/14 16:01:42.998596 [INF] Server is ready
[7] 2022/12/14 16:01:42.998633 [INF] Cluster name is nats
[7] 2022/12/14 16:01:42.998653 [INF] Listening for route connections on 0.0.0.0:6222
[7] 2022/12/14 16:01:42.999631 [INF] 10.1.61.168:6222 - rid:10 - Route connection created
[7] 2022/12/14 16:01:42.999684 [INF] 10.1.61.166:6222 - rid:11 - Route connection created
[7] 2022/12/14 16:01:43.099272 [WRN] Waiting for routing to be established...
[7] 2022/12/14 16:01:43.268478 [INF] JetStream cluster new metadata leader: nats-0/nats
[7] 2022/12/14 16:01:56.027127 [INF] 10.1.61.165:44852 - rid:14 - Route connection created
[7] 2022/12/14 16:01:57.297711 [INF] JetStream cluster new metadata leader: nats-2/nats
[7] 2022/12/14 16:02:03.728508 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:02:03.728592 [ERR] RAFT [yrzKKRBu - S-R3M-NfkFgsyU] Error sending snapshot to follower [S1Nunr6R]: raft: no snapshot available
[7] 2022/12/14 16:02:20.419875 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:02:20.420069 [ERR] RAFT [yrzKKRBu - S-R3M-NfkFgsyU] Error sending snapshot to follower [S1Nunr6R]: raft: no snapshot available
[7] 2022/12/14 16:02:48.023257 [WRN] 10.1.61.132:58230 - cid:21 - "v1.16.0:go:NATS CLI Version development" - Readloop processing time: 6.117662496s
[7] 2022/12/14 16:02:49.518804 [INF] 10.1.61.166:6222 - rid:11 - Slow Consumer Detected: WriteDeadline of 10s exceeded with 2 chunks of 70620 total bytes.
[7] 2022/12/14 16:02:51.335679 [WRN] 10.1.61.132:58230 - cid:21 - "v1.16.0:go:NATS CLI Version development" - Readloop processing time: 3.312397376s
[7] 2022/12/14 16:02:58.659381 [WRN] 10.1.61.132:56172 - cid:23 - "v1.16.0:go:NATS CLI Version development" - Readloop processing time: 3.007578095s
[7] 2022/12/14 16:02:59.519405 [INF] 10.1.61.166:6222 - rid:11 - Slow Consumer Detected: WriteDeadline of 10s exceeded with 515 chunks of 33651723 total bytes.
[7] 2022/12/14 16:02:59.519426 [INF] 10.1.61.166:6222 - rid:11 - Router connection closed: Slow Consumer (Write Deadline)
[7] 2022/12/14 16:03:00.562425 [INF] 10.1.61.165:6222 - rid:25 - Route connection created
[7] 2022/12/14 16:03:00.563192 [INF] 10.1.61.165:6222 - rid:25 - Router connection closed: Duplicate Route
[7] 2022/12/14 16:03:20.795039 [INF] 10.1.61.168:58176 - rid:26 - Route connection created
[7] 2022/12/14 16:03:20.795196 [INF] 10.1.61.168:58176 - rid:26 - Router connection closed: Duplicate Route
[7] 2022/12/14 16:05:28.316316 [INF] 10.1.61.188:50008 - rid:27 - Route connection created
[7] 2022/12/14 16:05:32.004048 [INF] JetStream cluster new metadata leader: nats-0/nats
[7] 2022/12/14 16:05:40.555902 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:40.556254 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:05:48.677125 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:48.677353 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:05:59.266785 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:05:59.267116 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:27.424254 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:27.424455 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:37.268869 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:37.269047 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:06:48.935220 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:06:48.935532 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:07:01.950252 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:07:01.950471 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:07:09.584142 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:07:09.584357 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:07:25.626529 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:07:25.626817 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
[7] 2022/12/14 16:07:29.876317 [INF] JetStream cluster new stream leader for '$G > benchstream'
[7] 2022/12/14 16:07:29.876652 [ERR] RAFT [yrzKKRBu - S-R3M-SmE55jwu] Error sending snapshot to follower [cnrtt3eg]: raft: no snapshot available
```
`nats-2`:
```
[7] 2022/12/14 16:05:28.314068 [INF] Starting nats-server
[7] 2022/12/14 16:05:28.314104 [INF] Version: 2.9.9
[7] 2022/12/14 16:05:28.314106 [INF] Git: [825949b]
[7] 2022/12/14 16:05:28.314107 [INF] Cluster: nats
[7] 2022/12/14 16:05:28.314109 [INF] Name: nats-2
[7] 2022/12/14 16:05:28.314111 [INF] Node: cnrtt3eg
[7] 2022/12/14 16:05:28.314113 [INF] ID: NBUWFK5PP5JIAOPG4KUCQZBE6VMUEVBXWCGUY5MNF6Q34T36JHOYG5LX
[7] 2022/12/14 16:05:28.314120 [INF] Using configuration file: /etc/nats-config/nats.conf
[7] 2022/12/14 16:05:28.314626 [INF] Starting http monitor on 0.0.0.0:8222
[7] 2022/12/14 16:05:28.314650 [INF] Starting JetStream
[7] 2022/12/14 16:05:28.314727 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[7] 2022/12/14 16:05:28.314730 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[7] 2022/12/14 16:05:28.314731 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[7] 2022/12/14 16:05:28.314732 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[7] 2022/12/14 16:05:28.314733 [INF]
[7] 2022/12/14 16:05:28.314734 [INF] https://docs.nats.io/jetstream
[7] 2022/12/14 16:05:28.314735 [INF]
[7] 2022/12/14 16:05:28.314737 [INF] ---------------- JETSTREAM ----------------
[7] 2022/12/14 16:05:28.314742 [INF] Max Memory: 2.00 GB
[7] 2022/12/14 16:05:28.314745 [INF] Max Storage: 10.00 GB
[7] 2022/12/14 16:05:28.314746 [INF] Store Directory: "/data/jetstream"
[7] 2022/12/14 16:05:28.314747 [INF] -------------------------------------------
[7] 2022/12/14 16:05:28.314896 [INF] Starting JetStream cluster
[7] 2022/12/14 16:05:28.314900 [INF] Creating JetStream metadata controller
[7] 2022/12/14 16:05:28.315018 [INF] JetStream cluster recovering state
[7] 2022/12/14 16:05:28.315281 [INF] Listening for client connections on 0.0.0.0:4222
[7] 2022/12/14 16:05:28.315387 [INF] Server is ready
[7] 2022/12/14 16:05:28.315422 [INF] Cluster name is nats
[7] 2022/12/14 16:05:28.315443 [INF] Listening for route connections on 0.0.0.0:6222
[7] 2022/12/14 16:05:28.316311 [INF] 10.1.61.165:6222 - rid:10 - Route connection created
[7] 2022/12/14 16:05:28.316322 [INF] 10.1.61.141:6222 - rid:11 - Route connection created
[7] 2022/12/14 16:05:28.415383 [WRN] Waiting for routing to be established...
[7] 2022/12/14 16:05:32.003976 [INF] JetStream cluster new metadata leader: nats-0/nats
[7] 2022/12/14 16:05:47.934387 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:05:57.934823 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:07.935043 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:17.935070 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:27.934964 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:37.934234 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:47.934799 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:06:57.934664 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:07.934835 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:17.934462 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:27.934916 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:37.934793 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:47.934515 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
[7] 2022/12/14 16:07:57.934846 [WRN] Healthcheck failed: "JetStream stream '$G > benchstream' is not current"
```
|
https://github.com/nats-io/nats-server/issues/3712
|
https://github.com/nats-io/nats-server/pull/3713
|
6ff03574fb91a5785ca16561cb4f5cb4d4044625
|
a68194579d5aeee659fa04a23eadd232c0dbbafc
| 2022-12-14T16:08:53Z |
go
| 2022-12-15T00:56:33Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,703 |
["server/events_test.go", "server/jetstream_cluster_3_test.go", "server/jetstream_test.go", "server/monitor.go"]
|
Fix `/healthz` check for `js-enabled=true`
|
`js-enabled=true` is not a well-defined health-check request parameter currently. It has the same effect as looking at the NATS config to see if JetStream should be enabled:
https://github.com/nats-io/nats-server/blob/d6f243f39bac2e0eaa577cd5e2bfa2d848a86389/server/monitor.go#L3012-L3021
We should instead just check the NATS config to determine whether JetStream should be enabled, and compare the current state against the config for the health check.
The `js-enabled=true` parameter should be changed to `js-enabled-only=true` and should make the healthz endpoint return after checking the desired state of JetStream (from the config) against the current state of JetStream; a sketch follows the list below.
- `js-enabled-only=true` -> exit after checking if JetStream is enabled
- `js-server-only=true` -> exit after checking if Server is Current with Meta Leader
- no parameters -> check everything
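A rough sketch of those semantics as an HTTP handler; the handler shape and the boolean inputs are illustrative, not the server's actual monitor code:
```go
package main

import (
	"fmt"
	"net/http"
)

// healthz applies the proposed parameter semantics: the enabled check always
// runs, js-enabled-only returns right after it, js-server-only returns after
// the meta-leader check, and no parameters means check everything.
func healthz(jsConfigured, jsRunning, currentWithMeta bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		if jsConfigured != jsRunning {
			http.Error(w, "JetStream state does not match config", http.StatusServiceUnavailable)
			return
		}
		if q.Get("js-enabled-only") == "true" {
			fmt.Fprintln(w, "ok") // exit after the enabled check
			return
		}
		if !currentWithMeta {
			http.Error(w, "server not current with meta leader", http.StatusServiceUnavailable)
			return
		}
		if q.Get("js-server-only") == "true" {
			fmt.Fprintln(w, "ok") // exit after the server-current check
			return
		}
		// no parameters: stream/consumer checks would follow here
		fmt.Fprintln(w, "ok")
	}
}

func main() {
	http.Handle("/healthz", healthz(true, true, true))
	http.ListenAndServe(":8222", nil)
}
```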
|
https://github.com/nats-io/nats-server/issues/3703
|
https://github.com/nats-io/nats-server/pull/3704
|
5b42cda4ddff59378fc984b75dec7f62a20b70d7
|
dcd7ffdc4b945eae612fe28c08478e28f7a04157
| 2022-12-09T19:06:06Z |
go
| 2022-12-10T15:32:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,677 |
["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go", "server/stream.go"]
|
Jetstream fails to properly create consumers and jetstream in case of parallel requests
|
## Defect
If you try to add the same stream in parallel, the parallel calls time out and the created stream is not usable: you cannot publish anything to it, you just get a timeout.
#### Versions of `nats-server` and affected client libraries used:
2.9.8
#### OS/Container environment:
Macos, Linux (any)
#### Steps or code to reproduce the issue:
Just run a test:
https://github.com/nats-io/nats-server/compare/main...omnilight:nats-server:multiple_parallel_consumers_problem
Result:
```
=== RUN TestJetStreamParallelConsumerCreation
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 7, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 9, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 5, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 0, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 6, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 4, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 1, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 2, context deadline exceeded
jetstream_cluster_1_test.go:785: Can not add stream, unexpected error: goroutine 3, context deadline exceeded
jetstream_cluster_1_test.go:809: Can not publish message, unexpected error: nats: timeout
--- FAIL: TestJetStreamParallelConsumerCreation (20.74s)
```
If you change the variable to `numParallel = 1` (https://github.com/nats-io/nats-server/compare/main...omnilight:nats-server:multiple_parallel_consumers_problem#diff-c7ea4877e0bb3c57935948893945e9daa480c0b1e5988c57533c3c7086baa914R763), the test will pass.
If you run `CreateStream` sequentially, the test will also pass.
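A stand-alone sketch of the racing creation, equivalent in spirit to the linked test (goroutine count and stream config are illustrative; `Replicas: 3` assumes a running cluster):
```go
package main

import (
	"fmt"
	"sync"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			js, _ := nc.JetStream()
			// All goroutines race to create the same stream.
			_, err := js.AddStream(&nats.StreamConfig{
				Name:     "TEST",
				Subjects: []string{"foo"},
				Replicas: 3,
			})
			if err != nil {
				fmt.Printf("goroutine %d: %v\n", id, err) // reported: context deadline exceeded
			}
		}(i)
	}
	wg.Wait()
}
```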
#### Expected result:
The CreateStream operation should be idempotent and should be safe to run in parallel.
#### Actual result:
Test fails
|
https://github.com/nats-io/nats-server/issues/3677
|
https://github.com/nats-io/nats-server/pull/3679
|
c4c876129356b785f3883d3a5e8a3b5d9511b979
|
7665787050d63259b2d140b821800a97217a8242
| 2022-12-04T11:11:29Z |
go
| 2022-12-06T00:19:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,657 |
["server/consumer.go", "server/jetstream_test.go"]
|
nil pointer dereference in simple stream usage
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Server:
```
$ docker run nats:2.9.7 -DV
[1] 2022/11/21 21:32:26.630403 [INF] Starting nats-server
[1] 2022/11/21 21:32:26.630476 [INF] Version: 2.9.7
[1] 2022/11/21 21:32:26.630479 [INF] Git: [1e76678]
[1] 2022/11/21 21:32:26.630481 [DBG] Go build: go1.19.3
[1] 2022/11/21 21:32:26.630483 [INF] Name: NB77E5IAHUPZ3QVEWVN5G4CIM6H5TAX6IJOCJO6GRBDEQ73HOZNIC7WN
[1] 2022/11/21 21:32:26.630488 [INF] ID: NB77E5IAHUPZ3QVEWVN5G4CIM6H5TAX6IJOCJO6GRBDEQ73HOZNIC7WN
[1] 2022/11/21 21:32:26.630539 [DBG] Created system account: "$SYS"
[1] 2022/11/21 21:32:26.633964 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2022/11/21 21:32:26.633999 [DBG] Get non local IPs for "0.0.0.0"
[1] 2022/11/21 21:32:26.634317 [DBG] ip=172.17.0.3
[1] 2022/11/21 21:32:26.634383 [INF] Server is ready
[1] 2022/11/21 21:32:26.634627 [DBG] maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined
```
Client:
```
$ nats --version
v0.0.35
```
#### OS/Container environment:
Docker container `nats:2.9.7`
#### Steps or code to reproduce the issue:
Store this as `config.json`
```json
{
"name": "foo",
"subjects": [
"bar"
],
"retention": "limits",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "file",
"discard": "old",
"num_replicas": 1,
"duplicate_window": 120000000000,
"sealed": false,
"deny_delete": false,
"deny_purge": false,
"allow_rollup_hdrs": false,
"allow_direct": false,
"mirror_direct": false
}
```
Run the following:
```bash
# step 1
docker run -d -p 4222:4222 --name nats nats:2.9.7 -js
# step 2
nats stream add --config=config.json
# step 3
nats pub bar baz
# step 4
nats stream view foo
# step 5
docker logs nats
```
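For reference, a rough nats.go equivalent of steps 2-4 (a sketch assuming a local server started with `-js`; the ordered consumer stands in for `nats stream view`):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	js.AddStream(&nats.StreamConfig{Name: "foo", Subjects: []string{"bar"}})
	js.Publish("bar", []byte("baz"))

	sub, err := js.SubscribeSync("bar", nats.OrderedConsumer())
	if err != nil {
		panic(err)
	}
	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(msg.Data)) // expect "baz"; per this issue the server may panic shortly after
}
```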
#### Expected result:
You see `baz` in the stream `foo` and the log show no errors.
⚠️ NOTE: this non-deterministically succeeds/fails; here is an example success that works as expected, same steps:
```
$ docker run -d -p 4222:4222 --name nats nats:2.9.7 -js
8b9cbbb0e65c4d26e4ec42f624f70104ec1bd65049a8ac5b6d5b1d6f6ff6ab59
$ nats stream add --config=config.json
Stream foo was created
Information for Stream foo created 2022-11-21 15:00:13
Subjects: bar
Replicas: 1
Storage: File
Options:
Retention: Limits
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
$ nats pub bar baz
15:00:16 Published 3 bytes to "bar"
$ nats stream view foo
[1] Subject: bar Received: 2022-11-21T15:00:16-07:00
baz
15:00:17 Reached apparent end of data
$ docker logs nats
[1] 2022/11/21 22:00:11.105470 [INF] Starting nats-server
[1] 2022/11/21 22:00:11.105608 [INF] Version: 2.9.7
[1] 2022/11/21 22:00:11.105614 [INF] Git: [1e76678]
[1] 2022/11/21 22:00:11.105615 [INF] Name: NBWM3MUKRUDUXEVQDTW27IHTBWEVBCXDCMF2MC6ONGE3SMGNURP2B67H
[1] 2022/11/21 22:00:11.105620 [INF] Node: 55l7Yt1s
[1] 2022/11/21 22:00:11.105621 [INF] ID: NBWM3MUKRUDUXEVQDTW27IHTBWEVBCXDCMF2MC6ONGE3SMGNURP2B67H
[1] 2022/11/21 22:00:11.106623 [INF] Starting JetStream
[1] 2022/11/21 22:00:11.107777 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2022/11/21 22:00:11.107804 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2022/11/21 22:00:11.107806 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2022/11/21 22:00:11.107807 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2022/11/21 22:00:11.107808 [INF]
[1] 2022/11/21 22:00:11.107809 [INF] https://docs.nats.io/jetstream
[1] 2022/11/21 22:00:11.107810 [INF]
[1] 2022/11/21 22:00:11.107811 [INF] ---------------- JETSTREAM ----------------
[1] 2022/11/21 22:00:11.107814 [INF] Max Memory: 5.83 GB
[1] 2022/11/21 22:00:11.107815 [INF] Max Storage: 24.96 GB
[1] 2022/11/21 22:00:11.107816 [INF] Store Directory: "/tmp/nats/jetstream"
[1] 2022/11/21 22:00:11.107818 [INF] -------------------------------------------
[1] 2022/11/21 22:00:11.108327 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2022/11/21 22:00:11.108632 [INF] Server is ready
```
#### Actual result:
⚠️ NOTE: Again, calling out that this non-deterministically succeeds/fails. Here's an example of it failing:
```
$ docker run -d -p 4222:4222 --name nats nats:2.9.7 -js
c6ee43c70e3f31091c4b7dda17631f62cf81ce68a5b1c23c67cefa96fe62a55f
$ nats stream add --config=config.json
Stream foo was created
Information for Stream foo created 2022-11-21 15:00:34
Subjects: bar
Replicas: 1
Storage: File
Options:
Retention: Limits
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
$ nats pub bar baz
15:00:36 Published 3 bytes to "bar"
$ nats stream view foo
[1] Subject: bar Received: 2022-11-21T15:00:36-07:00
baz
15:00:38 Reached apparent end of data
15:00:38 Disconnected due to: EOF, will attempt reconnect
$ docker logs nats
[1] 2022/11/21 22:00:33.147577 [INF] Starting nats-server
[1] 2022/11/21 22:00:33.147691 [INF] Version: 2.9.7
[1] 2022/11/21 22:00:33.147693 [INF] Git: [1e76678]
[1] 2022/11/21 22:00:33.147695 [INF] Name: NCHLR747DO434WRHDFZIRY3HGYPMJVH57A4LJBH7EJ2YC2FGD3BVNDDF
[1] 2022/11/21 22:00:33.147699 [INF] Node: aCsxkTp9
[1] 2022/11/21 22:00:33.147700 [INF] ID: NCHLR747DO434WRHDFZIRY3HGYPMJVH57A4LJBH7EJ2YC2FGD3BVNDDF
[1] 2022/11/21 22:00:33.148125 [INF] Starting JetStream
[1] 2022/11/21 22:00:33.148451 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2022/11/21 22:00:33.148455 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2022/11/21 22:00:33.148456 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2022/11/21 22:00:33.148457 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2022/11/21 22:00:33.148458 [INF]
[1] 2022/11/21 22:00:33.148459 [INF] https://docs.nats.io/jetstream
[1] 2022/11/21 22:00:33.148460 [INF]
[1] 2022/11/21 22:00:33.148461 [INF] ---------------- JETSTREAM ----------------
[1] 2022/11/21 22:00:33.148463 [INF] Max Memory: 5.83 GB
[1] 2022/11/21 22:00:33.148465 [INF] Max Storage: 24.96 GB
[1] 2022/11/21 22:00:33.148466 [INF] Store Directory: "/tmp/nats/jetstream"
[1] 2022/11/21 22:00:33.148467 [INF] -------------------------------------------
[1] 2022/11/21 22:00:33.149756 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2022/11/21 22:00:33.150165 [INF] Server is ready
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x7b707a]
goroutine 30 [running]:
github.com/nats-io/nats-server/v2/server.(*consumer).suppressDeletion(0xc00036b680)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:3138 +0xba
github.com/nats-io/nats-server/v2/server.(*consumer).processInboundAcks(0xc00036b680, 0xc0000e8e40)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:3118 +0x1d8
created by github.com/nats-io/nats-server/v2/server.(*consumer).setLeader
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:1069 +0xb0c
```
To call out the unexpected pieces:
Step 4 shows this output:
> ⚠️ `Disconnected due to: EOF, will attempt reconnect` is unexpected
```
$ nats stream view foo
[1] Subject: bar Received: 2022-11-21T14:39:03-07:00
baz
14:39:05 Reached apparent end of data
14:39:05 Disconnected due to: EOF, will attempt reconnect
```
Step 5 shows output indicating the server crashed due to a nil pointer dereference.
> ⚠️ `panic: runtime error: invalid memory address or nil pointer dereference` is unexpected
```
$ docker logs nats
[1] 2022/11/21 21:38:52.685141 [INF] Starting nats-server
[1] 2022/11/21 21:38:52.685266 [INF] Version: 2.9.7
[1] 2022/11/21 21:38:52.685276 [INF] Git: [1e76678]
[1] 2022/11/21 21:38:52.685280 [INF] Name: NCODTBRWYKKQNPTQKU6T32Q257VEBF7JOUZOWSRGJQBE77XVDYWBJFML
[1] 2022/11/21 21:38:52.685302 [INF] Node: vlK9yvhk
[1] 2022/11/21 21:38:52.685304 [INF] ID: NCODTBRWYKKQNPTQKU6T32Q257VEBF7JOUZOWSRGJQBE77XVDYWBJFML
[1] 2022/11/21 21:38:52.686761 [INF] Starting JetStream
[1] 2022/11/21 21:38:52.690065 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2022/11/21 21:38:52.690092 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2022/11/21 21:38:52.690095 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2022/11/21 21:38:52.690096 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2022/11/21 21:38:52.690097 [INF]
[1] 2022/11/21 21:38:52.690098 [INF] https://docs.nats.io/jetstream
[1] 2022/11/21 21:38:52.690099 [INF]
[1] 2022/11/21 21:38:52.690100 [INF] ---------------- JETSTREAM ----------------
[1] 2022/11/21 21:38:52.690104 [INF] Max Memory: 5.83 GB
[1] 2022/11/21 21:38:52.690106 [INF] Max Storage: 24.96 GB
[1] 2022/11/21 21:38:52.690108 [INF] Store Directory: "/tmp/nats/jetstream"
[1] 2022/11/21 21:38:52.690109 [INF] -------------------------------------------
[1] 2022/11/21 21:38:52.692415 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2022/11/21 21:38:52.693190 [INF] Server is ready
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x7b707a]
goroutine 82 [running]:
github.com/nats-io/nats-server/v2/server.(*consumer).suppressDeletion(0xc0000af200)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:3138 +0xba
github.com/nats-io/nats-server/v2/server.(*consumer).processInboundAcks(0xc0000af200, 0xc000074f00)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:3118 +0x1d8
created by github.com/nats-io/nats-server/v2/server.(*consumer).setLeader
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:1069 +0xb0c
```
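For context on the crash site: a panic like this typically calls for a nil guard at the top of the method. Below is a minimal, self-contained sketch of that pattern, assuming the `consumer` struct guards a stream pointer (named `mset` here) with a mutex; both field names are assumptions for illustration, not the server's verified internals.
```go
package main

import (
	"fmt"
	"sync"
)

type stream struct{} // stand-in for the server's stream type

type consumer struct {
	mu   sync.Mutex
	mset *stream // nil once the consumer has been cleaned up
}

// suppressDeletion sketches the kind of guard that prevents the
// nil pointer dereference seen in the trace above.
func (o *consumer) suppressDeletion() {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.mset == nil { // consumer already closed; nothing to do
		return
	}
	// ... suppression logic would follow here ...
}

func main() {
	c := &consumer{}     // mset is nil, as after cleanup
	c.suppressDeletion() // returns safely instead of panicking
	fmt.Println("no panic")
}
```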
|
https://github.com/nats-io/nats-server/issues/3657
|
https://github.com/nats-io/nats-server/pull/3658
|
dde34cea54dfce024cee5225979612fdfbda627d
|
b45b439ad50906cef54314586a19f7962c720ddc
| 2022-11-21T21:51:36Z |
go
| 2022-11-21T23:51:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,655 |
["server/accounts.go", "server/jetstream_jwt_test.go"]
|
disabling/re-enabling an account leads to inconsistent jetstream info
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
[nats-server.log](https://github.com/nats-io/nats-server/files/10055820/nats-server.log)
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
[commands.txt](https://github.com/nats-io/nats-server/files/10066410/commands.txt) (updated file)
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.9.7
#### OS/Container environment:
debian 11
#### Steps or code to reproduce the issue:
see attached commands.txt
#### Expected result:
Pushing the JWT could re-enable the account even without re-signing it (this is more of a feature request, but here again restarting the server makes the account valid again, even on 2.9.6, where re-signing was sufficient to get the stream back).
Once JetStream is active on the restored account, it should have the stream ready without restarting the server.
#### Actual result:
A restart is needed, and the intermediate state is probably unsafe (i.e., what if we try to re-create the existing stream ... untested)
|
https://github.com/nats-io/nats-server/issues/3655
|
https://github.com/nats-io/nats-server/pull/3687
|
65d1b622ccbadca2732f872113469445155ebdf0
|
0881e4c37082bb690deaa0ae7a081bfdc3813a00
| 2022-11-21T11:01:42Z |
go
| 2022-12-06T23:16:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,653 |
["server/filestore.go", "server/filestore_test.go"]
|
KV with ttl not expired when due
|
## Defect
We just upgraded to 2.9.7 in our test env.
We have an R3 KV bucket with a 5-second TTL being used to campaign for leadership:
```
nats kv info ELECTIONS
Information for Key-Value Store Bucket ELECTIONS created 2022-11-15T17:24:47+01:00
Configuration:
Bucket Name: ELECTIONS
History Kept: 1
Values Stored: 1
Backing Store Kind: JetStream
Bucket Size: 133 B
Maximum Bucket Size: unlimited
Maximum Value Size: unlimited
JetStream Stream: KV_ELECTIONS
Storage: File
Cluster Information:
Name: fineco_stage
Leader: vxr-dev-nats03
Replica: vxr-dev-nats01, current, seen 0.66s ago
Replica: vxr-dev-nats02, current, seen 0.66s ago
```
Our process can no longer become leader; apparently not all nodes expire the key after 5 seconds.
Querying the bucket with nats-cli sometimes shows:
```
[devmercury@vxr-dev-rundecksvn anagrafica]$ date
Fri Nov 18 15:35:48 CET 2022
[devmercury@vxr-dev-rundecksvn anagrafica]$ nats -s nats://vxr-dev-nats02:4222 kv get ELECTIONS offloading_fe
nats: error: nats: key not found
[devmercury@vxr-dev-rundecksvn anagrafica]$ nats -s nats://vxr-dev-nats02:4222 kv get ELECTIONS offloading_fe
nats: error: nats: key not found
[devmercury@vxr-dev-rundecksvn anagrafica]$ nats -s nats://vxr-dev-nats02:4222 kv get ELECTIONS offloading_fe
nats: error: nats: key not found
[devmercury@vxr-dev-rundecksvn anagrafica]$ nats -s nats://vxr-dev-nats02:4222 kv get ELECTIONS offloading_fe
ELECTIONS > offloading_fe created @ 18 Nov 22 13:57 UTC
```
It is as if the value is still kept on one of the nodes.
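For reference, the campaign pattern in question can be sketched in Go roughly as follows (bucket/key names taken from this report; error handling elided, so treat it as a sketch rather than production code). Once the TTL lapses, the key should disappear on every replica, which is exactly what stops happening here.
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Drain()
	js, _ := nc.JetStream()

	// Mirrors the reported bucket: history 1, 5s TTL, R3 in the cluster.
	kv, _ := js.CreateKeyValue(&nats.KeyValueConfig{
		Bucket:   "ELECTIONS",
		History:  1,
		TTL:      5 * time.Second,
		Replicas: 3,
	})

	// Create succeeds only if the key does not exist, so whoever
	// creates it first holds leadership until the TTL expires.
	if _, err := kv.Create("offloading_fe", []byte("candidate-1")); err != nil {
		fmt.Println("someone else is leader:", err)
		return
	}
	fmt.Println("became leader")
}
```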
```
nats s info KV_ELECTIONS :
Information for Stream KV_ELECTIONS created 2022-11-15 17:24:47
Subjects: $KV.ELECTIONS.>
Replicas: 3
Storage: File
Options:
Retention: Limits
Acknowledgements: true
Discard Policy: New
Duplicate Window: 5s
Direct Get: true
Allows Msg Delete: false
Allows Purge: true
Allows Rollups: true
Limits:
Maximum Messages: unlimited
Maximum Per Subject: 1
Maximum Bytes: unlimited
Maximum Age: 5.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: fineco_stage
Leader: vxr-dev-nats03
Replica: vxr-dev-nats01, current, seen 0.81s ago
Replica: vxr-dev-nats02, current, seen 0.81s ago
State:
Messages: 1
Bytes: 133 B
FirstSeq: 40,565 @ 0001-01-01T00:00:00 UTC
LastSeq: 66,746 @ 2022-11-18T13:57:16 UTC
Deleted Messages: 26,181
Active Consumers: 0
Number of Subjects: 1
```
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
nats-server -DV
[4137868] 2022/11/18 15:54:57.119578 [INF] Starting nats-server
[4137868] 2022/11/18 15:54:57.119649 [INF] Version: 2.9.7
[4137868] 2022/11/18 15:54:57.119665 [INF] Git: [1e76678]
[4137868] 2022/11/18 15:54:57.119675 [DBG] Go build: go1.19.3
[4137868] 2022/11/18 15:54:57.119680 [INF] Name: NC6LMGGJZMNTOFTBXPYG4K6B27O7SXX2FJUW2QUZNGVDJAKUANYKU2PY
[4137868] 2022/11/18 15:54:57.119683 [INF] ID: NC6LMGGJZMNTOFTBXPYG4K6B27O7SXX2FJUW2QUZNGVDJAKUANYKU2PY
[4137868] 2022/11/18 15:54:57.119706 [DBG] Created system account: "$SYS"
[4137868] 2022/11/18 15:54:57.121027 [FTL] Error listening on port: 0.0.0.0:4222, "listen tcp 0.0.0.0:4222: bind: address already in use"
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
I'm keeping the KV bucket with this issue unused; let me know if you want access to it
#### Versions of `nats-server` and affected client libraries used:
2.9.7
This has been opened after a brief talk on slack: https://natsio.slack.com/archives/CM3T6T7JQ/p1668782422524019
|
https://github.com/nats-io/nats-server/issues/3653
|
https://github.com/nats-io/nats-server/pull/3711
|
5f9a69e4f9ebd4b7684775ea74e69cf7c5009855
|
294ea58c3c5336f287905e82afc9d1e78a2bff63
| 2022-11-18T14:57:41Z |
go
| 2022-12-14T15:49:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,652 |
["server/filestore.go", "server/filestore_test.go"]
|
Messages cannot be stored on stream after upgrade to 2.9.7
|
## Defect
NATS could not store messages on the stream after upgrading the NATS cluster to 2.9.7:
```[ERR] JetStream failed to store a msg on stream '$G > sap': error opening msg block file [""]: open : no such file or directory```
Before the upgrade, the 3 cluster nodes reported different stream states for messages on the stream.
After the upgrade the stream info looks like this:
```
Information for Stream sap created 2022-11-07 13:33:01
Subjects: kyma.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: New
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 900 MiB
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: eventing-nats
Leader: eventing-nats-2
Replica: eventing-nats-0, current, seen 0.00s ago
Replica: eventing-nats-1, current, seen 0.01s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 1,303,628,312 @ 0001-01-01T00:00:00 UTC
LastSeq: 1,303,628,311 @ 2022-11-16T11:03:13 UTC
Active Consumers: 2
Number of Subjects: 2
```
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included nats.go version
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats.go` and the `nats-server` if one was involved:
previous version of nats cluster was: 2.9.6
3 nats nodes in a cluster
1 stream (replication factor 3)
2 consumers (replication factor 3)
#### OS/Container environment:
previous image [2.9.6-alpine3.16](https://hub.docker.com/layers/library/nats/2.9.6-alpine3.16/images/sha256-abb4111e51c329aba5d98f59258cbcbe33991478a70ed25a4c1b5e53a05ab284?context=explore)
upgrade image [2.9.7-alpine3.16](https://hub.docker.com/layers/library/nats/2.9.7-alpine3.16/images/sha256-d4698031a0afeb1ed37b240a9632b22a2d0fcea5fd48af3c8b28e1eba3348033?context=explore)
#### Steps or code to reproduce the issue:
it happened with the upgrade.
#### Expected result:
everything works as expected
#### Actual result:
```[ERR] JetStream failed to store a msg on stream '$G > sap': error opening msg block file [""]: open : no such file or directory```
|
https://github.com/nats-io/nats-server/issues/3652
|
https://github.com/nats-io/nats-server/pull/3717
|
d937e92d5591ab8f50c2db4aab58d7adadc49d19
|
860c0329394cf30be60ee883897fd7d20d3e3e57
| 2022-11-18T10:28:47Z |
go
| 2022-12-15T15:26:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,645 |
["server/client.go", "server/jetstream_test.go"]
|
Duplicate detection triggers on `*Nats-Msg-Id` in the header, not on an exact `Nats-Msg-Id` match
|
## Defect
I am testing on Windows Server 2019 with the following version of the nats-server:
```
>nats-server -DV
[2288] 2022/11/17 15:18:26.916148 [[32mINF[0m] Starting nats-server
[2288] 2022/11/17 15:18:26.991985 [[32mINF[0m] Version: 2.9.3
[2288] 2022/11/17 15:18:26.992503 [[32mINF[0m] Git: [25e82d7]
[2288] 2022/11/17 15:18:26.992545 [[36mDBG[0m] Go build: go1.19.2
>nats --version
0.0.34
```
Using JetStream named `ORDERS` with retention: `WorkQueue`.
#### Steps or code to reproduce the issue:
Send three messages: the first with a `Nats-Msg-Id` header, the second with `Orig-Nats-Msg-Id`, and the third with `Original-Nats-Msg-Id`:
```
nats pub ORDERS.test "test" -H "Nats-Msg-Id:1"
nats pub ORDERS.test "test" -H "Orig-Nats-Msg-Id:1"
nats pub ORDERS.test "test" -H "Original-Nats-Msg-Id:1"
```
#### Expected result:
All three must be stored as separate messages.
#### Actual result:
But actually there is only one message in that subject:
```
>nats stream subjects
? Select a Stream ORDERS
ORDERS.test: 1 : 0 : 0
```
Even more, if we publish to different subjects, the stream keeps only the first message:
```
nats pub ORDERS.test1 "test" -H "Nats-Msg-Id:1"
nats pub ORDERS.test2 "test" -H "Orig-Nats-Msg-Id:1"
nats pub ORDERS.test3 "test" -H "Original-Nats-Msg-Id:1"
>nats stream subjects
? Select a Stream ORDERS
ORDERS.test1: 1 : 0 : 0
```
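The expected behavior is an exact key lookup on the `Nats-Msg-Id` header, so suffix matches like `Orig-Nats-Msg-Id` would never count as a duplicate ID. A minimal sketch of that lookup (a hypothetical helper for illustration, not the server's actual code):
```go
package main

import "fmt"

// msgID returns the de-duplication ID only when the header key is
// exactly "Nats-Msg-Id"; keys that merely end in "Nats-Msg-Id"
// (e.g. "Orig-Nats-Msg-Id") must not match.
func msgID(hdr map[string][]string) string {
	if v, ok := hdr["Nats-Msg-Id"]; ok && len(v) > 0 {
		return v[0]
	}
	return ""
}

func main() {
	fmt.Println(msgID(map[string][]string{"Nats-Msg-Id": {"1"}}))      // "1"
	fmt.Println(msgID(map[string][]string{"Orig-Nats-Msg-Id": {"1"}})) // ""
}
```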
|
https://github.com/nats-io/nats-server/issues/3645
|
https://github.com/nats-io/nats-server/pull/3649
|
d6f243f39bac2e0eaa577cd5e2bfa2d848a86389
|
97742f2fdc8c88122584f11f16f57b79b85aadc6
| 2022-11-17T12:39:27Z |
go
| 2022-11-17T15:27:19Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,639 |
["server/consumer.go", "server/jetstream_test.go", "server/stream.go"]
|
WorkQueuePolicy incorrectly allows consumers with overlapping subjects
|
Reported by @Kilio22
### Discussed in https://github.com/nats-io/nats-server/discussions/3637
> Originally posted by **Kilio22**, November 16, 2022
Hi,
I am new to NATS, and I am playing with JetStream and the different retention policies, especially the WorkQueuePolicy.
I noticed in the following example https://natsbyexample.com/examples/jetstream/workqueue-stream/go that you cannot create two consumers with overlapping filter subjects.
However, I managed to do the following using nats-cli and a local nats-server started in JetStream mode:
* create a stream with `toto.>` as subject
* create a first pull consumer with `toto.*.tata` as subject
* create a second pull consumer with `toto.>` as subject
Here, to me the subjects are overlapping, as both can match a message published on a subject such as `toto.tutu.tata`.
I was thinking that maybe a message published on `toto.tutu.tata` would be received only by the first consumer, but no, it is received by both consumers.
Could you explain why please?
Thanks!
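For illustration, here is a minimal pairwise overlap check under standard NATS wildcard rules (`*` matches exactly one token, `>` matches one or more trailing tokens). This is a sketch of the semantics only, not the server's implementation:
```go
package main

import (
	"fmt"
	"strings"
)

// overlap reports whether two subject filters can both match some
// common subject under NATS wildcard rules.
func overlap(a, b string) bool {
	ta, tb := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; ; i++ {
		if i == len(ta) || i == len(tb) {
			return len(ta) == len(tb)
		}
		if ta[i] == ">" || tb[i] == ">" {
			return true
		}
		if ta[i] != tb[i] && ta[i] != "*" && tb[i] != "*" {
			return false
		}
	}
}

func main() {
	fmt.Println(overlap("toto.*.tata", "toto.>")) // true: both match toto.tutu.tata
	fmt.Println(overlap("toto.a.b", "toto.c.*"))  // false: second token differs
}
```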
|
https://github.com/nats-io/nats-server/issues/3639
|
https://github.com/nats-io/nats-server/pull/3640
|
aeba37685b36e57bd9b43eb4d7744d9eede5277d
|
74a16b0097e53e16f11abc1c27b65a0ce80cd973
| 2022-11-17T00:06:04Z |
go
| 2022-11-17T00:22:35Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,636 |
["server/jetstream_cluster.go"]
|
Consumer Replica R3 -> R2 while Server is offline results in hung health probe
|
Issue reported by a customer
## Version
```
Version: 2.10.0-beta.1
Git: [deadc05c]
Go build: go1.19.2
```
## Summary
3-pod cluster. `Pod 0`, `Pod 1`, and `Pod 2`. `Pod 1` was taken offline due to issues, and `Stream S` with `Consumer C` were changed from R3 to R2 to remove `Pod 1` as a peer
## Issue 1
`Pod 1` came back online. It was stuck starting up because the `/healthz` endpoint reported that it was still trying to catch-up `Consumer C` even though that was an R2 consumer that was no longer assigned to `Pod 1`
## Fix Attempt 1
`Stream S` was changed from R2 back to R3
## Issue 2
Even though `Stream S` changed from R2 to R3 and `Consumer C` had num_replicas=0 (I think this means inherit the replica count from the stream?), `Consumer C` still reflected that it was R2. `Pod 1` was still stuck trying to catch up.
## Fix Attempt 2
`Consumer C` was deleted, and `Pod 1` finally started up
|
https://github.com/nats-io/nats-server/issues/3636
|
https://github.com/nats-io/nats-server/pull/3818
|
bcd53ba53a1cb2bb928a8ab6221bdaf83dcc9156
|
b238323b0c5e274643db54d52f7f498521f361c4
| 2022-11-16T17:08:15Z |
go
| 2023-01-26T16:26:07Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,630 |
["server/consumer.go", "server/jetstream_cluster_3_test.go", "server/jetstream_test.go"]
|
Inactive threshold for pull consumer treats message signals (ack/nak etc.) as interest.
|
### Proposed Change:
For pull consumers, consider message signals (+ACK/-NAK/+WPI/+TERM/+NXT) as activity/interest and reset the inactive threshold timer.
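A self-contained sketch of the intended effect (names invented for illustration; the server's actual timer handling may differ):
```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical sketch: any message signal (ack/nak/etc.) resets the
// inactivity timer so an actively used pull consumer is not deleted.
type consumer struct {
	inactiveThreshold time.Duration
	deleteTimer       *time.Timer
}

func (o *consumer) onMessageSignal() {
	if o.deleteTimer != nil {
		o.deleteTimer.Reset(o.inactiveThreshold)
	}
}

func main() {
	c := &consumer{
		inactiveThreshold: time.Hour,
		deleteTimer:       time.NewTimer(time.Hour),
	}
	c.onMessageSignal() // e.g. on +ACK: push the deadline out again
	fmt.Println("inactivity deadline reset")
}
```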
|
https://github.com/nats-io/nats-server/issues/3630
|
https://github.com/nats-io/nats-server/pull/3635
|
e44612ffcae899128f1502858b596230c5adc375
|
46032f25c7a9136bf67acc152d8e4c7030db9c2c
| 2022-11-15T18:06:56Z |
go
| 2022-11-16T20:12:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,626 |
["server/jetstream_api.go", "server/jetstream_cluster_3_test.go"]
|
Strange behaviour using KeyValue store
|
Hi,
I realise there won't be much information here initially, but I'll put down what I have until I can get a reproducible test case.
nats-server -DV
```
# nats-server -DV
[21] 2022/11/14 23:21:59.234003 [INF] Starting nats-server
[21] 2022/11/14 23:21:59.234045 [INF] Version: 2.9.6
[21] 2022/11/14 23:21:59.234048 [INF] Git: [289a9e1]
[21] 2022/11/14 23:21:59.234054 [DBG] Go build: go1.19.3
```
I have a 5 server cluster running via docker, server configs are as follows:
```
port: 4222
http_port: 8222
server_name: nats-x
client_advertise: 10.x.xx.20
cluster {
name: "dnCluster"
port: 6222
cluster_advertise: 10.x.xx.20:6222
routes = [
nats-route://10.x.xx.20:6222
nats-route://10.x.xx.20:6222
nats-route://10.x.xx.20:6222
nats-route://10.x.xx.20:6222
]
}
accounts {
$SYS { users = [ { user: "admin", pass: "xxxx" } ] }
}
jetstream {
store_dir: /data
max_memory_store: 268435456
}
```
The KeyValue stores are created using the Go client; PHP clients then use them via gRPC.
The TTL varies but is generally around 4 hours:
```
s.JetStream.CreateKeyValue(&nats.KeyValueConfig{
Bucket: bucket,
Description: "Session Storage",
History: 1,
TTL: time.Duration(ttl),
Storage: nats.FileStorage,
Replicas: 3,
})
```
The NATS server is generally doing around 5,000 msg/sec; however, this issue seems to happen at any time.
Issue 1:
A random server starts displaying this in the logs even though all servers are up; once this happens, these KeyValue stores are non-functioning. Deleting them via the CLI has no effect (if it even lets you delete them). If you manage to delete them, you can no longer create a new KeyValue store with the same name.
```
[1] 2022/11/14 06:23:24.196626 [WRN] JetStream cluster stream '$G > KV_SESSION_e0b58a5b-c238-45d3-8ada-837d9bfdeea9' has NO quorum, stalled
[1] 2022/11/14 06:23:27.610976 [WRN] JetStream cluster stream '$G > KV_SESSION_e64684f5-591a-44f3-9478-ed73cfcb233f' has NO quorum, stalled
[1] 2022/11/14 06:23:28.135742 [WRN] JetStream cluster stream '$G > KV_SESSION_02e19464-2a6c-465c-b250-36de31fd643e' has NO quorum, stalled
```
Issue 2:
random segfault:
```
[1] 2022/11/14 09:16:43.271775 [INF] Self is new JetStream cluster metadata leader
2022-11-14T09:41:45.350514792Z
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x3b2ca8]
goroutine 20 [running]:
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamInfoRequest(0x0?, 0x0?, 0x40000b0000, 0x0?, {0x40012713b0, 0x43}, {0x40027c6c90, 0x11}, {0x4002524400, 0x7e, ...})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:1786 +0x958
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch(0x40000f8000, 0x40000de480, 0x40000b0000, 0x4000112d80, {0x40012713b0, 0x43}, {0x40027c6c90, 0x11}, {0x4002524400, 0x7e, ...})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:764 +0x1cc
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0x40000b0000, 0x0, 0x40000de480, 0x40027c6c78?, {0x4001271180, 0x43, 0x50}, {0x40027c6c78, 0x11, 0x18}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3194 +0xa54
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0x40000b0000, 0x4000112d80, 0x40015ae810, {0x4002524400, 0x80, 0x80}, {0x0, 0x0, 0x5d7200?}, {0x4001271180, ...}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4235 +0x74c
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0x40000b0000, {0x4002524400, 0x80, 0x80})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3660 +0x9d4
github.com/nats-io/nats-server/v2/server.(*Server).internalSendLoop(0x400019e000, 0x0?)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/events.go:428 +0xd60
created by github.com/nats-io/nats-server/v2/server.(*Server).setSystemAccount
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1278 +0x308
[1] 2022/11/14 09:17:14.865616 [INF] 10.x.xx.20:6222(nats-e) - rid:6 - Router connection closed: Read Error
node-e:
[1] 2022/11/14 09:16:43.272941 [INF] JetStream cluster new metadata leader: nats-b/dnCluster
2022-11-14T09:17:14.863325825Z
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x3b2ca8]
goroutine 13 [running]:
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamInfoRequest(0x0?, 0x0?, 0x4000197980, 0x400018e6f0?, {0x4000818aa0, 0x43}, {0x4000b922b8, 0x11}, {0x4000b4ad80, 0x7e, ...})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:1786 +0x958
github.com/nats-io/nats-server/v2/server.(*Server).processJSAPIRoutedRequests(0x400011e000)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:796 +0x27c
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:3077 +0xbc
```
The issue still happens if I stop all nodes, remove the data, and restart them all (as if booting a new cluster).
These two things may be related, and unfortunately I don't have the docker logs from the previous time it happened to check for segfaults.
In the meantime I'm trying to create a reproducible environment locally.
|
https://github.com/nats-io/nats-server/issues/3626
|
https://github.com/nats-io/nats-server/pull/3631
|
c94a260c97b66394c9ea8cd1e990c9de536176d8
|
0f79b913ec516a8ce07488fc3a9407af5ae41fea
| 2022-11-14T23:24:09Z |
go
| 2022-11-15T19:24:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,623 |
["server/accounts.go", "server/jetstream_jwt_test.go"]
|
NATS deleted account leaks subscriptions
|
## Defect
When adding a JetStream-enabled account, 6 (internal?) subscriptions are created; when deleting the account, those subscriptions remain until the server is restarted.
The steps below reproduce it on a single server with a single account, but we found it on a cluster with many accounts, where the number of subscriptions, the message rates (in and out), and the memory used by the 3 server processes kept growing after each account creation and never went down when the accounts were deleted.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
[nats-server.log](https://github.com/nats-io/nats-server/files/10002937/nats-server.log)
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
[commands.txt](https://github.com/nats-io/nats-server/files/10002917/commands.txt)
#### Versions of `nats-server` and affected client libraries used:
nats-server: 2.9.6
#### OS/Container environment:
debian 11
#### Steps or code to reproduce the issue:
see attached commands.txt
#### Expected result:
The number of subscriptions should drop back to the initial value (from 59 to 53 in the example case) after deleting the account.
#### Actual result:
The number of subscriptions only drops after restarting the server.
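A small sketch of how the subscription count can be polled before and after account deletion, using the standard `/subsz` monitoring endpoint (the URL below assumes the default monitor port 8222):
```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Polls the server's /subsz monitoring endpoint and prints the total
// subscription count.
func main() {
	resp, err := http.Get("http://127.0.0.1:8222/subsz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var subsz struct {
		NumSubs int `json:"num_subscriptions"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&subsz); err != nil {
		panic(err)
	}
	fmt.Println("subscriptions:", subsz.NumSubs)
}
```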
|
https://github.com/nats-io/nats-server/issues/3623
|
https://github.com/nats-io/nats-server/pull/3627
|
3c6fa8284b01be74d689a4afbb78f0a11af4ae00
|
b3b7772b87005c05c135d17fb68d1b484cee74dc
| 2022-11-14T12:37:10Z |
go
| 2022-11-15T00:12:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,612 |
["server/consumer.go", "server/jetstream_cluster_3_test.go"]
|
"Dangling" messages on stream
|
nats-server: synadia/nats-server:nightly-20221107 (Git: [4e8d1ae])
nats: 0.0.34
3 node cluster, AWS EC2 Ubuntu
We have an interest-based stream with 2 consumers (push subscribe, auto-ack). We see no issues in our consuming services' logs, but a lot of messages in the stream are seemingly not consumed, though the vast majority of messages do get removed from the stream. The stream collects messages published via NATS core publish by a number of different services on different hosts. Issuing `nats stream view logs` seems to clear most messages in the stream, but new ones continue to accumulate immediately.
1. `nats stream info logs; nats consumer info logs c1; nats consumer info logs c2;`
```
Information for Stream logs created 2022-07-13 20:07:45
Subjects: logs.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 1.0 GiB
Maximum Age: 5d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.07s ago
Replica: n2, current, seen 0.07s ago
State:
Messages: 23,229
Bytes: 20 MiB
FirstSeq: 134,390,673 @ 2022-11-07T21:52:23 UTC
LastSeq: 134,849,300 @ 2022-11-08T08:23:30 UTC
Deleted Messages: 435,399
Active Consumers: 2
Number of Subjects: 20
Information for Consumer logs > c1 created 2022-11-04T14:04:16Z
Configuration:
Durable Name: c1
Delivery Subject: c1.logs
Deliver Policy: All
Deliver Queue Group: c1
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.17s ago
Replica: n2, current, seen 0.17s ago
State:
Last Delivered Message: Consumer sequence: 134,827,928 Stream sequence: 134,849,303 Last delivery: 0.17s ago
Acknowledgment floor: Consumer sequence: 134,827,928 Stream sequence: 134,849,303 Last Ack: 0.17s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c1
Information for Consumer logs > c2 created 2022-11-04T15:36:04Z
Configuration:
Durable Name: c2
Delivery Subject: c2.logs
Deliver Policy: All
Deliver Queue Group: c2
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n2
Replica: n3, current, seen 0.50s ago
Replica: n1, current, seen 0.50s ago
State:
Last Delivered Message: Consumer sequence: 134,835,616 Stream sequence: 134,849,303 Last delivery: 0.50s ago
Acknowledgment floor: Consumer sequence: 134,835,616 Stream sequence: 134,849,303 Last Ack: 0.50s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c2
```
2. `nats stream view logs`
```
...
[messages removed for brevity]
...
? Next Page? No
```
3. `nats stream info logs; nats consumer info logs c1; nats consumer info logs c2;`
```
Information for Stream logs created 2022-07-13 20:07:45
Subjects: logs.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 1.0 GiB
Maximum Age: 5d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 1.30s ago
Replica: n2, current, seen 1.31s ago
State:
Messages: 35
Bytes: 33 KiB
FirstSeq: 134,848,747 @ 2022-11-08T08:22:17 UTC
LastSeq: 134,849,345 @ 2022-11-08T08:23:42 UTC
Deleted Messages: 564
Active Consumers: 2
Number of Subjects: 5
Information for Consumer logs > c1 created 2022-11-04T14:04:16Z
Configuration:
Durable Name: c1
Delivery Subject: c1.logs
Deliver Policy: All
Deliver Queue Group: c1
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.00s ago
Replica: n2, current, seen 0.00s ago
State:
Last Delivered Message: Consumer sequence: 134,827,973 Stream sequence: 134,849,348 Last delivery: 0.04s ago
Acknowledgment floor: Consumer sequence: 134,827,973 Stream sequence: 134,849,348 Last Ack: 0.02s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c1
Information for Consumer logs > c2 created 2022-11-04T15:36:04Z
Configuration:
Durable Name: c2
Delivery Subject: c2.logs
Deliver Policy: All
Deliver Queue Group: c2
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n2
Replica: n3, current, seen 0.27s ago
Replica: n1, current, seen 0.27s ago
State:
Last Delivered Message: Consumer sequence: 134,835,662 Stream sequence: 134,849,349 Last delivery: 0.27s ago
Acknowledgment floor: Consumer sequence: 134,835,662 Stream sequence: 134,849,349 Last Ack: 0.27s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c2
```
4. `nats stream purge logs; nats stream info logs; nats consumer info logs c1; nats consumer info logs c2;`
```
? Really purge Stream logs Yes
Information for Stream logs created 2022-07-13 20:07:45
Subjects: logs.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 1.0 GiB
Maximum Age: 5d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.00s ago
Replica: n2, outdated, seen 0.13s ago, 1 operation behind
State:
Messages: 0
Bytes: 0 B
FirstSeq: 134,857,694 @ 0001-01-01T00:00:00 UTC
LastSeq: 134,857,693 @ 2022-11-08T08:39:56 UTC
Active Consumers: 2
Information for Stream logs created 2022-07-13 20:07:45
Subjects: logs.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 1.0 GiB
Maximum Age: 5d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.27s ago
Replica: n2, current, seen 0.27s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 134,857,695
LastSeq: 134,857,694 @ 2022-11-08T08:39:56 UTC
Active Consumers: 2
Information for Consumer logs > c1 created 2022-11-04T14:04:16Z
Configuration:
Durable Name: c1
Delivery Subject: c1.logs
Deliver Policy: All
Deliver Queue Group: c1
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.09s ago
Replica: n2, current, seen 0.09s ago
State:
Last Delivered Message: Consumer sequence: 134,836,321 Stream sequence: 134,857,696 Last delivery: 0.10s ago
Acknowledgment floor: Consumer sequence: 134,836,321 Stream sequence: 134,857,696 Last Ack: 0.10s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c1
Information for Consumer logs > c2 created 2022-11-04T15:36:04Z
Configuration:
Durable Name: c2
Delivery Subject: c2.logs
Deliver Policy: All
Deliver Queue Group: c2
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n2
Replica: n3, current, seen 0.41s ago
Replica: n1, current, seen 0.40s ago
State:
Last Delivered Message: Consumer sequence: 134,844,004 Stream sequence: 134,857,696 Last delivery: 0.41s ago
Acknowledgment floor: Consumer sequence: 134,844,004 Stream sequence: 134,857,696 Last Ack: 0.41s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c2
```
5. `nats stream info logs; nats consumer info logs c1; nats consumer info logs c2;`
```
Information for Stream logs created 2022-07-13 20:07:45
Subjects: logs.>
Replicas: 3
Storage: File
Options:
Retention: Interest
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: 1.0 GiB
Maximum Age: 5d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.29s ago
Replica: n2, current, seen 0.29s ago
State:
Messages: 44
Bytes: 40 KiB
FirstSeq: 134,857,701 @ 2022-11-08T08:39:59 UTC
LastSeq: 134,858,552 @ 2022-11-08T08:41:21 UTC
Deleted Messages: 808
Active Consumers: 2
Number of Subjects: 6
Information for Consumer logs > c1 created 2022-11-04T14:04:16Z
Configuration:
Durable Name: c1
Delivery Subject: c1.logs
Deliver Policy: All
Deliver Queue Group: c1
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n1
Replica: n3, current, seen 0.61s ago
Replica: n2, current, seen 0.61s ago
State:
Last Delivered Message: Consumer sequence: 134,837,177 Stream sequence: 134,858,552 Last delivery: 0.62s ago
Acknowledgment floor: Consumer sequence: 134,837,177 Stream sequence: 134,858,552 Last Ack: 0.61s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c1
Information for Consumer logs > c2 created 2022-11-04T15:36:04Z
Configuration:
Durable Name: c2
Delivery Subject: c2.logs
Deliver Policy: All
Deliver Queue Group: c2
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Flow Control: false
Cluster Information:
Name: nats-cluster
Leader: n2
Replica: n3, current, seen 0.16s ago
Replica: n1, current, seen 0.16s ago
State:
Last Delivered Message: Consumer sequence: 134,844,859 Stream sequence: 134,858,554 Last delivery: 0.16s ago
Acknowledgment floor: Consumer sequence: 134,844,859 Stream sequence: 134,858,554 Last Ack: 0.16s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Active Interest: Active using Queue Group c2
```
From then on, the messages and deleted-messages counts on the stream continue to increase; the deleted-messages count increases faster than the messages count.
|
https://github.com/nats-io/nats-server/issues/3612
|
https://github.com/nats-io/nats-server/pull/3620
|
39185c11a69e2ccfa2284bad644ccd5c30f7d419
|
b76b6a1f68720da76399387fc09110496bd3d753
| 2022-11-08T08:48:00Z |
go
| 2022-11-14T16:40:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,603 |
["server/consumer.go", "server/jetstream_cluster_3_test.go"]
|
Pull consumer - notify the client about stream/consumer not being available
|
## Feature Request
Currently, a `$JS.API.CONSUMER.MSG.NEXT.<stream>.<consumer>` API call does not notify the client if a given stream or consumer does not exist. Because of this, clients have to rely on heartbeats to detect any issues, but that comes with a delay and does not give any context on the error.
#### Proposed Change:
Server should send a status message on the inbox, with details on why a pull has failed (stream/consumer does not exist) and terminate the pull request immediately.
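As a sketch of what the proposed behavior would look like from a client's perspective (raw API usage; the exact status code and description the server would send are the proposal, not an existing guarantee):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Drain()

	// Raw next-message request against a consumer that may not exist.
	msg, err := nc.Request("$JS.API.CONSUMER.MSG.NEXT.ORDERS.missing",
		[]byte(`{"batch":1}`), 2*time.Second)
	if err != nil {
		// Pre-change behavior: the request simply times out.
		fmt.Println("no reply:", err)
		return
	}
	// Proposed behavior: an immediate status reply explaining the failure.
	fmt.Println("status:", msg.Header.Get("Status"), msg.Header.Get("Description"))
}
```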
|
https://github.com/nats-io/nats-server/issues/3603
|
https://github.com/nats-io/nats-server/pull/3605
|
edf0fe31b023bb9e1b64635558bffdf40441e29f
|
c9fd7768893509f54959e37bcc8f0629fc20ab5f
| 2022-11-03T15:17:30Z |
go
| 2022-11-03T20:05:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,590 |
["server/sublist.go", "server/sublist_test.go"]
|
Cache hit rate greater than 1
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Version: 2.9.3
#### OS/Container environment:
nats:2.9.3-alpine3.16
#### Steps or code to reproduce the issue:
Go to `http://{{ ip }}:8222/subsz` and check the `cache_hit_rate` value.
#### Expected result:
cache_hit_rate <= 1.0
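For reference, a cache hit rate is conventionally computed as hits / (hits + misses), which is bounded by 1.0 by construction; a minimal sketch:
```go
package main

import (
	"fmt"
	"sync/atomic"
)

var hits, misses atomic.Uint64

// hitRate is bounded by 1.0 as long as both counters only grow.
func hitRate() float64 {
	h, m := hits.Load(), misses.Load()
	if h+m == 0 {
		return 0
	}
	return float64(h) / float64(h+m)
}

func main() {
	hits.Add(3)
	misses.Add(1)
	fmt.Println(hitRate()) // 0.75, never above 1.0
}
```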
#### Actual result:
cache_hit_rate > 1.0

|
https://github.com/nats-io/nats-server/issues/3590
|
https://github.com/nats-io/nats-server/pull/3591
|
3f0f3d74164c1b9ec280dea551cb93274e3039a3
|
a0719ec7a7766f2efda75cab017cbd80bdffb4ac
| 2022-10-28T06:44:19Z |
go
| 2022-10-28T15:29:42Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,581 |
["server/reload.go", "test/tls_test.go"]
|
allow_non_tls is lost after server reload
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.9.3
#### OS/Container environment:
ubuntu 20.04
#### Steps or code to reproduce the issue:
Have the server running with a TLS configuration and the option:
`allow_non_tls: true`
Check the server banner: `nc servername 4222`
It includes `"tls_available":true` but NOT `"tls_required":true`.
Reload the server configuration, then check the banner again.
It now includes `"tls_available":true` AND `"tls_required":true`.
#### Expected result:
It includes `"tls_available":true` but NOT `"tls_required":true`.
#### Actual result:
It now includes `"tls_available":true` AND `"tls_required":true`.
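For completeness, a minimal config sketch that should reproduce this (certificate paths are placeholders; `allow_non_tls` sits at the top level alongside the `tls` block):
```
port: 4222
tls {
  cert_file: "./server-cert.pem"
  key_file: "./server-key.pem"
}
allow_non_tls: true
```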
|
https://github.com/nats-io/nats-server/issues/3581
|
https://github.com/nats-io/nats-server/pull/3583
|
8cc87c988fe8d4a7a0b1a88779bff894261dccc5
|
6a2b59bf91c0f64396d7649e2d958bd70156e46d
| 2022-10-27T14:04:17Z |
go
| 2022-10-27T16:03:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,559 |
["server/jetstream_cluster_3_test.go", "server/stream.go"]
|
opt_start_time causing previously purged messages to be redelivered from source on server stop/start
|
## Defect
When adding a source to a stream and using the opt_start_time option, all messages are redelivered to the stream from the source when the server is stopped and started.
This doesn't happen if opt_start_time is not included in the config.
#### Versions of `nats-server` and affected client libraries used:
2.9.3
#### OS/Container environment:
mac montery
#### Steps or code to reproduce the issue:
Launch 3 servers (hub, a-leaf & b-leaf) using the same config as this ticket: https://github.com/nats-io/nats-server/issues/2920
Create queue stream on a-leaf:
`nats --server nats://localhost:4111 str add --config ./queue.json`
Create test stream on b-leaf:
`nats --server nats://localhost:2111 str add --config ./test.json`
Add entries to test stream:
`nats --server nats://localhost:2111 pub test hello --count 10`
Add test as source to queue:
```nats --server nats://localhost:4111 pub '$JS.a-leaf.API.STREAM.UPDATE.queue' '{ "name": "queue", "subjects": [ "queue" ], "retention": "limits", "max_consumers": -1, "max_msgs": -1, "max_bytes": -1, "max_age": 0, "max_msgs_per_subject": -1, "max_msg_size": -1, "discard": "old", "storage": "file", "num_replicas": 1, "duplicate_window": 120000000000, "sealed": false, "deny_delete": false, "deny_purge": false, "allow_rollup_hdrs": false, "sources": [ { "name": "test", "opt_start_time": "2022-10-14T12:32:06.122Z", "external": { "api": "$JS.b-leaf.API", "deliver": "" } } ] }'```
Verify entries are in queue:
`nats --server nats://localhost:4111 str report`
Purge queue
`nats --server nats://localhost:4111 str purge`
Verify no entries are in queue:
`nats --server nats://localhost:4111 str report`
Stop and start server that has queue (a-leaf)
The stream report will show the messages redelivered to the stream:
`nats --server nats://localhost:4111 str report`
#### Expected result:
Report shows no messages in queue after stop/start.
#### Actual result:
Messages are being redelivered from source stream.
|
https://github.com/nats-io/nats-server/issues/3559
|
https://github.com/nats-io/nats-server/pull/3606
|
c9fd7768893509f54959e37bcc8f0629fc20ab5f
|
3e467fc114b8518ef424cfed4f668bb4763cbb59
| 2022-10-14T18:43:00Z |
go
| 2022-11-03T22:25:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,550 |
["server/jetstream.go", "server/jetstream_cluster_3_test.go", "server/server.go"]
|
Named consumer recreated with generated name after Consumer Leader goes down.
|
## Defect
When the server with the Consumer Leader is taken down, the server recreates the consumer on another replica, but does so with a generated name instead of the given name. All other settings are remembered.
This causes subsequent pull requests to fail because the original consumer name is used in the request.
#### Versions of `nats-server` and affected client libraries used:
`v2.9.3`
#### OS/Container environment:
Windows
#### Steps or code to reproduce the issue:
- Start a 5 node cluster
- Create a stream with file storage and replication 3
- Publish 100 messages
- Consuming a message using pull with batch size of 1
- a named ephemeral pull consumer
- inactive threshold of 1 hour
- default [consumer] replicas (so 3)
- do `nats c info` or look at the server output to determine the Consumer Leader
- kill the consumer leader server
#### Expected result:
The consumer is recreated on one of the replicas with the user given name.
#### Actual result:
The consumer is recreated on one of the replicas with a generated name.
### Notes
#### Consumer Info after creation
```
? Select a Stream ConStream
? Select a Consumer consumer1665667190256
Information for Consumer ConStream > consumer1665667190256 created 2022-10-13T09:19:50-04:00
Configuration:
Pull Mode: true
Filter Subject: ConSubject
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Max Waiting Pulls: 512
Inactive Threshold: 1h0m0s
Cluster Information:
Name: cluster
Leader: server6
State:
Last Delivered Message: Consumer sequence: 1 Stream sequence: 1 Last delivery: 6.29s ago
Acknowledgment floor: Consumer sequence: 1 Stream sequence: 1 Last Ack: 6.28s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 99
Waiting Pulls: 0 of maximum 512
```
#### Consumer after server6 is killed:
```
C:\nats>nats c info
? Select a Stream ConStream
? Select a Consumer wlpY49ui
Information for Consumer ConStream > wlpY49ui created 2022-10-13T09:19:50-04:00
Configuration:
Pull Mode: true
Filter Subject: ConSubject
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 1,000
Max Waiting Pulls: 512
Inactive Threshold: 1h0m0s
Cluster Information:
Name: cluster
Leader: server8
State:
Last Delivered Message: Consumer sequence: 1 Stream sequence: 1
Acknowledgment floor: Consumer sequence: 1 Stream sequence: 1
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 99
Waiting Pulls: 0 of maximum 512
```
|
https://github.com/nats-io/nats-server/issues/3550
|
https://github.com/nats-io/nats-server/pull/3561
|
0ef889d5ccb9520334af1d9dfd7a84b794f0158c
|
19ff7f5f7addf7ce59463f899e4ee0a8d1d946bf
| 2022-10-13T13:24:08Z |
go
| 2022-10-14T22:01:44Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,547 |
["server/client.go", "server/mqtt.go", "server/mqtt_test.go"]
|
Subject mapping doesn't work with MQTT
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
```
[53] 2022/10/12 12:53:59.709111 [INF] Starting nats-server
[53] 2022/10/12 12:53:59.709136 [INF] Version: 2.9.1
[53] 2022/10/12 12:53:59.709145 [INF] Git: [2363a2c]
[53] 2022/10/12 12:53:59.709151 [DBG] Go build: go1.19.1
```
#### OS/Container environment:
Ubuntu 22.04.1 LTS
#### Steps or code to reproduce the issue:
I'm trying to use subject mapping with MQTT and the mapping doesn't work.
My nats server is running with this config:
```
listen: 0.0.0.0:4222
http: 0.0.0.0:8222
server_name: "mqtt_server"
jetstream {
store_dir=/var/lib/nats/data
max_file=10Gi
}
mappings = {
foo: bar
}
mqtt {
port: 1883
}
```
I subscribe on the topic `bar`:
```sh
docker run --rm --net host hivemq/mqtt-cli sub --topic bar -V 3
```
In another terminal I publish a message to `foo`:
```sh
docker run --rm --net host hivemq/mqtt-cli pub --topic foo -m "test" -V 3
```
#### Expected result:
I expect the message "test" to be received on the topic `bar`.
#### Actual result:
The message is not received on `bar`. I can see the message if I subscribe on `foo` instead but it means there is no mapping.
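As a control, the same mapping can be verified with core NATS clients, where subject mappings do apply; a quick Go check (a sketch, assuming the config above is running locally):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect("nats://127.0.0.1:4222")
	defer nc.Drain()

	sub, _ := nc.SubscribeSync("bar")
	nc.Publish("foo", []byte("test")) // server maps foo -> bar
	if msg, err := sub.NextMsg(2 * time.Second); err == nil {
		fmt.Println("received on bar:", string(msg.Data))
	} else {
		fmt.Println("mapping not applied:", err)
	}
}
```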
|
https://github.com/nats-io/nats-server/issues/3547
|
https://github.com/nats-io/nats-server/pull/3552
|
3269c506a8b3cb29856a0fe77e9ca8e4bae63bdf
|
11cc165c11488a1d67c88bfd1c92bae94b30ddd8
| 2022-10-12T13:06:00Z |
go
| 2022-10-13T22:30:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,535 |
["server/filestore.go", "server/norace_test.go", "server/raft.go", "server/store.go"]
|
After many restarts due to OOM, nats panics: corrupt state file
|
## Defect
On a NATS cluster affected by
[https://github.com/nats-io/nats-server/issues/3517](https://github.com/nats-io/nats-server/issues/3517),
after 1115 restarts of the container, the NATS server panics on start.
#### Versions of `nats-server` and affected client libraries used:
[56] 2022/10/10 11:11:13.961798 [INF] Starting nats-server
[56] 2022/10/10 11:11:13.961853 [INF] Version: 2.9.3-beta.1
[56] 2022/10/10 11:11:13.961859 [INF] Git: [cb086bce]
[56] 2022/10/10 11:11:13.961863 [DBG] Go build: go1.19.1
#### OS/Container environment:
AWS EKS
#### Steps or code to reproduce the issue:
Start a 3-node NATS cluster with slow disks, then create a JetStream stream with file storage and 3 replicas. Push more data than the disks are able to consume, until the NATS disk write cache causes the container to OOM. From then on the NATS cluster is in an OOM loop; after a while the data on disk is corrupted and NATS panics.
#### Expected result:
NATS should try to recover from the last good state.
#### Actual result:
NATS fails to start:
[142] 2022/10/10 11:00:45.333410 [DBG] Exiting consumer monitor for '$G > STX_SERVER_DATA > DB_SERVICE' [C-R3F-OuTvxCQ0]
panic: corrupt state file
goroutine 51 [running]:
github.com/nats-io/nats-server/v2/server.(*jetStream).applyConsumerEntries(0xc00024c000, 0xc000262d80, 0x0?, 0x0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3992 +0x816
github.com/nats-io/nats-server/v2/server.(*jetStream).monitorConsumer(0xc00024c000, 0xc000262d80, 0xc000251680)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3871 +0xdc6
github.com/nats-io/nats-server/v2/server.(*jetStream).processClusterCreateConsumer.func1()
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3585 +0x25
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/server.go:3077 +0x85
|
https://github.com/nats-io/nats-server/issues/3535
|
https://github.com/nats-io/nats-server/pull/3688
|
0cc4499537b3617bac2791246a211b86b0bf51ef
|
e847f9594b92bd1ca1421347b758832a82268356
| 2022-10-10T11:19:35Z |
go
| 2022-12-06T22:27:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,528 |
["server/filestore.go", "server/filestore_test.go"]
|
Subject filtered purge does not delete all data.
|
An easy way to test this is with an Object store; it seems to happen with big streams.
```
$ nats obj add BIGDATA
$ dd if=/dev/zero of=random.dat count=$((1024*1024)) bs=1024
$ nats obj put BIGDATA random.dat
$ nats s subjects OBJ_BIGDATA
$O.BIGDATA.M.cmFuZG9tLmRhdA==: 1
$O.BIGDATA.C.XeqSUUd063OcyuG9SAUx7u: 8,192
```
OK, so we have a subject with 1 GiB of data in it... let's purge it.
```
$ nats s purge OBJ_BIGDATA --subject '$O.BIGDATA.C.XeqSUUd063OcyuG9SAUx7u'
....
State:
Messages: 64
Bytes: 7.9 MiB
FirstSeq: 63 @ 2022-10-06T17:42:41 UTC
LastSeq: 8,193 @ 2022-10-06T17:42:45 UTC
Deleted Messages: 8,067
Active Consumers: 0
Number of Subjects: 2
```
So that's too many messages; checking with `subjects`:
```
$ nats s subjects OBJ_BIGDATA
$O.BIGDATA.M.cmFuZG9tLmRhdA==: 1
$O.BIGDATA.C.XeqSUUd063OcyuG9SAUx7u: 63
```
Not all the data was removed. Calling purge again will clear it.
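For reference, the same subject-filtered purge can be issued through the raw JetStream API, where the request body carries the filter (a sketch; error handling elided):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Drain()

	// Stream purge request with a subject filter.
	req := []byte(`{"filter":"$O.BIGDATA.C.XeqSUUd063OcyuG9SAUx7u"}`)
	resp, err := nc.Request("$JS.API.STREAM.PURGE.OBJ_BIGDATA", req, 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(resp.Data)) // the reply reports how many messages were purged
}
```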
|
https://github.com/nats-io/nats-server/issues/3528
|
https://github.com/nats-io/nats-server/pull/3529
|
b7a5163d5d9ed258e88d01679a4623a8e12e2972
|
207195c66cdbe17732dcb93bb2e57312f9a4fc37
| 2022-10-06T17:44:48Z |
go
| 2022-10-06T23:01:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,525 |
["server/disk_avail.go", "server/disk_avail_netbsd.go", "server/pse/pse_netbsd.go"]
|
Enable nats-server to be built on NetBSD
|
## Feature Request
#### Use Case:
I want to build Dendrite (a Matrix server) on NetBSD. The NATS server is a dependency. Currently, the NATS server cannot be built on NetBSD because of the following gaps:
- pse not supported
- disk availability not supported
#### Proposed Change:
The necessary source-code adjustments for a minimal NetBSD version are minor. Process usage (pse) can be adapted from OpenBSD. Disk availability could be borrowed from Windows (dummy code).
#### Who Benefits From The Change(s)?
NetBSD users
#### Alternative Approaches
unknown
|
https://github.com/nats-io/nats-server/issues/3525
|
https://github.com/nats-io/nats-server/pull/3526
|
021e39419a38f0b8647a98772d3aba396d0aaf0e
|
6d03d75d6de528b48e01eb2653cc0d02b63d943b
| 2022-10-06T08:17:35Z |
go
| 2022-10-28T16:11:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,516 |
["server/jetstream_cluster.go", "server/jetstream_cluster_2_test.go", "server/jetstream_cluster_3_test.go"]
|
JetStream first sequence mismatch error keeps occurring for every restart/deployment
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
*Note: trace logging not enabled, since to reproduce this issue messaging needs to be constant (both adds and deletes), and this spams the logged output. Debug logging is included.
#### Versions of `nats-server` and affected client libraries used:
- nats-server 2.9.2, running `nats:2.9.2-alpine`
- nats-box, running `natsio/nats-box:0.13.0` (no change made in Helm chart)
#### OS/Container environment:
- AWS EKS cluster
- NATS instances running on separate `c6i.xlarge` machines, clustered with 3 nodes
- nats-box on a separate machine (in our setup on a `r5.large`)
- Using Helm with https://github.com/nats-io/k8s, with minimal config:
```
cluster:
enabled: true
replicas: 3
nats:
image: nats:2.9.2-alpine
logging:
debug: true
jetstream:
enabled: true
memStorage:
enabled: true
size: 5Gi
fileStorage:
enabled: true
size: 30Gi
storageClassName: gp3-retained
resources:
requests:
cpu: 3
memory: 6Gi
limits:
# cpu: 4
memory: 8Gi
# Sets GOMEMLIMIT environment variable which makes the Go GC be aware of memory limits
# from the container. Recommended to be set to about 90% of the resource memory limits.
gomemlimit: 7GiB
```
*Note: where `gp3-retained` is a Kubernetes StorageClass using `gp3` disks and `Retain` reclaim policy.
#### Steps or code to reproduce the issue:
- Setup file-based R3 stream, with a TTL (in this example a TTL of 1 minute, but in our real setup it's a 1 hour TTL, just so there is no need to wait for a long time):
```
nats str add benchstream --storage=file --replicas=3 --retention=limits --max-age=1m --max-bytes=-1 --max-consumers=-1 --max-msg-size=-1 --max-msgs=-1 --max-msgs-per-subject=-1 --discard=old --dupe-window=1m --subjects=js.bench --allow-rollup --deny-delete --no-deny-purge
```
- Start benchmark:
```
nats bench js.bench --pub=1 --msgs=10000000 --pubsleep=0.00001s --size=1KB --pubbatch=1000
```
- The `benchstream` stream persists messages for 1 minute, after which it will also start removing messages. The benchmark should keep running for a long time. After at least a minute has passed, remove 1 instance (not sure if there is a difference in being a leader or not; tested by restarting a non-leader instance)
- While the instance is restarting the following messages will be displayed:
```
[7] 2022/10/03 13:43:30.411232 [WRN] Error applying entries to '$G > benchstream': first sequence mismatch
[7] 2022/10/03 13:43:30.414851 [WRN] Resetting stream cluster state for '$G > benchstream'
```
#### Expected result:
The restarted instance should only need to catch up with the messages that it missed during its downtime.
In this example a TTL of 1 minute is used; in our real setup it's a 1-hour TTL, but with the same result.
#### Actual result:
The stream state is lost for this replica and needs to be reconstructed, taking more time to perform the restart/deployment.
#### Files:
Before restart: [nats_logs_before.txt](https://github.com/nats-io/nats-server/files/9698092/nats_logs_before.txt)
After restart: [nats_logs_after.txt](https://github.com/nats-io/nats-server/files/9698093/nats_logs_after.txt)
|
https://github.com/nats-io/nats-server/issues/3516
|
https://github.com/nats-io/nats-server/pull/3567
|
37e876c7de21050631eabb082586ac6cd67f987e
|
7041b19d801b53388527307f6b30a53e1fea38af
| 2022-10-03T14:04:48Z |
go
| 2022-10-17T23:18:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,499 |
["server/consumer.go", "server/norace_test.go"]
|
Memory skyrockets after `nats stream view`
|
nats-server: 2.9.1
nats: 0.0.34
3 node cluster, AWS EC2 Ubuntu
Load before:

Load after `nats stream view`

After a few seconds CPU returned to its previous level. After a few minutes memory, too, returned to its previous level.
If you're curious about the max memory limit, it's because previously (and with the previous nats-server version) these spikes exhausted the entire node's memory, causing a node outage. Setting this limit doesn't seem to have affected the normal workings of nats-server.
|
https://github.com/nats-io/nats-server/issues/3499
|
https://github.com/nats-io/nats-server/pull/3610
|
ef00281bc2a953733c6cd573ff0248cc80fd6977
|
d8a4b56b8b772f8958f8ca8507a636579664420b
| 2022-09-27T08:48:21Z |
go
| 2022-11-05T21:23:24Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,496 |
["server/client.go"]
|
NATS server terminates unexpectedly on protocol violation
|
## Defect(s)
* NATS server terminates unexpectedly on protocol violation
* AlterNats with AlterNats.MessagePack serialization causes protocol violation (https://github.com/Cysharp/AlterNats/issues/19)
- [ ] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
I will try without -DV for now, since it takes extra time for me to try that. I'll have to dig up information on how the service was installed and then do this, which disturbs my normal development setup. If the problem is not quickly and easily reproducible with the instructions I'm giving in this initial report, then I'm happy to spend that time later. I might also spend that time anyway tomorrow, but for today I've done enough work on this issue.
#### Versions of `nats-server` and affected client libraries used:
NATS server 2.9.1 (latest)
AlterNats and AlterNats.MessagePack 1.0.5 (latest) for .NET from NuGet
#### OS/Container environment:
Windows 11 Pro, latest updates.
#### Steps or code to reproduce the issue:
[TestAlt.zip](https://github.com/nats-io/nats-server/files/9641501/TestAlt.zip)
Zip file TestAlt.zip contains source only - no binaries. It contains a test application that triggers this issue. The source is a Visual Studio 2022 solution with an F# console application for .NET 6. The application uses AlterNats and AlterNats.MessagePack 1.0.5 from NuGet.
It appears that in order to trigger the unexpected termination, all these conditions must be met.
* The test application must use the MessagePack serializer in the AlterNats.MessagePack library. Using the built-in Json serializer instead will not trigger any issue at all, and the request-response in the test will succeed.
* The NATS server must be running as a Windows service. Running it from the desktop will not cause the NATS server to crash; instead it will report a protocol violation in the console and/or any logging.
* The NATS server, running as a Windows service, must not be configured to log. If it is configured to log, then it will report the protocol violation in the log, and continue to run successfully.
**Configuration file, with three sensitive items replaced - port, user and password**
```
port: 9999
authorization {
user: someuser
password: somepassword
}
```
**Command used to start the NATS server**
`c:\devbox\TdNats\nats-server.exe -n TdNats -c c:\devbox\TdNats\TdNats.cfg`
**Log**
These three lines are added to the configuration file.
```
debug: true
trace: true
log_file: "c:/devbox/TdNats/log/nats-server-log.txt"
```
Remember that the NATS server does not terminate unexpectedly when logging is turned on.
Extract from the log, to the last logged line during the test. Actual user name replaced with "someuser".
[17092] 2022/09/25 20:48:42.694389 [INF] Server is ready
[17092] 2022/09/25 20:48:45.672084 [DBG] 127.0.0.1:56853 - cid:4 - Client connection created
[17092] 2022/09/25 20:48:45.756228 [TRC] 127.0.0.1:56853 - cid:4 - <<- [CONNECT {"echo":true,"verbose":false,"pedantic":false,"tls_required":false,"user":"someuser","pass":"[REDACTED]","lang":"C#","version":"1.0.0","protocol":1,"headers":false,"no_responders":false}]
[17092] 2022/09/25 20:48:45.853830 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <<- [PING]
[17092] 2022/09/25 20:48:45.853830 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - ->> [PONG]
[17092] 2022/09/25 20:48:45.858831 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <<- [PING]
[17092] 2022/09/25 20:48:45.858831 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - ->> [PONG]
[17092] 2022/09/25 20:48:45.868833 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <<- [SUB Time 1]
[17092] 2022/09/25 20:48:45.882493 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <<- [SUB _INBOX.ef0fdfe6-e324-424a-819f-0c500f3f83a8.* 2]
[17092] 2022/09/25 20:48:45.991559 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <<- [PUB Time _INBOX.ef0fdfe6-e324-424a-819f-0c500f3f83a8.1 ]
[17092] 2022/09/25 20:48:45.992559 [ERR] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - processPub Bad or Missing Size: 'Time _INBOX.ef0fdfe6-e324-424a-819f-0c500f3f83a8.1 '
[17092] 2022/09/25 20:48:45.992559 [DBG] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - Client connection closed: Protocol Violation
[17092] 2022/09/25 20:48:45.992559 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <-> [DELSUB 1]
[17092] 2022/09/25 20:48:45.992559 [TRC] 127.0.0.1:56853 - cid:4 - "v1.0.0:C#" - <-> [DELSUB 2]
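For anyone wanting to poke at the server side without AlterNats, the malformed PUB can be reproduced over a raw TCP connection. This is a minimal sketch assuming a plain local server on the default port with no auth (unlike the config above):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Assumes a local nats-server on 127.0.0.1:4222 with no auth.
	conn, err := net.Dial("tcp", "127.0.0.1:4222")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	r := bufio.NewReader(conn)
	r.ReadString('\n') // discard the INFO line

	fmt.Fprint(conn, "CONNECT {\"verbose\":false}\r\n")
	// PUB frame with the size token missing, mirroring the trace:
	// "PUB Time _INBOX.x.1 " followed by no byte count.
	fmt.Fprint(conn, "PUB Time _INBOX.x.1 \r\n")

	// The server is expected to answer with an -ERR line ("Bad or
	// Missing Size") and close the connection, not terminate.
	line, _ := r.ReadString('\n')
	fmt.Println(line)
}
```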
|
https://github.com/nats-io/nats-server/issues/3496
|
https://github.com/nats-io/nats-server/pull/3497
|
74c0b18fd2143c17bdda4dfa497be2a75088a745
|
970e2c81d4d005fb414984ca8294c610f15b4f93
| 2022-09-25T19:11:30Z |
go
| 2022-09-26T18:40:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,493 |
["server/jetstream_cluster.go", "server/jetstream_cluster_3_test.go"]
|
Consumer scale down to R1 from R3 never responds
|
Latest server nightly (equiv to 2.9.1)
```
$ nats c info ORDERS_0 C3
...
Cluster Information:
Name: lon
Leader: n2-lon
Replica: n1-lon, current, seen 0.68s ago
Replica: n3-lon, current, seen 0.68s ago
...
```
We scale down using a request, since the CLI doesn't have replicas on consumer edit yet
```
$ nats req '$JS.API.CONSUMER.DURABLE.CREATE.ORDERS_0.C3' '{"stream_name":"ORDERS_0","config":{"description":"x","ack_policy":"explicit","ack_wait":5000000000,"deliver_policy":"all","durable_name":"C3","max_ack_pending":1000,"max_deliver":100,"max_waiting":512,"replay_policy":"instant","num_replicas":1}}'
11:32:07 Sending request on "$JS.API.CONSUMER.DURABLE.CREATE.ORDERS_0.C3"
....no response
```
consumer info shows it worked though:
```
Cluster Information:
Name: lon
Leader: n2-lon
```
Doing the same request at this time going from R1 -> R3 does get a response and all is well.
So we just need to make R3->R1 always respond as well.
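For reference, a hedged nats.go equivalent of the downscale request above (assuming a client version that exposes `UpdateConsumer` and the `Replicas` consumer option):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	// Downscale consumer C3 on ORDERS_0 from R3 to R1.
	_, err = js.UpdateConsumer("ORDERS_0", &nats.ConsumerConfig{
		Durable:       "C3",
		Description:   "x",
		AckPolicy:     nats.AckExplicitPolicy,
		AckWait:       5 * time.Second,
		DeliverPolicy: nats.DeliverAllPolicy,
		MaxAckPending: 1000,
		MaxDeliver:    100,
		MaxWaiting:    512,
		ReplayPolicy:  nats.ReplayInstantPolicy,
		Replicas:      1,
	})
	// With the bug described above, this times out even though
	// consumer info shows the downscale was applied.
	if err != nil {
		log.Printf("update failed: %v", err)
	}
}
```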
|
https://github.com/nats-io/nats-server/issues/3493
|
https://github.com/nats-io/nats-server/pull/3502
|
c93c2648afd793c3c454e1b50d45074c32c41df9
|
8247ecbf20957edd7e048d4a1aab51746cb9189a
| 2022-09-23T11:35:09Z |
go
| 2022-09-27T21:33:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,482 |
["server/mqtt.go", "server/mqtt_test.go"]
|
allow . in mqtt topics
|
## Feature Request
- Allow dots as topic separator in mqtt.
- Possibly, use / as the topic separator in both NATS and MQTT; a common topic structure could be convenient for applications using both NATS and MQTT protocols.
#### Use Case:
Allow the nats mqtt server to be used with Sparkplug B:
[Sparkplug B specifications](https://www.eclipse.org/tahu/spec/Sparkplug%20Topic%20Namespace%20and%20State%20ManagementV2.2-with%20appendix%20B%20format%20-%20Eclipse.pdf)
Sparkplug uses spBv1.0 as topic prefix.
#### Proposed Change:
- add a parameter to allow dots in mqtt topics, possibly URL-encoding the topic for the conversion to the NATS subject
#### Who Benefits From The Change(s)?
- Users that would like to use NATS with the Sparkplug B protocol
- Users that would like to use nats as a replacement for other mqtt brokers, already having a working system that uses dots in the mqtt topics
- Myself :)
#### Alternative Approaches
Allow the use of / as separator in NATS, to harmonize the NATS and MQTT topics.
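To make the proposal concrete, here is a hedged sketch of what an escaping-based conversion could look like (purely illustrative, not server code):

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative only: escape literal dots in an MQTT topic so they
// survive the MQTT->NATS conversion, where '/' becomes '.'.
func mqttTopicToNATSSubject(topic string) string {
	escaped := strings.ReplaceAll(topic, ".", "%2E")
	return strings.ReplaceAll(escaped, "/", ".")
}

func main() {
	// Prints: spBv1%2E0.group.NBIRTH.node
	fmt.Println(mqttTopicToNATSSubject("spBv1.0/group/NBIRTH/node"))
}
```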
|
https://github.com/nats-io/nats-server/issues/3482
|
https://github.com/nats-io/nats-server/pull/4243
|
694cc7d2b7a7e9bf84a7f235747b155a7d4412f4
|
91d0b6ad3a0bd54a873ff9845cfad6f9a828f43c
| 2022-09-21T07:42:09Z |
go
| 2023-06-14T03:44:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,469 |
["server/jetstream_cluster.go"]
|
Consumer groups get created on the same peer when consumer's replica count = 1
|
When a consumer's replica count is 1, there is no randomisation when selecting a peer. The first peer is always selected, so consumer distribution is not uniform across the servers:
https://github.com/nats-io/nats-server/blob/f7cb5b1f0d8802655b789042188db0679a1094d3/server/jetstream_cluster.go#L5953
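A hedged sketch of the suggested direction, picking a random candidate instead of always the first (not the actual server patch):

```go
package placement

import "math/rand"

// pickPeer sketches randomized peer selection for R1 consumers; the
// real fix belongs in the cluster placement code linked above.
// Instead of always taking peers[0], pick uniformly at random so R1
// consumers spread across the available servers.
func pickPeer(peers []string) string {
	return peers[rand.Intn(len(peers))]
}
```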
|
https://github.com/nats-io/nats-server/issues/3469
|
https://github.com/nats-io/nats-server/pull/3470
|
a41af2bdcbb2ff30ecb7d0e3c5f4c63638991492
|
d4f313027c1909e7b14f085f72c28caa3892bd46
| 2022-09-14T08:20:47Z |
go
| 2022-09-14T14:53:26Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,455 |
["server/consumer.go", "server/jetstream_test.go", "server/norace_test.go"]
|
Possible regression in pull consumer fetch to maximize batch size
|
In nightly, given a stream that is pre-populated, a pull consumer with two (or more) subscriptions using separate connections, and a combined fetch size (across all subscriptions) that is smaller than max ack pending, I am observing varying returned batch sizes rather than the max size.
I tested the same code against the nightly tagged for July 5th and the issue was not observed. I tested the nightly tagged July 6th and the issue is present. Here is the [commit range from July 4-6](https://github.com/nats-io/nats-server/commits/main?since=2022-07-04&until=2022-07-06). Possible suspect is #3241
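A hedged sketch of the observation (stream `S` and durable `D` are assumed names; the original repro used two subscriptions on separate connections, this shows just one):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Assumes stream S is pre-populated and durable pull consumer D
	// already exists.
	sub, err := js.PullSubscribe("", "D", nats.BindStream("S"))
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 5; i++ {
		msgs, err := sub.Fetch(100, nats.MaxWait(2*time.Second))
		if err != nil {
			log.Fatal(err)
		}
		// Expectation: full batches of 100 while the stream has
		// backlog; the regression returned varying smaller batches.
		fmt.Printf("fetch %d returned %d messages\n", i, len(msgs))
		for _, m := range msgs {
			m.Ack()
		}
	}
}
```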
|
https://github.com/nats-io/nats-server/issues/3455
|
https://github.com/nats-io/nats-server/pull/3456
|
ae0d808f5be77bdccd4fd8eddf201100db586694
|
d979937bbd40e3458202a96b74f6196cc4ac9e2b
| 2022-09-08T13:05:36Z |
go
| 2022-09-08T19:08:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,431 |
["conf/lex.go", "conf/lex_test.go", "conf/parse_test.go"]
|
String envar incorrectly inferred as number
|
## Defect
A String envar that starts with a digit (0-9), where the second character is any one of these...
```
return r == 'k' || r == 'K' || r == 'm' || r == 'M' || r == 'g' || r == 'G' || r == 't' || r == 'T' || r == 'p' || r == 'P' || r == 'e' || r == 'E'
```
and has a length > 2, is incorrectly inferred as a number and not a string.
After stepping through the code, [isNumberSuffix()](https://github.com/nats-io/nats-server/blob/c5c8e385abffcc4f0b6911a9ec25f55d2ffcca0d/conf/lex.go#L1141) gets called, but I feel it should only infer a number if the suffix is at the end of the string.
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example]
#### Versions of `nats-server` and affected client libraries used: latest
#### OS/Container environment:
#### Steps or code to reproduce the issue:
Here is a unit test that shows the issue. It can go in `conf/parse_test.go`
```go
func TestEnvVariableString(t *testing.T) {
ex := map[string]interface{}{
"foo": "2GP3JZCZ4G7JNWkzBhVYx9ZXtUiu",
}
evar := "__UNIQ22__"
os.Setenv(evar, "2GP3JZCZ4G7JNWkzBhVYx9ZXtUiu")
defer os.Unsetenv(evar)
test(t, fmt.Sprintf("foo = $%s", evar), ex)
}
```
#### Expected result:
The test passes
#### Actual result:
Errors with
```
Received err: variable reference for '__UNIQ22__' on line 1 could not be parsed: Parse error on line 1: 'Expected a top-level value to end with a new line, comment or EOF, but got 'P' instead.'
```
|
https://github.com/nats-io/nats-server/issues/3431
|
https://github.com/nats-io/nats-server/pull/3434
|
c4b5ca7cff77c9e41e5fba00240c05a335764a22
|
1a9b3c49c0b048463cbc2745abab32297c090e13
| 2022-09-01T12:13:15Z |
go
| 2022-09-03T02:51:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,422 |
["server/sublist.go", "server/sublist_test.go", "server/test_test.go"]
|
SubjectsCollide() returns true for non matching subjects
|
## Defect
`SubjectsCollide()` will match subjects which can never match the same literal subject (e.g. `foo.*` and `foo.*.bar.>`). This issue impacts creating consumers with `FilterSubject`, as well as listing streams with a subject filter (and probably more).
I added a simple test to verify the issue:
```go
func TestSubjectsCollide(t *testing.T) {
s1 := "foo.*.bar.>"
s2 := "foo.*"
res := SubjectsCollide(s1, s2)
// prints true
fmt.Println(res)
}
```
|
https://github.com/nats-io/nats-server/issues/3422
|
https://github.com/nats-io/nats-server/pull/3423
|
cb3b88d4e4a17bf12b5c1ae36a41c003a8d2e1fd
|
4b9db05a0cbae11c2d4bc60eafe2cdc7d0827cc1
| 2022-08-31T10:59:47Z |
go
| 2022-08-31T13:58:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,420 |
["server/jetstream.go", "server/jetstream_jwt_test.go"]
|
After creating and deleting (a lot of) accounts, Server fail to enable jetstream on new accounts.
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output [nats.log](https://github.com/nats-io/nats-server/files/9459677/nats.log)
- [X] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- nats-server v2.8.4 (also tested on main v2.9.0-RC.12)
- nsc version 2.7.2
- nats cli version 0.0.30
#### OS/Container environment:
Linux debian 11.4
#### Steps or code to reproduce the issue:
The following shell code :
- Configure a nats server with jwt accounts and a jetstream disk quota of 1G
- Creates an account with 900M disk size.
- Connects to it and check accounts info (ie jetstream is enabled)
- Delete the account on the server
- Creates a second one (same parameters)
- Connects to it: here jetstream is disabled, and the server log includes `Error configuring jetstream for account [xxx/TEST2]: insufficient storage resources available (10047)`
[commands.txt](https://github.com/nats-io/nats-server/files/9459676/commands.txt)
#### Expected result:
The second account should be jetstream-enabled!
#### Actual result:
Jetstream is disabled on the account.
|
https://github.com/nats-io/nats-server/issues/3420
|
https://github.com/nats-io/nats-server/pull/3428
|
62f91c1dd2cf40f9eb3be07bc589ea4410a124a4
|
cc22510669be9415c1a6afeec34d2bbe0900ec99
| 2022-08-31T08:24:04Z |
go
| 2022-08-31T23:06:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,397 |
["server/accounts.go", "server/accounts_test.go", "server/client.go", "server/jetstream_leafnode_test.go", "server/parser.go"]
|
jetStreamContext.Publish causes nats-server stack overflow error for Jetstream imported to different account
|
## Defect
- [ ] Included `nats-server -DV` output
~~~~
./nats-server -DV -c .\leaf.conf
[12284] 2022/08/24 11:14:01.897092 [INF] Starting nats-server
[12284] 2022/08/24 11:14:01.898237 [INF] Version: 2.8.4
[12284] 2022/08/24 11:14:01.898237 [INF] Git: [66524ed]
[12284] 2022/08/24 11:14:01.898753 [DBG] Go build: go1.17.10
[12284] 2022/08/24 11:14:01.898775 [INF] Cluster: leaf-server
[12284] 2022/08/24 11:14:01.898775 [INF] Name: leaf-server
[12284] 2022/08/24 11:14:01.898775 [INF] Node: 1nWZLJcM
[12284] 2022/08/24 11:14:01.899292 [INF] ID: NAUQWGUKBJ4E6OK6NPEKN22QCKKGJR3QCR32P4WS3OPEA5QQENE7E7GQ
[12284] 2022/08/24 11:14:01.899363 [WRN] Plaintext passwords detected, use nkeys or bcrypt
[12284] 2022/08/24 11:14:01.899363 [INF] Using configuration file: .\leaf.conf
[12284] 2022/08/24 11:14:01.899916 [INF] Starting JetStream
[12284] 2022/08/24 11:14:01.899916 [DBG] JetStream creating dynamic configuration - 47.92 GB memory, 1.00 TB disk
[12284] 2022/08/24 11:14:01.901608 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[12284] 2022/08/24 11:14:01.901675 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[12284] 2022/08/24 11:14:01.901675 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[12284] 2022/08/24 11:14:01.901675 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[12284] 2022/08/24 11:14:01.901675 [INF]
[12284] 2022/08/24 11:14:01.902250 [INF] https://docs.nats.io/jetstream
[12284] 2022/08/24 11:14:01.902250 [INF]
[12284] 2022/08/24 11:14:01.902250 [INF] ---------------- JETSTREAM ----------------
[12284] 2022/08/24 11:14:01.902250 [INF] Max Memory: 47.92 GB
[12284] 2022/08/24 11:14:01.902832 [INF] Max Storage: 1.00 TB
[12284] 2022/08/24 11:14:01.902832 [INF] Store Directory: "store_leaf\jetstream"
[12284] 2022/08/24 11:14:01.902832 [INF] Domain: leaf
[12284] 2022/08/24 11:14:01.902832 [INF] -------------------------------------------
[12284] 2022/08/24 11:14:01.903362 [DBG] Exports:
[12284] 2022/08/24 11:14:01.903362 [DBG] $JS.API.>
[12284] 2022/08/24 11:14:01.903869 [DBG] Enabled JetStream for account "HUB_USER"
[12284] 2022/08/24 11:14:01.903897 [DBG] Max Memory: -1 B
[12284] 2022/08/24 11:14:01.903897 [DBG] Max Storage: -1 B
[12284] 2022/08/24 11:14:01.905694 [DBG] JetStream state for account "HUB_USER" recovered
[12284] 2022/08/24 11:14:01.905694 [DBG] Enabled JetStream for account "LEAF_USER"
[12284] 2022/08/24 11:14:01.905694 [DBG] Max Memory: -1 B
[12284] 2022/08/24 11:14:01.906203 [DBG] Max Storage: -1 B
[12284] 2022/08/24 11:14:01.907393 [DBG] JetStream state for account "LEAF_USER" recovered
[12284] 2022/08/24 11:14:01.908169 [DBG] Enabled JetStream for account "LEAF_INGRESS"
[12284] 2022/08/24 11:14:01.908169 [DBG] Max Memory: -1 B
[12284] 2022/08/24 11:14:01.908169 [DBG] Max Storage: -1 B
[12284] 2022/08/24 11:14:01.908724 [DBG] Recovering JetStream state for account "LEAF_INGRESS"
[12284] 2022/08/24 11:14:01.915459 [INF] Restored 9 messages for stream 'LEAF_INGRESS > time-stream'
[12284] 2022/08/24 11:14:01.915459 [DBG] JetStream state for account "LEAF_INGRESS" recovered
[12284] 2022/08/24 11:14:01.918538 [INF] Listening for client connections on 0.0.0.0:34111
[12284] 2022/08/24 11:14:01.918538 [DBG] Get non local IPs for "0.0.0.0"
[12284] 2022/08/24 11:14:01.966054 [DBG] ip=172.28.112.1
[12284] 2022/08/24 11:14:02.006058 [DBG] ip=fd5e:9a9e:c5bd:10:7d3a:5371:a3e0:ce8e
[12284] 2022/08/24 11:14:02.006058 [DBG] ip=fd5e:9a9e:c5bd:10:8c0:8d5d:bce6:90be
[12284] 2022/08/24 11:14:02.006626 [DBG] ip=fd5e:9a9e:c5bd:10:15af:9cd6:b21c:e8e0
[12284] 2022/08/24 11:14:02.006626 [DBG] ip=fd5e:9a9e:c5bd:10:d431:eca6:6145:dd68
[12284] 2022/08/24 11:14:02.006626 [DBG] ip=192.168.1.190
[12284] 2022/08/24 11:14:02.046406 [DBG] ip=172.31.144.1
[12284] 2022/08/24 11:14:02.059849 [DBG] ip=172.25.0.1
[12284] 2022/08/24 11:14:02.059849 [INF] Server is ready
[12284] 2022/08/24 11:14:02.059849 [DBG] Trying to connect as leafnode to remote server on "0.0.0.0:34333"
[12284] 2022/08/24 11:14:03.062257 [ERR] Error trying to connect as leafnode to remote server "0.0.0.0:34333" (attempt 1): dial tcp 0.0.0.0:34333: i/o timeout
~~~~
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
Accounts config
~~~~
accounts {
SYS: {
users: [{user: admin, password: admin}]
}
HUB_USER: {
users: [{user: hub_user, password: hub_user}]
exports: [
]
jetstream: enabled
}
LEAF_USER: {
users: [{user: leaf_user, password: leaf_user}]
imports: [
{service: {account: LEAF_INGRESS, subject: "time-stream"}}
{service: {account: LEAF_INGRESS, subject: "_INBOX.>"}}
{service: {account: LEAF_INGRESS, subject: "$JS.leaf.API.>"}, to: "[email protected].>" }
]
jetstream: enabled
}
LEAF_INGRESS: {
users: [{user: leaf_ingress, password: leaf_ingress}]
exports: [
{service: "time-stream", accounts: [LEAF_USER]}
{service: "_INBOX.>", accounts: [LEAF_USER]}
{service: "$JS.leaf.API.>", response_type: "stream", accounts: [LEAF_USER]}
]
imports: [
]
jetstream: enabled
}
}
system_account: SYS
~~~~
Main config
~~~~
port: 34111
server_name: leaf-server
jetstream {
store_dir="./store_leaf"
domain=leaf
}
leafnodes {
remotes = [
{
urls: ["nats-leaf://leaf_ingress:[email protected]:34333"]
account: "LEAF_INGRESS"
}
]
}
include ./accounts_leaf.conf
~~~~
Client App
~~~~
package main
import (
"encoding/json"
"github.com/nats-io/nats.go"
log "github.com/sirupsen/logrus"
"time"
)
const natsSubjectTimeStream = "time-stream"
const streamName = "time-stream"
func main() {
natsUserUrl := "nats://leaf_user:[email protected]:34111"
log.Infof("Connecting User to NATS '%s'", natsUserUrl)
natsUserConnection, err := connectToNats(natsUserUrl, "Backend Service User Connection")
if err != nil {
log.Fatal(err)
}
defer func() {
log.Info("Closing User NATS connection")
natsUserConnection.Close()
}()
log.Print("getting JetStream context")
jetStreamContext, err := natsUserConnection.JetStream(nats.APIPrefix("[email protected]."))
if err != nil {
log.Fatal(err)
}
stream, err := jetStreamContext.StreamInfo(streamName)
if err != nil {
log.Print(err)
}
if stream == nil {
log.Printf("Creating stream '%s' and subject '%s'", streamName, natsSubjectTimeStream)
stream, err = jetStreamContext.AddStream(&nats.StreamConfig{
Name: streamName,
Subjects: []string{natsSubjectTimeStream},
Storage: nats.FileStorage,
})
if err != nil {
log.Print(err)
}
} else {
log.Printf("Stream '%s' already exists", streamName)
}
log.Printf("Stream info: '%v'", stream)
message := time.Now().Format(time.RFC3339)
buffer, err := json.Marshal(message)
if err != nil {
log.Fatal(err)
}
log.Infof("Publishing TimeStream: %s", message)
pubAck, err := jetStreamContext.Publish(natsSubjectTimeStream, buffer)
if err != nil {
log.Fatal(err)
}
log.Printf("Ack: '%v'", pubAck)
//err = natsUserConnection.Publish(natsSubjectTimeStream, buffer)
//if err != nil {
// log.Fatal(err)
//}
}
func connectToNats(natsUserUrl string, connectionName string) (*nats.Conn, error) {
options := nats.Options{
Url: natsUserUrl,
Name: connectionName,
}
var natsConnection *nats.Conn
var err error
for index := 0; index < 5; index++ {
if index > 0 {
time.Sleep(time.Second)
}
log.Info("Attempting to connect to NATS")
natsConnection, err = options.Connect()
if err == nil {
break
}
log.Errorf("NATS connection failed [%v]", err)
}
return natsConnection, err
}
~~~~
#### Versions of `nats-server` and affected client libraries used:
Server
nats-server 2.8.4 Windows
Client
github.com/nats-io/nats.go v1.13.1-0.20220308171302-2f2f6968e98d
#### OS/Container environment:
Windows 10 Professional
#### Steps or code to reproduce the issue:
1. Compile go app.
2. Start server: ./nats-server -c .\leaf.conf
3. Run client go app
#### Expected result:
No crash
#### Actual result:
Observed nats-server crash, but message was placed into stream before crash happened
~~~~
[36192] 2022/08/24 12:38:42.783120 [DBG] 127.0.0.1:8256 - cid:10 - Client connection created
[36192] 2022/08/24 12:38:42.784476 [TRC] 127.0.0.1:8256 - cid:10 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"leaf_user","pass":"[REDACTED]","tls_required":false,"name":"Backend Service User Connection","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
[36192] 2022/08/24 12:38:42.784476 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- [PING]
[36192] 2022/08/24 12:38:42.785050 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - ->> [PONG]
[36192] 2022/08/24 12:38:42.785600 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- [SUB _INBOX.yKc35aR2udok6P7A6u1pP5.* 1]
[36192] 2022/08/24 12:38:42.785600 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- [PUB [email protected] _INBOX.yKc35aR2udok6P7A6u1pP5.ulePjr3L 0]
[36192] 2022/08/24 12:38:42.785600 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- MSG_PAYLOAD: [""]
[36192] 2022/08/24 12:38:42.786680 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - ->> [MSG _INBOX.yKc35aR2udok6P7A6u1pP5.ulePjr3L 1 640]
[36192] 2022/08/24 12:38:42.786680 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1051]
[36192] 2022/08/24 12:38:42.786680 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"p9CcppauwI7vkUyCpLZWWo\",\"timestamp\":\"2022-08-24T16:38:42.7866466Z\",\"server\":\"leaf-server\",\"client\":{\"acc\":\"LEAF_USER\",\"svc\":\"LEAF_INGRESS\",\"rtt\":1355800,\"server\":\"leaf-server\",\"cluster\":\"leaf-server\"},\"subject\":\"$JS.API.STREAM.INFO.time-stream\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"time-stream\\\",\\\"subjects\\\":[\\\"time-stream\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-08-24T14:57:06.5118754Z\\\",\\\"state\\\":{\\\"messages\\\":9,\\\"bytes\\\":612,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-08-24T14:57:11.5195017Z\\\",\\\"last_seq\\\":9,\\\"last_ts\\\":\\\"2022-08-24T15:08:31.6928406Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0},\\\"domain\\\":\\\"leaf\\\"}\",\"domain\":\"leaf\"}"]
[36192] 2022/08/24 12:38:42.788278 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- [PUB time-stream _INBOX.yKc35aR2udok6P7A6u1pP5.osGTP9X9 27]
[36192] 2022/08/24 12:38:42.788379 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- MSG_PAYLOAD: ["\"2022-08-24T12:38:42-04:00\""]
[36192] 2022/08/24 12:38:43.560338 [DBG] Trying to connect as leafnode to remote server on "0.0.0.0:34333"
[36192] 2022/08/24 12:38:44.563297 [DBG] Error trying to connect as leafnode to remote server "0.0.0.0:34333" (attempt 5): dial tcp 0.0.0.0:34333: i/o timeout
[36192] 2022/08/24 12:38:44.878332 [DBG] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - Client Ping Timer
[36192] 2022/08/24 12:38:44.878332 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - ->> [PING]
[36192] 2022/08/24 12:38:44.879184 [TRC] 127.0.0.1:8256 - cid:10 - "v1.13.0:go:Backend Service User Connection" - <<- [PONG]
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc0205813b0 stack=[0xc020580000, 0xc040580000]
fatal error: stack overflow
runtime stack:
runtime.throw({0xacf21a, 0xed96e0})
/home/travis/.gimme/versions/go1.17.10.linux.amd64/src/runtime/panic.go:1198 +0x76
runtime.newstack()
/home/travis/.gimme/versions/go1.17.10.linux.amd64/src/runtime/stack.go:1088 +0x5cc
runtime.morestack()
/home/travis/.gimme/versions/go1.17.10.linux.amd64/src/runtime/asm_amd64.s:461 +0x93
goroutine 37 [running]:
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000087980, 0xc000422e10, 0xc000125b00, {0xc009cd9760, 0x7e, 0xa8})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3830 +0x113e fp=0xc0205813c0 sp=0xc0205813b8 pc=0x7f251e
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc000125b00, 0xc020581470, 0x4d1789, 0x11, {0xc009cbbf68, 0x0}, {0x0, 0x0}, {0xc009cd9760, 0x7e, ...})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:2118 +0x131 fp=0xc020581418 sp=0xc0205813c0 pc=0x7c60b1
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0x0, 0x0, 0x0, {0xc009cbbf68, 0x0}, {0x0, 0x0}, {0xc009cd9760, 0x7e, 0xa8})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:2103 +0x77 fp=0xc020581480 sp=0xc020581418 pc=0x9c03b7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000087980, 0xc000183e00, 0x4, {0xc009cbbf50, 0x0, 0x18}, {0x0, 0x0, 0x0}, {0xc000088e00, ...}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3173 +0xc4d fp=0xc0205816f8 sp=0xc020581480 pc=0x7edacd
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000087980, 0x0, 0xc000226930, {0xc009cd9760, 0x7e, 0xa8}, {0xf83200, 0x0, 0x0}, {0xc009cbbf50, ...}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4168 +0xa76 fp=0xc020581c70 sp=0xc0205816f8 pc=0x7f34d6
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000087980, 0xc00016e510, 0xc0001258c0, {0xc009cd96b0, 0x7e, 0xa8})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3984 +0xed1 fp=0xc020581f30 sp=0xc020581c70 pc=0x7f22b1
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0x0, 0x0, 0x0, {0x0, 0x0}, {0x0, 0x0}, {0xc009cd96b0, 0x7e, 0xa8})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:1935 +0x32 fp=0xc020581f70 sp=0xc020581f30 pc=0x7c4d32
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000087980, 0xc000182180, 0x5, {0xc009cd7cb0, 0x0, 0x30}, {0x0, 0x0, 0x0}, {0xc000088e00, ...}, ...)
...
~~~~
|
https://github.com/nats-io/nats-server/issues/3397
|
https://github.com/nats-io/nats-server/pull/3407
|
97bba60bb5f783f1e47c0f23c2d51b1d9df01b4b
|
d73ca7d468095096f307d345d41afd33da0ba213
| 2022-08-24T16:45:50Z |
go
| 2022-08-26T22:31:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,362 |
["server/jetstream_test.go", "server/stream.go"]
|
JetStream: non specific error returned with overlapping subjects in standalone
|
This was reported as an issue in the C client (https://github.com/nats-io/nats.c/issues/572), but it turns out to be an issue that the server does not return the proper NewJSStreamSubjectOverlapError() error in standalone mode (but does it correctly in cluster mode).
Will have a PR to fix this.
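A hedged repro against a standalone server (nats.go; subjects are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	if _, err := js.AddStream(&nats.StreamConfig{
		Name: "A", Subjects: []string{"foo.*"},
	}); err != nil {
		log.Fatal(err)
	}
	// Overlaps with stream A; a standalone server returned a generic
	// error here instead of the specific subject-overlap error code.
	_, err = js.AddStream(&nats.StreamConfig{
		Name: "B", Subjects: []string{"foo.bar"},
	})
	fmt.Println(err)
}
```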
|
https://github.com/nats-io/nats-server/issues/3362
|
https://github.com/nats-io/nats-server/pull/3363
|
76219f8e5b87efb639f7c7e1de6af32af61a562c
|
d8d25d9b0b1c1d79252749616d485052809195d9
| 2022-08-11T19:03:12Z |
go
| 2022-08-11T22:41:53Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,361 |
["server/consumer.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/jetstream_test.go", "server/norace_test.go", "server/raft.go", "server/server.go"]
|
Zombie re-deliveries
|
## Defect
If you have a durable consumer with explicit ack and some messages have reached the maximum number of delivery attempts, restarting the nats-server makes it try to re-deliver those messages again.
2.8.4
#### Steps or code to reproduce the issue:
Create a storage=file stream and a durable consumer (push or pull), start a subscriber on that consumer that doesn't acknowledge messages (e.g. `nats sub --no-ack` or `nats consumer next --no-ack`), then publish a message into the stream and see that the message is delivered only n times (n = max deliveries). Once that is over, stop and restart the nats-server(s); after the restart the server attempts to deliver that message again.
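A hedged nats.go version of these steps (stream and consumer names are illustrative):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Setup; error handling elided for brevity.
	js.AddStream(&nats.StreamConfig{
		Name: "EVENTS", Subjects: []string{"events"},
		Storage: nats.FileStorage,
	})
	js.AddConsumer("EVENTS", &nats.ConsumerConfig{
		Durable:    "DUR",
		AckPolicy:  nats.AckExplicitPolicy,
		AckWait:    time.Second,
		MaxDeliver: 3,
	})
	js.Publish("events", []byte("hello"))

	sub, err := js.PullSubscribe("events", "DUR")
	if err != nil {
		log.Fatal(err)
	}
	// Never ack: the message should be redelivered at most MaxDeliver
	// times. After a server restart the bug redelivered it again.
	for {
		msgs, err := sub.Fetch(1, nats.MaxWait(3*time.Second))
		if err != nil {
			log.Printf("no more deliveries: %v", err)
			return
		}
		log.Printf("delivered: %s", msgs[0].Data)
	}
}
```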
|
https://github.com/nats-io/nats-server/issues/3361
|
https://github.com/nats-io/nats-server/pull/3365
|
56a807798ba9a946c8f7605c561c4698478f71d4
|
4b4de20c2536292a29a8bafcdc07c576cae24e63
| 2022-08-11T16:55:03Z |
go
| 2022-08-17T01:13:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,336 |
["server/stream.go"]
|
AllowDirect will always default true for new streams
|
Server: 2.9 Beta
On stream create, MaxMsgsPer is checked for values != 0, and AllowDirect is set to true if the test is true. An unset or explicit 0 in the stream request initializes MaxMsgsPer to -1, so the test will always be true. Test instead for > 0 (an explicit configuration of the messages-per-subject limit).
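Paraphrased as a sketch (not the literal server diff):

```go
// Sketch of the check described above. MaxMsgsPer defaults to -1 when
// unset, so a `!= 0` test fires for every new stream; only an explicit
// positive per-subject limit should turn AllowDirect on by default.
func setDefaultAllowDirect(cfg *StreamConfig) {
	if cfg.MaxMsgsPer > 0 {
		cfg.AllowDirect = true
	}
}
```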
|
https://github.com/nats-io/nats-server/issues/3336
|
https://github.com/nats-io/nats-server/pull/3337
|
e03d84f70432f51bd4c2c8077411a6570113e4ab
|
daaaad5eafa8778e09f5f2f61aec94fa43bf4647
| 2022-08-05T00:23:31Z |
go
| 2022-08-05T00:46:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,331 |
["server/leafnode.go", "server/leafnode_test.go", "server/opts.go"]
|
Add nkey support to RemoteLeafOpts
|
## Feature Request
Add the ability to configure an nkey programmatically in the leafnode options.
#### Use Case:
We use a NATS server embedded in our client application. It operates as a leaf node, connecting to NGS. The nkey is _not_ stored on disk, and comes from the Windows credential store and is decrypted on demand. When using the nats.go client, we use the `Nkey` option (see https://github.com/nats-io/nats.go/blob/v1.16.0/nats.go#L1066-L1075). As we migrate to an embedded leafnode, we need the same ability to configure the user.
#### Proposed Change:
Add an option similar to the nats.go package with `type SignatureHandler func([]byte) ([]byte, error)`.
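A hypothetical shape for the option; these fields do not exist in `RemoteLeafOpts` today, and the names are invented here to mirror the nats.go client:

```go
// Hypothetical sketch only, not an existing server API.
type SignatureHandler func(nonce []byte) ([]byte, error)

type RemoteLeafOpts struct {
	// ...existing fields...

	// Nkey is the public user nkey ("U...") to authenticate with.
	Nkey string
	// SignCB signs the server-provided nonce on (re)connect, so the
	// seed never has to touch disk.
	SignCB SignatureHandler
}
```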
#### Who Benefits From The Change(s)?
me :partying_face:
#### Alternative Approaches
Write the credentials out to a temporary file and hope it gets deleted on panic. And also write them out on every server reconnect...
|
https://github.com/nats-io/nats-server/issues/3331
|
https://github.com/nats-io/nats-server/pull/3335
|
2120be647628e351f610dbcd12e6cacf118b3f0b
|
f208b8660df63a8c991781eb737350def4de6959
| 2022-08-04T10:01:23Z |
go
| 2022-08-05T16:20:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,317 |
["server/monitor.go", "server/monitor_test.go", "server/server.go"]
|
Include TLS client information in connz output
|
## Feature Request
When clients connect via TLS with client side certificate validation, no information about the client is available in the HTTP monitoring endpoints. This is what the data looks like right now:

#### Use Case:
Better insight into NATS server activity and its client connections.
#### Proposed Change:
Add public key, fingerprints, serial, subject to connz endpoint when query parameter `auth` is truthy.
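The requested fields can be derived from the accepted connection's TLS state; a hedged standard-library sketch of the extraction:

```go
package monitor

import (
	"crypto/sha256"
	"crypto/tls"
	"fmt"
)

// clientCertInfo sketches how the requested connz fields could be
// derived from the peer certificate of an accepted TLS connection.
func clientCertInfo(cs tls.ConnectionState) {
	if len(cs.PeerCertificates) == 0 {
		return // no client certificate presented
	}
	cert := cs.PeerCertificates[0]
	fp := sha256.Sum256(cert.Raw)
	fmt.Printf("subject: %s\n", cert.Subject)
	fmt.Printf("serial:  %s\n", cert.SerialNumber)
	fmt.Printf("sha256:  %x\n", fp)
}
```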
|
https://github.com/nats-io/nats-server/issues/3317
|
https://github.com/nats-io/nats-server/pull/3387
|
f5ba11736b55120623c3aaf8fa78ace6b0096bf5
|
284e35132b75930c6af2ee3e59a53c3cc4c2b1b4
| 2022-08-01T08:27:25Z |
go
| 2022-08-24T20:28:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,313 |
["server/mqtt.go", "server/mqtt_test.go"]
|
mqtt nil pointer (without reproducer)
|
Hi
At some time we get an error

We do not have a reproducer
Any ideas why `mqttAccountSessionManager` is nil?
https://github.com/nats-io/nats-server/blob/66524ed715a72db192ec93465bf24391f7696f3f/server/mqtt.go#L1853
https://github.com/nats-io/nats-server/blob/66524ed715a72db192ec93465bf24391f7696f3f/server/mqtt.go#L3522-L3527
But `mqttAccountSessionManager` is described as a `quick reference to account session manager, immutable after processConnect()`
https://github.com/nats-io/nats-server/blob/66524ed715a72db192ec93465bf24391f7696f3f/server/mqtt.go#L313
Is another goroutine trying to work with a client that is not yet ready?
https://github.com/nats-io/nats-server/blob/66524ed715a72db192ec93465bf24391f7696f3f/server/mqtt.go#L2815
Please, any ideas. Thx
PS nats-server v2.8.4
|
https://github.com/nats-io/nats-server/issues/3313
|
https://github.com/nats-io/nats-server/pull/3315
|
588a8fcca9b509d89c39d3f8f935d57436a249ed
|
b6b746095be5937aa1b2bcf93b00de8de7d53289
| 2022-07-31T14:06:33Z |
go
| 2022-07-31T18:43:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,298 |
["scripts/runTestsOnTravis.sh", "test/configs/certs/ca.pem", "test/configs/certs/client-cert.pem", "test/configs/certs/client-id-auth-cert.pem", "test/configs/certs/regenerate_top.sh", "test/configs/certs/server-cert.pem", "test/configs/certs/server-iponly.pem", "test/configs/certs/server-noip.pem", "test/configs/certs/srva-cert.pem", "test/configs/certs/srvb-cert.pem"]
|
Update TLS Certificates
|
Since Go 1.18, certificates using the sha1WithRSAEncryption algorithm are no longer accepted, unless one exports the `GODEBUG="x509sha1=1"` environment variable.
It seems like we would need to update the following certificates:
```
test/configs/certs/client-cert.pem
test/configs/certs/client-id-auth-cert.pem
test/configs/certs/server-cert.pem
test/configs/certs/server-iponly.pem
test/configs/certs/server-noip.pem
test/configs/certs/srva-cert.pem
test/configs/certs/srvb-cert.pem
```
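To identify which of these still carry SHA-1 signatures, a hedged standard-library sketch (pass the PEM paths as arguments):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	for _, path := range os.Args[1:] {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		// Prints e.g. SHA1-RSA for the certificates that Go 1.18+
		// rejects without GODEBUG=x509sha1=1.
		fmt.Println(path, cert.SignatureAlgorithm)
	}
}
```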
|
https://github.com/nats-io/nats-server/issues/3298
|
https://github.com/nats-io/nats-server/pull/3300
|
4b9e94d4a10d4ab06459f348f44355e8fd2f167d
|
6631bf7bd78c10ba946f4990a11e93f16426a8d1
| 2022-07-28T01:02:53Z |
go
| 2022-07-28T16:43:54Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,278 |
["server/jetstream_cluster.go"]
|
panic: runtime error: invalid memory address or nil pointer dereference at (*stream).isMigrating
|
Observing a panic while starting jetstream on the 3rd (last) node of a js cluster
```
[1] 2022/07/21 04:42:32.138888 [[36mDBG[0m] Exiting stream monitor for '$G > DUMMY' [S-R3F-F3VzVqFJ]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x80e7f8]
goroutine 77 [running]:
github.com/nats-io/nats-server/v2/server.(*stream).isMigrating(0xc00048de18)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1937 +0x38
github.com/nats-io/nats-server/v2/server.(*jetStream).monitorStream(0xc00012a370, 0x0, 0xc00007e3f0, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1773 +0xf0c
github.com/nats-io/nats-server/v2/server.(*jetStream).processClusterCreateStream.func1()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:2765 +0x2a
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:3016 +0x87
```
Attached -DV logs
[nats.log](https://github.com/nats-io/nats-server/files/9156234/nats.log)
**Versions of nats-server and affected client libraries used:**
nats-server: 2.8.4
nats.go: 1.16.0
**OS/Container environment:**
docker
linux
|
https://github.com/nats-io/nats-server/issues/3278
|
https://github.com/nats-io/nats-server/pull/3279
|
51b6d5233f1da52804cea94ecd067874e1476417
|
231fdea6c4fcdbcd9c120c3046f50aa25d7ae451
| 2022-07-21T04:59:04Z |
go
| 2022-07-21T16:05:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,273 |
["server/errors.json", "server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/jetstream_errors_generated.go", "server/jetstream_test.go", "server/stream.go"]
|
Add detail to JSStreamNameExistErr
|
The text received by the client on this error
```
JSStreamNameExistErr = 10058, ///< Stream name already in use
```
is
```
stream name already in use
```
This issue is to request an update to the error text to something that conveys
```
stream name already exists with different settings
```
...to highlight that calling `AddStream` on a stream name already in use *can* succeed, but doesn't when the existing stream has different settings than are specified in the current call.
This distinction may be obvious to the caller once it is known, however it was difficult to find supporting documentation, and this simple wording change would have made the problem clear.
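In nats.go terms the distinction looks like this (hedged sketch; stream name and settings are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// First create succeeds.
	js.AddStream(&nats.StreamConfig{Name: "S", MaxMsgs: 100})
	// Same name, same settings: succeeds (idempotent).
	_, err = js.AddStream(&nats.StreamConfig{Name: "S", MaxMsgs: 100})
	fmt.Println("same settings:", err)
	// Same name, different settings: fails with 10058, whose text
	// "stream name already in use" hides the real cause.
	_, err = js.AddStream(&nats.StreamConfig{Name: "S", MaxMsgs: 200})
	fmt.Println("different settings:", err)
}
```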
|
https://github.com/nats-io/nats-server/issues/3273
|
https://github.com/nats-io/nats-server/pull/3280
|
231fdea6c4fcdbcd9c120c3046f50aa25d7ae451
|
a02a617c0571b4d7c1debe69ecd5e75ffadbd33a
| 2022-07-18T15:25:46Z |
go
| 2022-07-21T16:53:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,264 |
["server/consumer.go", "server/jetstream_cluster_test.go", "server/jetstream_test.go"]
|
JS consumer Maximum Deliveries update has no effect
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
- `nats-server: 2.8.4 `
- `nats.go: 1.16.0 `
#### OS/Container environment:
- `docker`
- `macOS Monterey`
#### Steps or code to reproduce the issue:
1. Create stream
2. Create JS consumer on stream with Maximum Deliveries set to **2**
3. Verify that messages are not delivered more than **2 times**
4. Update JS consumer and set Maximum Deliveries to **4**
#### Expected result:
- Messages are delivered max **4 times**
#### Actual result:
- Messages are still delivered max **2 times**
#### Proposed changes:
- On `updateConfig` function, set `o.maxdc` to the new value
- Draft PR https://github.com/nats-io/nats-server/pull/3265
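A hedged nats.go repro of steps 2-4 (stream name `S` is assumed to already exist):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	cfg := &nats.ConsumerConfig{
		Durable:    "DUR",
		AckPolicy:  nats.AckExplicitPolicy,
		AckWait:    time.Second,
		MaxDeliver: 2,
	}
	if _, err := js.AddConsumer("S", cfg); err != nil {
		log.Fatal(err)
	}

	// Bump Maximum Deliveries from 2 to 4.
	cfg.MaxDeliver = 4
	if _, err := js.UpdateConsumer("S", cfg); err != nil {
		log.Fatal(err)
	}
	// Bug: unacked messages were still capped at 2 deliveries because
	// the running consumer never picked up the new o.maxdc value.
}
```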
|
https://github.com/nats-io/nats-server/issues/3264
|
https://github.com/nats-io/nats-server/pull/3265
|
918ce307affc39a60ceb8fb0ed0ecaecea61b793
|
a3e62f000c99402c3334c811945ec20f7c30593c
| 2022-07-14T19:21:11Z |
go
| 2022-07-21T22:06:41Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,262 |
["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
JetStream - Updating consumer from R3 to R1 doesn't work
|
`nats-server`: v2.8.4
`nats.go`: v1.16.0
`NATS cli`: v0.0.33
#### Step to reproduce:
1. Setup a 3-nodes JetStream cluster.
2. Create a stream with a replica count 3.
3. Create a consumer using `nats.go` client with a replica count 3.
4. Publish and consume some messages.
5. Update the consumer's replica count to 1 using `nats.go` client
#### Expected results
1. Replica count for the consumer should get updated to 1.
2. Consumer should have no replicas or metadata on replica nodes.
3. `nats con info` should not show any replicas field for a R1 consumer

4. When the leader node is killed while consuming, the client should get a "Consumer Migration" message (since it is R1 now).
#### Actual results
1. `nats con info` does show replica count as 1 (which is correct)
2. Consumer still have its metadata on other nodes and it's being updated regularly (confirmed with `ls -l`)
3. `nats con info` shows replicas field

4. When the leader node is killed, the client gets a "Leadership Change" message (which means it is still >R1)
|
https://github.com/nats-io/nats-server/issues/3262
|
https://github.com/nats-io/nats-server/pull/3293
|
5f12c244abee08023ab837511aefb2dfada8b75e
|
3358205de305cbdd6e7e9b74cc4065fdf38dd738
| 2022-07-14T08:41:44Z |
go
| 2022-07-27T01:56:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,244 |
["server/client.go", "server/mqtt.go", "server/mqtt_test.go", "server/opts.go"]
|
Support MQTT QoS=2
|
## Feature Request
I'm using NATS as an MQTT server together with Home Assistant, and the HASS.Agent Windows companion application. The HASS.Agent application uses QoS=2 when it wants to enable commands to run on the Windows machine from Home Assistant. When this is enabled, the MQTT connection is disconnected and a message about QoS=2 not being supported is logged.
#### Proposed Change:
Support MQTT QoS=2.
#### Who Benefits From The Change(s)?
People using Home Assistant and HASS.Agent with NATS MQTT service.
#### Alternative Approaches
None at the moment, because HASS.Agent doesn't support setting QoS level.
|
https://github.com/nats-io/nats-server/issues/3244
|
https://github.com/nats-io/nats-server/pull/4349
|
63f81ae0d8cd6ef781de24a986ad9975590c05c6
|
bd93f087d4a98c3b1fcac014965c62c1290b8e0e
| 2022-07-06T11:13:58Z |
go
| 2023-08-29T18:09:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,239 |
[".goreleaser.yml"]
|
Nats-server installation location changed in latest version: 2.8.4
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server: 2.8.4
#### OS/Container environment:
CentOS
#### Steps or code to reproduce the issue:
sudo yum install nats-server
which nats-server
#### Expected result:
/usr/local/bin/nats-server
#### Actual result:
/usr/bin/nats-server
|
https://github.com/nats-io/nats-server/issues/3239
|
https://github.com/nats-io/nats-server/pull/3242
|
90caf12d96260590da22a37fdf4b28a4c4a567fc
|
697811cd9bfa6cffdee3142a3455b57205027012
| 2022-07-04T09:07:23Z |
go
| 2022-07-06T18:04:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,226 |
["server/jetstream_cluster_test.go", "server/stream.go"]
|
Stream mirrors cannot have filters anymore
|
I think we used to allow filters on stream mirrors; the terraform tests passed with that set, but now they fail:
```json
{
"name": "X",
"retention": "limits",
"max_consumers": -1,
"max_msgs_per_subject": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msg_size": -1,
"storage": "file",
"discard": "old",
"num_replicas": 1,
"mirror": {
"name": "LON",
"filter_subject":"js.in.orders_69"
},
"sealed": false,
"deny_delete": false,
"deny_purge": true,
"allow_rollup_hdrs": false
}
```
```
$ nats req '$JS.API.STREAM.CREATE.X' "$(cat X.json)"
14:22:23 Sending request on "$JS.API.STREAM.CREATE.X"
14:22:23 Received with rtt 945.864µs
{"type":"io.nats.jetstream.api.v1.stream_create_response","error":{"code":400,"err_code":10033,"description":"stream mirrors can not contain filtered subjects"}}
|
https://github.com/nats-io/nats-server/issues/3226
|
https://github.com/nats-io/nats-server/pull/3227
|
6bd14e1b7a6825f00cb4cbda6484008398387f71
|
4a94a172c41b90e6de82e59e2ebce2aa2d89c907
| 2022-06-29T14:23:02Z |
go
| 2022-06-29T22:39:07Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,196 |
["server/auth.go", "server/leafnode.go", "server/leafnode_test.go", "server/opts.go"]
|
sample config with leafnode and nkeys
|
Is it possible to add to the docs a sample config with leaf nodes secured by nkey?
I have found an ugly solution using a credentials file, but it requires the presence of a JWT token.
I kept the JWT token from the docs and put the correct USER NKEY SEED there.
Is there a better solution?
on leafnode `nats.conf`
```
leafnodes {
remotes = [
{
url: "nats-leaf://@server_url:7422"
account: someuser
credentials: "server.creds"
}
]
}
```
where `server.creds`
is a copy of the example creds from the docs
```
**** this part is kept as it is in the docs ****
-----BEGIN NATS USER JWT-----
eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJUVlNNTEtTWkJBN01VWDNYQUxNUVQzTjRISUw1UkZGQU9YNUtaUFhEU0oyWlAzNkVMNVJBIiwiaWF0IjoxNTU4MDQ1NTYyLCJpc3MiOiJBQlZTQk0zVTQ1REdZRVVFQ0tYUVM3QkVOSFdHN0tGUVVEUlRFSEFKQVNPUlBWV0JaNEhPSUtDSCIsIm5hbWUiOiJvbWVnYSIsInN1YiI6IlVEWEIyVk1MWFBBU0FKN1pEVEtZTlE3UU9DRldTR0I0Rk9NWVFRMjVIUVdTQUY3WlFKRUJTUVNXIiwidHlwZSI6InVzZXIiLCJuYXRzIjp7InB1YiI6e30sInN1YiI6e319fQ.6TQ2ilCDb6m2ZDiJuj_D_OePGXFyN3Ap2DEm3ipcU5AhrWrNvneJryWrpgi_yuVWKo1UoD5s8bxlmwypWVGFAA
------END NATS USER JWT------
************************* IMPORTANT *************************
NKEY Seed printed below can be used to sign and prove identity.
NKEYs are sensitive and should be treated as secrets.
-----BEGIN USER NKEY SEED-----
REAL_USER_PRIVATE_KEY
------END USER NKEY SEED------
*************************************************************
```
Server config for reference:
```
leafnodes {
port: 7422
authorization: {
users: [ {user: UDXU4RCSJNZOIQHZNWXHXORDPRTGNJAHAHFRGZNEEJCPQTT2M7NLCNF4, account: someuser}]
}
}
```
|
https://github.com/nats-io/nats-server/issues/3196
|
https://github.com/nats-io/nats-server/pull/4940
|
34911b5f46bf7056a9242c4d31733cad63a15a0c
|
9537d73e30fd05aeb71b21440f568cf1472f4885
| 2022-06-16T15:59:15Z |
go
| 2024-01-10T15:10:58Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,188 |
["server/jetstream.go"]
|
Account.exports.services Data Race
|
Version: `v2.8.4`
Running some tests, I noticed a data race. I only managed to get it to show up once.
```
WARNING: DATA RACE
Read at 0x00c000263410 by goroutine 30:
runtime.mapaccess2_faststr()
/usr/lib/go/src/runtime/map_faststr.go:108 +0x0
github.com/nats-io/nats-server/v2/server.(*Account).getServiceExport()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:2570 +0x71
github.com/nats-io/nats-server/v2/server.(*Server).checkJetStreamExports()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream.go:459 +0x31
github.com/nats-io/nats-server/v2/server.(*Server).addSystemAccountExports()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/events.go:1018 +0x3bc
github.com/nats-io/nats-server/v2/server.(*Server).setSystemAccount()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:1268 +0x664
github.com/nats-io/nats-server/v2/server.(*Server).SetSystemAccount()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:1168 +0x92
github.com/nats-io/nats-server/v2/server.(*Server).SetDefaultSystemAccount()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:1212 +0xcf
github.com/nats-io/nats-server/v2/server.(*Server).Start()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:1685 +0x1071
Previous write at 0x00c000263410 by goroutine 27:
runtime.mapassign_faststr()
/usr/lib/go/src/runtime/map_faststr.go:203 +0x0
github.com/nats-io/nats-server/v2/server.(*Account).addServiceExportWithResponseAndAccountPos()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:1017 +0x4b8
github.com/nats-io/nats-server/v2/server.(*Account).AddServiceExport()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:973 +0x4e
github.com/nats-io/nats-server/v2/server.(*Server).setupJetStreamExports()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream.go:466 +0x27
github.com/nats-io/nats-server/v2/server.(*Server).enableJetStream()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream.go:382 +0xc71
github.com/nats-io/nats-server/v2/server.(*Server).EnableJetStream()
/home/tyler/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream.go:207 +0x651
```
|
https://github.com/nats-io/nats-server/issues/3188
|
https://github.com/nats-io/nats-server/pull/3189
|
a83001570397d6faaaa6a47776fb61543fef23a0
|
7ca3831289b30fe157a48ff64f6c4fef2a11de25
| 2022-06-13T23:16:07Z |
go
| 2022-06-14T17:49:54Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,178 |
["server/errors.go", "server/jetstream_cluster_test.go", "server/jetstream_helpers_test.go", "server/leafnode.go", "server/opts.go"]
|
Remote PULL consumer SPOF
|
## Problem description
In a system where two NATS clusters are connected using leaf node connections, clients connected to the remote NATS cluster cannot consume messages from a stream when the NATS server hosting the consumer leader is unable to establish leaf node connections due to network failure. In this scenario a connection to the remote system is still available from other nodes in the cluster, but messages cannot be routed to the consumer leader, resulting in an unexpected point of failure for the consuming application.
## More Details
<img width="932" alt="image" src="https://user-images.githubusercontent.com/48684307/172841297-0eec4575-4634-4f35-bd82-fea7841edc60.png">
To consume message from the consumer C attached to the stream S, the Application publishes NATS messages on the subject `$JS.a.API.CONSUMER.MSG.NEXT.S.C`
Messages published from the application in Cluster-B cannot reach the consumer leader which resides on `Cluster-A.nats-server-1`.
Current NATS routing does not allow message forwarding within the Cluster-A even if one instance of the Cluster-A has a healthy leaf connection to the Cluster-B.
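For context, the consume path from Cluster-B boils down to a request like this (hedged sketch; domain `a`, stream `S`, consumer `C` as in the diagram, URL is an assumption):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Application connected to Cluster-B.
	nc, err := nats.Connect("nats://cluster-b:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Raw next-message request against the remote JS domain. This has
	// to be routed over a leaf connection to the consumer leader in
	// Cluster-A; if that one server loses its leaf link, requests
	// time out even though other leaf links are healthy.
	msg, err := nc.Request(
		"$JS.a.API.CONSUMER.MSG.NEXT.S.C",
		[]byte(`{"batch":1,"expires":5000000000}`), // expires in ns
		5*time.Second,
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got: %s\n", msg.Data)
}
```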
## Remediation
The current remediation for this is to manually move the consumer leader to a node in the cluster that still has leaf node connections, while this is possible to automate externally it would be a beneficial enhancement to JetStream to have this as a built in feature.
|
https://github.com/nats-io/nats-server/issues/3178
|
https://github.com/nats-io/nats-server/pull/3243
|
1cc63c023d748969d8a64936ab71ba2566956c41
|
8b7f7effbd9ccbf5ec0c41edead9765f1a91684d
| 2022-06-09T11:59:17Z |
go
| 2022-07-05T20:58:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,177 |
["server/client.go", "server/events_test.go"]
|
Subscription with Queue Group to $SYS.ACCOUNT.*.DISCONNECT only gets messages for disconnects on the same node
|
## Defect
If using a subscription with a queue group, NATS sends DISCONNECT messages only to clients connected to the same node where the disconnect is happening. This does not happen if NOT using a queue group.
#### Versions of `NATS.Client` and `nats-server`:
2 nodes NATS cluster running on Windows - Version: 2.8.4 - Git: [66524ed] - Go build: go1.17.10
Browser using client nats.ws (v1.81)
C# Console App using nats-net (v0.14.6)
#### Steps or code to reproduce the issue:
The Console App subscribes (using a queue group) to the above $SYS subject and should get all client disconnect messages, since it is the only consumer of the above subscription/queue. Only one instance of the consumer is running in my local dev environment; I even tried many different unique random strings as a queue name, just to be sure there were no zombie consumers around...
Subscription code:
`IAsyncSubscription sAsyncClientDisconnect = sys_nats.SubscribeAsync(“$SYS.ACCOUNT.*.DISCONNECT”, “ClientDisconnectsQueue”, ClientDisconnectsHandler);`
If a browser client disconnects from the same node where the Console App is connected then the message is fired and gets to the handler.
If the Console App and Browser are connected to different nodes then the message is not fired. In other words I'm getting only messages for clients disconnecting on the same node.
If I remove the queue from subscription (ie I only do SubscribeAsync without passing the name of the queue):
`IAsyncSubscription sAsyncClientDisconnect = sys_nats.SubscribeAsync(“$SYS.ACCOUNT.*.DISCONNECT”, ClientDisconnectsHandler);`
then messages are correctly received doesn’t matter on which node the Console App and browser are connected.
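A hedged nats.go equivalent of the repro (URL and system-account credentials are assumptions):

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect as a system-account user.
	nc, err := nats.Connect("nats://node1:4222",
		nats.UserCredentials("sys.creds"))
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// With a queue group, only disconnects happening on the node this
	// client is attached to arrive; without the queue name, events
	// from every node in the cluster are delivered.
	_, err = nc.QueueSubscribe("$SYS.ACCOUNT.*.DISCONNECT",
		"ClientDisconnectsQueue", func(m *nats.Msg) {
			log.Printf("disconnect: %s", m.Data)
		})
	if err != nil {
		log.Fatal(err)
	}
	select {} // block forever
}
```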
### TRACE (possibile sensitive info replaced with arbitrary strings like AAAA, XXXXX, ...)
**NODE2 (node2_10.0.2.56) (On Browser disconnect)**
```
[6500] 2022/06/09 10:20:51.074098 [DBG] 176.105.156.140/172.16.249.252:28412 - wid:9 - "v1.8.1:nats.ws" - Client connection closed: Client Closed
[6500] 2022/06/09 10:20:51.074098 [TRC] 10.0.2.55:2222 - rid:5 - ->> [RMSG AAAAA $SYS.ACCOUNT.XXXXX.DISCONNECT + ClientDisconnectsQueue 1631]
```
**NODE1 (node1_10.0.2.55) (Where console app is connected to)**
```
[7496] 2022/06/09 10:20:51.061501 [TRC] 10.0.2.56:59849 - rid:16 - <<- [RMSG AAAAAAAAAAAAAAA $SYS.ACCOUNT.XXXXXXXXXXXX.DISCONNECT + ClientDisconnectsQueue 1631]
[7496] 2022/06/09 10:20:51.061501 [TRC] 10.0.2.56:59849 - rid:16 - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.server.advisory.v1.client_disconnect\",\"id\":\"KFrmR6iqRgcgh9odKOt4Mz\",\"timestamp\":\"2022-06-09T08:20:51.0740984Z\",\"server\":{\"name\":\"node2_10.0.2.56\",\"host\":\"0.0.0.0\",\"id\":\"NDG7UQ5I37GIQ5FGQEABY7LKKRB3V77Y5FZXH42NFOU56FRAJ2IVBKB4\",\"cluster\":\"mynatscluster\",\"ver\":\"2.8.4\",\"seq\":32,\"jetstream\":false,\"time\":\"2022-06-09T08:20:51.0740984Z\"},\"client\":{\"start\":\"2022-06-09T08:20:38.7493767Z\",\"host\":\"172.16.249.252\",\"id\":9,\"acc\":\" XXXXXXXXXXXX\",\"user\":\"ZZZZZZZZZZZZZ\",\"lang\":\"nats.ws\",\"ver\":\"1.8.1\",\"rtt\":82635500,\"stop\":\"2022-06-09T08:20:51.0740984Z\",\"jwt\":\"JJJJJJJJJJJJJ",\"issuer_key\":\"XXXXXXXXXXXX\",\"name_tag\":\"service006@demo\",\"tags\":[\"service006@demo\"],\"kind\":\"Client\",\"client_type\":\"websocket\"},\"sent\":{\"msgs\":7,\"bytes\":2121},\"received\":{\"msgs\":6,\"bytes\":10369},\"reason\":\"Client Closed\"}"]
```
It looks like node1 does receive the disconnect message from node2 on queue group ClientDisconnectsQueue, but the message does not show up in the handler of the Console App connected to node1.
#### Expected result:
Receive DISCONNECT messages from all nodes on Subscription with queue group
#### Actual result:
using queue group subscription: getting DISCONNECT messages only for disconnects happening on the same node
|
https://github.com/nats-io/nats-server/issues/3177
|
https://github.com/nats-io/nats-server/pull/3185
|
0794eafa6f0eeefd0d71899d29932253381a4440
|
0bfe193043326469c30625fab520390093c74372
| 2022-06-09T11:57:13Z |
go
| 2022-06-12T16:30:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,167 |
["server/filestore.go", "server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_errors.go", "server/raft.go", "server/stream.go"]
|
Under heavy load a stream with lots of R>1 consumer is hard to delete
|
When a system is under load and behaving incorrectly, a stream delete should act like a big hammer and just work.
Currently the system sometimes does not respond, or responds slowly, and many times does not delete the stream or help stabilize the system.
2.8.4
|
https://github.com/nats-io/nats-server/issues/3167
|
https://github.com/nats-io/nats-server/pull/3168
|
c6946dd8a0318d1ab6ff8a0f4a163104d1e22ae9
|
301eb11725d70bbd0140ce43d9e464940da6a0a6
| 2022-06-03T22:28:27Z |
go
| 2022-06-06T13:01:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,155 |
["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/jetstream_test.go", "server/stream.go"]
|
Advisory updates/refinements
|
When a server starts up we see a lot of advisories as things recover from disk, it would be good to stop these initial advisories during recovery:
```
20:46:58 [JS API] $JS.API.CONSUMER.DURABLE.CREATE.ORDERS_54.C2 one @ one
20:46:58 [Stream Modify] KV_X
20:46:58 [JS API] $JS.API.CONSUMER.DURABLE.CREATE.ORDERS_87.C4 one @ one
```
There is one per consumer on the node, and the stream modify ones are for R1 streams hosted on the server that restarted.
Additionally, I think the stream msg get API should not publish advisories; in the realm of KV it is essentially equivalent to a pull request. On a busy network we see many, many KV GETs, and each would raise this advisory, which is especially bad since the advisory would also include the full content of the data that was fetched.
So I propose we don't raise advisories for `JS.API.STREAM.MSG.GET.KV_X`.
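For context, a KV read from a Go client is served through this msg-get API, which is why each get would raise an advisory today; the bucket and key names below are placeholders:
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	kv, err := js.KeyValue("X") // backed by stream KV_X
	if err != nil {
		log.Fatal(err)
	}
	// Each Get is served via $JS.API.STREAM.MSG.GET.KV_X under the hood.
	entry, err := kv.Get("some.key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(entry.Value()))
}
```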
|
https://github.com/nats-io/nats-server/issues/3155
|
https://github.com/nats-io/nats-server/pull/3156
|
0bc1d96f65d1ac9be246d30c9a10fd97b1a05ce8
|
1aa8cc716e05fc6440f36aa465780abd6ac94223
| 2022-05-27T20:54:06Z |
go
| 2022-05-30T17:09:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,151 |
["server/accounts.go", "server/accounts_test.go", "server/errors.go", "server/sublist.go", "server/sublist_test.go"]
|
Mappings cannot have spaces in mapping functions
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
a52f12613e6eb53097d6d563d9b9681b88a037cb
#### OS/Container environment:
Linux EL8
#### Steps or code to reproduce the issue:
Use configuration:
```
mappings = {
"in.registration.*.>": "registration.{{ partition(5, 1) }}.{{wildcard(1)}}.>"
}
```
```
$ nats-server --config /tmp/server.conf
nats-server: /tmp/server.conf:2:4: Error adding mapping for "in.registration.*.>" to "registration.{{partition(5, 1)}}.{{wildcard(1)}}.>" : invalid subject
```
But if we adjust the subject mapping like this (remove all spaces):
```
mappings = {
"in.registration.*.>": "registration.{{partition(5,1)}}.{{wildcard(1)}}.>"
}
```
It works.
This is because https://github.com/nats-io/nats-server/blob/a52f12613e6eb53097d6d563d9b9681b88a037cb/server/accounts.go#L618-L621 checks the transform subject for validity before any parsing is done; I think we should probably remove that check. Spaces in these mappings greatly improve readability, and we specifically supported them in the code when implementing it.
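One possible approach, shown here as a sketch only (not the fix from PR #3231): strip whitespace inside `{{ ... }}` tokens before validating the destination subject, so spaced mapping functions pass the check.
```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// tokenRE matches a single mapping-function token such as "{{ partition(5, 1) }}".
var tokenRE = regexp.MustCompile(`\{\{[^}]*\}\}`)

// normalizeMapping removes spaces inside {{...}} tokens so the
// destination validates like its space-free form.
func normalizeMapping(dest string) string {
	return tokenRE.ReplaceAllStringFunc(dest, func(tok string) string {
		return strings.ReplaceAll(tok, " ", "")
	})
}

func main() {
	fmt.Println(normalizeMapping("registration.{{ partition(5, 1) }}.{{wildcard(1)}}.>"))
	// Output: registration.{{partition(5,1)}}.{{wildcard(1)}}.>
}
```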
|
https://github.com/nats-io/nats-server/issues/3151
|
https://github.com/nats-io/nats-server/pull/3231
|
8a94e14fe71703cb8a7ef5d7c2cec00fb68040db
|
e46b00639a88505d73f4f90dc8e3be3818b06cbe
| 2022-05-26T14:20:30Z |
go
| 2022-06-30T21:21:53Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,119 |
["server/jetstream_test.go", "server/stream.go"]
|
unexpected message redelivery by sourced stream
|
This appears to have to do with stream names and the resulting recovery ordering from Go's filesystem listing, which at the very least may alter timing.
It further requires pending messages.
Digging around in `startingSequenceForSources`, I got the impression that sequences may have something to do with it,
but I am not 100% sure.
```go
func TestJetStreamWorkQueueSourceNamingRestart(t *testing.T) {
	s := RunBasicJetStreamServer()
	if config := s.JetStreamConfig(); config != nil {
		defer removeDir(t, config.StoreDir)
	}
	defer s.Shutdown()
	nc, js := jsClientConnect(t, s)
	defer nc.Close()
	_, err := js.AddStream(&nats.StreamConfig{Name: "C1", Subjects: []string{"foo.*"}})
	require_NoError(t, err)
	_, err = js.AddStream(&nats.StreamConfig{Name: "C2", Subjects: []string{"bar.*"}})
	require_NoError(t, err)
	sendCount := 10
	for i := 0; i < sendCount; i++ {
		_, err = js.Publish(fmt.Sprintf("foo.%d", i), nil)
		require_NoError(t, err)
		_, err = js.Publish(fmt.Sprintf("bar.%d", i), nil)
		require_NoError(t, err)
	}
	// TODO Test will always pass if pending is 0
	pending := 1
	// For some yet unknown reason this failure seems to require 2 streams to source from.
	// This might possibly be timing, as the test sometimes passes.
	streams := 2
	totalPending := uint64(streams * pending)
	totalMsgs := streams * sendCount
	totalNonPending := streams * (sendCount - pending)
	// TODO Test will always pass if this is named A (go returns directory names sorted)
	// A: this stream is recovered BEFORE C1/C2; tbh, I'd expect this to be the case to fail, but it isn't.
	// D: this stream is recovered AFTER C1/C2, which is the case that fails (perhaps it is timing).
	srcName := "D"
	_, err = js.AddStream(&nats.StreamConfig{
		Name:      srcName,
		Replicas:  1,
		Retention: nats.WorkQueuePolicy,
		Sources:   []*nats.StreamSource{{Name: "C1"}, {Name: "C2"}}})
	require_NoError(t, err)
	// Add a consumer and consume all but totalPending messages.
	_, err = js.AddConsumer(srcName, &nats.ConsumerConfig{Durable: "dur", AckPolicy: nats.AckExplicitPolicy})
	require_NoError(t, err)
	sub, err := js.PullSubscribe("", "dur", nats.BindStream(srcName))
	require_NoError(t, err)
	checkFor(t, 5*time.Second, time.Millisecond*200, func() error {
		if ci, err := js.ConsumerInfo(srcName, "dur"); err != nil {
			return err
		} else if ci.NumPending != uint64(totalMsgs) {
			return fmt.Errorf("not enough messages: %d", ci.NumPending)
		}
		return nil
	})
	// Consume all but the messages we want pending.
	msgs, err := sub.Fetch(totalNonPending)
	require_NoError(t, err)
	require_True(t, len(msgs) == totalNonPending)
	for _, m := range msgs {
		err = m.AckSync()
		require_NoError(t, err)
	}
	ci, err := js.ConsumerInfo(srcName, "dur")
	require_NoError(t, err)
	require_True(t, ci.NumPending == totalPending)
	si, err := js.StreamInfo(srcName)
	require_NoError(t, err)
	require_True(t, si.State.Msgs == totalPending)
	// Restart server.
	nc.Close()
	sd := s.JetStreamConfig().StoreDir
	s.Shutdown()
	time.Sleep(200 * time.Millisecond)
	s = RunJetStreamServerOnPort(-1, sd)
	defer s.Shutdown()
	checkFor(t, 10*time.Second, 200*time.Millisecond, func() error {
		hs := s.healthz()
		if hs.Status == "ok" && hs.Error == _EMPTY_ {
			return nil
		}
		return fmt.Errorf("healthz %s %s", hs.Error, hs.Status)
	})
	nc, js = jsClientConnect(t, s)
	defer nc.Close()
	si, err = js.StreamInfo(srcName)
	require_NoError(t, err)
	if si.State.Msgs != totalPending {
		t.Fatalf("Expected %d messages on restart, got %d", totalPending, si.State.Msgs)
	}
}
```
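The test above lives in the server package, so it can presumably be run in isolation with:
```
go test -run TestJetStreamWorkQueueSourceNamingRestart -v ./server
```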
|
https://github.com/nats-io/nats-server/issues/3119
|
https://github.com/nats-io/nats-server/pull/3122
|
2cb2a8ebbc7137827dbe5411277d739f906c2d9e
|
35e373f6e6e4690b929d08cc0e870fe9a707ed77
| 2022-05-11T21:39:41Z |
go
| 2022-05-12T23:01:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,117 |
["server/consumer.go"]
|
JetStream: cluster processing of a pull consumer creation may panic
|
## Defect
Server panic due to pull consumer's waitQueue being nil
#### Versions of `nats-server` and affected client libraries used:
Server v2.8.2
#### OS/Container environment:
All
#### Steps or code to reproduce the issue:
Not sure
#### Expected result:
No panic
#### Actual result:
Get this panic:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x781c5b]
goroutine 276 [running]:
github.com/nats-io/nats-server/v2/server.(*waitQueue).isEmpty(...)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:2413
github.com/nats-io/nats-server/v2/server.(*consumer).processWaiting(0xc00047bfd8)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:2770 +0x3b
github.com/nats-io/nats-server/v2/server.(*consumer).infoWithSnap(0xc000ac5c00, 0x0)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:2075 +0x5bb
github.com/nats-io/nats-server/v2/server.(*consumer).info(...)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:2021
github.com/nats-io/nats-server/v2/server.(*jetStream).processClusterCreateConsumer(0xc000338420, 0xc0016c7290, 0x0, 0x1)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3276 +0xf65
github.com/nats-io/nats-server/v2/server.(*jetStream).processConsumerAssignment(0xc000338420, 0xc0016c7290)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:2999 +0x86f
github.com/nats-io/nats-server/v2/server.(*jetStream).applyMetaSnapshot(0xc000338420, {0xc00163c066, 0x8059741980594afc, 0x8059c65380599d36}, 0x1)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1125 +0x8c6
github.com/nats-io/nats-server/v2/server.(*jetStream).applyMetaEntries(0xc000338420, {0xc00000eef0, 0x1, 0x0}, 0x1, 0xc000117b98)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1383 +0x818
github.com/nats-io/nats-server/v2/server.(*jetStream).monitorCluster(0xc000338420)
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:887 +0xa65
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
#011/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/server.go:3013 +0x87
```
@ripienaar reported this, but not sure if this was during a server/cluster restart or not. I have not been able to reproduce the situation where the processing of a consumer assignment detects that the consumer existed but the consumer's waitQueue is nil.
I know that this was introduced in 2.8.2 with PR #3099, because waitQueue.len() used to protect against waitQueue being nil and I missed that. So the fix is easy, but I am not sure how to reproduce this, or whether it is a sign of a deeper problem. Will have a PR soon to fix the code.
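A minimal sketch of the kind of guard described above, assuming the fix simply restores the nil protection that `waitQueue.len()` used to provide (the actual code in `server/consumer.go` may differ):
```go
// Sketch only: guard against a nil waitQueue receiver before use,
// as len() used to do prior to PR #3099.
func (wq *waitQueue) isEmpty() bool {
	return wq == nil || wq.len() == 0
}
```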
|
https://github.com/nats-io/nats-server/issues/3117
|
https://github.com/nats-io/nats-server/pull/3118
|
56d06fd8eb7ce251b9c0aa0861da6471e5b133e1
|
7da46d546d61333f067ecf110fa8eefa87aa832c
| 2022-05-11T17:00:06Z |
go
| 2022-05-11T19:15:06Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,199 |
["server/errors.json", "server/jetstream_api.go", "server/jetstream_cluster_test.go", "server/jetstream_errors_generated.go", "server/jetstream_test.go"]
|
restore stream error
|
nats-server version 2.8.2
natscli version 0.32
nats: error: restore failed: restore failed: error restoring consumer ["history"]: stat /data/jetstream/$G/streams/darksky_forecast_history/obs/history/meta.inf: no such file or directory (10062)
If the stream does not have a consumer, the restore succeeds.
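An assumed reproduction with the nats CLI, based on the error above (the backup directory is a placeholder):
```
nats stream backup darksky_forecast_history ./backup   # stream has consumer "history"
nats stream restore darksky_forecast_history ./backup  # fails with error 10062
```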

|
https://github.com/nats-io/nats-server/issues/3199
|
https://github.com/nats-io/nats-server/pull/3205
|
7a0c63af552c7024fe644e37d956acd4ead784c3
|
314abd60289c54a565bbd71730df53f76e8d1dc3
| 2022-05-10T15:52:47Z |
go
| 2022-06-21T16:04:44Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,114 |
["server/filestore.go", "server/filestore_test.go", "server/memstore.go", "server/memstore_test.go", "server/store.go"]
|
panic: runtime error: makeslice: cap out of range
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
```
[54] 2022/05/10 03:53:10.909500 [INF] Starting nats-server
[54] 2022/05/10 03:53:10.909527 [INF] Version: 2.8.0
[54] 2022/05/10 03:53:10.909536 [INF] Git: [90721ee]
[54] 2022/05/10 03:53:10.909544 [DBG] Go build: go1.17.9
[54] 2022/05/10 03:53:10.909553 [INF] Name: NCFLZXZ6MUOIW6H4BRRNMQMOHFYAENP6OEU3XUYYKU3WHHE4ZRDPFRBS
[54] 2022/05/10 03:53:10.909572 [INF] ID: NCFLZXZ6MUOIW6H4BRRNMQMOHFYAENP6OEU3XUYYKU3WHHE4ZRDPFRBS
[54] 2022/05/10 03:53:10.909595 [DBG] Created system account: "$SYS"
```
Messages were sent and consumed using nats.py 2.1.0.
#### OS/Container environment:
GKE
#### Steps or code to reproduce the issue:
- Install NATS with Helm, enable JetStream file storage
  - Stored on a volume with a storage class type=pd-ssd and WaitForFirstConsumer
- Send tens of thousands of messages without consuming them (a publish sketch follows this list).
- Attempt to scale down the node that nats is running on for an upgrade and recover on a new node.
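A minimal sketch of such a load generator; the subject `ingest.data` and the message count are placeholders (the log below only shows the affected stream was named `INGEST`):
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	// Publish tens of thousands of messages that no consumer reads.
	for i := 0; i < 50000; i++ {
		if _, err := js.Publish("ingest.data", []byte(fmt.Sprintf("msg-%d", i))); err != nil {
			log.Fatal(err)
		}
	}
}
```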
#### Expected result:
Not to panic, to recover gracefully, or at least to provide a more instructive error message.
#### Actual result:
```
[3426] 2022/05/10 03:10:51.017008 [INF] Starting nats-server
[3426] 2022/05/10 03:10:51.017092 [INF] Version: 2.8.0
[3426] 2022/05/10 03:10:51.017096 [INF] Git: [90721ee]
[3426] 2022/05/10 03:10:51.017100 [INF] Name: queue-nats-0
[3426] 2022/05/10 03:10:51.017110 [INF] Node: 4pcPsiNg
[3426] 2022/05/10 03:10:51.017116 [INF] ID: NC6TCJ2D5KDDFEJ37JNOLTZIBEDERC6T3MMGN6ZMVNHDGFW6IM3ETFIJ
[3426] 2022/05/10 03:10:51.017146 [INF] Using configuration file: /etc/nats-config/nats.conf
[3426] 2022/05/10 03:10:51.018272 [INF] Starting http monitor on 0.0.0.0:8222
[3426] 2022/05/10 03:10:51.018355 [INF] Starting JetStream
[3426] 2022/05/10 03:10:51.018534 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[3426] 2022/05/10 03:10:51.018538 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[3426] 2022/05/10 03:10:51.018540 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[3426] 2022/05/10 03:10:51.018541 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[3426] 2022/05/10 03:10:51.018543 [INF]
[3426] 2022/05/10 03:10:51.018545 [INF] https://docs.nats.io/jetstream
[3426] 2022/05/10 03:10:51.018546 [INF]
[3426] 2022/05/10 03:10:51.018548 [INF] ---------------- JETSTREAM ----------------
[3426] 2022/05/10 03:10:51.018556 [INF] Max Memory: 1.00 GB
[3426] 2022/05/10 03:10:51.018559 [INF] Max Storage: 100.00 GB
[3426] 2022/05/10 03:10:51.018561 [INF] Store Directory: "/data/jetstream"
[3426] 2022/05/10 03:10:51.018562 [INF] -------------------------------------------
[3426] 2022/05/10 03:10:51.321612 [INF] Restored 13,833 messages for stream '$G > INGEST'
[3426] 2022/05/10 03:10:51.421122 [INF] Server is ready
panic: runtime error: makeslice: cap out of range
goroutine 1 [running]:
github.com/nats-io/nats-server/v2/server.(*msgBlock).indexCacheBuf(0xc000292a80, {0xc00078c000, 0x186a22, 0x200000})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:3165 +0x195
github.com/nats-io/nats-server/v2/server.(*msgBlock).loadMsgsWithLock(0xc000292a80)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:3466 +0x414
github.com/nats-io/nats-server/v2/server.(*msgBlock).generatePerSubjectInfo(0xc000292a80)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:4663 +0xf0
github.com/nats-io/nats-server/v2/server.(*msgBlock).readPerSubjectInfo(0xc000292a80)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:4716 +0x3b4
github.com/nats-io/nats-server/v2/server.(*fileStore).recoverMsgBlock(0xc000290280, {0xb1de38, 0xc000709790}, 0x3d)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:673 +0xff0
github.com/nats-io/nats-server/v2/server.(*fileStore).recoverMsgs(0xc000290280)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:958 +0x8bd
github.com/nats-io/nats-server/v2/server.newFileStoreWithCreated({{0xc000220990, 0x25}, _, _, _, _}, {{0xc000759020, 0xa}, {0x0, 0x0}, ...}, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/filestore.go:315 +0x76c
github.com/nats-io/nats-server/v2/server.(*stream).setupStore(0xc00011c2c0, 0xc000160988)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/stream.go:2760 +0x2d5
github.com/nats-io/nats-server/v2/server.(*Account).addStreamWithAssignment(0xc000118900, 0xc0000f6108, 0x0, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/stream.go:417 +0xee5
github.com/nats-io/nats-server/v2/server.(*Account).addStream(0xc000759030, 0xc000758ff0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/stream.go:276 +0x1d
github.com/nats-io/nats-server/v2/server.(*Account).EnableJetStream(0xc000118900, 0xc00001f8c0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:1193 +0x2ecd
github.com/nats-io/nats-server/v2/server.(*Server).configJetStream(0xc0001ac000, 0xc000118900)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:645 +0x9c
github.com/nats-io/nats-server/v2/server.(*Server).enableJetStreamAccounts(0xc0000d0160)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:575 +0xb4
github.com/nats-io/nats-server/v2/server.(*Server).enableJetStream(0xc0001ac000, {0x40000000, 0x1900000000, {0xc0000c23e0, 0xf}, {0x0, 0x0}})
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:392 +0x919
github.com/nats-io/nats-server/v2/server.(*Server).EnableJetStream(0xc0001ac000, 0xc000161e38)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:204 +0x40c
github.com/nats-io/nats-server/v2/server.(*Server).Start(0xc0001ac000)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1746 +0xf10
github.com/nats-io/nats-server/v2/server.Run(...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/service.go:22
main.main()
/home/travis/gopath/src/github.com/nats-io/nats-server/main.go:118 +0x2fa
```
|
https://github.com/nats-io/nats-server/issues/3114
|
https://github.com/nats-io/nats-server/pull/3121
|
35e373f6e6e4690b929d08cc0e870fe9a707ed77
|
b6ebe34734fd1fa569c9b3a01a4520cad80bb289
| 2022-05-10T03:57:20Z |
go
| 2022-05-12T23:01:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,108 |
["server/accounts_test.go", "server/client.go"]
|
Account isolation: jetstream receives messages it shouldn't
|
## Defect
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server: 2.8.2 (also tested with current nightly build)
nats-cli: 0.0.28
#### OS/Container environment:
Windows 10 (WSL2) 4.19.128-microsoft-standard; Ubuntu 20.04.3 LTS
Docker Desktop 4.7.2, Engine 20.10.14, Compose 1.29.2
#### Steps or code to reproduce the issue:
config:
```
jetstream {}
accounts: {
PUBLIC: {
users:[{user: public, password: public}]
exports: [
{ stream: orders.client.stream.> }
{ stream: orders.client2.stream.> }
]
},
CLIENT: {
users:[{user: client, password: client}]
imports: [
{ stream: { account: PUBLIC, subject: orders.client.stream.> }}
]
jetstream: enabled
},
CLIENT2: {
users:[{user: client2, password: client2}]
imports: [
{ stream: { account: PUBLIC, subject: orders.client2.stream.> }}
]
jetstream: enabled
},
}
```
1. Start a server with the config above
2. Start a subscription handler for `client` with `nats --user client --password client sub 'orders.>'` and for `client2` with `nats --user client2 --password client2 sub 'orders.>'` separately
3. Publish a test message on the `PUBLIC` account: `nats --user public --password public pub 'orders.client.stream.entry' 'test1'`
4. Note that only the `CLIENT` account subscription receives the message, while the `CLIENT2` account does not (this is correct behavior)
5. Create a stream in both `CLIENT` and `CLIENT2` accounts: `nats --user client --password client stream create orders '--subjects=orders.*.stream.entry' --retention=work --max-consumers=-1 --max-msgs-per-subject=-1 --max-msgs=-1 --max-bytes=-1 --max-age=-1 --max-msg-size=-1 --storage=file --discard=old --replicas=1 --dupe-window="2m0s" --no-allow-rollup --no-deny-delete --no-deny-purge` and `nats --user client2 --password client2 stream create orders '--subjects=orders.*.stream.entry' --retention=work --max-consumers=-1 --max-msgs-per-subject=-1 --max-msgs=-1 --max-bytes=-1 --max-age=-1 --max-msg-size=-1 --storage=file --discard=old --replicas=1 --dupe-window="2m0s" --no-allow-rollup --no-deny-delete --no-deny-purge`
6. Publish a test message on `PUBLIC` again: `nats --user public --password public pub 'orders.client.stream.entry' 'test2'`
#### Expected result:
The message shows up in the `orders` stream of account `CLIENT` only, and not in the `orders` stream of account `CLIENT2`. I check this by running `nats --user client --password client stream info orders` and observing the message count.
#### Actual result:
Both streams in both accounts contain the test message. With my current understanding of accounts, this should not happen, even though both streams listen on the same subject. Since the config for these accounts explicitly imports only one subject, only messages of that subject should be able to reach the stream in this particular account (as it is the case with "normal" subscriptions).
The reason why changing the streams' subjects to `orders.client.stream.entry` or `orders.client2.stream.entry` respectively is not the solution here is that a malicious user could just (re-)create a stream with `orders.*.stream.entry` and hence would be able to receive all messages.
#### Log:
```
2022/05/06 08:35:20.636272 [INF] Version: 2.8.2
2022/05/06 08:35:20.636274 [INF] Git: [9e5d25b]
2022/05/06 08:35:20.636275 [DBG] Go build: go1.17.9
2022/05/06 08:35:20.636276 [INF] Name: NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ
2022/05/06 08:35:20.636278 [INF] Node: Q2dy3sSb
2022/05/06 08:35:20.636279 [INF] ID: NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ
2022/05/06 08:35:20.636281 [WRN] Plaintext passwords detected, use nkeys or bcrypt
2022/05/06 08:35:20.636283 [INF] Using configuration file: /etc/nats/nats-server.conf
2022/05/06 08:35:20.636292 [DBG] Created system account: "$SYS"
2022/05/06 08:35:20.636543 [INF] Starting JetStream
2022/05/06 08:35:20.636740 [DBG] JetStream creating dynamic configuration - 9.32 GB memory, 146.55 GB disk
2022/05/06 08:35:20.636836 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
2022/05/06 08:35:20.636849 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
2022/05/06 08:35:20.636851 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
2022/05/06 08:35:20.636852 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
2022/05/06 08:35:20.636853 [INF]
2022/05/06 08:35:20.636854 [INF] https://docs.nats.io/jetstream
2022/05/06 08:35:20.636855 [INF]
2022/05/06 08:35:20.636856 [INF] ---------------- JETSTREAM ----------------
2022/05/06 08:35:20.636858 [INF] Max Memory: 9.32 GB
2022/05/06 08:35:20.636860 [INF] Max Storage: 146.55 GB
2022/05/06 08:35:20.636861 [INF] Store Directory: "/tmp/nats/jetstream"
2022/05/06 08:35:20.636861 [INF] -------------------------------------------
2022/05/06 08:35:20.636914 [DBG] Exports:
2022/05/06 08:35:20.636932 [DBG] $JS.API.>
2022/05/06 08:35:20.636947 [DBG] Enabled JetStream for account "CLIENT"
2022/05/06 08:35:20.636950 [DBG] Max Memory: -1 B
2022/05/06 08:35:20.636951 [DBG] Max Storage: -1 B
2022/05/06 08:35:20.637053 [DBG] JetStream state for account "CLIENT" recovered
2022/05/06 08:35:20.637078 [DBG] Enabled JetStream for account "CLIENT2"
2022/05/06 08:35:20.637080 [DBG] Max Memory: -1 B
2022/05/06 08:35:20.637081 [DBG] Max Storage: -1 B
2022/05/06 08:35:20.637166 [DBG] JetStream state for account "CLIENT2" recovered
2022/05/06 08:35:20.637348 [INF] Listening for client connections on 0.0.0.0:4222
2022/05/06 08:35:20.637361 [DBG] Get non local IPs for "0.0.0.0"
2022/05/06 08:35:20.637483 [DBG] ip=172.19.0.2
2022/05/06 08:35:20.637508 [INF] Server is ready
2022/05/06 08:35:22.696096 [DBG] 172.19.0.1:53914 - cid:7 - Client connection created
2022/05/06 08:35:22.696675 [TRC] 172.19.0.1:53914 - cid:7 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client2","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:35:22.696756 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:22.696759 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:22.697130 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB orders.> 1]
2022/05/06 08:35:22.697156 [DBG] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - Creating import subscription on "orders.client2.stream.>" from account "PUBLIC"
2022/05/06 08:35:22.697162 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:22.697164 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:24.716001 [DBG] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client Ping Timer
2022/05/06 08:35:24.716041 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PING]
2022/05/06 08:35:24.716530 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PONG]
2022/05/06 08:35:38.318658 [DBG] 172.19.0.1:53918 - cid:8 - Client connection created
2022/05/06 08:35:38.319107 [TRC] 172.19.0.1:53918 - cid:8 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:35:38.319160 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:38.319170 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:38.319521 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB orders.> 1]
2022/05/06 08:35:38.319549 [DBG] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - Creating import subscription on "orders.client.stream.>" from account "PUBLIC"
2022/05/06 08:35:38.319559 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:38.319561 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:40.322744 [DBG] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client Ping Timer
2022/05/06 08:35:40.322777 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PING]
2022/05/06 08:35:40.323233 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PONG]
2022/05/06 08:35:45.063138 [DBG] 172.19.0.1:53922 - cid:9 - Client connection created
2022/05/06 08:35:45.063603 [TRC] 172.19.0.1:53922 - cid:9 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"public","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:35:45.063654 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:45.063663 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:45.064006 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client.stream.entry 5]
2022/05/06 08:35:45.064033 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test1"]
2022/05/06 08:35:45.064040 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG orders.client.stream.entry 1 5]
2022/05/06 08:35:45.064052 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:45.064062 [TRC] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:45.064489 [DBG] 172.19.0.1:53922 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:35:49.023750 [DBG] 172.19.0.1:53926 - cid:10 - Client connection created
2022/05/06 08:35:49.024221 [TRC] 172.19.0.1:53926 - cid:10 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"public","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:35:49.024278 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:49.024289 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:49.024627 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client2.stream.entry 5]
2022/05/06 08:35:49.024648 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test1"]
2022/05/06 08:35:49.024656 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG orders.client2.stream.entry 1 5]
2022/05/06 08:35:49.024665 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:35:49.024675 [TRC] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:35:49.025023 [DBG] 172.19.0.1:53926 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:35:52.620450 [DBG] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:35:52.620487 [TRC] 172.19.0.1:53918 - cid:8 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:35:53.938358 [DBG] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:35:53.938397 [TRC] 172.19.0.1:53914 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:36:00.502890 [DBG] 172.19.0.1:53930 - cid:11 - Client connection created
2022/05/06 08:36:00.503519 [TRC] 172.19.0.1:53930 - cid:11 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:36:00.503605 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:00.503611 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:00.504823 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.9MrmuxcASP88F3ZP9W8HfG.* 1]
2022/05/06 08:36:00.504842 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.CREATE.orders _INBOX.9MrmuxcASP88F3ZP9W8HfG.IHPIwjD5 344]
2022/05/06 08:36:00.504849 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["{\"name\":\"orders\",\"subjects\":[\"orders.*.stream.entry\"],\"retention\":\"workqueue\",\"max_consumers\":-1,\"max_msgs_per_subject\":-1,\"max_msgs\":-1,\"max_bytes\":-1,\"max_age\":0,\"max_msg_size\":-1,\"storage\":\"file\",\"discard\":\"old\",\"num_replicas\":1,\"duplicate_window\":120000000000,\"sealed\":false,\"deny_delete\":false,\"deny_purge\":false,\"allow_rollup_hdrs\":false}"]
2022/05/06 08:36:00.505412 [DBG] JETSTREAM - Creating import subscription on "orders.*.stream.entry" from account "PUBLIC"
2022/05/06 08:36:00.505585 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.9MrmuxcASP88F3ZP9W8HfG.IHPIwjD5 1 616]
2022/05/06 08:36:00.505600 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1626]
2022/05/06 08:36:00.505616 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvumU\",\"timestamp\":\"2022-05-06T08:36:00.5055495Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:36:00.5028486Z\",\"host\":\"172.19.0.1\",\"id\":11,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":689400,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.CREATE.orders\",\"request\":\"{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs_per_subject\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msg_size\\\":-1,\\\"storage\\\":\\\"file\\\",\\\"discard\\\":\\\"old\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_create_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:00.505061Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":0,\\\"first_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":0},\\\"did_create\\\":true}\"}"]
2022/05/06 08:36:00.507714 [DBG] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:36:00.507741 [TRC] 172.19.0.1:53930 - cid:11 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:36:05.225921 [DBG] 172.19.0.1:53934 - cid:16 - Client connection created
2022/05/06 08:36:05.226362 [TRC] 172.19.0.1:53934 - cid:16 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client2","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:36:05.226412 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:05.226421 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:05.227615 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.j8hE8MdiIjXEBPT7FQw1nT.* 1]
2022/05/06 08:36:05.227638 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.CREATE.orders _INBOX.j8hE8MdiIjXEBPT7FQw1nT.ClBCeapf 344]
2022/05/06 08:36:05.227646 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["{\"name\":\"orders\",\"subjects\":[\"orders.*.stream.entry\"],\"retention\":\"workqueue\",\"max_consumers\":-1,\"max_msgs_per_subject\":-1,\"max_msgs\":-1,\"max_bytes\":-1,\"max_age\":0,\"max_msg_size\":-1,\"storage\":\"file\",\"discard\":\"old\",\"num_replicas\":1,\"duplicate_window\":120000000000,\"sealed\":false,\"deny_delete\":false,\"deny_purge\":false,\"allow_rollup_hdrs\":false}"]
2022/05/06 08:36:05.228110 [DBG] JETSTREAM - Creating import subscription on "orders.*.stream.entry" from account "PUBLIC"
2022/05/06 08:36:05.228174 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.j8hE8MdiIjXEBPT7FQw1nT.ClBCeapf 1 617]
2022/05/06 08:36:05.228179 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1629]
2022/05/06 08:36:05.228216 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvuwA\",\"timestamp\":\"2022-05-06T08:36:05.2281427Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:36:05.2258839Z\",\"host\":\"172.19.0.1\",\"id\":16,\"acc\":\"CLIENT2\",\"user\":\"client2\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":490400,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.CREATE.orders\",\"request\":\"{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs_per_subject\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msg_size\\\":-1,\\\"storage\\\":\\\"file\\\",\\\"discard\\\":\\\"old\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_create_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:05.2277412Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":0,\\\"first_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":0},\\\"did_create\\\":true}\"}"]
2022/05/06 08:36:05.230451 [DBG] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:36:05.230481 [TRC] 172.19.0.1:53934 - cid:16 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:36:15.842194 [DBG] 172.19.0.1:53938 - cid:20 - Client connection created
2022/05/06 08:36:15.842633 [TRC] 172.19.0.1:53938 - cid:20 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"public","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:36:15.842686 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:15.842691 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:15.843094 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client.stream.entry 5]
2022/05/06 08:36:15.843106 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test2"]
2022/05/06 08:36:15.843181 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:15.843183 [TRC] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:15.843562 [DBG] 172.19.0.1:53938 - cid:20 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:36:33.714093 [DBG] 172.19.0.1:53946 - cid:21 - Client connection created
2022/05/06 08:36:33.714513 [TRC] 172.19.0.1:53946 - cid:21 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:36:33.714563 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:33.714573 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:33.715041 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.jYGYaXmljFfToIFgFUISBw.* 1]
2022/05/06 08:36:33.715065 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.jYGYaXmljFfToIFgFUISBw.xvI0Vrox 0]
2022/05/06 08:36:33.715068 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
2022/05/06 08:36:33.715214 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.jYGYaXmljFfToIFgFUISBw.xvI0Vrox 1 630]
2022/05/06 08:36:33.715230 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1236]
2022/05/06 08:36:33.715251 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvv10\",\"timestamp\":\"2022-05-06T08:36:33.7151798Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:36:33.714052Z\",\"host\":\"172.19.0.1\",\"id\":21,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":472300,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:00.505061Z\\\",\\\"state\\\":{\\\"messages\\\":1,\\\"bytes\\\":61,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-05-06T08:36:15.8431142Z\\\",\\\"last_seq\\\":1,\\\"last_ts\\\":\\\"2022-05-06T08:36:15.8431142Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0}}\"}"]
2022/05/06 08:36:33.717593 [DBG] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:36:33.717625 [TRC] 172.19.0.1:53946 - cid:21 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:36:47.472430 [DBG] 172.19.0.1:53950 - cid:22 - Client connection created
2022/05/06 08:36:47.472894 [TRC] 172.19.0.1:53950 - cid:22 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client2","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:36:47.472946 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:36:47.472950 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:36:47.473292 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.gLmO7D2Hk7RF7MsWOPBEZ2.* 1]
2022/05/06 08:36:47.473316 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.gLmO7D2Hk7RF7MsWOPBEZ2.Y2t3J7UM 0]
2022/05/06 08:36:47.473321 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
2022/05/06 08:36:47.473459 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.gLmO7D2Hk7RF7MsWOPBEZ2.Y2t3J7UM 1 631]
2022/05/06 08:36:47.473478 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1240]
2022/05/06 08:36:47.473501 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvv5q\",\"timestamp\":\"2022-05-06T08:36:47.4734185Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:36:47.4723877Z\",\"host\":\"172.19.0.1\",\"id\":22,\"acc\":\"CLIENT2\",\"user\":\"client2\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":517500,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:05.2277412Z\\\",\\\"state\\\":{\\\"messages\\\":1,\\\"bytes\\\":61,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-05-06T08:36:15.8431591Z\\\",\\\"last_seq\\\":1,\\\"last_ts\\\":\\\"2022-05-06T08:36:15.8431591Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0}}\"}"]
2022/05/06 08:36:47.475765 [DBG] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:36:47.475803 [TRC] 172.19.0.1:53950 - cid:22 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:37:14.088132 [DBG] 172.19.0.1:53954 - cid:23 - Client connection created
2022/05/06 08:37:14.088585 [TRC] 172.19.0.1:53954 - cid:23 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:37:14.088624 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:14.088626 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:14.088962 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB orders.> 1]
2022/05/06 08:37:14.088986 [DBG] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - Creating import subscription on "orders.client.stream.>" from account "PUBLIC"
2022/05/06 08:37:14.088995 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:14.088997 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:16.422182 [DBG] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client Ping Timer
2022/05/06 08:37:16.422218 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PING]
2022/05/06 08:37:16.422760 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PONG]
2022/05/06 08:37:18.327957 [DBG] 172.19.0.1:53958 - cid:24 - Client connection created
2022/05/06 08:37:18.328399 [TRC] 172.19.0.1:53958 - cid:24 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client2","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:37:18.328454 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:18.328463 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:18.328827 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB orders.> 1]
2022/05/06 08:37:18.328847 [DBG] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - Creating import subscription on "orders.client2.stream.>" from account "PUBLIC"
2022/05/06 08:37:18.328853 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:18.328855 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:20.712247 [DBG] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client Ping Timer
2022/05/06 08:37:20.712285 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PING]
2022/05/06 08:37:20.712804 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PONG]
2022/05/06 08:37:21.808333 [DBG] 172.19.0.1:53962 - cid:25 - Client connection created
2022/05/06 08:37:21.808974 [TRC] 172.19.0.1:53962 - cid:25 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"public","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:37:21.809028 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:21.809037 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:21.809382 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client.stream.entry 5]
2022/05/06 08:37:21.809393 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test3"]
2022/05/06 08:37:21.809463 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG orders.client.stream.entry 1 5]
2022/05/06 08:37:21.809486 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:21.809489 [TRC] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:21.809897 [DBG] 172.19.0.1:53962 - cid:25 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:37:29.875239 [DBG] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:37:29.875277 [TRC] 172.19.0.1:53954 - cid:23 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:37:31.568420 [DBG] 172.19.0.1:53966 - cid:26 - Client connection created
2022/05/06 08:37:31.568867 [TRC] 172.19.0.1:53966 - cid:26 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:37:31.568920 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:31.568927 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:31.569243 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.mqgYPGaNAa3Dyzx3GiBY7q.* 1]
2022/05/06 08:37:31.569261 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.mqgYPGaNAa3Dyzx3GiBY7q.NW85gjg1 0]
2022/05/06 08:37:31.569263 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
2022/05/06 08:37:31.569377 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.mqgYPGaNAa3Dyzx3GiBY7q.NW85gjg1 1 631]
2022/05/06 08:37:31.569407 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1238]
2022/05/06 08:37:31.569440 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvvAg\",\"timestamp\":\"2022-05-06T08:37:31.5693429Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:37:31.5683836Z\",\"host\":\"172.19.0.1\",\"id\":26,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":494200,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:00.505061Z\\\",\\\"state\\\":{\\\"messages\\\":2,\\\"bytes\\\":122,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-05-06T08:36:15.8431142Z\\\",\\\"last_seq\\\":2,\\\"last_ts\\\":\\\"2022-05-06T08:37:21.8093991Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0}}\"}"]
2022/05/06 08:37:31.571718 [DBG] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:37:31.571767 [TRC] 172.19.0.1:53966 - cid:26 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:37:33.689066 [DBG] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:37:33.689100 [TRC] 172.19.0.1:53958 - cid:24 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
2022/05/06 08:37:34.897953 [DBG] 172.19.0.1:53970 - cid:27 - Client connection created
2022/05/06 08:37:34.898416 [TRC] 172.19.0.1:53970 - cid:27 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client2","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
2022/05/06 08:37:34.898471 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
2022/05/06 08:37:34.898484 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
2022/05/06 08:37:34.898787 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.zRBdTDfESZ31ybnK25kxtC.* 1]
2022/05/06 08:37:34.898805 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.zRBdTDfESZ31ybnK25kxtC.Wu1XhPw1 0]
2022/05/06 08:37:34.898808 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
2022/05/06 08:37:34.898926 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.zRBdTDfESZ31ybnK25kxtC.Wu1XhPw1 1 632]
2022/05/06 08:37:34.898943 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1241]
2022/05/06 08:37:34.899677 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"fL1RASL4DV808EpBZjvvFW\",\"timestamp\":\"2022-05-06T08:37:34.8988994Z\",\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"client\":{\"start\":\"2022-05-06T08:37:34.8979173Z\",\"host\":\"172.19.0.1\",\"id\":27,\"acc\":\"CLIENT2\",\"user\":\"client2\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":510700,\"server\":\"NBCK5PLGGAR3PH2ZATFKRSRBNYW6FMBOMGJDAY56GS2IDJT2JGYNP6UJ\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-05-06T08:36:05.2277412Z\\\",\\\"state\\\":{\\\"messages\\\":2,\\\"bytes\\\":122,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-05-06T08:36:15.8431591Z\\\",\\\"last_seq\\\":2,\\\"last_ts\\\":\\\"2022-05-06T08:37:21.8094407Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0}}\"}"]
2022/05/06 08:37:34.901563 [DBG] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
2022/05/06 08:37:34.901605 [TRC] 172.19.0.1:53970 - cid:27 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
```
|
https://github.com/nats-io/nats-server/issues/3108
|
https://github.com/nats-io/nats-server/pull/3112
|
f87c7d8441ea4659a678b44093cd0b1da090738d
|
17cc2052931bd74dacc3f595b67468a5f0f33a20
| 2022-05-06T09:12:31Z |
go
| 2022-05-10T20:38:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,107 |
["server/filestore.go", "server/filestore_test.go", "server/jetstream_test.go", "server/stream.go"]
|
sourcing stream with work queue retention policy experiences redelivered messages
|
In this unit test a work queue stream sources another stream with messages.
It receives all messages, such that NumPending is 0.
After a server restart, NumPending is back at the original message count.
This happens only with the work queue retention policy.
```
func TestJetStreamWorkQueueSourceRestart(t *testing.T) {
	s := RunBasicJetStreamServer()
	o := s.getOpts()
	nc, js := jsClientConnect(t, s)
	defer nc.Close()
	pubCnt := 10
	_, err := js.AddStream(&nats.StreamConfig{
		Name:     "FOO",
		Replicas: 1,
		Subjects: []string{"foo"},
	})
	require_NoError(t, err)
	for i := 0; i < pubCnt; i++ {
		_, err = js.Publish("foo", nil)
		require_NoError(t, err)
	}
	_, err = js.AddStream(&nats.StreamConfig{
		Name:     "TEST",
		Replicas: 1,
		// TODO test will pass when retention commented out
		Retention: nats.WorkQueuePolicy,
		Sources:   []*nats.StreamSource{{Name: "FOO"}}})
	require_NoError(t, err)
	time.Sleep(time.Second)
	_, err = js.AddConsumer("TEST", &nats.ConsumerConfig{Durable: "dur", AckPolicy: nats.AckExplicitPolicy})
	require_NoError(t, err)
	sub, err := js.PullSubscribe("", "dur", nats.BindStream("TEST"))
	require_NoError(t, err)
	ci, err := js.ConsumerInfo("TEST", "dur")
	require_NoError(t, err)
	require_True(t, ci.NumPending == uint64(pubCnt))
	for i := 0; i < pubCnt; i++ {
		m, err := sub.Fetch(1)
		require_NoError(t, err)
		require_NoError(t, m[0].AckSync())
	}
	ci, err = js.ConsumerInfo("TEST", "dur")
	require_NoError(t, err)
	require_True(t, ci.NumPending == 0)
	// Restart server.
	s.Shutdown()
	sr := RunServer(o)
	defer sr.Shutdown()
	checkFor(t, 10*time.Second, time.Second, func() error {
		hs := sr.healthz()
		if hs.Status != "ok" {
			return fmt.Errorf("healthz %s %s", hs.Error, hs.Status)
		}
		return nil
	})
	ctest, err := js.ConsumerInfo("TEST", "dur")
	require_NoError(t, err)
	// TODO (mh) I have experienced in other tests that NumWaiting has a value of 1 post restart.
	// It seems to go awry in a single server setup. It's also unrelated to work queue,
	// but that error seems benign.
	require_True(t, ctest.NumPending == 0)
	_, err = sub.Fetch(1, nats.MaxWait(10*time.Second))
	if err != nats.ErrTimeout {
		require_NoError(t, err)
	}
}
```
|
https://github.com/nats-io/nats-server/issues/3107
|
https://github.com/nats-io/nats-server/pull/3109
|
b47de12bbd9241518bb5d744170c11cd26af82f7
|
88ebfdaee8b5c4e963445a5c740b30190d6e6c5a
| 2022-05-05T23:24:30Z |
go
| 2022-05-09T16:13:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,084 |
["go.mod", "go.sum", "server/accounts.go", "server/auth.go", "server/events_test.go", "server/jetstream_jwt_test.go"]
|
Account Policy to Restrict Bearer Token Users
|
## Feature Request
In operator-mode configurations (JWT-based), an account owner can create bearer token User JWTs using the `--bearer` option of `nsc` or the JWT client libraries.
Some operators may desire to create accounts that explicitly restrict the account owner from creating users with the bearer-token option.
#### Use Case:
Force the client to present a User JWT **and** perform challenge-response authentication, i.e. sign the server's nonce using the private NKEY it possesses.
#### Proposed Change:
Add an account-level claim that asserts that the account may not issue bearer User JWTs.
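As a rough sketch of what enforcement could look like server-side (the `DisallowBearer` account claim field below is hypothetical for illustration; it is not a confirmed jwt library field name):
```go
import (
	"errors"

	"github.com/nats-io/jwt/v2"
)

// Sketch only: reject bearer-token users when the account opts out.
// acc.Limits.DisallowBearer is an assumed claim field, not the actual API.
func checkBearerAllowed(acc *jwt.AccountClaims, user *jwt.UserClaims) error {
	if user.BearerToken && acc.Limits.DisallowBearer {
		return errors.New("account policy disallows bearer-token users")
	}
	return nil
}
```
A check of this shape would run during client authentication, after the user and account JWTs have been verified, so bearer users are refused at connect time rather than at JWT-creation time.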
#### Who Benefits From The Change(s)?
Organizations/operators that have a policy against bearer token usage at runtime, but otherwise enable account owners to manage their own user lifecycle.
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/3084
|
https://github.com/nats-io/nats-server/pull/3127
|
b0580cdfc297019aa5f92943c39c7d0a600a8f86
|
6e5260893633de363646b25d10389ed7608c3c63
| 2022-04-28T16:32:20Z |
go
| 2022-06-29T16:19:14Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,083 |
["server/client.go", "server/jetstream_test.go"]
|
crash in transformSubject
|
While working on an issue, I noticed a crash in `transformSubject` on import.
Below you find the stack as well as a modified version of the test I was working on that produces the issue.
Panic stack:
```
panic: runtime error: index out of range [2] with length 1
goroutine 191 [running]:
github.com/nats-io/nats-server/v2/server.(*transform).transform(0xc000177200, {0xc00054b2c0, 0x1, 0x98f787b5478ed9d5})
/Users/matthiashanel/repos/nats-server/server/accounts.go:4364 +0x679
github.com/nats-io/nats-server/v2/server.(*transform).transformSubject(0xc0008dc31c, {0xc0007e00c2, 0x3})
/Users/matthiashanel/repos/nats-server/server/accounts.go:4321 +0x1e5
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc00053c600, 0x1542fe5, 0xc0000ec180, {0xc0008dc358, 0x4, 0xa8}, {0x0, 0x0, 0x0}, {0xc0008dc308, ...}, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:4162 +0x7cd
github.com/nats-io/nats-server/v2/server.(*client).processInboundRoutedMsg(0xc00053c600, {0xc0008dc358, 0x0, 0xa8})
/Users/matthiashanel/repos/nats-server/server/route.go:443 +0x159
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc00053c600, {0xc0008dc358, 0x51, 0xfb})
/Users/matthiashanel/repos/nats-server/server/client.go:3512 +0x36
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc00053c600, {0xc0008dc300, 0x5c, 0xc09280d889ebfe48})
/Users/matthiashanel/repos/nats-server/server/parser.go:497 +0x246a
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc00053c600, {0x0, 0x0, 0x0})
/Users/matthiashanel/repos/nats-server/server/client.go:1229 +0xe1f
github.com/nats-io/nats-server/v2/server.(*Server).createRoute.func1()
/Users/matthiashanel/repos/nats-server/server/route.go:1372 +0x25
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:3013 +0x87
```
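For intuition only, a minimal sketch (not the server's actual implementation) of how this kind of panic can arise: a transform destination that references wildcard token positions will index out of range when the incoming subject yields fewer tokens than the transform expects:
```go
package main

import (
	"fmt"
	"strings"
)

// transform rebuilds a subject from the source's wildcard tokens.
// dstPositions holds, for each destination token, the index of the
// source token it copies.
func transform(tokens []string, dstPositions []int) string {
	out := make([]string, 0, len(dstPositions))
	for _, p := range dstPositions {
		out = append(out, tokens[p]) // panics when p >= len(tokens), as in the stack above
	}
	return strings.Join(out, ".")
}

func main() {
	// Works: three tokens, destination wants token index 2.
	fmt.Println(transform([]string{"deliver", "ORDERS", "route"}, []int{2}))
	// Panics with "index out of range [2] with length 1".
	fmt.Println(transform([]string{"route"}, []int{2}))
}
```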
Modified unit test that produces the issue:
```
func TestJetStreamSuperClusterImportConsumerStreamSubjectRemap(t *testing.T) {
	template := `
	listen: 127.0.0.1:-1
	server_name: %s
	jetstream: {max_mem_store: 256MB, max_file_store: 2GB, domain: HUB, store_dir: '%s'}
	cluster {
		name: %s
		listen: 127.0.0.1:%d
		routes = [%s]
	}
	accounts: {
		JS: {
			jetstream: enabled
			users: [ {user: js, password: pwd} ]
			exports [
				# This is streaming to a delivery subject for a push based consumer.
				{ stream: "eliver.ORDERS.*" }
				# This is to ack received messages. This is a service to support sync ack.
				{ service: "$JS.ACK.ORDERS.*.>" }
				# To support ordered consumers, flow control.
				{ service: "$JS.FC.>" }
			]
		},
		IM: {
			users: [ {user: im, password: pwd} ]
			imports [
				{ stream: { account: JS, subject: "eliver.ORDERS.*" }, to: "deliver.ORDERS.*"}
				{ service: {account: JS, subject: "$JS.FC.>" }}
			]
		},
		$SYS { users = [ { user: "admin", pass: "s3cr3t!" } ] },
	}
	leaf {
		listen: 127.0.0.1:-1
	}`

	test := func(t *testing.T, queue bool) {
		c := createJetStreamSuperClusterWithTemplate(t, template, 3, 2)
		defer c.shutdown()

		s := c.randomServer()
		nc, js := jsClientConnect(t, s, nats.UserInfo("js", "pwd"))
		defer nc.Close()

		_, err := js.AddStream(&nats.StreamConfig{
			Name:      "ORDERS",
			Subjects:  []string{"foo"}, // The JS subject.
			Replicas:  3,
			Placement: &nats.Placement{Cluster: "C1"},
		})
		require_NoError(t, err)

		_, err = js.Publish("foo", []byte("OK"))
		require_NoError(t, err)

		for dur, deliver := range map[string]string{
			"dur-route":   "eliver.ORDERS.route",
			"dur-gateway": "eliver.ORDERS.gateway",
			"dur-leaf-1":  "eliver.ORDERS.leaf1",
			"dur-leaf-2":  "eliver.ORDERS.leaf2",
		} {
			cfg := &nats.ConsumerConfig{
				Durable:        dur,
				DeliverSubject: deliver,
				AckPolicy:      nats.AckExplicitPolicy,
			}
			if queue {
				cfg.DeliverGroup = "queue"
			}
			_, err = js.AddConsumer("ORDERS", cfg)
			require_NoError(t, err)
		}

		testCase := func(t *testing.T, s *Server, dSubj string) {
			nc2, err := nats.Connect(s.ClientURL(), nats.UserInfo("im", "pwd"))
			require_NoError(t, err)
			defer nc2.Close()

			var sub *nats.Subscription
			if queue {
				sub, err = nc2.QueueSubscribeSync(dSubj, "queue")
			} else {
				sub, err = nc2.SubscribeSync(dSubj)
			}
			require_NoError(t, err)

			m, err := sub.NextMsg(time.Second)
			require_NoError(t, err)
			if m.Subject != "foo" {
				t.Fatalf("Subject not mapped correctly across account boundary, expected %q got %q", "foo", m.Subject)
			}
			require_False(t, strings.Contains(m.Reply, "@"))
		}

		t.Run("route", func(t *testing.T) {
			// Pick a random non consumer leader so we receive via route.
			s := c.clusterForName("C1").randomNonConsumerLeader("JS", "ORDERS", "dur-route")
			testCase(t, s, "deliver.ORDERS.route")
		})
		t.Run("gateway", func(t *testing.T) {
			// Pick a server with an inbound gateway from the consumer leader,
			// so we receive from the gateway and have no route in between.
			scl := c.clusterForName("C1").consumerLeader("JS", "ORDERS", "dur-gateway")
			var sfound *Server
			for _, s := range c.clusterForName("C2").servers {
				s.mu.Lock()
				for _, c := range s.gateway.in {
					if c.GetName() == scl.info.ID {
						sfound = s
						break
					}
				}
				s.mu.Unlock()
				if sfound != nil {
					break
				}
			}
			testCase(t, sfound, "deliver.ORDERS.gateway")
		})
		t.Run("leaf-post-export", func(t *testing.T) {
			// Create a leaf node server connected post export/import.
			scl := c.clusterForName("C1").consumerLeader("JS", "ORDERS", "dur-leaf-1")
			cf := createConfFile(t, []byte(fmt.Sprintf(`
			port: -1
			leafnodes {
				remotes [ { url: "nats://im:[email protected]:%d" } ]
			}
			authorization: {
				user: im,
				password: pwd
			}
			`, scl.getOpts().LeafNode.Port)))
			defer removeFile(t, cf)
			s, _ := RunServerWithConfig(cf)
			defer s.Shutdown()
			checkLeafNodeConnected(t, scl)
			testCase(t, s, "deliver.ORDERS.leaf1")
		})
		t.Run("leaf-pre-export", func(t *testing.T) {
			// Create a leaf node server connected pre export; perform export/import on the leaf node server.
			scl := c.clusterForName("C1").consumerLeader("JS", "ORDERS", "dur-leaf-2")
			cf := createConfFile(t, []byte(fmt.Sprintf(`
			port: -1
			leafnodes {
				remotes [ { url: "nats://js:[email protected]:%d", account: JS2 } ]
			}
			accounts: {
				JS2: {
					users: [ {user: js, password: pwd} ]
					exports [
						# This is streaming to a delivery subject for a push based consumer.
						{ stream: "deliver.ORDERS.leaf2" }
						# This is to ack received messages. This is a service to support sync ack.
						{ service: "$JS.ACK.ORDERS.*.>" }
						# To support ordered consumers, flow control.
						{ service: "$JS.FC.>" }
					]
				},
				IM2: {
					users: [ {user: im, password: pwd} ]
					imports [
						{ stream: { account: JS2, subject: "deliver.ORDERS.leaf2" }}
						{ service: {account: JS2, subject: "$JS.FC.>" }}
					]
				},
			}
			`, scl.getOpts().LeafNode.Port)))
			defer removeFile(t, cf)
			s, _ := RunServerWithConfig(cf)
			defer s.Shutdown()
			checkLeafNodeConnected(t, scl)
			testCase(t, s, "deliver.ORDERS.leaf2")
		})
	}

	t.Run("noQueue", func(t *testing.T) {
		test(t, false)
	})
	t.Run("queue", func(t *testing.T) {
		test(t, true)
	})
}
```
|
https://github.com/nats-io/nats-server/issues/3083
|
https://github.com/nats-io/nats-server/pull/3088
|
0d928c033846f2529703bcdede4b8c7a72411ed7
|
ea6a43ead9913995516e4e8b83dd85a2874d6b3b
| 2022-04-27T18:37:11Z |
go
| 2022-04-29T00:08:00Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,072 |
["server/consumer.go", "server/jetstream_api.go", "server/jetstream_events.go"]
|
Consider advisory subject for NAK
|
## Feature Request
#### Use Case:
From a Slack user:
> Is there a native way to monitor NAKs of messages? E.g. a trigger for when x amount of NAKs in the last 5 minutes were sent. Or when a message is "stuck" because it has been NAKed for y-times. The http monitoring endpoint nor JMS support this feature as far as I can tell, right?
#### Proposed Change:
Add an advisory subject similar to the [TERM subject](https://github.com/nats-io/nats-server/blob/646b3850bffe185b3efa0b58981ea6f84b32ef49/server/jetstream_api.go#L207-L208) so that clients can monitor for NAKs and react appropriately.
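To illustrate the consuming side, a hedged sketch of a monitor that counts NAK advisories over a five-minute window (the advisory subject below is an assumption modeled on the shape of the TERM subject, not a confirmed name):
```go
package main

import (
	"log"
	"sync"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	var (
		mu   sync.Mutex
		naks []time.Time
	)
	// Subject is an assumption mirroring the MSG_TERMINATED advisory's shape.
	_, err = nc.Subscribe("$JS.EVENT.ADVISORY.CONSUMER.MSG_NAKED.>", func(m *nats.Msg) {
		mu.Lock()
		defer mu.Unlock()
		now := time.Now()
		naks = append(naks, now)
		// Drop entries older than five minutes to keep a sliding window.
		for len(naks) > 0 && now.Sub(naks[0]) > 5*time.Minute {
			naks = naks[1:]
		}
		if len(naks) >= 100 { // example threshold
			log.Printf("high NAK rate: %d NAKs in the last 5 minutes (latest on %s)", len(naks), m.Subject)
		}
	})
	if err != nil {
		log.Fatal(err)
	}
	select {} // block forever
}
```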
#### Who Benefits From The Change(s)?
Anyone who wants to monitor NAKs.
#### Alternative Approaches
Currently, there is no way to observe NAKs from the server. The alternative would be for the application to track this manually.
|
https://github.com/nats-io/nats-server/issues/3072
|
https://github.com/nats-io/nats-server/pull/3074
|
646b3850bffe185b3efa0b58981ea6f84b32ef49
|
1a80b2e716e3eb0f8c761cbcd49ce7578b134a59
| 2022-04-25T13:48:39Z |
go
| 2022-04-26T12:11:59Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,069 |
["server/filestore.go", "server/filestore_test.go", "server/jetstream_test.go", "server/stream.go"]
|
[JS] workqueue stream that is sourced from 2 different kvs with one of the subjects filtered, loses consumer sequence on metaleader change
|
I have a stream that is sourced from two KVs:
```
Subjects: ngs-workers.>
Acknowledgements: true
Retention: File - WorkQueue
Replicas: 3
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Maximum Messages: unlimited
Maximum Per Subject: 1
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Sources: KV_ngs-accounts, Subject: $KV.ngs-accounts.billing.*
KV_ngs-config
```
Several different consumers exist on the above stream - note that, with work-queue retention, consumers cannot overlap.
When the metaleader gets moved, or in cases where a nats-server is restarted, all messages from the source stream get re-sent, causing the workers to spin up even though nothing in the sourced KVs actually changed.
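For reference, a minimal sketch of how a stream like the one above can be declared with the Go client (stream and source names are taken from the info dump above; the connection details and stream name are assumptions):
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Work-queue stream sourced from two KV backing streams,
	// one of them with a filter subject, as in the report.
	_, err = js.AddStream(&nats.StreamConfig{
		Name:              "ngs-workers", // assumed name
		Subjects:          []string{"ngs-workers.>"},
		Retention:         nats.WorkQueuePolicy,
		Replicas:          3,
		Discard:           nats.DiscardOld,
		MaxMsgsPerSubject: 1,
		Sources: []*nats.StreamSource{
			{Name: "KV_ngs-accounts", FilterSubject: "$KV.ngs-accounts.billing.*"},
			{Name: "KV_ngs-config"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```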
|
https://github.com/nats-io/nats-server/issues/3069
|
https://github.com/nats-io/nats-server/pull/3109
|
b47de12bbd9241518bb5d744170c11cd26af82f7
|
88ebfdaee8b5c4e963445a5c740b30190d6e6c5a
| 2022-04-21T22:47:21Z |
go
| 2022-05-09T16:13:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,027 |
["server/jetstream_test.go", "server/sublist.go", "test/leafnode_test.go"]
|
MQTT client can't connect leafnode after cluster restart
|
> This issue may be related to #3009.
## Description
The problem has so far been observed together with the problem described in #3009.
1. Under normal circumstances, restarting the entire cluster will not cause this problem.
2. After removing the JetStream storage from the leaf node and rebooting, the MQTT client can connect to it normally.
3. When the problem occurs, I can connect to the remote node normally using the MQTT client.
**Environment**
nats-version: 2.7.3
system: centos 7.6
**Cluster Information**
3 remote nodes, 1 leaf node
Configuration as follows:
https://github.com/LLLLimbo/nats-conf
The full log and JetStream storage files are as follows (after restarting the cluster and trying to connect to the leaf node with the MQTT client):
https://github.com/LLLLimbo/nats-logs/releases/tag/before-delete
Restarted at around 2022/04/12 09:30.
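For reference, the failing connect attempt boils down to something like the following sketch (using the Eclipse Paho Go client; the broker address, port, and password are placeholders, not taken from the report):
```go
package main

import (
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://leaf-node-host:1883"). // placeholder address/port
		SetClientID("mqttx_8fec4c09").
		SetUsername("seeiner-edge-mqtt").
		SetPassword("********") // placeholder
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		// On the failing leaf node this surfaces as a connect error,
		// matching "unable to connect: create sessions stream ..." in the log below.
		log.Fatalf("unable to connect: %v", token.Error())
	}
	log.Println("connected")
	client.Disconnect(250)
}
```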
When I try to connect to the leaf node using the MQTT client, the log for the leaf node shows the following message:
```
[15891] 2022/04/12 09:44:07.556991 [TRC] 192.168.9.251:60863 - mid:311 - "mqttx_8fec4c09" - <<- [CONNECT clientID=mqttx_8fec4c09 keepAlive=1m30s username=seeiner-edge-mqtt password=****]
[15891] 2022/04/12 09:44:07.557184 [INF] Creating MQTT streams/consumers with replicas 1 for account "SEEINER"
[15891] 2022/04/12 09:44:07.557406 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS+ $MQTT.JSA.gf9flzF3.*.*]
[15891] 2022/04/12 09:44:07.557470 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS+ $MQTT.JSA.*.SP.*]
[15891] 2022/04/12 09:44:07.557504 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS+ $MQTT.sub.6mH3UM68HpFqAzAHYU9lix]
[15891] 2022/04/12 09:44:07.557521 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS+ $MQTT.JSA.*.RD]
[15891] 2022/04/12 09:44:07.558531 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[15891] 2022/04/12 09:44:07.558559 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[15891] 2022/04/12 09:44:07.558669 [DBG] 192.168.3.135:7422 - lid:15 - Not permitted to deliver to "$JS.API.STREAM.CREATE.$MQTT_sess"
[15891] 2022/04/12 09:44:07.558789 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 824]
[15891] 2022/04/12 09:44:07.558878 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"6mH3UM68HpFqAzAHYU9llu\",\"timestamp\":\"2022-04-12T01:44:07.558598406Z\",\"server\":\"nats-edge\",\"client\":{\"start\":\"2022-04-12T01:44:07.55710069Z\",\"id\":312,\"acc\":\"SEEINER\",\"server\":\"nats-edge\",\"cluster\":\"nats-edge\",\"kind\":\"Account\"},\"subject\":\"$JS.API.STREAM.CREATE.$MQTT_sess\",\"request\":\"{\\\"name\\\":\\\"$MQTT_sess\\\",\\\"subjects\\\":[\\\"$MQTT.sess.\\\\u003e\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":0,\\\"max_msgs\\\":0,\\\"max_bytes\\\":0,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_create_response\\\",\\\"error\\\":{\\\"code\\\":500,\\\"err_code\\\":10049,\\\"description\\\":\\\"deleted message\\\"}}\"}"]
[15891] 2022/04/12 09:44:07.558881 [TRC] ACCOUNT - <-> [DELSUB 1]
[15891] 2022/04/12 09:44:07.559014 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS- $MQTT.JSA.gf9flzF3.*.*]
[15891] 2022/04/12 09:44:07.559040 [TRC] ACCOUNT - <-> [DELSUB 2]
[15891] 2022/04/12 09:44:07.559055 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS- $MQTT.JSA.*.SP.*]
[15891] 2022/04/12 09:44:07.559067 [TRC] ACCOUNT - <-> [DELSUB 3]
[15891] 2022/04/12 09:44:07.559081 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS- $MQTT.sub.6mH3UM68HpFqAzAHYU9lix]
[15891] 2022/04/12 09:44:07.559107 [TRC] ACCOUNT - <-> [DELSUB 4]
[15891] 2022/04/12 09:44:07.559122 [TRC] 192.168.3.135:7422 - lid:15 - ->> [LS- $MQTT.JSA.*.RD]
[15891] 2022/04/12 09:44:07.559151 [ERR] 192.168.9.251:60863 - mid:311 - "mqttx_8fec4c09" - unable to connect: create sessions stream for account "SEEINER": deleted message (10049)
[15891] 2022/04/12 09:44:07.559169 [DBG] 192.168.9.251:60863 - mid:311 - "mqttx_8fec4c09" - Client connection closed: Protocol Violation
```
Next, I stopped the leaf node with a signal, deleted the JetStream storage, and restarted. I tried to connect to the leaf node using the MQTT client (client_id: `mqttx_8fec4c09`) and found that I was able to connect successfully.
The JetStream storage files and logs after these operations (only the most recent parts are included):
https://github.com/LLLLimbo/nats-logs/releases/tag/after-delete
|
https://github.com/nats-io/nats-server/issues/3027
|
https://github.com/nats-io/nats-server/pull/3031
|
e06e0a247fe7bbf2a4462c90f359cd975d2d47c6
|
08d1507c500a031b0eb67c198e2fb0ed04750d5d
| 2022-04-12T02:41:38Z |
go
| 2022-04-13T19:00:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,024 |
["server/jetstream_test.go", "server/sublist.go", "test/leafnode_test.go"]
|
Leaf node jetstream not receiving messages after restart
|
## Defect
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
### Description
In our production environment, we use two instances of nats servers. One instance is the main server (accessible only from within our Kubernetes cluster) while the other one is a leaf node. The leaf node is accessible from outside of our Kubernetes cluster (via websocket).
We use accounts to isolate and manage messages and topics between the main server and the leaf node. The main server account DMZ exports a stream which the leaf node account CLIENT imports. Note that there is another account, PUBLIC, on both main and leaf server which is used as authorization account (as shown in the configs below). Furthermore, we have a jetstream on this exported stream on the leaf node's CLIENT account. This configuration does work (the jetstream shows messages when they are published on the main server) on the first start of the instances, while it breaks whenever the instances are restarted. Messages published on the main server's DMZ account are not reaching the CLIENT's jetstream on the leaf node anymore. Interestingly, by manually subscribing to the jetstream's topic using `nats --server localhost:4111 --user client --password client sub 'orders.>'`, the messages reach the jetstream again. As soon as the subscription is terminated, the messages are not reaching the jetstream again. Note that this issue occured in our production environment first but I was able to replicate this issue on my local machine.
My guess is that some internal routing tables (in the main or leaf server, or both?) are not updated when the leaf node restarts. The log does not show the re-importing of subscriptions after the leaf node has restarted.
I was not able to fix this without removing and re-creating the jetstream.
#### Versions of `nats-server` and affected client libraries used:
nats-server: 2.7.4
nats-cli: 0.0.28
#### OS/Container environment (used in the reproduction):
Windows 10 (WSL2) 4.19.128-microsoft-standard; Ubuntu 20.04.3 LTS
Docker Desktop 4.3.2, Engine 20.10.11, Compose 1.29.2
#### Steps or code to reproduce the issue:
Main server config:
```
jetstream {
    domain: hub
}
monitor_port: 8222
leafnodes {
    port: 7422
    no_advertise: true
    authorization: {
        user: leaf
        password: leaf
        account: PUBLIC
    }
}
accounts: {
    DMZ: {
        users = [
            { user: service, password: service}
        ]
        exports: [
            {stream: orders.*.stream.>}
        ]
        jetstream: enabled
    }
    PUBLIC: {
        imports: [
            { stream: { account: DMZ, subject: orders.*.stream.>}}
        ]
    }
}
```
Leaf node config:
```
jetstream {
    domain: leaf
}
monitor_port: 8222
websocket {
    listen: "0.0.0.0:4112"
    no_tls: true
    compression: false
}
listen: "0.0.0.0:4111"
leafnodes {
    remotes = [
        {
            url: "nats-leaf://leaf:leaf@nats.local:7422"
            account: PUBLIC
        }
    ]
}
accounts: {
    PUBLIC: {
        exports: [
            { stream: orders.*.stream.> }
        ]
    },
    CLIENT: {
        users:[{user: client, password: client}]
        imports: [
            { stream: { account: PUBLIC, subject: orders.client.stream.> }}
        ]
        jetstream: enabled
    },
}
```
Docker compose file used:
```yaml
version: '3.5'
services:
  nats-base:
    image: nats
    ports:
      - "4222:4222"
      - "8222:8222"
      - "7422:7422"
    volumes:
      - ./main_test.conf:/etc/nats/nats-server.conf:ro
    command: -c /etc/nats/nats-server.conf -DV
  nats-leaf:
    image: nats
    ports:
      - "4111:4111"
      - "4112:4112"
      - "8223:8222"
    volumes:
      - ./leaf_test.conf:/etc/nats/nats-server.conf:ro
    command: -c /etc/nats/nats-server.conf -DV
    depends_on:
      - nats-base
    links:
      - "nats-base:nats.local"
```
1. Spin up two nats-server instances with the configurations above. One should be the "main" server and the other one should be configured as a leaf node.
2. Create a stream within the CLIENT account on the leaf node: `nats --server localhost:4111 --user client --password client stream create orders '--subjects=orders.*.stream.entry' --retention=work --max-consumers=-1 --max-msgs-per-subject=-1 --max-msgs=-1 --max-bytes=-1 --max-age=-1 --max-msg-size=-1 --storage=file --discard=old --replicas=1 --dupe-window="2m0s" --no-allow-rollup --no-deny-delete --no-deny-purge`
3. Publish a message on the main server using `nats --user service --password service pub 'orders.client.stream.entry' 'test1'`
4. Verify that the stream contains one message using `nats --server localhost:4111 --user client --password client stream info orders`
5. Now, restart the leaf node. I did that via the Docker Desktop GUI.
6. Try publishing a message on the main server again using `nats --user service --password service pub 'orders.client.stream.entry' 'test2'`
#### Expected result:
The stream contains the published message.
#### Actual result:
The stream *does not* contain the published message. Verify this by using `nats --server localhost:4111 --user client --password client stream info orders` again. It does not show two messages.
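The same check can be scripted with the Go client instead of the CLI; a minimal sketch using the credentials and port from the leaf config above:
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the leaf node as the CLIENT account user.
	nc, err := nats.Connect("nats://client:client@localhost:4111")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	info, err := js.StreamInfo("orders")
	if err != nil {
		log.Fatal(err)
	}
	// After step 6 this should report 2; on the broken restart path it stays at 1.
	log.Printf("orders stream message count: %d", info.State.Msgs)
}
```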
#### Log
```log
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544648 [INF] Starting nats-server
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544679 [INF] Version: 2.7.4
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544681 [INF] Git: [a86b84a]
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544683 [DBG] Go build: go1.17.8
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544684 [INF] Name: NC5AELZGBN5SRQJFQLJQVJ4LWOSPWWB5KUHAP36HHYUXBWLDWBLDLB2V
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544686 [INF] Node: nU3YjYyc
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544687 [INF] ID: NC5AELZGBN5SRQJFQLJQVJ4LWOSPWWB5KUHAP36HHYUXBWLDWBLDLB2V
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544689 [WRN] Plaintext passwords detected, use nkeys or bcrypt
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544695 [INF] Using configuration file: /etc/nats/nats-server.conf
dev-nats-base-1 | [1] 2022/04/11 14:54:58.544705 [DBG] Created system account: "$SYS"
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545042 [INF] Starting http monitor on 0.0.0.0:8222
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545077 [INF] Starting JetStream
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545215 [DBG] JetStream creating dynamic configuration - 9.32 GB memory, 144.53 GB disk
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545297 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545309 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545311 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545312 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545313 [INF]
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545314 [INF] https://docs.nats.io/jetstream
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545315 [INF]
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545316 [INF] ---------------- JETSTREAM ----------------
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545318 [INF] Max Memory: 9.32 GB
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545319 [INF] Max Storage: 144.53 GB
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545321 [INF] Store Directory: "/tmp/nats/jetstream"
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545322 [INF] Domain: hub
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545323 [INF] -------------------------------------------
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545354 [DBG] Exports:
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545365 [DBG] $JS.API.>
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545450 [DBG] Enabled JetStream for account "DMZ"
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545461 [DBG] Max Memory: -1 B
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545462 [DBG] Max Storage: -1 B
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545571 [DBG] JetStream state for account "DMZ" recovered
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545640 [INF] Listening for leafnode connections on 0.0.0.0:7422
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545652 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545751 [DBG] ip=172.19.0.2
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545868 [INF] Listening for client connections on 0.0.0.0:4222
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545879 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545938 [DBG] ip=172.19.0.2
dev-nats-base-1 | [1] 2022/04/11 14:54:58.545970 [INF] Server is ready
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446285 [INF] 172.19.0.3:42978 - lid:6 - Leafnode connection created
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446612 [TRC] 172.19.0.3:42978 - lid:6 - <<- [CONNECT {"user":"leaf","pass":"[REDACTED]","tls_required":false,"server_id":"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS","domain":"leaf","name":"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS","headers":true}]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446688 [INF] 172.19.0.3:42978 - lid:6 - JetStream Not Extended, adding deny [$JS.API.> $KV.> $OBJ.>] for account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446715 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.CONSUMER.>" -> $JS.API.CONSUMER.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446727 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.META.>" -> $JS.API.META.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446731 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.SERVER.>" -> $JS.API.SERVER.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446741 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.$KV.>" -> $KV.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446745 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.$OBJ.>" -> $OBJ.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446748 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.INFO" -> $JS.API.INFO to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446752 [INF] 172.19.0.3:42978 - lid:6 - Adding JetStream Domain Mapping "$JS.hub.API.STREAM.>" -> $JS.API.STREAM.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446756 [INF] 172.19.0.3:42978 - lid:6 - Adding deny "$JS.hub.API.>" for outgoing messages to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446773 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.$KV.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446775 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.$OBJ.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446777 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.INFO]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446779 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.STREAM.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446780 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.CONSUMER.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446782 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.META.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446784 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446785 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.API.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446787 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $LDS.snwsb54Z6k3qSK5oqg3zU9]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446789 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.446802 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LS+ $JS.hub.API.SERVER.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447145 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.INFO]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447172 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.STREAM.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447176 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.CONSUMER.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447178 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447182 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.SERVER.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447194 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.$KV.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447197 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $LDS.4CoZJI0ZS86jV7ukmz8mla]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447200 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447211 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.META.>]
dev-nats-base-1 | [1] 2022/04/11 14:54:59.447215 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ $JS.leaf.API.$OBJ.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:00.463767 [DBG] 172.19.0.3:42978 - lid:6 - Leafnode Ping Timer
dev-nats-base-1 | [1] 2022/04/11 14:55:00.463799 [TRC] 172.19.0.3:42978 - lid:6 - ->> [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:00.463972 [TRC] 172.19.0.3:42978 - lid:6 - <<- [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:00.524291 [TRC] 172.19.0.3:42978 - lid:6 - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:00.524308 [TRC] 172.19.0.3:42978 - lid:6 - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:03.418074 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS+ orders.*.stream.entry]
dev-nats-base-1 | [1] 2022/04/11 14:55:03.418097 [DBG] 172.19.0.3:42978 - lid:6 - Creating import subscription on "orders.*.stream.entry" from account "DMZ"
dev-nats-base-1 | [1] 2022/04/11 14:55:10.493729 [DBG] 172.19.0.1:59438 - cid:7 - Client connection created
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494176 [TRC] 172.19.0.1:59438 - cid:7 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"service","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494230 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494240 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494585 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client.stream.entry 5]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494600 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test1"]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494618 [TRC] 172.19.0.3:42978 - lid:6 - ->> [LMSG orders.client.stream.entry 5]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494628 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.494630 [TRC] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:10.495049 [DBG] 172.19.0.1:59438 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104119 [TRC] 172.19.0.3:42978 - lid:6 - <<- [LS- orders.*.stream.entry]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104134 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB orders.*.stream.entry]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104506 [INF] 172.19.0.3:42978 - lid:6 - Leafnode connection closed: Client Closed account: PUBLIC
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104557 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.STREAM.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104575 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.CONSUMER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104578 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104582 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.SERVER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104594 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.$KV.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104613 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104618 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.META.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104622 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.INFO]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104626 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $JS.leaf.API.$OBJ.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:17.104703 [TRC] 172.19.0.3:42978 - lid:6 - <-> [DELSUB $LDS.4CoZJI0ZS86jV7ukmz8mla]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965052 [INF] 172.19.0.3:43004 - lid:8 - Leafnode connection created
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965348 [TRC] 172.19.0.3:43004 - lid:8 - <<- [CONNECT {"user":"leaf","pass":"[REDACTED]","tls_required":false,"server_id":"NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX","domain":"leaf","name":"NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX","headers":true}]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965392 [INF] 172.19.0.3:43004 - lid:8 - JetStream Not Extended, adding deny [$JS.API.> $KV.> $OBJ.>] for account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965413 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.META.>" -> $JS.API.META.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965418 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.SERVER.>" -> $JS.API.SERVER.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965422 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.$KV.>" -> $KV.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965425 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.$OBJ.>" -> $OBJ.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965427 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.INFO" -> $JS.API.INFO to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965431 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.STREAM.>" -> $JS.API.STREAM.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965444 [INF] 172.19.0.3:43004 - lid:8 - Adding JetStream Domain Mapping "$JS.hub.API.CONSUMER.>" -> $JS.API.CONSUMER.> to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965448 [INF] 172.19.0.3:43004 - lid:8 - Adding deny "$JS.hub.API.>" for outgoing messages to account "PUBLIC"
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965462 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.API.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965468 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965470 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.$OBJ.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965472 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.INFO]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965474 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965475 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.SERVER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965477 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.$KV.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965479 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.STREAM.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965480 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.CONSUMER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965482 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $JS.hub.API.META.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965484 [TRC] 172.19.0.3:43004 - lid:8 - ->> [LS+ $LDS.snwsb54Z6k3qSK5oqg3zU9]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965763 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $LDS.8mRek9GPVzK5CbybhoOS4b]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965777 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.$OBJ.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965782 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.STREAM.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965785 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.$KV.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965797 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.INFO]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965811 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.CONSUMER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965814 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.META.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965817 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $JS.leaf.API.SERVER.>]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965820 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:20.965825 [TRC] 172.19.0.3:43004 - lid:8 - <<- [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-base-1 | [1] 2022/04/11 14:55:22.067457 [DBG] 172.19.0.3:43004 - lid:8 - Leafnode Ping Timer
dev-nats-base-1 | [1] 2022/04/11 14:55:22.067498 [TRC] 172.19.0.3:43004 - lid:8 - ->> [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:22.067684 [TRC] 172.19.0.3:43004 - lid:8 - <<- [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:22.067694 [TRC] 172.19.0.3:43004 - lid:8 - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:22.067696 [TRC] 172.19.0.3:43004 - lid:8 - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.229925 [DBG] 172.19.0.1:59452 - cid:9 - Client connection created
dev-nats-base-1 | [1] 2022/04/11 14:55:25.230832 [TRC] 172.19.0.1:59452 - cid:9 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"service","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.230898 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.230908 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.231289 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB orders.client.stream.entry 5]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.231305 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["test2"]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.231311 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.231313 [TRC] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-base-1 | [1] 2022/04/11 14:55:25.231626 [DBG] 172.19.0.1:59452 - cid:9 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444053 [INF] Starting nats-server
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444135 [INF] Version: 2.7.4
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444137 [INF] Git: [a86b84a]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444139 [DBG] Go build: go1.17.8
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444140 [INF] Name: NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444143 [INF] Node: d3Z34jgW
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444144 [INF] ID: NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444146 [WRN] Plaintext passwords detected, use nkeys or bcrypt
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444167 [INF] Using configuration file: /etc/nats/nats-server.conf
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444180 [DBG] Created system account: "$SYS"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444610 [INF] Starting http monitor on 0.0.0.0:8222
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444644 [INF] Starting JetStream
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444763 [DBG] JetStream creating dynamic configuration - 9.32 GB memory, 144.53 GB disk
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444851 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444863 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444865 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444866 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444867 [INF]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444868 [INF] https://docs.nats.io/jetstream
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444869 [INF]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444870 [INF] ---------------- JETSTREAM ----------------
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444872 [INF] Max Memory: 9.32 GB
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444873 [INF] Max Storage: 144.53 GB
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444874 [INF] Store Directory: "/tmp/nats/jetstream"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444875 [INF] Domain: leaf
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444876 [INF] -------------------------------------------
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444905 [DBG] Exports:
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.444914 [DBG] $JS.API.>
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445022 [DBG] Enabled JetStream for account "CLIENT"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445037 [DBG] Max Memory: -1 B
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445039 [DBG] Max Storage: -1 B
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445392 [DBG] JetStream state for account "CLIENT" recovered
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445456 [INF] Listening for websocket clients on ws://0.0.0.0:4112
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445467 [WRN] Websocket not configured with TLS. DO NOT USE IN PRODUCTION!
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445470 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445575 [DBG] ip=172.19.0.3
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445617 [INF] Listening for client connections on 0.0.0.0:4111
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445620 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445672 [DBG] ip=172.19.0.3
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.445694 [INF] Server is ready
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446102 [DBG] Trying to connect as leafnode to remote server on "nats.local:7422" (172.19.0.2:7422)
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446267 [INF] 172.19.0.2:7422 - lid:6 - Leafnode connection created for account: PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446467 [DBG] 172.19.0.2:7422 - lid:6 - Remote leafnode connect msg sent
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446898 [INF] 172.19.0.2:7422 - lid:6 - JetStream Not Extended, adding deny [$JS.API.> $KV.> $OBJ.>] for account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446923 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.META.>" -> $JS.API.META.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446927 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.SERVER.>" -> $JS.API.SERVER.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446930 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.$KV.>" -> $KV.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446934 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.$OBJ.>" -> $OBJ.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446936 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.INFO" -> $JS.API.INFO to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446943 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.STREAM.>" -> $JS.API.STREAM.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446947 [INF] 172.19.0.2:7422 - lid:6 - Adding JetStream Domain Mapping "$JS.leaf.API.CONSUMER.>" -> $JS.API.CONSUMER.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446960 [INF] 172.19.0.2:7422 - lid:6 - Adding deny "$JS.leaf.API.>" for outgoing messages to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446964 [DBG] 172.19.0.2:7422 - lid:6 - Not permitted to import service "$JS.API.>" on behalf of PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446977 [DBG] 172.19.0.2:7422 - lid:6 - Not permitted to subscribe to "$JS.API.>" on behalf of PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446980 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.INFO]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446982 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.STREAM.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446985 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.CONSUMER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446993 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446995 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.SERVER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446997 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.$KV.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.446999 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $LDS.4CoZJI0ZS86jV7ukmz8mla]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447001 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447002 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.META.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447004 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ $JS.leaf.API.$OBJ.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447016 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.$KV.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447027 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.$OBJ.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447029 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.INFO]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447043 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.STREAM.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447045 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.CONSUMER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447049 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.META.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447051 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447056 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.API.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447079 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $LDS.snwsb54Z6k3qSK5oqg3zU9]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447090 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:54:59.447096 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LS+ $JS.hub.API.SERVER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:00.463894 [TRC] 172.19.0.2:7422 - lid:6 - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:00.463905 [TRC] 172.19.0.2:7422 - lid:6 - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:00.524147 [DBG] 172.19.0.2:7422 - lid:6 - Leafnode Ping Timer
dev-nats-leaf-1 | [1] 2022/04/11 14:55:00.524188 [TRC] 172.19.0.2:7422 - lid:6 - ->> [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:00.524394 [TRC] 172.19.0.2:7422 - lid:6 - <<- [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.415780 [DBG] 172.19.0.1:42780 - cid:7 - Client connection created
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.416226 [TRC] 172.19.0.1:42780 - cid:7 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.416299 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.416309 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.417332 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.mh3KRbojJSS3qO6AdoM1SX.* 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.417345 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.CREATE.orders _INBOX.mh3KRbojJSS3qO6AdoM1SX.g1oJNq1s 344]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.417357 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: ["{\"name\":\"orders\",\"subjects\":[\"orders.*.stream.entry\"],\"retention\":\"workqueue\",\"max_consumers\":-1,\"max_msgs_per_subject\":-1,\"max_msgs\":-1,\"max_bytes\":-1,\"max_age\":0,\"max_msg_size\":-1,\"storage\":\"file\",\"discard\":\"old\",\"num_replicas\":1,\"duplicate_window\":120000000000,\"sealed\":false,\"deny_delete\":false,\"deny_purge\":false,\"allow_rollup_hdrs\":false}"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.417878 [DBG] JETSTREAM - Creating import subscription on "orders.*.stream.entry" from account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.417893 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS+ orders.*.stream.entry]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.418015 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.mh3KRbojJSS3qO6AdoM1SX.g1oJNq1s 1 617]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.418040 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1642]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.418056 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"4CoZJI0ZS86jV7ukmz8myt\",\"timestamp\":\"2022-04-11T14:55:03.4179786Z\",\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"client\":{\"start\":\"2022-04-11T14:55:03.4157407Z\",\"host\":\"172.19.0.1\",\"id\":7,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":496500,\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.CREATE.orders\",\"request\":\"{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs_per_subject\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msg_size\\\":-1,\\\"storage\\\":\\\"file\\\",\\\"discard\\\":\\\"old\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_create_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-04-11T14:55:03.4175439Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":0,\\\"first_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":0},\\\"did_create\\\":true}\",\"domain\":\"leaf\"}"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.420296 [DBG] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:03.420325 [TRC] 172.19.0.1:42780 - cid:7 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.158533 [DBG] 172.19.0.1:42784 - cid:12 - Client connection created
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.158965 [TRC] 172.19.0.1:42784 - cid:12 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159021 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159027 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159301 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.b1hl73jDSy7kanBRfqaHJb.* 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159321 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.b1hl73jDSy7kanBRfqaHJb.fsRWd9qA 0]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159325 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159458 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.b1hl73jDSy7kanBRfqaHJb.fsRWd9qA 1 613]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159472 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1238]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.159484 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"4CoZJI0ZS86jV7ukmz8n3K\",\"timestamp\":\"2022-04-11T14:55:07.1594322Z\",\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"client\":{\"start\":\"2022-04-11T14:55:07.1584862Z\",\"host\":\"172.19.0.1\",\"id\":12,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":491200,\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-04-11T14:55:03.4175439Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":0,\\\"first_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":0},\\\"domain\\\":\\\"leaf\\\"}\",\"domain\":\"leaf\"}"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.161804 [DBG] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:07.161833 [TRC] 172.19.0.1:42784 - cid:12 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:10.494708 [TRC] 172.19.0.2:7422 - lid:6 - <<- [LMSG orders.client.stream.entry 5]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:10.494730 [TRC] 172.19.0.2:7422 - lid:6 - <<- MSG_PAYLOAD: ["test1"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916149 [DBG] 172.19.0.1:42792 - cid:13 - Client connection created
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916599 [TRC] 172.19.0.1:42792 - cid:13 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916651 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916661 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916968 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.OgqVU7byJZN0PkGYCckewu.* 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916989 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.OgqVU7byJZN0PkGYCckewu.wgUwht6m 0]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.916994 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.917095 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.OgqVU7byJZN0PkGYCckewu.wgUwht6m 1 647]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.917109 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1274]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.917130 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"4CoZJI0ZS86jV7ukmz8n7l\",\"timestamp\":\"2022-04-11T14:55:11.9170667Z\",\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"client\":{\"start\":\"2022-04-11T14:55:11.9161105Z\",\"host\":\"172.19.0.1\",\"id\":13,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":500100,\"server\":\"NBNVT6GVSBJ464DTGN7YN7OTDDLEHLEMHRZ26D2EFD6UC55AE3HM4TVS\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-04-11T14:55:03.4175439Z\\\",\\\"state\\\":{\\\"messages\\\":1,\\\"bytes\\\":61,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-04-11T14:55:10.4947619Z\\\",\\\"last_seq\\\":1,\\\"last_ts\\\":\\\"2022-04-11T14:55:10.4947619Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0},\\\"domain\\\":\\\"leaf\\\"}\",\"domain\":\"leaf\"}"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.919661 [DBG] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:11.919690 [TRC] 172.19.0.1:42792 - cid:13 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103780 [DBG] Trapped "terminated" signal
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103862 [INF] Initiating Shutdown...
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103872 [INF] Initiating JetStream Shutdown...
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103897 [TRC] JETSTREAM - <-> [DELSUB 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103928 [TRC] 172.19.0.2:7422 - lid:6 - ->> [LS- orders.*.stream.entry]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103970 [DBG] JETSTREAM - JetStream connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.103976 [DBG] JETSTREAM - JetStream connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104042 [DBG] JETSTREAM - JetStream connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104212 [INF] JetStream Shutdown
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104340 [DBG] Client accept loop exiting..
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104341 [INF] 172.19.0.2:7422 - lid:6 - Leafnode connection closed: Server Shutdown account: PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104384 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.$KV.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104389 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.$OBJ.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104392 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.CONSUMER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104395 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.META.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104399 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.API.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104402 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $LDS.snwsb54Z6k3qSK5oqg3zU9]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104405 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104408 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.SERVER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104411 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.INFO]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104414 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $JS.hub.API.STREAM.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104417 [TRC] 172.19.0.2:7422 - lid:6 - <-> [DELSUB $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104353 [DBG] SYSTEM - System connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:17.104505 [INF] Server Exiting..
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962932 [INF] Starting nats-server
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962955 [INF] Version: 2.7.4
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962956 [INF] Git: [a86b84a]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962958 [DBG] Go build: go1.17.8
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962960 [INF] Name: NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962969 [INF] Node: YI420yud
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962971 [INF] ID: NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962973 [WRN] Plaintext passwords detected, use nkeys or bcrypt
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.962979 [INF] Using configuration file: /etc/nats/nats-server.conf
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963005 [DBG] Created system account: "$SYS"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963366 [INF] Starting http monitor on 0.0.0.0:8222
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963397 [INF] Starting JetStream
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963422 [DBG] JetStream creating dynamic configuration - 9.32 GB memory, 144.53 GB disk
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963554 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963567 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963568 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963569 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963570 [INF]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963571 [INF] https://docs.nats.io/jetstream
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963571 [INF]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963572 [INF] ---------------- JETSTREAM ----------------
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963575 [INF] Max Memory: 9.32 GB
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963576 [INF] Max Storage: 144.53 GB
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963577 [INF] Store Directory: "/tmp/nats/jetstream"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963578 [INF] Domain: leaf
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963579 [INF] -------------------------------------------
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963606 [DBG] Exports:
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963611 [DBG] $JS.API.>
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963700 [DBG] Enabled JetStream for account "CLIENT"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963712 [DBG] Max Memory: -1 B
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963714 [DBG] Max Storage: -1 B
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.963728 [DBG] Recovering JetStream state for account "CLIENT"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964110 [DBG] JETSTREAM - Creating import subscription on "orders.*.stream.entry" from account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964148 [INF] Restored 1 messages for stream 'CLIENT > orders'
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964184 [DBG] JetStream state for account "CLIENT" recovered
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964257 [INF] Listening for websocket clients on ws://0.0.0.0:4112
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964268 [WRN] Websocket not configured with TLS. DO NOT USE IN PRODUCTION!
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964270 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964359 [DBG] ip=172.19.0.3
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964402 [INF] Listening for client connections on 0.0.0.0:4111
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964415 [DBG] Get non local IPs for "0.0.0.0"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964482 [DBG] ip=172.19.0.3
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964508 [INF] Server is ready
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.964852 [DBG] Trying to connect as leafnode to remote server on "nats.local:7422" (172.19.0.2:7422)
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965029 [INF] 172.19.0.2:7422 - lid:9 - Leafnode connection created for account: PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965240 [DBG] 172.19.0.2:7422 - lid:9 - Remote leafnode connect msg sent
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965559 [INF] 172.19.0.2:7422 - lid:9 - JetStream Not Extended, adding deny [$JS.API.> $KV.> $OBJ.>] for account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965582 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.INFO" -> $JS.API.INFO to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965589 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.STREAM.>" -> $JS.API.STREAM.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965593 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.CONSUMER.>" -> $JS.API.CONSUMER.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965624 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.META.>" -> $JS.API.META.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965633 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.SERVER.>" -> $JS.API.SERVER.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965646 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.$KV.>" -> $KV.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965650 [INF] 172.19.0.2:7422 - lid:9 - Adding JetStream Domain Mapping "$JS.leaf.API.$OBJ.>" -> $OBJ.> to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965655 [INF] 172.19.0.2:7422 - lid:9 - Adding deny "$JS.leaf.API.>" for outgoing messages to account "PUBLIC"
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965667 [DBG] 172.19.0.2:7422 - lid:9 - Not permitted to import service "$JS.API.>" on behalf of PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965674 [DBG] 172.19.0.2:7422 - lid:9 - Not permitted to subscribe to "$JS.API.>" on behalf of PUBLIC
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965677 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $LDS.8mRek9GPVzK5CbybhoOS4b]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965679 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.$OBJ.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965681 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.STREAM.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965683 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.$KV.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965685 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.INFO]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965686 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.CONSUMER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965688 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.META.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965690 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $JS.leaf.API.SERVER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965691 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965693 [TRC] 172.19.0.2:7422 - lid:9 - ->> [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965706 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.API.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965719 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $SYS.REQ.ACCOUNT.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965724 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.$OBJ.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965743 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.INFO]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965747 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $SYS.REQ.SERVER.PING.CONNZ]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965750 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.SERVER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965753 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.$KV.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965755 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.STREAM.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965758 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.CONSUMER.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965770 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $JS.hub.API.META.>]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:20.965773 [TRC] 172.19.0.2:7422 - lid:9 - <<- [LS+ $LDS.snwsb54Z6k3qSK5oqg3zU9]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:22.067585 [TRC] 172.19.0.2:7422 - lid:9 - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:22.067603 [TRC] 172.19.0.2:7422 - lid:9 - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:22.067624 [DBG] 172.19.0.2:7422 - lid:9 - Leafnode Ping Timer
dev-nats-leaf-1 | [1] 2022/04/11 14:55:22.067632 [TRC] 172.19.0.2:7422 - lid:9 - ->> [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:22.067742 [TRC] 172.19.0.2:7422 - lid:9 - <<- [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.417361 [DBG] 172.19.0.1:42806 - cid:10 - Client connection created
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.417799 [TRC] 172.19.0.1:42806 - cid:10 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"client","pass":"[REDACTED]","tls_required":false,"name":"NATS CLI Version 0.0.28","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.417886 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PING]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.417896 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [PONG]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418181 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [SUB _INBOX.Rd7VW1R8dMNEBvhwGCxZvk.* 1]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418200 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- [PUB $JS.API.STREAM.INFO.orders _INBOX.Rd7VW1R8dMNEBvhwGCxZvk.mxldJ5NE 0]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418205 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <<- MSG_PAYLOAD: [""]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418432 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - ->> [MSG _INBOX.Rd7VW1R8dMNEBvhwGCxZvk.mxldJ5NE 1 647]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418447 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1274]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.418460 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"8mRek9GPVzK5CbybhoOS6B\",\"timestamp\":\"2022-04-11T14:55:27.4183973Z\",\"server\":\"NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX\",\"client\":{\"start\":\"2022-04-11T14:55:27.4173164Z\",\"host\":\"172.19.0.1\",\"id\":10,\"acc\":\"CLIENT\",\"user\":\"client\",\"name\":\"NATS CLI Version 0.0.28\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":493200,\"server\":\"NB52VH4YKKOWJP3O65FTECPVKXMQLPWTH5U2XIYU6F52ALHE4JWALRWX\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.INFO.orders\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_info_response\\\",\\\"config\\\":{\\\"name\\\":\\\"orders\\\",\\\"subjects\\\":[\\\"orders.*.stream.entry\\\"],\\\"retention\\\":\\\"workqueue\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":0,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"file\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-04-11T14:55:03.4175439Z\\\",\\\"state\\\":{\\\"messages\\\":1,\\\"bytes\\\":61,\\\"first_seq\\\":1,\\\"first_ts\\\":\\\"2022-04-11T14:55:10.4947619Z\\\",\\\"last_seq\\\":1,\\\"last_ts\\\":\\\"2022-04-11T14:55:10.4947619Z\\\",\\\"num_subjects\\\":1,\\\"consumer_count\\\":0},\\\"domain\\\":\\\"leaf\\\"}\",\"domain\":\"leaf\"}"]
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.420948 [DBG] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - Client connection closed: Client Closed
dev-nats-leaf-1 | [1] 2022/04/11 14:55:27.420974 [TRC] 172.19.0.1:42806 - cid:10 - "v1.13.0:go:NATS CLI Version 0.0.28" - <-> [DELSUB 1]
```
|
https://github.com/nats-io/nats-server/issues/3024
|
https://github.com/nats-io/nats-server/pull/3031
|
e06e0a247fe7bbf2a4462c90f359cd975d2d47c6
|
08d1507c500a031b0eb67c198e2fb0ed04750d5d
| 2022-04-11T14:59:02Z |
go
| 2022-04-13T19:00:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 3,009 |
["server/leafnode_test.go", "server/reload.go", "test/leafnode_test.go"]
|
Leaf nodes do not receive messages from remote nodes properly
|
# Description
I started a NATS cluster with three remote nodes and one leaf node. They are configured as follows:
https://github.com/LLLLimbo/nats-conf/tree/main
At first they work fine: the leaf node receives messages correctly when I publish them to the remote nodes, and vice versa. After running for a while, however, the leaf node stops receiving messages from the remote nodes correctly.

~~The problem may be related to jetstream, when I delete the storage file of jetstream, the cluster is back to normal.~~
2022-04-12 correction: A simple restart restores communication between the remote node and the leaf node.
*Environment*
- nats-version: 2.7.3
- system: CentOS 7.6
|
https://github.com/nats-io/nats-server/issues/3009
|
https://github.com/nats-io/nats-server/pull/3058
|
69ea1ab5f4319992702b19d9f950fdf61e9862fa
|
d15d04be984ebc7c57efdaf60f404362cb3927a1
| 2022-04-06T06:47:51Z |
go
| 2022-04-20T14:42:33Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,995 |
["server/jetstream_api.go", "server/jetstream_cluster_test.go"]
|
When providing an invalid consumer name, consumer info returns unclear error of type Context Deadline
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- nats-server`v2.7.4`
- nats-client `v1.13.1-0.20220308171302-2f2f6968e98d`
#### OS/Container environment:
- Ubuntu 20.04.4 LTS
#### Steps or code to reproduce the issue:
##### MCVE test case
You can also skip the embedded nats-server startup and simply connect to an already running nats-server by changing the `nats.Connect` call.
```golang
package main

import (
    "context"
    "testing"
    "time"

    "github.com/nats-io/nats-server/v2/server"
    "github.com/nats-io/nats.go"
    "github.com/stretchr/testify/require"
)

func TestConsumerInfoIncorrectDurableName(t *testing.T) {
    opts := &server.Options{
        JetStream: true,
        Host:      "127.0.0.1",
        Port:      -1,
        HTTPPort:  -1,
        NoLog:     true,
        NoSigs:    true,
        Debug:     true,
        Trace:     true,
        StoreDir:  t.TempDir(),
    }
    s, err := server.NewServer(opts)
    require.NoError(t, err)
    go s.Start()
    require.True(t, s.ReadyForConnections(20*time.Second))

    c, err := nats.Connect(s.ClientURL())
    require.NoError(t, err)
    jsCtx, err := c.JetStream(nats.MaxWait(30 * time.Second))
    require.NoError(t, err)

    streamName := "some-stream"
    streamCfg := &nats.StreamConfig{
        Name:     streamName,
        Subjects: []string{"subject.>"},
        Storage:  nats.MemoryStorage,
        MaxAge:   time.Hour,
    }
    _, err = jsCtx.AddStream(streamCfg)
    require.NoError(t, err)

    consumerCfg := &nats.ConsumerConfig{
        AckPolicy:     nats.AckExplicitPolicy,
        DeliverPolicy: nats.DeliverAllPolicy,
        ReplayPolicy:  nats.ReplayInstantPolicy,
        Durable:       "durable-name",
    }
    _, err = jsCtx.AddConsumer(streamName, consumerCfg)
    require.NoError(t, err)

    invalidName := "foo.bar"
    _, err = jsCtx.ConsumerInfo(streamName, invalidName)
    require.ErrorIs(t, err, context.DeadlineExceeded)
}
```
##### Log of running the test case against the binary
```
[2011249] 2022/04/02 18:45:44.942989 [INF] Starting nats-server
[2011249] 2022/04/02 18:45:44.943190 [INF] Version: 2.7.4
[2011249] 2022/04/02 18:45:44.943201 [INF] Git: [not set]
[2011249] 2022/04/02 18:45:44.943228 [DBG] Go build: go1.17.7
[2011249] 2022/04/02 18:45:44.943234 [INF] Name: ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP
[2011249] 2022/04/02 18:45:44.943245 [INF] Node: oVJFP4KO
[2011249] 2022/04/02 18:45:44.943252 [INF] ID: ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP
[2011249] 2022/04/02 18:45:44.943337 [DBG] Created system account: "$SYS"
[2011249] 2022/04/02 18:45:44.944235 [INF] Starting JetStream
[2011249] 2022/04/02 18:45:44.944329 [DBG] JetStream creating dynamic configuration - 23.24 GB memory, 531.26 GB disk
[2011249] 2022/04/02 18:45:44.946894 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[2011249] 2022/04/02 18:45:44.946921 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[2011249] 2022/04/02 18:45:44.946929 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[2011249] 2022/04/02 18:45:44.946935 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[2011249] 2022/04/02 18:45:44.946941 [INF]
[2011249] 2022/04/02 18:45:44.946948 [INF] https://docs.nats.io/jetstream
[2011249] 2022/04/02 18:45:44.946954 [INF]
[2011249] 2022/04/02 18:45:44.946961 [INF] ---------------- JETSTREAM ----------------
[2011249] 2022/04/02 18:45:44.946981 [INF] Max Memory: 23.24 GB
[2011249] 2022/04/02 18:45:44.946996 [INF] Max Storage: 531.26 GB
[2011249] 2022/04/02 18:45:44.947004 [INF] Store Directory: "/tmp/nats/jetstream"
[2011249] 2022/04/02 18:45:44.947011 [INF] -------------------------------------------
[2011249] 2022/04/02 18:45:44.947266 [DBG] Exports:
[2011249] 2022/04/02 18:45:44.947288 [DBG] $JS.API.>
[2011249] 2022/04/02 18:45:44.947400 [DBG] Enabled JetStream for account "$G"
[2011249] 2022/04/02 18:45:44.947424 [DBG] Max Memory: -1 B
[2011249] 2022/04/02 18:45:44.947435 [DBG] Max Storage: -1 B
[2011249] 2022/04/02 18:45:44.947490 [DBG] Recovering JetStream state for account "$G"
[2011249] 2022/04/02 18:45:44.951773 [INF] Restored 0 messages for stream '$G > kek'
[2011249] 2022/04/02 18:45:44.951962 [INF] Recovering 1 consumers for stream - '$G > kek'
[2011249] 2022/04/02 18:45:44.952679 [TRC] JETSTREAM - <<- [SUB $JSC.CI.$G.kek.kek 40]
[2011249] 2022/04/02 18:45:44.952913 [DBG] JetStream state for account "$G" recovered
[2011249] 2022/04/02 18:45:44.953365 [INF] Listening for client connections on 0.0.0.0:4222
[2011249] 2022/04/02 18:45:44.953387 [DBG] Get non local IPs for "0.0.0.0"
[2011249] 2022/04/02 18:45:44.958063 [INF] Server is ready
[2011249] 2022/04/02 18:45:53.588718 [DBG] 127.0.0.1:40926 - cid:9 - Client connection created
[2011249] 2022/04/02 18:45:53.589256 [TRC] 127.0.0.1:40926 - cid:9 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"","lang":"go","version":"1.13.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
[2011249] 2022/04/02 18:45:53.589414 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [PING]
[2011249] 2022/04/02 18:45:53.589440 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - ->> [PONG]
[2011249] 2022/04/02 18:45:53.589844 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [SUB _INBOX.140aFWeG3YKlI1WF1qGiRd.* 1]
[2011249] 2022/04/02 18:45:53.589864 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [PUB $JS.API.STREAM.CREATE.some-stream _INBOX.140aFWeG3YKlI1WF1qGiRd.jlIA4zsQ 219]
[2011249] 2022/04/02 18:45:53.589877 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- MSG_PAYLOAD: ["{\"name\":\"some-stream\",\"subjects\":[\"subject.\\u003e\"],\"retention\":\"limits\",\"max_consumers\":0,\"max_msgs\":0,\"max_bytes\":0,\"discard\":\"old\",\"max_age\":3600000000000,\"max_msgs_per_subject\":0,\"storage\":\"memory\",\"num_replicas\":0}"]
[2011249] 2022/04/02 18:45:53.590247 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - ->> [MSG _INBOX.140aFWeG3YKlI1WF1qGiRd.jlIA4zsQ 1 628]
[2011249] 2022/04/02 18:45:53.590260 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1457]
[2011249] 2022/04/02 18:45:53.590293 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"mq8H2tEDaHKgGe1JjEg4hl\",\"timestamp\":\"2022-04-02T17:45:53.590203791Z\",\"server\":\"ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP\",\"client\":{\"start\":\"2022-04-02T17:45:53.588603608Z\",\"host\":\"127.0.0.1\",\"id\":9,\"acc\":\"$G\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":660517,\"server\":\"ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.STREAM.CREATE.some-stream\",\"request\":\"{\\\"name\\\":\\\"some-stream\\\",\\\"subjects\\\":[\\\"subject.\\\\u003e\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":0,\\\"max_msgs\\\":0,\\\"max_bytes\\\":0,\\\"discard\\\":\\\"old\\\",\\\"max_age\\\":3600000000000,\\\"max_msgs_per_subject\\\":0,\\\"storage\\\":\\\"memory\\\",\\\"num_replicas\\\":0}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.stream_create_response\\\",\\\"config\\\":{\\\"name\\\":\\\"some-stream\\\",\\\"subjects\\\":[\\\"subject.\\\\u003e\\\"],\\\"retention\\\":\\\"limits\\\",\\\"max_consumers\\\":-1,\\\"max_msgs\\\":-1,\\\"max_bytes\\\":-1,\\\"max_age\\\":3600000000000,\\\"max_msgs_per_subject\\\":-1,\\\"max_msg_size\\\":-1,\\\"discard\\\":\\\"old\\\",\\\"storage\\\":\\\"memory\\\",\\\"num_replicas\\\":1,\\\"duplicate_window\\\":120000000000,\\\"sealed\\\":false,\\\"deny_delete\\\":false,\\\"deny_purge\\\":false,\\\"allow_rollup_hdrs\\\":false},\\\"created\\\":\\\"2022-04-02T17:45:53.590047781Z\\\",\\\"state\\\":{\\\"messages\\\":0,\\\"bytes\\\":0,\\\"first_seq\\\":0,\\\"first_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"last_seq\\\":0,\\\"last_ts\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"consumer_count\\\":0},\\\"did_create\\\":true}\"}"]
[2011249] 2022/04/02 18:45:53.590785 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [PUB $JS.API.CONSUMER.DURABLE.CREATE.some-stream.durable-name _INBOX.140aFWeG3YKlI1WF1qGiRd.yEqI56QO 143]
[2011249] 2022/04/02 18:45:53.590793 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- MSG_PAYLOAD: ["{\"stream_name\":\"some-stream\",\"config\":{\"durable_name\":\"durable-name\",\"deliver_policy\":\"all\",\"ack_policy\":\"explicit\",\"replay_policy\":\"instant\"}}"]
[2011249] 2022/04/02 18:45:53.590954 [TRC] JETSTREAM - <<- [SUB $JSC.CI.$G.some-stream.durable-name 41]
[2011249] 2022/04/02 18:45:53.591077 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - ->> [MSG _INBOX.140aFWeG3YKlI1WF1qGiRd.yEqI56QO 1 593]
[2011249] 2022/04/02 18:45:53.591087 [TRC] ACCOUNT - <<- [PUB $JS.EVENT.ADVISORY.API 1349]
[2011249] 2022/04/02 18:45:53.591111 [TRC] ACCOUNT - <<- MSG_PAYLOAD: ["{\"type\":\"io.nats.jetstream.advisory.v1.api_audit\",\"id\":\"mq8H2tEDaHKgGe1JjEg4rX\",\"timestamp\":\"2022-04-02T17:45:53.591040096Z\",\"server\":\"ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP\",\"client\":{\"start\":\"2022-04-02T17:45:53.588603608Z\",\"host\":\"127.0.0.1\",\"id\":9,\"acc\":\"$G\",\"lang\":\"go\",\"ver\":\"1.13.0\",\"rtt\":660517,\"server\":\"ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP\",\"kind\":\"Client\",\"client_type\":\"nats\"},\"subject\":\"$JS.API.CONSUMER.DURABLE.CREATE.some-stream.durable-name\",\"request\":\"{\\\"stream_name\\\":\\\"some-stream\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"durable-name\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"explicit\\\",\\\"replay_policy\\\":\\\"instant\\\"}}\",\"response\":\"{\\\"type\\\":\\\"io.nats.jetstream.api.v1.consumer_create_response\\\",\\\"stream_name\\\":\\\"some-stream\\\",\\\"name\\\":\\\"durable-name\\\",\\\"created\\\":\\\"2022-04-02T17:45:53.590941143Z\\\",\\\"config\\\":{\\\"durable_name\\\":\\\"durable-name\\\",\\\"deliver_policy\\\":\\\"all\\\",\\\"ack_policy\\\":\\\"explicit\\\",\\\"ack_wait\\\":30000000000,\\\"max_deliver\\\":-1,\\\"replay_policy\\\":\\\"instant\\\",\\\"max_waiting\\\":512,\\\"max_ack_pending\\\":20000},\\\"delivered\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"ack_floor\\\":{\\\"consumer_seq\\\":0,\\\"stream_seq\\\":0},\\\"num_ack_pending\\\":0,\\\"num_redelivered\\\":0,\\\"num_waiting\\\":0,\\\"num_pending\\\":0,\\\"cluster\\\":{\\\"leader\\\":\\\"ND3DUYGIEIJVMRMLCJFLQ6D7SCJRQCIYHGB4FZKAINBXEF3F6TXQSIIP\\\"}}\"}"]
[2011249] 2022/04/02 18:45:53.591362 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [PUB $JS.API.CONSUMER.INFO.some-stream.foo.bar _INBOX.140aFWeG3YKlI1WF1qGiRd.ouje81FH 0]
[2011249] 2022/04/02 18:45:53.591389 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- MSG_PAYLOAD: [""]
[2011249] 2022/04/02 18:45:55.671302 [DBG] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - Client Ping Timer
[2011249] 2022/04/02 18:45:55.671332 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - ->> [PING]
[2011249] 2022/04/02 18:45:55.671467 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <<- [PONG]
[2011249] 2022/04/02 18:46:23.691970 [DBG] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - Client connection closed: Client Closed
[2011249] 2022/04/02 18:46:23.692083 [TRC] 127.0.0.1:40926 - cid:9 - "v1.13.0:go" - <-> [DELSUB 1]
```
#### Expected result:
Expected to receive an error which indicates that the consumer name is invalid
#### Actual result:
Received deadline exceeded
## Investigation
I might be wrong on details; I'm not a maintainer of `nats-server` and have just started using the product.
The [API-handlers](https://github.com/nats-io/nats-server/blob/main/server/jetstream_api.go#L734-L763) are by the looks of it essentially subscriptions. For the API call `JSApiConsumerInfo` the [subject](https://github.com/nats-io/nats-server/blob/main/server/jetstream_api.go#L141) is **`$JS.API.CONSUMER.INFO.*.*`**
Given the invalid consumer name of `foo.bar`, which would expand into the subject name `$JS.API.CONSUMER.INFO.some-stream.foo.bar`, it will not match the expected subject and therefore not find its way to the appropriate handler `s.jsConsumerInfoRequest`.
Instead, it ends up in [apiDispatch](https://github.com/nats-io/nats-server/blob/main/server/jetstream_api.go#L652), where the match function returns an empty set of `SublistResult`s and the [request is dropped](https://github.com/nats-io/nats-server/blob/main/server/jetstream_api.go#L666), leaving the client hanging.
The found tokens are:
```
tokens = {[]string} len:6, cap:32
0 = {string} "$JS"
1 = {string} "API"
2 = {string} "CONSUMER"
3 = {string} "INFO"
4 = {string} "some-stream"
5 = {string} "foo"
```
Perhaps validation could be done on `subject` in the server's end of the client connection in [processMsgResult](https://github.com/nats-io/nats-server/blob/18bdabff35cf04ea1d947f01c100474f7f0d69c5/server/client.go#L4067) to make sure `subject` conforms to the required structure. At least for the API calls this should be feasible, given they are prefixed with well-known names.
Maybe there is somewhere better to do this, though; I'm not familiar with the code.
It was quite confusing when the NATS client suddenly started returning context deadline exceeded without any clear indication that it's the input that's wrong.
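As a minimal sketch of the kind of validation suggested above, a hypothetical `validConsumerName` helper (not part of the server) could reject names that would break the subject token structure before the request is ever dispatched:
```go
package main

import (
    "fmt"
    "strings"
)

// validConsumerName is a hypothetical helper: the consumer name becomes a
// single token in the subject $JS.API.CONSUMER.INFO.<stream>.<consumer>,
// so it must not contain token separators, spaces, or wildcards.
func validConsumerName(name string) bool {
    return name != "" && !strings.ContainsAny(name, ". *>")
}

func main() {
    for _, name := range []string{"durable-name", "foo.bar", ""} {
        fmt.Printf("%q valid: %v\n", name, validConsumerName(name))
    }
}
```
Rejecting such names up front would let the server (or client) return a proper invalid-name error instead of silently dropping the request.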
|
https://github.com/nats-io/nats-server/issues/2995
|
https://github.com/nats-io/nats-server/pull/2997
|
ee1341fa17093798c3bd2f6d7da6ef61a44f8fee
|
e8b118bae56cd16112ee0d915e1d8390a4ef5690
| 2022-04-02T18:11:07Z |
go
| 2022-04-04T18:27:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,980 |
["server/monitor_test.go", "server/server.go"]
|
Monitoring Server verify_and_map Exception?
|
Originally created as a discussion by @rsoberano-ld
From the nats-server [tls config documentation](https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls) about the verify_and_map option:
```
If true, require and verify client certificates and map certificate values for authentication purposes. Does not apply to monitoring either.
```
This seems to no longer work as expected after upgrading to version 2.6.6 (still present in 2.7.4); browser connections to the monitoring server on port 8222 seem to fail because the browser isn't presenting a client certificate. Example error:
```
This site can’t provide a secure connection
localhost didn’t accept your login certificate, or one may not have been provided.
Try contacting the system admin.
ERR_BAD_SSL_CLIENT_AUTH_CERT
```
TLS config for reference:
```
host: 0.0.0.0
port: 4222
syslog: False
https_port: 8222
tls {
cert_file: /etc/ssl/gnatsd/gnatsd-server.crt
key_file: /etc/ssl/gnatsd/gnatsd-server.key
ca_file: /etc/ssl/gnatsd/ca.crt
verify_and_map: true
}
```
I haven't seen any other reports about the client cert exception breaking after v2.6.6, though. Is it something specific to our config that's causing this behavior?
Alternatively, looking at the changes to 2.6.6 in server/server.go:
```
config.GetConfigForClient = s.getTLSConfig
config.ClientAuth = tls.NoClientCert
```
Since GetConfigForClient is refreshing the TLS config after a ClientHello, is it possibly overwriting the `config.ClientAuth = tls.NoClientCert` in the line below?
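For illustration, a minimal sketch (assuming a generic server-side `*tls.Config`, not the actual nats-server code) of what the documented monitoring exception implies: the HTTPS monitor would need a config with the client-certificate requirement dropped.
```go
package main

import (
    "crypto/tls"
    "fmt"
)

// monitoringTLSConfig sketches what "verify_and_map does not apply to
// monitoring" implies: clone the client-facing TLS config and drop the
// client-certificate requirement for the HTTPS monitor.
func monitoringTLSConfig(base *tls.Config) *tls.Config {
    cfg := base.Clone()
    cfg.ClientAuth = tls.NoClientCert
    // If a GetConfigForClient callback is set, the config it returns per
    // handshake would override ClientAuth above, so it must be cleared or
    // must itself return a config with NoClientCert.
    cfg.GetConfigForClient = nil
    return cfg
}

func main() {
    base := &tls.Config{ClientAuth: tls.RequireAndVerifyClientCert}
    mon := monitoringTLSConfig(base)
    fmt.Println(mon.ClientAuth == tls.NoClientCert) // true
}
```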
|
https://github.com/nats-io/nats-server/issues/2980
|
https://github.com/nats-io/nats-server/pull/2981
|
520aa322e4489deae6f0e23c3617fcafb3adc99b
|
f207f9072853e5dc82610ae7a43736c7c693115d
| 2022-03-30T23:33:29Z |
go
| 2022-03-31T01:39:15Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,955 |
["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/raft.go", "server/stream.go"]
|
upgrading a stream's replication factor is not working
|
```go
func TestJetStreamStreamReplicationUpgrade(t *testing.T) {
    c := createJetStreamClusterExplicit(t, "cluster", 3)
    defer c.shutdown()

    ncSys := natsConnect(t, c.randomServer().ClientURL(), nats.UserInfo("admin", "s3cr3t!"))
    defer ncSys.Close()
    nc := natsConnect(t, c.randomServer().ClientURL())
    defer nc.Close()

    js, err := nc.JetStream()
    require_NoError(t, err)

    cfg := &nats.StreamConfig{Name: "foo", Subjects: []string{"foo"}, Replicas: 1, Retention: nats.LimitsPolicy, MaxMsgs: 10}
    si, err := js.AddStream(cfg)
    require_NoError(t, err)
    require_True(t, si.Config.Replicas == 1)

    msg := [512]byte{}
    for i := 0; i < 10; i++ {
        js.Publish("foo", msg[:])
    }

    cfg.Replicas = 3
    si, err = js.UpdateStream(cfg)
    require_NoError(t, err)
    require_True(t, si.Config.Replicas == 3)
    require_True(t, si.State.Msgs == 10)

    time.Sleep(time.Minute)
    // INSTEAD: one server contains: s.Jsz().Store == blkfi.Size() == 5450 (this is what all servers should look like)
    // all other servers contain: s.Jsz().Store == blkfi.Size() == 0
    // indicating that no catchup has happened
    refInfoSize := uint64(0)
    for _, s := range c.servers {
        jsi, err := s.Jsz(nil)
        require_NoError(t, err)
        require_True(t, jsi.Streams == 1)
        refInfoSize = jsi.Store
        fmt.Printf("storesize %d jsinfo %+v\n", refInfoSize, jsi)
        // s.Shutdown() tried shutdown to rule out some weird flush issue, but then the 1.blk file is gone
        // (probably because it's empty)
    }
    time.Sleep(time.Minute)
    for _, s := range c.servers {
        blkfi, err := os.Stat(filepath.Join(s.StoreDir(), "$G/streams/foo/msgs/1.blk"))
        require_NoError(t, err)
        fmt.Printf("%d\n", blkfi.Size())
        require_True(t, refInfoSize == uint64(blkfi.Size()))
    }
}
```
|
https://github.com/nats-io/nats-server/issues/2955
|
https://github.com/nats-io/nats-server/pull/2958
|
0b8aa472594bb728e91bb2f4bdfc5804cfaf40b7
|
004e5ce2c6eba4710ecdd6db688b2a0d601ee309
| 2022-03-25T18:29:30Z |
go
| 2022-03-28T19:18:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,941 |
["server/consumer.go", "server/jetstream_test.go"]
|
JS Consumer's sampling events are not sent when sampling option is enabled using update configuration
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
- nats-server: 2.7.4
- nats.go: 1.13.0
- natscli: 0.0.30
#### OS/Container environment:
- docker
- k8s
- ngs
#### Steps or code to reproduce the issue:
1. Create a JS consumer without specifying the sampling option
2. Update the existing JS consumer with a configuration that contains the sampling option (e.g. `--sample=100`)
#### Expected result:
- Sampling events are sent on the subject: `$JS.EVENT.METRIC.>`
#### Actual result:
- Events are not sent on the subject: `$JS.EVENT.METRIC.>`
- Nats `consumer info` command reports that sampling is active
- See failing test [here](https://github.com/nats-io/nats-server/pull/2942)
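For reference, a minimal sketch in Go of the same reproduction via nats.go (the stream name `STREAM` and durable `dur` are placeholders; `SampleFrequency` is the consumer config field behind `--sample`):
```go
package main

import (
    "fmt"
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()
    js, err := nc.JetStream()
    if err != nil {
        log.Fatal(err)
    }

    // Watch for sampling advisories.
    nc.Subscribe("$JS.EVENT.METRIC.>", func(m *nats.Msg) {
        fmt.Println("metric:", m.Subject)
    })

    // 1. Create the consumer without a sampling option.
    cfg := &nats.ConsumerConfig{Durable: "dur", AckPolicy: nats.AckExplicitPolicy}
    if _, err := js.AddConsumer("STREAM", cfg); err != nil {
        log.Fatal(err)
    }

    // 2. Enable sampling via an update (what `--sample=100` does).
    cfg.SampleFrequency = "100"
    if _, err := js.UpdateConsumer("STREAM", cfg); err != nil {
        log.Fatal(err)
    }

    // Acks on this consumer should now produce metric advisories above,
    // but per this issue they do not.
    select {}
}
```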
|
https://github.com/nats-io/nats-server/issues/2941
|
https://github.com/nats-io/nats-server/pull/2966
|
004e5ce2c6eba4710ecdd6db688b2a0d601ee309
|
929f849b9350b1e3d295a08c69c310ffd2d5a7c2
| 2022-03-22T00:05:25Z |
go
| 2022-03-28T20:13:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,936 |
["server/consumer.go", "server/filestore.go", "server/filestore_test.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/norace_test.go"]
|
JetStream $SYS folder keeps growing after network breakdown
|
#### Versions of `nats-server` and affected client libraries used:
- nats-server: 2.7.4
- nats helm chart: 0.14.2
#### OS/Container environment:
- K8s
#### Steps or code to reproduce the issue:
- r5 2.7.4 nats cluster with jetstream enabled
- network breakdown (the nats servers can't connect to each other), which recovers soon after
- Every NATS server starts to write more than 45 GB of data to the $SYS folder within 17 hours, causing the `JetStream out of resources, will be DISABLED` error.
```
[709] 2022/03/18 09:21:58.795751 [INF] Starting nats-server
[709] 2022/03/18 09:21:58.795804 [INF] Version: 2.7.4
[709] 2022/03/18 09:21:58.795839 [INF] Git: [a86b84a]
[709] 2022/03/18 09:21:58.799055 [INF] ---------------- JETSTREAM ----------------
[709] 2022/03/18 09:21:58.799062 [INF] Max Memory: 1.00 GB
[709] 2022/03/18 09:21:58.799065 [INF] Max Storage: 50.00 GB
[709] 2022/03/18 09:21:58.799066 [INF] Store Directory: "/data/jetstream"
[709] 2022/03/18 09:21:58.799068 [INF] Domain: central-domain2
[709] 2022/03/18 09:21:58.799069 [INF] -------------------------------------------
[709] 2022/03/18 09:22:28.872163 [ERR] Error trying to connect to route (attempt 1): lookup for host "nats-central-2.nats.ns.svc.cluster.local": …
[709] 2022/03/18 09:22:28.872223 [ERR] Error trying to connect to route (attempt 1): lookup for host "nats-central-3.nats.ns.svc.cluster.local": …
[709] 2022/03/18 09:22:28.872397 [ERR] Error trying to connect to route (attempt 1): lookup for host "nats-central-0.nats.ns.svc.cluster.local ": …
[709] 2022/03/18 09:22:28.872511 [ERR] Error trying to connect to route (attempt 1): lookup for host "nats-central-1.nats.ns.svc.cluster.local ": …
[709] 2022/03/18 09:22:29.873274 [INF] …:6222 - rid:297 - Route connection created
[709] 2022/03/18 09:22:29.873337 [INF] …:6222 - rid:298 - Route connection created
[709] 2022/03/18 09:22:29.873353 [INF] …:6222 - rid:299 - Route connection created
[709] 2022/03/18 09:22:29.873487 [INF] …:6222 - rid:300 - Route connection created
[709] 2022/03/18 09:22:29.879409 [INF] …:6222 - rid:299 - Router connection closed: Duplicate Route
[709] 2022/03/18 09:22:29.880010 [INF] …:6222 - rid:298 - Router connection closed: Duplicate Route
[709] 2022/03/18 09:22:29.880215 [INF] …:6222 - rid:300 - Router connection closed: Duplicate Route
[709] 2022/03/18 09:22:31.260515 [INF] …:58657 - rid:301 - Route connection created
[709] 2022/03/18 09:22:31.265904 [INF] …:58657 - rid:301 - Router connection closed: Duplicate Route
…
[709] 2022/03/19 02:36:23.295996 [ERR] RAFT [6LiLb2LI - C-R5F-n7LP9zYB] Critical write error: write /data/jetstream/ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB/msgs/12589.blk: no space left on device
[709] 2022/03/19 02:36:23.296016 [WRN] RAFT [6LiLb2LI - C-R5F-n7LP9zYB] Error storing entry to WAL: write /data/jetstream/ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB/msgs/12589.blk: no space left on device
[709] 2022/03/19 02:36:23.296022 [ERR] JetStream out of resources, will be DISABLED
[709] 2022/03/19 02:36:23.296130 [WRN] JetStream initiating meta leader transfer
[709] 2022/03/19 02:36:23.296307 [INF] JetStream cluster no metadata leader
[709] 2022/03/19 02:36:23.545130 [INF] JetStream cluster new metadata leader: nats-central-3/nats-central
[709] 2022/03/19 02:36:25.298158 [INF] Initiating JetStream Shutdown...
[709] 2022/03/19 02:36:25.318866 [INF] JetStream Shutdown
```
Logging into the container and executing the `du -h` command shows:
```
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-txJKtNAz/snapshots
12.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-txJKtNAz/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-txJKtNAz/obs
44.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-txJKtNAz
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB/snapshots
46.9G ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB/obs
46.9G ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-n7LP9zYB
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-ROn9Qm5Z/snapshots
12.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-ROn9Qm5Z/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-ROn9Qm5Z/obs
44.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-ROn9Qm5Z
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-IEvk6yQ8/snapshots
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-IEvk6yQ8/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-IEvk6yQ8/obs
36.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/S-R5F-IEvk6yQ8
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-F3DNZaww/snapshots
576.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-F3DNZaww/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-F3DNZaww/obs
608.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-F3DNZaww
8.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-aNj4b0WT/snapshots
12.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-aNj4b0WT/msgs
4.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-aNj4b0WT/obs
44.0K ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_/C-R5F-aNj4b0WT
...
46.9G ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/_js_
46.9G ./ADXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
|
https://github.com/nats-io/nats-server/issues/2936
|
https://github.com/nats-io/nats-server/pull/2973
|
953dad44053beca2bb5777bf5442445883468f38
|
bfc1462fb399fcc55da6ec065cc3bed3311ec460
| 2022-03-21T14:53:02Z |
go
| 2022-03-30T01:29:31Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,926 |
["server/config_check_test.go", "server/opts.go", "server/opts_test.go"]
|
Client occasionally incorrectly assigned to global account
|
## Defect
Occasionally a client which is supposed to be in a specific account is added to the global `$G` account instead.
This is visible in the trace log because when the client is correctly added to the account (`lagoonRemote` in this example) you can see the subscription propagated to connected nodes with a trace like this after the client connects:
```
->> [RS+ lagoonRemote lagoon.sshportal.api sshportalapi 1]
```
However sometimes you see a trace like this:
```
->> [RS+ $G lagoon.sshportal.api sshportalapi 1]
```
Because this seems to be some kind of race condition, I haven't nailed down the exact scenario when it happens. However I've got an environment where it is reliably reproducible. See the link below to a repository containing a `docker-compose` environment which you can run locally to reproduce the problem. That repository contains the full `nats.conf`.
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Server:
```
mre-nats-0-1 | [1] 2022/03/16 15:58:07.153560 [INF] Starting nats-server
mre-nats-0-1 | [1] 2022/03/16 15:58:07.153594 [INF] Version: 2.7.4
mre-nats-0-1 | [1] 2022/03/16 15:58:07.153597 [INF] Git: [a86b84a]
mre-nats-0-1 | [1] 2022/03/16 15:58:07.153601 [DBG] Go build: go1.17.8
```
Client:
```
~ # nats --version
0.0.28
```
#### OS/Container environment:
Docker container `nats:2.7.4-alpine3.15` (server).
Docker container `natsio/nats-box:0.8.1` (client)
#### Steps or code to reproduce the issue:
1. Check out this repository: https://github.com/smlx/nats-mre
2. Run `docker-compose up -d; ./find.bug.sh`
#### Expected result:
Client is always added to the `lagoonRemote` account.
#### Actual result:
Client is occasionally added to the global `$G` account.
|
https://github.com/nats-io/nats-server/issues/2926
|
https://github.com/nats-io/nats-server/pull/2943
|
60773be03fba6dd64f3e8f628590f0ffe698529d
|
9d6525c8a3a5470da3d98570936caca79319e17a
| 2022-03-16T16:17:30Z |
go
| 2022-03-22T20:38:42Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,920 |
["server/jetstream_test.go", "server/stream.go"]
|
A node that has a remote source removed from a local stream still receives messages from the remote stream until full stop & restart
|
## Defect
A node that has a remote source removed from a local stream still receives messages from the remote stream until full stop & restart.
We need to be able to dynamically remove sourced streams, so that we no longer receive unwanted messages, without forcing a server outage.
Important to note: a reload does not stop the messages; only a full stop & restart does.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
[nats-sourcing-issue.zip](https://github.com/nats-io/nats-server/files/8233184/nats-sourcing-issue.zip)
The above zip has all configs and -DV output for all servers.
#### Versions of `nats-server` and affected client libraries used:
using nats-server v2.7.4
#### OS/Container environment:
Ubuntu 20.04
#### Steps or code to reproduce the issue:
Launch 3 servers based on configs from attached zip:
`nats-server -c hub.json`
`nats-server -c aleaf.json`
`nats-server -c bleaf.json`
Create queue stream on a-leaf:
`nats --server nats://localhost:4111 str add --config ./queue.json`
Create test stream on b-leaf:
`nats --server nats://localhost:2111 str add --config ./test.json`
Add entries to test stream:
`nats --server nats://localhost:2111 pub test hello --count 10`
See entries in test stream:
` nats --server nats://localhost:2111 str report`
Add test as source to queue:
`nats --server nats://localhost:4111 pub '$JS.a-leaf.API.STREAM.UPDATE.queue' '{ "name": "queue", "subjects": [ "queue" ], "retention": "limits", "max_consumers": -1, "max_msgs": -1, "max_bytes": -1, "max_age": 0, "max_msgs_per_subject": -1, "max_msg_size": -1, "discard": "old", "storage": "file", "num_replicas": 1, "duplicate_window": 120000000000, "sealed": false, "deny_delete": false, "deny_purge": false, "allow_rollup_hdrs": false, "sources": [ { "name": "test", "external": { "api": "$JS.b-leaf.API", "deliver": "" } } ] }'`
Verify entries are in queue:
`nats --server nats://localhost:4111 str report`
Add more entries to test:
`nats --server nats://localhost:2111 pub test hello --count 10`
Verify entries are in queue:
`nats --server nats://localhost:4111 str report`
Remove source:
`nats --server nats://hdb:admin@localhost:4111 pub '$JS.a-leaf.API.STREAM.UPDATE.queue' '{ "name": "queue", "subjects": [ "queue" ], "retention": "limits", "max_consumers": -1, "max_msgs": -1, "max_bytes": -1, "max_age": 0, "max_msgs_per_subject": -1, "max_msg_size": -1, "discard": "old", "storage": "file", "num_replicas": 1, "duplicate_window": 120000000000, "sealed": false, "deny_delete": false, "deny_purge": false, "allow_rollup_hdrs": false }'`
Add more entries to test
`nats --server nats://localhost:2111 pub test hello --count 10`
See that entries incorrectly still appear in queue:
`nats --server nats://localhost:4111 str report`
Stop / start a-leaf server
Add more entries to test:
`nats --server nats://localhost:2111 pub test hello --count 10`
Now new entries are not syncing:
`nats --server nats://localhost:4111 str report`
#### Expected result:
We would expect the stream sourcing from a remote stream to no longer receive messages after a stream update that removes the remote stream without full stop & restart.
#### Actual result:
We continue to receive messages until the server is fully stopped & restarted.
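For reference, the same source-removal update expressed via nats.go, as a sketch (assumes the `a-leaf` domain and the stream name from the steps above):
```go
package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect("nats://localhost:4111")
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    // JetStream context routed to the a-leaf domain, mirroring the
    // $JS.a-leaf.API.STREAM.UPDATE.queue request above.
    js, err := nc.JetStream(nats.Domain("a-leaf"))
    if err != nil {
        log.Fatal(err)
    }

    // Fetch the current config, drop the sources, and resubmit it.
    si, err := js.StreamInfo("queue")
    if err != nil {
        log.Fatal(err)
    }
    cfg := si.Config
    cfg.Sources = nil
    if _, err := js.UpdateStream(&cfg); err != nil {
        log.Fatal(err)
    }
}
```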
|
https://github.com/nats-io/nats-server/issues/2920
|
https://github.com/nats-io/nats-server/pull/2938
|
33cfc748bfaa43f80ef0d3bf55df6b142bc4db44
|
f11f7a61e838217d84a3dea88e6648846d2a23c8
| 2022-03-11T15:31:31Z |
go
| 2022-03-21T18:29:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,913 |
["server/consumer.go", "server/filestore.go", "server/filestore_test.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
Unused JetStream consumers go out of sync
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
Nats: 2.7.4 nightly (24067d7374e278385f8784040f3af454756b46b9)
#### OS/Container environment:
Linux / k8s
#### Steps or code to reproduce the issue:
* Have a stream with pull consumers on different subjects
* Publish a few hundred thousand messages to only some of the subjects, leaving some subjects completely unused
#### Expected result:
* All works. Nothing bad happens. Business as usual.
#### Actual result:
The unused consumers that never receive work go out of sync.
The log contains a lot of warnings like `JetStream cluster consumer '$G > [stream] > [consumer]' has NO quorum, stalled.` and errors like `RAFT [EEUOgTxU - C-R3F-IsTtTeSi] Error sending snapshot to follower [VpUo9efM]: raft: no snapshot available` and `Received an error looking up message for consumer: store is closed`.
This started to happen after updating from 2.7.2 to the nightly, which I did because I ran into issue #2885. That issue also seems related, because it apparently affects the same consumers.
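A minimal sketch of the reproducing setup via nats.go (stream, subject, and consumer names are placeholders; assumes a 3-node cluster):
```go
package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()
    js, err := nc.JetStream()
    if err != nil {
        log.Fatal(err)
    }

    // Stream covering several subjects, replicated across the cluster.
    if _, err := js.AddStream(&nats.StreamConfig{
        Name:     "WORK",
        Subjects: []string{"work.>"},
        Replicas: 3,
    }); err != nil {
        log.Fatal(err)
    }

    // One filtered pull consumer per subject; "work.idle" never gets traffic.
    for _, subj := range []string{"work.busy", "work.idle"} {
        if _, err := js.AddConsumer("WORK", &nats.ConsumerConfig{
            Durable:       "C-" + subj[len("work."):],
            FilterSubject: subj,
            AckPolicy:     nats.AckExplicitPolicy,
        }); err != nil {
            log.Fatal(err)
        }
    }

    // Publish a few hundred thousand messages to only one of the subjects.
    for i := 0; i < 300000; i++ {
        if _, err := js.Publish("work.busy", []byte("payload")); err != nil {
            log.Fatal(err)
        }
    }
}
```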
|
https://github.com/nats-io/nats-server/issues/2913
|
https://github.com/nats-io/nats-server/pull/2914
|
9a2da9ed8c04992baa66a558069f09d0b717a533
|
0cb0f6d380bbcb7c335365f379d53af7c747ed9d
| 2022-03-09T07:41:08Z |
go
| 2022-03-09T18:55:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,912 |
["locksordering.txt", "server/consumer.go", "server/jetstream.go", "server/jetstream_cluster_test.go", "server/stream.go"]
|
JetStream can't recover after a network breakdown
|
I’m deploying a Jetstream cluster with 3 nodes in k8s.
It turns out that JetStream doesn't work after a network breakdown.
The same issue occurred yesterday, and it can only resume after manually restarting the servers.
|
https://github.com/nats-io/nats-server/issues/2912
|
https://github.com/nats-io/nats-server/pull/2951
|
3f6d3c4936b6e1a5279daba438a48c92c4578476
|
27cfd22f5fe3aa14ed8b7eba8c821f63977888c8
| 2022-03-09T07:26:57Z |
go
| 2022-03-25T19:21:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,895 |
["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
JetStream: In clustering mode, consumer count in stream info/list may be wrong
|
In clustering mode, the number of consumers in stream info may be wrong in the presence of non-durable consumers. Ephemerals are handled by specific nodes, so the StreamInfo response would contain only the count of consumers that the stream leader itself is handling.
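A minimal sketch (via nats.go, with placeholder names) of how the mismatch could be observed, comparing the count in stream info with the actual consumer list:
```go
package main

import (
    "fmt"
    "log"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()
    js, err := nc.JetStream()
    if err != nil {
        log.Fatal(err)
    }

    // A few ephemeral (non-durable) consumers on stream "S"; in a cluster
    // these may be owned by nodes other than the stream leader.
    for i := 0; i < 3; i++ {
        if _, err := js.AddConsumer("S", &nats.ConsumerConfig{
            DeliverSubject: nats.NewInbox(),
        }); err != nil {
            log.Fatal(err)
        }
    }

    // Compare the count reported by stream info with the actual list.
    si, err := js.StreamInfo("S")
    if err != nil {
        log.Fatal(err)
    }
    listed := 0
    for range js.ConsumerNames("S") {
        listed++
    }
    fmt.Printf("stream info reports %d consumers, listing finds %d\n",
        si.State.Consumers, listed)
}
```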
|
https://github.com/nats-io/nats-server/issues/2895
|
https://github.com/nats-io/nats-server/pull/2896
|
1712ee3707e9b252a197946fc20f3690735e9407
|
636d1eb0cef3e88653c2e99826449b051a051206
| 2022-03-03T16:44:17Z |
go
| 2022-03-03T17:31:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,886 |
["server/jetstream_cluster_test.go", "server/stream.go"]
|
Some JetStream Asset CRUD events aren't being published
|
## Defect
A NATS user reported that:
_Hello, quick question about Jetstream Advisories. Looking at the [Monitoring Jetstream](https://docs.nats.io/running-a-nats-service/nats_admin/monitoring/monitoring_jetstream) documentation, there is an avisory which you title “Stream CRUD operations”. That suggests all operations on streams (create, update, delete) would fire events. However, we don’t seem to get events for other than CREATED events? The subject you have on that table suggests that as well as it’s $JS.EVENT.ADVISORY.STREAM.CREATED.<STREAM>, rather than $JS.EVENT.ADVISORY.STREAM.<STREAM_OPERATION>.<STREAM> . Is this on purpose?_
I verified this locally. After discussion on NATS Slack, @kozlovic discovered that the stream closes its “internalLoop”, responsible for sending messages, before sending the advisory.
It's possible other events are affected as well.
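A generic Go sketch of the failure pattern described above (not the actual server code): if the internal send loop is stopped before the advisory is queued, the advisory is never delivered.
```go
package main

import "fmt"

func main() {
    msgs := make(chan string, 8)
    quit := make(chan struct{})
    done := make(chan struct{})

    // Internal send loop, analogous in spirit to the stream's "internalLoop".
    go func() {
        defer close(done)
        for {
            select {
            case m := <-msgs:
                fmt.Println("sent:", m)
            case <-quit:
                return
            }
        }
    }()

    // Bug pattern: the loop is stopped first...
    close(quit)
    <-done

    // ...and only then is the advisory queued; nothing ever drains it.
    msgs <- "stream deleted advisory"
    fmt.Println("advisory sits in the buffer, never delivered")
}
```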
|
https://github.com/nats-io/nats-server/issues/2886
|
https://github.com/nats-io/nats-server/pull/2887
|
d52f607881f756e34280610a70f0e2f80a76b5ae
|
54da7586c5d620edf3838b9e35857ebbd3fd00b4
| 2022-02-25T16:19:52Z |
go
| 2022-03-06T17:54:58Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,885 |
["server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
JetStream $SYS folder keeps growing
|
When we create a consumer with a filter subject, and that subject is never published to, the consumer sequence remains at 0 and the raft folder size for that consumer grows indefinitely.
|
https://github.com/nats-io/nats-server/issues/2885
|
https://github.com/nats-io/nats-server/pull/2899
|
30009fdd78225e20985f274043245d1a850e3fab
|
c94ee6570ed2cddb581e54bab1fd70f49a86b4f5
| 2022-02-25T04:02:26Z |
go
| 2022-03-04T18:05:03Z |