Dataset schema (one record per issue):
- status: string (1 class)
- repo_name: string (9-24 chars)
- repo_url: string (28-43 chars)
- issue_id: int64 (1-104k)
- updated_files: string (8-1.76k chars)
- title: string (4-369 chars)
- body: string (0-254k chars, nullable)
- issue_url: string (37-56 chars)
- pull_url: string (37-54 chars)
- before_fix_sha: string (40 chars)
- after_fix_sha: string (40 chars)
- report_datetime: timestamp[ns, tz=UTC]
- language: string (5 classes)
- commit_datetime: timestamp[us, tz=UTC]

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2882
updated_files: ["server/jetstream_test.go", "server/stream.go"]
title: ReservedStore and ReservedMemory report 18EB
## Description
Here's an excerpt from a `/varz` endpoint. It shows that 18 exabytes have been reserved for JetStream storage and memory.
```
"jetstream": {
"config": {
"max_memory": 25255962624, // 25GB
"max_storage": 15415142400, // 15GB
},
"stats": {
"memory": 0,
"storage": 0,
"reserved_memory": 18446744073709552000, // 18EB (1EB = 1000PB)
"reserved_storage": 18446744073709552000, // 18EB (1EB = 1000PB)
"api": {
"total": 14,
"errors": 0
}
},
```
I have a suspicion that 18EB is coming from these lines: https://github.com/nats-io/nats-server/blob/main/server/jetstream.go#L1740-L1741
```go
type JetStreamStats struct {
ReservedMemory uint64 `json:"reserved_memory"`
ReservedStore uint64 `json:"reserved_storage"`
}
type jetStream struct {
memReserved int64
storeReserved int64
}
// Convert reserved int64 to uint64
stats.ReservedMemory = (uint64)(js.memReserved)
stats.ReservedStore = (uint64)(js.storeReserved)
```
If `js.memReserved` or `js.storeReserved` is negative, then we could be assigning a number close to the number seen in the `/varz` info.
```go
var a int64 = -1
fmt.Println(uint64(a)) // prints 18446744073709551615, which is 18EB
```
@bwerthmann originally discovered this while editing existing streams to set max bytes (Inf => 1024), then deleting the streams.
## Steps to Reproduce
This issue can be reliably reproduced with the following commands.
```
nats-server-2.7.2 -m 8222 -js
```
```
nats stream create str1 --subjects=subj1
nats stream edit str1 --max-bytes=1234
nats stream rm str1
```
Then see the `reserved_storage` value in http://localhost:8222/varz.
## Fixes
It seems like the main issue is that we don't update the reserved bytes during a stream update. Below is a history of `js.storeReserved` as different operations are carried out.
1. Create stream with no MaxBytes: `js.storeReserved` → 0
2. Update stream with MaxBytes=1234: `js.storeReserved` → 0
3. Delete stream: `js.storeReserved` → -1234
Here are two ways we can prevent this issue:
* Not allow updating MaxBytes: https://github.com/nats-io/nats-server/commit/4c80dd3a9d37a198827854cd3dd75adb89a8c8cc
* Update reserved bytes: https://github.com/nats-io/nats-server/commit/fe81bb3b4e69df962c257c2af34c1e8fcbc29e91
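For illustration, here is a minimal, self-contained sketch of the second approach; the helper name and surrounding types are assumptions, not the actual patch:
```go
package main

import (
	"fmt"
	"sync"
)

// Minimal model of the reservation accounting; storeReserved mirrors the
// field in the excerpt above, everything else is illustrative.
type jetStream struct {
	mu            sync.Mutex
	storeReserved int64
}

// updateReserved is a hypothetical helper: on a stream update it swaps the
// old MaxBytes reservation for the new one, so a later delete releases
// exactly what was reserved.
func (js *jetStream) updateReserved(oldMax, newMax int64) {
	js.mu.Lock()
	defer js.mu.Unlock()
	js.storeReserved += newMax - oldMax
}

func main() {
	js := &jetStream{}
	js.updateReserved(0, 1234)    // create with no MaxBytes, then edit to 1234
	js.updateReserved(1234, 0)    // delete releases exactly 1234
	fmt.Println(js.storeReserved) // 0, not -1234, so the uint64 conversion stays sane
}
```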
issue_url: https://github.com/nats-io/nats-server/issues/2882 | pull_url: https://github.com/nats-io/nats-server/pull/2907
before_fix_sha: 59753ec0daa8a6a058b1c64217384c08238fb38a | after_fix_sha: acfd456758dc777005650627eadb32cf7f16cd5b
report_datetime: 2022-02-24T01:25:46Z | language: go | commit_datetime: 2022-03-16T22:19:35Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2873
updated_files: ["server/consumer.go", "server/jetstream_cluster_test.go"]
title: Interest policy: nats server won't delete acked messages if consumers are not keeping up
## Defect
When using the Interest retention policy, the nats server won't delete acked messages if consumers are not keeping up.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output - unfortunately I have no access atm, please let me know if you need it
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.7.2
python nats client 2.0.0
#### OS/Container environment:
docker container nats 2.7.2 alpine
#### Steps or code to reproduce the issue:
1. Create a stream with the following config
```
Configuration:
Subjects: messages.*
Acknowledgements: true
Retention: File - Interest
Replicas: 1
Discard Policy: New
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
```
2. create two polling consumers, one for messages.msg1 and another for messages.msg2
```
Configuration:
Durable Name: MSG_XXX_CONSUMER
Pull Mode: true
Filter Subject: messages.message1
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 20,000
Max Waiting Pulls: 512
```
3. consumers are slower than inbound producer but ack messages normally within configured timeframe
4. messages get stacked up inside nats server filesystem, waiting for consumer to pick them up
5. stop producer
#### Expected result:
Messages should eventually all be consumed, acked, and removed from the nats file system.
#### Actual result:
Messages are all consumed and acked, but stay on the nats file system.
When we delete the durable consumers from the server and re-register them without producing any messages on the stream, the messages eventually disappear from the nats server.

issue_url: https://github.com/nats-io/nats-server/issues/2873 | pull_url: https://github.com/nats-io/nats-server/pull/2875
before_fix_sha: 19b16df21b26310486ab783f52819ddd02aada22 | after_fix_sha: 255a7c37928a7f1e645dcf5b18f52d337fcb88f2
report_datetime: 2022-02-16T12:29:28Z | language: go | commit_datetime: 2022-02-17T19:31:24Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2810
updated_files: ["server/auth.go", "server/auth_test.go", "server/opts.go", "server/parser.go"]
title: server configuration using accounts and nkeys for authentication emits plaintext password warning
When a configuration like the following is used, the server emits a plaintext-password warning:
```
system_account: SYS
accounts: {
SYS: {
users: [
{ nkey: UBDLZ6YYFQVINKWTYJFF5BGJ6I3JW2WO6WHZ4EVBAEQG3G7W3STGLIMZ }
]
}
}
```
> [92763] 2022/01/21 10:35:57.803364 [INF] Starting nats-server
[92763] 2022/01/21 10:35:57.803526 [INF] Version: 2.6.6-beta
[92763] 2022/01/21 10:35:57.803529 [INF] Git: [not set]
[92763] 2022/01/21 10:35:57.803539 [INF] Name: NBGJ36EW3TMD4QQ2TXQQQUWIL6TY2XD2RZU2ZH6XZWUVY4DY6NHPJRNR
[92763] 2022/01/21 10:35:57.803541 [INF] ID: NBGJ36EW3TMD4QQ2TXQQQUWIL6TY2XD2RZU2ZH6XZWUVY4DY6NHPJRNR
__[92763] 2022/01/21 10:35:57.803546 [WRN] Plaintext passwords detected, use nkeys or bcrypt__
When not using accounts no such message is emitted:
```
authorization: {
users: [
{ nkey: UDXU4RCSJNZOIQHZNWXHXORDPRTGNJAHAHFRGZNEEJCPQTT2M7NLCNF4 }
]
}
```
> [92820] 2022/01/21 10:40:11.444052 [INF] Starting nats-server
[92820] 2022/01/21 10:40:11.444150 [INF] Version: 2.6.6-beta
[92820] 2022/01/21 10:40:11.444153 [INF] Git: [not set]
[92820] 2022/01/21 10:40:11.444156 [INF] Name: NBRYFWGB6NPYV5R3MYPJSUPQ5B5O6355PR4ENGANQJEDFLS7AFMK3WKP
[92820] 2022/01/21 10:40:11.444158 [INF] ID: NBRYFWGB6NPYV5R3MYPJSUPQ5B5O6355PR4ENGANQJEDFLS7AFMK3WKP
[92820] 2022/01/21 10:40:11.444162 [INF] Using configuration file: /tmp/ns.conf
[92820] 2022/01/21 10:40:11.444804 [INF] Listening for client connections on 0.0.0.0:4222
[92820] 2022/01/21 10:40:11.445002 [INF] Server is ready
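A rough model of the check the fix needs, assuming illustrative types (the real logic lives in `server/auth.go`/`server/opts.go`):
```go
package main

import (
	"fmt"
	"strings"
)

// Minimal model of a configured user; the real server type has more fields.
type User struct {
	Nkey     string
	Password string
}

// hasPlaintext reports whether any user carries a non-bcrypt password.
// Nkey-only users, as in the SYS account above, should not trigger the warning.
func hasPlaintext(users []User) bool {
	for _, u := range users {
		if u.Password == "" {
			continue // nkey-only user, nothing to warn about
		}
		if strings.HasPrefix(u.Password, "$2a$") || strings.HasPrefix(u.Password, "$2b$") {
			continue // bcrypt hash
		}
		return true
	}
	return false
}

func main() {
	fmt.Println(hasPlaintext([]User{{Nkey: "UBDLZ6YYFQVINKWTYJFF5BGJ6I3JW2WO6WHZ4EVBAEQG3G7W3STGLIMZ"}})) // false
	fmt.Println(hasPlaintext([]User{{Password: "s3cret"}}))                                               // true
}
```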
issue_url: https://github.com/nats-io/nats-server/issues/2810 | pull_url: https://github.com/nats-io/nats-server/pull/2811
before_fix_sha: 7aba8a8e9ee7f44302c1965d61e11e2491c5a43c | after_fix_sha: cfdca3df7649a4a634f80b4c490ce3200a9c2ab8
report_datetime: 2022-01-21T16:42:09Z | language: go | commit_datetime: 2022-01-21T19:42:45Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2801
updated_files: ["server/ipqueue.go", "server/jetstream_test.go", "server/leafnode_test.go", "server/norace_test.go", "server/stream.go"]
title: Stream editing with external source causes message multiplying in target stream
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
```
[118] 2022/01/19 14:35:06.298597 [INF] Starting nats-server
[118] 2022/01/19 14:35:06.298643 [INF] Version: 2.7.0
[118] 2022/01/19 14:35:06.298647 [INF] Git: [not set]
[118] 2022/01/19 14:35:06.298654 [DBG] Go build: go1.17.6
[118] 2022/01/19 14:35:06.298685 [INF] Name: NDGUT5AJFRTTN5SZAO765QM3XUPFISJQV7QX4WHYVBF7IZPBIFF7PDKE
[118] 2022/01/19 14:35:06.298694 [INF] ID: NDGUT5AJFRTTN5SZAO765QM3XUPFISJQV7QX4WHYVBF7IZPBIFF7PDKE
[118] 2022/01/19 14:35:06.298789 [DBG] Created system account: "$SYS"
```
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.7.0
synadia/nats-box:0.6.0
#### OS/Container environment:
OS: Ubuntu 20.04.2
Docker image: nats:2.7.0-alpine3.15
#### Steps or code to reproduce the issue:
All steps are very similar to the official guide:
https://docs.nats.io/running-a-nats-service/configuration/leafnodes/jetstream_leafnodes#cross-account-and-domain-import
The only difference is that we have to create the stream with a source, not a mirror.
1. Grant permissions like in official cross account guide
2. Create stream with external source, not mirror
3. Update any stream parameter via "nats stream edit" ("Duplicate Window" or subjects, for example)
4. Send any message to source stream with appropriate subject
#### Expected result:
Get 1 message in target stream.
#### Actual result:
Get 2 copies of the original message in the target stream.
Each update of the stream increases the number of copies by one.
#### Possible reasons:
Issues #2202 and #2209 introduced `iname` to support multiple sources with the same name.
More precisely, the old config's sources are collected by `Name`:
https://github.com/nats-io/nats-server/blob/297b44394f92e990da450178cffaa7adecc88837/server/stream.go#L1024
while the new config's sources are compared by `iname`:
https://github.com/nats-io/nats-server/blob/297b44394f92e990da450178cffaa7adecc88837/server/stream.go#L1028
#### Possible solutions:
Change s.Name to s.iname in:
https://github.com/nats-io/nats-server/blob/297b44394f92e990da450178cffaa7adecc88837/server/stream.go#L1024
### Workarounds
1. Restart the server after each stream configuration change
2. Provide a Nats-Msg-Id for each message, but the message copies take disk space and slow down the server
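A schematic Go illustration of the Name/iname mismatch described above (the types and map are simplified stand-ins for the server's internals):
```go
package main

import "fmt"

// Simplified source config; iname is the internal, domain-qualified name.
type streamSource struct {
	Name  string
	iname string
}

func main() {
	// The old config's sources are collected into a map keyed by Name...
	current := map[string]*streamSource{}
	old := &streamSource{Name: "orders", iname: "orders:ext"}
	current[old.Name] = old

	// ...but the new config's sources are looked up by iname, so the existing
	// source is never matched and a duplicate source consumer is started on
	// every stream update.
	updated := &streamSource{Name: "orders", iname: "orders:ext"}
	if _, ok := current[updated.iname]; !ok {
		fmt.Println("source treated as new; messages get duplicated")
	}
}
```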
issue_url: https://github.com/nats-io/nats-server/issues/2801 | pull_url: https://github.com/nats-io/nats-server/pull/3061
before_fix_sha: 254c9708764e2e375c25ce98f822694c500eaa53 | after_fix_sha: df61a335c7b02f60d1fe6b0f05470afa19219765
report_datetime: 2022-01-20T08:15:04Z | language: go | commit_datetime: 2022-04-21T05:20:41Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2767
updated_files: ["server/client.go", "server/websocket_test.go"]
title: Forwarding headers should not be trusted by default
## Defect
**SECURITY PROBLEM** in _unreleased_ code.
_nb: I was asked for review on the original but didn't see the request until a sweep just now, so this is partly on me for missing it, sorry._
Issue #2734 adds support for `X-Forwarded-For:` headers. It does this always. This means that anyone wishing to hide their tracks just needs to include a bogus `X-Forwarded-For:` header. This is a well-known technique for getting around ACL/logging systems which are too permissive.
#### Versions of `nats-server` and affected client libraries used:
UNRELEASED, but anything including #2734
`git tag --contains 67c345270cd2b9ba8b62afc2eebee473199b07e2` confirms this is still not in a released version.
#### Expected result:
There should be a configuration block enabling parsing of `X-Forwarded-For:` (XFF) headers, providing a set of CIDR netblocks which are "trusted". Each purported proxy in turn appends its own idea of the origin IP. If Mallory includes "X-Forwarded-For: 192.0.2.42" in the original request, then that will still be there when it reaches the target server.
We have to start at the very end (our idea of the origin IP without the header) and check that, then check the header working backwards, checking that it's allowed to tell us of a different origin IP. And yes, people do chain proxies, particularly once CDNs come into play with ingress and egress XFF stamping.
So the correct algorithm is, in runnable Python-style pseudocode:
```python
import ipaddress

def trusted_forwarder(ip, trusted_cidrs):
    ip = ipaddress.ip_address(ip)
    for block in trusted_cidrs:
        if block.version != ip.version:
            continue
        if ip in block:
            return True
    return False

def set_source_ip(request, trusted_cidrs):
    request.source_addr = request.network_source_addr
    if not trusted_forwarder(request.network_source_addr, trusted_cidrs):
        return
    # Case-insensitive lookup; real code must also consider multiple
    # X-Forwarded-For headers -- check the spec.
    xff = request.headers.get('x-forwarded-for')
    if not xff:
        return
    purported_ips = [p.strip() for p in xff.split(',')][::-1]
    while len(purported_ips) > 1 and trusted_forwarder(purported_ips[0], trusted_cidrs):
        purported_ips.pop(0)
    request.source_addr = purported_ips[0]
```
A CIDR netblock such as `192.0.2.0/24` is the sanest unit of configuration for a trusted IP, and you can consider treating bare IPs as a /32 (and then there's IPv6 too). Usually there's a _short_ list of netblocks, at most 3 entries, so it's probably not worth using a trie or anything for matching: just iterate the very short list of configured netblocks.
#### Actual result:
The `X-Forwarded-For:` header is trusted always.
issue_url: https://github.com/nats-io/nats-server/issues/2767 | pull_url: https://github.com/nats-io/nats-server/pull/2769
before_fix_sha: fbcf1aba30066822696b26b12bb1c44453476f3a | after_fix_sha: ccc9e1621da00db095b518fbcb9d7bc09309aa5a
report_datetime: 2021-12-31T19:19:03Z | language: go | commit_datetime: 2022-01-04T17:14:49Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2754
updated_files: ["server/opts.go", "server/opts_test.go"]
title: The "max_file_store" option cannot be quoted
## Defect
The server does not process a quoted string for the `max_file_store` option. It'd be great if it could.
```bash
$ cat quoted.conf
jetstream {
max_file_store: "200M"
}
$ nats-server -c quoted.conf
nats-server: quoted.conf:1:0: interface conversion: interface {} is string, not int64
```
Others like this should be checked as well...
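An illustrative sketch of tolerant option parsing; the helper is hypothetical, the actual fix is in `server/opts.go`:
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseDataSize is an illustrative helper: it accepts either an int64 or a
// quoted string like "200M", which is what the issue asks the option parser to do.
func parseDataSize(v interface{}) (int64, error) {
	switch t := v.(type) {
	case int64:
		return t, nil
	case string:
		s := strings.ToUpper(strings.TrimSpace(t))
		mult := int64(1)
		switch {
		case strings.HasSuffix(s, "K"):
			mult, s = 1<<10, strings.TrimSuffix(s, "K")
		case strings.HasSuffix(s, "M"):
			mult, s = 1<<20, strings.TrimSuffix(s, "M")
		case strings.HasSuffix(s, "G"):
			mult, s = 1<<30, strings.TrimSuffix(s, "G")
		}
		n, err := strconv.ParseInt(s, 10, 64)
		if err != nil {
			return 0, err
		}
		return n * mult, nil
	default:
		return 0, fmt.Errorf("unsupported type %T", v)
	}
}

func main() {
	n, _ := parseDataSize("200M")
	fmt.Println(n) // 209715200
}
```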
issue_url: https://github.com/nats-io/nats-server/issues/2754 | pull_url: https://github.com/nats-io/nats-server/pull/2777
before_fix_sha: 08ff14a24e373aaa21faed7f3ff9a299297be097 | after_fix_sha: a7554bd5dde8a41ade760946334cf3d31868c5b5
report_datetime: 2021-12-21T20:58:10Z | language: go | commit_datetime: 2022-01-13T03:41:38Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2745
updated_files: ["server/consumer.go", "server/jetstream_cluster_test.go"]
title: QueueSub... is not normal in a super-cluster, when using different cluster to subscribe without providing durable name.
## Defect
- [x] nats-server -DV
```shell
[82] 2021/12/15 04:47:13.935429 [INF] Starting nats-server
[82] 2021/12/15 04:47:13.935484 [INF] Version: 2.6.4
[82] 2021/12/15 04:47:13.935487 [INF] Git: [a27de5a]
[82] 2021/12/15 04:47:13.935490 [DBG] Go build: go1.16.10
[82] 2021/12/15 04:47:13.935499 [INF] Name: NCONUHPM3NLBQJ4QCSQWCYTEINVWAAHCBSVXJY6N3JYZ77CLOY2UTURJ
[82] 2021/12/15 04:47:13.935502 [INF] ID: NCONUHPM3NLBQJ4QCSQWCYTEINVWAAHCBSVXJY6N3JYZ77CLOY2UTURJ
[82] 2021/12/15 04:47:13.935533 [DBG] Created system account: "$SYS"
```
#### Versions of `nats-server` and affected client libraries used:
- client libraries: nats.go v1.13.1
#### OS/Container environment:
- nats:2.6.4-alpine3.14 on k8s
#### Steps or code to reproduce the issue:
1. Suppose we have a super-cluster containing clusters C1 and C2, with details shown below:
|ClusterName|Jetstream| k8s svc|accounts|
|---|---|----|---|
|c1|enabled| nats-c1:4222|```"PUB":{user: "pub1", password: "pub1"}```|
|c2|enabled| nats-c2:4222|```"PUB":{user: "pub1", password: "pub1"}```|
2. First, we run the test code below, which uses `nats-c1:4222` as the serverURL.
Using the nats-cli tool to inspect consumer info and stream info, we can see that the stream and consumer are both created on the c1 cluster.
```go
// package/import prelude added so the snippet compiles; the issue omitted it.
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	const serverURL = "nats://nats-c1:4222" // step 3 switches this to nats-c2:4222
	const streamName = "demo"
	const subject = "demo.test"
	const queue = "test"
	connect, err := nats.Connect(serverURL, nats.UserInfo("pub1", "pub1"))
	if err != nil {
		log.Fatal(err)
	}
	defer connect.Close()
	js, err := connect.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	_, err = js.StreamInfo(streamName)
	if err != nil {
		if err == nats.ErrStreamNotFound {
			_, err := js.AddStream(&nats.StreamConfig{
				Name:     streamName,
				Subjects: []string{streamName + ".>"},
				Discard:  0,
				Storage:  nats.FileStorage,
				Replicas: 3,
			})
			if err != nil {
				return
			}
		} else {
			return
		}
	}
	ch1 := make(chan *nats.Msg)
	ch2 := make(chan *nats.Msg)
	_, err = js.ChanQueueSubscribe(subject, queue, ch1, nats.BindStream(streamName))
	if err != nil {
		return
	}
	_, err = js.ChanQueueSubscribe(subject, queue, ch2, nats.BindStream(streamName))
	if err != nil {
		return
	}
	go func() {
		for i := 0; i < 100000; i++ {
			msg := nats.NewMsg(subject)
			msg.Data = []byte(fmt.Sprintf("MSG %d", i))
			pubAck, err := js.PublishMsg(msg)
			if err != nil {
				return
			}
			fmt.Printf("%s %s %d\n", pubAck.Domain, pubAck.Stream, pubAck.Sequence)
		}
	}()
	for {
		select {
		case msg := <-ch1:
			fmt.Printf("ch1: %s\n", string(msg.Data))
			if err := msg.Ack(); err != nil {
				return
			}
		case msg := <-ch2:
			fmt.Printf("ch2: %s\n", string(msg.Data))
			if err := msg.Ack(); err != nil {
				return
			}
		}
	}
}
```
3. Second, we run the code again, changing the serverURL to `nats-c2:4222`.
#### Expected result:
In the queue-subscribe model, both cases should have the same result: all published messages should be received across ch1 and ch2.
#### Actual result:
Only in the first case is the result as expected. In the second case, only the sending goroutine behaves normally; the queue subscribers can't get any messages. Using nats-cli to get the consumer info, we see an erroneous consumer state: `Active Interest: No interest`.
But when we call the
```go
_, err = js.ChanQueueSubscribe(subject, queue, ch1, nats.BindStream(streamName), nats.Durable(consumerName))
_, err = js.ChanQueueSubscribe(subject, queue, ch2, nats.BindStream(streamName), nats.Durable(consumerName))
```
with the additional subOpts (the consumer is not required to exist beforehand), everything is normal.
issue_url: https://github.com/nats-io/nats-server/issues/2745 | pull_url: https://github.com/nats-io/nats-server/pull/2750
before_fix_sha: 3e8b66286d705df59197ff2660666c35c316140a | after_fix_sha: 6810e488741d69e688d94244eb103264e5d54d58
report_datetime: 2021-12-15T06:30:07Z | language: go | commit_datetime: 2021-12-20T16:45:50Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2742
updated_files: ["server/consumer.go", "server/filestore.go", "server/norace_test.go"]
title: JetStream `Consumer assignment not cleaned up, retrying`, degraded cluster performance
## Defect
When using a lot of ephemeral consumers, a cluster under significant load eventually starts spamming log lines like the sample below:
```
[1] 2021/12/13 02:44:15.208575 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.230045 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.241153 [INF] JetStream cluster new consumer leader for 'XXXX > YYYY > ZZZZ'
[1] 2021/12/13 02:44:15.289728 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.289972 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.300970 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.308939 [INF] JetStream cluster new consumer leader for 'XXXX > YYYY > ZZZZ'
[1] 2021/12/13 02:44:15.321413 [WRN] Consumer assignment not cleaned up, retrying
[1] 2021/12/13 02:44:15.359668 [WRN] Consumer assignment not cleaned up, retrying
```
I'm unsure if the leader migration for ephemeral consumers is normal even if none of the servers have rebooted.
`nats str info` of an affected stream:
```
Information for Stream XXXX created 2021-12-10T15:32:39Z
Configuration:
Subjects: XXXX.>
Acknowledgements: true
Retention: File - Limits
Replicas: 3
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: XXXX
Leader: prod-nats-js-2
Replica: prod-nats-js-0, current, seen 0.36s ago
Replica: prod-nats-js-1, current, seen 0.36s ago
State:
Messages: 308,916
Bytes: 63 MiB
FirstSeq: 37,439 @ 2021-12-11T03:57:36 UTC
LastSeq: 346,354 @ 2021-12-13T03:23:29 UTC
Active Consumers: 57
```
`nats con info` of an affected ephemeral consumer:
```
Information for Consumer XXXX > 07ix4IRz created 2021-12-13T03:24:25Z
Configuration:
Delivery Subject: XXXX.deliver.ephemeral.XXXX-34af714a-aa41-4434-8189-a70c80d85af4.01605c32-a7d7-452d-9287-db2b3aa85924
Filter Subject: XXXX.34af714a-aa41-4434-8189-a70c80d85af4
Deliver Policy: All
Ack Policy: None
Replay Policy: Instant
Flow Control: false
Cluster Information:
Name: XXXX
Leader: XXXX
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 347268
Unprocessed Messages: 0
Active Interest: Active
```
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output (can't reproduce in staging)
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Server: Docker `nats:2.6.6`
#### OS/Container environment:
docker-compose
Ubuntu 20.04
#### Steps or code to reproduce the issue:
1. Start a 3-node JetStream-enabled cluster
2. Create a stream with replication set to 3
3. Create a large amount of ephemeral consumers, many of which never have any messages delivered
#### Expected result:
Server CPU should remain low. Should not see warnings in console. Ephemeral consumers should be removed and not migrated.
#### Actual result:
CPU load steadily increases. CPU sits at effectively 0% until a steady stream of messages (~70 mps) are sent through the cluster. The two screenshots below are of the same 6-hour window.
<img width="512" alt="image" src="https://user-images.githubusercontent.com/96033673/145747931-ac4c4c91-bbff-4377-ba7c-a564eae2b68e.png">
<img width="1570" alt="image" src="https://user-images.githubusercontent.com/96033673/145748231-2bd8e2e6-4635-4089-8356-307f33e80cc6.png">
issue_url: https://github.com/nats-io/nats-server/issues/2742 | pull_url: https://github.com/nats-io/nats-server/pull/2764
before_fix_sha: 36d34492cd2da0ff063108c51f51c6942c7f6aa0 | after_fix_sha: bd495f3b18860cbbb08ec0d690910b74c13164f6
report_datetime: 2021-12-13T03:35:22Z | language: go | commit_datetime: 2021-12-29T15:42:30Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2732
updated_files: ["server/consumer.go", "server/filestore.go", "server/filestore_test.go"]
title: Consumer stopped working after errPartialCache (nats-server oom-killed)
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
```
# nats-server -DV
[92] 2021/12/06 15:16:05.235349 [INF] Starting nats-server
[92] 2021/12/06 15:16:05.235397 [INF] Version: 2.6.6
[92] 2021/12/06 15:16:05.235401 [INF] Git: [878afad]
[92] 2021/12/06 15:16:05.235406 [DBG] Go build: go1.16.10
[92] 2021/12/06 15:16:05.235416 [INF] Name: NASX72BQAFBIH4QBLZ36RADTPKSO6LCKRDEAS37XRJ7SYZ53RYYOFHHS
[92] 2021/12/06 15:16:05.235436 [INF] ID: NASX72BQAFBIH4QBLZ36RADTPKSO6LCKRDEAS37XRJ7SYZ53RYYOFHHS
[92] 2021/12/06 15:16:05.235457 [DBG] Created system account: "$SYS"
```
```
Image: nats:2.6.6-alpine
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 200m
memory: 256Mi
```
go library:
```
github.com/nats-io/nats.go v1.13.1-0.20211018182449-f2416a8b1483
```
#### OS/Container environment:
```
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
CONTAINER-RUNTIME
cri-o://1.21.4
```
#### Steps or code to reproduce the issue:
1. Start nats cluster (3 replicas) with Jetstream enabled.
JS Config:
```
jetstream {
max_mem: 64Mi
store_dir: /data
max_file:10Gi
}
```
2. Start to push messages into stream. Stream config:
```
Configuration:
Subjects: widget-request-collector
Acknowledgements: true
Retention: File - WorkQueue
Replicas: 3
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Maximum Messages: unlimited
Maximum Bytes: 1.9 GiB
Maximum Age: 1d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
```
3. Shut down one of the nats nodes for a while and rate-limit the consumer (or shut down the consumer) so that messages collect in file storage.
4. Wait until storage reaches its maximum capacity (1.9G).
5. Bring the nats server back up. (Do not bring up the consumer.)
#### Expected result:
Outdated node should become current.
#### Actual result:
The outdated node tries to become current and fetches messages from the stream leader, but it reaches its memory limit and is killed by the OOM killer. It restarts, and is killed by OOM again.
```
Cluster Information:
Name: nats
Leader: promo-widget-collector-event-nats-2
Replica: promo-widget-collector-event-nats-1, outdated, OFFLINE, seen 2m8s ago, 13,634 operations behind
Replica: promo-widget-collector-event-nats-0, current, seen 0.00s ago
State:
Messages: 2,695,412
Bytes: 1.9 GiB
FirstSeq: 3,957,219 @ 2021-12-06T14:04:00 UTC
LastSeq: 6,652,630 @ 2021-12-06T15:09:36 UTC
Active Consumers: 1
```
Crashed pod info:
```
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Mon, 06 Dec 2021 14:30:26 +0000
Finished: Mon, 06 Dec 2021 14:31:08 +0000
Ready: False
Restart Count: 3
```
Is it possible to configure memory limits for nats-server to prevent this memory overconsumption?
issue_url: https://github.com/nats-io/nats-server/issues/2732 | pull_url: https://github.com/nats-io/nats-server/pull/2761
before_fix_sha: 42ae3f532520538c8d0d445eaef1ff706b9aa622 | after_fix_sha: 34555aecca87987f9d56dab0bc76f0494dc5fa2b
report_datetime: 2021-12-06T17:09:26Z | language: go | commit_datetime: 2021-12-27T20:03:31Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2720
updated_files: ["server/filestore.go", "server/filestore_test.go"]
title: JetStream cluster could not decode consumer snapshot
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.6.5
#### OS/Container environment:
alpine
#### Steps or code to reproduce the issue:
- stop 1 of 3 pods
- start 1 of 3 pods
#### Expected result:
- correct start
#### Actual result:
- incorrect start
```
kubectl -n production logs collector-event-nats-2 -c nats
[87] 2021/12/01 17:08:48.861842 [INF] Starting nats-server
[87] 2021/12/01 17:08:48.861966 [INF] Version: 2.6.5
[87] 2021/12/01 17:08:48.861971 [INF] Git: [ea48105]
[87] 2021/12/01 17:08:48.861974 [INF] Name: collector-event-nats-2
[87] 2021/12/01 17:08:48.861989 [INF] Node: dcRC6Zf4
[87] 2021/12/01 17:08:48.861992 [INF] ID: NBPSM6UJMEPOKF6FNJODGZX2YDULV3L7NVNK4RNXECA72OE467G27VDD
[87] 2021/12/01 17:08:48.861998 [INF] Using configuration file: /etc/nats-config/nats.conf
[87] 2021/12/01 17:08:48.863486 [INF] Starting JetStream
[87] 2021/12/01 17:08:48.863829 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[87] 2021/12/01 17:08:48.863846 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[87] 2021/12/01 17:08:48.863848 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[87] 2021/12/01 17:08:48.863850 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[87] 2021/12/01 17:08:48.863851 [INF]
[87] 2021/12/01 17:08:48.863853 [INF] https://docs.nats.io/jetstream
[87] 2021/12/01 17:08:48.863854 [INF]
[87] 2021/12/01 17:08:48.863856 [INF] ---------------- JETSTREAM ----------------
[87] 2021/12/01 17:08:48.863869 [INF] Max Memory: 512.00 MB
[87] 2021/12/01 17:08:48.863872 [INF] Max Storage: 50.00 GB
[87] 2021/12/01 17:08:48.863875 [INF] Store Directory: "/data/jetstream"
[87] 2021/12/01 17:08:48.863876 [INF] -------------------------------------------
[87] 2021/12/01 17:08:48.870153 [INF] Restored 0 messages for stream "error-collector"
[87] 2021/12/01 17:08:56.299199 [INF] Restored 5,019,337 messages for stream "event-collector"
[87] 2021/12/01 17:09:08.709647 [INF] Restored 5,184,221 messages for stream "request-collector"
[87] 2021/12/01 17:09:08.709714 [INF] Recovering 1 consumers for stream - "error-collector"
[87] 2021/12/01 17:09:08.710007 [INF] Recovering 1 consumers for stream - "event-collector"
[87] 2021/12/01 17:09:08.710163 [INF] Recovering 1 consumers for stream - "request-collector"
[87] 2021/12/01 17:09:08.710383 [INF] Starting JetStream cluster
[87] 2021/12/01 17:09:08.710387 [INF] Creating JetStream metadata controller
[87] 2021/12/01 17:09:08.710598 [INF] JetStream cluster recovering state
[87] 2021/12/01 17:09:08.712197 [INF] Starting http monitor on 0.0.0.0:8222
[87] 2021/12/01 17:09:08.712280 [INF] Listening for client connections on 0.0.0.0:4222
[87] 2021/12/01 17:09:08.712503 [INF] Server is ready
[87] 2021/12/01 17:09:08.712522 [INF] Cluster name is nats
[87] 2021/12/01 17:09:08.712548 [INF] Listening for route connections on 0.0.0.0:6222
[87] 2021/12/01 17:09:08.716585 [INF] 10.244.243.194:6222 - rid:22 - Route connection created
[87] 2021/12/01 17:09:08.716643 [INF] 10.244.69.177:6222 - rid:23 - Route connection created
[87] 2021/12/01 17:09:08.753549 [WRN] Error applying entries to '$G > event-collector': last sequence mismatch
[87] 2021/12/01 17:09:08.768254 [WRN] Resetting stream cluster state for '$G > event-collector'
[87] 2021/12/01 17:09:08.795096 [WRN] Error applying entries to '$G > event-collector': last sequence mismatch
[87] 2021/12/01 17:09:08.797944 [ERR] JetStream cluster could not decode consumer snapshot for '$G > request-collector > request-collector-1' [C-R3F-XO5nmQ6F]
panic: corrupt state file
goroutine 179 [running]:
github.com/nats-io/nats-server/server.(*jetStream).applyConsumerEntries(0xc000162160, 0xc000625500, 0xc00059e4c0, 0x0, 0x5, 0x1)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3037 +0xe6f
github.com/nats-io/nats-server/server.(*jetStream).monitorConsumer(0xc000162160, 0xc000625500, 0xc00067eab0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:3006 +0x68f
github.com/nats-io/nats-server/server.(*jetStream).processClusterCreateConsumer.func1()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:2800 +0x3c
created by github.com/nats-io/nats-server/server.(*Server).startGoRoutine
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:2867 +0xc5
```
issue_url: https://github.com/nats-io/nats-server/issues/2720 | pull_url: https://github.com/nats-io/nats-server/pull/2738
before_fix_sha: f55ee219419145ee8887857c9dbdb9fcd5654935 | after_fix_sha: be066b7a21441520cf2fee67a7f6772da1f18f1a
report_datetime: 2021-12-01T17:50:59Z | language: go | commit_datetime: 2021-12-09T00:16:51Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2708
updated_files: ["server/stream.go"]
title: After removing a source from a source stream the NATS Server crashes
## Defect
When I remove a source from a source stream, it causes a fatal crash of the NATS Server a few seconds after the stream has been updated. Also, when I list streams right after the removal of a source, the stream is still showing a source, but it is an incomplete fragment of the original source. This can all be seen by debugging the code in the linked gist.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
[nats-server dv.txt](https://github.com/nats-io/nats-server/files/7590219/nats-server.dv.txt)
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
https://gist.github.com/kylebernhardy/a3d64e93175ec92bc3859fe2cc522bf0
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.6.5
nats.js 2.4.0
#### OS/Container environment:
Ubuntu 20.04
#### Steps or code to reproduce the issue:
Run the commands `nats-server -c ./hub.json` and `nats-server -c ./leaf.json` (these server configs are included in the linked gist)
Run/debug the test.js script
#### Expected result:
Desired source is successfully removed from stream & server does not crash.
#### Actual result:
The stream still has a partial source fragment & the server has a fatal crash.
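A hedged Go sketch of the update that triggers the crash (the original repro is in nats.js); the stream and source names are placeholders:
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Stream initially created with two sources; updating it with only one
	// effectively removes the other, which is where the crash was observed.
	if _, err := js.UpdateStream(&nats.StreamConfig{
		Name:    "AGG",
		Sources: []*nats.StreamSource{{Name: "S1"}}, // "S2" removed
	}); err != nil {
		log.Fatal(err)
	}
}
```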
issue_url: https://github.com/nats-io/nats-server/issues/2708 | pull_url: https://github.com/nats-io/nats-server/pull/2712
before_fix_sha: d3125c5a3c9f55fccfe54545c4759f6fdec3e973 | after_fix_sha: df581337eaf2f3eee8afb12f8c8366d21788828d
report_datetime: 2021-11-23T18:21:52Z | language: go | commit_datetime: 2021-11-29T21:26:18Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2706
updated_files: ["server/const.go", "server/consumer.go", "server/filestore.go", "server/jetstream_cluster.go", "server/memstore.go", "server/stream.go"]
title: Stream state with large interior deletes slow
When using Max Messages Per Subject style streams, interior deletes are the norm; streams can easily build up millions or even tens of millions of interior-deleted messages.
In those cases the Stream State API is very slow, primarily because we duplicate and sort the list of these interior deletes, even when detail is not requested.
In the memory store it's not too bad, but we still duplicate potentially tens of millions of uint64s and then sort them; this can take minutes.
https://github.com/nats-io/nats-server/blob/ea48105526db36b566b3afb36ddfacccf3f97f3a/server/memstore.go#L750-L766
In filestore we seem to also do some block level maintenance in the middle of state gathering:
https://github.com/nats-io/nats-server/blob/ea48105526db36b566b3afb36ddfacccf3f97f3a/server/filestore.go#L3386-L3398
In the typical stream state we just discard all these numbers:
https://github.com/nats-io/nats-server/blob/ea48105526db36b566b3afb36ddfacccf3f97f3a/server/stream.go#L3525-L3538
For the memory store it's quite easy to optimise this down to only returning, or even calculating, this list on demand, but for disk things are a bit more complex. We'd need to extend the Store and WAL interfaces' State() with a boolean and then carefully go through the places where the deleted list is not needed, or add a new form of State() - but I think the design of the current State() functions needs a rethink.
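A rough sketch of what a boolean-extended State() could look like; the names and shapes are assumptions, not the actual Store interface:
```go
package main

import (
	"fmt"
	"sort"
)

// Illustrative state type; the real server's StreamState tracks much more.
type StreamState struct {
	Msgs    uint64
	Deleted []uint64 // interior deletes; only wanted when detail is requested
}

// Sketch of the boolean-extended State() discussed above.
type Store interface {
	State(includeDeleted bool) StreamState
}

type memStore struct {
	msgs    uint64
	deleted map[uint64]struct{}
}

func (ms *memStore) State(includeDeleted bool) StreamState {
	st := StreamState{Msgs: ms.msgs}
	if !includeDeleted {
		// Fast path: skip copying and sorting millions of sequence numbers.
		return st
	}
	for seq := range ms.deleted {
		st.Deleted = append(st.Deleted, seq)
	}
	sort.Slice(st.Deleted, func(i, j int) bool { return st.Deleted[i] < st.Deleted[j] })
	return st
}

func main() {
	var s Store = &memStore{msgs: 3, deleted: map[uint64]struct{}{5: {}, 2: {}}}
	fmt.Println(s.State(false).Msgs, len(s.State(true).Deleted))
}
```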
issue_url: https://github.com/nats-io/nats-server/issues/2706 | pull_url: https://github.com/nats-io/nats-server/pull/2711
before_fix_sha: f094918f35b8d834f6d27939db8201837267d1a6 | after_fix_sha: d3125c5a3c9f55fccfe54545c4759f6fdec3e973
report_datetime: 2021-11-22T09:42:11Z | language: go | commit_datetime: 2021-11-29T19:32:44Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2679
updated_files: ["server/client.go", "server/client_test.go"]
title: A slow/disconnected subscriber can cause publishing to become blocked
## Defect
With 1 publisher and multiple subscribers, I have observed that under certain circumstances a subscriber which is slow or has a poor connection can block publishing (preventing other subscribers from receiving data).
### Reproduction
It is possible to reproduce this with a local `nats-server` and `nats-cli`.
- Run `nats-server` (default config/no arguments)
- Start publishing ~20kb messages every 20ms: `./nats --trace pub test --count=-10000000 --sleep=20ms "{{Random 20000 20000}}"`
- Start subscribing (just print out the message number to make the drops obvious) `./nats sub test | grep Received`
- Start a second subscriber, `./nats sub test | grep Received`. Cause this subscriber to simulate a disconnect by suspending the process (i.e. pressing `CTRL+Z` in the terminal.)
After a few seconds, at the same time, the first subscriber stops receiving messages, the publisher stops sending (message count stops increasing). This lasts for about 10 seconds.
I also see the following in the nats console logs.
```
[129148] 2021/11/05 11:45:56.973798 [INF] 127.0.0.1:50642 - cid:12 - "v1.12.0:go:NATS CLI Version 0.0.26" - Slow Consumer Detected: WriteDeadline of 10s exceeded with 1 chunks of 20020 total bytes.
[129148] 2021/11/05 11:45:56.973825 [WRN] 127.0.0.1:50616 - cid:10 - "v1.12.0:go:NATS CLI Version 0.0.26" - Readloop processing time: 10.000846205s
```
I've tested this on Ubuntu 20.04 with `nats-server` 2.6.4 and `nats-cli` 0.0.26.
The largish message size is required to trigger the issue.
Suspending the nats-cli is a crude way to simulate a subscriber with a connection issue - but I have seen this happen in more realistic/reasonable situations (standard slow subscribers/poor network conditions).
### Investigation/Potential Fix
I am not familiar with Go or the nats-server codebase - but here's my attempt at figuring out what is going on.
Using a shorter write deadline option makes the drop/blockage shorter - but there are still gaps of write deadline length when the subscriber disconnects.
I think the client `readLoop` for the publisher calling `flushClientsWithCheck` is causing the issue. When flushing messages to the subscriber the publisher `readLoop` ends up being blocked for the duration of `WriteDeadline`. The attempt to avoid this by using a time budget does not work since the heuristic of using the `Last flush time for Write` to guess how long the flush will take fails under the disconnect condition.
I verified this by changing the `budget` that `flushClientsWithCheck` is called with to zero in `client.go` - so clients are only notified, not flushed. i.e:
```
@@ -1204,9 +1204,9 @@ func (c *client) readLoop(pre []byte) {
// if applicable. Routes and Gateways will never
// spend time flushing outbound in place.
var budget time.Duration
- if c.kind == CLIENT {
- budget = time.Millisecond
- }
+ //if c.kind == CLIENT {
+ // budget = time.Millisecond
+ //}
// Flush, or signal to writeLoop to flush to socket.
last := c.flushClientsWithCheck(budget, true)
```
(This might have performance implications but is the simplest way to fix it - assuming my reasoning here is correct.)
After I made this change I cannot reproduce the issue. When suspending a subscriber we see the `Slow consumer` log - but no slow `Readloop processing` warning.
issue_url: https://github.com/nats-io/nats-server/issues/2679 | pull_url: https://github.com/nats-io/nats-server/pull/2684
before_fix_sha: 91d6531b8bdbcee0fdc1d2a4810712cdf6f76a7b | after_fix_sha: 8ac1ca0e980ec68bc77d5143207498eedca457d8
report_datetime: 2021-11-05T12:25:42Z | language: go | commit_datetime: 2021-11-10T16:33:25Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2670
updated_files: ["server/consumer.go", "server/dirstore_test.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/test_test.go"]
title: Allow modification of some JS push consumer settings
The request is for being able to alter the MaxAckPending value for an existing push consumer, as it is possible in STAN.
Use case: when you deploy a larger number of workers than the current MaxAckPending value, you are not making use of all your available workers and you need to adjust that value.
Looking further ahead, it could also be interesting to allow adjusting AckWait, Description, RateLimit and MaxDeliver.
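For reference, a hedged nats.go sketch of what such an update looks like from a client, assuming a client/server version that supports consumer updates (stream, durable and deliver subject are placeholders):
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Raise MaxAckPending on an existing push consumer so a larger worker
	// pool is actually used.
	if _, err := js.UpdateConsumer("ORDERS", &nats.ConsumerConfig{
		Durable:        "workers",
		DeliverSubject: "orders.deliver",
		MaxAckPending:  2048,
	}); err != nil {
		log.Fatal(err)
	}
}
```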
issue_url: https://github.com/nats-io/nats-server/issues/2670 | pull_url: https://github.com/nats-io/nats-server/pull/2674
before_fix_sha: fe3abafaeb796d0c903f62df4c1fb6e64ace31a5 | after_fix_sha: ee3009e121c5b2a84d9baf59ce22b7b5586a71ef
report_datetime: 2021-11-02T22:18:48Z | language: go | commit_datetime: 2021-11-04T20:58:56Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2666
updated_files: ["server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/raft.go", "server/stream.go"]
title: Excessive logging in clustered mode for KV Create()
When creating a KV stream like this:
```json
{
"name": "KV_DEMO_ELECTION",
"subjects": [
"$KV.DEMO_ELECTION.>"
],
"retention": "limits",
"max_consumers": -1,
"max_msgs_per_subject": 1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 60000000000,
"max_msg_size": -1,
"storage": "file",
"discard": "old",
"num_replicas": 3,
"duplicate_window": 60000000000,
"sealed": false,
"deny_delete": true,
"deny_purge": false,
"allow_rollup_hdrs": true
}
```
Using KV Create() in an R3 cluster logs the following all the time on failed updates; this should probably not happen just because someone uses Create() on an existing key:
```
Nov 2 12:48:57 n2-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
Nov 2 12:48:57 n3-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
Nov 2 12:48:57 n1-lon nats-server[1348]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
Nov 2 12:48:59 n2-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
Nov 2 12:48:59 n1-lon nats-server[1348]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
Nov 2 12:48:59 n3-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 2
```
If I do this a lot and concurrently, with one of the concurrent puts passing and the others therefore failing, the stream resets and becomes unusable:
```
Nov 2 12:51:20 n2-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 1
Nov 2 12:51:20 n2-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence mismatch
Nov 2 12:51:20 n2-lon nats-server[1336]: [1] [WRN] Resetting stream cluster state for 'one > KV_DEMO_ELECTION'
Nov 2 12:51:20 n1-lon nats-server[1348]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 1
Nov 2 12:51:20 n1-lon nats-server[1348]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence mismatch
Nov 2 12:51:20 n1-lon nats-server[1348]: [1] [WRN] Resetting stream cluster state for 'one > KV_DEMO_ELECTION'
Nov 2 12:51:20 n3-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence by subject mismatch: 0 vs 1
Nov 2 12:51:20 n3-lon nats-server[1336]: [1] [WRN] Error applying entries to 'one > KV_DEMO_ELECTION': last sequence mismatch
Nov 2 12:51:20 n3-lon nats-server[1336]: [1] [WRN] Resetting stream cluster state for 'one > KV_DEMO_ELECTION'
Nov 2 12:51:20 n2-lon nats-server[1336]: [1] [INF] JetStream cluster new stream leader for 'one > KV_DEMO_ELECTION'
```
A single server does not do this.
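A sketch of the Create() pattern that produces these warnings; the bucket name follows the config above, the rest is illustrative:
```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	kv, err := js.KeyValue("DEMO_ELECTION")
	if err != nil {
		log.Fatal(err)
	}

	// Leader-election style Create(): exactly one concurrent caller wins;
	// every loser is a failed expected-last-sequence update, which is what
	// the server logs as a warning on each attempt.
	if _, err := kv.Create("leader", []byte("candidate-1")); err != nil {
		log.Printf("did not win election: %v", err)
	}
}
```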
issue_url: https://github.com/nats-io/nats-server/issues/2666 | pull_url: https://github.com/nats-io/nats-server/pull/2668
before_fix_sha: 1097ac9234f459dc99747c5d49e7c07ae9286230 | after_fix_sha: 6987480a14d88be257146e7df9aa966d2c3eeb10
report_datetime: 2021-11-02T12:53:37Z | language: go | commit_datetime: 2021-11-02T22:34:08Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2662
updated_files: ["server/filestore.go", "server/jetstream_test.go"]
title: JetStream messages that have large expirations appear to not expire at the correct time after server restart.
Server restart sets the expire timer for each stream to the default after messages have been expired.
issue_url: https://github.com/nats-io/nats-server/issues/2662 | pull_url: https://github.com/nats-io/nats-server/pull/2665
before_fix_sha: 2ce09f0dc5c97360eed7e54c83665e79260715b4 | after_fix_sha: 1097ac9234f459dc99747c5d49e7c07ae9286230
report_datetime: 2021-11-02T00:01:10Z | language: go | commit_datetime: 2021-11-02T15:50:55Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2658
updated_files: ["server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/norace_test.go", "server/stream.go"]
title: Stream Info response received for duplicate Stream create
Given config.json
```
{
"name": "monitoring",
"subjects": [
"mon.*"
],
"retention": "limits",
"max_consumers": 128,
"max_msgs_per_subject": 128,
"max_msgs": 1024,
"max_bytes": 100000,
"max_age": 0,
"max_msg_size": 1024,
"storage": "file",
"discard": "old",
"num_replicas": 1,
"duplicate_window": 120000000000
}
```
We first create a stream; it's all fine:
```
$ nats s add BEN --config config.json
Stream BEN was created
....
```
But if we do exactly the same immediately after, we should get a duplicate error or, if the config is identical, a create response, so that this API is idempotent when called with identical arguments:
```
nats s add BEN --config --trace
11:32:25 >>> $JS.API.STREAM.CREATE.BEN
{"name":"BEN","subjects":["mon.*"],"retention":"limits","max_consumers":128,"max_msgs_per_subject":128,"max_msgs":1024,"max_bytes":100000,"max_age":0,"max_msg_size":1024,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000,"sealed":false,"deny_delete":false,
"deny_purge":false,"allow_rollup_hdrs":false}
11:32:25 <<< $JS.API.STREAM.CREATE.BEN
{"type":"io.nats.jetstream.api.v1.stream_info_response","config":{"name":"BEN","subjects":["mon.*"],"retention":"limits","max_consumers":128,"max_msgs":1024,"max_bytes":100000,"max_age":0,"max_msgs_per_subject":128,"max_msg_size":1024,"discard":"old","storage":"file","num_replicas":1,"d
uplicate_window":120000000000,"sealed":false,"deny_delete":false,"deny_purge":false,"allow_rollup_hdrs":false},"created":"2021-11-01T11:31:29.590690386Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consum
er_count":0},"domain":"hub","cluster":{"name":"lon","leader":"n2-lon"}}
```
Note that in the first trace we send the create request (all good), but the response is an INFO response, which is illegal.
This appears to be an effort to make stream create idempotent, but instead it's an invalid response:
https://github.com/nats-io/nats-server/blob/530ea6a5c371e944fc3141a59dc7d3a69f2ab132/server/jetstream_cluster.go#L3580-L3588
In single server mode the response is a create response:
```
12:36:23 >>> $JS.API.STREAM.CREATE.BEN
{"name":"BEN","subjects":["mon.*"],"retention":"limits","max_consumers":128,"max_msgs_per_subject":128,"max_msgs":1024,"max_bytes":100000,"max_age":0,"max_msg_size":1024,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000,"sealed":false,"deny_delete":false,"deny_purge":false,"allow_rollup_hdrs":false}
12:36:23 <<< $JS.API.STREAM.CREATE.BEN
{"type":"io.nats.jetstream.api.v1.stream_create_response","config":{"name":"BEN","subjects":["mon.*"],"retention":"limits","max_consumers":128,"max_msgs":1024,"max_bytes":100000,"max_age":0,"max_msgs_per_subject":128,"max_msg_size":1024,"discard":"old","storage":"file","num_replicas":1,"duplicate_window":120000000000},"created":"2021-11-01T11:36:18.494935319Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consumer_count":0},"did_create":true}
```
Introduced in https://github.com/nats-io/nats-server/commit/cfbc69b12c1a1b0708d623d50420179206f54b10
Tests for this:
```diff
diff --git a/server/jetstream_cluster_test.go b/server/jetstream_cluster_test.go
index cb5eeca6..e7c7c152 100644
--- a/server/jetstream_cluster_test.go
+++ b/server/jetstream_cluster_test.go
@@ -9017,6 +9017,9 @@ func addStream(t *testing.T, nc *nats.Conn, cfg *StreamConfig) *StreamInfo {
var resp JSApiStreamCreateResponse
err = json.Unmarshal(rmsg.Data, &resp)
require_NoError(t, err)
+ if resp.Type != JSApiStreamCreateResponseType {
+ t.Fatalf("Invalid response type %s expected %s", resp.Type, JSApiStreamCreateResponseType)
+ }
if resp.Error != nil {
t.Fatalf("Unexpected error: %+v", resp.Error)
}
@@ -9167,6 +9170,25 @@ func TestJetStreamRollupSubjectAndWatchers(t *testing.T) {
expectUpdate("age", "50", 6)
}
+func TestJetStreamClusteredStreamCreateIdempotent(t *testing.T) {
+ c := createJetStreamClusterExplicit(t, "JSC", 3)
+ defer c.shutdown()
+
+ nc, _ := jsClientConnect(t, c.randomServer())
+ defer nc.Close()
+
+ cfg := &StreamConfig{
+ Name: "AUDIT",
+ Storage: MemoryStorage,
+ Subjects: []string{"foo"},
+ Replicas: 3,
+ DenyDelete: true,
+ DenyPurge: true,
+ }
+ addStream(t, nc, cfg)
+ addStream(t, nc, cfg)
+}
+
func TestJetStreamAppendOnly(t *testing.T) {
c := createJetStreamClusterExplicit(t, "JSC", 3)
defer c.shutdown()
```
issue_url: https://github.com/nats-io/nats-server/issues/2658 | pull_url: https://github.com/nats-io/nats-server/pull/2669
before_fix_sha: 6987480a14d88be257146e7df9aa966d2c3eeb10 | after_fix_sha: ae999aabe9cb4b885d4550e33ab6f3bfe73e35bb
report_datetime: 2021-11-01T11:44:28Z | language: go | commit_datetime: 2021-11-02T22:39:30Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2644
updated_files: ["server/consumer.go", "server/jetstream_api.go", "server/jetstream_cluster_test.go", "server/norace_test.go", "server/stream.go"]
title: Jetstream server panic
When publishing messages to jetstream, we noticed our nats jetstream nodes would occasionally crash with the following panic.
```
fatal error: sync: Unlock of unlocked RWMutex
goroutine 4066 [running]:
runtime.throw(0xb0a29a, 0x20)
/home/travis/.gimme/versions/go1.16.8.linux.amd64/src/runtime/panic.go:1117 +0x72 fp=0xc001702f50 sp=0xc001702f20 pc=0x437d12
sync.throw(0xb0a29a, 0x20)
/home/travis/.gimme/versions/go1.16.8.linux.amd64/src/runtime/panic.go:1103 +0x35 fp=0xc001702f70 sp=0xc001702f50 pc=0x46c3f5
sync.(*RWMutex).Unlock(0xc001640a80)
/home/travis/.gimme/versions/go1.16.8.linux.amd64/src/sync/rwmutex.go:142 +0xc6 fp=0xc001702fa8 sp=0xc001702f70 pc=0x489ba6
github.com/nats-io/nats-server/server.(*consumer).processNextMsgReq.func3(0xc001640a80, 0xc0024d5a10, 0xc00280b540, 0xc000412000)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/consumer.go:1950 +0x50 fp=0xc001702fc0 sp=0xc001702fa8 pc=0x9a2150
runtime.goexit()
/home/travis/.gimme/versions/go1.16.8.linux.amd64/src/runtime/asm_amd64.s:1371 +0x1 fp=0xc001702fc8 sp=0xc001702fc0 pc=0x471341
created by github.com/nats-io/nats-server/server.(*consumer).processNextMsgReq
/home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster.go:1947 +0x669
```
Looking at the source, it appears we have an extra mutex unlock.
* https://github.com/nats-io/nats-server/blob/v2.6.1/server/consumer.go#L1937-L1944
The mutex already has a deferred unlock: https://github.com/nats-io/nats-server/blob/v2.6.1/server/consumer.go#L1862-L1863
which is manually unlocked again here https://github.com/nats-io/nats-server/blob/v2.6.1/server/consumer.go#L1940
Running: nats:2.6.1-alpine on k8s
Config:
```
jetstream {
max_mem: 1Gi
store_dir: /data
max_file:10Gi
}
```
Not sure what message or subscriber triggers this as we are sending a few thousand messages a second.
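The pattern is easy to reproduce in isolation; this minimal program crashes with the same fatal error:
```go
package main

import "sync"

func main() {
	var mu sync.RWMutex
	mu.Lock()
	defer mu.Unlock() // deferred unlock, as in processNextMsgReq

	// ...an early-return path also unlocks manually:
	mu.Unlock()

	// When main returns, the deferred Unlock fires on an already-unlocked
	// mutex: "fatal error: sync: Unlock of unlocked RWMutex".
}
```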
issue_url: https://github.com/nats-io/nats-server/issues/2644 | pull_url: https://github.com/nats-io/nats-server/pull/2645
before_fix_sha: 81ccce9422a8bb03872e5eb38597ec090386d878 | after_fix_sha: 4ba1ab27bd6cf8230487a37d5d859ff9c909ddb1
report_datetime: 2021-10-22T05:42:48Z | language: go | commit_datetime: 2021-10-25T20:56:28Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2642
updated_files: ["server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/norace_test.go", "server/store.go", "server/stream.go"]
title: JetStream Cluster becomes inconsistent: catchup for stream stalled
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
`nats:2.6.2-alpine`
#### OS/Container environment:
k8s
#### Context
* I don't know how this problem arises, but it happens every few hours or days at max.
* The production system this is captured on has a message rate of about 1/s, with a few bursts every few hours of maybe 5/s (at least according to Grafana).
* The cluster consists of three nodes, and a stream with 3 replicas.
* After hours/days, one of the nodes starts emitting `Catchup for stream '$G > corejobs' stalled`
* All consumers, where this node is the leader, stop getting messages.
* I dumped the jetstream data directory so you can get a look at the current state.
#### Steps or code to reproduce the issue:
* Start the cluster with the data and config provided [here](https://cloud.aksdb.de/s/ecg53yRQ2C8ps3n).
* Make sure, `nats-cluster-2` is **not** the leader. (use `nats stream cluster step-down` if necessary)
* Send a message to the stream: `nats pub core.jobs.notifications.SendPush foobar`
* Observe the logs of `nats-cluster-2` (which now throws warnings)
Additionally:
check the message count in the stream; when `nats-cluster-2` is the leader, it differs from when `nats-cluster-0` or `nats-cluster-1` are leaders.
#### Expected result:
Either:
1. All three nodes have the same data.
1. The node which can't catch up is marked faulty and steps down from all leader positions
Also:
JetStream should always be able to recover from inconsistent states (especially if there is still a majority of healthy nodes around).
#### Actual result:
* The cluster thinks it's healthy.
* One node doesn't receive any data anymore.
Additionally: depending on who becomes leader, the number of messages varies (obviously, since it's no longer syncing).
Node0: 310 messages, Node1: 310 messages, Node2: 81 messages
issue_url: https://github.com/nats-io/nats-server/issues/2642 | pull_url: https://github.com/nats-io/nats-server/pull/2648
before_fix_sha: 59291954707b4f70d710baf17563be12001e41f5 | after_fix_sha: ad04a3b7b18b9a7386ff45c758d568973713303d
report_datetime: 2021-10-21T16:24:15Z | language: go | commit_datetime: 2021-10-27T16:06:04Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2633
updated_files: ["server/jetstream_cluster_test.go", "server/stream.go"]
title: stream info options: sealed, deny_delete, deny_purge should return default values
In Go, the default value for bool fields is `false`. However, in JSON, the omission is effectively a NULL/undefined value.
For operations that return the stream info, these fields should be set to `false` (remove `omitempty` from the struct declarations) to let the JSON marshaling provide a sane value.
Also, if a stream is sealed, wouldn't that also imply that `deny_delete` and `deny_purge` are also `true`?
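The `omitempty` behaviour is easy to demonstrate:
```go
package main

import (
	"encoding/json"
	"fmt"
)

type withOmit struct {
	Sealed bool `json:"sealed,omitempty"`
}

type withoutOmit struct {
	Sealed bool `json:"sealed"`
}

func main() {
	// With omitempty, a false boolean vanishes from the JSON, so clients
	// cannot tell "false" from "not set".
	a, _ := json.Marshal(withOmit{})
	b, _ := json.Marshal(withoutOmit{})
	fmt.Println(string(a)) // {}
	fmt.Println(string(b)) // {"sealed":false}
}
```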
issue_url: https://github.com/nats-io/nats-server/issues/2633 | pull_url: https://github.com/nats-io/nats-server/pull/2650
before_fix_sha: 03a4f2b26805a04bbd8fe52c8f782ef04d7718ee | after_fix_sha: 862c0c647dbac06fb0be1e1be1f9a73980ebf4f3
report_datetime: 2021-10-18T17:00:20Z | language: go | commit_datetime: 2021-10-27T22:34:05Z

status: closed | repo_name: nats-io/nats-server | repo_url: https://github.com/nats-io/nats-server | issue_id: 2630
updated_files: ["server/leafnode_test.go", "server/opts.go"]
title: Memory Leak on Leafnode Error 'Authorization Violation'
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- `nats-server: v2.6.2`
- No client libraries are needed.
#### OS/Container environment:
Executing directly on my Windows laptop.
#### Steps or code to reproduce the issue:
You will need a certificate `cert.pem` plus its corresponding private key `key.pem` that:
- has a Subject Alternative Name (SAN) with a DNS Name of `localhost`
- is valid for both `TLS Client Authentication` as well as `TLS Server Authentication`
- is trusted by the OS
**`A.conf`**
```
server_name: A
listen: 127.0.0.1:4223
leafnodes: {
tls: {
cert_file: cert.pem
key_file: key.pem
verify_and_map: true
}
authorization: {
users: [
{ user: 'notamatch' }
]
}
listen: 127.0.0.1:7422
}
```
**`B.conf`**
```
server_name: B
listen: 127.0.0.1:4224
leafnodes: {
remotes: [
{
tls: {
cert_file: cert.pem
key_file: key.pem
}
url: 'nats-leaf://localhost'
}
]
}
```
Execute:
1. `.\nats-server --config A.conf -DV`
2. `.\nats-server --config B.conf -DV`
#### Expected result:
I expect the RAM consumption of both `nats-server.exe` processes to be stable.
#### Actual result:
NATS Server `B` has a memory leak.
**Logs from `A`**:
```
[27776] 2021/10/18 11:21:24.905975 [INF] Starting nats-server
[27776] 2021/10/18 11:21:24.906515 [INF] Version: 2.6.2
[27776] 2021/10/18 11:21:24.906515 [INF] Git: [f7c3ac5]
[27776] 2021/10/18 11:21:24.906515 [DBG] Go build: go1.16.9
[27776] 2021/10/18 11:21:24.906515 [INF] Name: A
[27776] 2021/10/18 11:21:24.906515 [INF] ID: ND7OCTYLZZO6ADLAOUH4EILURCGGFEXBJUI6VOQ37EDQED7B5PJ6BORN
[27776] 2021/10/18 11:21:24.907063 [INF] Using configuration file: A.conf
[27776] 2021/10/18 11:21:24.907063 [DBG] Created system account: "$SYS"
[27776] 2021/10/18 11:21:24.908094 [INF] Listening for leafnode connections on 127.0.0.1:7422
[27776] 2021/10/18 11:21:24.909135 [INF] Listening for client connections on 127.0.0.1:4223
[27776] 2021/10/18 11:21:24.909680 [INF] Server is ready
[27776] 2021/10/18 11:21:36.866259 [INF] 127.0.0.1:49180 - lid:4 - Leafnode connection created
[27776] 2021/10/18 11:21:36.866853 [DBG] 127.0.0.1:49180 - lid:4 - Starting TLS leafnode server handshake
[27776] 2021/10/18 11:21:36.916573 [TRC] 127.0.0.1:49180 - lid:4 - <<- [CONNECT {"tls_required":true,"server_id":"NB3VRN7MA5PG47BOFXYMB4XZ5XHQFM5UD5A45O464MQH5MNGD6NW6HI2","name":"B","cluster":"B","headers":true}]
[27776] 2021/10/18 11:21:36.916930 [DBG] 127.0.0.1:49180 - lid:4 - Multiple peer certificates found, selecting first
[27776] 2021/10/18 11:21:36.916930 [DBG] 127.0.0.1:49180 - lid:4 - DistinguishedNameMatch could not be used for auth ["CN=temp"]
[27776] 2021/10/18 11:21:36.917456 [DBG] 127.0.0.1:49180 - lid:4 - User in cert ["CN=temp"], not found
[27776] 2021/10/18 11:21:36.917456 [ERR] 127.0.0.1:49180 - lid:4 - authentication error
[27776] 2021/10/18 11:21:36.917456 [TRC] 127.0.0.1:49180 - lid:4 - ->> [-ERR Authorization Violation]
[27776] 2021/10/18 11:21:36.918067 [INF] 127.0.0.1:49180 - lid:4 - Leafnode connection closed: Authentication Failure account:
...
```
**Logs from `B`**:
```
[29052] 2021/10/18 11:21:36.855188 [INF] Starting nats-server
[29052] 2021/10/18 11:21:36.855702 [INF] Version: 2.6.2
[29052] 2021/10/18 11:21:36.856213 [INF] Git: [f7c3ac5]
[29052] 2021/10/18 11:21:36.856213 [DBG] Go build: go1.16.9
[29052] 2021/10/18 11:21:36.856213 [INF] Name: B
[29052] 2021/10/18 11:21:36.856213 [INF] ID: NB3VRN7MA5PG47BOFXYMB4XZ5XHQFM5UD5A45O464MQH5MNGD6NW6HI2
[29052] 2021/10/18 11:21:36.856720 [INF] Using configuration file: B.conf
[29052] 2021/10/18 11:21:36.856729 [DBG] Created system account: "$SYS"
[29052] 2021/10/18 11:21:36.859174 [INF] Listening for client connections on 127.0.0.1:4224
[29052] 2021/10/18 11:21:36.860800 [INF] Server is ready
[29052] 2021/10/18 11:21:36.864684 [DBG] Trying to connect as leafnode to remote server on "localhost:7422" (127.0.0.1:7422)
[29052] 2021/10/18 11:21:36.867375 [INF] 127.0.0.1:7422 - lid:4 - Leafnode connection created for account: $G
[29052] 2021/10/18 11:21:36.868421 [DBG] 127.0.0.1:7422 - lid:4 - Starting TLS leafnode client handshake
[29052] 2021/10/18 11:21:36.893687 [DBG] 127.0.0.1:7422 - lid:4 - Remote leafnode connect msg sent
[29052] 2021/10/18 11:21:36.918144 [ERR] 127.0.0.1:7422 - lid:4 - Leafnode Error 'Authorization Violation'
[29052] 2021/10/18 11:21:36.918144 [INF] 127.0.0.1:7422 - lid:4 - Leafnode connection closed: Client Closed account: $G
...
```
**Performance Graph of `A`**:

**Performance Graph of `B`**:

As you can see from the graph above, `B` is showing signs of a memory leak: its memory usage constantly increases. The graph covers a number of minutes; if we leave the process running overnight, by the next morning its memory usage is several GBs.
|
https://github.com/nats-io/nats-server/issues/2630
|
https://github.com/nats-io/nats-server/pull/2637
|
ed97718f68f1dc6b2a2de8b0bffb5ff9c2df7b2d
|
5ce7cfa9a6f7096c6e105c5b2cc58e8b70bce6c5
| 2021-10-18T09:30:44Z |
go
| 2021-10-19T15:01:32Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,622 |
["server/client.go", "server/const.go", "server/filestore.go", "server/filestore_test.go"]
|
Double put of large objects results in loss of metadata
|
With the new object store preview, a put of a large object followed by another put of the same name will incorrectly discard the metadata from the first put.
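A minimal reproduction sketch, assuming the nats.go object store preview API; the bucket and object names are illustrative:
```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	obs, err := js.CreateObjectStore(&nats.ObjectStoreConfig{Bucket: "OBJS"})
	if err != nil {
		log.Fatal(err)
	}
	// Put a multi-chunk (large) object twice under the same name; per
	// this report, the metadata is lost after the second put.
	data := make([]byte, 4*1024*1024)
	for _, desc := range []string{"first", "second"} {
		meta := &nats.ObjectMeta{Name: "blob", Description: desc}
		if _, err := obs.Put(meta, bytes.NewReader(data)); err != nil {
			log.Fatal(err)
		}
	}
	info, err := obs.GetInfo("blob")
	fmt.Println(info, err)
}
```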
|
https://github.com/nats-io/nats-server/issues/2622
|
https://github.com/nats-io/nats-server/pull/2623
|
f7c3ac5e51f628bf6a85a43878f85e6e87847207
|
3fbf9ddcbc6a8c40d278aec1e65981299febbad3
| 2021-10-14T16:17:17Z |
go
| 2021-10-14T17:30:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,611 |
["server/consumer.go", "server/jetstream_test.go"]
|
JS: `num_pending` with `last_per_subject` doesn't report the correct number of messages the consumer has
|
For stream:
```
{
type: "io.nats.jetstream.api.v1.stream_info_response",
config: {
name: "KV_ngs-account-notifications",
subjects: [ "$KV.ngs-account-notifications.>" ],
retention: "limits",
max_consumers: -1,
max_msgs: -1,
max_bytes: -1,
max_age: 0,
max_msgs_per_subject: 100,
max_msg_size: -1,
discard: "old",
storage: "file",
num_replicas: 1,
duplicate_window: 120000000000
},
created: "2021-10-11T17:08:39.32354681Z",
state: {
messages: 248,
bytes: 63391,
first_seq: 1,
first_ts: "2021-10-11T17:12:20.956786183Z",
last_seq: 250,
last_ts: "2021-10-11T17:14:48.673274472Z",
num_deleted: 2,
consumer_count: 1
}
}
```
If a consumer is created to process all messages:
```
{
type: "io.nats.jetstream.api.v1.consumer_create_response",
stream_name: "KV_ngs-account-notifications",
name: "bVQx30Zs",
created: "2021-10-11T17:20:32.355846071Z",
config: {
deliver_subject: "_INBOX.5BSRSCEE52KHF3PCS67Z8V",
deliver_policy: "all",
ack_policy: "explicit",
ack_wait: 30000000000,
max_deliver: -1,
filter_subject: "$KV.ngs-account-notifications.>",
replay_policy: "instant",
max_ack_pending: 20000,
idle_heartbeat: 10000000000,
flow_control: true
},
delivered: { consumer_seq: 0, stream_seq: 0 },
ack_floor: { consumer_seq: 0, stream_seq: 0 },
num_ack_pending: 0,
num_redelivered: 0,
num_waiting: 0,
num_pending: 248,
cluster: { leader: "nats-server-a" }
}
```
With last per_subject:
```
{
type: "io.nats.jetstream.api.v1.consumer_create_response",
stream_name: "KV_ngs-account-notifications",
name: "t4W9bGal",
created: "2021-10-11T17:19:33.07336111Z",
config: {
deliver_subject: "_INBOX.5WM5NRT62YVMA8BJO4XM38",
deliver_policy: "last_per_subject",
ack_policy: "explicit",
ack_wait: 30000000000,
max_deliver: -1,
filter_subject: "$KV.ngs-account-notifications.>",
replay_policy: "instant",
max_ack_pending: 20000,
idle_heartbeat: 10000000000,
flow_control: true
},
delivered: { consumer_seq: 0, stream_seq: 11 },
ack_floor: { consumer_seq: 0, stream_seq: 11 },
num_ack_pending: 0,
num_redelivered: 0,
num_waiting: 0,
num_pending: 237,
cluster: { leader: "nats-server-a" }
}
```
Internally the server is indeed sending the correct count (in this case about 40 or so messages), but the mismatch on `num_pending` causes the client to hang, as it is waiting for more messages. Note that the `pending` count on the reply subject will also never be `0`, so whether the client is waiting for the initial count or dynamically adjusting until the server says there are no more messages, it will block. Processing an idle heartbeat or flow control message will probably provide the right information, but that means the client waits around when it should have had a priori knowledge of the count.
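For reference, a sketch of creating the `last_per_subject` consumer above with the nats.go client; `js` is assumed to be a connected `nats.JetStreamContext`:
```go
ci, err := js.AddConsumer("KV_ngs-account-notifications", &nats.ConsumerConfig{
	DeliverSubject: nats.NewInbox(),
	DeliverPolicy:  nats.DeliverLastPerSubjectPolicy,
	AckPolicy:      nats.AckExplicitPolicy,
	FilterSubject:  "$KV.ngs-account-notifications.>",
	MaxAckPending:  20000,
	Heartbeat:      10 * time.Second,
	FlowControl:    true,
})
if err != nil {
	log.Fatal(err)
}
// Expected: NumPending equals the number of per-subject "last" messages
// that will actually be delivered, not the full subject-filtered count.
fmt.Println("num_pending:", ci.NumPending)
```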
|
https://github.com/nats-io/nats-server/issues/2611
|
https://github.com/nats-io/nats-server/pull/2616
|
f8a8367ed232fc904067859ff1c90bb282dba757
|
cbbab295ec707ca60c0a236200c8a990fa1fe5d0
| 2021-10-11T17:27:03Z |
go
| 2021-10-12T15:46:42Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,607 |
["server/consumer.go", "server/jetstream_test.go"]
|
Jetstream: Purging filtered stream resets all stream consumer offsets
|
## Defect
When purging a jetstream stream with a filtered subject, all consumers listening on the stream have their offset reset to the latest offset.
e.g.
* durable consumers subscribed and filtering on subjects `FOO.adam` and `FOO.eve`
* messages published to `FOO.adam` and `FOO.eve`
* Purge messages filtered on subject `FOO.adam`
* Consumers filtering `FOO.eve` have their offsets reset
Diving into the purge implementation,
* As expected, we see explicit handling for defined subjects and offsets when purging the store (https://github.com/nats-io/nats-server/blob/v2.6.1/server/stream.go#L994-L1000)
* However, all stream consumers regardless of their subject filters have their offset updated to the last offset of the purge (https://github.com/nats-io/nats-server/blob/v2.6.1/server/stream.go#L1004-L1009)
(will fill out the MCVE after work)
- [ ] Included `nats-server -DV` output
### MCVE
```go
package nats_bug_report
import (
"fmt"
"strings"
"testing"
"time"
"github.com/google/uuid"
"github.com/nats-io/nats-server/v2/server"
natsserver "github.com/nats-io/nats-server/v2/test"
"github.com/nats-io/nats.go"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/nats-io/jsm.go"
"github.com/nats-io/jsm.go/api"
)
const (
user = "hehexd"
pass = "hehepass"
DEBUG = false
)
var (
stream = strings.ToUpper(uuid.New().String())
)
// RunBasicJetstreamServer is ripped from github.com/nats-io/nats-server/v2/test/test.go
func RunBasicJetStreamServer() *server.Server {
opts := natsserver.DefaultTestOptions
opts.Port = -1
opts.NoLog = true
opts.JetStream = true
if DEBUG {
opts.Trace = true
opts.NoLog = false
opts.TraceVerbose = true
opts.Debug = true
}
opts.HTTPPort = -1
opts.Username = user
opts.Password = pass
s, err := server.NewServer(&opts)
if err != nil || s == nil {
panic(fmt.Sprintf("No NATS Server object returned: %v", err))
}
s.ConfigureLogger()
// Run server in Go routine.
go s.Start()
// Wait for accept loop(s) to be started
if !s.ReadyForConnections(10 * time.Second) {
panic("Unable to start NATS Server in Go Routine")
}
return s
}
func initConnection(url string) (*nats.Conn, nats.JetStreamContext, error) {
nc, err := nats.Connect(url,
nats.MaxReconnects(-1),
nats.PingInterval(5*time.Second),
nats.UserInfo(user, pass),
)
if err != nil {
return nil, nil, errors.Wrap(err, "failed to connect to nats")
}
js, err := nc.JetStream(nats.PublishAsyncMaxPending(256))
if err != nil {
return nil, nil, errors.Wrap(err, "failed to subscribe to jetstream")
}
// Stream name must be allcaps https://docs.nats.io/jetstream/administration/naming
stream := strings.ToUpper(stream)
_, err = js.AddStream(&nats.StreamConfig{
Name: stream,
Subjects: []string{stream + ".*"},
// Non-aggressive retention policy, if we ever want to replay messages
// we can start here, or have multiple subscribers listen to the same queue
Retention: nats.LimitsPolicy,
// If things get too full, we gotta start culling somewhere
Discard: nats.DiscardOld,
MaxAge: 7 * 24 * time.Hour,
Storage: nats.FileStorage,
})
if err != nil {
return nil, nil, errors.Wrap(err, "failed to create stream")
}
return nc, js, nil
}
// addConsumer creates a fetch based consumer
func addConsumer(js nats.JetStreamContext, queue string) error {
durable := stream + "_" + queue
queueName := stream + "." + queue
cconf := &nats.ConsumerConfig{
FilterSubject: queueName,
Durable: durable,
MaxAckPending: 1000,
DeliverPolicy: nats.DeliverLastPolicy,
AckPolicy: nats.AckExplicitPolicy,
AckWait: 10 * time.Second,
ReplayPolicy: nats.ReplayInstantPolicy,
MaxDeliver: 5,
MaxWaiting: 128,
}
_, err := js.AddConsumer(stream, cconf)
if err != nil {
return err
}
_, err = js.PullSubscribe(queueName, durable,
nats.Bind(stream, durable),
nats.ManualAck())
return err
}
func publish(js nats.JetStreamContext, queue string) error {
_, err := js.PublishMsg(&nats.Msg{
Subject: stream + "." + queue,
})
return err
}
func getQueueInfo(nc *nats.Conn, queueName string) (*api.ConsumerInfo, error) {
jsm, err := jsm.New(nc, jsm.WithTimeout(10*time.Second))
if err != nil {
return nil, errors.Wrap(err, "failed to create jsm client")
}
// we use load consumer to detect the existence
con, err := jsm.LoadConsumer(stream, stream+"_"+queueName)
if err != nil {
return nil, errors.Wrap(err, "failed to retreieve consumer")
}
state, err := con.LatestState()
if err != nil {
return nil, errors.Wrap(err, "failed to load consumer state")
}
return &state, nil
}
func purge(nc *nats.Conn, queue string) error {
jsm, err := jsm.New(nc, jsm.WithTimeout(10*time.Second))
if err != nil {
return errors.Wrap(err, "failed to create jsm client")
}
sm, err := jsm.LoadStream(stream)
if err != nil {
return errors.Wrap(err, "failed to load stream")
}
err = sm.Purge(&api.JSApiStreamPurgeRequest{
Subject: stream + "." + queue,
})
return err
}
func TestNatsPurge(t *testing.T) {
ns := RunBasicJetStreamServer()
nc, js, err := initConnection(ns.ClientURL())
assert.Nil(t, err)
defer ns.Shutdown()
for i := 0; i < 2; i++ {
name := fmt.Sprintf("topic-%d", i)
assert.Nil(t, addConsumer(js, name))
assert.Nil(t, publish(js, name))
state, err := getQueueInfo(nc, name)
log.Infof("%+v", state)
assert.Nil(t, err)
// we expect 1 message on each queue
assert.Equal(t, uint64(1), state.NumPending)
}
assert.Nil(t, purge(nc, "topic-0"))
{
state, err := getQueueInfo(nc, "topic-0")
assert.Nil(t, err)
// we expect 0 message on purged consumer
assert.Equal(t, uint64(0), state.NumPending)
}
{
state, err := getQueueInfo(nc, "topic-1")
assert.Nil(t, err)
// we expect 1 message on the unpurged consumer
assert.Equal(t, uint64(1), state.NumPending)
}
}
```
#### Versions of `nats-server` and affected client libraries used:
v2.6.1
#### OS/Container environment:
Linux
#### Steps or code to reproduce the issue:
1. Create stream `FOO.*`
2. Create durable consumers listening on `FOO.adam` and `FOO.eve`
3. Publish messages to `FOO.adam` and `FOO.eve`
4. Observe pending messages on each are non-zero
5. Purge stream on filtered subject `FOO.adam`
6. Observe only adam's messages are purged as per the nats-server DV logs
7. Observe pending messages on both `FOO.adam` and `FOO.eve` are now 0
#### Expected result:
Purging a stream on filtered subject `FOO.adam` should not reset consumers listening on `FOO.eve`
#### Actual result:
Purging a stream on filtered subject `FOO.adam` resets offsets of consumers listening on `FOO.eve`
|
https://github.com/nats-io/nats-server/issues/2607
|
https://github.com/nats-io/nats-server/pull/2617
|
03d48cf774388c253d223de90b77734ef99d3f88
|
8cf09e962716df67a0e480de61906dba793bba7d
| 2021-10-11T06:10:21Z |
go
| 2021-10-12T15:14:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,599 |
["server/jetstream.go", "server/jetstream_test.go", "server/opts.go"]
|
JetStream resource limits exceeded for server
|
I get the message `JetStream resource limits exceeded for server` with a config like this:
```bash
#!/bin/bash
cd ~/nats-server
cat <<-EOF > /tmp/natsstrJS.cf
port: 4223
http_port: 8223
max_payload: 10485760
server_name: "xxxcluster"
jetstream: {
storedir: "/tmp/xxxdata"
max_memory_store: 1073741824
max_file_store: 10737418240
}
EOF
./nats-server -c /tmp/natsstrJS.cf
```
It's been running for a day, adding data at a steady rate.
Once this message occurs, if I stop and start NATS, the message reappears and the server seems to reload all the messages from the filestore back into memory.
I have configured the stream like this:
```
_, err = js.AddStream(&nats.StreamConfig{
Name: "ORDERS",
Subjects: []string{"ORDERS.*"},
})
```
I'd like to keep my messages around for a day and discard them after that.
My `nats-streaming-server` config looks like this and never hits this issue:
```bash
#!/bin/bash
OS=`uname`
if [ $OS == "Darwin" ]; then
cd ~/nats-streaming-server-v0.12.0-darwin-amd64
cat <<- EOF > /tmp/natsstr.cfg
http_port: 8222
max_payload: 10485760
streaming: {
cluster_id: "xxx-cluster"
store: "file"
dir: "/temp/xxxdata"
store_limits: {
max_age: "24h"
max_channels: 10000
channels : {
query.>: {
max_inactivity: "5m"
}
alert.>: {
max_inactivity: "5m"
}
temp.>: {
max_inactivity: "5m"
}
}
}
}
EOF
# ./nats-streaming-server -sc /tmp/natsstr.cfg
nohup ./nats-streaming-server -sc /tmp/natsstr.cfg > ./nats.log 2>&1 &
fi
```
How do I achieve the same configuration (I know the channel-specific settings aren't there) for the rest?
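For the retention part, the equivalent of `max_age: "24h"` can be set on the stream itself; a sketch extending the `AddStream` call above (the channel-style `max_inactivity` has no direct stream equivalent):
```go
_, err = js.AddStream(&nats.StreamConfig{
	Name:     "ORDERS",
	Subjects: []string{"ORDERS.*"},
	Storage:  nats.FileStorage,
	MaxAge:   24 * time.Hour,  // keep messages around for a day
	Discard:  nats.DiscardOld, // drop the oldest messages at the limits
})
```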
|
https://github.com/nats-io/nats-server/issues/2599
|
https://github.com/nats-io/nats-server/pull/2618
|
433a3e7c2b264b74ed1a7ab422a16a64a1f0a216
|
adf50f261fa027e4ffe7036d7d3ac1059ee53e5c
| 2021-10-06T23:37:19Z |
go
| 2021-10-12T21:17:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,588 |
["server/consumer.go", "server/jetstream.go", "server/jetstream_test.go"]
|
Race condition during tests
|
## Defect
Some of the CLI tests very infrequently trigger this race condition:
```
==================
WARNING: DATA RACE
Write at 0x00c004ecc668 by goroutine 577:
github.com/nats-io/nats-server/v2/server.(*Server).shutdownEventing()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/events.go:1187 +0x1c4
github.com/nats-io/nats-server/v2/server.(*Server).Shutdown()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:1772 +0x84
github.com/nats-io/jsm%2ego_test.TestStream_IsSourced()
/home/runner/work/jsm.go/src/github.com/nats-io/jsm.go/streams_test.go:792 +0x87f
testing.tRunner()
/opt/hostedtoolcache/go/1.16.8/x64/src/testing/testing.go:1193 +0x202
Previous read at 0x00c004ecc668 by goroutine 428:
github.com/nats-io/nats-server/v2/server.(*Server).eventsEnabled()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/events.go:545 +0xb3
github.com/nats-io/nats-server/v2/server.(*consumer).subscribeInternal()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:866 +0x79
github.com/nats-io/nats-server/v2/server.(*consumer).setLeader()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:766 +0x2ab
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAssignment()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:673 +0x1d44
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamCreateRequest()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:1272 +0xa66
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamCreateRequest-fm()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:1094 +0xd2
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:626 +0xc6e
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch-fm()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:602 +0xd2
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3147 +0x526
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4122 +0x870
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3945 +0x1004
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:1873 +0x84
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3145 +0x658
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4122 +0x870
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3611 +0xe84
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3458 +0xbe
github.com/nats-io/nats-server/v2/server.(*client).parse()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/parser.go:477 +0x3f44
github.com/nats-io/nats-server/v2/server.(*client).readLoop()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:1174 +0x824
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func1()
/home/runner/work/jsm.go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:2448 +0x5c
```
|
https://github.com/nats-io/nats-server/issues/2588
|
https://github.com/nats-io/nats-server/pull/2590
|
c9eeab1d0df2085006c6fdaa2e647f213eafb2b5
|
74988e68f08f4514a76d8f9ed7f24013843f8e5a
| 2021-09-30T13:02:04Z |
go
| 2021-09-30T19:48:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,587 |
["server/mqtt.go", "server/mqtt_test.go"]
|
NATS server debug logs are not verbose
|
I have deployed NATS 2.6.1 with MQTT and I observed that the debug logs are not verbose.
For example, I enabled debug logs to identify which client is continuously opening and closing a connection:
```
[114] 2021/09/30 10:30:41.386108 [DBG] 10.244.0.1:53288 - mid:37739 - Client connection closed: Client Closed
[114] 2021/09/30 10:30:41.838201 [DBG] 10.244.0.1:49586 - mid:37740 - Client connection created
[114] 2021/09/30 10:30:41.959710 [DBG] 10.244.0.1:49586 - mid:37740 - Client connection closed: Client Closed
[114] 2021/09/30 10:30:51.255027 [DBG] 10.244.0.1:53798 - mid:37741 - Client connection created
[114] 2021/09/30 10:30:51.401562 [DBG] 10.244.0.1:53798 - mid:37741 - Client connection closed: Client Closed
[114] 2021/09/30 10:30:51.847412 [DBG] 10.244.0.1:49702 - mid:37742 - Client connection created
[114] 2021/09/30 10:30:51.981592 [DBG] 10.244.0.1:49702 - mid:37742 - Client connection closed: Client Closed
[114] 2021/09/30 10:31:01.251372 [DBG] 10.244.0.1:54290 - mid:37743 - Client connection created
```
The logs above give no hint; it would be good to have the client ID in the debug logs, and enabling trace would not be a good idea :(
Another example: I restarted the MQTT clients and started getting the error below, and the clients were not able to connect. With no hint to go on, I restarted NATS and that fixed the issue :(
```
[7] 2021/09/29 13:38:31.210414 [ERR] 10.244.0.1:58548 - mid:24322 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.242765 [ERR] 10.244.0.1:52434 - mid:24324 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.246050 [ERR] 10.244.0.1:52436 - mid:24325 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.249087 [ERR] 10.244.0.1:52438 - mid:24326 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.253428 [ERR] 10.244.0.1:52440 - mid:24327 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.288618 [ERR] 10.244.0.1:52138 - mid:24328 - wrong last sequence: 0 (10071)
[7] 2021/09/29 13:38:31.310963 [ERR] 10.244.0.1:52040 - mid:24329 - wrong last sequence: 0 (10071)
```
It would be nice to have more detailed descriptions for errors.
|
https://github.com/nats-io/nats-server/issues/2587
|
https://github.com/nats-io/nats-server/pull/2598
|
3f12216fcc349ae0f7af779c6a4647209fbbe9ab
|
208146aade8977377cd8382a2eff6f654cf6905a
| 2021-09-30T10:51:31Z |
go
| 2021-10-06T19:12:55Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,551 |
["server/jetstream.go", "server/jetstream_test.go"]
|
JetStream: Invalid consumer filter subject on server restart in some cases
|
## Defect
A consumer C1 is created and functions normally, but upon server restart it fails to be recovered with the following warning:
```text
[184164] 2021/09/20 21:12:07.346160 [INF] Recovering 1 consumers for stream - "Q1"
[184164] 2021/09/20 21:12:07.346963 [WRN] Error adding consumer: consumer filter subject is not a valid subset of the interest subjects (10093)
```
C1 can be created successfully again, but the above error will always reproduce on the next server restart.
The issue occurs with C1 on replica stream Q1 (which sources upstream S1). Creating C2 with an identical filter subject but on the non-replica stream S1 works and does not have the restart issue.
The consumer filter subject `message.>` is in fact a possible superset of upstream S1 subject `message.s1.*`.
[nats-server -DV output](https://github.com/nats-io/nats-server/files/7200175/output.txt)
#### Versions of `nats-server` and affected client libraries used:
* nats-server 2.5.0 and 2.5.1-beta.1 both reproduce.
* nats-cli 0.0.26 (nats con add)
#### OS/Container environment:
Linux, non-Dockerized nats-server
#### Steps or code to reproduce the issue:
With streams S1, Q1 (sourcing S1) and consumer C1 (on Q1) as [tbeets/nats-replica](https://github.com/tbeets/nats-replica), C1 creates and functions normally. On server restart, C1 fails to be restored with:
```text
[184164] 2021/09/20 21:12:07.346160 [INF] Recovering 1 consumers for stream - "Q1"
[184164] 2021/09/20 21:12:07.346963 [WRN] Error adding consumer: consumer filter subject is not a valid subset of the interest subjects (10093)
```
#### Expected result:
Consumer creation with a filter subject that is not a valid subset of stream subjects should consistently fail or succeed.
#### Actual result:
In the C1 case (consumer on a replica stream), initial creation succeeds, but server restore on restart fails.
In the C2 case (consumer on a non-replica stream), initial creation and server restore both succeed.
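For reference, the S1/Q1/C1 shape sketched with the nats.go client (`js` is assumed to be a connected `nats.JetStreamContext`; names mirror the linked repo):
```go
// Upstream stream with a narrow subject space.
if _, err := js.AddStream(&nats.StreamConfig{
	Name:     "S1",
	Subjects: []string{"message.s1.*"},
}); err != nil {
	log.Fatal(err)
}
// Replica stream sourcing S1; it has no subjects of its own.
if _, err := js.AddStream(&nats.StreamConfig{
	Name:    "Q1",
	Sources: []*nats.StreamSource{{Name: "S1"}},
}); err != nil {
	log.Fatal(err)
}
// Consumer whose filter subject is wider than the sourced subjects:
// creation succeeds, but the restore on server restart fails.
if _, err := js.AddConsumer("Q1", &nats.ConsumerConfig{
	Durable:       "C1",
	FilterSubject: "message.>",
	AckPolicy:     nats.AckExplicitPolicy,
}); err != nil {
	log.Fatal(err)
}
```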
|
https://github.com/nats-io/nats-server/issues/2551
|
https://github.com/nats-io/nats-server/pull/2554
|
0dd4e9fe6a34feddebd8e614f173476316e537fd
|
29037a4f5cd6729b6a166af4cf4104cbf02e0f3c
| 2021-09-21T04:55:24Z |
go
| 2021-09-21T16:15:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,612 |
["server/websocket.go", "server/websocket_test.go"]
|
iOS 15 WebSocket Connection Established
|
Hello there,
I did the iOS 15 update.
I am getting socket connection error.
Is this issue related to you?
Or is it related to iOS 15?
Thanks.

|
https://github.com/nats-io/nats-server/issues/2612
|
https://github.com/nats-io/nats-server/pull/2613
|
28709c653dcfa64c52c10b475047ef4f4717d094
|
fe0836ac4416df8d54e42797d5bbf706dea0c490
| 2021-09-20T22:28:03Z |
go
| 2021-10-12T19:14:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,548 |
["server/filestore.go", "server/jetstream_cluster.go", "server/memstore.go", "server/norace_test.go", "server/store.go"]
|
In memory storage memory leak
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Server: docker.io/synadia/nats-server:nightly-20210920
Go client: v1.12.0
#### OS/Container environment:
Docker
#### Steps or code to reproduce the issue:
docker-compose.yaml for a clustered-mode server:
```yaml
version: '3'
services:
n1:
container_name: n1
image: docker.io/synadia/nats-server:nightly-20210920
entrypoint: /bin/nats-server
command: "--config /config/jetstream.conf --server_name S1 --profile 6060"
networks:
- nats
ports:
- 4222:4222
- 6061:6060
volumes:
- ./config:/config
- ./persistent-data/server-n1/:/data/nats-server/jetstream
n2:
container_name: n2
image: docker.io/synadia/nats-server:nightly-20210920
entrypoint: /bin/nats-server
command: "--config /config/jetstream.conf --server_name S2 --profile 6060"
networks:
- nats
ports:
- 4223:4222
- 6062:6060
volumes:
- ./config:/config
- ./persistent-data/server-n2/:/data/nats-server/jetstream
n3:
container_name: n3
image: docker.io/synadia/nats-server:nightly-20210920
entrypoint: /bin/nats-server
command: "--config /config/jetstream.conf --server_name S3 --profile 6060"
networks:
- nats
ports:
- 4224:4222
- 6063:6060
volumes:
- ./config:/config
- ./persistent-data/server-n3/:/data/nats-server/jetstream
networks:
nats: {}
```
Test case
```go
js.AddStream(&nats.StreamConfig{
Name: "memory-leak",
Subjects: []string{"memory-leak"},
Retention: nats.LimitsPolicy,
MaxMsgs: 1000,
Discard: nats.DiscardOld,
MaxAge: time.Minute,
Storage: nats.MemoryStorage,
Replicas: 3,
})
_, _ = js.QueueSubscribe("memory-leak", "q1", func(msg *nats.Msg) {
fmt.Println("recv")
time.Sleep(1 * time.Second)
msg.Ack()
})
msg := []byte("NATS is a connective technology that powers modern distributed systems.")
for {
_, err := js.Publish("memory-leak", msg)
if err != nil {
panic(err)
}
fmt.Println("pub")
}
```
Stream report
```
╭──────────────────────────────────────────────────────────────────────────────────────╮
│ Stream Report │
├─────────────┬─────────┬───────────┬──────────┬────────┬──────┬─────────┬─────────────┤
│ Stream │ Storage │ Consumers │ Messages │ Bytes │ Lost │ Deleted │ Replicas │
├─────────────┼─────────┼───────────┼──────────┼────────┼──────┼─────────┼─────────────┤
│ memory-leak │ Memory │ 1 │ 1,000 │ 96 KiB │ 0 │ 0 │ S1, S2*, S3 │
╰─────────────┴─────────┴───────────┴──────────┴────────┴──────┴─────────┴─────────────╯
```
Consumer report
```
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Consumer report for memory-leak with 1 consumers │
├──────────┬──────┬────────────┬──────────┬─────────────┬─────────────┬─────────────┬───────────┬─────────────┤
│ Consumer │ Mode │ Ack Policy │ Ack Wait │ Ack Pending │ Redelivered │ Unprocessed │ Ack Floor │ Cluster │
├──────────┼──────┼────────────┼──────────┼─────────────┼─────────────┼─────────────┼───────────┼─────────────┤
│ q1 │ Push │ Explicit │ 30.00s │ 1,000 │ 0 │ 0 │ 201,486 │ S1, S2*, S3 │
╰──────────┴──────┴────────────┴──────────┴─────────────┴─────────────┴─────────────┴───────────┴─────────────╯
```
Memory profile

#### Expected result:
Memory doesn't increase. Actual stream size is only 96KiB, but keeps increasing.
#### Actual result:
It looks like old messages are not removed from the memory store, and it has something to do with a consumer that is not able to keep up with the message rate. I tried publishing messages without active consumers; in that case there is no problem.
Maybe it is related to a bug around `return ms.removeMsg(ms.state.FirstSeq, false)`, as messages are removed by the `ms.state.FirstSeq` index.
|
https://github.com/nats-io/nats-server/issues/2548
|
https://github.com/nats-io/nats-server/pull/2550
|
e69c6164a9db4553c52c2656cfb63d71ae5ec1be
|
a6f95e988689ce2f37af658258678e6e658cb923
| 2021-09-20T20:22:38Z |
go
| 2021-09-21T15:06:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,544 |
["server/const.go", "server/filestore.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/jetstream_test.go", "server/raft.go", "server/stream.go"]
|
nats-server slice bounds out of range
|
```
[2129626] 2021/09/19 20:03:14.358456 [INF] Starting nats-server
[2129626] 2021/09/19 20:03:14.358544 [INF] Version: 2.5.1-beta.3
[2129626] 2021/09/19 20:03:14.358551 [INF] Git: [not set]
[2129626] 2021/09/19 20:03:14.358556 [DBG] Go build: go1.17.1
[2129626] 2021/09/19 20:03:14.358562 [INF] Name: nats-server-post
[2129626] 2021/09/19 20:03:14.358568 [INF] Node: qMvPUrcA
[2129626] 2021/09/19 20:03:14.358574 [INF] ID: NDI6JPZ3XDCWONBGNG3ZH6H73L346TGMIS775JTNERPGSJTW5D7MA4AM
[2129626] 2021/09/19 20:03:14.358584 [INF] Using configuration file: /usr/local/etc/nats-server-post.conf
[2129626] 2021/09/19 20:03:14.358640 [DBG] Created system account: "$SYS"
[2129626] 2021/09/19 20:03:14.359216 [INF] Starting JetStream
[2129626] 2021/09/19 20:03:14.359395 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[2129626] 2021/09/19 20:03:14.359406 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[2129626] 2021/09/19 20:03:14.359411 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[2129626] 2021/09/19 20:03:14.359416 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[2129626] 2021/09/19 20:03:14.359421 [INF]
[2129626] 2021/09/19 20:03:14.359426 [INF] https://docs.nats.io/jetstream
[2129626] 2021/09/19 20:03:14.359430 [INF]
[2129626] 2021/09/19 20:03:14.359438 [INF] ---------------- JETSTREAM ----------------
[2129626] 2021/09/19 20:03:14.359450 [INF] Max Memory: 10.00 MB
[2129626] 2021/09/19 20:03:14.359456 [INF] Max Storage: 900.00 MB
[2129626] 2021/09/19 20:03:14.359462 [INF] Store Directory: "/var/nats/jetstream"
[2129626] 2021/09/19 20:03:14.359467 [INF] -------------------------------------------
[2129626] 2021/09/19 20:03:14.359552 [DBG] Exports:
[2129626] 2021/09/19 20:03:14.359558 [DBG] $JS.API.>
[2129626] 2021/09/19 20:03:14.359597 [DBG] Enabled JetStream for account "$G"
[2129626] 2021/09/19 20:03:14.359608 [DBG] Max Memory: -1 B
[2129626] 2021/09/19 20:03:14.359616 [DBG] Max Storage: -1 B
[2129626] 2021/09/19 20:03:14.359635 [DBG] Recovering JetStream state for account "$G"
[2129626] 2021/09/19 20:03:14.420453 [INF] Restored 168 messages for stream "aikodb"
[2129626] 2021/09/19 20:03:14.420600 [INF] Recovering 5 consumers for stream - "aikodb"
[2129626] 2021/09/19 20:03:14.523977 [INF] Restored 7,920 messages for stream "aikopsender"
[2129626] 2021/09/19 20:03:14.524084 [INF] Recovering 1 consumers for stream - "aikopsender"
[2129626] 2021/09/19 20:03:14.524312 [DBG] JetStream state for account "$G" recovered
[2129626] 2021/09/19 20:03:14.524782 [INF] Starting http monitor on 192.168.109.177:4223
[2129626] 2021/09/19 20:03:14.524879 [INF] Listening for client connections on 192.168.109.177:4222
[2129626] 2021/09/19 20:03:14.524896 [INF] Server is ready
Sep 19 20:02:13 pagasarri systemd[1]: Started nats-server-post.
Sep 19 20:03:10 pagasarri nats-server[2129551]: panic: runtime error: slice bounds out of range [2284012:0]
Sep 19 20:03:10 pagasarri nats-server[2129551]: goroutine 38 [running]:
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*msgBlock).cacheLookup(0xc000164b60, 0x224e7b)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/filestore.go:3179 +0x525
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*msgBlock).fetchMsg(0xc000164b60, 0x8)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/filestore.go:3104 +0x95
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*fileStore).msgForSeq(0xc000178280, 0x0)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/filestore.go:3239 +0x174
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*fileStore).LoadMsg(0xc000026cf0, 0xc0000a8384)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/filestore.go:3311 +0x19
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*consumer).getNextMsg(0xc000080380)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/consumer.go:2047 +0x413
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*consumer).processNextMsgReq.func2(0xc00005ea00)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/consumer.go:1913 +0xfd
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*consumer).processNextMsgReq(0xc000080380, 0xc0000e1140, 0xc000609300, 0x1d, {0x0, 0xc0000a8300}, {0xc0000d22a0, 0x1d}, {0xc0000a83d5, 0x20, ...})
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/consumer.go:1935 +0x712
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000609300, 0xc00016d8c0, 0x21a176, {0xc0000a8384, 0x21d0f4, 0x7c}, {0xc0000a83b3, 0x220021, 0x4d}, {0xc00060a781, ...}, ...)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/client.go:3147 +0xbb0
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000609300, 0xc000613938, 0xc000605c20, {0xc0000a83d5, 0x22, 0x2b}, {0x0, 0x0, 0xc000072120}, {0xc0000a8384, ...}, ...)
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/client.go:4122 +0x9af
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc000609300, {0xc0000a83d5, 0x2, 0x2b})
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/client.go:3611 +0xa71
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc000609300, {0xc0000a83d5, 0x4f, 0x6091fc3e3c019})
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/client.go:3458 +0x3d
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).parse(0xc000609300, {0xc0000a8380, 0x77, 0x95e200})
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/parser.go:477 +0x21aa
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc000609300, {0x0, 0x0, 0x0})
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/client.go:1174 +0xe3a
Sep 19 20:03:10 pagasarri nats-server[2129551]: github.com/nats-io/nats-server/v2/server.(*Server).createClient.func1()
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/server.go:2448 +0x29
Sep 19 20:03:10 pagasarri nats-server[2129551]: created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
Sep 19 20:03:10 pagasarri nats-server[2129551]: /usr/src/nats-server/server/server.go:2867 +0x87
Sep 19 20:03:10 pagasarri systemd[1]: nats-server-post.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 19 20:03:10 pagasarri systemd[1]: nats-server-post.service: Failed with result 'exit-code'.
```
|
https://github.com/nats-io/nats-server/issues/2544
|
https://github.com/nats-io/nats-server/pull/2545
|
3bfeff67f26b0903af86e2a8cec8c686e79f32d5
|
83825a2ae527c4ae62a6c66b4fdb5076f2ecef43
| 2021-09-19T18:11:47Z |
go
| 2021-09-20T15:59:56Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,528 |
["server/client.go", "server/const.go", "server/events.go", "server/events_test.go", "server/jetstream.go", "server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/mqtt.go"]
|
Adding an already defined stream behaves inconsistently depending if connected to a single server or a cluster
|
Currently, calling js.AddStream for a stream that has already been defined (where the attributes of the stream you are trying to create match the attributes of the already defined stream) behaves inconsistently:
- If the client application is connected to a single nats-server, the call behaves as if it is idempotent (i.e. returns an error only if the stream you are trying to add has a mismatch of attribute(s) with the already defined stream)
- If the client application is connected to a cluster of nats-servers, the call behaves as if it is _not_ idempotent (i.e. always returns an error if there's a stream with the same name already defined, regardless of whether the attributes match or not)
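A sketch of the inconsistency with the nats.go client (`js` is assumed to be a connected `nats.JetStreamContext`):
```go
cfg := &nats.StreamConfig{
	Name:     "ORDERS",
	Subjects: []string{"ORDERS.*"},
}
if _, err := js.AddStream(cfg); err != nil {
	log.Fatal(err)
}
// Same name, identical config:
//   - against a single server: succeeds (treated as idempotent)
//   - against a cluster: fails with a "stream name already in use" error
if _, err := js.AddStream(cfg); err != nil {
	log.Println("second AddStream:", err)
}
```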
|
https://github.com/nats-io/nats-server/issues/2528
|
https://github.com/nats-io/nats-server/pull/2535
|
2f579abf0a0d77e717f0e4813306a2e65e4311fe
|
decbae30aeb6508134a4042809a5afa8d2cedbe6
| 2021-09-14T17:36:05Z |
go
| 2021-09-16T15:49:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,525 |
["server/errors.json", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/jetstream_errors_generated.go", "server/stream.go"]
|
Report of large message headers causing JetStream issues.
| null |
https://github.com/nats-io/nats-server/issues/2525
|
https://github.com/nats-io/nats-server/pull/2526
|
9cdab0682aef4d93274739a9bc30b38118d3252b
|
20574ffaad9b842ce770363505fbd6c456c5a48d
| 2021-09-14T00:26:43Z |
go
| 2021-09-14T13:41:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,520 |
["server/errors.json", "server/jetstream_api.go", "server/jetstream_cluster_test.go", "server/jetstream_errors_generated.go", "server/jetstream_test.go"]
|
Jetstream: Require Heartbeat when Flow Control is Set or provide alternative
|
## Feature Request
1A. Require `idle_heartbeat` when `flow_control` is requested when creating a consumer; the server returns an error if `flow_control` is set without `idle_heartbeat`.
-or -
1B. Default `idle_heartbeat` when `flow_control` is requested but `idle_heartbeat` is not supplied when creating a consumer.
-or-
2. Provide an api call to manually request a heartbeat status message
#### Use Case:
Currently, if a consumer is created with flow control set but heartbeat not, and a flow control message is missed, there is no way to recover without restarting the server. The consumer simply stops getting messages.
#### Proposed Change:
Implement option (1A or 1B) and/or 2
#### Who Benefits From The Change(s)?
Consumers can recover from missed flow control.
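For illustration, the two options pair like this in the nats.go client today (a sketch; the proposal is about the server rejecting or defaulting the heartbeat when it is omitted):
```go
sub, err := js.Subscribe("ORDERS.*", func(m *nats.Msg) {
	m.Ack()
},
	nats.Durable("processor"),
	nats.EnableFlowControl(),
	// Without this, a missed flow-control message can stall the consumer
	// permanently; this issue proposes making the pairing mandatory.
	nats.IdleHeartbeat(5*time.Second),
)
if err != nil {
	log.Fatal(err)
}
defer sub.Unsubscribe()
```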
|
https://github.com/nats-io/nats-server/issues/2520
|
https://github.com/nats-io/nats-server/pull/2533
|
decbae30aeb6508134a4042809a5afa8d2cedbe6
|
47359c453e9fb4d5b604e6aeb9efd051b4128cf7
| 2021-09-13T18:52:18Z |
go
| 2021-09-16T16:20:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,514 |
["server/client.go", "server/websocket.go", "server/websocket_test.go"]
|
Nginx Proxy WebSocket Client IP Address
|
What should I do so that the IP addresses of clients connecting through the Nginx proxy are forwarded correctly?
Architectural
```
Server Private IP Address: 10.0.0.15
Nats Server WebSocket Port: 4025
Nginx Proxy External IP Access
-> Nats Server (WebSocket Port) Private IP
```
Nats-Client
```
Nats-Ws 1.3.0
```
Nginx Config
```
upstream natsNodes {
ip_hash;
server 10.0.0.15:4025;
}
server {
listen 80;
listen 443 ssl;
ssl_certificate /path/dynamic.pem;
ssl_certificate_key /path/dynamic.key;
server_name n1.dynamic.com;
location / {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_redirect off;
proxy_pass http://natsNodes;
}
}
```
Nats Node Config
```
# Server Listen
listen: 10.0.0.15:4029
authorization: {
timeout: 5
ADMIN_PERMISSION = {
publish = ">"
subscribe = ">"
}
USER_PERMISSION = {
publish = "request"
subscribe = ">"
}
users = [
{user: admin, password: OwnAdminPassword, permissions: $ADMIN_PERMISSION}
{user: dynamicUser, password: nil, permissions: $USER_PERMISSION}
]
}
# Cluster
cluster {
listen: 10.0.0.15:6922
name: "Cluster Name"
authorization {
user: cUser
password: ClusterPasswords
timeout: 0.75
}
routes [
# nats-route://cUser:[email protected]:6922
nats-route://cUserr:[email protected]:6922
]
cluster_advertise: 10.0.0.15
connect_retries: 5
}
#Web Socket
websocket {
listen: "10.0.0.15:4025"
no_tls: true
compression: true
}
# Monitoring Port
http_port: 7569
# Configs
# 1MB
max_connections: 1048576
# 5MB
max_payload: 5242880
# 64KB
max_control_line: 65536
# 2 mins
ping_interval: "2m"
write_deadline: "10s"
# Logging
debug: false
trace: false
logtime: true
logfile_size_limit: 1GB
log_file: "/var/log/nats-server.log"
```
/connz output
```
{
"cid": 439,
"kind": "Client",
"type": "websocket",
"ip": "10.0.0.15",
"port": 35216,
"start": "2021-09-12T17:47:00.393533655Z",
"last_activity": "2021-09-12T21:18:02.393267634Z",
"rtt": "147.964595ms",
"uptime": "3h31m15s",
"idle": "13s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 1428,
"in_bytes": 0,
"out_bytes": 39834554,
"subscriptions": 3,
"lang": "nats.ws",
"version": "1.3.0"
},
```
|
https://github.com/nats-io/nats-server/issues/2514
|
https://github.com/nats-io/nats-server/pull/2734
|
893b4154348891458954c9595922d0faaf131dbc
|
67c345270cd2b9ba8b62afc2eebee473199b07e2
| 2021-09-12T21:44:56Z |
go
| 2021-12-06T23:11:17Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,510 |
["server/websocket.go", "server/websocket_test.go"]
|
Server websocket.go crash
|
Hello there,
I have been using version 2.4.0 for 2 days.
The error occurred twice; I am sending the error output.
```
systemctl status nats-server response:
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: /root/nats-server-2.4.0/server/websocket.go:275 +0x6a6
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc024660c80, 0x0, 0x0, 0x0)
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: /root/nats-server-2.4.0/server/client.go:1142 +0x48e
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: github.com/nats-io/nats-server/v2/server.(*Server).createWSClient.func1()
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: /root/nats-server-2.4.0/server/websocket.go:1219 +0x3b
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
Sep 11 18:15:50 nats-2.system.com nats-server[7961]: /root/nats-server-2.4.0/server/server.go:2867 +0xc5
```
Log error line:
`wid:1087296 - "v1.3.0:nats.ws" - Slow Consumer Detected: WriteDeadline of 5s exceeded with 844 chunks of 1723747 total bytes`
|
https://github.com/nats-io/nats-server/issues/2510
|
https://github.com/nats-io/nats-server/pull/2519
|
2495180d766cccc232f99be7e46b05c3007dae71
|
6e705c2decb7120ac20202ea63bd40ec27f4f794
| 2021-09-11T15:49:19Z |
go
| 2021-09-13T20:45:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,500 |
["server/jetstream_cluster_test.go", "server/leafnode.go"]
|
automatically inserted LN export/import deny rules for JetStream keep accumulating on LN reconnects
|
The user confirmed this is happening on reconnect and that the line below causes it
https://github.com/nats-io/nats-server/blob/874c79fe411f31291d7196b3ddd3500c8c82283b/server/leafnode.go#L1069
We need to make sure that these functions are applied only once per remote, so that reconnect handling stays idempotent:
```go
s.addInJSDenyAll(remote)
s.addInJSDeny(remote)
s.addInJSDenyExport(remote)
acc.AddMapping(src, jsAllAPI)
```
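One fix direction is to deduplicate before appending; a hypothetical helper sketch (not the server's actual code):
```go
// addDenyOnce appends subj to denies only if it is not already present,
// so repeated reconnect handling cannot accumulate duplicate entries.
func addDenyOnce(denies []string, subj string) []string {
	for _, d := range denies {
		if d == subj {
			return denies
		}
	}
	return append(denies, subj)
}
```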
Original user observation (presumably from our monitoring endpoints)
```json
"deny": {
"exports": [
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>"
],
"imports": [
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>",
"$JSC.>",
"$NRG.>",
"$JS.API.>"
]
```
|
https://github.com/nats-io/nats-server/issues/2500
|
https://github.com/nats-io/nats-server/pull/2502
|
309856da4e2f159a3a8419d30c975b10e52e3bdf
|
d29acb8bfd9f4518ce24c267b40b9ef5e228a48e
| 2021-09-08T22:12:23Z |
go
| 2021-09-09T00:46:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,494 |
["server/jetstream_test.go", "server/mqtt.go", "server/stream.go"]
|
Ability to publish to a kv style topic only if empty
|
## Feature Request
#### Use Case:
Today we have `Nats-Expected-Last-Subject-Sequence`, which rejects a publish to a stream unless the last message on that subject had the expected sequence.
Along the same lines, it would be useful to ensure only the first client can write a message when the subject is empty.
We discussed maybe (ab)using `Nats-Expected-Last-Subject-Sequence:0` but decided not to, so maybe `Nats-Expected-Empty-Subject:1`
A sample use case could be for example registering a schema that relates to a specific topic, the first publisher to this topic should register the subject but that's true only ever once.
This setting should probably only be accepted on streams with `max_msgs_per_subject > 0`.
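For comparison, the existing per-subject header is used like this (a sketch with the nats.go client, `js` assumed connected; `Nats-Expected-Empty-Subject` above is only a proposed name):
```go
m := nats.NewMsg("schemas.orders")
m.Data = []byte(`{"schema": "..."}`)
// Existing mechanism: only accept this publish if the last message on
// this subject had stream sequence 12.
m.Header.Set("Nats-Expected-Last-Subject-Sequence", "12")
if _, err := js.PublishMsg(m); err != nil {
	log.Println("rejected:", err)
}
```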
|
https://github.com/nats-io/nats-server/issues/2494
|
https://github.com/nats-io/nats-server/pull/2506
|
6fa3a0ecc8811341696e1e1d4727e4427e929773
|
bae93c44ef5e2703149002e72a79585c306640e1
| 2021-09-07T17:50:16Z |
go
| 2021-09-09T19:42:33Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,488 |
["server/filestore.go", "server/filestore_test.go"]
|
Consumers stops receiving messages
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
Nats server version
```
[83] 2021/09/04 18:51:12.239432 [INF] Starting nats-server
[83] 2021/09/04 18:51:12.239488 [INF] Version: 2.4.0
[83] 2021/09/04 18:51:12.239494 [INF] Git: [219a7c98]
[83] 2021/09/04 18:51:12.239496 [DBG] Go build: go1.16.7
[83] 2021/09/04 18:51:12.239517 [INF] Name: NBVE7O7DMRAZ63STC7Z644KHF5HJ6QQUGLZVGDIKEG32CFL2J6O2456M
[83] 2021/09/04 18:51:12.239533 [INF] ID: NBVE7O7DMRAZ63STC7Z644KHF5HJ6QQUGLZVGDIKEG32CFL2J6O2456M
[83] 2021/09/04 18:51:12.239605 [DBG] Created system account: "$SYS"
```
Go client version: `v1.12.0`
#### OS/Container environment:
GKE Kubernetes. Running nats js HA cluster. Deployed via nats helm chart.
#### Steps or code to reproduce the issue:
Stream configuration:
```yaml
apiVersion: jetstream.nats.io/v1beta1
kind: Stream
metadata:
name: agent
spec:
name: agent
subjects: ["data.*"]
storage: file
maxAge: 1h
replicas: 3
retention: interest
```
There are two consumers on this stream. Each runs as a queue subscriber in two services with 2 pod replicas each. Note that I don't care if a message is not processed, which is why the ack policy is none.
```go
// 2 pods for service A.
js.QueueSubscribe(
"data.received",
"service1_queue",
func(msg *nats.Msg) {},
nats.DeliverNew(),
nats.AckNone(),
)
// 2 pods for service B.
s.js.QueueSubscribe(
"data.received",
"service2_queue",
func(msg *nats.Msg) {},
nats.DeliverNew(),
nats.AckNone(),
)
```
#### Expected result:
Consumer receives messages.
#### Actual result:
Stream stats after few days:
```
agent │ File │ 3 │ 28,258 │ 18 MiB │ 0 │ 84 │ nats-js-0, nats-js-1*, nats-js-2
```
Consumers stats:
```
service1_queue │ Push │ None │ 0.00s │ 0 │ 0 │ 0 │ 60,756 │ nats-js-0, nats-js-1*, nats-js-2
service2_queue │ Push │ None │ 0.00s │ 0 │ 0 │ 8,193 / 28% │ 60,843 │ nats-js-0, nats-js-1*, nats-js-2
```
1. None of the nats server pods logs errors indicating any problem.
2. The unprocessed message count for the second consumer stays the same and doesn't decrease.
3. The only fix that helped was changing the second consumer's raft leader with `nats consumer cluster step-down`. But after some time the problem comes back.
4. There are active connections to the server. Checked with `nats server report connections`.
/cc @kozlovic @derekcollison
|
https://github.com/nats-io/nats-server/issues/2488
|
https://github.com/nats-io/nats-server/pull/2505
|
9e5526cc9d4ff17e99b65552c558bcc0e80d853d
|
6fa3a0ecc8811341696e1e1d4727e4427e929773
| 2021-09-04T19:16:10Z |
go
| 2021-09-09T17:22:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,469 |
["server/client.go", "server/leafnode.go", "server/leafnode_test.go", "server/sublist.go", "test/leafnode_test.go"]
|
Leaf Node credentials require subscribe permissions to publish
|
## Defect
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
Using Credentials to control access for Leaf Nodes, it appears it is now necessary to provide the Leaf Node credentials with permission to both Subscribe and Publish in order to Publish to a subject where a Subscriber already exists on the Hub Server. The problem appears to occur during Leaf connect processing, where Subscriptions aren't being propagated.
This appears similar to [issue 2454](https://github.com/nats-io/nats-server/issues/2454) but for the Leaf publishing messages to a subscriber on the Hub Server.
#### Versions of `nats-server` and affected client libraries used:
nats-server version 2.3.4
#### OS/Container environment:
Ubuntu 20.04
#### Steps or code to reproduce the issue:
See attached repro.txt for sample script to reproduce.
[repro.txt](https://github.com/nats-io/nats-server/files/7041787/repro.txt)
Server with a Leaf Node both in Operator mode using the same Operator and Account.
Leaf Node uses the LEAF credentials in its leafnodes->remotes section.
User SRV with no restrictions.
User LEAF with permissions to Publish only on "foo" and Subscribe only on "bar"
Start the Hub Server
User SRV performs a nats-sub on subject "foo" on the Server Node
Start Leaf Node Server
User LEAF performs a nats-pub on subject "foo" on the Leaf Node
#### Expected result:
Message should be received by the Subscriber on the Hub Server.
#### Actual result:
Message is not received by the Subscriber on the Hub Server.
Log message in Server.log indicates the Leaf Node requires Subscribe permission to receive the LS+ foo
`[DBG] 127.0.0.1:54098 - lid:2 - Not permitted to subscribe to foo on behalf of ABGFQWGDDETV2R5LARUIGBEXH2KS44FKBEE57D6SLV6MYOH6JEAY7CQ4/A`
Confirmed giving the LEAF credentials Subscribe permission on "foo" allows it to receive the message.
Also confirmed Subscriptions started after the Leaf node has connected work ok so this seems to be limited to the propagation of subscription on connect.
This seems like it could be another issue where Spoke and non-Spoke nodes require different permission checks?
```
leafnode.go:
func (s *Server) initLeafNodeSmapAndSendSubs(c *client)
.
.
.
for _, sub := range subs {
if !c.canSubscribe(string(sub.subject)) {
c.Debugf("Not permitted to subscribe to %s on behalf of %s/%s", string(sub.subject), accName, accNTag)
continue
}
```
|
https://github.com/nats-io/nats-server/issues/2469
|
https://github.com/nats-io/nats-server/pull/2470
|
be17afb0a50463780f542f27bbf8b928d718bb1c
|
5dfdac28ef2f9d1b0ac02da8b2a7b967e3de0a43
| 2021-08-24T20:04:12Z |
go
| 2021-08-25T20:04:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,454 |
["server/client.go", "test/leafnode_test.go"]
|
Leaf Node credentials require publish permissions to subscribe
|
## Defect
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
Using Credentials to control access for Leaf Nodes has changed between 2.1.9 and 2.2 and above. It appears it is now necessary to provide the Leaf Node credentials with permission to both Publish and Subscribe in order to Subscribe to a subject. This presents an issue for us because we don't want to distribute privileged credentials to the Leaf Nodes.
#### Versions of `nats-server` and affected client libraries used:
nats-server version 2.2.0 and above (verified up to 2.3.4)
#### OS/Container environment:
Ubuntu 20.04
#### Steps or code to reproduce the issue:
See attached repro.txt for sample script to reproduce. [repro.txt](https://github.com/nats-io/nats-server/files/7014969/repro.txt)
Server with a Leaf Node both in Operator mode using the same Operator and Account.
Leaf Node uses the LEAF credentials in its leafnodes->remotes section.
User SRV with no restrictions.
User LEAF with permissions to Publish only on "foo" and Subscribe only on "bar"
User SRV performs a nats-sub on subject "bar" on the Server Node (as a baseline)
User LEAF performs a nats-sub on subject "bar" on the Leaf Node (demonstrates missing message)
User SRV performs a nats-pub on subject "bar" on the Server Node.
#### Expected result:
Both Server Node and Leaf Node should receive the "bar" message.
#### Actual result:
Only Server Node receives the "bar" message.
Log message in Server.log indicating the Leaf Node requires Publish permission:
`[DBG] 127.0.0.1:54526 - lid:1 - Not permitted to publish to "bar"`
Also confirmed giving the LEAF credentials Publish permission on "bar" allows it to receive the message. This is not ideal though, as it would mean credentials exist on the Leaf Node that allow publishing to subjects it shouldn't be able to publish to.
With the following changes I had the results I expected, but I'm not sure of the larger ramifications:
```
diff --git a/server/client.go b/server/client.go
index 9fc10586..b43d6ce4 100644
--- a/server/client.go
+++ b/server/client.go
@@ -3056,9 +3056,9 @@ func (c *client) deliverMsg(sub *subscription, acc *Account, subject, reply, mh,
// Check if we are a leafnode and have perms to check.
if client.kind == LEAF && client.perms != nil {
- if !client.pubAllowedFullCheck(string(subject), true, true) {
+ if !client.canSubscribe(string(subject)) {
client.mu.Unlock()
- client.Debugf("Not permitted to publish to %q", subject)
+ client.Debugf("Not permitted to subscribe to %q", subject)
return false
}
}
```
|
https://github.com/nats-io/nats-server/issues/2454
|
https://github.com/nats-io/nats-server/pull/2455
|
f8d503fa81ba6fa7c47ac6944a38bb3bb113b0bc
|
7dcd75aa1d23b7ed65838e0d2d208f8768365f7a
| 2021-08-19T14:07:49Z |
go
| 2021-08-19T22:30:06Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,445 |
["server/disk_avail.go", "server/disk_avail_openbsd.go"]
|
nats-server doesn't build on OpenBSD 6.9
|
Hi, I tried to compile the nats-server on OpenBSD 6.9 and I ran the `go get` command:
```
ud$ go version
go version go1.16.2 openbsd/amd64
ud$ GO111MODULE=on go get github.com/nats-io/nats-server/v2
go: downloading github.com/nats-io/nats-server/v2 v2.3.4
go: downloading github.com/nats-io/jwt/v2 v2.0.3
# github.com/nats-io/nats-server/v2/server
go/pkg/mod/github.com/nats-io/nats-server/v2@v2.3.4/server/disk_avail.go:31:23: fs.Bavail undefined (type syscall.Statfs_t has no field or method Bavail)
go/pkg/mod/github.com/nats-io/nats-server/v2@v2.3.4/server/disk_avail.go:31:43: fs.Bsize undefined (type syscall.Statfs_t has no field or method Bsize)
ud$
```
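For illustration, a minimal build-tagged sketch of a workaround (not the actual fix in the repo): OpenBSD's `syscall.Statfs_t` names these fields `F_bavail`/`F_bsize`, which is why the generic code using `Bavail`/`Bsize` fails to compile there.
```go
// +build openbsd

package main

import (
	"fmt"
	"syscall"
)

// availableBytes reports the bytes available to unprivileged users on the
// filesystem containing dir, using OpenBSD's Statfs_t field names.
func availableBytes(dir string) (int64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(dir, &fs); err != nil {
		return 0, err
	}
	return int64(fs.F_bavail) * int64(fs.F_bsize), nil
}

func main() {
	n, err := availableBytes("/")
	fmt.Println(n, err)
}
```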
|
https://github.com/nats-io/nats-server/issues/2445
|
https://github.com/nats-io/nats-server/pull/2472
|
41a253dabb43bec74445bea7a1fb6008db47f3fc
|
32646f8211a3b47c50b7771c2fdbd37f874b2c12
| 2021-08-16T18:04:22Z |
go
| 2021-08-26T15:51:55Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,433 |
["server/auth.go", "server/client.go", "server/mqtt.go", "server/mqtt_test.go", "server/server_test.go", "server/websocket.go", "server/websocket_test.go"]
|
MQTT: support MQTT over WebSocket
|
## Feature Request
#### Use Case:
#### Proposed Change:
support http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718127
```
mqtt {
  port: 1883
  ws_port: 9001
}
```
#### Who Benefits From The Change(s)?
Web applications do not support native TCP connections.
MQTT over WebSockets allows you to receive MQTT data directly into a web browser.
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/2433
|
https://github.com/nats-io/nats-server/pull/2735
|
67c345270cd2b9ba8b62afc2eebee473199b07e2
|
f55ee219419145ee8887857c9dbdb9fcd5654935
| 2021-08-12T06:21:15Z |
go
| 2021-12-07T16:09:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,423 |
["server/consumer.go", "server/jetstream_test.go"]
|
max_waiting in consumer create request causes "consumer deliver subject has wildcards 10079"
|
## Defect
When `"max_waiting": 512` is supplied on the consumer create request, the server responds with an api error
`consumer deliver subject has wildcards 10079`
#### Versions of `nats-server` and affected client libraries used:
nats-server: v2.3.5-beta
#### OS/Container environment:
Windows
#### Steps or code to reproduce the issue:
```
CONSUMER.DURABLE.CREATE.streamname.durname
{
"stream_name": "streamname",
"config": {
"durable_name": "durname",
"deliver_subject": "_INBOX.YZsveC3tro2QrGNpGscOCt",
"deliver_policy": "all",
"opt_start_seq": 0,
"ack_policy": "none",
"ack_wait": 30000000000,
"max_ack_pending": 0,
"filter_subject": "subject.A",
"replay_policy": "instant",
"rate_limit_bps": 0,
"flow_control": false,
"max_waiting": 512
}
}
```
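For reference, a sketch of the same request via the Go client (field names assume nats.go's `ConsumerConfig`; the server URL and stream are assumed to exist). `MaxWaiting` only applies to pull consumers, and sending it together with a `DeliverSubject` (push consumer) is what triggers the misleading error here.
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Close()
	js, _ := nc.JetStream()
	// Push consumer (DeliverSubject set) with MaxWaiting supplied.
	_, err := js.AddConsumer("streamname", &nats.ConsumerConfig{
		Durable:        "durname",
		DeliverSubject: "_INBOX.YZsveC3tro2QrGNpGscOCt",
		AckPolicy:      nats.AckNonePolicy,
		FilterSubject:  "subject.A",
		MaxWaiting:     512,
	})
	if err != nil {
		fmt.Println(err) // "consumer deliver subject has wildcards"
	}
}
```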
#### Expected result:
Consumer is created.
#### Actual result:
Api Error
```
INFO: [18284] 2021/08/09 21:51:21.943117 [TRC] 127.0.0.1:55436 - cid:5 - "v2.11.6:java" - <<- [PUB $JS.API.STREAM.CREATE.stream _INBOX.YZsveC3tro2QrGNpGscOCt.YZsveC3tro2QrGNpGscOEd 130]
...
INFO: [18284] 2021/08/09 21:51:21.943117 [TRC] 127.0.0.1:55436 - cid:5 - "v2.11.6:java" - <<- MSG_PAYLOAD: ["{\"name\":\"stream\",\"subjects\":[\"subject.*\"],\"retention\":\"limits\",\"storage\":\"memory\",\"num_replicas\":1,\"no_ack\":false,\"discard\":\"old\"}"]
...
INFO: [18284] 2021/08/09 21:51:21.973127 [TRC] 127.0.0.1:55436 - cid:5 - "v2.11.6:java" - <<- MSG_PAYLOAD: ["{\"stream_name\":\"streamname\",\"config\":{\"durable_name\":\"durname\",\"deliver_subject\":\"_INBOX.YZsveC3tro2QrGNpGscOKj\",\"deliver_policy\":\"all\",\"opt_start_seq\":0,\"ack_policy\":\"none\",\"ack_wait\":30000000000,\"max_ack_pending\":0,\"filter_subject\":\"subject.A\",\"replay_policy\":\"instant\",\"rate_limit_bps\":0,\"flow_control\":false,\"max_waiting\":512}}"]
...
\\\"error\\\":{\\\"code\\\":400,\\\"err_code\\\":10079,\\\"description\\\":\\\"consumer deliver subject has wildcards\\\"}}\"}"]
```
|
https://github.com/nats-io/nats-server/issues/2423
|
https://github.com/nats-io/nats-server/pull/2427
|
6f4cc20d2726c85abf5c8557bc16286503dd13ac
|
b38c361afdf5779f835dadf6c5394d1a692d2ba4
| 2021-08-10T01:58:39Z |
go
| 2021-08-10T14:03:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,420 |
["server/jetstream_test.go", "server/stream.go"]
|
consumer info max_msgs_per_subject defaults to 0, should be -1
|
## Defect
If `max_msgs_per_subject` is not provided during stream create, its value defaults to 0, not -1. This differs from the behavior of
`max_consumers`, `max_msgs`, `max_bytes` and `max_msg_size`.
#### Versions of `nats-server` and affected client libraries used:
nats-server: v2.3.5-beta
#### OS/Container environment:
Windows
#### Steps or code to reproduce the issue:
Execute `$JS.API.STREAM.CREATE.foo` with a minimal payload, i.e. no `max_` values are supplied.
```
{"name":"foo","subjects":["bar"],"retention":"limits","storage":"memory","num_replicas":1,"no_ack":false,"discard":"old"}
```
The response comes back with
```
{
"name": "foo",
"subjects": ["bar"],
"retention": "limits",
"max_consumers": -1,
"max_msgs": -1,
"max_bytes": -1,
"max_age": 0,
"max_msgs_per_subject": 0,
"max_msg_size": -1,
"discard": "old",
"storage": "memory",
"num_replicas": 1,
"duplicate_window": 120000000000
}
```
#### Expected result:
max_msgs_per_subject is -1
#### Actual result:
max_msgs_per_subject is 0
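For reference, the same default is visible from the Go client (a sketch, assuming nats.go's `StreamConfig`/`StreamInfo` field names):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Close()
	js, _ := nc.JetStream()
	si, _ := js.AddStream(&nats.StreamConfig{
		Name:     "foo",
		Subjects: []string{"bar"},
		Storage:  nats.MemoryStorage,
	})
	fmt.Println(si.Config.MaxMsgs)           // -1, consistent with the other limits
	fmt.Println(si.Config.MaxMsgsPerSubject) // 0, where -1 would be expected
}
```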
|
https://github.com/nats-io/nats-server/issues/2420
|
https://github.com/nats-io/nats-server/pull/2426
|
b38c361afdf5779f835dadf6c5394d1a692d2ba4
|
cf1258b73a5394ea765202b119c166d96b7437ac
| 2021-08-07T22:47:44Z |
go
| 2021-08-10T14:35:05Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,417 |
["server/filestore.go", "server/filestore_test.go"]
|
Stream delete can fail for non-empty directory
|
During concurrent consumer create/delete and stream create/delete, it can happen that a stream directory is not empty at delete time:
```
stream delete failed: unlinkat /var/folders/2p/lsd279293x7618xcjfl2bc380000gn/T/jstest632941891/jetstream/$G/streams/ELECTION: directory not empty (10050)
```
|
https://github.com/nats-io/nats-server/issues/2417
|
https://github.com/nats-io/nats-server/pull/2418
|
9f8b73d6854ac99b194894c78eb74038a75fa919
|
7c089a423a778dd3596d2b04dedefc04a55490dd
| 2021-08-06T13:35:51Z |
go
| 2021-08-07T19:35:19Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,416 |
["server/consumer.go", "server/stream.go"]
|
Panic when concurrent stream remove and consumer create
|
## Defect
While testing the reliability of some code I randomly delete consumers and streams, trying to see if my client will recover, recreating as needed.
If a stream delete and consumer create happens concurrently the server panics.
https://github.com/nats-io/nats-server/blob/7112ae06a54899471910c82a7edaf068040f6dae/server/consumer.go#L597
```
goroutine 146 [running]:
github.com/nats-io/nats-server/v2/server.(*stream).addConsumerWithAssignment(0xc0002b2500, 0xc0007dae50, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc000c41180)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:597 +0x13ae
github.com/nats-io/nats-server/v2/server.(*stream).addConsumer(...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/consumer.go:263
github.com/nats-io/nats-server/v2/server.(*Server).jsConsumerCreate(0xc0002f0000, 0xc0002c4000, 0xc000922000, 0xc00010a6c0, 0xc0008232c0, 0x20, 0xc000b16870, 0x11, 0xc000c41180, 0x1c1, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:3128 +0x765
github.com/nats-io/nats-server/v2/server.(*Server).jsConsumerCreateRequest(...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:3018
github.com/nats-io/nats-server/v2/server.(*jetStream).apiDispatch(0xc0002560a0, 0xc0002c4000, 0xc000922000, 0xc00010a6c0, 0xc0008232c0, 0x20, 0xc000b16870, 0x11, 0xc000c41180, 0x1c1, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/jetstream_api.go:625 +0x7bc
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000922000, 0xc0002c4000, 0xc00010a6c0, 0xc0008232a0, 0x20, 0x20, 0xc000b16858, 0x11, 0x18, 0xc000923480, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3137 +0xa7c
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000922000, 0xc00010a6c0, 0xc000772330, 0xc000c41180, 0x1c3, 0x36a, 0x0, 0x0, 0x0, 0xc0008232a0, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4115 +0x5ca
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000922000, 0xc000526480, 0xc0002be480, 0xc000c41180, 0x1c3, 0x36a)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3938 +0xa28
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc000602cc0, 0xc000922000, 0xc0002be480, 0xc000823260, 0x20, 0xc0005711d0, 0x26, 0xc000652051, 0xda, 0x1af)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/accounts.go:1873 +0x65
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000922000, 0xc000602cc0, 0xc0002be480, 0xc000652004, 0x20, 0x1fc, 0xc000652025, 0x26, 0x1db, 0xc000923481, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3135 +0xb90
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000922000, 0xc0002be480, 0xc00023aba0, 0xc000652051, 0xda, 0x1af, 0x0, 0x0, 0x0, 0xc000652004, ...)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:4115 +0x5ca
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc000922000, 0xc000652051, 0xda, 0x1af, 0x1b4)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3604 +0x4e7
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc000922000, 0xc000652051, 0xda, 0x1af)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:3451 +0x95
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc000922000, 0xc000652000, 0x12b, 0x200, 0x12b, 0x0)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/parser.go:477 +0x2525
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc000922000, 0x0, 0x0, 0x0)
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/client.go:1174 +0x5f8
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func1()
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:2420 +0x45
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/rip/go/pkg/mod/github.com/nats-io/nats-server/[email protected]/server/server.go:2839 +0xc5
```
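A rough stress sketch of the race (hypothetical stream/consumer names; assumes a local JetStream-enabled server and the nats.go client; errors are deliberately ignored since either call may legitimately fail mid-race):
```go
package main

import (
	"sync"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Close()
	js, _ := nc.JetStream()
	for i := 0; i < 100; i++ {
		js.AddStream(&nats.StreamConfig{Name: "TEST", Subjects: []string{"t.*"}})
		var wg sync.WaitGroup
		wg.Add(2)
		go func() { // consumer create racing...
			defer wg.Done()
			js.AddConsumer("TEST", &nats.ConsumerConfig{
				Durable:   "dur",
				AckPolicy: nats.AckExplicitPolicy,
			})
		}()
		go func() { // ...the stream delete
			defer wg.Done()
			js.DeleteStream("TEST")
		}()
		wg.Wait()
	}
}
```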
|
https://github.com/nats-io/nats-server/issues/2416
|
https://github.com/nats-io/nats-server/pull/2419
|
7112ae06a54899471910c82a7edaf068040f6dae
|
9f8b73d6854ac99b194894c78eb74038a75fa919
| 2021-08-06T13:08:30Z |
go
| 2021-08-07T19:34:53Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,415 |
["server/jwt_test.go", "server/leafnode.go", "server/leafnode_test.go"]
|
User authorization in leafnode connection doesn't work in v2.3.4
|
## Defect
+ user authorization doesn't work over a leafnode connection when the account (created by nsc) in the central cluster has JetStream enabled.
+ after I changed my image from `nats:2.3.4-alpine` to `nats:2.2.6-alpine3.13`, it went back to working.
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
docker image : nats:2.3.4-alpine
natscli : the latest dev version
#### OS/Container environment:
Linux mpc-ubuntu-server 5.4.0-80-generic #90-Ubuntu SMP Fri Jul 9 22:49:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
#### Steps or code to reproduce the issue:

1. use nsc to create 3 accounts as follows, each with 4 users.
```
a: {
jetstream:enable
users:[
{a1,deny-pubsub:test.a1},
{a2,deny-pubsub:test.a2},
{a3,deny-pubsub:test.a3},
{a4,deny-pubsub:test.a4},
]
}
b: {
users:[
{b1,deny-pubsub:test.b1},
{b2,deny-pubsub:test.b2},
{b3,deny-pubsub:test.b3},
{b4,deny-pubsub:test.b4},
]
}
c: {
jetstream:enable
users:[
{c1,deny-pubsub:test.c1},
{c2,deny-pubsub:test.c2},
{c3,deny-pubsub:test.c3},
{c4,deny-pubsub:test.c4},
]
}
```
2. establish a central cluster of 3 nodes; each node uses the NATS-based resolver and preloads the accounts above.
3. deploy 4 leafnodes (leaf1~leaf4) connecting to the central cluster; each node has three accounts named A, B, C.
```
leafnode connection :
leaf1
A->a1->a
B->b1->b
C->c1->c
leaf2
A->a2->a
B->b2->b
C->c2->c
....
```
4. use client1 to subscribe to test.a1 on leaf1 (account A) while client2 publishes a msg to test.a1 on leaf2 (account A); a Go sketch of this step follows below
5. use client3 to subscribe to test.b1 on leaf1 (account B) while client4 publishes a msg to test.b1 on leaf2 (account B)
6. use client5 to subscribe to test.c1 on leaf1 (account C) while client6 publishes a msg to test.c1 on leaf2 (account C)
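A rough Go sketch of step 4 (hypothetical URLs and local auth for account A on the leaf servers; the deny-pubsub lives on the a1/a2 creds used by the leafnode connections themselves):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	ncSub, _ := nats.Connect("nats://leaf1:4222") // client1, account A on leaf1
	defer ncSub.Close()
	ncPub, _ := nats.Connect("nats://leaf2:4222") // client2, account A on leaf2
	defer ncPub.Close()

	sub, _ := ncSub.SubscribeSync("test.a1")
	ncSub.Flush()
	ncPub.Publish("test.a1", []byte("x"))
	_, err := sub.NextMsg(time.Second)
	fmt.Println(err) // expected: timeout; actual on 2.3.4: message received
}
```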
#### Expected result:
clients 1, 3 and 5 can't receive the messages sent by clients 2, 4 and 6
#### Actual result:
clients 2, 4 and 6 succeeded in publishing
clients 1 and 5 received the messages sent by clients 2 and 6
client 3 did not receive the messages sent by client 4
|
https://github.com/nats-io/nats-server/issues/2415
|
https://github.com/nats-io/nats-server/pull/2430
|
4e0eaf026bd114a3d702fad0e1f39e2680408a53
|
1d9a89147722b6bc6a5c59dcc696cbace872a57e
| 2021-08-05T09:31:36Z |
go
| 2021-08-12T19:12:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,403 |
["server/jetstream_test.go", "server/server.go", "server/stream.go"]
|
JS: deadlock when removing a message
|
A stream deadlock: we hold the stream's lock in storeUpdates() when calling decStreamPending(), but then ackMsg() tries to acquire the stream's lock again.
```
goroutine 124300 [semacquire, 8 minutes]:
sync.runtime_SemacquireMutex(0xc0000cdb8c, 0xc000c26000, 0x0)
/opt/hostedtoolcache/go/1.16.6/x64/src/runtime/sema.go:71 +0x47
sync.(*RWMutex).RLock(...)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/rwmutex.go:63
github.com/nats-io/nats-server/server.(*stream).ackMsg(0xc0000cdb80, 0xc008278a80, 0x690)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/stream.go:3296 +0x1e5
github.com/nats-io/nats-server/server.(*consumer).processAckMsg(0xc008278a80, 0x690, 0x3, 0x1, 0xc000107500)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:1616 +0x27f
github.com/nats-io/nats-server/server.(*consumer).decStreamPending(0xc008278a80, 0x690, 0xc000fc7300, 0x3d)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/consumer.go:3075 +0xf6
github.com/nats-io/nats-server/server.(*stream).storeUpdates(0xc0000cdb80, 0xffffffffffffffff, 0xfffffffffffffafc, 0x690, 0xc000fc7300, 0x3d)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/stream.go:2346 +0xf6
github.com/nats-io/nats-server/server.(*memStore).removeMsg(0xc000217ba0, 0x690, 0x600, 0xc0007316f0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/memstore.go:721 +0x2c2
github.com/nats-io/nats-server/server.(*memStore).deleteFirstMsg(...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/memstore.go:578
github.com/nats-io/nats-server/server.(*memStore).deleteFirstMsgOrPanic(...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/memstore.go:572
github.com/nats-io/nats-server/server.(*memStore).expireMsgs(0xc000217ba0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/memstore.go:390 +0x12f
created by time.goFunc
/opt/hostedtoolcache/go/1.16.6/x64/src/time/sleep.go:180 +0x45
```
Then one of the AccountInfo requests holds the jsa lock and tries to get the stream lock to get the number of consumers:
```
goroutine 124310 [semacquire, 8 minutes]:
sync.runtime_SemacquireMutex(0xc0000cdb84, 0xc000754300, 0x1)
/opt/hostedtoolcache/go/1.16.6/x64/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000cdb80)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/mutex.go:138 +0x105
sync.(*Mutex).Lock(...)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/mutex.go:81
sync.(*RWMutex).Lock(0xc0000cdb80)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/rwmutex.go:111 +0x90
github.com/nats-io/nats-server/server.(*stream).numConsumers(0xc0000cdb80, 0x0)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/stream.go:3197 +0x47
github.com/nats-io/nats-server/server.(*Account).JetStreamUsage(0xc0000ca900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream.go:1324 +0x3ab
github.com/nats-io/nats-server/server.(*Server).jsAccountInfoRequest(0xc00018a000, 0xc000213b00, 0xc00c4ba000, 0xc00007e000, 0xc006475ee0, 0xc, 0xc007174060, 0x11, 0xc0030e5e40, 0xee, ...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_api.go:866 +0x490
github.com/nats-io/nats-server/server.(*jetStream).apiDispatch(0xc000222140, 0xc000213b00, 0xc00c4ba000, 0xc00007e000, 0xc006475ee0, 0xc, 0xc007174060, 0x11, 0xc0030e5e40, 0xee, ...)
(...)
```
Then all other accesses to this jsa lock are waiting for it:
```
goroutine 126312 [semacquire, 6 minutes]:
sync.runtime_SemacquireMutex(0xc00022e00c, 0xc00b73a300, 0x0)
/opt/hostedtoolcache/go/1.16.6/x64/src/runtime/sema.go:71 +0x47
sync.(*RWMutex).RLock(...)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/rwmutex.go:63
github.com/nats-io/nats-server/server.(*Account).JetStreamUsage(0xc0000ca900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream.go:1305 +0x3f8
github.com/nats-io/nats-server/server.(*Server).jsAccountInfoRequest(0xc00018a000, 0xc000213b00, 0xc00af09300, 0xc00007e000, 0xc0085b9520, 0xc, 0xc006519e00, 0x11, 0xc0021f18c0, 0xee, ...)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/jetstream_api.go:866 +0x490
github.com/nats-io/nats-server/server.(*jetStream).apiDispatch(0xc000222140, 0xc000213b00, 0xc00af09300, 0xc00007e000, 0xc0085b9520, 0xc, 0xc006519e00, 0x11, 0xc0021f18c0, 0xee, ...)
(...)
```
Or:
```
goroutine 127822 [semacquire, 4 minutes]:
sync.runtime_SemacquireMutex(0xc00022e00c, 0xadc800, 0x0)
/opt/hostedtoolcache/go/1.16.6/x64/src/runtime/sema.go:71 +0x47
sync.(*RWMutex).RLock(...)
/opt/hostedtoolcache/go/1.16.6/x64/src/sync/rwmutex.go:63
github.com/nats-io/nats-server/server.(*Server).Jsz(0xc00018a000, 0xc0042d4a80, 0xadcb65, 0x3, 0xecfd40)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/monitor.go:2456 +0x68e
github.com/nats-io/nats-server/server.(*Server).HandleJsz(0xc00018a000, 0xba7780, 0xc0055b8a80, 0xc0020e9f00)
/home/runner/work/nats-server/src/github.com/nats-io/nats-server/server/monitor.go:2534 +0x3d8
(...)
```
etc..
|
https://github.com/nats-io/nats-server/issues/2403
|
https://github.com/nats-io/nats-server/pull/2404
|
aaba459d52e6fb6e804aa6542bcbff5345d7abf3
|
f417d20cd1d6714f794b2194b61e1afcdc233dfd
| 2021-08-03T18:32:13Z |
go
| 2021-08-03T23:18:27Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,401 |
["server/jetstream_cluster_test.go", "server/leafnode.go"]
|
When two servers are connected via a leaf node and both have JS enabled and the domain set to the same value, JS is currently not extended
|
It would be expected that they behave as if the domain were not set at all.
Meaning they should be part of the same meta group.
|
https://github.com/nats-io/nats-server/issues/2401
|
https://github.com/nats-io/nats-server/pull/2410
|
883e2876e95e5cacb450ca74ba1820479a8de8d8
|
2afaf1cd5e48e930e3653aa2cd7ead2e9db3fd19
| 2021-08-03T17:15:43Z |
go
| 2021-08-04T15:32:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,392 |
["server/consumer.go", "server/jetstream_test.go"]
|
Not acking a message before purging the stream prevents redelivery of a subsequent non-acked message for the consumer
|
## Defect
Following issue was reported on slack:
> whenever I purge the stream and publish a message on the stream, the consumer gets the message only once even if I did not send an ack. Before I purge, the stream and consumer act as I expected. But whenever I do the following, the consumer cannot get the same message more than once:
pub message --> sub (no ack) --> purge stream --> pub message --> sub (no ack) --> wait for ack wait time --> sub (no ack) << cannot get message
When I check the stream there is still 1 message, but the consumer just can't get that message.
Go code to reproduce this behaviour below.
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Version: 2.3.3-beta
Git: [19bfef59]
Go build: go1.16.6
Tested with github.com/nats-io/nats.go v1.11.1-0.20210714175522-2b2bb8f326df
#### OS/Container environment:
Using synadia/nats-server:nightly
#### Steps or code to reproduce the issue:
```
package main
import (
"fmt"
"github.com/nats-io/nats.go"
"time"
)
func main() {
nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Drain()
js, _ := nc.JetStream()
fmt.Println("\n-- Ack Before Purge --")
test(js, true)
fmt.Println("\n-- No Ack Before Purge --")
test(js, false)
}
func test(js nats.JetStreamContext, ack bool) {
js.AddStream(&nats.StreamConfig{
Name: "foo",
Subjects: []string{"foo.*"},
})
defer js.DeleteStream("foo")
js.Publish("foo.a", []byte("show once"))
sub, _ := js.SubscribeSync("foo.*", nats.AckWait(2*time.Second), nats.DeliverAll(), nats.AckExplicit())
defer sub.Unsubscribe()
next(sub, ack) // No ack / nack here is the issue
js.PurgeStream("foo")
fmt.Println("*Stream Purged*")
js.Publish("foo.a", []byte("show twice"))
next(sub, false)
next(sub, false)
}
func next(sub *nats.Subscription, ack bool) {
m, err := sub.NextMsg(5 * time.Second)
ci, _ := sub.ConsumerInfo()
if err == nats.ErrTimeout {
fmt.Printf("should not see (Ack Pending: %d)\n", ci.NumAckPending)
} else {
if ack {
m.Ack()
}
fmt.Printf("%s (Ack Pending: %d)\n", m.Data, ci.NumAckPending)
}
}
```
#### Expected result:
Message published after purge should be redelivered after ack wait when not acknowledged
-- Ack Before Purge --
show once (Ack Pending: 1)
*Stream Purged*
show twice (Ack Pending: 1)
show twice (Ack Pending: 1)
-- No Ack Before Purge --
show once (Ack Pending: 1)
*Stream Purged*
show twice (Ack Pending: 1)
show twice (Ack Pending: 1)
#### Actual result:
Message published after purge is **NOT** redelivered after ack wait when not acknowledged:
-- Ack Before Purge --
show once (Ack Pending: 1)
*Stream Purged*
show twice (Ack Pending: 1)
show twice (Ack Pending: 1)
-- No Ack Before Purge --
show once (Ack Pending: 1)
*Stream Purged*
show twice (Ack Pending: 1)
should not see (Ack Pending: 1)
|
https://github.com/nats-io/nats-server/issues/2392
|
https://github.com/nats-io/nats-server/pull/2394
|
3c6618d4099ea28a2bc32289866a2086b6cff2c4
|
e34f365fa1ae8850e531ef64e7f0651137742cf3
| 2021-08-01T05:40:39Z |
go
| 2021-08-02T00:56:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,388 |
["server/events.go", "server/jetstream.go", "server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/leafnode.go", "server/reload.go", "server/route.go", "server/server.go"]
|
JetStream API deny permissions along leaf node connections are inserted too easily.
|
A leafnode will always insert deny clauses in `leafnode.go` function `processLeafnodeInfo`.
This is irrespective of the domain name used.
```txt
[42059] [INF] Adding deny export of "$JS.API.>" for leafnode configuration on "JSY" that bridges system account
[42059] [INF] Adding deny import of "$JS.API.>" for leafnode configuration on "JSY" that bridges system account
```
Specifically, the denies should be skipped when the domain names on either end of the leaf node connection are non-empty and identical, and only be inserted otherwise. Thus line `1012` should add a condition for `!s.sameDomain(info.Domain)` to only execute the code below then.
In the specific use case, jetstream is not even enabled on the leaf node.
Thus, we could just check for `opts.JetStream` instead. Meaning don't add the denies when JS is not enabled.
But consider such a daisy chained setup:
s1 (JetStream) <-ln- s2 (no JetStream) <-ln- s3 (JetStream)
Both approaches will break down.
The check for `!s.sameDomain` suffers from the domain not being settable without jetstream.
I'd suggest we make domain settable without jetstream and only insert the denies when `!s.sameDomain(info.Domain)`.
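A toy model of one plausible reading of that condition (illustrative only, not the server's actual `sameDomain` implementation):
```go
package main

import "fmt"

// sameDomain: treat both ends as one JetStream domain only when both
// names are non-empty and equal.
func sameDomain(local, remote string) bool {
	return local != "" && remote != "" && local == remote
}

func main() {
	for _, remote := range []string{"", "hub", "leaf"} {
		if !sameDomain("hub", remote) {
			fmt.Printf("remote domain %q: insert the $JS.API.> denies\n", remote)
		} else {
			fmt.Printf("remote domain %q: extend JetStream, no denies\n", remote)
		}
	}
}
```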
|
https://github.com/nats-io/nats-server/issues/2388
|
https://github.com/nats-io/nats-server/pull/2393
|
19bfef59ed763780ccc18d715a37bef647e7dc21
|
3c6618d4099ea28a2bc32289866a2086b6cff2c4
| 2021-07-30T16:51:09Z |
go
| 2021-08-01T23:34:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,366 |
[".goreleaser-nightly.yml", ".goreleaser.yml", "docker/Dockerfile.nightly"]
|
Include a copy of nats server with symbols in each release
|
## Feature Request
In each release, include an additional nats-server executable that includes symbols.
#### Use Case:
During debugging sessions, it would be useful to have symbols for setting breakpoints.
#### Proposed Change:
In addition to the stripped executable, include another executable with symbols.
#### Who Benefits From The Change(s)?
Operators of nats server.
#### Alternative Approaches
Users could build their own nats-server with symbols.
|
https://github.com/nats-io/nats-server/issues/2366
|
https://github.com/nats-io/nats-server/pull/2383
|
534df9a14093c8ad918cc1338119ecd6483c5909
|
bb126f4527fda9fd580d412ee04bd503cec6dc6d
| 2021-07-21T19:59:52Z |
go
| 2021-07-29T03:34:19Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,361 |
["server/accounts_test.go", "server/client.go", "server/sublist.go"]
|
Subscriptions to streams which are imported with a wildcard subject and a prefix do not work correctly
|
## Defect
In the Accounts configuration, if I import a stream from another account with a subject containing a wildcard and any prefix, subscribers that subscribe to a subject respecting the prefix combined with the wildcard do not receive any message.
In particular, given an import like the following `{stream: {account: ACCOUNT_X, subject: > }, prefix: myprefix}`, subscribers using a subject which is constructed with `*` instead of `myprefix` and any string instead of `>` (e.g. `*.test`) seem to not receive any message.
#### Versions of `nats-server` and affected client libraries used:
nats-server: v2.3.2
#### OS/Container environment:
CentOS Linux release 7.9.2009 (Core)
#### Steps or code to reproduce the issue:
1. Use this nats-server accounts configuration
```
accounts: {
ACCOUNT_X: {
users: [
{user: publisher}
]
exports: [
{stream: >}
]
},
ACCOUNT_Y: {
users: [
{user: subscriber}
]
imports: [
{stream: {account: ACCOUNT_X, subject: > }, prefix: myprefix}
]
}
}
```
2. Configure a nats client with username `subscriber` to subscribe to `*.testsubject`
3. With a nats client with username `publisher`, publish a message on the subject `testsubject` (steps 2 and 3 are sketched in Go below)
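A Go sketch of steps 2 and 3 (server URL assumed local; usernames per the config above; the import maps the message to `myprefix.testsubject`, so `*.testsubject` should match it):
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	ncSub, _ := nats.Connect(nats.DefaultURL, nats.UserInfo("subscriber", ""))
	defer ncSub.Close()
	ncPub, _ := nats.Connect(nats.DefaultURL, nats.UserInfo("publisher", ""))
	defer ncPub.Close()

	sub, _ := ncSub.SubscribeSync("*.testsubject")
	ncSub.Flush()
	ncPub.Publish("testsubject", []byte("hello"))
	_, err := sub.NextMsg(time.Second)
	fmt.Println(err) // expected: nil; actual: times out on the affected version
}
```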
#### Expected result:
I expect that `subscriber` receives the event published by `publisher`
#### Actual result:
The subscriber does not receive any message
|
https://github.com/nats-io/nats-server/issues/2361
|
https://github.com/nats-io/nats-server/pull/2369
|
e8fea67b1a38342bc62af6535a99bf690dccc9b4
|
07375182e75a242af33bdbcd39aa8788e751b257
| 2021-07-13T16:18:36Z |
go
| 2021-07-23T01:01:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,352 |
["server/jetstream_api.go"]
|
data race on JetStream shutdown
|
```text
=== RUN TestJetStreamClusterAckPendingWithMaxRedelivered
==================
WARNING: DATA RACE
Write at 0x00c0007179a0 by goroutine 172:
  github.com/nats-io/nats-server/server.(*Server).shutdownJetStream()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream.go:791 +0x56f
  github.com/nats-io/nats-server/server.(*Server).Shutdown()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1758 +0x219
  github.com/nats-io/nats-server/server.(*cluster).shutdown()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/test_test.go:198 +0xd8
  github.com/nats-io/nats-server/server.TestJetStreamClusterAckPendingWithMaxRedelivered()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_cluster_test.go:5758 +0x8d3
  testing.tRunner()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1193 +0x202

Previous read at 0x00c0007179a0 by goroutine 112:
  github.com/nats-io/nats-server/server.(*Server).jsConsumerInfoRequest()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:3423 +0x153a
  github.com/nats-io/nats-server/server.(*Server).jsConsumerInfoRequest-fm()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:3351 +0xc4
  github.com/nats-io/nats-server/server.(*jetStream).apiDispatch.func1()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:657 +0xba

Goroutine 172 (running) created at:
  testing.(*T).Run()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1238 +0x5d7
  testing.runTests.func1()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1511 +0xa6
  testing.tRunner()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1193 +0x202
  testing.runTests()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1509 +0x612
  testing.(*M).Run()
      /home/travis/.gimme/versions/go1.16.5.linux.amd64/src/testing/testing.go:1417 +0x3b3
  github.com/nats-io/nats-server/server.TestMain()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/sublist_test.go:1448 +0x384
  main.main()
      _testmain.go:2655 +0x271

Goroutine 112 (finished) created at:
  github.com/nats-io/nats-server/server.(*jetStream).apiDispatch()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:656 +0x624
  github.com/nats-io/nats-server/server.(*jetStream).apiDispatch-fm()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/jetstream_api.go:599 +0xc4
  github.com/nats-io/nats-server/server.(*client).deliverMsg()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3091 +0x507
  github.com/nats-io/nats-server/server.(*client).processMsgResults()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4076 +0x864
  github.com/nats-io/nats-server/server.(*client).processInboundRoutedMsg()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/route.go:443 +0x328
  github.com/nats-io/nats-server/server.(*client).processInboundMsg()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3407 +0x95
  github.com/nats-io/nats-server/server.(*client).parse()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/parser.go:477 +0x3f44
  github.com/nats-io/nats-server/server.(*client).readLoop()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:1174 +0x824
  github.com/nats-io/nats-server/server.(*Server).createRoute.func1()
      /home/travis/gopath/src/github.com/nats-io/nats-server/server/route.go:1375 +0x52
==================
    testing.go:1092: race detected during execution of test
--- FAIL: TestJetStreamClusterAckPendingWithMaxRedelivered (1.36s)
```
|
https://github.com/nats-io/nats-server/issues/2352
|
https://github.com/nats-io/nats-server/pull/2353
|
c68ffe5ad574154c34e3df675d3ef0bf629ce563
|
2534434e24d612b3da7ead517b1876de98497ad9
| 2021-07-07T22:20:34Z |
go
| 2021-07-08T13:20:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,332 |
["server/jetstream_cluster.go", "server/norace_test.go", "server/raft.go"]
|
PeerInfo Lag contains exceptionally large uint64 value
|
## Defect
THIS ISSUE IS AWAITING MORE INFO
Replica.PeerInfo.Lag value is `18446744073709551614` (`fffffffffffffffe`), i.e. 2^64 - 2, which looks like a small negative value wrapped around into a uint64.
From a user:
> This issue occurs especially when we purge some data in the stream and the starting sequence number is other than 1, for an ephemeral consumer with deliver policy `all`.
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Server:
Java Client: 2.11.4
#### OS/Container environment:
#### Steps or code to reproduce the issue:
#### Expected result:
#### Actual result:
|
https://github.com/nats-io/nats-server/issues/2332
|
https://github.com/nats-io/nats-server/pull/2346
|
5f7d8be4ed661d17e1a23436c46a2da37191d8dc
|
eca1629f7737712b7a1149b14404050b13b54f45
| 2021-06-30T22:06:01Z |
go
| 2021-07-06T17:25:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,329 |
["server/filestore.go", "server/jetstream_api.go", "server/jetstream_test.go"]
|
[JS] getLastBySubject functionality doesn't work unless the stream is CREATED with a wildcard subject or multiple subjects
|
## Defect
Using nats-server 2.3.1
If you create a stream with a non-wildcard subject, and then modify it by adding a wildcard subject or another subject so that the resulting stream looks like this:
```javascript
{
name: "5PYSBLS7BW933CKC49XKBX",
subjects: [ "5PYSBLS7BW933CKC49XKBX.A", "5PYSBLS7BW933CKC49XKBX.B" ],
retention: "limits",
max_consumers: -1,
max_msgs: -1,
max_bytes: -1,
max_age: 0,
max_msgs_per_subject: 100,
max_msg_size: -1,
discard: "old",
storage: "file",
num_replicas: 1,
duplicate_window: 120000000000
}
```
And send a request for the last message on one of the subjects:
```
< PUB $JS.API.STREAM.MSG.GET.5PYSBLS7BW933CKC49XKBX _INBOX.5PYSBLS7BW933CKC49XKEX.5PYSBLS7BW933CKC49XKYF 43␍␊{"last_by_subj":"5PYSBLS7BW933CKC49XKBX.A"}␍␊
```
The server will respond with the last message on the stream:
```
> MSG _INBOX.5PYSBLS7BW933CKC49XKEX.5PYSBLS7BW933CKC49XKYF 1 237␍␊{"type":"io.nats.jetstream.api.v1.stream_msg_get_response","message":{"subject":"5PYSBLS7BW933CKC49XKBX.B","seq":4,"hdrs":"TkFUUy8xLjANCk5hdHMtRXhwZWN0ZWQtTGFzdC1TZXF1ZW5jZTogMw0KDQo=","data":"YmI=","time":"2021-06-30T13:42:15.979551Z"}}␍␊
```
This functionality works correctly if the stream is created with more than one subject or with a wildcard.
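For reference, a sketch of the same lookup via the Go client (`GetLastMsg` is nats.go's wrapper around `$JS.API.STREAM.MSG.GET` with `last_by_subj`; assumes a recent nats.go):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	defer nc.Close()
	js, _ := nc.JetStream()
	msg, err := js.GetLastMsg("5PYSBLS7BW933CKC49XKBX", "5PYSBLS7BW933CKC49XKBX.A")
	if err == nil {
		// On the affected build this prints the .B subject instead of .A.
		fmt.Println(msg.Subject, msg.Sequence)
	}
}
```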
|
https://github.com/nats-io/nats-server/issues/2329
|
https://github.com/nats-io/nats-server/pull/2334
|
907fef49796ae77896c02883102909ad1df094f1
|
8794fd7265b765daa0869b5ad4ebd5faac92b930
| 2021-06-30T13:46:03Z |
go
| 2021-07-01T04:26:59Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,291 |
["server/auth.go", "server/client.go", "server/client_test.go", "server/events.go", "server/ocsp.go", "server/parser.go"]
|
$SYS.ACCOUNT.<id>.LEAFNODE.CONNECT event is missing
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server v2.2.6
The log for a quick overview of the config:
```
[1] 2021/06/18 09:45:20.037579 [INF] Starting nats-server
[1] 2021/06/18 09:45:20.037625 [INF] Version: 2.2.6
[1] 2021/06/18 09:45:20.037629 [INF] Git: [cf433ae]
[1] 2021/06/18 09:45:20.037636 [DBG] Go build: go1.16.4
[1] 2021/06/18 09:45:20.037639 [INF] Name: connect-server
[1] 2021/06/18 09:45:20.037650 [INF] Node: oD1ds9eQ
[1] 2021/06/18 09:45:20.037656 [INF] ID: NCFSASLGS2OLS5LVTBZIHO7OBYUHNPDU3GQKYFX6SV4FAAKQ4Y767OMT
[1] 2021/06/18 09:45:20.037672 [INF] Using configuration file: /nats/config/nats-server.conf
[1] 2021/06/18 09:45:20.037677 [INF] Trusted Operators
[1] 2021/06/18 09:45:20.037682 [INF] System : ""
[1] 2021/06/18 09:45:20.037688 [INF] Operator: "operator"
[1] 2021/06/18 09:45:20.037707 [INF] Issued : 2021-05-31 16:23:38 +0000 UTC
[1] 2021/06/18 09:45:20.037719 [INF] Expires : 1970-01-01 00:00:00 +0000 UTC
[1] 2021/06/18 09:45:20.039075 [INF] Managing all jwt in exclusive directory /nats/accounts
[1] 2021/06/18 09:45:20.039120 [INF] Starting JetStream
[1] 2021/06/18 09:45:20.039287 [DBG] JetStream creating dynamic configuration - 5.84 GB memory, 56.38 GB disk
[1] 2021/06/18 09:45:20.040041 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2021/06/18 09:45:20.040060 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2021/06/18 09:45:20.040068 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2021/06/18 09:45:20.040077 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2021/06/18 09:45:20.040085 [INF]
[1] 2021/06/18 09:45:20.040093 [INF] https://docs.nats.io/jetstream
[1] 2021/06/18 09:45:20.040102 [INF]
[1] 2021/06/18 09:45:20.040110 [INF] ---------------- JETSTREAM ----------------
[1] 2021/06/18 09:45:20.040124 [INF] Max Memory: 5.84 GB
[1] 2021/06/18 09:45:20.040138 [INF] Max Storage: 56.38 GB
[1] 2021/06/18 09:45:20.040146 [INF] Store Directory: "nats/jetstream"
[1] 2021/06/18 09:45:20.040153 [INF] -------------------------------------------
[1] 2021/06/18 09:45:20.040254 [DBG] Exports:
[1] 2021/06/18 09:45:20.040268 [DBG] $JS.API.>
[1] 2021/06/18 09:45:20.040600 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2021/06/18 09:45:20.040664 [INF] Listening for websocket clients on ws://0.0.0.0:4443
[1] 2021/06/18 09:45:20.040673 [WRN] Websocket not configured with TLS. DO NOT USE IN PRODUCTION!
[1] 2021/06/18 09:45:20.040679 [DBG] Get non local IPs for "0.0.0.0"
[1] 2021/06/18 09:45:20.040818 [DBG] ip=10.200.0.3
[1] 2021/06/18 09:45:20.040866 [INF] Listening for leafnode connections on 0.0.0.0:7422
[1] 2021/06/18 09:45:20.040877 [DBG] Get non local IPs for "0.0.0.0"
[1] 2021/06/18 09:45:20.040984 [DBG] ip=10.200.0.3
[1] 2021/06/18 09:45:20.041167 [INF] Listening for MQTT clients on mqtt://0.0.0.0:1883
[1] 2021/06/18 09:45:20.041195 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2021/06/18 09:45:20.041203 [DBG] Get non local IPs for "0.0.0.0"
[1] 2021/06/18 09:45:20.041331 [DBG] ip=10.200.0.3
[1] 2021/06/18 09:45:20.041348 [INF] Server is ready
[1] 2021/06/18 09:45:20.342093 [INF] 192.168.1.152:42190 - lid:4 - Leafnode connection created
[1] 2021/06/18 09:45:20.360723 [TRC] 192.168.1.152:42190 - lid:4 - <<- [CONNECT {"jwt":"XXX","sig":"XXX","tls_required":false,"server_id":"NBG5EVDFX3ZP7NP5J7OWBF4LBKXMVLPKHR4OKSFZ7YPWFUAWB7UFLYXY","name":"XXX","cluster":"XXX","headers":true}]
```
#### OS/Container environment:
Official nats docker image
#### Steps or code to reproduce the issue:
1. Start a primary nats server with accounts (using full nats resolver) and leafnode enabled
2. Subscribe to $SYS.> with nats cli tools
3. Connect a secondary server and connect it to the primary as a leaf node
#### Expected result:
I would expect the following events to be received by the subscriber:
$SYS.ACCOUNT.\<id>.LEAFNODE.CONNECT
$SYS.SERVER.ACCOUNT.\<id>.CONNS (connections for an account changed) (deprecated)
$SYS.ACCOUNT.\<id>.SERVER.CONNS (Sent when an accounts connections change)
$SYS.ACCOUNT.\<id>.CONNECT (Sent when client connected)
#### Actual result:
Only these ones are received:
$SYS.SERVER.ACCOUNT.\<id>.CONNS (connections for an account changed) (deprecated)
$SYS.ACCOUNT.\<id>.SERVER.CONNS (Sent when an accounts connections change)
$SYS.ACCOUNT.\<id>.CONNECT (Sent when client connected)
Please let me know if you cannot reproduce and I'll try to give more data.
|
https://github.com/nats-io/nats-server/issues/2291
|
https://github.com/nats-io/nats-server/pull/2351
|
54e16e80c5291e5d22895a8bbad23c6f42ca811c
|
c68ffe5ad574154c34e3df675d3ef0bf629ce563
| 2021-06-18T09:51:37Z |
go
| 2021-07-07T21:43:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,287 |
["server/websocket.go", "server/websocket_test.go"]
|
WebSocket fails to deliver a big JSON object when compression = true
|
Hi,
_First of all, please let me say thank you for such an awesome open source project; it's simply brilliant!_
- **Large JSON silently fails to transfer.**
Lately, when testing NATS directly from the browser (using `nats.ws`), when I tried to send a JSON object that's a bit large (>3000 bytes, consisting of random numbers), the other party (browser) yields an `invalid frame header` error & disconnects.
- **No visible error in the server log**
By observing the nats log, the initial message (PUB) is successfully delivered to the server, but then something goes wrong, & there's no visible 'error' on the server side.
- **`ws-to-tcp` and other transparent ws-to-tcp proxies work**
Upon further testing: when I used `ws-to-tcp` (npm i ws-to-tcp -g) instead of the built-in `ws`, with the same code and the same nats-server, the only difference being that the ws layer is now supplied by a 3rd-party proxy (connecting directly to 4222), everything works.
- **compression = false fixes the problem**
Thus I came to suspect something might be improperly handled inside the ws-related logic of nats-server.
Finally, I tried messing around with the conf, and found that when ***compression = false***, the problem disappears.
My config & log are attached.
### _Logs_
[compression-off.log](https://github.com/nats-io/nats-server/files/6668274/compression-off.log)
[compression-on.log](https://github.com/nats-io/nats-server/files/6668275/compression-on.log)
### _Snapshot when one side fails_
<img width="1245" alt="snap" src="https://user-images.githubusercontent.com/1134623/122352694-ee428600-cf81-11eb-85d0-a53c1e932910.png">
### _Config_
```
listen: "0.0.0.0:4222"
accounts: {
base: { }
}
system_account: base
websocket {
port: 8000
no_tls: true
same_origin: false
compression: false #or true
}
```
#### Versions of `nats-server` and affected client libraries used:
nats-server: v2.2.6
nats.ws: ^1.1.4
#### OS/Container environment:
macOS Catalina 10.15.7
Darwin Kernel Version 19.6.0: Thu May 6 00:48:39 PDT 2021;
#### Steps or code to reproduce the issue:
Attached above.
#### Expected result:
Large JSON (<max_payload_size) should deliver via ws when `compression = true`
#### Actual result:
Fails to deliver when `compression = true`
|
https://github.com/nats-io/nats-server/issues/2287
|
https://github.com/nats-io/nats-server/pull/2300
|
6129562b63df2fc21f986c23bca6a6737b655a8c
|
4d76a0d55cfce905e413d93cc1e75c1c7c871cc9
| 2021-06-17T07:43:53Z |
go
| 2021-06-21T19:34:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,282 |
["server/reload_test.go", "server/server.go"]
|
reloading server with $SYS account disconnects clients connected to $G
|
this is the config
```
accounts {
$SYS {
users = [
{
user: "system",
pass: "password"
}
]
}
}
```
Then I start a server like this
```
nats-server -c x.cfg
```
I subscribe
```
nats --server localhost:4222 sub foo
```
Then I reload the server
```
kill -HUP 12588
```
as a result of this, the client connected to $G (which is not listed) gets disconnected, probably because $G is not explicitly listed.
Meaning I would have expected either that the client not connect in the first place OR that it not be forced into a reconnect.
```
nats --server localhost:4222 sub foo
17:50:18 Subscribing on foo
17:50:28 Disconnected due to: EOF, will attempt reconnect
17:50:30 Reconnected [nats://localhost:4222]
```
|
https://github.com/nats-io/nats-server/issues/2282
|
https://github.com/nats-io/nats-server/pull/2301
|
3185658f42188cd3841a934fd83f5645fdeae520
|
364d9600a6eda19f2c628640c54311b99645245a
| 2021-06-14T21:55:41Z |
go
| 2021-06-22T15:29:56Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,273 |
["server/leafnode.go", "server/leafnode_test.go", "server/opts_test.go", "server/reload.go", "server/reload_test.go"]
|
reload failed for leafnode configuration
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
(Derek said this is a bug and to file a report, during a debugging meeting for another issue)
#### Versions of `nats-server` and affected client libraries used:
* nats-server: `v2.3.0-beta.1`
* client library: whatever @wallyqs was using
#### OS/Container environment:
Ubuntu 20.04 LTS inside systemd
#### Steps or code to reproduce the issue:
```
systemctl kill --signal=HUP jsngs@pennock_cheddar.service
journalctl -u jsngs@pennock_cheddar.service -f
```
#### Expected result:
Debugging output enabled, per @derekcollison
#### Actual result:
```
Jun 09 15:23:47 ip-10-32-164-190 nats-server[42869]: [42869] 2021/06/09 19:23:47.373862 [ERR] Failed to reload server configuration: config reload not supported for LeafNode: old={ 0 [] 0 <nil> 0 false map[] false 1s [0xc0000fe3c0] <nil> 0 0 <nil>}, new={ 0 [] 0 <nil> 0 false map[] false 1s [0xc0000fe320] <nil> 0 0 <nil>}
```
|
https://github.com/nats-io/nats-server/issues/2273
|
https://github.com/nats-io/nats-server/pull/2274
|
eec9cb849e3e87a3a6f1ffb0e1708c9fd4ae0a53
|
14d3cc6b03d01b54d9699dc82b27750ff3fd7886
| 2021-06-09T19:32:58Z |
go
| 2021-06-09T22:47:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,272 |
["server/reload.go"]
|
Reload programmatically defined options
|
## Feature Request
Provide a way to dynamically load a new configuration provided programmatically.
#### Use Case:
Same use case a for Server.Reload, but programmatically (without a configuration file).
#### Proposed Change:
Either by providing a Server.Load(Option) or by providing public versions of the options (implementing the hot-swap interface)
In the case of a Server.Load(Option), we may need a function to get the full options: since Option must have all its properties set, it would be difficult to change only a single option without that.
The option.Reload(Server) version is more specific and therefore probably less error-prone. It also makes sense because, when configuration is given programmatically, granularity would generally be higher.
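A sketch of the hypothetical shape of such an API (names are illustrative; `ReloadOptions` is not an existing function at the time of writing and is exactly what this request asks for):
```go
package main

import "github.com/nats-io/nats-server/v2/server"

func main() {
	opts := &server.Options{Port: 4222}
	s, _ := server.NewServer(opts)
	go s.Start()
	defer s.Shutdown()

	// Hand the running server a fresh Options value instead of a file.
	newOpts := opts.Clone()
	newOpts.MaxPayload = 2 * 1024 * 1024
	_ = s.ReloadOptions(newOpts) // hypothetical API
}
```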
#### Who Benefits From The Change(s)?
nats-server users that don't use a configuration file
|
https://github.com/nats-io/nats-server/issues/2272
|
https://github.com/nats-io/nats-server/pull/2341
|
f441c7bc8de2e3ca0817a3da283824d35e04f232
|
670be37646b20b3d5159a89fd66d87b81d6530ed
| 2021-06-09T17:43:41Z |
go
| 2021-07-08T16:24:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,270 |
["server/client.go", "server/events.go", "server/monitor.go", "server/mqtt.go", "server/mqtt_test.go"]
|
MQTT connect event
|
Hello, I recently deployed an MQTT broker based on nats-server; thanks to NATS 2.2, it's a great piece of work.
Is there any way I can get notified when an MQTT client is connected or disconnected?
Thanks.
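In the meantime, a sketch of watching the generic per-account connect/disconnect events from a system-account user (credentials are placeholders; whether these events carry MQTT-specific details is exactly what this issue asks about):
```go
package main

import (
	"fmt"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL, nats.UserInfo("system", "password"))
	defer nc.Close()

	nc.Subscribe("$SYS.ACCOUNT.*.CONNECT", func(m *nats.Msg) {
		fmt.Printf("connect: %s %s\n", m.Subject, m.Data)
	})
	nc.Subscribe("$SYS.ACCOUNT.*.DISCONNECT", func(m *nats.Msg) {
		fmt.Printf("disconnect: %s %s\n", m.Subject, m.Data)
	})
	select {}
}
```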
|
https://github.com/nats-io/nats-server/issues/2270
|
https://github.com/nats-io/nats-server/pull/2507
|
bae93c44ef5e2703149002e72a79585c306640e1
|
a5b016f8abeccd17eb60f35385a96c9f067e7806
| 2021-06-09T09:41:05Z |
go
| 2021-09-09T20:51:05Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,265 |
["server/client.go", "server/mqtt.go", "server/mqtt_test.go"]
|
invalid memory address or nil pointer dereference: server/mqtt.go:3074
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
Stack trace should be enough
#### Versions of `nats-server` and affected client libraries used:
nats-server v2.2.6
#### OS/Container environment:
nats:latest
#### Steps or code to reproduce the issue:
Not sure how to make this exhaustive, but the main points of the config are:
- Operator mode with full nats resolver
- Leaf node connected (the issue is not on the leafnode but on the main cluster server)
- Websocket enabled
- MQTT enabled
- JetStream enabled
```
port: 4222
monitor_port: 8222
jetstream {
store_dir=nats
}
leafnodes {
port: 7422
}
websocket {
port: 4443
no_tls: true
}
mqtt {
port: 1883
}
operator: "/nats/config/operator.jwt"
resolver: {
type: full
dir: "/nats/accounts"
allow_delete: false
}
```
#### Expected result:
Normal operation
#### Actual result:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8a8e23]
goroutine 19443 [running]:
github.com/nats-io/nats-server/server.mqttDeliverMsgCbQos0(0xc000157c80, 0xc0002c4600, 0xc000141000, 0x35, 0x0, 0x0, 0xc0002c4859, 0xef, 0xfc7)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/mqtt.go:3074 +0x103
github.com/nats-io/nats-server/server.(*client).deliverMsg(0xc0002c4600, 0xc000157c80, 0xc0002c4820, 0x35, 0x1000, 0x0, 0x0, 0x0, 0xc0002c5a81, 0x52, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3072 +0xa6f
github.com/nats-io/nats-server/server.(*client).processMsgResults(0xc0002c4600, 0xc0002c8b40, 0xc0006be780, 0xc0002c4859, 0xf1, 0xfc7, 0x0, 0x0, 0x0, 0xc0002c4820, ...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:4043 +0x5a5
github.com/nats-io/nats-server/server.(*client).processInboundLeafMsg(0xc0002c4600, 0xc0002c4859, 0xf1, 0xfc7)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:2095 +0x278
github.com/nats-io/nats-server/server.(*client).processInboundMsg(0xc0002c4600, 0xc0002c4859, 0xf1, 0xfc7)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3392 +0xcd
github.com/nats-io/nats-server/server.(*client).parse(0xc0002c4600, 0xc000663a00, 0x106, 0x200, 0x106, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/parser.go:477 +0x2525
github.com/nats-io/nats-server/server.(*client).readLoop(0xc0002c4600, 0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:1160 +0x625
github.com/nats-io/nats-server/server.(*Server).createLeafNode.func1()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/leafnode.go:921 +0x45
created by github.com/nats-io/nats-server/server.(*Server).startGoRoutine
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:2804 +0xc5
```
I guess this stack trace should be enough to find the issue. If not, let me know and I'll add details about the configuration.
|
https://github.com/nats-io/nats-server/issues/2265
|
https://github.com/nats-io/nats-server/pull/2268
|
376b60d297453315a765c6dab0584319cf3d83ec
|
8a712daf8f9e0a838fbfd30a5afcb514e69be12b
| 2021-06-08T09:34:17Z |
go
| 2021-06-08T22:07:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,242 |
["server/jetstream_cluster.go", "server/jetstream_cluster_test.go", "server/stream.go"]
|
expected sequence does not match store when using Nats-Msg-Id header
|
When using a msg id to prevent duplicates, an error is always returned when sending a non-duplicate after having received a duplicate ack result:
```
docker run -ti --rm --network host natsio/nats-box
nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --replicas 2
nats con add ORDERS NEW --filter ORDERS.new --ack explicit --pull --deliver all --sample 100 --max-deliver 20 --replay instant --max-pending 20
nats req -H Nats-Msg-Id:1 ORDERS.new hello1
nats req -H Nats-Msg-Id:1 ORDERS.new hello2 # duplicate
nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # new msg always fails after duplicate
nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # retry 1 fails
nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # retry 2 succeeds
```
First message + duplicate:
```
# nats req -H Nats-Msg-Id:1 ORDERS.new hello1
05:16:03 Sending request on "ORDERS.new"
05:16:03 Received on "_INBOX.r0yfSLDniVtFWbJXzrdkyZ.OGFSlLFI" rtt 1.003804ms
{"stream":"ORDERS","seq":1}
# nats req -H Nats-Msg-Id:1 ORDERS.new hello2 # duplicate
05:16:03 Sending request on "ORDERS.new"
05:16:03 Received on "_INBOX.Gu8EQT3FXwoZTn2pXW8oRb.MPEuuDBY" rtt 973.104µs
{"stream":"ORDERS","seq":1,"duplicate": true}
```
First attempt of new message after duplicate
```
# nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # new msg always fails after duplicate
05:16:03 Sending request on "ORDERS.new"
05:16:03 Received on "_INBOX.9MXKkqfTzruzga2f9zztXo.nRG0SrMJ" rtt 1.058504ms
{"error":{"code":503,"description":"expected sequence does not match store"},"stream":"ORDERS","seq":0}
```
Second attempt of new message after duplicate
```
# nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # retry 1 fails
05:16:03 Sending request on "ORDERS.new"
05:16:03 Received on "_INBOX.oNSsmTtzsFQ2kQKyypIpAp.HStDnu1s" rtt 1.014404ms
{"error":{"code":503,"description":"expected stream sequence does not match"},"stream":"ORDERS","seq":0}
```
Third attempt of new message after duplicate
```
# nats req -H Nats-Msg-Id:2 ORDERS.new hello3 # retry 2 succeeds
05:16:04 Sending request on "ORDERS.new"
05:16:04 Received on "_INBOX.9V2tVYbsCj5QAYZUdglPSz.vRWPMBoS" rtt 946.503µs
{"stream":"ORDERS","seq":2}
```
Configs:
```
server_name: n1-c1
port: 4221
monitor_port: 8221
jetstream {
store_dir: /nats/data1
}
cluster {
name: c1
port: 6221
authorization {
user: ruser
password: T0pS3cr3t
timeout: 2
}
routes = [
nats-route://ruser:T0pS3cr3t@localhost:6222
nats-route://ruser:T0pS3cr3t@localhost:6223
]
}
#######################################################
server_name: n2-c1
port: 4222
monitor_port: 8222
jetstream {
store_dir: /nats/data2
}
cluster {
name: c1
port: 6222
authorization {
user: ruser
password: T0pS3cr3t
timeout: 2
}
routes = [
nats-route://ruser:T0pS3cr3t@localhost:6221
nats-route://ruser:T0pS3cr3t@localhost:6223
]
}
#######################################################
server_name: n3-c1
port: 4223
monitor_port: 8223
jetstream {
store_dir: /nats/data3
}
cluster {
name: c1
port: 6223
authorization {
user: ruser
password: T0pS3cr3t
timeout: 2
}
routes = [
nats-route://ruser:T0pS3cr3t@localhost:6221
nats-route://ruser:T0pS3cr3t@localhost:6222
]
}
```
Docker images used and start commands:
```
docker run --rm --network host --name n1c1 -ti -v /opt/nats/n1:/nats nats:2.2.5-scratch -js -c /nats/nats-server-js.conf
docker run --rm --network host --name n2c1 -ti -v /opt/nats/n2:/nats nats:2.2.5-scratch -js -c /nats/nats-server-js.conf
docker run --rm --network host --name n3c1 -ti -v /opt/nats/n3:/nats nats:2.2.5-scratch -js -c /nats/nats-server-js.conf
```
Logs:
```
# nats str add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard old --dupe-window="0s" --replicas 2
Information for Stream ORDERS created 2021-05-24T05:16:02Z
Configuration:
Subjects: ORDERS.*
Acknowledgements: true
Retention: File - Limits
Replicas: 2
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 1y0d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: c1
Leader: n2-c1
Replica: n1-c1, current, seen 0.00s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
# nats con add ORDERS NEW --filter ORDERS.new --ack explicit --pull --deliver all --sample 100 --max-deliver 20 --replay instant --max-pending 20
Information for Consumer ORDERS > NEW created 2021-05-24T05:16:02Z
Configuration:
Durable Name: NEW
Pull Mode: true
Filter Subject: ORDERS.new
Deliver All: true
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Maximum Deliveries: 20
Sampling Rate: 100
Max Ack Pending: 20
Cluster Information:
Name: c1
Leader: n2-c1
Replica: n1-c1, current, seen 0.00s ago
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 0 out of maximum 20
Redelivered Messages: 0
Unprocessed Messages: 0
# n1c1
[1] 2021/05/24 05:15:45.501791 [INF] Starting nats-server
[1] 2021/05/24 05:15:45.501840 [INF] Version: 2.2.5
[1] 2021/05/24 05:15:45.501850 [INF] Git: [b7e1f66]
[1] 2021/05/24 05:15:45.501852 [INF] Name: n1-c1
[1] 2021/05/24 05:15:45.501856 [INF] Node: oP5LzZ64
[1] 2021/05/24 05:15:45.501858 [INF] ID: NA2LUWZUM4K5LXSTMBVJBVV2BDHOQUM2RZVTOFV2KCXEECCO7NWJRMW7
[1] 2021/05/24 05:15:45.501866 [INF] Using configuration file: /nats/nats-server-js.conf
[1] 2021/05/24 05:15:45.503345 [INF] Starting JetStream
[1] 2021/05/24 05:15:45.503687 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2021/05/24 05:15:45.503699 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2021/05/24 05:15:45.503702 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2021/05/24 05:15:45.503704 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2021/05/24 05:15:45.503705 [INF]
[1] 2021/05/24 05:15:45.503707 [INF] https://docs.nats.io/jetstream
[1] 2021/05/24 05:15:45.503709 [INF]
[1] 2021/05/24 05:15:45.503711 [INF] ---------------- JETSTREAM ----------------
[1] 2021/05/24 05:15:45.503716 [INF] Max Memory: 47.11 GB
[1] 2021/05/24 05:15:45.503719 [INF] Max Storage: 71.63 GB
[1] 2021/05/24 05:15:45.503721 [INF] Store Directory: "/nats/data1/jetstream"
[1] 2021/05/24 05:15:45.503722 [INF] -------------------------------------------
[1] 2021/05/24 05:15:45.503954 [INF] Starting JetStream cluster
[1] 2021/05/24 05:15:45.503966 [INF] Creating JetStream metadata controller
[1] 2021/05/24 05:15:45.504617 [INF] JetStream cluster bootstrapping
[1] 2021/05/24 05:15:45.507293 [INF] Starting http monitor on 0.0.0.0:8221
[1] 2021/05/24 05:15:45.507364 [INF] Listening for client connections on 0.0.0.0:4221
[1] 2021/05/24 05:15:45.507844 [INF] Server is ready
[1] 2021/05/24 05:15:45.507878 [INF] Cluster name is c1
[1] 2021/05/24 05:15:45.507917 [INF] Listening for route connections on 0.0.0.0:6221
[1] 2021/05/24 05:15:45.508760 [ERR] Error trying to connect to route (attempt 1): dial tcp 127.0.0.1:6222: connect: connection refused
[1] 2021/05/24 05:15:45.508761 [ERR] Error trying to connect to route (attempt 1): dial tcp 127.0.0.1:6223: connect: connection refused
[1] 2021/05/24 05:15:45.834585 [INF] 127.0.0.1:55950 - rid:6 - Route connection created
[1] 2021/05/24 05:15:46.509468 [INF] 127.0.0.1:6222 - rid:7 - Route connection created
[1] 2021/05/24 05:15:46.509703 [INF] 127.0.0.1:6222 - rid:7 - Router connection closed: Duplicate Route
[1] 2021/05/24 05:15:46.763209 [INF] 127.0.0.1:55958 - rid:8 - Route connection created
[1] 2021/05/24 05:15:47.510420 [INF] 127.0.0.1:6223 - rid:9 - Route connection created
[1] 2021/05/24 05:15:47.510764 [INF] 127.0.0.1:6223 - rid:9 - Router connection closed: Duplicate Route
[1] 2021/05/24 05:16:03.262354 [ERR] JetStream failed to store a msg on stream '$G > ORDERS' - expected sequence does not match store
[1] 2021/05/24 05:16:03.323042 [WRN] Got stream sequence mismatch for '$G > ORDERS'
[1] 2021/05/24 05:16:03.323801 [WRN] Resetting stream '$G > ORDERS'
# n2c1
[1] 2021/05/24 05:15:45.827275 [INF] Starting nats-server
[1] 2021/05/24 05:15:45.827315 [INF] Version: 2.2.5
[1] 2021/05/24 05:15:45.827317 [INF] Git: [b7e1f66]
[1] 2021/05/24 05:15:45.827319 [INF] Name: n2-c1
[1] 2021/05/24 05:15:45.827340 [INF] Node: YGhnH3VX
[1] 2021/05/24 05:15:45.827343 [INF] ID: NBFT2DN4J3K7CHME3GZES5QSHJITYMPVGTUMMDQU7EIRUENXQZCXPWI4
[1] 2021/05/24 05:15:45.827356 [INF] Using configuration file: /nats/nats-server-js.conf
[1] 2021/05/24 05:15:45.828060 [INF] Starting JetStream
[1] 2021/05/24 05:15:45.828292 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2021/05/24 05:15:45.828306 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2021/05/24 05:15:45.828308 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2021/05/24 05:15:45.828309 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2021/05/24 05:15:45.828311 [INF]
[1] 2021/05/24 05:15:45.828313 [INF] https://docs.nats.io/jetstream
[1] 2021/05/24 05:15:45.828314 [INF]
[1] 2021/05/24 05:15:45.828316 [INF] ---------------- JETSTREAM ----------------
[1] 2021/05/24 05:15:45.828321 [INF] Max Memory: 47.11 GB
[1] 2021/05/24 05:15:45.828324 [INF] Max Storage: 71.63 GB
[1] 2021/05/24 05:15:45.828326 [INF] Store Directory: "/nats/data2/jetstream"
[1] 2021/05/24 05:15:45.828328 [INF] -------------------------------------------
[1] 2021/05/24 05:15:45.828611 [INF] Starting JetStream cluster
[1] 2021/05/24 05:15:45.828625 [INF] Creating JetStream metadata controller
[1] 2021/05/24 05:15:45.829221 [INF] JetStream cluster bootstrapping
[1] 2021/05/24 05:15:45.833060 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2021/05/24 05:15:45.833112 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2021/05/24 05:15:45.833505 [INF] Server is ready
[1] 2021/05/24 05:15:45.833520 [INF] Cluster name is c1
[1] 2021/05/24 05:15:45.833560 [INF] Listening for route connections on 0.0.0.0:6222
[1] 2021/05/24 05:15:45.834256 [ERR] Error trying to connect to route (attempt 1): dial tcp 127.0.0.1:6223: connect: connection refused
[1] 2021/05/24 05:15:45.834573 [INF] 127.0.0.1:6221 - rid:6 - Route connection created
[1] 2021/05/24 05:15:46.509514 [INF] 127.0.0.1:51868 - rid:7 - Route connection created
[1] 2021/05/24 05:15:46.509817 [INF] 127.0.0.1:51868 - rid:7 - Router connection closed: Duplicate Route
[1] 2021/05/24 05:15:46.763164 [INF] 127.0.0.1:51872 - rid:8 - Route connection created
[1] 2021/05/24 05:15:46.834928 [INF] 127.0.0.1:6223 - rid:9 - Route connection created
[1] 2021/05/24 05:15:46.835205 [INF] 127.0.0.1:6223 - rid:9 - Router connection closed: Duplicate Route
[1] 2021/05/24 05:15:49.118613 [INF] JetStream cluster new metadata leader
[1] 2021/05/24 05:16:02.829210 [INF] JetStream cluster new stream leader for '$G > ORDERS'
[1] 2021/05/24 05:16:03.068025 [INF] JetStream cluster new consumer leader for '$G > ORDERS > NEW'
[1] 2021/05/24 05:16:03.262225 [ERR] JetStream failed to store a msg on stream '$G > ORDERS' - expected sequence does not match store
[1] 2021/05/24 05:16:03.322924 [WRN] Got stream sequence mismatch for '$G > ORDERS'
[1] 2021/05/24 05:16:03.324023 [WRN] Resetting stream '$G > ORDERS'
[1] 2021/05/24 05:16:03.448718 [INF] JetStream cluster new stream leader for '$G > ORDERS'
# n3c1
[1] 2021/05/24 05:15:46.757037 [INF] Starting nats-server
[1] 2021/05/24 05:15:46.757084 [INF] Version: 2.2.5
[1] 2021/05/24 05:15:46.757087 [INF] Git: [b7e1f66]
[1] 2021/05/24 05:15:46.757090 [INF] Name: n3-c1
[1] 2021/05/24 05:15:46.757094 [INF] Node: hxTjz3J4
[1] 2021/05/24 05:15:46.757096 [INF] ID: NAAIP3PK6PMRE5C5ZFKYMO4JHPZZ23VKIWOI7CNWHIDJL23H7ZXML3FD
[1] 2021/05/24 05:15:46.757104 [INF] Using configuration file: /nats/nats-server-js.conf
[1] 2021/05/24 05:15:46.757729 [INF] Starting JetStream
[1] 2021/05/24 05:15:46.757995 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[1] 2021/05/24 05:15:46.758008 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[1] 2021/05/24 05:15:46.758010 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[1] 2021/05/24 05:15:46.758012 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[1] 2021/05/24 05:15:46.758014 [INF]
[1] 2021/05/24 05:15:46.758016 [INF] https://docs.nats.io/jetstream
[1] 2021/05/24 05:15:46.758017 [INF]
[1] 2021/05/24 05:15:46.758019 [INF] ---------------- JETSTREAM ----------------
[1] 2021/05/24 05:15:46.758023 [INF] Max Memory: 47.11 GB
[1] 2021/05/24 05:15:46.758025 [INF] Max Storage: 71.63 GB
[1] 2021/05/24 05:15:46.758027 [INF] Store Directory: "/nats/data3/jetstream"
[1] 2021/05/24 05:15:46.758029 [INF] -------------------------------------------
[1] 2021/05/24 05:15:46.758188 [INF] Starting JetStream cluster
[1] 2021/05/24 05:15:46.758215 [INF] Creating JetStream metadata controller
[1] 2021/05/24 05:15:46.758801 [INF] JetStream cluster bootstrapping
[1] 2021/05/24 05:15:46.761751 [INF] Starting http monitor on 0.0.0.0:8223
[1] 2021/05/24 05:15:46.761801 [INF] Listening for client connections on 0.0.0.0:4223
[1] 2021/05/24 05:15:46.762113 [INF] Server is ready
[1] 2021/05/24 05:15:46.762130 [INF] Cluster name is c1
[1] 2021/05/24 05:15:46.762151 [INF] Listening for route connections on 0.0.0.0:6223
[1] 2021/05/24 05:15:46.763260 [INF] 127.0.0.1:6222 - rid:6 - Route connection created
[1] 2021/05/24 05:15:46.763315 [INF] 127.0.0.1:6221 - rid:7 - Route connection created
[1] 2021/05/24 05:15:46.835032 [INF] 127.0.0.1:39006 - rid:8 - Route connection created
[1] 2021/05/24 05:15:46.835251 [INF] 127.0.0.1:39006 - rid:8 - Router connection closed: Duplicate Route
[1] 2021/05/24 05:15:47.510547 [INF] 127.0.0.1:39008 - rid:9 - Route connection created
[1] 2021/05/24 05:15:47.510781 [INF] 127.0.0.1:39008 - rid:9 - Router connection closed: Duplicate Route
```
|
https://github.com/nats-io/nats-server/issues/2242
|
https://github.com/nats-io/nats-server/pull/2245
|
d4a0b87235002161c545ee8e49dea283c14b34b8
|
11539ecdd79bf4bf0863690e35d1bdd4d061a601
| 2021-05-24T05:43:02Z |
go
| 2021-05-24T16:29:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,243 |
["server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
Messages not deleted from `WorkQueuePolicy` stream when replicas > 1
|
It may be that I misunderstand the meaning of the `Replicas` configuration for a stream. For demo purposes, I have a NATS cluster with two servers running JetStream.
First I created a stream called `FOO` using the `Work Queue` retention policy and set the number of replicas to 2.
```
Stream FOO was created
Information for Stream FOO created 2021-05-21T15:32:16-04:00
Configuration:
Subjects: FOO.*
Acknowledgements: true
Retention: File - WorkQueue
Replicas: 2
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: local
Leader: nats1
Replica: nats2, current, seen 0.00s ago
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
```
I then created a pull-based consumer called TEST, that pulls from the `FOO.test` subject.
```
Information for Consumer FOO > TEST created 2021-05-21T15:36:52-04:00
Configuration:
Durable Name: TEST
Pull Mode: true
Filter Subject: FOO.test
Deliver All: true
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 20,000
Flow Control: false
Cluster Information:
Name: local
Leader: nats2
Replica: nats1, current, seen 0.01s ago
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 0 out of maximum 20000
Redelivered Messages: 0
Unprocessed Messages: 0
```
I then published a single message to the FOO.test subject and confirmed it was sitting in the stream and was unprocessed by the consumer.
```
❯ nats s report
Obtaining Stream stats
+----------------------------------------------------------------------------------+
| Stream Report |
+--------+---------+-----------+----------+-------+------+---------+---------------+
| Stream | Storage | Consumers | Messages | Bytes | Lost | Deleted | Replicas |
+--------+---------+-----------+----------+-------+------+---------+---------------+
| FOO | File | 1 | 1 | 43 B | 0 | 0 | nats1*, nats2 |
+--------+---------+-----------+----------+-------+------+---------+---------------+
❯ nats c report
? Select a Stream FOO
Consumer report for FOO with 1 consumers
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
| Consumer | Mode | Ack Policy | Ack Wait | Ack Pending | Redelivered | Unprocessed | Ack Floor | Cluster |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
| TEST | Pull | Explicit | 30.00s | 0 | 0 | 1 / 100% | 0 | nats1, nats2* |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
```
Finally, I pulled the message from the `TEST` consumer and acknowledged it. The message is removed from the consumer but remains on the stream.
```
❯ nats con next FOO TEST
[15:40:11] subj: FOO.test / tries: 1 / cons seq: 1 / str seq: 1 / pending: 0
hello
Acknowledged message
❯ nats c report
? Select a Stream FOO
Consumer report for FOO with 1 consumers
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
| Consumer | Mode | Ack Policy | Ack Wait | Ack Pending | Redelivered | Unprocessed | Ack Floor | Cluster |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
| TEST | Pull | Explicit | 30.00s | 0 | 0 | 0 | 1 | nats1, nats2* |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+---------------+
❯ nats s report
Obtaining Stream stats
+----------------------------------------------------------------------------------+
| Stream Report |
+--------+---------+-----------+----------+-------+------+---------+---------------+
| Stream | Storage | Consumers | Messages | Bytes | Lost | Deleted | Replicas |
+--------+---------+-----------+----------+-------+------+---------+---------------+
| FOO | File | 1 | 1 | 43 B | 0 | 0 | nats1*, nats2 |
+--------+---------+-----------+----------+-------+------+---------+---------------+
```
I expected the message to also be removed from the stream since this is a `work queue` stream. I confirmed that the message is removed from the stream when `replicas` is set to 1.
Is this a bug or am I misinterpreting the `replicas` configuration?
Thanks for your help.
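For reference, a minimal Go reproduction sketch using the nats.go JetStream API (the server URL and the propagation sleep are assumptions; the stream and consumer mirror the report):
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Work-queue stream with two replicas, as in the report.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:      "FOO",
		Subjects:  []string{"FOO.*"},
		Retention: nats.WorkQueuePolicy,
		Replicas:  2,
	}); err != nil {
		log.Fatal(err)
	}

	if _, err := js.Publish("FOO.test", []byte("hello")); err != nil {
		log.Fatal(err)
	}

	// Pull the message and ack it; on a work-queue stream the ack
	// should also remove the message from the stream itself.
	sub, err := js.PullSubscribe("FOO.test", "TEST")
	if err != nil {
		log.Fatal(err)
	}
	msgs, _ := sub.Fetch(1, nats.MaxWait(2*time.Second))
	for _, m := range msgs {
		m.Ack()
	}

	time.Sleep(time.Second) // let the ack propagate to the replica
	si, _ := js.StreamInfo("FOO")
	log.Printf("messages left in stream: %d", si.State.Msgs) // expected 0, observed 1
}
```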
|
https://github.com/nats-io/nats-server/issues/2243
|
https://github.com/nats-io/nats-server/pull/2246
|
11539ecdd79bf4bf0863690e35d1bdd4d061a601
|
9d867889cabca95b3bf3908976ef663cf7bcb322
| 2021-05-21T19:48:08Z |
go
| 2021-05-24T17:01:04Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,226 |
["server/mqtt.go", "server/mqtt_test.go"]
|
enable the MQTT function
|
I use three test machines to run NATS and tried to enable the MQTT function, but when simulating a server going down (that is, exiting the program and starting it again), I frequently get the following warning:
JetStream cluster stream '$G > $MQTT_sess_KwEmus3X' has NO quorum, stalled.
The MQTT client is then also often unable to connect to NATS at all.
|
https://github.com/nats-io/nats-server/issues/2226
|
https://github.com/nats-io/nats-server/pull/2236
|
6f6f22e9a76b050e4d2adfbbe7c1a4ea51294c2c
|
b5ea80dd75ff59218ce9453ec1a70a404b8930ec
| 2021-05-19T09:15:43Z |
go
| 2021-05-20T21:21:19Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,213 |
["server/consumer.go", "server/jetstream_api.go", "server/jetstream_cluster.go", "server/jetstream_test.go", "server/monitor.go", "server/stream.go"]
|
single server JS does show direct consumer when it shouldn't (clustered JS does not do that)
|
Config used (ignore the names saying leaf; this is a single-server cluster):
```
port: 4111
server_name: leaf-server-1
jetstream {
store_dir="./store_leaf_1"
}
cluster {
name: leaf
}
include ./accounts.conf
```
Steps to reproduce:
1. add a stream
2. add a source stream sourcing first stream
3. the first stream now lists an extra consumer
```
> nats -s nats://rmt:rmt@localhost:4111 s ls
No Streams defined
> nats -s nats://rmt:rmt@localhost:4111 s add
? Stream Name test
? Subjects to consume test
? Storage backend file
? Retention Policy Limits
? Discard Policy Old
? Stream Messages Limit -1
? Message size limit -1
? Maximum message age limit -1
? Maximum individual message size -1
? Duplicate tracking time window 2m
? Replicas 1
Stream test was created
Information for Stream test created 2021-05-12T00:18:40-04:00
Configuration:
Subjects: test
Acknowledgements: true
Retention: File - Limits
Replicas: 1
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
> nats -s nats://rmt:rmt@localhost:4111 s add --source test
X Sorry, your reply was invalid: Value is required
? Stream Name agg
? Storage backend file
? Retention Policy Limits
? Discard Policy Old
? Stream Messages Limit -1
? Message size limit -1
? Maximum message age limit -1
? Maximum individual message size -1
? Duplicate tracking time window 2m
? Replicas 1
? Adjust source "test" start No
? Import "test" from a different JetStream domain No
? Import "test" from a different account No
Stream agg was created
Information for Stream agg created 2021-05-12T00:18:59-04:00
Configuration:
Acknowledgements: true
Retention: File - Limits
Replicas: 1
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Sources: test
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
> nats -s nats://rmt:rmt@localhost:4111 c ls
? Select a Stream test
Consumers for Stream test:
abCyIUyQ
> nats -s nats://rmt:rmt@localhost:4111 c info test
? Select a Consumer abCyIUyQ
Information for Consumer test > abCyIUyQ created 2021-05-12T00:22:25-04:00
Configuration:
Delivery Subject: $JS.S.1o9KHHEE
Deliver All: true
Ack Policy: None
Replay Policy: Instant
Maximum Deliveries: 1
Idle Heartbeat: 2.00s
Flow Control: true
Cluster Information:
Name: leaf
Leader: leaf-server-1
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 0
Redelivered Messages: 0
Unprocessed Messages: 0
>
```
|
https://github.com/nats-io/nats-server/issues/2213
|
https://github.com/nats-io/nats-server/pull/2214
|
bc9ac880322647d0f121ade6cba999f1812e1be9
|
30191ada962ffc21b58004fcf9b4a10b10961533
| 2021-05-12T04:24:32Z |
go
| 2021-05-12T15:45:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,207 |
["server/mqtt_test.go", "server/websocket.go", "server/websocket_test.go", "test/leafnode_test.go"]
|
How to provide origin header when connecting a leafnode server via websocket?
|
Currently we are testing NATS leafnode connections via websocket. This is a very cool feature and important to us, as it could reduce the infrastructure costs a lot.
I saw the "allowed_origins" parameter in the websocket config block and tried it out. Unfortunately, I don't know how to provide the matching value from the leafnode side; there is no config option for it.
Our NATS server logs the following error: "websocket handshake error: origin not allowed: origin not provided".
With the restriction commented out the connection works as expected.
Could you describe how we have to provide the origin info (the client's request Origin header) to the server?
|
https://github.com/nats-io/nats-server/issues/2207
|
https://github.com/nats-io/nats-server/pull/2211
|
a4061f4579ea400041675d9681b84c6ebc2994c6
|
bc9ac880322647d0f121ade6cba999f1812e1be9
| 2021-05-11T09:33:20Z |
go
| 2021-05-12T15:13:40Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,205 |
["server/client.go", "server/jetstream_cluster_test.go", "server/test_test.go"]
|
When the leaf node is a cluster, stream create request responses (and others) do not seem to return.
|
I ran a version of the following setup, that only has two JS domains (hub/spoke-1).
When spoke-1 is a single node, everything appears to be working.
When spoke-1 is a cluster, responses seem to get lost. I'm writing this as I observe the streams get created.
```
+------------------------------------------------------------------------------+
| Stream Report |
+--------+---------+-----------+----------+-------+------+---------+-----------+
| Stream | Storage | Consumers | Messages | Bytes | Lost | Deleted | Replicas |
+--------+---------+-----------+----------+-------+------+---------+-----------+
| test0 | File | 0 | 0 | 0 B | 0 | 0 | srv-4252* |
| test1 | File | 0 | 0 | 0 B | 0 | 0 | srv-4252* |
| test2 | File | 0 | 0 | 0 B | 0 | 0 | srv-4252* |
| test3 | File | 0 | 0 | 0 B | 0 | 0 | srv-4242* |
+--------+---------+-----------+----------+-------+------+---------+-----------+
```
Only one response was received. (When `--js-domain spoke-1` is specified, the request goes to `$JS.spoke-1.API.STREAM.CREATE.test5`.)
```
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test0 test0
nats: error: could not create Stream: context deadline exceeded
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test1 test1
nats: error: could not create Stream: context deadline exceeded
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test2 test2
^[[Anats: error: could not create Stream: context deadline exceeded
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test3 test3
Stream test3 was created
Information for Stream test3 created 2021-05-10T21:37:34-04:00
Configuration:
Subjects: test3
Acknowledgements: true
Retention: File - Limits
Replicas: 1
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: cluster-spoke-1
Leader: srv-4242
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
>
```
Here, with tracing enabled, the first attempt passes and the second attempt exceeds the deadline.
```
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test5 test5 --trace
21:43:53 >>> $JS.spoke-1.API.STREAM.CREATE.test5
{"name":"test5","subjects":["test5"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"max_age":0,"max_msg_size":-1,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000}
21:43:53 <<< $JS.spoke-1.API.STREAM.CREATE.test5
{"type":"io.nats.jetstream.api.v1.stream_create_response","config":{"name":"test5","subjects":["test5"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"discard":"old","max_age":0,"max_msg_size":-1,"storage":"file","num_replicas":1,"duplicate_window":120000000000},"created":"2021-05-11T01:43:53.599287Z","state":{"messages":0,"bytes":0,"first_seq":0,"first_ts":"0001-01-01T00:00:00Z","last_seq":0,"last_ts":"0001-01-01T00:00:00Z","consumer_count":0},"cluster":{"name":"cluster-spoke-1","leader":"srv-4242"}}
Stream test5 was created
Information for Stream test5 created 2021-05-10T21:43:53-04:00
Configuration:
Subjects: test5
Acknowledgements: true
Retention: File - Limits
Replicas: 1
Discard Policy: Old
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: cluster-spoke-1
Leader: srv-4242
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test6 test6 --trace
21:44:07 >>> $JS.spoke-1.API.STREAM.CREATE.test6
{"name":"test6","subjects":["test6"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"max_age":0,"max_msg_size":-1,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000}
21:44:12 <<< $JS.spoke-1.API.STREAM.CREATE.test6: context deadline exceeded
nats: error: could not create Stream: context deadline exceeded
>
```
Errors appear to work. (I tried an already-existing stream name 10 times and got an error every time.)
```
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test6 test6 --trace
21:56:48 >>> $JS.spoke-1.API.STREAM.CREATE.test6
{"name":"test6","subjects":["test6"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"max_age":0,"max_msg_size":-1,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000}
21:56:48 <<< $JS.spoke-1.API.STREAM.CREATE.test6
{"type":"io.nats.jetstream.api.v1.stream_create_response","error":{"code":500,"description":"stream name already in use"}}
nats: error: could not create Stream: stream name already in use
> nats --context=hub s add --js-domain spoke-1 --config test.conf --subjects test6 test6 --trace
21:56:49 >>> $JS.spoke-1.API.STREAM.CREATE.test6
{"name":"test6","subjects":["test6"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"max_age":0,"max_msg_size":-1,"storage":"file","discard":"old","num_replicas":1,"duplicate_window":120000000000}
21:56:49 <<< $JS.spoke-1.API.STREAM.CREATE.test6
{"type":"io.nats.jetstream.api.v1.stream_create_response","error":{"code":500,"description":"stream name already in use"}}
nats: error: could not create Stream: stream name already in use
```
|
https://github.com/nats-io/nats-server/issues/2205
|
https://github.com/nats-io/nats-server/pull/2212
|
310105fb53780fd6b6ee611363b8b4d101c9bcbc
|
a4061f4579ea400041675d9681b84c6ebc2994c6
| 2021-05-11T01:47:38Z |
go
| 2021-05-12T15:10:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,202 |
["server/errors.go", "server/jetstream_cluster_test.go", "server/jetstream_test.go", "server/stream.go"]
|
Source stream does not import from another stream, if that stream name is not unique within the importing stream sources
|
If I create a stream named test in domain spoke-1 as well as in domain spoke-2, the resulting source stream only receives from one of them.
This issue also applies to sourcing identically named streams from two different accounts.
The issue is that `StreamConfig.Sources` are a list but internally `stream.sources` is a map indexed by the source name.
We should probably index it by prefix+name to avoid issues with identically named streams residing in different domains.
Functions to look at are located in `stream.go` and named `startingSequenceForSources` and `update`.
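A minimal, self-contained Go sketch of the proposed composite key (the type and field names here are illustrative stand-ins, not the server's actual internals):
```go
package main

import "fmt"

// Minimal stand-ins for the server's config types (illustrative only).
type ExternalStream struct{ ApiPrefix string }

type StreamSource struct {
	Name     string
	External *ExternalStream
}

// sourceKey keys the internal sources map by prefix+name so that two
// identically named streams from different domains/accounts don't collide.
type sourceKey struct{ prefix, name string }

func keyForSource(si *StreamSource) sourceKey {
	k := sourceKey{name: si.Name}
	if si.External != nil {
		k.prefix = si.External.ApiPrefix
	}
	return k
}

func main() {
	sources := map[sourceKey]*StreamSource{}
	a := &StreamSource{Name: "test", External: &ExternalStream{ApiPrefix: "$JS.spoke-1.API"}}
	b := &StreamSource{Name: "test", External: &ExternalStream{ApiPrefix: "$JS.spoke-2.API"}}
	sources[keyForSource(a)] = a
	sources[keyForSource(b)] = b
	fmt.Println(len(sources)) // 2: both sources are kept, no collision
}
```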
|
https://github.com/nats-io/nats-server/issues/2202
|
https://github.com/nats-io/nats-server/pull/2209
|
51071c8aa9d163f296d5c1ac8cc77359ada8bf63
|
310105fb53780fd6b6ee611363b8b4d101c9bcbc
| 2021-05-10T23:00:45Z |
go
| 2021-05-11T21:28:24Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,170 |
["server/monitor.go", "server/monitor_test.go"]
|
Resource usage issues after upgrading Jetstream cluster to 2.2.2
|
We started seeing strange resource usage patterns after updating to 2.2.2. Our application code hasn't changed at all, and is very simple. It just creates a Jetstream stream and publishes messages to it (via new style requests). There are no consumers at all.
The memory usage of one of servers kept growing despite having zero connections to it. Also, it looks like the subscriptions are growing indefinitely, though that might be a reporting error.

After upgrading to 2.2.2, you can see the subscriptions keep growing indefinitely, but we figured that was just a reporting issue due to metric names being changed, i.e. `gnatsd_varz_subscriptions` vs `gnatsd_subsz_num_subscriptions` (the latter doesn't grow indefinitely, but holds constant).
But then seeing memory usage grow on one server, seemingly indefinitely until a sudden drop, caused us to open this issue. Something seems strange.
Also, all the server logs show this error happening a lot:
```
[1] 2021/04/30 17:11:03.145831 [WRN] JetStream resource limits exceeded for account: "$G"
```
We have very few connections to the Nats cluster (as seen in the graphs). Each connection makes a subscription to `_INBOX.<inbox-uid>.*` and makes requests using `reply_to`s like `_INBOX.<inbox-uid>.<request-uid>`. Lastly, the data volumes attached to each node are only at 20% capacity.
How to debug this further? Thanks for the help.
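For context, here is a hedged Go sketch of the kind of publish-only workload described above (the subject, URL, rate, and iteration count are illustrative assumptions, not our exact values):
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// New-style request publish: the client keeps one subscription on
	// _INBOX.<inbox-uid>.* and each publish uses a per-request reply
	// subject _INBOX.<inbox-uid>.<request-uid>, as described above.
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 100000; i++ {
		if _, err := js.Publish("events.ingest", []byte("payload")); err != nil {
			log.Printf("publish failed: %v", err) // e.g. resource limits exceeded
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```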
|
https://github.com/nats-io/nats-server/issues/2170
|
https://github.com/nats-io/nats-server/pull/2172
|
670f44f1e82eec556ee44a954c0c67d24bbc575d
|
850307ae4ac9d8d80f1f5a5732664050c779d322
| 2021-05-01T18:34:44Z |
go
| 2021-05-04T01:57:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,160 |
["server/jwt.go", "server/jwt_test.go", "server/leafnode_test.go", "server/server.go"]
|
Nats server with resolver: {type: full} crashes if `system_account` is not set
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
```sh
$ nats-server -DV
[3662451] 2021/04/24 12:08:54.602759 [INF] Starting nats-server
[3662451] 2021/04/24 12:08:54.602821 [INF] Version: 2.2.2
[3662451] 2021/04/24 12:08:54.602826 [INF] Git: [a5f3aab]
[3662451] 2021/04/24 12:08:54.602830 [DBG] Go build: go1.16.3
[3662451] 2021/04/24 12:08:54.602834 [INF] Name: NCHSX3HTBSRJNWB2IJAM6R2FGBNN5FRZ3X33RGHG23OWHTILXXB7ORLX
[3662451] 2021/04/24 12:08:54.602838 [INF] ID: NCHSX3HTBSRJNWB2IJAM6R2FGBNN5FRZ3X33RGHG23OWHTILXXB7ORLX
[3662451] 2021/04/24 12:08:54.602868 [DBG] Created system account: "$SYS"
[3662451] 2021/04/24 12:08:54.603528 [INF] Listening for client connections on 0.0.0.0:4222
[3662451] 2021/04/24 12:08:54.603537 [DBG] Get non local IPs for "0.0.0.0"
[3662451] 2021/04/24 12:08:54.603921 [DBG] ip=192.168.199.2
[3662451] 2021/04/24 12:08:54.603928 [DBG] ip=fdfc:6c1c:2d83:10:c1dc:4824:353:6a4a
[3662451] 2021/04/24 12:08:54.603932 [DBG] ip=fdfc:6c1c:2d83:10:81f2:d9e:85d:6b40
[3662451] 2021/04/24 12:08:54.603936 [DBG] ip=fdfc:6c1c:2d83:10:c416:6500:883c:682c
[3662451] 2021/04/24 12:08:54.603965 [DBG] ip=192.168.122.1
[3662451] 2021/04/24 12:08:54.604022 [DBG] ip=192.168.39.1
[3662451] 2021/04/24 12:08:54.604076 [DBG] ip=172.19.0.1
[3662451] 2021/04/24 12:08:54.604116 [DBG] ip=172.17.0.1
[3662451] 2021/04/24 12:08:54.604155 [INF] Server is ready
^C[3662451] 2021/04/24 12:08:57.148228 [DBG] Trapped "interrupt" signal
[3662451] 2021/04/24 12:08:57.148324 [INF] Initiating Shutdown...
[3662451] 2021/04/24 12:08:57.148462 [DBG] SYSTEM - System connection closed: Client Closed
[3662451] 2021/04/24 12:08:57.148471 [DBG] Client accept loop exiting..
[3662451] 2021/04/24 12:08:57.148503 [INF] Server Exiting..
```
#### OS/Container environment:
Ubuntu 20.04.2 LTS
#### Steps or code to reproduce the issue:
Use this config
```
trace: true
operator: /home/sindre/.nsc/nats/O_operator/O_operator.jwt
resolver: {
type: full
dir: '/home/sindre/src/iterate/app.iterate.no/nats/testing_nsc/jwt'
}
```
Start nats-server
```
nats-server -c nas.config
```
#### Expected result:
Error message: `system_account required` or at least not crashing
#### Actual result:
Server crashes with a SIGSEGV
```
$ nats-server -c nas.config
[3663003] 2021/04/24 12:10:15.199546 [INF] Starting nats-server
[3663003] 2021/04/24 12:10:15.199613 [INF] Version: 2.2.2
[3663003] 2021/04/24 12:10:15.199618 [INF] Git: [a5f3aab]
[3663003] 2021/04/24 12:10:15.199627 [INF] Name: NDREZQBDOGGJJFX24Z5JMY3J6PC2QKF5PCP64ZTSPQHXEX667D3WVSA3
[3663003] 2021/04/24 12:10:15.199631 [INF] ID: NDREZQBDOGGJJFX24Z5JMY3J6PC2QKF5PCP64ZTSPQHXEX667D3WVSA3
[3663003] 2021/04/24 12:10:15.199640 [INF] Using configuration file: nas.config
[3663003] 2021/04/24 12:10:15.199645 [INF] Trusted Operators
[3663003] 2021/04/24 12:10:15.199651 [INF] System : ""
[3663003] 2021/04/24 12:10:15.199656 [INF] Operator: "O_operator"
[3663003] 2021/04/24 12:10:15.199677 [INF] Issued : 2021-04-24 11:41:08 +0200 CEST
[3663003] 2021/04/24 12:10:15.199683 [INF] Expires : 1970-01-01 01:00:00 +0100 CET
[3663003] 2021/04/24 12:10:15.199690 [WRN] Trusted Operators should utilize a System Account
[3663003] 2021/04/24 12:10:15.199761 [INF] Server is ready
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xa0 pc=0x7e467a]
goroutine 1 [running]:
github.com/nats-io/nats-server/server.(*Server).newRespInbox(0xc0001b2000, 0xc000144150, 0x38)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/events.go:1705 +0x3a
github.com/nats-io/nats-server/server.(*DirAccResolver).Start(0xc00007eb60, 0xc0001b2000, 0x0, 0x0)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:3634 +0x1d9
github.com/nats-io/nats-server/server.(*Server).Start(0xc0001b2000)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1524 +0xf78
github.com/nats-io/nats-server/server.Run(...)
/home/travis/gopath/src/github.com/nats-io/nats-server/server/service.go:21
main.main()
/home/travis/gopath/src/github.com/nats-io/nats-server/main.go:116 +0x188
```
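A hedged sketch of the kind of guard that would turn this panic into a clean configuration error (the types are illustrative stand-ins for the server's options, not the actual patch):
```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-in for the relevant server options (illustrative).
type Options struct {
	AccountResolver interface{} // non-nil when a resolver {...} block is configured
	SystemAccount   string
}

// validateResolver sketches the guard that would turn the SIGSEGV in
// DirAccResolver.Start into a clean configuration error.
func validateResolver(o *Options) error {
	if o.AccountResolver != nil && o.SystemAccount == "" {
		return errors.New("operator mode with a full resolver requires 'system_account' to be set")
	}
	return nil
}

func main() {
	opts := &Options{AccountResolver: struct{}{}} // config above sets a resolver, no system account
	if err := validateResolver(opts); err != nil {
		fmt.Println("config error:", err) // expected outcome instead of a panic
	}
}
```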
|
https://github.com/nats-io/nats-server/issues/2160
|
https://github.com/nats-io/nats-server/pull/2162
|
4430a55eed6c03daf7bab02483e984bf53bcfdfb
|
a67704e245a07cef9afc2be551761d28bd8a95e3
| 2021-04-24T10:12:37Z |
go
| 2021-04-27T00:50:56Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,153 |
["server/leafnode.go", "server/leafnode_test.go", "server/opts.go", "server/opts_test.go"]
|
NATS LeafNode remote URLs : Config to shuffle, on by default
|
## Feature Request
Please allow the shuffling of leaf node remote URLs.
#### Use Case:
All leaf nodes start with the same config, i.e. the same ordered list of leaf remote URLs. This causes the first server in the list to pick up all leaf connections by default. Unlike regular client connections, where server-list shuffling is on by default and can be disabled, leaf remotes have no shuffling at all.
#### Proposed Change:
Add a config option for a LeafRemotes "noShuffle" mode.
Add a function to randomize or shuffle the pool of remote URLs (see the sketch below).
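A minimal Go sketch of such a shuffle (the `noShuffle` flag name is an assumption taken from this proposal, not an existing option):
```go
package main

import (
	"fmt"
	"math/rand"
	"net/url"
)

// shuffleRemoteURLs sketches the proposed behavior: randomize the pool
// of leafnode remote URLs unless a no-shuffle option is set, so leaf
// connections spread across the hub servers instead of piling onto the
// first URL in the list.
func shuffleRemoteURLs(urls []*url.URL, noShuffle bool) {
	if noShuffle {
		return
	}
	rand.Shuffle(len(urls), func(i, j int) {
		urls[i], urls[j] = urls[j], urls[i]
	})
}

func main() {
	u1, _ := url.Parse("nats-leaf://hub1:7422")
	u2, _ := url.Parse("nats-leaf://hub2:7422")
	u3, _ := url.Parse("nats-leaf://hub3:7422")
	pool := []*url.URL{u1, u2, u3}
	shuffleRemoteURLs(pool, false)
	fmt.Println(pool) // order now randomized
}
```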
#### Who Benefits From The Change(s)?
All nats cluster operators with a multitude of leaf-node connections.
|
https://github.com/nats-io/nats-server/issues/2153
|
https://github.com/nats-io/nats-server/pull/2156
|
a8346465e57c1a7f71caff981373d80871f0c890
|
0ba3aaf7bbae57beb7f232c2df63aa740b5a615e
| 2021-04-22T17:01:05Z |
go
| 2021-04-26T18:26:54Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,144 |
["server/jetstream_cluster.go", "server/jetstream_cluster_test.go"]
|
stream subjects overlap is not checked on stream edit/update
|
Streams are not allowed to listen on the same subjects as other streams. When trying to add a stream that listens on the same subjects as an existing stream, the error `subjects overlap with an existing stream` is produced.
This check does not occur when editing a stream's subjects.
To reproduce:
Add a stream that listens to a subject.
Add a second stream that listens to a different subject.
Edit the second stream to listen to the same subject as the first stream.
This is confirmed to be the case using nats.go UpdateStream() as well as the natscli `nats stream edit`.
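A minimal Go reproduction sketch using the nats.go client (the server URL is an assumption):
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Two streams on distinct subjects; both creates succeed.
	js.AddStream(&nats.StreamConfig{Name: "ONE", Subjects: []string{"one"}})
	js.AddStream(&nats.StreamConfig{Name: "TWO", Subjects: []string{"two"}})

	// Updating TWO to listen on "one" should fail with
	// "subjects overlap with an existing stream", but it does not.
	_, err = js.UpdateStream(&nats.StreamConfig{Name: "TWO", Subjects: []string{"one"}})
	fmt.Println("update error:", err) // expected: overlap error; actual: nil
}
```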
|
https://github.com/nats-io/nats-server/issues/2144
|
https://github.com/nats-io/nats-server/pull/2145
|
a48a39251636ed5690ef839169998de99f39339d
|
2ddb95867ec11b88c0ef949ccda56e8789d52edf
| 2021-04-21T20:58:03Z |
go
| 2021-04-21T22:55:31Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,126 |
["server/mqtt.go", "server/mqtt_test.go"]
|
[SUGGESTION] More detailed error messages when an MQTT client fails to connect
|
When I connect to NATS using the MQTT client, I get the following error:
```
PS D:\Kubernetes\playground\nats\nats-server_> ./nats-server -js -sd D:\Kubernetes\playground\nats\datastore -c mqtt.conf
[7744] 2021/04/19 10:46:56.564893 [INF] Starting nats-server
[7744] 2021/04/19 10:46:56.586893 [INF] Version: 2.2.2-beta.3
[7744] 2021/04/19 10:46:56.586893 [INF] Git: [not set]
[7744] 2021/04/19 10:46:56.586893 [INF] Name: NDKS56R6OHWWZRWEX3GWKQTKGBB6TOS66CGZ252MSHTOF7IP4R5253VC
[7744] 2021/04/19 10:46:56.586893 [INF] Node: 3zV1qX12
[7744] 2021/04/19 10:46:56.587895 [INF] ID: NDKS56R6OHWWZRWEX3GWKQTKGBB6TOS66CGZ252MSHTOF7IP4R5253VC
[7744] 2021/04/19 10:46:56.587895 [INF] Using configuration file: mqtt.conf
[7744] 2021/04/19 10:46:56.587895 [INF] Starting JetStream
[7744] 2021/04/19 10:46:56.588893 [INF] _ ___ _____ ___ _____ ___ ___ _ __ __
[7744] 2021/04/19 10:46:56.588893 [INF] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[7744] 2021/04/19 10:46:56.588893 [INF] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[7744] 2021/04/19 10:46:56.588893 [INF] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[7744] 2021/04/19 10:46:56.588893 [INF]
[7744] 2021/04/19 10:46:56.589893 [INF] https://docs.nats.io/jetstream
[7744] 2021/04/19 10:46:56.589893 [INF]
[7744] 2021/04/19 10:46:56.589893 [INF] ---------------- JETSTREAM ----------------
[7744] 2021/04/19 10:46:56.589893 [INF] Max Memory: 11.77 GB
[7744] 2021/04/19 10:46:56.589893 [INF] Max Storage: 1.00 TB
[7744] 2021/04/19 10:46:56.589893 [INF] Store Directory: "D:\\Kubernetes\\playground\\nats\\datastore\\jetstream"
[7744] 2021/04/19 10:46:56.589893 [INF] -------------------------------------------
[7744] 2021/04/19 10:46:56.592893 [INF] Restored 0 messages for Stream "CONTRACT"
[7744] 2021/04/19 10:46:56.592893 [INF] Recovering 1 Consumers for Stream - "CONTRACT"
[7744] 2021/04/19 10:46:56.594898 [INF] Listening for MQTT clients on mqtt://127.0.0.1:1883
[7744] 2021/04/19 10:46:56.595894 [INF] Listening for client connections on 0.0.0.0:4222
[7744] 2021/04/19 10:46:56.651893 [INF] Server is ready
[7744] 2021/04/19 10:46:58.530190 [ERR] 127.0.0.1:49488 - mid:9 - not connected
[7744] 2021/04/19 10:55:56.425784 [ERR] 127.0.0.1:49875 - mid:10 - not connected
```
I cannot find the cause based on the error message.
Here is my MQTT configuration for NATS:
```
mqtt {
# Specify a host and port to listen for websocket connections
#
listen: "127.0.0.1:1883"
# It can also be configured with individual parameters,
# namely host and port.
#
host: "127.0.0.1"
port: 1883
# TLS configuration.
#
#tls {
#cert_file: "server/configs/certs/cert.new.pem"
#key_file: "server/configs/certs/key.new.pem"
# Root CA file
#
# ca_file: "/path/to/ca.pem"
# If true, require and verify client certificates.
#
#verify: false
# TLS handshake timeout in fractional seconds.
#
# timeout: 2.0
# If true, require and verify client certificates and map certificate
# values for authentication purposes.
#
#verify_and_map: false
#}
# If no user name is provided when an MQTT client connects, will default
# this user name in the authentication phase. If specified, this will
# override, for MQTT clients, any `no_auth_user` value defined in the
# main configuration file.
# Note that this is not compatible with running the server in operator mode.
#
# no_auth_user: "my_username_for_apps_not_providing_credentials"
# See below to know what is the normal way of limiting MQTT clients
# to specific users.
# If there are no users specified in the configuration, this simple authorization
# block allows you to override the values that would be configured in the
# equivalent block in the main section.
#
authorization {
# # If this is specified, the client has to provide the same username
# # and password to be able to connect.
# # username: "limbo"
# # password: "limbo"
#
# # If this is specified, the password field in the CONNECT packet has to
# # match this token.
# # token: "my_token"
#
# # This overrides the main's authorization timeout. For consistency
# # with the main's authorization configuration block, this is expressed
# # as a number of seconds.
# # timeout: 2.0
}
# This is the amount of time after which a QoS 1 message sent to
# a client is redelivered as a DUPLICATE if the server has not
# received the PUBACK packet on the original Packet Identifier.
# The value has to be positive.
# Zero will cause the server to use the default value (30 seconds).
# Note that changes to this option is applied only to new MQTT subscriptions.
#
# Expressed as a time duration, with "s", "m", "h" indicating seconds,
# minutes and hours respectively. For instance "10s" for 10 seconds,
# "1m" for 1 minute, etc...
#
# ack_wait: "1m"
# This is the amount of QoS 1 messages the server can send to
# a subscription without receiving any PUBACK for those messages.
# The valid range is [0..65535].
#
# The total of subscriptions' max_ack_pending on a given session cannot
# exceed 65535. Attempting to create a subscription that would bring
# the total above the limit would result in the server returning 0x80
# in the SUBACK for this subscription.
# Due to how the NATS Server handles the MQTT "#" wildcard, each
# subscription ending with "#" will use 2 times the max_ack_pending value.
# Note that changes to this option is applied only to new subscriptions.
#
# max_ack_pending: 100
}
```
|
https://github.com/nats-io/nats-server/issues/2126
|
https://github.com/nats-io/nats-server/pull/2151
|
b0292e40d281ca994dcc061cc56d0b539aa5f808
|
96546040a395d9021b09d768a296328ab1f32e0c
| 2021-04-19T03:02:16Z |
go
| 2021-04-22T15:27:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,097 |
["server/consumer.go", "server/jetstream_cluster_test.go"]
|
the messages are deleted after rm consumer on an InterestPolicy stream
|
1. Create stream t24 and consumer t24. The details are as follows:
Configuration:
Subjects: t24
Acknowledgements: true
Retention: File - Interest
Replicas: 1
Discard Policy: New
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
FirstSeq: 0
LastSeq: 0
Active Consumers: 0
Configuration:
Durable Name: t24
Delivery Subject: t244
Deliver All: true
Ack Policy: All
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 3
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 0 out of maximum 3
Redelivered Messages: 0
Unprocessed Messages: 0
2. Publish 11 messages to subject t24; subscribe to subject t244, get 3 messages, and don't reply with an ack.
The details are as follows:
`[leaf_1@host5 nats-cli]$ ./nats -s nats://leaf_1:leaf_1@localhost:5222 sub t244`
`22:29:14 Subscribing on t244`
`[#1] Received JetStream message: consumer: t24 > t24 / subject: t24 / delivered: 1 / consumer seq: 1 / stream seq: 1 / ack: false`
`1`
`[#2] Received JetStream message: consumer: t24 > t24 / subject: t24 / delivered: 1 / consumer seq: 2 / stream seq: 2 / ack: false`
`2`
`[#3] Received JetStream message: consumer: t24 > t24 / subject: t24 / delivered: 1 / consumer seq: 3 / stream seq: 3 / ack: false`
Configuration:
Durable Name: t24
Delivery Subject: t244
Deliver All: true
Ack Policy: All
Ack Wait: 30s
Replay Policy: Instant
Max Ack Pending: 3
State:
Last Delivered Message: Consumer sequence: 3 Stream sequence: 3
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 3 out of maximum 3
Redelivered Messages: 0
Unprocessed Messages: 7
Configuration:
Subjects: t24
Acknowledgements: true
Retention: File - Interest
Replicas: 1
Discard Policy: New
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 10
Bytes: 341 B
FirstSeq: 1 @ 2021-04-10T14:27:26 UTC
LastSeq: 10 @ 2021-04-10T14:27:41 UTC
Active Consumers: 1
3. Remove consumer t24. The stream details are as follows:
Configuration:
Subjects: t24
Acknowledgements: true
Retention: File - Interest
Replicas: 1
Discard Policy: New
Duplicate Window: 2m0s
Maximum Messages: unlimited
Maximum Bytes: unlimited
Maximum Age: 0.00s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 7
Bytes: 239 B
FirstSeq: 4 @ 2021-04-10T14:27:30 UTC
LastSeq: 10 @ 2021-04-10T14:27:41 UTC
Active Consumers: 0
**Why are the 3 unacked messages of stream t24 deleted? Is this normal?**
|
https://github.com/nats-io/nats-server/issues/2097
|
https://github.com/nats-io/nats-server/pull/2105
|
c63f1d78b28127726d1b82580c6ba59542e22d8c
|
e00cbf4927af53814cdd0c84e88557640a8d8e71
| 2021-04-10T15:01:32Z |
go
| 2021-04-12T18:41:37Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,083 |
["server/consumer.go", "server/jetstream_cluster_test.go"]
|
stream expired messages are not removed from consumer pending ack list
|
A stream with an age limit on its messages will remove messages once they hit that age. However, any messages that were not yet acked on a consumer at that point will continue to be counted as outstanding acks.
If such a consumer reaches its max ack outstanding limit, the consumer is forever dead, as it can never recover from that scenario.
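A minimal Go reproduction sketch using the nats.go client (stream/consumer names, URL, and timings are assumptions):
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	// Stream whose messages expire quickly, and a consumer with a
	// small max-ack-pending, mirroring the scenario above.
	js.AddStream(&nats.StreamConfig{
		Name:     "EVT",
		Subjects: []string{"evt"},
		MaxAge:   2 * time.Second,
	})
	js.AddConsumer("EVT", &nats.ConsumerConfig{
		Durable:       "C",
		AckPolicy:     nats.AckExplicitPolicy,
		MaxAckPending: 10,
	})

	for i := 0; i < 10; i++ {
		js.Publish("evt", []byte("m"))
	}

	// Fetch the messages but never ack them, then let them age out.
	sub, _ := js.PullSubscribe("evt", "C")
	sub.Fetch(10, nats.MaxWait(2*time.Second))
	time.Sleep(3 * time.Second)

	ci, _ := js.ConsumerInfo("EVT", "C")
	// Bug: ack pending stays at 10 even though the stream is now empty,
	// so the consumer can never deliver again.
	log.Printf("ack pending after expiry: %d", ci.NumAckPending)
}
```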
Here is a stream that recently expired all its messages:
```
State:
Messages: 0
Bytes: 0 B
FirstSeq: 101
LastSeq: 100 @ 2021-04-08T11:09:40 UTC
Active Consumers: 1
```
And a consumer on the same stream
```
State:
Last Delivered Message: Consumer sequence: 100 Stream sequence: 10
Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 10 out of maximum 10
Redelivered Messages: 10
Unprocessed Messages: 0
```
At this point the consumer will never again deliver messages, even if new messages are added to the stream.
|
https://github.com/nats-io/nats-server/issues/2083
|
https://github.com/nats-io/nats-server/pull/2085
|
36e18c20ff39cf3ce1f509924596fa4fa3caaadd
|
7030b0eb460c47ced8f8a5f8657271902a4453cc
| 2021-04-08T11:18:06Z |
go
| 2021-04-08T19:31:35Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,082 |
["server/client.go"]
|
prevent stalled wait for client to gateway
|
## Feature Request
Recently, I experienced a network issue between clusters that affected the main cluster: the gateway slowed down, client-to-gateway flushing slowed, and clients hit stalledWait.
This blocked inbound messages for some clients and, as a result, the whole system blocked.
#### Use Case:
One main cluster and several sub-clusters connected via gateways.
Massive volumes of messages are produced on the main cluster and copied to the sub-clusters.
#### Proposed Change:
Flushing a gateway (e.g. during a network issue) should not block clients' inbound messages:
1. When the client kind is GATEWAY, have flushClients() call flushSignal() instead of flushOutbound() (see the sketch below).
2. Add a gateway option to not use the stall channel when the client kind is GATEWAY.
An option may be needed for this proposal.
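A hedged, self-contained Go sketch of proposal (1); the types here are minimal stand-ins for the server's connection internals, not the actual code:
```go
package main

import "fmt"

// Minimal stand-ins for the server's connection type (illustrative).
const GATEWAY = 2

type client struct {
	kind int
	sig  chan struct{}
}

func (c *client) flushSignal() {
	// Non-blocking wake-up of the connection's writeLoop.
	select {
	case c.sig <- struct{}{}:
	default:
	}
}

func (c *client) flushOutbound() { fmt.Println("inline flush") }

// deliverOrSignal sketches proposal (1): never flush a gateway
// connection inline from a producing client; just signal its writeLoop
// and return immediately so a slow gateway link cannot stall producers.
func deliverOrSignal(target *client) {
	if target.kind == GATEWAY {
		target.flushSignal()
		return
	}
	target.flushOutbound()
}

func main() {
	gw := &client{kind: GATEWAY, sig: make(chan struct{}, 1)}
	deliverOrSignal(gw) // returns immediately even if the gateway is slow
}
```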
#### Who Benefits From The Change(s)?
A gateway network issue would no longer affect the local cluster.
#### Alternative Approaches
A small write_deadline and max_pending reduce stalledWait occurrences, but they still happen.
|
https://github.com/nats-io/nats-server/issues/2082
|
https://github.com/nats-io/nats-server/pull/2093
|
a5f3aabb1309b554c31ebde5ff873286c28cf8cf
|
8d4102c404185cc8d1bd979f9dfaaf55cd214453
| 2021-04-08T08:38:49Z |
go
| 2021-04-23T15:18:56Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,069 |
["go.mod", "go.sum", "server/jwt.go", "vendor/github.com/nats-io/jwt/v2/creds_utils.go", "vendor/github.com/nats-io/jwt/v2/header.go", "vendor/modules.txt"]
|
error parsing operator JWT: illegal base64 data on Windows 10
|
## Defect
The server won't run on Windows 10: error parsing operator JWT: illegal base64 data.
I know this works in other environments because it is used in a unit test that runs on Linux.
#### Versions of `nats-server` and affected client libraries used:
nats-server: v2.2.1 but I know this has been around for some time.
#### OS/Container environment:
Windows 10 Pro no container
go version go1.16 windows/amd64
#### Steps or code to reproduce the issue:
1. `jwt.conf` and `op.jwt` are in the same directory
1. from that directory run `nats-server --config jwt.conf -DV`
jwt.conf
```
operator = "op.jwt"
system_account = "ADRNVKNXUYQGTX5AXSPYRKAO427VMF6JG3UDE2OROYP3XRVNQ3GT3BZU"
resolver = MEMORY
resolver_preload = {
ADRNVKNXUYQGTX5AXSPYRKAO427VMF6JG3UDE2OROYP3XRVNQ3GT3BZU : "eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJUTjRYUFhNV01JMjJQN1dPWVM1TE1FV0lPWDJJMkJOVUY2VlRQMklYV0RCRTVTVTJHU1dRIiwiaWF0IjoxNTY1ODg5OTk4LCJpc3MiOiJPQU01VlNINDJXRlZWTkpXNFNMRTZRVkpCREpVRTJGUVNYWkxRTk1SRDdBMlBaTTIzTDIyWFlVWSIsIm5hbWUiOiJzeXN0ZW0iLCJzdWIiOiJBRFJOVktOWFVZUUdUWDVBWFNQWVJLQU80MjdWTUY2SkczVURFMk9ST1lQM1hSVk5RM0dUM0JaVSIsInR5cGUiOiJhY2NvdW50IiwibmF0cyI6eyJsaW1pdHMiOnsic3VicyI6LTEsImNvbm4iOi0xLCJsZWFmIjotMSwiaW1wb3J0cyI6LTEsImV4cG9ydHMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsIndpbGRjYXJkcyI6dHJ1ZX19fQ.Zlz9PN5Fnw2etIFaLXF4YiWS7tA4k22oTwaGxDdgXh8fpA1RmPVKHiJGCMoQidmtHC5C5munhtjhFV7wF44vBg"
AAHVSI55YPNBQZ5P7F676CFAO4PHBTDYFQIEGTKLQTIPEYPFDTJNHHO4: "eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJJWEdFSDNGQ1NVTkxGNDZUVFUzVlBPNVZMMkhUVlNLM003TU1VS09FUk4zUExJWlkzTk5RIiwiaWF0IjoxNTY1ODg5OTEwLCJpc3MiOiJPQU01VlNINDJXRlZWTkpXNFNMRTZRVkpCREpVRTJGUVNYWkxRTk1SRDdBMlBaTTIzTDIyWFlVWSIsIm5hbWUiOiJkZW1vIiwic3ViIjoiQUFIVlNJNTVZUE5CUVo1UDdGNjc2Q0ZBTzRQSEJURFlGUUlFR1RLTFFUSVBFWVBGRFRKTkhITzQiLCJ0eXBlIjoiYWNjb3VudCIsIm5hdHMiOnsiZXhwb3J0cyI6W3sibmFtZSI6ImNyb24iLCJzdWJqZWN0IjoiY3Jvbi5cdTAwM2UiLCJ0eXBlIjoic3RyZWFtIn0seyJuYW1lIjoibGlnbyIsInN1YmplY3QiOiJsaWdvIiwidHlwZSI6InNlcnZpY2UifV0sImxpbWl0cyI6eyJzdWJzIjotMSwiY29ubiI6LTEsImxlYWYiOi0xLCJpbXBvcnRzIjotMSwiZXhwb3J0cyI6LTEsImRhdGEiOi0xLCJwYXlsb2FkIjotMSwid2lsZGNhcmRzIjp0cnVlfX19.oo6CZHPBKCRyz3NQEZK8Xi6ic_4Vb5kIw-cSoSdDwT8T97EvIIZ-ie8MupQgNVinq68zSr2SzCEfTPVkyW84AA"
}
```
op.jwt
```
-----BEGIN TEST OPERATOR JWT-----
eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJKV01TUzNRUFpDS0lHSE1BWko3RUpQSlVHN01DTFNQUkJaTEpSUUlRQkRVTkFaUE5MQVVBIiwiaWF0IjoxNTY1ODg5NzEyLCJpc3MiOiJPQU01VlNINDJXRlZWTkpXNFNMRTZRVkpCREpVRTJGUVNYWkxRTk1SRDdBMlBaTTIzTDIyWFlVWSIsIm5hbWUiOiJzeW5hZGlhIiwic3ViIjoiT0FNNVZTSDQyV0ZWVk5KVzRTTEU2UVZKQkRKVUUyRlFTWFpMUU5NUkQ3QTJQWk0yM0wyMlhZVVkiLCJ0eXBlIjoib3BlcmF0b3IiLCJuYXRzIjp7ImFjY291bnRfc2VydmVyX3VybCI6Imh0dHA6Ly9sb2NhbGhvc3Q6NjA2MC9qd3QvdjEiLCJvcGVyYXRvcl9zZXJ2aWNlX3VybHMiOlsibmF0czovL2xvY2FsaG9zdDo0MTQxIl19fQ.XPvAezQj3AxwEvYLVBq-EIssP4OhjoMGLbIaripzBKv1oCtHdPNKz96YwB2vUoY-4OrN9ZOPo9TKR3jVxq0uBQ
------END TEST OPERATOR JWT------
```
#### Expected result:
Server runs
#### Actual result:
nats-server: jwt.conf:6:1: error parsing operator JWT: illegal base64 data at input byte 10
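A hedged guess at the culprit, with a self-contained Go sketch: on Windows the decorated JWT file carries `\r\n` line endings, and a naive line split leaves a trailing carriage return on the base64 payload, which then fails to decode. (This is an assumption for illustration; the real parsing lives in the nats-io/jwt helpers.)
```go
package main

import (
	"fmt"
	"strings"
)

// extractJWT sketches the fix: trim each line before collecting the
// base64 payload, which drops the stray '\r' left by Windows files.
func extractJWT(contents string) string {
	var b strings.Builder
	inJWT := false
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line) // drops the stray '\r' on Windows
		switch {
		case strings.HasPrefix(line, "-----BEGIN"):
			inJWT = true
		case strings.HasPrefix(line, "------END"), strings.HasPrefix(line, "-----END"):
			inJWT = false
		case inJWT:
			b.WriteString(line)
		}
	}
	return b.String()
}

func main() {
	raw := "-----BEGIN TEST OPERATOR JWT-----\r\neyJ0eXAi...\r\n------END TEST OPERATOR JWT------\r\n"
	fmt.Println(extractJWT(raw)) // clean base64 without '\r'
}
```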
|
https://github.com/nats-io/nats-server/issues/2069
|
https://github.com/nats-io/nats-server/pull/2181
|
55c87226b3e5b8b34a95cd6d5ca59d22e8805172
|
0bfa7f679335c6287e34b0c3ef4c9401b43c1c92
| 2021-04-06T13:49:45Z |
go
| 2021-05-05T18:36:54Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,068 |
["server/filestore.go"]
|
When the messages cached in the stream are empty, kill -9 nats-server and start again, and firstSeq and lastSeq become 0
|
## Defect
When the messages cached in the stream are empty, kill -9 nats-server and start again, and firstSeq and lastSeq become 0.
#### Versions of `nats-server` and affected client libraries used:
nats-server : 2.2.1
nats : 0.0.22
#### OS/Container environment:
Red Hat Enterprise Linux Server release 7.2 (Maipo) or other
#### Steps or code to reproduce the issue:
**Describe:**
1. Build a leaf node and persist it in file mode
2. Create stream denggc8 and consumer denggc8 for subject denggc8
`nats -s nats://leaf_1:leaf_1@localhost:5222 str add denggc8 --subjects="denggc8 " --storage=file --retention=workq --discard=new --max-msgs=-1 --max-bytes=-1 --max-age=-1 --max-msg-size=-1 --dupe-window=5s --replicas=1`
`nats -s nats://leaf_1:leaf_1@localhost:5222 con add denggc8 denggc8 2 --deliver=all --replay=original --filter="denggc8 " --max-deliver=-1 --max-pending=1 --pull`
3. Send 10 messages to stream denggc8 and consume them all through consumer denggc8
Stream and consumer info:

4. kill -9 the leaf node nats-server
5. Start the leaf node nats-server again
6. Stream and consumer info:

#### Expected result:

#### Actual result:

|
https://github.com/nats-io/nats-server/issues/2068
|
https://github.com/nats-io/nats-server/pull/2104
|
a7db58c899854a366698968f3b827327dc1a772b
|
c63f1d78b28127726d1b82580c6ba59542e22d8c
| 2021-04-06T11:49:42Z |
go
| 2021-04-12T18:19:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,019 |
["server/auth.go"]
|
Account connection events not sent for custom authentication
|
## Defect
Account connect events are not sent when a custom authenticator is set:
https://github.com/nats-io/nats-server/blob/4784205c2d60ee57f1c539e3eb5f19d03974a71d/server/auth.go#L323-L334
In the code above, the events are only sent in `processClientOrLeafAuthentication`.
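A hedged, self-contained Go sketch of the fix direction; the names below loosely mirror the server internals and are illustrative only:
```go
package main

import "fmt"

// Minimal stand-ins (illustrative only).
type client struct{ name string }

type server struct{}

func (s *server) accountConnectEvent(c *client) {
	fmt.Println("connect event sent for", c.name)
}

// Authenticator loosely mirrors the custom-auth hook.
type Authenticator interface{ Check(c *client) bool }

// checkAuth sketches the fix: the custom-auth branch should emit the
// account connect event just like processClientOrLeafAuthentication does.
func (s *server) checkAuth(c *client, custom Authenticator) bool {
	if custom != nil {
		if !custom.Check(c) {
			return false
		}
		s.accountConnectEvent(c) // previously missing on this path
		return true
	}
	// ... built-in authentication path (already sends the event)
	return true
}

type allowAll struct{}

func (allowAll) Check(*client) bool { return true }

func main() {
	var s server
	s.checkAuth(&client{name: "app"}, allowAll{})
}
```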
|
https://github.com/nats-io/nats-server/issues/2019
|
https://github.com/nats-io/nats-server/pull/2020
|
4784205c2d60ee57f1c539e3eb5f19d03974a71d
|
52667159e764c19058b23a2cf8ac102031c4504a
| 2021-03-18T16:12:39Z |
go
| 2021-03-18T18:18:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 2,011 |
["server/server.go"]
|
unaligned 64-bit atomic operation on arm7 architecture
|
panic : unaligned 64-bit atomic operation on arm7 architecture
**Describe**:
I deployed a cluster on an ARMv7l machine using nats-server-v2.2.0-linux-arm7, but the nats-server crashes when I create a stream with natscli.
Crash information is as follows:
The panic info: unaligned 64-bit atomic operation

**Environment**:
nats-server package : nats-server-v2.2.0-linux-arm7.tar.gz
my environment cpu info:

|
https://github.com/nats-io/nats-server/issues/2011
|
https://github.com/nats-io/nats-server/pull/2915
|
7a98563ce6287f5b2893a777fc4ce37df7fe3648
|
3538aea34e2e0f6671b113708b80d9e5d802bceb
| 2021-03-16T05:53:58Z |
go
| 2022-03-09T17:30:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,921 |
["server/leafnode_test.go", "server/reload.go"]
|
Add support to leafnode config reload
|
## Feature Request
It would be useful to enable config reload for a few fields of a leafnode config, like the remotes, which cannot be pruned right now:
```
[32903] 2021/02/17 19:55:43.113029 [ERR] Failed to reload server configuration: config reload not supported for LeafNode: old={ 0 [] 0 <nil> 0 false false 1s [0xc0000d02d0] <nil> 0 0}, new={ 0 [] 0 <nil> 0 false false 1s [0xc00028e090] <nil> 0 0}
```
#### Use Case:
Environments where it is not ideal to restart a server when modifying the leafnode edge configuration.
#### Alternative Approaches
Restart a server.
|
https://github.com/nats-io/nats-server/issues/1921
|
https://github.com/nats-io/nats-server/pull/3204
|
b108a84007a6907bed7e70a62ea4217413b56379
|
7a0c63af552c7024fe644e37d956acd4ead784c3
| 2021-02-18T04:04:45Z |
go
| 2022-06-21T16:03:07Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,912 |
["server/gateway.go", "server/gateway_test.go"]
|
Authentication for discovered gateways doesn't work
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- ```nats-server: v2.1.9```
- No client libraries are needed.
#### OS/Container environment:
Executing directly on my Windows laptop.
#### Steps or code to reproduce the issue:
**`A.conf`**
```
server_name: A1
listen: 127.0.0.1:4223
gateway: {
name: A
authorization: { user: gwuser, password: changeme }
listen: 127.0.0.1:5223
gateways: [
{ name: A, url: nats://gwuser:[email protected]:5223 }
]
}
```
**`B.conf`**
```
server_name: B1
listen: 127.0.0.1:4224
gateway: {
name: B
authorization: { user: gwuser, password: changeme }
listen: 127.0.0.1:5224
gateways: [
{ name: A, url: nats://gwuser:[email protected]:5223 }
]
}
```
Execute:
1. `.\nats-server --config A.conf -DV`
2. `.\nats-server --config B.conf -DV`
#### Expected result:
I expect the two NATS Servers to form a super-cluster containing 2 clusters (with each cluster containing 1 server).
This is because of the [documented behaviour of the config setting `gateway.authorization`](https://docs.nats.io/nats-server/configuration/gateways/gateway#:~:text=They%20also%20specify,discovered%20gateway), which is:
> Authorization map for gateways. When `token` or a single `username`/`password` are used, they define the authentication mechanism this server expects. What authentication values other server have to provide when connecting. **They also specify how this server will authenticate itself when establishing a connection to a discovered gateway**. This will not be used for gateways explicitly listed in `gateways` and therefore have to be provided as part of the URL. If you use token or password based authentication, either use the same credentials throughout the system or list every gateway explicitly on every server.
#### Actual result:
The gateway connection from `B1` to `A1` succeeds (as expected), but the gateway connection from `A1` back to `B1` does not. The documentation above says that "_`gateway.authorization` also specify how this server will authenticate itself when establishing a connection to a discovered gateway_" but when `A1` discovers the gateway `B1` it appears to be using no credentials, and therefore the connection does not succeed. To fix this defect, `A1` should use the credentials in `gateway.authorization` when it connects to the discovered gateway `B1`.
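A hedged, self-contained Go sketch of the expected behavior for discovered gateways, i.e. fall back to the local `gateway.authorization` credentials when the remote URL carries no userinfo (field names are illustrative, not the server's actual internals):
```go
package main

import (
	"fmt"
	"net/url"
)

// gatewayAuth stands in for the local gateway.authorization block.
type gatewayAuth struct{ user, pass string }

// connectURLFor builds the URL used to dial a discovered gateway: when
// the remote URL has no credentials, use the locally configured ones.
// The defect is that the server currently omits this fallback.
func connectURLFor(remote *url.URL, auth *gatewayAuth) *url.URL {
	u := *remote
	if u.User == nil && auth != nil && auth.user != "" {
		u.User = url.UserPassword(auth.user, auth.pass)
	}
	return &u
}

func main() {
	discovered, _ := url.Parse("nats://127.0.0.1:5224")
	auth := &gatewayAuth{user: "gwuser", pass: "changeme"}
	fmt.Println(connectURLFor(discovered, auth)) // nats://gwuser:[email protected]:5224
}
```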
**Logs from `A1`**:
```
[7312] 2021/02/16 09:07:40.240626 [INF] Starting nats-server version 2.1.9
[7312] 2021/02/16 09:07:40.241625 [DBG] Go build version go1.14.10
[7312] 2021/02/16 09:07:40.241625 [INF] Git commit [7c76626]
[7312] 2021/02/16 09:07:40.242623 [INF] Gateway name is A
[7312] 2021/02/16 09:07:40.242623 [INF] Listening for gateways connections on 127.0.0.1:5223
[7312] 2021/02/16 09:07:40.242623 [INF] Address for gateway "A" is 127.0.0.1:5223
[7312] 2021/02/16 09:07:40.242623 [INF] Listening for client connections on 127.0.0.1:4223
[7312] 2021/02/16 09:07:40.242623 [INF] Server id is NDPLXL2EZ7H6VMMX6FPFKFRII7FBTNBQXVOIRNAOYBUXECZVTAZ433NT
[7312] 2021/02/16 09:07:40.243626 [INF] Server is ready
[7312] 2021/02/16 09:07:51.140971 [INF] 127.0.0.1:60787 - gid:1 - Processing inbound gateway connection
[7312] 2021/02/16 09:07:51.144985 [TRC] 127.0.0.1:60787 - gid:1 - <<- [CONNECT {"echo":false,"verbose":false,"pedantic":false,"user":"gwuser","pass":"[REDACTED]","tls_required":false,"name":"NDCTWZ7F3BG7BR255PFYRKS4YFVBD7TD7OPAGQCSZ72FFSIT4GDUDGOP","gateway":"B"}]
[7312] 2021/02/16 09:07:51.145973 [INF] 127.0.0.1:60787 - gid:1 - Inbound gateway connection from "B" (NDCTWZ7F3BG7BR255PFYRKS4YFVBD7TD7OPAGQCSZ72FFSIT4GDUDGOP) registered
[7312] 2021/02/16 09:07:51.146983 [INF] Connecting to implicit gateway "B" (127.0.0.1:5224) at 127.0.0.1:5224 (attempt 1)
[7312] 2021/02/16 09:07:51.149973 [INF] 127.0.0.1:5224 - gid:2 - Creating outbound gateway connection to "B"
[7312] 2021/02/16 09:07:51.152030 [DBG] 127.0.0.1:5224 - gid:2 - Gateway connect protocol sent to "B"
[7312] 2021/02/16 09:07:51.154002 [INF] 127.0.0.1:5224 - gid:2 - Outbound gateway connection to "B" (NDCTWZ7F3BG7BR255PFYRKS4YFVBD7TD7OPAGQCSZ72FFSIT4GDUDGOP) registered
[7312] 2021/02/16 09:07:51.161120 [ERR] 127.0.0.1:5224 - gid:2 - Gateway Error 'Authorization Violation'
[7312] 2021/02/16 09:07:51.161962 [INF] 127.0.0.1:5224 - gid:2 - Gateway connection closed: Parse Error
[7312] 2021/02/16 09:07:51.161962 [DBG] Attempting reconnect for gateway "B"
[7312] 2021/02/16 09:07:51.218898 [INF] Connecting to implicit gateway "B" (127.0.0.1:5224) at 127.0.0.1:5224 (attempt 1)
[7312] 2021/02/16 09:07:51.219701 [INF] 127.0.0.1:5224 - gid:3 - Creating outbound gateway connection to "B"
[7312] 2021/02/16 09:07:51.221668 [DBG] 127.0.0.1:5224 - gid:3 - Gateway connect protocol sent to "B"
[7312] 2021/02/16 09:07:51.228752 [INF] 127.0.0.1:5224 - gid:3 - Outbound gateway connection to "B" (NDCTWZ7F3BG7BR255PFYRKS4YFVBD7TD7OPAGQCSZ72FFSIT4GDUDGOP) registered
[7312] 2021/02/16 09:07:51.245737 [ERR] 127.0.0.1:5224 - gid:3 - Gateway Error 'Authorization Violation'
[7312] 2021/02/16 09:07:51.248755 [INF] 127.0.0.1:5224 - gid:3 - Gateway connection closed: Parse Error
[7312] 2021/02/16 09:07:51.252752 [DBG] Attempting reconnect for gateway "B"
...
```
**Logs from `B1`**:
```
[9800] 2021/02/16 09:07:50.135566 [INF] Starting nats-server version 2.1.9
[9800] 2021/02/16 09:07:50.136564 [DBG] Go build version go1.14.10
[9800] 2021/02/16 09:07:50.136564 [INF] Git commit [7c76626]
[9800] 2021/02/16 09:07:50.137565 [INF] Gateway name is B
[9800] 2021/02/16 09:07:50.137565 [INF] Listening for gateways connections on 127.0.0.1:5224
[9800] 2021/02/16 09:07:50.137565 [INF] Address for gateway "B" is 127.0.0.1:5224
[9800] 2021/02/16 09:07:50.137565 [INF] Listening for client connections on 127.0.0.1:4224
[9800] 2021/02/16 09:07:50.137565 [INF] Server id is NDCTWZ7F3BG7BR255PFYRKS4YFVBD7TD7OPAGQCSZ72FFSIT4GDUDGOP
[9800] 2021/02/16 09:07:50.137565 [INF] Server is ready
[9800] 2021/02/16 09:07:51.138464 [INF] Connecting to explicit gateway "A" (127.0.0.1:5223) at 127.0.0.1:5223 (attempt 1)
[9800] 2021/02/16 09:07:51.140971 [INF] 127.0.0.1:5223 - gid:1 - Creating outbound gateway connection to "A"
[9800] 2021/02/16 09:07:51.142973 [DBG] 127.0.0.1:5223 - gid:1 - Gateway connect protocol sent to "A"
[9800] 2021/02/16 09:07:51.143987 [INF] 127.0.0.1:5223 - gid:1 - Outbound gateway connection to "A" (NDPLXL2EZ7H6VMMX6FPFKFRII7FBTNBQXVOIRNAOYBUXECZVTAZ433NT) registered
[9800] 2021/02/16 09:07:51.150969 [INF] 127.0.0.1:60788 - gid:2 - Processing inbound gateway connection
[9800] 2021/02/16 09:07:51.154002 [TRC] 127.0.0.1:60788 - gid:2 - <<- [CONNECT {"echo":false,"verbose":false,"pedantic":false,"tls_required":false,"name":"NDPLXL2EZ7H6VMMX6FPFKFRII7FBTNBQXVOIRNAOYBUXECZVTAZ433NT","gateway":"A"}]
[9800] 2021/02/16 09:07:51.154961 [ERR] 127.0.0.1:60788 - gid:2 - authentication error
[9800] 2021/02/16 09:07:51.154961 [TRC] 127.0.0.1:60788 - gid:2 - ->> [-ERR Authorization Violation]
[9800] 2021/02/16 09:07:51.154961 [INF] 127.0.0.1:60788 - gid:2 - Gateway connection closed: Authentication Failure
[9800] 2021/02/16 09:07:51.219701 [INF] 127.0.0.1:60789 - gid:3 - Processing inbound gateway connection
[9800] 2021/02/16 09:07:51.228752 [TRC] 127.0.0.1:60789 - gid:3 - <<- [CONNECT {"echo":false,"verbose":false,"pedantic":false,"tls_required":false,"name":"NDPLXL2EZ7H6VMMX6FPFKFRII7FBTNBQXVOIRNAOYBUXECZVTAZ433NT","gateway":"A"}]
[9800] 2021/02/16 09:07:51.229667 [ERR] 127.0.0.1:60789 - gid:3 - authentication error
[9800] 2021/02/16 09:07:51.229667 [TRC] 127.0.0.1:60789 - gid:3 - ->> [-ERR Authorization Violation]
[9800] 2021/02/16 09:07:51.230672 [INF] 127.0.0.1:60789 - gid:3 - Gateway connection closed: Authentication Failure
...
```
|
https://github.com/nats-io/nats-server/issues/1912
|
https://github.com/nats-io/nats-server/pull/1915
|
546f7a981695fac0a1c6da0493fd3eb1b6a5a8d3
|
84ae705ccf891e83e5fad218f380a3a0d15479e7
| 2021-02-16T08:11:09Z |
go
| 2021-02-16T18:05:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,879 |
["server/mqtt.go", "server/mqtt_test.go"]
|
MQTT Clients compatibility issue with '.' char and use in Eclipse Tahu MQTT specification
|
There is an issue using NATS.io as an MQTT broker with the Eclipse Tahu specification.
Unfortunately the root namespace used for the topic contains a '.' char. (See [page 12 of the specification](https://www.eclipse.org/tahu/spec/Sparkplug%20Topic%20Namespace%20and%20State%20ManagementV2.2-with%20appendix%20B%20format%20-%20Eclipse.pdf))
Given that the '/' topic delimiter chars are converted to the NATS subject delimiter ('.'), should the NATS delimiter chars appearing inside MQTT topic levels be converted or escaped for MQTT clients?
Could there be a known substitution used internally, like converting '.' to '_', that NATS clients would need to know about?
So an MQTT client would publish to `spBv1.0/group/message/node` and a NATS client would subscribe to `spBv1_0.group.message.node`?
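To make the proposed substitution concrete, here is a minimal Go sketch of such a mapping — purely hypothetical, not something the server implements — with the caveat that it is lossy (`spBv1.0` and `spBv1_0` would collide):
```go
package main

import (
	"fmt"
	"strings"
)

// mqttTopicToNATSSubject applies the substitution suggested above:
// '.' inside an MQTT topic level becomes '_', and the '/' level
// separator becomes the NATS '.' delimiter.
func mqttTopicToNATSSubject(topic string) string {
	return strings.ReplaceAll(strings.ReplaceAll(topic, ".", "_"), "/", ".")
}

func main() {
	fmt.Println(mqttTopicToNATSSubject("spBv1.0/group/message/node"))
	// Output: spBv1_0.group.message.node
}
```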
|
https://github.com/nats-io/nats-server/issues/1879
|
https://github.com/nats-io/nats-server/pull/4243
|
694cc7d2b7a7e9bf84a7f235747b155a7d4412f4
|
91d0b6ad3a0bd54a873ff9845cfad6f9a828f43c
| 2021-02-03T01:18:26Z |
go
| 2023-06-14T03:44:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,827 |
["server/jetstream_api.go", "test/jetstream_test.go"]
|
jetstream, invalid memory / nil pointer dereference, update a non-existent stream
|
## Defect
When an update-stream message is sent to the JetStream server and the stream does not exist, I expect some sort of error message. Instead, the server crashes.
#### Versions/OS/Environment:
Direct build (`go build main.go`) of source. Able to reproduce on both Windows and Ubuntu. Single (not clustered) server.
nats-server version 2.2.0-beta.44 master branch https://github.com/nats-io/nats-server/commit/c4a284b58fa91eb20f4f1049373b9e99898f9a63
```
Ubuntu 20.04.1 LTS
nats-server version 2.2.0-beta.44
Go version go1.15.7 linux/amd64
nats-server -DV -p 4222 -js -sd /home/scottf/nats/jetstream-storage
```
```
Windows 10 Pro, Version 1909, Build 18363.1216
nats-server version 2.2.0-beta.42
Go version go1.15.3 windows/amd64
nats-server -DV -p 4222 -js -sd C:\nats\jetstream-storage
```
#### Steps or code to reproduce the issue:
Using the Java client (new code), I sent this message:
```
PUB $JS.API.STREAM.UPDATE.bugStreamName _INBOX.gnlEkQi5jTKYwCCR0Z4SsB 138␍␊{"name":"bugStreamName","subjects":["bugSubject"],"retention":"limits","storage":"memory","discard":"old","num_replicas":1,"no_ack":false}␍␊
```
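For anyone without the Java client at hand, a minimal sketch of the same request using the Go client — the subject and payload are copied verbatim from the message above:
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	payload := []byte(`{"name":"bugStreamName","subjects":["bugSubject"],"retention":"limits","storage":"memory","discard":"old","num_replicas":1,"no_ack":false}`)
	// On an affected server this request never receives a reply:
	// the server panics instead of returning an error response.
	if _, err := nc.Request("$JS.API.STREAM.UPDATE.bugStreamName", payload, 2*time.Second); err != nil {
		log.Printf("request failed (expected on affected servers): %v", err)
	}
}
```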
#### Expected result:
Some error response.
#### Actual result:
Server fault.
Ubuntu:
```
[45705] 2021/01/20 11:35:00.981467 [TRC] 192.168.50.43:57882 - cid:4 - "v2.9.0:java" - <<- [PUB $JS.API.STREAM.UPDATE.bugStreamName _INBOX.yZ0KBPDXfzWKCNjSGLTpHT 138]
[45705] 2021/01/20 11:35:00.981499 [TRC] 192.168.50.43:57882 - cid:4 - "v2.9.0:java" - <<- MSG_PAYLOAD: ["{\"name\":\"bugStreamName\",\"subjects\":[\"bugSubject\"],\"retention\":\"limits\",\"storage\":\"memory\",\"discard\":\"old\",\"num_replicas\":1,\"no_ack\":false}"]
[45705] 2021/01/20 11:35:00.981849 [DBG] 192.168.50.43:57882 - cid:4 - "v2.9.0:java" - Auto-unsubscribe limit of 1 reached for sid '3'
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x992f7b]
goroutine 121 [running]:
github.com/nats-io/nats-server/v2/server.(*Stream).Config(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/scottf/nats/nats-server/server/stream.go:635 +0x5b
github.com/nats-io/nats-server/v2/server.(*Stream).Update(0x0, 0xc0003ba000, 0xc0000ea000, 0xc00025a480)
/home/scottf/nats/nats-server/server/stream.go:671 +0x14d
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamUpdateRequest(0xc0000c1500, 0xc000131ce0, 0xc00019ac80, 0xc00025a480, 0x23, 0xc000218ee0, 0x11, 0xc0003b8000, 0x172, 0x31a)
/home/scottf/nats/nats-server/server/jetstream_api.go:881 +0x707
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc00019ac80, 0xc000131ce0, 0xc00025a450, 0x23, 0x30, 0xc000218ec0, 0x11, 0x20, 0xc00019c0e8, 0x47, ...)
/home/scottf/nats/nats-server/server/client.go:3017 +0x36f
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc00019ac80, 0xc00000c1e0, 0xc000211440, 0xc0003b8000, 0x174, 0x31a, 0x0, 0x0, 0x0, 0xc00025a450, ...)
/home/scottf/nats/nats-server/server/client.go:3875 +0x62b
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc00019ac80, 0xc0001bc870, 0xc0000ea000, 0xc0003b8000, 0x174, 0x31a)
/home/scottf/nats/nats-server/server/client.go:3696 +0xa25
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0001b8c60, 0xc00019ac80, 0xc00025a3c0, 0x23, 0xc000218e40, 0x1d, 0xc00019aecd, 0x8c, 0xfbb)
/home/scottf/nats/nats-server/server/accounts.go:1807 +0x5e
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc00019ac80, 0xc0001b8c60, 0xc00019ae88, 0x23, 0x1000, 0xc00019aeac, 0x1d, 0xfdc, 0xc00019c0e9, 0x4d, ...)
/home/scottf/nats/nats-server/server/client.go:3015 +0x47d
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc00019ac80, 0xc0000ea000, 0xc0002113e0, 0xc00019aecd, 0x8c, 0xfbb, 0x0, 0x0, 0x0, 0xc00019ae88, ...)
/home/scottf/nats/nats-server/server/client.go:3875 +0x62b
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc00019ac80, 0xc00019aecd, 0x8c, 0xfbb, 0xc00019ae50)
/home/scottf/nats/nats-server/server/client.go:3450 +0x3be
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc00019ac80, 0xc00019aecd, 0x8c, 0xfbb)
/home/scottf/nats/nats-server/server/client.go:3316 +0x95
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc00019ac80, 0xc000384a00, 0x7, 0x200, 0x7, 0x0)
/home/scottf/nats/nats-server/server/parser.go:468 +0x24fb
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc00019ac80, 0x0, 0x0, 0x0)
/home/scottf/nats/nats-server/server/client.go:1092 +0x568
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func2()
/home/scottf/nats/nats-server/server/server.go:2302 +0x45
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/home/scottf/nats/nats-server/server/server.go:2685 +0xc5
```
Windows:
```
[2492] 2021/01/20 08:00:32.886004 [TRC] 127.0.0.1:52198 - cid:4 - "v2.9.0:java" - <<- [PUB $JS.API.STREAM.UPDATE.bugStreamName _INBOX.B70zqeU5pelh7Ropg7dlvw 138]
[2492] 2021/01/20 08:00:32.886004 [TRC] 127.0.0.1:52198 - cid:4 - "v2.9.0:java" - <<- MSG_PAYLOAD: ["{\"name\":\"bugStreamName\",\"subjects\":[\"bugSubject\"],\"retention\":\"limits\",\"storage\":\"memory\",\"discard\":\"old\",\"num_replicas\":1,\"no_ack\":false}"]
[2492] 2021/01/20 08:00:32.887002 [DBG] 127.0.0.1:52198 - cid:4 - "v2.9.0:java" - Auto-unsubscribe limit of 1 reached for sid '3'
[2492] 2021/01/20 08:00:32.887002 [TRC] 127.0.0.1:52198 - cid:4 - "v2.9.0:java" - ->> [MSG _INBOX.B70zqeU5pelh7Ropg7dlvw 3 136]
[2492] 2021/01/20 08:00:32.887002 [TRC] 127.0.0.1:52198 - cid:4 - "v2.9.0:java" - <-> [DELSUB 3]
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x0 pc=0xa7ad22]
goroutine 20 [running]:
github.com/nats-io/nats-server/v2/server.(*Stream).Config(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
C:/nats/nats-server/server/stream.go:613 +0x62
github.com/nats-io/nats-server/v2/server.(*Stream).Update(0x0, 0xc0001a6780, 0xc0001dc000, 0xc0001bf050)
C:/nats/nats-server/server/stream.go:649 +0x14d
github.com/nats-io/nats-server/v2/server.(*Server).jsStreamUpdateRequest(0xc0001b1500, 0xc0001fda20, 0xc000152000, 0xc0001bf050, 0x23, 0xc000276080, 0x11, 0xc000278000, 0x16c, 0x30e)
C:/nats/nats-server/server/jetstream_api.go:880 +0x6e7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000152000, 0xc0001fda20, 0xc0001bf020, 0x23, 0x30, 0xc000276060, 0x11, 0x20, 0xc000153468, 0x47, ...)
C:/nats/nats-server/server/client.go:3017 +0x37a
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000152000, 0xc0001dc1e0, 0xc000229170, 0xc000278000, 0x16e, 0x30e, 0x0, 0x0, 0x0, 0xc0001bf020, ...)
C:/nats/nats-server/server/client.go:3875 +0x62b
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000152000, 0xc0002267e0, 0xc0001dc000, 0xc000278000, 0x16e, 0x30e)
C:/nats/nats-server/server/client.go:3696 +0xa25
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0002209a0, 0xc000152000, 0xc0001bef90, 0x23, 0xc00018bfe0, 0x1d, 0xc00015224d, 0x8c, 0xfbb)
C:/nats/nats-server/server/accounts.go:1803 +0x65
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000152000, 0xc0002209a0, 0xc000152208, 0x23, 0x1000, 0xc00015222c, 0x1d, 0xfdc, 0xc000153469, 0x4d, ...)
C:/nats/nats-server/server/client.go:3015 +0x482
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000152000, 0xc0001dc000, 0xc0002290e0, 0xc00015224d, 0x8c, 0xfbb, 0x0, 0x0, 0x0, 0xc000152208, ...)
C:/nats/nats-server/server/client.go:3875 +0x62b
github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg(0xc000152000, 0xc00015224d, 0x8c, 0xfbb, 0xc0000802c0)
C:/nats/nats-server/server/client.go:3450 +0x3de
github.com/nats-io/nats-server/v2/server.(*client).processInboundMsg(0xc000152000, 0xc00015224d, 0x8c, 0xfbb)
C:/nats/nats-server/server/client.go:3316 +0xa5
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc000152000, 0xc00021a500, 0x47, 0x100, 0x47, 0x0)
C:/nats/nats-server/server/parser.go:468 +0x251b
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc000152000, 0x0, 0x0, 0x0)
C:/nats/nats-server/server/client.go:1092 +0x568
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func2()
C:/nats/nats-server/server/server.go:2302 +0x4c
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
C:/nats/nats-server/server/server.go:2685 +0xca
[process exited with code 2]
```
|
https://github.com/nats-io/nats-server/issues/1827
|
https://github.com/nats-io/nats-server/pull/1828
|
7d1a4778b8c792aef5b38c920bd34af8ea85bfbe
|
114076a30f13276784b7f7aec2accf74fecc4c52
| 2021-01-20T18:00:30Z |
go
| 2021-01-20T19:53:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,799 |
["server/monitor.go", "server/monitor_test.go"]
|
Wrong authorized_user returned from /connz?auth=true when using decentralized pki key
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
```
[153896] 2021/01/09 15:39:44.068164 [INF] Starting nats-server version 2.1.9
[153896] 2021/01/09 15:39:44.068221 [DBG] Go build version go1.14.10
[153896] 2021/01/09 15:39:44.068230 [INF] Git commit [7c76626]
[153896] 2021/01/09 15:39:44.068238 [INF] Trusted Operators
[153896] 2021/01/09 15:39:44.068245 [INF] System : ""
[153896] 2021/01/09 15:39:44.068251 [INF] Operator: ""
[153896] 2021/01/09 15:39:44.068279 [INF] Issued : 2020-12-26 22:02:01 +0700 +07
[153896] 2021/01/09 15:39:44.068291 [INF] Expires : 1970-01-01 08:00:00 +0800 +08
[153896] 2021/01/09 15:39:44.068714 [INF] Starting http monitor on 0.0.0.0:8222
[153896] 2021/01/09 15:39:44.068987 [INF] Listening for client connections on 0.0.0.0:4222
[153896] 2021/01/09 15:39:44.068995 [INF] Server id is NCETGXKTUPAU2ZRWDXIW2RHFV72VG4DJLZPE2K2F2YH46N5SI72ZMXGY
[153896] 2021/01/09 15:39:44.069003 [INF] Server is ready
[153896] 2021/01/09 15:39:44.069012 [DBG] Get non local IPs for "0.0.0.0"
[153896] 2021/01/09 15:39:44.069342 [DBG] ip=192.168.1.10
[153896] 2021/01/09 15:39:44.069353 [DBG] ip=2402:800:6374:6e12:350b:dca8:726d:c781
[153896] 2021/01/09 15:39:44.069360 [DBG] ip=2402:800:6374:6e12:f2c7:7e30:9839:2076
[153896] 2021/01/09 15:39:44.069408 [DBG] ip=172.18.0.1
[153896] 2021/01/09 15:39:44.069454 [DBG] ip=172.17.0.1
[153896] 2021/01/09 15:40:02.348590 [DBG] 127.0.0.1:34316 - cid:2 - Client connection created
[153896] 2021/01/09 15:40:02.349431 [TRC] 127.0.0.1:34316 - cid:2 - <<- [CONNECT {"verbose":false,"pedantic":false,"jwt":"eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJLTEhTUTZYRERYV1dNRzZISjRTRkJKRjdZVjJHTk9TUFhVMlhYSEQ2NUdXU0UzQldETlZBIiwiaWF0IjoxNjA4OTk0OTIxLCJpc3MiOiJBQ0o2TlVORzNKRVNQQk1LUkpUQVNNVzQ0SkhGNUFMUEtFR1JTNERPNVZUQ0NSVFVQN1VXM1ZGWSIsInN1YiI6IlVBNDZMU1dRSzJJQVVJNFRBSFBDSjZVUUNVRktUNjRVQjVNT09IT1Q0N0FFWkFUQlRPVFc3MkFWIiwidHlwZSI6InVzZXIiLCJuYXRzIjp7InB1YiI6e30sInN1YiI6e319fQ.OfygsGsljS_PYYAC4Ls-uTQxh3CVheYblpj0euVMSZpbOUKEcPzY50RE2FTffLrhfDp-a7Ox_LtcDtcdyjg8Dw","sig":"jSdLMVjggK15J8ifgaOpCvx-YV5RnGr7Sul6XiyQfEmVBxG4jtHojK9kDZzgpm8yqWcXs_IEw7pxarLJW5U8BA","tls_required":false,"name":"NATS Sample Subscriber","lang":"go","version":"1.11.0","protocol":1,"echo":true,"headers":false,"no_responders":false}]
[153896] 2021/01/09 15:40:02.350081 [TRC] 127.0.0.1:34316 - cid:2 - <<- [PING]
[153896] 2021/01/09 15:40:02.350089 [TRC] 127.0.0.1:34316 - cid:2 - ->> [PONG]
[153896] 2021/01/09 15:40:02.350331 [TRC] 127.0.0.1:34316 - cid:2 - <<- [SUB ne.123 1]
[153896] 2021/01/09 15:40:02.350356 [TRC] 127.0.0.1:34316 - cid:2 - <<- [PING]
[153896] 2021/01/09 15:40:02.350365 [TRC] 127.0.0.1:34316 - cid:2 - ->> [PONG]
[153896] 2021/01/09 15:40:04.607841 [DBG] 127.0.0.1:34316 - cid:2 - Client Ping Timer
[153896] 2021/01/09 15:40:04.608018 [TRC] 127.0.0.1:34316 - cid:2 - ->> [PING]
[153896] 2021/01/09 15:40:04.608496 [TRC] 127.0.0.1:34316 - cid:2 - <<- [PONG]
```
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
```
http_port = 8222
operator = eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.xxx
system_account = ACJ6NUNG3JESPBMKRJTASMWxxx
resolver = MEMORY
resolver_preload = {
ACJ6NUNG3JESPBMKRJTASMWxxx: eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.xxx
}
```
#### Versions of `nats-server` and affected client libraries used:
nats-server v2.1.9
nats-sub obtained via `go get github.com/nats-io/nats.go/examples/nats-sub`
#### OS/Container environment:
Ubuntu 20.04
#### Steps or code to reproduce the issue:
1. Create Operator, Account, and one User
2. Start the server with above config
3. Use nats-sub to subscribe a random topic `nats-sub -creds U1.creds "ne.123"`
4. Get connz http://localhost:8222/connz?auth=true
#### Expected result:
connections[0].authorized_user must not be empty
#### Actual result:
```
{
"server_id": "NCETGXKTUPAU2ZRWDXIW2RHFV72VG4DJLZPE2K2F2YH46N5SI72ZMXGY",
"now": "2021-01-09T15:40:07.738491674+07:00",
"num_connections": 1,
"total": 1,
"offset": 0,
"limit": 1024,
"connections": [
{
"cid": 2,
"ip": "127.0.0.1",
"port": 34316,
"start": "2021-01-09T15:40:02.348519983+07:00",
"last_activity": "2021-01-09T15:40:02.350390626+07:00",
"rtt": "633µs",
"uptime": "5s",
"idle": "5s",
"pending_bytes": 0,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 1,
"name": "NATS Sample Subscriber",
"lang": "go",
"version": "1.11.0",
"account": "ACJ6NUNG3JESPBMKRJTASMW44JHF5ALPKEGRS4DO5VTCCRTUP7UW3VFY"
}
]
}
```
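For reference, a quick way to inspect just the field in question (assuming `curl` and `jq` are available):
```
curl -s 'http://localhost:8222/connz?auth=true' | jq '.connections[0].authorized_user'
```
On an affected server this prints `null`; the expectation is a non-empty identifier for the user (presumably the user's public nkey).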
|
https://github.com/nats-io/nats-server/issues/1799
|
https://github.com/nats-io/nats-server/pull/1800
|
b88cbe2e0df387ad7911e71dcec290375d919282
|
0d34688c4b1c5d982bd98626eb6d07cccc2ba65c
| 2021-01-09T08:47:58Z |
go
| 2021-01-11T21:28:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,786 |
["server/client.go", "test/bench_test.go"]
|
publish same subject with multiple clients will lead to lower performance
|
I have a cluster with 3 NATS nodes, and I use the nats-bench tool to publish and subscribe to test the performance.
There are 3 subscribers started with the nats-bench CLI:
nats-bench -np 0 -ns 1 -n 100000000 -s nats://127.0.0.1:4421 foo
nats-bench -np 0 -ns 1 -n 100000000 -s nats://127.0.0.1:4421 foo
nats-bench -np 0 -ns 1 -n 100000000 -s nats://127.0.0.1:4421 foo
And when I publish with the CLI:
nats-bench -np 1 -n 100000000 -s nats://127.0.0.1:4421 foo
I got the result:
> Starting benchmark [msgs=100000000, msgsize=128, pubs=1, subs=0]
> Pub stats: 1,265,771 msgs/sec ~ 154.51 MB/sec
But when I increase the number of publishing clients to 2, I get lower performance:
nats-bench -np 2 -n 100000000 -s nats://127.0.0.1:4421 foo
> Starting benchmark [msgs=100000000, msgsize=128, pubs=2, subs=0]
> Pub stats: 994,603 msgs/sec ~ 121.41 MB/sec
> [1] 497,817 msgs/sec ~ 60.77 MB/sec (50000000 msgs)
> [2] 497,306 msgs/sec ~ 60.71 MB/sec (50000000 msgs)
> min 497,306 | avg 497,561 | max 497,817 | stddev 255 msgs
I tested multiple times with different numbers of publishing clients; performance is best with a single publisher.
I read the code and found that the function client.deliverMsg takes a lock whose granularity is really too coarse. Could you consider refining the granularity of the lock? I am not sure that is the root cause, but it is really strange that performance gets worse as the number of publishing clients increases.
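To illustrate the hypothesis, here is a minimal, self-contained sketch — not the server's actual code — of how a single coarse lock serializes concurrent publishers, so adding clients mostly adds contention:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	const total = 1000000
	for _, pubs := range []int{1, 2} {
		var wg sync.WaitGroup
		start := time.Now()
		for p := 0; p < pubs; p++ {
			wg.Add(1)
			go func(n int) {
				defer wg.Done()
				for i := 0; i < n; i++ {
					mu.Lock() // every "publisher" serializes on the same lock
					mu.Unlock()
				}
			}(total / pubs)
		}
		wg.Wait()
		fmt.Printf("%d publisher(s): %v\n", pubs, time.Since(start))
	}
}
```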
Thanks, and I look forward to your response :)
|
https://github.com/nats-io/nats-server/issues/1786
|
https://github.com/nats-io/nats-server/pull/1801
|
0d34688c4b1c5d982bd98626eb6d07cccc2ba65c
|
25ea8973e8ca1cd7c7712824a4a771465bb0e002
| 2020-12-29T10:50:24Z |
go
| 2021-01-13T15:06:17Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,781 |
["server/leafnode.go", "test/leafnode_test.go"]
|
Leafnodes filter headers
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
Commit d7741b9aa16a7687c5426697b17a6eb5e5dd44f3
#### OS/Container environment:
OS/X
#### Steps or code to reproduce the issue:
```go
func TestLeafnodeHeaders(t *testing.T) {
srv, opts := runLeafServer()
defer srv.Shutdown()
leaf, _ := runSolicitLeafServer(opts)
defer leaf.Shutdown()
snc, err := nats.Connect(srv.ClientURL())
if err != nil {
t.Fatalf(err.Error())
}
defer snc.Close()
ssub, err := snc.SubscribeSync("test")
if err != nil {
t.Fatalf("subscribe failed: %s", err)
}
lnc, err := nats.Connect(leaf.ClientURL())
if err != nil {
t.Fatalf(err.Error())
}
defer lnc.Close()
lsub, err := lnc.SubscribeSync("test")
if err != nil {
t.Fatalf("subscribe failed: %s", err)
}
// wait for things to settle
time.Sleep(20 * time.Millisecond)
msg := nats.NewMsg("test")
msg.Header.Add("Test", "Header")
if len(msg.Header) == 0 {
t.Fatalf("msg header is empty")
}
err = snc.PublishMsg(msg)
if err != nil {
t.Fatalf(err.Error())
}
smsg, err := ssub.NextMsg(time.Second)
if err != nil {
t.Fatalf("next failed: %s", err)
}
if len(smsg.Header) == 0 {
t.Fatalf("server msgs header is empty")
}
lmsg, err := lsub.NextMsg(time.Second)
if err != nil {
t.Fatalf("next failed: %s", err)
}
if len(lmsg.Header) == 0 {
t.Fatalf("leaf msg header is empty")
}
}
```
```
=== RUN TestLeafnodeHeaders
leafnode_test.go:4293: leaf msg header is empty
--- FAIL: TestLeafnodeHeaders (0.08s)
```
If I do `err = lnc.PublishMsg(msg)` instead, it all works. I also have a case where the headers do traverse but end up in the message body instead; I am still trying to properly recreate that one.
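For context, this is a sketch of how the headered publish from the test should appear on the wire (byte counts computed for the empty payload above: 26 bytes of headers, 26 bytes total). If headers support is not negotiated or re-encoded correctly on the leafnode link, this section could plausibly be dropped — or, matching the second symptom above, end up framed as payload:
```
HPUB test 26 26␍␊NATS/1.0␍␊Test: Header␍␊␍␊␍␊
```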
|
https://github.com/nats-io/nats-server/issues/1781
|
https://github.com/nats-io/nats-server/pull/1782
|
f09992a889a06500630ff836460e6f29f940fe78
|
e04d3d5a0ad20e7a9c426eeb86c341715515da94
| 2020-12-20T11:38:43Z |
go
| 2020-12-21T19:41:53Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,769 |
["server/accounts.go", "server/client.go", "server/parser.go", "test/accounts_cycles_test.go"]
|
stack overflow when using wildcard service exports/imports
|
Below shows the config, the server start, the replier start, and the resulting stack overflow when the requestor is started.
This was run with a build from today.
```bash
> cat ac.cfg
port: 4202
Accounts: {
FOO: {
users: [
{user: foo, password: foo}
]
imports: [
{service: {account: NCS, subject: >}}
]
},
NCS: {
users: [
{user: ncs, password: ncs}
]
exports: [
{service: >}
]
}
}
no_auth_user: foo
> nats-server -c ac.cfg &
[1] 71641
[71641] 2020/12/11 15:05:39.512522 [INF] Starting nats-server version 2.2.0-beta.35
[71641] 2020/12/11 15:05:39.512620 [INF] Git commit [not set]
[71641] 2020/12/11 15:05:39.512630 [WRN] Plaintext passwords detected, use nkeys or bcrypt
[71641] 2020/12/11 15:05:39.512635 [INF] Using configuration file: ac.cfg
[71641] 2020/12/11 15:05:39.513033 [INF] Listening for client connections on 0.0.0.0:4202
[71641] 2020/12/11 15:05:39.513041 [INF] Server id is NA3GRPSZ5ERDNMIT5UYYLNAZMNIXZDTTCDCQFCKSHZZIGZOOHVKDDHZN
[71641] 2020/12/11 15:05:39.513044 [INF] Server name is NA3GRPSZ5ERDNMIT5UYYLNAZMNIXZDTTCDCQFCKSHZZIGZOOHVKDDHZN
[71641] 2020/12/11 15:05:39.513047 [INF] Server is ready
> nats reply -s nats://ncs:ncs@localhost:4202 ">" world &
[2] 71650
15:05:47 Listening on ">" in group "NATS-RPLY-22"
> nats req -s nats://foo:foo@localhost:4202 bar hello
15:05:55 Sending request on "bar"
15:05:55 [#0] Received on subject "bar":
hello
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0xc020360398 stack=[0xc020360000, 0xc040360000]
fatal error: stack overflow
runtime stack:
runtime.throw(0x1699e73, 0xe)
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/panic.go:1114 +0x72
runtime.newstack()
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/stack.go:1034 +0x6ce
runtime.morestack()
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/asm_amd64.s:449 +0x8f
goroutine 20 [running]:
github.com/nats-io/nats-server/v2/server.shouldSample(0x0, 0xc000231980, 0x0, 0x0)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1884 +0x108d fp=0xc0203603a8 sp=0xc0203603a0 pc=0x13ec36d
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3574 +0xcc fp=0xc020360568 sp=0xc0203603a8 pc=0x1414f6c
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187cb0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc0203605a8 sp=0xc020360568 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187c50, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020360810 sp=0xc0203605a8 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187c50, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020360be0 sp=0xc020360810 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020360da0 sp=0xc020360be0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e400, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020360df0 sp=0xc020360da0 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e400, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020360e50 sp=0xc020360df0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020361580, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203610b8 sp=0xc020360e50 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020361580, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020361488 sp=0xc0203610b8 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020361648 sp=0xc020361488 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187bf0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020361688 sp=0xc020361648 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187b90, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203618f0 sp=0xc020361688 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187b90, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020361cc0 sp=0xc0203618f0 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020361e80 sp=0xc020361cc0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e3c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020361ed0 sp=0xc020361e80 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e3c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020361f30 sp=0xc020361ed0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020362660, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020362198 sp=0xc020361f30 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020362660, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020362568 sp=0xc020362198 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020362728 sp=0xc020362568 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187b30, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020362768 sp=0xc020362728 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187ad0, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203629d0 sp=0xc020362768 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187ad0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020362da0 sp=0xc0203629d0 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020362f60 sp=0xc020362da0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e380, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020362fb0 sp=0xc020362f60 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e380, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020363010 sp=0xc020362fb0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020363740, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020363278 sp=0xc020363010 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020363740, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020363648 sp=0xc020363278 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020363808 sp=0xc020363648 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187a70, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020363848 sp=0xc020363808 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187a10, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020363ab0 sp=0xc020363848 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187a10, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020363e80 sp=0xc020363ab0 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020364040 sp=0xc020363e80 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e340, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020364090 sp=0xc020364040 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e340, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc0203640f0 sp=0xc020364090 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020364820, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020364358 sp=0xc0203640f0 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020364820, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020364728 sp=0xc020364358 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc0203648e8 sp=0xc020364728 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc0071879b0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020364928 sp=0xc0203648e8 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187950, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020364b90 sp=0xc020364928 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187950, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020364f60 sp=0xc020364b90 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020365120 sp=0xc020364f60 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e300, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020365170 sp=0xc020365120 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e300, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc0203651d0 sp=0xc020365170 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020365900, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020365438 sp=0xc0203651d0 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020365900, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020365808 sp=0xc020365438 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc0203659c8 sp=0xc020365808 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc0071878f0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020365a08 sp=0xc0203659c8 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187890, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020365c70 sp=0xc020365a08 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187890, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020366040 sp=0xc020365c70 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020366200 sp=0xc020366040 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e2c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020366250 sp=0xc020366200 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e2c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc0203662b0 sp=0xc020366250 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc0203669e0, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020366518 sp=0xc0203662b0 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc0203669e0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc0203668e8 sp=0xc020366518 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020366aa8 sp=0xc0203668e8 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187830, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020366ae8 sp=0xc020366aa8 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc0071877d0, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020366d50 sp=0xc020366ae8 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc0071877d0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020367120 sp=0xc020366d50 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc0203672e0 sp=0xc020367120 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e280, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020367330 sp=0xc0203672e0 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e280, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020367390 sp=0xc020367330 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020367ac0, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203675f8 sp=0xc020367390 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020367ac0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc0203679c8 sp=0xc0203675f8 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020367b88 sp=0xc0203679c8 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187770, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020367bc8 sp=0xc020367b88 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187710, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020367e30 sp=0xc020367bc8 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187710, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020368200 sp=0xc020367e30 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc0203683c0 sp=0xc020368200 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e240, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc020368410 sp=0xc0203683c0 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e240, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020368470 sp=0xc020368410 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020368ba0, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203686d8 sp=0xc020368470 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020368ba0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020368aa8 sp=0xc0203686d8 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020368c68 sp=0xc020368aa8 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc0071876b0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020368ca8 sp=0xc020368c68 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187650, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020368f10 sp=0xc020368ca8 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187650, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc0203692e0 sp=0xc020368f10 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc0203694a0 sp=0xc0203692e0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e200, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc0203694f0 sp=0xc0203694a0 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e200, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc020369550 sp=0xc0203694f0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc020369c80, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc0203697b8 sp=0xc020369550 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc020369c80, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc020369b88 sp=0xc0203697b8 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc020369d48 sp=0xc020369b88 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc0071875f0, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc020369d88 sp=0xc020369d48 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc007187590, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc020369ff0 sp=0xc020369d88 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc007187590, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc02036a3c0 sp=0xc020369ff0 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc02036a580 sp=0xc02036a3c0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e1c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc02036a5d0 sp=0xc02036a580 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e1c0, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc02036a630 sp=0xc02036a5d0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc02036ad60, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc02036a898 sp=0xc02036a630 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc02036ad60, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc02036ac68 sp=0xc02036a898 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc0000a86e0, 0xc00000c780, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc02036ae28 sp=0xc02036ac68 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).addServiceImportSub.func1(0xc0000e6000, 0xc000231980, 0xc007187530, 0x26, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1797 +0x5e fp=0xc02036ae68 sp=0xc02036ae28 pc=0x15345fe
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc0000e6000, 0xc0071874d0, 0x26, 0x30, 0x0, 0x0, 0x0, 0xc000232dd9, 0x30, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc02036b0d0 sp=0xc02036ae68 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c780, 0xc00011bc80, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc0071874d0, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc02036b4a0 sp=0xc02036b0d0 pc=0x1416833
github.com/nats-io/nats-server/v2/server.(*client).processServiceImport(0xc000231980, 0xc000358000, 0xc00000c960, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/client.go:3645 +0x906 fp=0xc02036b660 sp=0xc02036b4a0 pc=0x14157a6
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse(0xc00000c960, 0xc00033e160, 0xc000231980, 0xc00882e180, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1973 +0x157 fp=0xc02036b6b0 sp=0xc02036b660 pc=0x13ec4d7
github.com/nats-io/nats-server/v2/server.(*Account).processServiceImportResponse-fm(0xc00033e160, 0xc000231980, 0xc00882e180, 0x11, 0x0, 0x0, 0xc000304019, 0x7, 0xe7)
/Users/matthiashanel/repos/nats-server/server/accounts.go:1958 +0x97 fp=0xc02036b710 sp=0xc02036b6b0 pc=0x15554d7
github.com/nats-io/nats-server/v2/server.(*client).deliverMsg(0xc000231980, 0xc00033e160, 0xc02036be40, 0x11, 0x20, 0x0, 0x0, 0x0, 0xc000232dd9, 0x1b, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3018 +0x467 fp=0xc02036b978 sp=0xc02036b710 pc=0x14115d7
github.com/nats-io/nats-server/v2/server.(*client).processMsgResults(0xc000231980, 0xc00000c960, 0xc00011bc50, 0xc000304019, 0x7, 0xe7, 0x0, 0x0, 0x0, 0xc02036be40, ...)
/Users/matthiashanel/repos/nats-server/server/client.go:3827 +0x603 fp=0xc02036bd48 sp=0xc02036b978 pc=0x1416833
...additional frames elided...
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:2641 +0xc1
goroutine 1 [chan receive]:
github.com/nats-io/nats-server/v2/server.(*Server).WaitForShutdown(...)
/Users/matthiashanel/repos/nats-server/server/server.go:1707
main.main()
/Users/matthiashanel/repos/nats-server/main.go:118 +0x15f
goroutine 5 [syscall]:
os/signal.signal_recv(0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/sigqueue.go:144 +0x96
os/signal.loop()
/usr/local/Cellar/go/1.14.1/libexec/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
/usr/local/Cellar/go/1.14.1/libexec/src/os/signal/signal.go:127 +0x44
goroutine 34 [select]:
github.com/nats-io/nats-server/v2/server.(*Server).handleSignals.func1(0xc00008a4e0, 0xc0000b1500)
/Users/matthiashanel/repos/nats-server/server/signal.go:47 +0xbc
created by github.com/nats-io/nats-server/v2/server.(*Server).handleSignals
/Users/matthiashanel/repos/nats-server/server/signal.go:45 +0x11a
goroutine 35 [select]:
github.com/nats-io/nats-server/v2/server.(*Server).internalSendLoop(0xc0000b1500, 0xc0002260f0)
/Users/matthiashanel/repos/nats-server/server/events.go:284 +0x1e6
created by github.com/nats-io/nats-server/v2/server.(*Server).setSystemAccount
/Users/matthiashanel/repos/nats-server/server/server.go:1051 +0x30a
goroutine 36 [select]:
github.com/nats-io/nats-server/v2/server.(*Server).startGWReplyMapExpiration.func1()
/Users/matthiashanel/repos/nats-server/server/gateway.go:3004 +0x15b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:2641 +0xc1
goroutine 37 [IO wait]:
internal/poll.runtime_pollWait(0x2898f18, 0x72, 0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00020c118, 0x72, 0x0, 0x0, 0x1694394)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00020c100, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_unix.go:384 +0x1d4
net.(*netFD).accept(0xc00020c100, 0xc000284ec8, 0x10656a0, 0xc000284f10)
/usr/local/Cellar/go/1.14.1/libexec/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc000204880, 0x15117e1, 0xc000000000, 0xc00030a060)
/usr/local/Cellar/go/1.14.1/libexec/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc000204880, 0xc00030a060, 0xc000300001, 0x0, 0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/net/tcpsock.go:261 +0x64
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections(0xc0000b1500, 0x1782600, 0xc000204880, 0x169408e, 0x6, 0xc0002069e0, 0xc0002069f0)
/Users/matthiashanel/repos/nats-server/server/server.go:1790 +0x42
created by github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop
/Users/matthiashanel/repos/nats-server/server/server.go:1768 +0xa0c
goroutine 21 [select]:
github.com/nats-io/nats-server/v2/server.(*client).writeLoop(0xc000231980)
/Users/matthiashanel/repos/nats-server/server/client.go:927 +0x31d
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func3()
/Users/matthiashanel/repos/nats-server/server/server.go:2265 +0x2a
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:2641 +0xc1
goroutine 51 [IO wait]:
internal/poll.runtime_pollWait(0x2898d58, 0x72, 0xffffffffffffffff)
/usr/local/Cellar/go/1.14.1/libexec/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc000308018, 0x72, 0x200, 0x200, 0xffffffffffffffff)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000308000, 0xc00033a200, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/internal/poll/fd_unix.go:169 +0x201
net.(*netFD).Read(0xc000308000, 0xc00033a200, 0x200, 0x200, 0x3cc906a56, 0x0, 0xbfed1228f14fb7e0)
/usr/local/Cellar/go/1.14.1/libexec/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000300008, 0xc00033a200, 0x200, 0x200, 0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.14.1/libexec/src/net/net.go:184 +0x8e
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc000310000, 0x0, 0x0, 0x0)
/Users/matthiashanel/repos/nats-server/server/client.go:1059 +0x369
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func2()
/Users/matthiashanel/repos/nats-server/server/server.go:2262 +0x45
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:2641 +0xc1
goroutine 52 [select]:
github.com/nats-io/nats-server/v2/server.(*client).writeLoop(0xc000310000)
/Users/matthiashanel/repos/nats-server/server/client.go:927 +0x31d
github.com/nats-io/nats-server/v2/server.(*Server).createClient.func3()
/Users/matthiashanel/repos/nats-server/server/server.go:2265 +0x2a
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/matthiashanel/repos/nats-server/server/server.go:2641 +0xc1
15:05:57 Disconnected due to: EOF, will attempt reconnects for 10m
15:05:57 Disconnected due to: EOF, will attempt reconnects for 10m
[1] - 71641 exit 2 nats-server -c ac.cfg
nats: error: nats: timeout, try --help
>
```
|
https://github.com/nats-io/nats-server/issues/1769
|
https://github.com/nats-io/nats-server/pull/1773
|
e7106df78bc826be8a65336aaf29e965d26e3a9a
|
eb403ed4d01577d0a3a12fdf47a1b5d2bb5bd648
| 2020-12-11T20:11:39Z |
go
| 2020-12-14T16:20:55Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,751 |
["server/leafnode_test.go", "server/parser.go", "server/route.go"]
|
No response from any moment in leaf node configuration
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.2.0-beta.33
master branch (2e28de50)
#### OS/Container environment:
Windows 10
#### Steps or code to reproduce the issue:
The configuration is the same as https://github.com/nats-io/nats-server/issues/1743
While testing the fix, I found another issue.
I used nats-rply and a simple request client written in Go.
The simple client code is as follows.
```
package main

import (
	"flag"
	"log"
	"os"
	"strconv"
	"sync/atomic"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	var urls = flag.String("s", nats.DefaultURL, "The nats server URLs (separated by comma)")
	var userCreds = flag.String("creds", "", "User Credentials File")
	var showHelp = flag.Bool("h", false, "Show help message")
	log.SetFlags(0)
	flag.Parse()
	if *showHelp {
		flag.PrintDefaults()
		os.Exit(0)
	}
	args := flag.Args()
	// Subject, payload and count are all required (args[2] is used below).
	if len(args) < 3 {
		log.Fatalf("usage: request-client [-s server] [-creds file] <subject> <msg> <count>")
	}
	subj, payload, numStr := args[0], []byte(args[1]), args[2]
	num, err := strconv.Atoi(numStr)
	if err != nil {
		log.Fatalf("invalid message count %q: %v", numStr, err)
	}
	// Connect Options.
	opts := []nats.Option{nats.Name("NATS Sample Requestor")}
	// Use UserCredentials
	if *userCreds != "" {
		opts = append(opts, nats.UserCredentials(*userCreds))
	}
	// Connect to NATS
	nc, err := nats.Connect(*urls, opts...)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	var reqCount, resCount int32
	// Subscribe to the reply subject; the counter is updated atomically
	// because the callback runs on another goroutine.
	reSubject := "RE"
	subRe, err := nc.QueueSubscribe(reSubject, reSubject, func(m *nats.Msg) {
		n := atomic.AddInt32(&resCount, 1)
		log.Printf("Received [%v,%d] : '%s'", m.Subject, n, string(m.Data))
	})
	if err != nil {
		log.Fatal(err)
	}
	subRe.SetPendingLimits(-1, -1)
	for i := 0; i < num; i++ {
		err := nc.PublishRequest(subj, reSubject, payload)
		if err != nil {
			if nc.LastError() != nil {
				log.Fatalf("%v for request", nc.LastError())
			}
			log.Fatalf("%v for request", err)
		}
		reqCount++
		log.Printf("Published [%s,%d] : '%s'", subj, reqCount, payload)
	}
	// Wait for all responses (polling instead of the original busy-wait,
	// which was both a data race and invalid Go syntax).
	for atomic.LoadInt32(&resCount) < reqCount {
		time.Sleep(10 * time.Millisecond)
	}
}
}
```
This simple client subscribes to 'RE' to receive responses.
It then calls PublishRequest num times.
**Step of execution**
1) nats-rply subscribe 'foo' on srv-2
2) execute request-client foo hello 10 on leaf-2
3) execute request-client repeatedly
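For concreteness, the invocations might look like this (a sketch; `request-client` is the Go program above, and the server URLs are assumptions based on the port layout of the referenced configs):
```sh
# sketch: responder on srv-2, requester on leaf-2
nats-rply -s nats://localhost:4223 foo "response"        # step 1
./request-client -s nats://localhost:4213 foo hello 10   # step 2
./request-client -s nats://localhost:4213 foo hello 10   # step 3: repeat until replies stop
```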
#### Expected result:
The request-client receives all responses.
#### Actual result:
At some point, the request-client stops receiving responses.

Looking at the srv-2 log, the count of responses published to 'RE' is 20, but the count of RMSG is 17.
Also, a 'processRemoteUnsub parse error' is logged on srv-2.
```
[22940] 2020/11/27 08:15:42.506173 [TRC] [::1]:6222 - rid:2 - <<- [RS- ncs $G RE RE]
[22940] 2020/11/27 08:15:42.506173 [ERR] [::1]:6222 - rid:2 - processRemoteUnsub parse error: 'ncs $G RE RE'
[22940] 2020/11/27 08:15:42.506173 [INF] [::1]:6222 - rid:2 - Router connection closed: Protocol Violation
[22940] 2020/11/27 08:15:42.506173 [DBG] Attempting reconnect for solicited route "nats://localhost:6222"
```
The issue is not reproducible in the release version.
Full logs are attached.
[1127_natslog.zip](https://github.com/nats-io/nats-server/files/5605450/1127_natslog.zip)
|
https://github.com/nats-io/nats-server/issues/1751
|
https://github.com/nats-io/nats-server/pull/1755
|
ebe63db3e3ed594560892aed2b2e81f3246ef2f3
|
5168e3b1c31af9ade1645a5ed308a2cc18a75ebf
| 2020-11-27T01:13:58Z |
go
| 2020-11-30T21:08:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,743 |
["server/leafnode_test.go", "server/parser.go"]
|
router parser error in leaf node configuration
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.2.0-beta.33 (master branch a3af4915)
#### OS/Container environment:
windows 10
nats-rply/nats-req
#### Steps or code to reproduce the issue:
There are two clusters, a srv cluster and a leaf cluster.
Each cluster has 2 nodes (srv-1,2 / leaf-1,2).
The two clusters are connected by leaf connections.
The configurations are as follows:
srv-1
```
port: 4222
http: 8222
cluster {
name: "ncs"
port: 6222
routes: [
nats://localhost:6222
nats://localhost:6223
]
}
leafnodes {
port: 7422
}
```
srv-2
```
port: 4223
http: 8223
cluster {
name: "ncs"
port: 6223
routes: [
nats://localhost:6222
nats://localhost:6223
]
}
leafnodes {
port: 7423
}
```
leaf-1
```
port: 4212
http: 8212
leafnodes {
remotes = [
{
urls: [
"nats-leaf://localhost:7422",
"nats-leaf://localhost:7423"
]
}
]
}
cluster {
port: 6212
name: "ncs"
routes: [
nats://localhost:6212
nats://localhost:6213
]
}
```
leaf-2
```
port: 4213
http: 8213
leafnodes {
remotes = [
{
urls: [
"nats-leaf://localhost:7422",
"nats-leaf://localhost:7423"
]
}
]
}
cluster {
port: 6213
name: "ncs"
routes: [
nats://localhost:6212
nats://localhost:6213
]
}
```
1) nats-rply subscribe 'foo' on srv-2
2) nats-req publish messages longer than 100 bytes continuously on leaf-2
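One plausible way to drive this reproduction from a shell (a sketch; the ports come from the configs above, and `nats-rply`/`nats-req` are the nats.go example programs):
```sh
# responder on srv-2
nats-rply -s nats://localhost:4223 foo "response"
# requester loop on leaf-2, 128-byte payload (longer than 100 bytes)
payload=$(printf 'x%.0s' {1..128})
while true; do nats-req -s nats://localhost:4213 foo "$payload"; done
```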
#### Expected result:
nats-req receives all responses.
#### Actual result:
nats-req does not receive some responses, and a router parser error is logged by the nats-server

error log on srv-1
```
[21800] 2020/11/24 14:19:32.405357 [[31mERR[0m] [::1]:6223 - rid:8 - Route Error 'Unknown Protocol Operation'
[21800] 2020/11/24 14:19:32.405357 [[32mINF[0m] [::1]:6223 - rid:8 - Router connection closed: Parse Error
```
error log on srv-2
```
[18616] 2020/11/24 14:19:32.405357 [[31mERR[0m] [::1]:30043 - rid:7 - Router parser ERROR, state=40, i=157: proto='""...'
[18616] 2020/11/24 14:19:32.405357 [[32mINF[0m] [::1]:30043 - rid:7 - Router connection closed: Protocol Violation
```
Full logs are attached.
[route-error-log.zip](https://github.com/nats-io/nats-server/files/5587877/route-error-log.zip)
|
https://github.com/nats-io/nats-server/issues/1743
|
https://github.com/nats-io/nats-server/pull/1745
|
120b031ffdf58837310eb41c7f7f8fa5039f3208
|
2e28de50828b6237dc80b24d03b711f6bd8e32b5
| 2020-11-24T06:50:15Z |
go
| 2020-11-24T22:40:58Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,740 |
["server/accounts.go", "server/auth.go", "server/jwt_test.go"]
|
user jwts that set account issuer to be same as the account public key fail authentication
|
A user JWT that sets "issuer_account" to the same value as the account's public key gets rejected.
The server has a check that requires the issuer to match one of the account's signing keys; however, the account's public key is stored in the `Name` field, so the check fails and the user is rejected.
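A minimal sketch that constructs such a user JWT (assuming the `nats-io/jwt/v2` and `nats-io/nkeys` libraries; it only builds the offending token and does not start a server):
```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/jwt/v2"
	"github.com/nats-io/nkeys"
)

func main() {
	// Account identity key and a user key.
	akp, err := nkeys.CreateAccount()
	if err != nil {
		log.Fatal(err)
	}
	apub, _ := akp.PublicKey()

	ukp, err := nkeys.CreateUser()
	if err != nil {
		log.Fatal(err)
	}
	upub, _ := ukp.PublicKey()

	// User claim whose issuer_account equals the account's own public key.
	uc := jwt.NewUserClaims(upub)
	uc.IssuerAccount = apub

	// Signed with the account identity key itself (no separate signing key);
	// per this report, the server rejects such users at authentication.
	token, err := uc.Encode(akp)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(token)
}
```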
|
https://github.com/nats-io/nats-server/issues/1740
|
https://github.com/nats-io/nats-server/pull/1741
|
c199bec7c3bf3a6f6bb277938a69d046fe3ac1a4
|
25a5fa62ebc8ed7e4c153d9002eeb874625b1db1
| 2020-11-23T23:37:34Z |
go
| 2020-12-09T00:12:53Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,736 |
["server/const.go", "server/jetstream.go", "server/jetstream_api.go", "server/reload.go", "test/jetstream_test.go"]
|
SIGHUP reload loses JetStream availability
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
* `nats-server` v2.2.0-beta.33 (git commit 4d51a41d) with one patch, the one from #1734 (to unbreak FreeBSD)
* client `nats` CLI tool from the Jetstream repo, `v0.0.19-56-g4d382ff` with build modified only to replace nats-server module with `../nats-server` to pull in the FreeBSD fix
* client `nats` CLI tool, on Linux, version `v0.0.19-27-g8d386d1` built with clean repo
<details><summary><tt>/usr/local/sbin/nats-server -c /etc/nats/nats.conf -DV</tt></summary>
```
[56863] 2020/11/22 20:51:28.835595 [INF] Starting nats-server version 2.2.0-beta.33
[56863] 2020/11/22 20:51:28.835736 [DBG] Go build version go1.15.5
[56863] 2020/11/22 20:51:28.835744 [INF] Git commit [4d51a41d]
[56863] 2020/11/22 20:51:28.835823 [WRN] Plaintext passwords detected, use nkeys or bcrypt
[56863] 2020/11/22 20:51:28.835833 [INF] Using configuration file: /etc/nats/nats.conf
[56863] 2020/11/22 20:51:28.836179 [INF] Starting JetStream
[56863] 2020/11/22 20:51:28.836747 [WRN] _ ___ _____ ___ _____ ___ ___ _ __ __
[56863] 2020/11/22 20:51:28.836763 [WRN] _ | | __|_ _/ __|_ _| _ \ __| /_\ | \/ |
[56863] 2020/11/22 20:51:28.836770 [WRN] | || | _| | | \__ \ | | | / _| / _ \| |\/| |
[56863] 2020/11/22 20:51:28.836776 [WRN] \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_| |_|
[56863] 2020/11/22 20:51:28.836782 [WRN]
[56863] 2020/11/22 20:51:28.836787 [WRN] _ _
[56863] 2020/11/22 20:51:28.836793 [WRN] | |__ ___| |_ __ _
[56863] 2020/11/22 20:51:28.836798 [WRN] | '_ \/ -_) _/ _` |
[56863] 2020/11/22 20:51:28.836803 [WRN] |_.__/\___|\__\__,_|
[56863] 2020/11/22 20:51:28.836809 [WRN]
[56863] 2020/11/22 20:51:28.836814 [WRN] JetStream is a Beta feature
[56863] 2020/11/22 20:51:28.836819 [WRN] https://github.com/nats-io/jetstream
[56863] 2020/11/22 20:51:28.836825 [INF]
[56863] 2020/11/22 20:51:28.836830 [INF] ----------- JETSTREAM -----------
[56863] 2020/11/22 20:51:28.836848 [INF] Max Memory: 32.00 MB
[56863] 2020/11/22 20:51:28.836860 [INF] Max Storage: 16.00 GB
[56863] 2020/11/22 20:51:28.836870 [INF] Store Directory: "/srv/nats/js1/store"
[56863] 2020/11/22 20:51:28.836877 [DBG] Exports:
[56863] 2020/11/22 20:51:28.836884 [DBG] $JS.API.INFO
[56863] 2020/11/22 20:51:28.836894 [DBG] $JS.API.STREAM.TEMPLATE.CREATE.*
[56863] 2020/11/22 20:51:28.836903 [DBG] $JS.API.STREAM.TEMPLATE.NAMES
[56863] 2020/11/22 20:51:28.836912 [DBG] $JS.API.STREAM.TEMPLATE.INFO.*
[56863] 2020/11/22 20:51:28.836921 [DBG] $JS.API.STREAM.TEMPLATE.DELETE.*
[56863] 2020/11/22 20:51:28.836930 [DBG] $JS.API.STREAM.CREATE.*
[56863] 2020/11/22 20:51:28.836939 [DBG] $JS.API.STREAM.UPDATE.*
[56863] 2020/11/22 20:51:28.836948 [DBG] $JS.API.STREAM.NAMES
[56863] 2020/11/22 20:51:28.836959 [DBG] $JS.API.STREAM.LIST
[56863] 2020/11/22 20:51:28.836969 [DBG] $JS.API.STREAM.INFO.*
[56863] 2020/11/22 20:51:28.836979 [DBG] $JS.API.STREAM.LOOKUP
[56863] 2020/11/22 20:51:28.836989 [DBG] $JS.API.STREAM.DELETE.*
[56863] 2020/11/22 20:51:28.836999 [DBG] $JS.API.STREAM.PURGE.*
[56863] 2020/11/22 20:51:28.837012 [DBG] $JS.API.STREAM.SNAPSHOT.*
[56863] 2020/11/22 20:51:28.837022 [DBG] $JS.API.STREAM.RESTORE.*
[56863] 2020/11/22 20:51:28.837032 [DBG] $JS.API.STREAM.MSG.DELETE.*
[56863] 2020/11/22 20:51:28.837043 [DBG] $JS.API.STREAM.MSG.GET.*
[56863] 2020/11/22 20:51:28.837053 [DBG] $JS.API.CONSUMER.CREATE.*
[56863] 2020/11/22 20:51:28.837064 [DBG] $JS.API.CONSUMER.DURABLE.CREATE.*.*
[56863] 2020/11/22 20:51:28.837077 [DBG] $JS.API.CONSUMER.NAMES.*
[56863] 2020/11/22 20:51:28.837089 [DBG] $JS.API.CONSUMER.LIST.*
[56863] 2020/11/22 20:51:28.837099 [DBG] $JS.API.CONSUMER.INFO.*.*
[56863] 2020/11/22 20:51:28.837110 [DBG] $JS.API.CONSUMER.DELETE.*.*
[56863] 2020/11/22 20:51:28.837121 [INF] ----------------------------------------
[56863] 2020/11/22 20:51:28.837377 [DBG] Enabled JetStream for account "Foobar"
[56863] 2020/11/22 20:51:28.837394 [DBG] Max Memory: -1 B
[56863] 2020/11/22 20:51:28.837404 [DBG] Max Storage: -1 B
[56863] 2020/11/22 20:51:28.837450 [INF] Recovering JetStream state for account "Foobar"
[56863] 2020/11/22 20:51:28.837683 [INF] JetStream state for account "Foobar" recovered
[56863] 2020/11/22 20:51:28.837906 [DBG] Enabled JetStream for account "Anonymous"
[56863] 2020/11/22 20:51:28.837921 [DBG] Max Memory: -1 B
[56863] 2020/11/22 20:51:28.837931 [DBG] Max Storage: -1 B
[56863] 2020/11/22 20:51:28.837967 [INF] Recovering JetStream state for account "Anonymous"
[56863] 2020/11/22 20:51:28.839761 [INF] Restored 7 messages for Stream "SVEN-GIT-MAINLOG"
[56863] 2020/11/22 20:51:28.839946 [INF] Recovering 1 Consumers for Stream - "SVEN-GIT-MAINLOG"
[56863] 2020/11/22 20:51:28.841803 [INF] Restored 1 messages for Stream "SVEN-GIT-PUSHES"
[56863] 2020/11/22 20:51:28.843210 [INF] Restored 1 messages for Stream "SVEN-GIT-REFS"
[56863] 2020/11/22 20:51:28.843418 [INF] JetStream state for account "Anonymous" recovered
[56863] 2020/11/22 20:51:28.843577 [INF] Starting http monitor on 0.0.0.0:8222
[56863] 2020/11/22 20:51:28.843654 [INF] Listening for client connections on 0.0.0.0:4222
[56863] 2020/11/22 20:51:28.843665 [INF] TLS required for client connections
[56863] 2020/11/22 20:51:28.843673 [INF] Server id is NBFQQQMBBS4JA4DXYSKM6UJMX4CEQKNN7YUVQJRYLL6YIY7Y6MCWDU4L
[56863] 2020/11/22 20:51:28.843680 [INF] Server name is NBFQQQMBBS4JA4DXYSKM6UJMX4CEQKNN7YUVQJRYLL6YIY7Y6MCWDU4L
[56863] 2020/11/22 20:51:28.843686 [INF] Server is ready
```
</details>
<details><summary><tt>nats.conf</tt>, partially redacted</summary>
```conf
port: 4222
monitor_port: 8222
client_advertise: nats.lan
jetstream {
store_dir: "/srv/nats/js1/store"
max_memory_store: 32MiB
max_file_store: 16GiB
}
tls {
ca_file: /etc/ssl/cert.pem
cert_file: /etc/nats/tls.crt
key_file: /etc/nats/tls.key
}
accounts: {
Anonymous: {
jetstream: enabled
users: [
{user: anonymous, password: "none"}
]
}
System: {
users: [
{user: control, password: "$crypted-pw-censored"}
]
}
Foobar: {
jetstream: enabled
users: [
{user: foo1, password: "$crypted-pw-censored"}
]
}
}
system_account: System
no_auth_user: anonymous
```
</details>
#### OS/Container environment:
FreeBSD 11.4, but I think this causes a symptom I saw in an Alpine Linux 3.12.1 container too.
#### Steps or code to reproduce the issue:
1. Start NATS server. See it load JS data store, have client able to query JS streams
2. Send server a SIGHUP
3. See the `nats: error: could not create Stream: nats: no responders available for request` error message
4. Restart server, have nats client able to query streams
5. Send server a SIGHUP, see that this is repeatable: once one or more SIGHUPs have been received by the NATS server, it stops responding to JetStream client requests (a minimal probe for this is sketched below)
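A minimal probe for step 3 (a sketch, assuming the nats.go client and a plain local listener; TLS and credential options from the config above would need to be added for this exact setup):
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Add nats.RootCAs(...)/credentials as required by the server config.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Before the SIGHUP this returns JetStream account info; after the
	// reload it fails with "no responders available for request".
	msg, err := nc.Request("$JS.API.INFO", nil, 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(msg.Data))
}
```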
#### Expected result:
Server should work right after a reload.
#### Actual result:
Reload breaks JetStream.
In the logs here, I hit enter for a blank line between the client request, the reload, and the second client request.
<details><summary>nats -DV output across the reload</summary>
```
[58811] 2020/11/22 21:02:27.007071 [DBG] 192.168.120.1:57228 - cid:9 - Client connection created
[58811] 2020/11/22 21:02:27.007594 [DBG] 192.168.120.1:57228 - cid:9 - Starting TLS client connection handshake
[58811] 2020/11/22 21:02:27.054567 [DBG] 192.168.120.1:57228 - cid:9 - TLS handshake complete
[58811] 2020/11/22 21:02:27.054595 [DBG] 192.168.120.1:57228 - cid:9 - TLS version 1.3, cipher suite TLS_AES_128_GCM_SHA256
[58811] 2020/11/22 21:02:27.054733 [TRC] 192.168.120.1:57228 - cid:9 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS CLI Version v0.0.19-27-g8d386
d1","lang":"go","version":"1.11.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
[58811] 2020/11/22 21:02:27.054985 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [PING]
[58811] 2020/11/22 21:02:27.055004 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - ->> [PONG]
[58811] 2020/11/22 21:02:27.056574 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [SUB _INBOX.e2g0RbGKlGGqOaTYEqwBir.* 1]
[58811] 2020/11/22 21:02:27.056629 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [PUB $JS.API.INFO _INBOX.e2g0RbGKlGGqOaTYEqwBir.ZrEW2a
tg 0]
[58811] 2020/11/22 21:02:27.056647 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- MSG_PAYLOAD: [""]
[58811] 2020/11/22 21:02:27.057170 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - ->> [MSG _INBOX.e2g0RbGKlGGqOaTYEqwBir.ZrEW2atg 1 223]
[58811] 2020/11/22 21:02:27.061477 [DBG] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - Client connection closed: Client Closed
[58811] 2020/11/22 21:02:27.061524 [TRC] 192.168.120.1:57228 - cid:9 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <-> [DELSUB 1]
[58811] 2020/11/22 21:02:40.664207 [DBG] Trapped "hangup" signal
[58811] 2020/11/22 21:02:40.877400 [INF] Reloaded: authorization users
[58811] 2020/11/22 21:02:40.877431 [INF] Reloaded: accounts
[58811] 2020/11/22 21:02:40.877442 [INF] Reloaded: tls = enabled
[58811] 2020/11/22 21:02:40.877545 [TRC] ACCOUNT - <-> [DELSUB 1]
[58811] 2020/11/22 21:02:40.877572 [DBG] ACCOUNT - Account connection closed: Internal Client
[58811] 2020/11/22 21:02:40.877611 [TRC] ACCOUNT - <-> [DELSUB 23]
[58811] 2020/11/22 21:02:40.877634 [TRC] ACCOUNT - <-> [DELSUB 2]
[58811] 2020/11/22 21:02:40.877653 [TRC] ACCOUNT - <-> [DELSUB 8]
[58811] 2020/11/22 21:02:40.877672 [TRC] ACCOUNT - <-> [DELSUB 13]
[58811] 2020/11/22 21:02:40.877691 [TRC] ACCOUNT - <-> [DELSUB 14]
[58811] 2020/11/22 21:02:40.877709 [TRC] ACCOUNT - <-> [DELSUB 22]
[58811] 2020/11/22 21:02:40.877728 [TRC] ACCOUNT - <-> [DELSUB 21]
[58811] 2020/11/22 21:02:40.877746 [TRC] ACCOUNT - <-> [DELSUB 3]
[58811] 2020/11/22 21:02:40.877764 [TRC] ACCOUNT - <-> [DELSUB 5]
[58811] 2020/11/22 21:02:40.877783 [TRC] ACCOUNT - <-> [DELSUB 9]
[58811] 2020/11/22 21:02:40.877803 [TRC] ACCOUNT - <-> [DELSUB 11]
[58811] 2020/11/22 21:02:40.877822 [TRC] ACCOUNT - <-> [DELSUB 20]
[58811] 2020/11/22 21:02:40.877840 [TRC] ACCOUNT - <-> [DELSUB 6]
[58811] 2020/11/22 21:02:40.877857 [TRC] ACCOUNT - <-> [DELSUB 12]
[58811] 2020/11/22 21:02:40.877878 [TRC] ACCOUNT - <-> [DELSUB 16]
[58811] 2020/11/22 21:02:40.877896 [TRC] ACCOUNT - <-> [DELSUB 18]
[58811] 2020/11/22 21:02:40.877914 [TRC] ACCOUNT - <-> [DELSUB 19]
[58811] 2020/11/22 21:02:40.877932 [TRC] ACCOUNT - <-> [DELSUB 17]
[58811] 2020/11/22 21:02:40.877951 [TRC] ACCOUNT - <-> [DELSUB 1]
[58811] 2020/11/22 21:02:40.877968 [TRC] ACCOUNT - <-> [DELSUB 4]
[58811] 2020/11/22 21:02:40.877988 [TRC] ACCOUNT - <-> [DELSUB 7]
[58811] 2020/11/22 21:02:40.878006 [TRC] ACCOUNT - <-> [DELSUB 10]
[58811] 2020/11/22 21:02:40.878024 [TRC] ACCOUNT - <-> [DELSUB 15]
[58811] 2020/11/22 21:02:40.878040 [DBG] ACCOUNT - Account connection closed: Internal Client
[58811] 2020/11/22 21:02:40.878056 [DBG] ACCOUNT - Account connection closed: Internal Client
[58811] 2020/11/22 21:02:40.878079 [TRC] ACCOUNT - <-> [DELSUB 15]
[58811] 2020/11/22 21:02:40.878099 [TRC] ACCOUNT - <-> [DELSUB 18]
[58811] 2020/11/22 21:02:40.878117 [TRC] ACCOUNT - <-> [DELSUB 23]
[58811] 2020/11/22 21:02:40.878135 [TRC] ACCOUNT - <-> [DELSUB 11]
[58811] 2020/11/22 21:02:40.878154 [TRC] ACCOUNT - <-> [DELSUB 20]
[58811] 2020/11/22 21:02:40.878172 [TRC] ACCOUNT - <-> [DELSUB 22]
[58811] 2020/11/22 21:02:40.878190 [TRC] ACCOUNT - <-> [DELSUB 2]
[58811] 2020/11/22 21:02:40.878208 [TRC] ACCOUNT - <-> [DELSUB 3]
[58811] 2020/11/22 21:02:40.878228 [TRC] ACCOUNT - <-> [DELSUB 8]
[58811] 2020/11/22 21:02:40.878247 [TRC] ACCOUNT - <-> [DELSUB 10]
[58811] 2020/11/22 21:02:40.878266 [TRC] ACCOUNT - <-> [DELSUB 12]
[58811] 2020/11/22 21:02:40.878284 [TRC] ACCOUNT - <-> [DELSUB 14]
[58811] 2020/11/22 21:02:40.878302 [TRC] ACCOUNT - <-> [DELSUB 16]
[58811] 2020/11/22 21:02:40.878320 [TRC] ACCOUNT - <-> [DELSUB 17]
[58811] 2020/11/22 21:02:40.878339 [TRC] ACCOUNT - <-> [DELSUB 1]
[58811] 2020/11/22 21:02:40.878357 [TRC] ACCOUNT - <-> [DELSUB 5]
[58811] 2020/11/22 21:02:40.878376 [TRC] ACCOUNT - <-> [DELSUB 7]
[58811] 2020/11/22 21:02:40.878394 [TRC] ACCOUNT - <-> [DELSUB 9]
[58811] 2020/11/22 21:02:40.878411 [TRC] ACCOUNT - <-> [DELSUB 19]
[58811] 2020/11/22 21:02:40.878429 [TRC] ACCOUNT - <-> [DELSUB 4]
[58811] 2020/11/22 21:02:40.878447 [TRC] ACCOUNT - <-> [DELSUB 6]
[58811] 2020/11/22 21:02:40.878466 [TRC] ACCOUNT - <-> [DELSUB 13]
[58811] 2020/11/22 21:02:40.878484 [TRC] ACCOUNT - <-> [DELSUB 21]
[58811] 2020/11/22 21:02:40.878500 [DBG] ACCOUNT - Account connection closed: Internal Client
[58811] 2020/11/22 21:02:40.878536 [INF] Reloaded server configuration
[58811] 2020/11/22 21:02:46.959419 [DBG] 192.168.120.1:57230 - cid:11 - Client connection created
[58811] 2020/11/22 21:02:46.959556 [DBG] 192.168.120.1:57230 - cid:11 - Starting TLS client connection handshake
[58811] 2020/11/22 21:02:47.019747 [DBG] 192.168.120.1:57230 - cid:11 - TLS handshake complete
[58811] 2020/11/22 21:02:47.019775 [DBG] 192.168.120.1:57230 - cid:11 - TLS version 1.3, cipher suite TLS_AES_128_GCM_SHA256
[58811] 2020/11/22 21:02:47.019910 [TRC] 192.168.120.1:57230 - cid:11 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS CLI Version v0.0.19-27-g8d38
6d1","lang":"go","version":"1.11.0","protocol":1,"echo":true,"headers":true,"no_responders":true}]
[58811] 2020/11/22 21:02:47.020014 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [PING]
[58811] 2020/11/22 21:02:47.020033 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - ->> [PONG]
[58811] 2020/11/22 21:02:47.021745 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [SUB _INBOX.7sX7qtrb7vQnbJxaXwD9F2.* 1]
[58811] 2020/11/22 21:02:47.021792 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- [PUB $JS.API.INFO _INBOX.7sX7qtrb7vQnbJxaXwD9F2.GcNCP
j2g 0]
[58811] 2020/11/22 21:02:47.021814 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <<- MSG_PAYLOAD: [""]
[58811] 2020/11/22 21:02:47.024865 [DBG] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - Client connection closed: Client Closed
[58811] 2020/11/22 21:02:47.024905 [TRC] 192.168.120.1:57230 - cid:11 - "v1.11.0:go:NATS CLI Version v0.0.19-27-g8d386d1" - <-> [DELSUB 1]
^C[58811] 2020/11/22 21:02:49.844273 [DBG] Trapped "interrupt" signal
[58811] 2020/11/22 21:02:49.844379 [TRC] JETSTREAM - <-> [DELSUB 2]
[58811] 2020/11/22 21:02:49.844408 [TRC] JETSTREAM - <-> [DELSUB 3]
[58811] 2020/11/22 21:02:49.844512 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[58811] 2020/11/22 21:02:49.846995 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[58811] 2020/11/22 21:02:49.847203 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[58811] 2020/11/22 21:02:49.847342 [DBG] JETSTREAM - JetStream connection closed: Client Closed
[58811] 2020/11/22 21:02:49.847474 [INF] Initiating Shutdown...
[58811] 2020/11/22 21:02:49.847590 [DBG] Client accept loop exiting..
[58811] 2020/11/22 21:02:49.847609 [INF] Server Exiting..
```
</details>
|
https://github.com/nats-io/nats-server/issues/1736
|
https://github.com/nats-io/nats-server/pull/1756
|
5168e3b1c31af9ade1645a5ed308a2cc18a75ebf
|
2e26d9195eb155d01a5e5c4542448595c7e22c42
| 2020-11-23T02:06:16Z |
go
| 2020-12-01T02:38:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,730 |
["server/client.go", "server/leafnode.go", "server/leafnode_test.go", "server/monitor.go"]
|
leaf node eviction of old connections seems racy (for mis configured setups)
|
## Defect
Leaf node configuration where server A accepts incoming leaf node connections on $G.
Server B has two remotes to server A. This is a misconfiguration, but:
Server A only sometimes disconnects server B (there is a race; the initial startup usually shows no issue).
Server B is the misconfigured one, yet none of its output even hints at an issue.
The picture shows the config and servers A and B not reporting an issue; that is the race condition in A.

|
https://github.com/nats-io/nats-server/issues/1730
|
https://github.com/nats-io/nats-server/pull/1738
|
c0bc788c6dd5cded0e6707ed3c6c6feabe4aaaa3
|
637717a9f34e10b6598d3f54f715df231c9e9431
| 2020-11-20T18:47:51Z |
go
| 2020-11-24T16:22:11Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,723 |
["internal/ldap/dn.go", "server/auth.go", "test/configs/certs/rdns/client-e.key", "test/configs/certs/rdns/client-e.pem", "test/configs/certs/rdns/client-f.key", "test/configs/certs/rdns/client-f.pem", "test/tls_test.go"]
|
Multi Value RDNs are not supported for auth with TLS verify map
|
## Defect
Given a cert with a DN that includes a multi value RDN created like:
```sh
openssl req -newkey rsa:2048 \
-nodes -keyout client-$CLIENT_ID.key \
-multivalue-rdn \
-subj "/DC=org/DC=OpenSSL/DC=DEV+O=users/CN=John Doe" \
-addext extendedKeyUsage=clientAuth -out client-$CLIENT_ID.csr
```
When using the RFC 2253 representation of the subject as the auth string, validation will fail because only the first value of a multi-value RDN is considered.
```sh
openssl x509 -in client-$CLIENT_ID.pem -noout -subject -nameopt RFC2253
subject=CN=John Doe,DC=DEV+O=users,DC=OpenSSL,DC=org
```
```hcl
authorization {
users = [
{ user = "CN=John Doe,DC=DEV+O=users,DC=OpenSSL,DC=org" }
]
}
```
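The subject string can also be obtained from Go rather than OpenSSL (a sketch; note that `pkix.Name.String()` follows RFC 2253, but its rendering of multi-value RDNs may not match OpenSSL byte for byte):
```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Usage: go run main.go client-e.pem
	if len(os.Args) < 2 {
		log.Fatal("usage: printsubject <cert.pem>")
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// e.g. CN=John Doe,DC=DEV+O=users,DC=OpenSSL,DC=org
	fmt.Println(cert.Subject.String())
}
```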
|
https://github.com/nats-io/nats-server/issues/1723
|
https://github.com/nats-io/nats-server/pull/1732
|
2e26d9195eb155d01a5e5c4542448595c7e22c42
|
a9a6bdc04fc3283b2aeef37e643ba3db52069e89
| 2020-11-18T01:33:28Z |
go
| 2020-12-02T17:25:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,722 |
["server/client.go", "server/events_test.go", "server/leafnode_test.go"]
|
Duplicate message occurs when configuring leaf nodes between clusters
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
### Versions of `nats-server` and affected client libraries used:
2.2.0-beta.32 (latest master branch, 958c0484)
nats sample client (nats-req/nats-rply)
### OS/Container environment:
windows 10
### Steps or code to reproduce the issue:
There are two clusters, a srv cluster and a leaf cluster.
Each cluster has 3 nodes (srv-1,2,3 / leaf-1,2,3).
As shown in the figure below, the two clusters are connected by leaf connections.

[nats-config.zip](https://github.com/nats-io/nats-server/files/5556939/nats-config.zip)
#### Issue 1)
Initially I used the 2.1.7 release version.
1-1) Client (nats-rply) subscribes to Foo on leaf-1
1-2) Client (nats-req) publishes one message to Foo on leaf-2
1-3) Client (nats-rply) receives the message twice on leaf-1
Referring to issue https://github.com/nats-io/nats-server/issues/1585, I updated the nats-server version to 2.2.0-beta.32 and configured the same cluster name 'a' on both the leaf cluster and the srv cluster.
After this change, the problem was solved.
Is this solution right?
Is this configuration correct?
#### Issue 2)
The same queue subscription (subject: foo, queue group: NATS-RPLY-22) exists on both srv-1 and leaf-1.
Client (nats-req) publishes a message to foo on leaf-2.
Randomly, either srv-1 or leaf-1 receives the message, or both srv-1 and leaf-1 receive it.
This issue also occurred in 2.2.0-beta.
I attach the nats-server -DV log and a capture file.
[nats-log.zip](https://github.com/nats-io/nats-server/files/5556959/nats-log.zip)
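A minimal duplicate-delivery checker for issue 2 (a sketch, assuming the nats.go client; the client ports mirror the layout of the attached configs and are otherwise assumptions):
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// One queue subscriber per server, same subject and queue group.
	for _, url := range []string{"nats://localhost:4222", "nats://localhost:4212"} {
		nc, err := nats.Connect(url)
		if err != nil {
			log.Fatal(err)
		}
		defer nc.Close()
		u := url
		if _, err := nc.QueueSubscribe("foo", "NATS-RPLY-22", func(m *nats.Msg) {
			// If both lines print for a single publish, the message was duplicated.
			fmt.Printf("%s received: %s\n", u, string(m.Data))
		}); err != nil {
			log.Fatal(err)
		}
	}

	// Publish once via leaf-2; exactly one line should be printed.
	pub, err := nats.Connect("nats://localhost:4213")
	if err != nil {
		log.Fatal(err)
	}
	defer pub.Close()
	if err := pub.Publish("foo", []byte("hello")); err != nil {
		log.Fatal(err)
	}
	pub.Flush()
	time.Sleep(time.Second)
}
```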
### Expected result:
Only one node (srv-1 or leaf-1) receives the message.
(It would be better if leaf-1 always received it.
Is there a priority for routing messages between the leaf cluster and the remote cluster?)
### Actual result:
Randomly, either srv-1 or leaf-1 receives the message, or both receive it.
|
https://github.com/nats-io/nats-server/issues/1722
|
https://github.com/nats-io/nats-server/pull/1725
|
bfd388e8b4e7472bdc53e572eab46e186ed5064e
|
a3af49152e6f0ec0bee629dfa875de27afb1eb71
| 2020-11-18T00:44:55Z |
go
| 2020-11-19T16:06:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,647 |
["main.go"]
|
--no_advertise flag documented twice with different descriptions
|
Hi,
We have noticed that the `--no_advertise` flag is documented twice: once in [1], described as "Do not advertise known cluster information to clients", and once in [2], described as "When set to true, do not advertise this server to clients". We have checked the code and were only able to find the [flag](https://github.com/nats-io/nats-server/blob/e846dab45dad399eefb0ac46b05be7c92c56e568/main.go#L70) as a cluster option, which matches the former description in the documentation.
We are raising the question because the option has recently been added to nats-release with this [PR](https://github.com/cloudfoundry/nats-release/commit/8cd7d93a8fc29d8226a31a5aecd87e8701da3c0e). The flag was introduced with the description "When configured to true, this nats server will not be advertised to any nats clients." and we are not sure how to consume it.
Could you please clarify what exactly the expected behaviour is if the flag is set to true? Which of these descriptions is valid?
[1] https://docs.nats.io/nats-server/flags#cluster-options
[2] https://docs.nats.io/nats-server/configuration/clustering/cluster_config
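For reference, the option the code implements is the cluster-scoped one; in a server config it would look like this (a sketch):
```conf
cluster {
  port: 6222
  # Do not advertise this server's client URL in the cluster
  # information (INFO protocol) sent to clients.
  no_advertise: true
}
```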
|
https://github.com/nats-io/nats-server/issues/1647
|
https://github.com/nats-io/nats-server/pull/1673
|
773ac7fbd1feadde12e8c0e0592ff1b1b31f7512
|
6d76156b12372948c9cef28a2ad36386c34c132c
| 2020-10-16T12:51:04Z |
go
| 2020-10-26T20:25:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,627 |
["server/consumer.go", "server/filestore.go", "server/stream.go", "test/jetstream_test.go"]
|
JetStream: AckExplicit message removed while durable consumer "offline"
|
Have 2 durable consumers on the same stream.
Both consumers filter on the same subject and use explicit ack.
Send 2 messages; each consumer receives and acks.
Stop the subscription on the 2nd durable.
Send 2 more messages.
First subscription receives and acks.
"Restart" 2nd subscription (update the 2nd durable with new delivery subject)
No message is received, the stream info shows that there are no message in the stream (message 3 and 4 have been removed).
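A reproduction sketch using the modern nats.go JetStream API (which postdates this report; names and subjects are placeholders, interest-based retention is assumed since acked messages are removed, and `UpdateConsumer` stands in for "update the durable with a new delivery subject"):
```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, _ := nc.JetStream()

	js.AddStream(&nats.StreamConfig{
		Name: "TEST", Subjects: []string{"test.*"}, Retention: nats.InterestPolicy,
	})

	// Two durable push consumers, same filter subject, explicit ack.
	cfg := func(durable, deliver string) *nats.ConsumerConfig {
		return &nats.ConsumerConfig{
			Durable:        durable,
			DeliverSubject: deliver,
			FilterSubject:  "test.a",
			AckPolicy:      nats.AckExplicitPolicy,
		}
	}
	js.AddConsumer("TEST", cfg("d1", "push.d1"))
	js.AddConsumer("TEST", cfg("d2", "push.d2"))

	sub1, _ := nc.SubscribeSync("push.d1")
	sub2, _ := nc.SubscribeSync("push.d2")

	recvAck := func(s *nats.Subscription) {
		m, err := s.NextMsg(2 * time.Second)
		if err != nil {
			log.Fatal(err)
		}
		m.AckSync()
	}

	// Send 2 messages; both consumers receive and ack.
	for i := 0; i < 2; i++ {
		js.Publish("test.a", []byte("msg"))
		recvAck(sub1)
		recvAck(sub2)
	}

	sub2.Unsubscribe() // stop the 2nd subscription; the durable remains

	// Send 2 more; only the 1st consumer acks them.
	for i := 0; i < 2; i++ {
		js.Publish("test.a", []byte("msg"))
		recvAck(sub1)
	}

	// "Restart" the 2nd durable on a new delivery subject.
	js.UpdateConsumer("TEST", cfg("d2", "push.d2.new"))
	sub2, _ = nc.SubscribeSync("push.d2.new")
	recvAck(sub2) // reported bug: times out; messages 3 and 4 were removed
}
```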
|
https://github.com/nats-io/nats-server/issues/1627
|
https://github.com/nats-io/nats-server/pull/1628
|
8989fb652463a5dbcb94ac0a7ecfd46884959321
|
6da5d2f4907a03c8ba26fc8b6ca2aed903ac80f8
| 2020-10-01T18:16:12Z |
go
| 2020-10-02T23:41:11Z |