status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,621 |
["server/consumer.go", "test/jetstream_test.go"]
|
JetStream: possible panic on consumer delete during server shutdown
|
This would produce the following stack:
```
panic: close of closed channel
goroutine 2360 [running]:
github.com/nats-io/nats-server/v2/server.(*Consumer).stop(0xc0003c4e00, 0x10101, 0x0, 0x0)
/Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/consumer.go:1722 +0xf5
github.com/nats-io/nats-server/v2/server.(*Consumer).Delete(...)
/Users/ivan/dev/go/src/github.com/nats-io/nats-server/server/consumer.go:1706
```
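For reference, a "close of closed channel" panic is the classic symptom of two shutdown paths racing to stop the same resource. A minimal, self-contained Go sketch of that race and a common guard (illustrative only, not the server's actual consumer code):
```go
package main

import (
	"fmt"
	"sync"
)

// consumer is a stand-in for a component whose stop() can be reached from two
// paths (an explicit delete and server shutdown). Names here are hypothetical.
type consumer struct {
	mu     sync.Mutex
	closed bool
	quitCh chan struct{}
}

// stop closes quitCh exactly once; without the closed flag, a second caller
// would hit "panic: close of closed channel".
func (c *consumer) stop() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.closed {
		return // already stopped by the other path
	}
	c.closed = true
	close(c.quitCh)
}

func main() {
	c := &consumer{quitCh: make(chan struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // simulate delete racing with shutdown
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.stop()
		}()
	}
	wg.Wait()
	fmt.Println("stopped without panicking")
}
```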
|
https://github.com/nats-io/nats-server/issues/1621
|
https://github.com/nats-io/nats-server/pull/1622
|
7f44d075f7ad27b89ca9014ce7aec7a4f437d2e6
|
f5934a8d31bda542a55e06b92734fc2fd842bf45
| 2020-09-29T15:56:24Z |
go
| 2020-09-29T17:52:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,619 |
["server/consumer.go", "test/jetstream_test.go"]
|
JetStream: AddConsumer() causes first message to be redelivered twice
|
- Create a durable consumer
- Send a message while the sub is running but don't ack it
- Stop the subscription
- Send another message while the sub is not running.
- Call AddConsumer() again with a different delivery subject
- The message number 1 is delivered twice, followed by message number 2.
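A rough reproduction sketch of those steps against a recent nats.go JetStream API; the original report predates this client API and the actual regression test lives in test/jetstream_test.go, so the stream/consumer names and exact behavior here are illustrative only:
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical stream used only for this repro.
	if _, err := js.AddStream(&nats.StreamConfig{Name: "REPRO", Subjects: []string{"foo"}}); err != nil {
		log.Fatal(err)
	}

	// Durable push consumer on a first inbox; receive but never ack.
	inbox1 := nats.NewInbox()
	sub1, _ := nc.SubscribeSync(inbox1)
	js.AddConsumer("REPRO", &nats.ConsumerConfig{
		Durable: "DUR", DeliverSubject: inbox1, AckPolicy: nats.AckExplicitPolicy,
	})
	js.Publish("foo", []byte("msg 1"))
	sub1.NextMsg(time.Second) // received, deliberately not acked
	sub1.Unsubscribe()        // stop the subscription

	js.Publish("foo", []byte("msg 2")) // sent while no subscription is active

	// Call AddConsumer again for the same durable with a new delivery subject.
	inbox2 := nats.NewInbox()
	sub2, _ := nc.SubscribeSync(inbox2)
	js.AddConsumer("REPRO", &nats.ConsumerConfig{
		Durable: "DUR", DeliverSubject: inbox2, AckPolicy: nats.AckExplicitPolicy,
	})
	for {
		m, err := sub2.NextMsg(2 * time.Second)
		if err != nil {
			break
		}
		fmt.Printf("got: %q\n", m.Data) // an affected server delivers "msg 1" twice
	}
}
```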
|
https://github.com/nats-io/nats-server/issues/1619
|
https://github.com/nats-io/nats-server/pull/1620
|
2792fd2ef1eb9cb821c759935b175d5124aa9336
|
467c0b265cc0c1ff9abd698ed2f7b20633b6a2c7
| 2020-09-28T18:28:47Z |
go
| 2020-09-29T00:06:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,613 |
["server/client_test.go", "server/events.go", "server/jwt_test.go"]
|
Race condition where account conns timer was disabled too soon
|
As a result, remote servers did not observe the remote connection count of the server where the issue occurred go down to 0.
|
https://github.com/nats-io/nats-server/issues/1613
|
https://github.com/nats-io/nats-server/pull/1614
|
63cc9a0936989425a35067d4da8cedcfda55d337
|
b19b2e17d518ca2b2257d56d7fd6f8e02590983f
| 2020-09-24T22:48:58Z |
go
| 2020-09-24T23:06:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,606 |
["server/leafnode.go", "server/leafnode_test.go", "test/leafnode_test.go"]
|
Possible Leafnode loop detected if soliciting server reconnects before accepting detects failure
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Latest from main branch: 1dd0c0666f1a910b461ca3b9dd33f2b9886cbb7e
#### OS/Container environment:
N/A
#### Steps or code to reproduce the issue:
Was observed by a user that had a router that would switch interfaces causing reconnects.
Otherwise, using a proxy that causes the soliciting side connection to be closed while the accepting side does not detect that failure right away.
#### Expected result:
The soliciting server should be able to reconnect as soon as it has detected that the previous connection was broken. The accepting server should not reject the reconnect as forming a loop.
#### Actual result:
The error "Loop detected" is reported and the soliciting server will be disconnected and attempt to reconnect after 30 seconds
|
https://github.com/nats-io/nats-server/issues/1606
|
https://github.com/nats-io/nats-server/pull/1607
|
1dd0c0666f1a910b461ca3b9dd33f2b9886cbb7e
|
9b7c472c093577c0a425e7b51fea76a07fb9e0dc
| 2020-09-22T22:58:10Z |
go
| 2020-09-22T23:27:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,604 |
["server/accounts.go", "server/events.go", "server/events_test.go", "server/jwt_test.go", "server/monitor.go", "server/monitor_test.go", "server/route.go", "server/server.go"]
|
expose account specific monitoring/debugging information as system service
|
This covers SUBSZ and CONNZ.
Also add an INFO to retrieve the state associated with the account struct.
|
https://github.com/nats-io/nats-server/issues/1604
|
https://github.com/nats-io/nats-server/pull/1611
|
12d84c646c798f56024707d5b614e430c71aed72
|
63cc9a0936989425a35067d4da8cedcfda55d337
| 2020-09-22T17:37:50Z |
go
| 2020-09-24T19:30:45Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,600 |
["internal/ldap/dn.go", "server/auth.go", "test/configs/certs/rdns/client-e.key", "test/configs/certs/rdns/client-e.pem", "test/configs/certs/rdns/client-f.key", "test/configs/certs/rdns/client-f.pem", "test/tls_test.go"]
|
Enhancement: Match server identities with certificate RDNs regardless of order
|
## Enhancement
Look at extending matching a certificate DN to an identity regardless of the certificate's individual RDN order. Some users have services that can change the order of RDNs when regenerating their certificates, which then creates a mismatch between the certificate DN and what was originally configured in the server.
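As a rough illustration of the requested matching behavior, the sketch below compares two DNs as an unordered set of trimmed RDNs using only the standard library; the real fix introduced an internal/ldap package and handles escaping and multi-valued RDNs, which are ignored here:
```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// normalizeDN splits a DN such as "CN=host, O=org, OU=ou" into its RDNs,
// trims whitespace, and sorts them so that order no longer matters.
func normalizeDN(dn string) string {
	parts := strings.Split(dn, ",")
	for i, p := range parts {
		parts[i] = strings.TrimSpace(p)
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

func sameIdentity(certDN, configuredDN string) bool {
	return normalizeDN(certDN) == normalizeDN(configuredDN)
}

func main() {
	cert := "CN=hostname, O=company name, OU=ou, L=location, ST=state, C=country"
	conf := "CN=hostname,OU=ou,O=company name,L=location,ST=state,C=country"
	fmt.Println(sameIdentity(cert, conf)) // true: same RDNs, different order and spacing
}
```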
|
https://github.com/nats-io/nats-server/issues/1600
|
https://github.com/nats-io/nats-server/pull/1732
|
2e26d9195eb155d01a5e5c4542448595c7e22c42
|
a9a6bdc04fc3283b2aeef37e643ba3db52069e89
| 2020-09-18T14:58:36Z |
go
| 2020-12-02T17:25:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,588 |
["server/dirstore_test.go", "server/events.go", "server/events_test.go", "server/monitor.go", "server/monitor_test.go", "server/opts.go", "server/reload.go"]
|
Add a tag map to the server configuration
|
## Feature Request
Add support for an arbitrary key/value map for configuration properties per server.
#### Use Case:
From an ops perspective it is useful to be able to associate meta-tags to an instance that would be queryable through the various mechanisms supported by the server.
The server name (which defaults to the server id) is one example of why you'd want to tag an instance with more human-friendly metadata. But there are many other examples.
- Tag the server with the latitude and longitude of its data center.
- Tag the server with its cloud, region, and availability zone.
- Tag the server with its environment (e.g., test or production).
Note that we wouldn't expect this tagging to be unmanageably large, nor to change often. Perhaps limits as to number of tags and key/value sizing would be reasonable.
#### Proposed Change:
Add a string to string key/value dictionary to the nats server configuration. Read it on startup, and perhaps re-read it during reload signal processing. Emit its values in, say, `varz`, and/or make it queryable through server messages.
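To make the idea concrete, here is a tiny sketch of a key/value tag map surfaced in a `varz`-style JSON payload; the field names and values are hypothetical and not the syntax that was eventually merged:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Varz is a stripped-down stand-in for the monitoring payload; only the
// hypothetical Tags field is of interest here.
type Varz struct {
	ServerName string            `json:"server_name"`
	Tags       map[string]string `json:"tags,omitempty"`
}

func main() {
	v := Varz{
		ServerName: "nats-us-east-1a",
		Tags: map[string]string{
			"region": "us-east-1",
			"az":     "us-east-1a",
			"env":    "production",
		},
	}
	out, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(out))
}
```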
#### Who Benefits From The Change(s)?
Anyone who needs more domain-specific tagging in a large cloud of nats servers.
#### Alternative Approaches
The configured server name could be used as a key or key fragment to some external key/value store. But that adds extra moving parts where none might be needed.
|
https://github.com/nats-io/nats-server/issues/1588
|
https://github.com/nats-io/nats-server/pull/1832
|
5d7e69b5402c09888b3dfd6787bc324fd34b74e5
|
9081646109b0597bd6c2ef4c5f9c8f9dc9c89200
| 2020-09-04T21:36:29Z |
go
| 2021-01-22T02:16:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,586 |
["server/client.go", "server/gateway.go", "server/leafnode.go", "server/opts.go", "server/route.go", "server/routes_test.go", "server/server.go", "server/server_test.go"]
|
Support DNS discovery of DNS names on routes
|
## Feature Request
Support resolving DNS names for routes and using resolved ips, excluding self ip.
#### Use Case:
Bootstrapping a NATS cluster using Consul
#### Proposed Change:
Resolve DNS names instead of directly calling net.DialTCP() for routes
#### Who Benefits From The Change(s)?
Environments that use DNS as service discovery, including Consul clusters.
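A rough sketch of the proposed resolve-and-filter step: look up the route host, drop addresses that belong to the local machine, and dial the rest. The DNS name and port below are made up, and the server's real implementation additionally handles configured ports, retries, and IPv6 scoping:
```go
package main

import (
	"fmt"
	"net"
)

// localIPs returns the set of addresses assigned to this host so that a server
// can avoid dialing itself when a DNS name resolves to the whole cluster.
func localIPs() map[string]bool {
	self := map[string]bool{}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return self
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			self[ipnet.IP.String()] = true
		}
	}
	return self
}

func main() {
	self := localIPs()
	// "nats.service.consul" is a hypothetical DNS name registered in Consul.
	ips, err := net.LookupHost("nats.service.consul")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		if self[ip] {
			continue // skip our own address
		}
		fmt.Println("would dial route:", net.JoinHostPort(ip, "6222"))
	}
}
```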
|
https://github.com/nats-io/nats-server/issues/1586
|
https://github.com/nats-io/nats-server/pull/1590
|
8fb4d2a0b1cf417f7f57f69ae1d4dc2e94371712
|
5cd11bf77d06b9b481217067427ca3d3a7e65d03
| 2020-09-04T14:54:02Z |
go
| 2020-09-08T20:55:05Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,580 |
["server/consumer.go", "server/stream.go", "test/jetstream_test.go", "test/leafnode_test.go"]
|
JetStream: durable restart resends last stored message
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
- From main, rev: d7946169454a724c7aab47d4aa301a5983811ad2
#### OS/Container environment:
- All
#### Steps or code to reproduce the issue:
- Create a stream:
```
&StreamConfig{
Name: streamName,
Storage: FileStorage, // same if it was mem storage
Subjects: []string{"foo"},
Retention: InterestPolicy,
}
```
- Add a durable consumer with this config:
```
&ConsumerConfig{
DeliverSubject: inbox,
DeliverPolicy: DeliverNew,
AckPolicy: AckNone,
}
```
- While the consumer is running send 2 messages M1 and M2, they are received.
- Stop the consumer, which means delete the subscription on the given inbox (but do not delete the durable JS consumer).
- Send 2 more messages M3 and M4
- Restart the durable with a new inbox attached
- It will receive M2, M3 and M4.
- You can now stop and restart it, it will keep receiving M4.
#### Expected result:
- Receive only M3 and M4 after the first restart, nothing after the second.
#### Actual result:
- The last stored message is always redelivered.
|
https://github.com/nats-io/nats-server/issues/1580
|
https://github.com/nats-io/nats-server/pull/1581
|
da546c2dce2f098015b82bc6d42d2b4bb0e92ab7
|
959b35a7753a295b42a316f60356fba38e70498a
| 2020-09-02T22:01:27Z |
go
| 2020-09-03T19:19:40Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,571 |
["internal/ldap/dn.go", "internal/ldap/dn_test.go", "server/auth.go", "test/configs/certs/rdns/client-d.key", "test/configs/certs/rdns/client-d.pem", "test/tls_test.go"]
|
RDN elements order shouldn't change when NATS server check for the authorization
|
The NATS server changes the sequence of the DN elements in the principal name of the cert during authorization. Also, it removes the <space> after the comma in the principal name string.
It should do an exact match of the DN from the cert string, and it shouldn't remove the spaces. Please find the examples below.
Actual cert: **CN=hostname, O=company name, OU=ou, L= location, ST=state, C=country**
Using certificate subject for auth **["CN=hostname,OU=ou,O=company name,L=location,ST=state,C=country"]**
In the example above, we can see that the OU and O elements got interchanged during authorization and the spaces were removed.
The same issue is observed with DC elements as well.
Please let me know if you need any more details from my side.
Thank you.
|
https://github.com/nats-io/nats-server/issues/1571
|
https://github.com/nats-io/nats-server/pull/1577
|
d7946169454a724c7aab47d4aa301a5983811ad2
|
da546c2dce2f098015b82bc6d42d2b4bb0e92ab7
| 2020-08-25T02:33:19Z |
go
| 2020-09-03T15:57:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,556 |
["server/client.go", "server/events.go", "server/events_test.go", "server/monitor.go", "server/monitor_test.go", "server/server.go"]
|
Client connections load balancing among nats cluster nodes
|
Hello! Currently the mechanism of load balancing of client connections to a nats cluster is based on random node selection. The problem (already discussed in some of the issues, like https://github.com/nats-io/nats-server/issues/1359) is when we restart the nats cluster. To be able to serve clients we restart the cluster one node after another, and this leads to approximately the following distribution (step-by-step example for a 3 node cluster):
1) original connection distribution is 24 / 24 / 24
2) after node1 restart : 0 / 36 / 36
3) after node2 restart: 18 / 0 / 54
4) after node3 restart: 45 / 27 / 0
As you can see, the original uniform distribution 1/3 | 1/3 | 1/3 tends towards 2/3 | 1/3 | 0. I suggest adding some modifications that will allow developers to get around this problem (for example, we use our nats cluster in 24/7 fashion and never stop it completely, but sometimes need to upgrade the server version or our hardware). There can be several solutions:
1) Implement clever load balancing (where a nats cluster node tells a client to try to connect to another known node which has fewer active clients)
2) Provide a command which can be sent to the cluster to cause it to disconnect existing clients - we can use it after a cluster restart is finished to redistribute client connections among the nodes.
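The drift in the numbers above is easy to verify: each restart empties one node and splits its clients evenly across the remaining ones. A short simulation of that arithmetic:
```go
package main

import "fmt"

// restart empties node i and spreads its clients evenly over the others,
// mirroring clients reconnecting at random to the surviving nodes.
func restart(conns []int, i int) {
	moved := conns[i]
	conns[i] = 0
	others := len(conns) - 1
	for j := range conns {
		if j != i {
			conns[j] += moved / others
		}
	}
}

func main() {
	conns := []int{24, 24, 24}
	for i := range conns {
		restart(conns, i)
		fmt.Printf("after node%d restart: %v\n", i+1, conns)
	}
	// Output:
	// after node1 restart: [0 36 36]
	// after node2 restart: [18 0 54]
	// after node3 restart: [45 27 0]
}
```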
|
https://github.com/nats-io/nats-server/issues/1556
|
https://github.com/nats-io/nats-server/pull/4298
|
37d3220dfba3b626291be7b50a2d495ff5f1ba77
|
d474e3b725e92daeca9cb5c08365bbf5372883d1
| 2020-08-12T06:02:02Z |
go
| 2023-08-11T08:39:42Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,543 |
["server/leafnode.go", "server/norace_test.go", "server/route.go", "server/routes_test.go", "server/server.go"]
|
Server can deadlock when solicited leafnode connection is closed on update
|
During the dynamic cluster name negotiation, a cluster name update can trigger a notification via the leafnode that could result in a cluster name conflict error, which causes the connection to be closed and leaves the server hanging.
```
goroutine 43 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0xc000186a00, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).removeLeafNodeConnection(0xc0000d0000, 0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:985 +0x167
github.com/nats-io/nats-server/v2/server.(*Server).removeClient(0xc0000d0000, 0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2326 +0x1d5
github.com/nats-io/nats-server/v2/server.(*client).teardownConn(0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:4079 +0x44e
github.com/nats-io/nats-server/v2/server.(*client).closeConnection(0xc000304000, 0x1d)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:3993 +0xd2
github.com/nats-io/nats-server/v2/server.(*Server).updatedSolicitedLeafnodes(0xc0000d0000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:1846 +0x101
github.com/nats-io/nats-server/v2/server.(*Server).setClusterName(0xc0000d0000, 0xc00018d100, 0x16)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:438 +0xd1
github.com/nats-io/nats-server/v2/server.(*client).processRouteInfo(0xc0001b7980, 0xc0001ded00)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:519 +0x271
github.com/nats-io/nats-server/v2/server.(*client).processInfo(0xc0001b7980, 0xc0001f6405, 0x1c2, 0x1fb, 0x103fe2f, 0xc000035000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:1382 +0x107
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc0001b7980, 0xc0001f6400, 0x1c9, 0x200, 0x1c9, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/parser.go:982 +0x4322
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc0001b7980, 0x0, 0x0, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:971 +0x50a
github.com/nats-io/nats-server/v2/server.(*Server).createRoute.func2()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:1394 +0x3b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
```
## Defect
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.2.0-beta.20 (not yet released)
#### Steps or code to reproduce the issue:
- Start a node that can be used for leafnode connections
```hcl
port = 4222
leaf { port = 7422 }
```
- Connect a couple of nodes as leafnodes that have it as a remote and that can also be used as a seed server
```hcl
server_name = X-A
port = 23234
cluster { port = 63234 }
leaf {
remotes [
{ url = "nats://localhost:7422" }
]
}
```
- Start another server that has the same leafnode remote connection and that joins the cluster with a static route. Sometimes this can end up in a deadlock.
```hcl
server_name = X-C
port = 23236
cluster {
port = 63236
routes [
"nats://localhost:63234"
]
}
leaf {
remotes [
{ url = "nats://localhost:7422" }
]
}
```
From the stack trace, it looks like the reason could be that on cluster name change the lock is held at:
https://github.com/nats-io/nats-server/blob/fbab1daf063e3ed8e26b89ee69f752148fefea3d/server/server.go#L430
and still held when this is called:
https://github.com/nats-io/nats-server/blob/4a4a36f9ec7efe674e9975b4a3e69cf042140902/server/leafnode.go#L985
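The pattern described, a method taking the server lock and then calling into a close path that tries to take the same non-reentrant lock again, is a textbook self-deadlock. A minimal stand-alone illustration (not the server's actual code; the timeout exists only so the demo exits):
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type server struct {
	mu sync.Mutex
}

// removeConnection mimics removeLeafNodeConnection: it needs the server lock.
func (s *server) removeConnection(done chan<- struct{}) {
	s.mu.Lock() // blocks forever if the caller already holds s.mu
	defer s.mu.Unlock()
	close(done)
}

// setClusterName mimics the faulty path: it takes the lock and, while still
// holding it, calls a routine that tries to take the same lock again.
func (s *server) setClusterName(done chan<- struct{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.removeConnection(done) // re-acquires s.mu -> self-deadlock
}

func main() {
	s := &server{}
	done := make(chan struct{})
	go s.setClusterName(done)
	select {
	case <-done:
		fmt.Println("finished (would require releasing the lock first)")
	case <-time.After(500 * time.Millisecond):
		fmt.Println("deadlocked: the lock is still held when removeConnection runs")
	}
}
```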
#### Expected result:
Server to start or exit without hanging.
#### Actual result:
Deadlock of the server; SIGKILL/SIGABRT has to be used to stop it.
Sample run:
```
nats-server-head -c /tmp/X-C.conf -DV
[73694] 2020/07/29 22:44:02.407035 [INF] Starting nats-server version 2.2.0-beta.20
[73694] 2020/07/29 22:44:02.407135 [DBG] Go build version go1.14.4
[73694] 2020/07/29 22:44:02.407138 [INF] Git commit [not set]
[73694] 2020/07/29 22:44:02.407140 [INF] Using configuration file: /tmp/X-C.conf
[73694] 2020/07/29 22:44:02.407157 [DBG] Created system account: "$SYS"
[73694] 2020/07/29 22:44:02.407665 [INF] Listening for client connections on 0.0.0.0:23236
[73694] 2020/07/29 22:44:02.407670 [INF] Server id is NBUSECKWQPSXH2AW6NXKZXHYL3AE2XQZUZB34DEJCZP32ZP2JLCNHVXQ
[73694] 2020/07/29 22:44:02.407672 [INF] Server is ready
[73694] 2020/07/29 22:44:02.407676 [DBG] Get non local IPs for "0.0.0.0"
[73694] 2020/07/29 22:44:02.407860 [DBG] ip=10.0.0.144
[73694] 2020/07/29 22:44:02.407864 [DBG] ip=2601:644:680:2530:145d:2149:ac90:d4bf
[73694] 2020/07/29 22:44:02.407866 [DBG] ip=2601:644:680:2530:9bc:a483:3169:1b71
[73694] 2020/07/29 22:44:02.407868 [DBG] ip=2601:644:680:2530::68c8
[73694] 2020/07/29 22:44:02.407988 [INF] Cluster name is NROFpxO6EYjh4vG82C55ed
[73694] 2020/07/29 22:44:02.407993 [WRN] Cluster name was dynamically generated, consider setting one
[73694] 2020/07/29 22:44:02.408046 [INF] Listening for route connections on 0.0.0.0:63236
[73694] 2020/07/29 22:44:02.408305 [DBG] Trying to connect to route on localhost:63234
[73694] 2020/07/29 22:44:02.409482 [DBG] Trying to connect as leafnode to remote server on "localhost:7422" ([::1]:7422)
[73694] 2020/07/29 22:44:02.409797 [DBG] [::1]:63234 - rid:2 - Route connect msg sent
[73694] 2020/07/29 22:44:02.409875 [INF] [::1]:63234 - rid:2 - Route connection created
[73694] 2020/07/29 22:44:02.409956 [DBG] [::1]:7422 - lid:3 - Remote leafnode connect msg sent
[73694] 2020/07/29 22:44:02.409964 [DBG] [::1]:7422 - lid:3 - Leafnode connection created
[73694] 2020/07/29 22:44:02.409984 [DBG] [::1]:7422 - lid:3 - Leafnode connection closed: Cluster Name Conflict
[73694] 2020/07/29 22:44:02.410009 [TRC] [::1]:7422 - lid:3 - ->> [LS+ $LDS.NROFpxO6EYjh4vG82C55fp]
[73694] 2020/07/29 22:44:03.598038 [DBG] [::1]:63234 - rid:2 - Router Ping Timer
[73694] 2020/07/29 22:44:03.598080 [TRC] [::1]:63234 - rid:2 - ->> [PING]
[73694] 2020/07/29 22:44:03.598131 [ERR] [::1]:63234 - rid:2 - Error flushing: writev tcp [::1]:59910->[::1]:63234: writev: broken pipe
[73694] 2020/07/29 22:44:03.598142 [INF] [::1]:63234 - rid:2 - Router connection closed: Write Error
^C[73694] 2020/07/29 22:44:08.834187 [DBG] Trapped "interrupt" signal
^C^C^C^C
SIGABRT: abort
PC=0x7fff6c19386a m=0 sigcode=0
goroutine 0 [idle]:
runtime.pthread_cond_wait(0x1ad1ce8, 0x1ad1ca8, 0x7ffe00000000)
/usr/local/go/src/runtime/sys_darwin.go:390 +0x39
runtime.semasleep(0xffffffffffffffff, 0x7ffeefbff760)
/usr/local/go/src/runtime/os_darwin.go:63 +0x85
runtime.notesleep(0x1ad1aa8)
/usr/local/go/src/runtime/lock_sema.go:173 +0xe0
runtime.stoplockedm()
/usr/local/go/src/runtime/proc.go:1977 +0x88
runtime.schedule()
/usr/local/go/src/runtime/proc.go:2460 +0x4a6
runtime.park_m(0xc000082480)
/usr/local/go/src/runtime/proc.go:2696 +0x9d
runtime.mcall(0x1066576)
/usr/local/go/src/runtime/asm_amd64.s:318 +0x5b
goroutine 1 [chan receive]:
github.com/nats-io/nats-server/v2/server.(*Server).WaitForShutdown(...)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1598
main.main()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/main.go:118 +0x15f
goroutine 34 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x1ad0900, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).eventsRunning(0xc0000d0000, 0x1ac1820)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/events.go:384 +0xae
github.com/nats-io/nats-server/v2/server.(*Server).shutdownEventing(0xc0000d0000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/events.go:760 +0x3c
github.com/nats-io/nats-server/v2/server.(*Server).Shutdown(0xc0000d0000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1459 +0x43
github.com/nats-io/nats-server/v2/server.(*Server).handleSignals.func1(0xc00008e4e0, 0xc0000d0000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/signal.go:52 +0x30f
created by github.com/nats-io/nats-server/v2/server.(*Server).handleSignals
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/signal.go:45 +0x11a
goroutine 18 [syscall]:
os/signal.signal_recv(0x170f340)
/usr/local/go/src/runtime/sigqueue.go:144 +0x96
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
/usr/local/go/src/os/signal/signal.go:127 +0x44
goroutine 35 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x0, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).eventsRunning(0xc0000d0000, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/events.go:384 +0xae
github.com/nats-io/nats-server/v2/server.(*Server).internalSendLoop(0xc0000d0000, 0xc0001ae0f0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/events.go:256 +0xc2
created by github.com/nats-io/nats-server/v2/server.(*Server).setSystemAccount
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1001 +0x30a
goroutine 36 [select]:
github.com/nats-io/nats-server/v2/server.(*Server).startGWReplyMapExpiration.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/gateway.go:2999 +0x15b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 37 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x0, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).removeLeafNodeConnection(0xc0000d0000, 0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:985 +0x167
github.com/nats-io/nats-server/v2/server.(*Server).createLeafNode(0xc0000d0000, 0x1719680, 0xc0000960a8, 0xc00019e100, 0x3b9aca00)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:814 +0xadd
github.com/nats-io/nats-server/v2/server.(*Server).connectToRemoteLeafNode(0xc0000d0000, 0xc00019e100, 0x1)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:316 +0x6f2
github.com/nats-io/nats-server/v2/server.(*Server).solicitLeafNodeRemotes.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:109 +0x38
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 11 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x1e40100, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).globalAccount(0xc0000d0000, 0xc00016c000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:521 +0x8c
github.com/nats-io/nats-server/v2/server.(*Server).createClient(0xc0000d0000, 0x1719680, 0xc000010070, 0x0, 0xc0001d77d0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1990 +0x1d5
github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop.func2(0x1719680, 0xc000010070)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1658 +0x47
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1692 +0x40
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 39 [IO wait]:
internal/poll.runtime_pollWait(0x7b15f18, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00019e198, 0x72, 0x0, 0x0, 0x16312cb)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00019e180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1d4
net.(*netFD).accept(0xc00019e180, 0xc000051ec8, 0x1065ac0, 0xc000051f10)
/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc0001846a0, 0x14d5961, 0xc000000000, 0xc00007f710)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0001846a0, 0xc00007f710, 0xc000010001, 0x0, 0x0)
/usr/local/go/src/net/tcpsock.go:261 +0x64
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections(0xc0000d0000, 0x1712300, 0xc0001846a0, 0x1630fd1, 0x6, 0xc000186790, 0xc0001867a0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1680 +0x42
created by github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1658 +0x947
goroutine 40 [IO wait]:
internal/poll.runtime_pollWait(0x7b15e38, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00019fa18, 0x72, 0x0, 0x0, 0x16312cb)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00019fa00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1d4
net.(*netFD).accept(0xc00019fa00, 0x1007ecb, 0xc00008e780, 0xc0001be8f8)
/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc0001847e0, 0xc00008e780, 0x0, 0xc000192780)
/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0001847e0, 0x0, 0xc0001d5f78, 0x1007b1b, 0xc0001be8a0)
/usr/local/go/src/net/tcpsock.go:261 +0x64
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections(0xc0000d0000, 0x1712300, 0xc0001847e0, 0x16305aa, 0x5, 0xc000186970, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1680 +0x42
created by github.com/nats-io/nats-server/v2/server.(*Server).startRouteAcceptLoop
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:1753 +0x90a
goroutine 45 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0xc00004db00, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).accountDisconnectEvent(0xc0000d0000, 0xc000304000, 0xbfc0b56898702a20, 0x6017e1, 0x1ad09e0, 0x163ada3, 0x15)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/events.go:1098 +0x708
github.com/nats-io/nats-server/v2/server.(*Server).saveClosedClient(0xc0000d0000, 0xc000304000, 0x1719680, 0xc0000960a8, 0x1d)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2169 +0xe7
created by github.com/nats-io/nats-server/v2/server.(*client).markConnAsClosed
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:1315 +0x215
goroutine 43 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0xc000186a00, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).removeLeafNodeConnection(0xc0000d0000, 0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:985 +0x167
github.com/nats-io/nats-server/v2/server.(*Server).removeClient(0xc0000d0000, 0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2326 +0x1d5
github.com/nats-io/nats-server/v2/server.(*client).teardownConn(0xc000304000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:4079 +0x44e
github.com/nats-io/nats-server/v2/server.(*client).closeConnection(0xc000304000, 0x1d)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:3993 +0xd2
github.com/nats-io/nats-server/v2/server.(*Server).updatedSolicitedLeafnodes(0xc0000d0000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/leafnode.go:1846 +0x101
github.com/nats-io/nats-server/v2/server.(*Server).setClusterName(0xc0000d0000, 0xc00018d100, 0x16)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:438 +0xd1
github.com/nats-io/nats-server/v2/server.(*client).processRouteInfo(0xc0001b7980, 0xc0001ded00)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:519 +0x271
github.com/nats-io/nats-server/v2/server.(*client).processInfo(0xc0001b7980, 0xc0001f6405, 0x1c2, 0x1fb, 0x103fe2f, 0xc000035000)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:1382 +0x107
github.com/nats-io/nats-server/v2/server.(*client).parse(0xc0001b7980, 0xc0001f6400, 0x1c9, 0x200, 0x1c9, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/parser.go:982 +0x4322
github.com/nats-io/nats-server/v2/server.(*client).readLoop(0xc0001b7980, 0x0, 0x0, 0x0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:971 +0x50a
github.com/nats-io/nats-server/v2/server.(*Server).createRoute.func2()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:1394 +0x3b
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 44 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0xc000281a00, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).removeRoute(0xc0000d0000, 0xc0001b7980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:1985 +0x336
github.com/nats-io/nats-server/v2/server.(*Server).removeClient(0xc0000d0000, 0xc0001b7980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2322 +0x1ad
github.com/nats-io/nats-server/v2/server.(*client).teardownConn(0xc0001b7980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:4079 +0x44e
github.com/nats-io/nats-server/v2/server.(*client).writeLoop(0xc0001b7980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/client.go:839 +0x204
github.com/nats-io/nats-server/v2/server.(*Server).createRoute.func3()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/route.go:1397 +0x2a
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 12 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x1e40100, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).globalAccount(0xc0000d0000, 0xc00016d980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:521 +0x8c
github.com/nats-io/nats-server/v2/server.(*Server).createClient(0xc0000d0000, 0x1719680, 0xc000010078, 0x0, 0x103f176)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1990 +0x1d5
github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop.func2(0x1719680, 0xc000010078)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1658 +0x47
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1692 +0x40
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 48 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x1e40e00, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).globalAccount(0xc0000d0000, 0xc0001b9300)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:521 +0x8c
github.com/nats-io/nats-server/v2/server.(*Server).createClient(0xc0000d0000, 0x1719680, 0xc0001880d0, 0x0, 0x165f770)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1990 +0x1d5
github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop.func2(0x1719680, 0xc0001880d0)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1658 +0x47
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1692 +0x40
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
goroutine 22 [semacquire]:
sync.runtime_SemacquireMutex(0xc0000d0034, 0x1e40700, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0000d0030)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
github.com/nats-io/nats-server/v2/server.(*Server).globalAccount(0xc0000d0000, 0xc000305980)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:521 +0x8c
github.com/nats-io/nats-server/v2/server.(*Server).createClient(0xc0000d0000, 0x1719680, 0xc0000960d8, 0x0, 0x1068761)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1990 +0x1d5
github.com/nats-io/nats-server/v2/server.(*Server).AcceptLoop.func2(0x1719680, 0xc0000960d8)
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1658 +0x47
github.com/nats-io/nats-server/v2/server.(*Server).acceptConnections.func1()
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:1692 +0x40
created by github.com/nats-io/nats-server/v2/server.(*Server).startGoRoutine
/Users/wallyqs/repos/k8s-dev/src/github.com/nats-io/nats-server/server/server.go:2523 +0xc1
rax 0x104
rbx 0x2
rcx 0x7ffeefbff598
rdx 0x1b00
rdi 0x1ad1ce8
rsi 0x200100002100
rbp 0x7ffeefbff620
rsp 0x7ffeefbff598
r8 0x0
r9 0xa0
r10 0x0
r11 0x202
r12 0x1ad1ce8
r13 0x16
r14 0x200100002100
r15 0x7c5b5c0
rip 0x7fff6c19386a
rflags 0x203
cs 0x7
fs 0x0
gs 0x0
```
|
https://github.com/nats-io/nats-server/issues/1543
|
https://github.com/nats-io/nats-server/pull/1545
|
60f9256d400577d3fed8dd3ac8a03a6e88ce6fd5
|
33d41c7447b55bd65b2b0d6a62fb81f24c93570e
| 2020-07-30T06:03:04Z |
go
| 2020-07-31T15:11:31Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,532 |
["server/accounts.go", "server/jwt_test.go", "server/route.go", "server/server.go"]
|
Silent subscription loss on route when the account resolver fails to fetch the account
|
## Defect
This happens in operator mode with a non memory based resolver.
When a subscriber connects to a server in a cluster,
the server it is directly connected to will fetch the account jwt.
The subscription is forwarded to the other servers in the cluster.
Triggered by this subscription, each server will fetch the corresponding account jwt as well.
**If that fetch fails, the subscription is dropped.**
Later, when a publisher connected to one of these servers sends a message to that subject, that message does not make it to the subscriber.
I created a unit test to demonstrate the issue in master.
I could not find a mechanism to recover from that.
If the fetch is slow, it will also block all other incoming traffic on that route.
In addition to that, an inline fetch would impede receiving a nats based fetch response.
|
https://github.com/nats-io/nats-server/issues/1532
|
https://github.com/nats-io/nats-server/pull/1538
|
b71ba6e22383ad1ebdf6e25d6096b6ffdd376f4f
|
fbab1daf063e3ed8e26b89ee69f752148fefea3d
| 2020-07-24T16:09:02Z |
go
| 2020-07-27T23:29:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,529 |
["server/opts.go", "server/server_test.go", "server/websocket.go"]
|
Allow non-secure WebSocket Transport
|
## Feature Request
Allow non-TLS websocket configuration
#### Use Case:
One of the use cases I am trying out is publishing real-time market data to HTML5 apps running under Electron, e.g. a message size of 1k per record, 1000 records, and each record updating 4 times per second.
WSS will spend considerable time in decrypting.
#### Proposed Change:
Allow a flag in the webSocket configuration block to bypass TLS
#### Who Benefits From The Change(s)?
HTML5 clients, e.g. Electron desktop apps running in a secure network intranet where TLS may be optional
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/1529
|
https://github.com/nats-io/nats-server/pull/1542
|
6e1a892740d869b95b0655b9d4adb9bd3a283768
|
60f9256d400577d3fed8dd3ac8a03a6e88ce6fd5
| 2020-07-23T17:48:19Z |
go
| 2020-07-30T00:11:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,527 |
["server/client.go", "server/events_test.go"]
|
Queue subscriptions for system events do not work
|
## Defect
When subscribing to `$SYS.>` using queue subscriptions, no messages are received.
Started at the same time this nats.go example:
`./nats-qsub -s localhost:4222 -creds ~/test/SU.creds "\$SYS.>" q`
will yield no result, while this:
`./nats-sub -s localhost:4222 -creds ~/test/SU.creds "\$SYS.>"`
does.
A screenshot shows both side by side (left: qsub, right: sub).
|
https://github.com/nats-io/nats-server/issues/1527
|
https://github.com/nats-io/nats-server/pull/1530
|
9514576b727bf61982dd88330be281eaebb74acd
|
aa67270ea53c8315860a4e9eb5fa83470c9367a5
| 2020-07-23T00:41:24Z |
go
| 2020-07-23T18:40:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,515 |
["server/gateway.go", "server/leafnode.go", "server/route.go", "server/server.go", "server/server_test.go", "server/util.go", "server/websocket.go", "test/gateway_test.go", "test/leafnode_test.go"]
|
missing client connect url after reconnect of restarted server is processed before disconnect.
|
## Defect
Added a unit test #1516 to illustrate the issue.
Basically we use a map of urls; when that kind of re-ordering happens we forget about a url.
This can also be reproduced with the following setup.
Server_A:
client_advertise URLA
Server_B and Server_C have the same value in client_advertise:
client_advertise URLBC
When starting all 3 servers and connecting via telnet to server_A,
connect_urls will contain URLA and URLBC.
If Server_B OR Server_C is shut down, server_A will only return URLA.
This can be solved by ref counting the url entries in the map or using a list instead.
Another oddity is that if all servers provide the same value for client_advertise, the list will contain two identical entries.
One for the current server, one for the other server.
The unit test and the example here are a bit contrived.
However, I only looked at this because a user reported that connect_urls was off.
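A small sketch of the ref-counting idea mentioned above, where the same advertised URL can be contributed by more than one server and should only disappear once no server advertises it (illustrative only):
```go
package main

import "fmt"

// urlSet counts how many servers advertise each URL, so removing one server's
// contribution only drops the URL once nobody advertises it anymore.
type urlSet map[string]int

func (u urlSet) add(url string) { u[url]++ }

func (u urlSet) remove(url string) {
	if u[url] > 1 {
		u[url]--
		return
	}
	delete(u, url)
}

func main() {
	urls := urlSet{}
	urls.add("URLA")  // Server_A
	urls.add("URLBC") // Server_B
	urls.add("URLBC") // Server_C advertises the same value

	urls.remove("URLBC") // Server_B shuts down
	for url := range urls {
		fmt.Println("connect_urls still contains:", url)
	}
	// Prints URLA and URLBC: the shared URL survives because Server_C still advertises it.
}
```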
|
https://github.com/nats-io/nats-server/issues/1515
|
https://github.com/nats-io/nats-server/pull/1517
|
8312f3fb6d34135ca3b65955950f60e492d19342
|
4a4a36f9ec7efe674e9975b4a3e69cf042140902
| 2020-07-14T23:56:49Z |
go
| 2020-07-16T18:56:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,509 |
["server/reload.go", "server/reload_test.go"]
|
Config reload fails if leafnode remotes are defined
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
2.2.0-beta.19
#### Steps or code to reproduce the issue:
```hcl
port: 4222
http: 8222
cluster {
port: 6222
routes: []
}
leafnodes {
remotes: [
{
url: "nats://foo:bar@localhost:7422"
}
]
}
```
#### Expected result:
No op config reload
#### Actual result:
```
[57372] 2020/07/09 10:20:16.654872 [ERR] Failed to reload server configuration: config reload not supported for LeafNode: old={ 0 [] 0 <nil> 0 false false 1s [0xc0000ca240] <nil> 0 0}, new={ 0 [] 0 <nil> 0 false false 1s [0xc000302090] <nil> 0 0}
```
|
https://github.com/nats-io/nats-server/issues/1509
|
https://github.com/nats-io/nats-server/pull/1510
|
29d5fdc4832617f8001c6a201f1854b50eadfee5
|
6ba1559598fa8611f554a21653fbf1554f0beb56
| 2020-07-09T17:20:44Z |
go
| 2020-07-10T02:46:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,505 |
["server/client.go", "server/client_test.go"]
|
Invalid tracing with some IPv6 addresses
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
v2.1.7 and master
#### OS/Container environment:
Any
#### Steps or code to reproduce the issue:
Make sure that your client has an IPv6 address that contains "%" characters, such as:
```
fe80::111:2222:3333:4444%uth0
```
then run the server with -DV and publish a message on foo:
#### Expected result:
```
[640] 2020/07/08 02:26:19.189311 [TRC] [fe80::111:2222:3333:4444%uth0]:57576 - cid:4 - <<- PUB foo 2
```
#### Actual result:
```
[640] 2020/07/08 02:26:19.189311 [TRC] [fe80::111:2222:3333:4444[%!e(string=PUB) %!e(string=undefined 2)]th0]:57576 - cid:4 - <<- %!s(MISSING)
```
This was reported by @aricart
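For context, the mangled output appears because the "%" in the client's address ends up inside the format string itself, where it is parsed as formatting verbs. A tiny stand-alone illustration of the failure mode and the usual fix (not the server's actual tracing code):
```go
package main

import "fmt"

func main() {
	addr := "fe80::111:2222:3333:4444%uth0" // zone identifier contains '%'

	// Buggy: the address becomes part of the format string, so "%u..." is
	// parsed as an (unknown) verb that consumes the argument, and the trailing
	// %s is left with nothing, producing %!s(MISSING).
	bad := fmt.Sprintf("["+addr+"]:57576 - cid:4 - <<- %s", "PUB foo 2")
	fmt.Println(bad)

	// Fixed: keep the format string constant and pass the address as a value.
	good := fmt.Sprintf("[%s]:57576 - cid:4 - <<- %s", addr, "PUB foo 2")
	fmt.Println(good)
}
```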
|
https://github.com/nats-io/nats-server/issues/1505
|
https://github.com/nats-io/nats-server/pull/1506
|
84a0229aa3e8cf99c796769cf4247e2a792d9497
|
bfe4eb68b21b9c42304fe0e0c975fe8beb954e46
| 2020-07-08T16:16:11Z |
go
| 2020-07-08T20:59:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,502 |
["server/consumer.go", "test/jetstream_test.go"]
|
Bug when consuming msgs but not ack-ing using jetstream
|
In order to reproduce this issue you need to run this program twice. On the first run you can terminate it after it publishes 2 msgs; on the second run you have to wait ~1-2 min for the CPU to spike, and that spike is coming from the nats server.
The nats server and jsm.go are the latest versions, which I just pulled today.
As you can see in the program I'm not ack-ing (m.Respond(nil) is commented out); if I do, then everything works.
```
package main
import (
"context"
"fmt"
"log"
"time"
"github.com/nats-io/jsm.go"
"github.com/nats-io/nats.go"
)
func main() {
nc, err := nats.Connect("localhost:4222")
if err != nil {
log.Fatal(err)
}
defer nc.Close()
jsm.SetConnection(nc)
stream, err := jsm.NewStreamFromDefault(
"ORDERS",
jsm.DefaultStream,
jsm.StreamConnection(jsm.WithConnection(nc)),
jsm.FileStorage(),
jsm.MaxAge(24*time.Hour),
jsm.Subjects("ORDERS.*"),
)
if err != nil {
log.Fatal(err)
}
defer stream.Delete()
nc.Publish("ORDERS.test", []byte("something"))
nc.Publish("ORDERS.test", []byte("something"))
ctx, cancel := context.WithTimeout(context.Background(), time.Hour*1)
defer cancel()
c, err := jsm.NewConsumer("ORDERS",
jsm.DurableName("PULL"),
jsm.FilterStreamBySubject("ORDERS.test"),
jsm.StartAtSequence(1),
)
if err != nil {
log.Fatal(err)
}
readPullMsg(c, ctx)
if err := c.Delete(); err != nil {
log.Fatal(err)
}
}
func readPullMsg(c *jsm.Consumer, ctx context.Context) {
state, err := c.DeliveredState()
if err != nil {
fmt.Println(err)
return
}
pendingMsgs, err := c.PendingMessageCount()
if err != nil {
fmt.Println(err)
return
}
fmt.Println("durableName:", c.DurableName(), "consumerSeq:", state.ConsumerSeq, "streamSeq:", state.StreamSeq, "startSequence:", c.StartSequence(), "pendingMsgs:", pendingMsgs)
for {
m, err := c.NextMsg(jsm.WithContext(ctx))
if err != nil {
log.Println(err)
break
}
fmt.Printf("msg data: %s\n", m.Data)
//m.Respond(nil)
}
}
```
|
https://github.com/nats-io/nats-server/issues/1502
|
https://github.com/nats-io/nats-server/pull/1503
|
5cf858e4e45c6935ab7976d7642022672991084b
|
735e06b9b94cde17420b8c5dc8edb234bc4fe86c
| 2020-07-05T01:41:02Z |
go
| 2020-07-07T12:38:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,492 |
["server/reload.go", "server/reload_test.go"]
|
Cluster name is not updated on reload
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
- master (2.2.X)
#### Steps or code to reproduce the issue:
Starting with
```hcl
port = 4222
cluster {
name = "a"
port = 6222
}
http = 8222
```
If then updating the name to:
```hcl
port = 4222
cluster {
name = "b"
port = 6222
}
http = 8222
```
#### Expected/Actual result:
Cluster name should be `b` but it is still `a` after the reload:
```sh
curl http://localhost:8222/varz | grep 'cluster\"' -A5
"cluster": {
"name": "a",
"addr": "0.0.0.0",
"cluster_port": 6222,
"auth_timeout": 1
},
```
|
https://github.com/nats-io/nats-server/issues/1492
|
https://github.com/nats-io/nats-server/pull/1494
|
1a590eea78efead81a162a7ab465c6c4e6f3ec40
|
ae93c5cfd03a309270f67d56049bcd4462f4c536
| 2020-06-25T23:05:06Z |
go
| 2020-06-26T17:31:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,486 |
["server/client.go", "server/filestore_test.go", "server/gateway.go", "test/services_test.go"]
|
`allow_responses` does not apply across accounts
|
## Defect
#### Versions of `nats-server` and affected client libraries used:
- nats-server v2.2.0
#### Steps or code to reproduce the issue:
With the configuration:
```hcl
accounts {
Foo {
users = [
{ user = "a", pass = "a",
permissions = {
subscribe = {
allow = [ "foo.api" ]
}
publish = {
allow = [ "foo.stream" ]
},
allow_responses = true
}
},
{ user = "aa", pass = "aa"}
]
exports [
{ service = "foo.api" }
{ stream = "foo.stream"}
]
}
Bar {
users = [
{ user = "b", pass = "b" }
]
imports [
{ service = { account: "Foo", subject: "foo.api" }, to: "foo.api" }
]
}
}
```
```
nats-rply -s nats://a:a@localhost:4222 foo.api hi
# User in same account is fine
nats-req -s nats://aa:aa@localhost:4222 foo.api ''
Published [foo.api] : ''
Received [_INBOX.v6foEFjf5dQwoUgQjGpGfS.73wMBgQ6] : 'hi'
# User in remote account cannot receive message
nats-req -s nats://b:b@localhost:4222 foo.api ''
nats: timeout for request
```
#### Expected result:
Response from service in account Foo
#### Actual result:
```
[12299] 2020/06/18 17:33:08.539700 [TRC] 127.0.0.1:55581 - cid:3 - ->> [MSG foo.api 1 _R_.rdrfkQ.UFskRw 0]
[12299] 2020/06/18 17:33:08.539887 [TRC] 127.0.0.1:55581 - cid:3 - <<- [PUB _R_.rdrfkQ.UFskRw 2]
[12299] 2020/06/18 17:33:08.539898 [TRC] 127.0.0.1:55581 - cid:3 - <<- MSG_PAYLOAD: ["hi"]
[12299] 2020/06/18 17:33:08.539913 [TRC] 127.0.0.1:55581 - cid:3 - ->> [-ERR Permissions Violation for Publish to "_R_.rdrfkQ.UFskRw"]
```
|
https://github.com/nats-io/nats-server/issues/1486
|
https://github.com/nats-io/nats-server/pull/1487
|
f13c47487057f57e55781e9e88f36a4aa0b78d68
|
f9fd8bafff41cc50135ef19b20095f78cf7f7305
| 2020-06-19T00:33:42Z |
go
| 2020-06-19T05:43:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,482 |
["server/filestore.go", "server/filestore_test.go", "server/memstore.go"]
|
TestFileStoreAgeLimitRecovery failed
|
## Defect
When running nats server unit tests I ran into the failure below.
I reran unit tests a few more times and did not run into this again.
```
=== RUN TestFileStoreAgeLimitRecovery
--- FAIL: TestFileStoreAgeLimitRecovery (0.04s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x171ab8e]
goroutine 8003 [running]:
testing.tRunner.func1(0xc0005ca100)
/usr/local/go/src/testing/testing.go:874 +0x69f
panic(0x1bb5ee0, 0x23c8190)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/nats-io/nats-server/v2/server.(*fileStore).deleteMsgFromBlock(0xc0005cc540, 0xc0002c1a40, 0x62, 0xc0008b2f60, 0x0)
/Users/matthiashanel/repos/nats-server/server/filestore.go:781 +0x11e
github.com/nats-io/nats-server/v2/server.(*fileStore).removeMsg(0xc0005cc540, 0x62, 0x0, 0x0, 0x62, 0x0)
/Users/matthiashanel/repos/nats-server/server/filestore.go:717 +0x100
github.com/nats-io/nats-server/v2/server.(*fileStore).deleteFirstMsg(0xc0005cc540, 0x62, 0xc0008b2ea0, 0x0)
/Users/matthiashanel/repos/nats-server/server/filestore.go:675 +0x85
github.com/nats-io/nats-server/v2/server.(*fileStore).expireMsgs(0xc0005cc540)
/Users/matthiashanel/repos/nats-server/server/filestore.go:897 +0x131
github.com/nats-io/nats-server/v2/server.(*fileStore).expireMsgsLocked(0xc0005cc540)
/Users/matthiashanel/repos/nats-server/server/filestore.go:885 +0x49
github.com/nats-io/nats-server/v2/server.(*fileStore).recoverMsgs(0xc0005cc540, 0x0, 0x0)
/Users/matthiashanel/repos/nats-server/server/filestore.go:467 +0x61c
github.com/nats-io/nats-server/v2/server.newFileStoreWithCreated(0xc0003c3590, 0x43, 0x4000000, 0x12a05f200, 0x2540be400, 0x1c7a4b4, 0x3, 0x0, 0x0, 0x0, ...)
/Users/matthiashanel/repos/nats-server/server/filestore.go:239 +0xa2c
github.com/nats-io/nats-server/v2/server.newFileStore(0xc0003c3590, 0x43, 0x0, 0x0, 0x0, 0x1c7a4b4, 0x3, 0x0, 0x0, 0x0, ...)
/Users/matthiashanel/repos/nats-server/server/filestore.go:175 +0x129
github.com/nats-io/nats-server/v2/server.TestFileStoreAgeLimitRecovery(0xc0005ca100)
/Users/matthiashanel/repos/nats-server/server/filestore_test.go:628 +0x582
testing.tRunner(0xc0005ca100, 0x1cd4e38)
/usr/local/go/src/testing/testing.go:909 +0x19a
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x652
FAIL github.com/nats-io/nats-server/v2/server 35.451s
```
I ran the unit tests to test my filter change. This is the last commit in the main branch:
```
commit c8b4b2efa39b38ac2a5a3dfd7b30257189109e5c (origin/master, origin/HEAD, master)
Date: Mon Jun 15 10:39:07 2020 -0700
```
|
https://github.com/nats-io/nats-server/issues/1482
|
https://github.com/nats-io/nats-server/pull/1483
|
05fa11ba2f79626164a04e2a26e611f116d45eae
|
9bf85aca2b2f67f13b9e9047c573667c2b4dd1cf
| 2020-06-17T20:07:21Z |
go
| 2020-06-18T21:07:33Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,459 |
["server/config_check_test.go", "server/const.go", "server/opts.go", "server/opts_test.go", "server/server.go", "server/server_test.go"]
|
Provide a configuration option for the lame duck mode grace period
|
## Feature Request
Provide a configuration option for the lame duck mode grace period.
#### Use Case:
Applications might need more than 10s to cleanup or migrate during this period or there's a manual process involved in server rollouts, upgrades, etc.
#### Proposed Change:
Add something like a `lame_duck_grace_period=<duration>` option in the configuration file.
|
https://github.com/nats-io/nats-server/issues/1459
|
https://github.com/nats-io/nats-server/pull/1460
|
ede25f65a620cb00c20b0b335aa2dda2a35325aa
|
6413fcd9c0fda6245faa3e6a2fe79ee0b26731f0
| 2020-06-08T15:39:04Z |
go
| 2020-06-08T18:07:47Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,451 |
["server/gateway_test.go", "server/server.go", "server/server_test.go"]
|
Print configuration file used in startup banner
|
## Feature Request
Report the name of the configuration file being loaded by the server.
#### Use Case:
One of the docker images is incorrectly not using the provided configuration file by default.
Having the config file in use printed would have helped users notice this issue faster.
See https://github.com/nats-io/nats-docker/issues/36
#### Proposed Change:
If a configuration file is specified, print it in the banner.
#### Who Benefits From The Change(s)?
People deploying a NATS Server who want to make sure that the proper configuration file is being used.
#### Alternative Approaches
N/A
|
https://github.com/nats-io/nats-server/issues/1451
|
https://github.com/nats-io/nats-server/pull/1473
|
02eb98c3c605de98f1cea2ac46f07d1f40c329e4
|
7545ff1cef8b6bef52a7d10e5079b588e8a8f8db
| 2020-06-04T19:21:34Z |
go
| 2020-06-12T19:51:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,436 |
["server/consumer.go", "test/jetstream_test.go"]
|
AckAll and AckWait
|
Need to double check that AckAll properly responds to AckWait and redelivery.
|
https://github.com/nats-io/nats-server/issues/1436
|
https://github.com/nats-io/nats-server/pull/1437
|
e584d4efee1c294c642bc87e2f3622e2463ee9d6
|
78aa2488f4a398b1afefddd670f4d98ca8de5ee5
| 2020-05-30T18:40:50Z |
go
| 2020-05-31T20:58:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,421 |
["server/client.go", "server/server_test.go", "test/leafnode_test.go"]
|
QueueSubscriptions stop working with leafnode
|
If I extend my cluster with a LeafNode and then connect QueueSubscribers to the LeafNode everything works fine at first.
If I stop one of the subscribers (ctrl+c) all other subscribers stop receiving data. As soon as I restart any QueueSubscriber, they all work again.
I don't see this behavior without the leafnode.
Here is my little test case:
```go
package main
import (
"fmt"
"log"
nats "github.com/nats-io/nats.go"
)
func main() {
nc, err := nats.Connect(nats.DefaultURL)
if err != nil {
log.Fatal(err)
}
defer nc.Close()
// Channel Subscriber
ch := make(chan *nats.Msg)
_, err = nc.ChanQueueSubscribe("queueTest1", "q1", ch)
for v := range ch {
fmt.Println(string(v.Data))
}
}
```
|
https://github.com/nats-io/nats-server/issues/1421
|
https://github.com/nats-io/nats-server/pull/1424
|
5512b45cba303ba2f31fbcd062d663fa01f9aca7
|
5d949bf1ea3b9aa29ebd9be25044cae0910dc320
| 2020-05-27T20:37:15Z |
go
| 2020-05-28T17:16:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,384 |
["server/client_test.go", "server/server.go"]
|
Closed connection can linger in server
|
## Defect
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
Latest server 2.1.6, any client
#### OS/Container environment:
N/A
#### Steps or code to reproduce the issue:
Run the server, and have an application create the TCP connection to the server but exit before it has a chance to complete the connect protocol.
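For reference, a minimal sketch of such an application, assuming a local server on the default port (the process dials the port, never sends CONNECT, and exits):
```go
package main

import (
    "net"
    "time"
)

func main() {
    // Open the TCP connection, but never send the CONNECT protocol message.
    conn, err := net.Dial("tcp", "127.0.0.1:4222")
    if err != nil {
        panic(err)
    }
    _ = conn

    // Exit shortly after; /connz may then still show the connection,
    // with the initial INFO counted as pending bytes.
    time.Sleep(100 * time.Millisecond)
}
```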
#### Expected result:
Monitoring should not show this connection.
#### Actual result:
Monitoring shows a connection with pending bytes, no lang/version, no rtt, etc..
```
"connections": [
{
"cid": 1,
"ip": "::1",
"port": 50841,
"start": "2020-05-08T10:42:46.816593-06:00",
"last_activity": "2020-05-08T10:42:46.816593-06:00",
"uptime": "2s",
"idle": "2s",
"pending_bytes": 280,
"in_msgs": 0,
"out_msgs": 0,
"in_bytes": 0,
"out_bytes": 0,
"subscriptions": 0
}
]
```
|
https://github.com/nats-io/nats-server/issues/1384
|
https://github.com/nats-io/nats-server/pull/1385
|
81fabde729b297fff9c3757c506f274d2d7974ca
|
80b4857cd774346986742d8c0494cb048dbf2b65
| 2020-05-08T16:54:11Z |
go
| 2020-05-08T18:01:57Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,378 |
["server/reload.go", "server/reload_test.go", "server/server.go"]
|
make sure validateOptions is called after reload
|
Currently it is not.
|
https://github.com/nats-io/nats-server/issues/1378
|
https://github.com/nats-io/nats-server/pull/1381
|
7e60769f9e1c175212366c18ec2fa9da7bff47aa
|
1ab26dfa0f4641a0199c182f726861f2e5f34288
| 2020-05-06T19:52:02Z |
go
| 2020-05-06T21:50:03Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,372 |
["server/accounts.go", "server/events.go", "server/events_test.go", "server/reload.go", "server/reload_test.go", "server/server.go"]
|
configuration reload that changes the system account does not take effect everywhere
|
The reason for this is that config reload recreates all accounts, including the system account, but does not change the pointer server.sys.account or its copy server.sys.client.acc (taken by internalSendLoop).
The config change below changes the import's `to` field from CONNECT to DISCONNECT, then calls reload.
While the server is running, the change does not take effect.
As far as I can tell, JWT code actually updates the account instead of creating a new one.
I don't think we should allow modifying the system account.
While the config may seem contrived for now, it's hard to say that the pointer in question will always remain being used this way; future usages of c.acc may introduce another problem.
With the below config before and after reload, this subscriber will only receive the CONNECT event. `nats -s nats://user:pwd@localhost:4222 sub ">"`
Config before/after reload
```bash
> cat test.cfg
listen: localhost:4222
accounts: {
USERS {
users: [
{user: "user", password: "pwd"}
]
exports: [{service: "test"}]
},
SYS: {
users: [
{user: admin, password: changeit}
]
imports: [
{service: {account: USERS, subject: "test"}, to: "$SYS.ACCOUNT.USERS.CONNECT"},
]
},
}
system_account: SYS
> vi test.cfg
> nats-server --signal reload
> cat test.cfg
listen: localhost:4222
accounts: {
USERS {
users: [
{user: "user", password: "pwd"}
]
exports: [{service: "test"}]
},
SYS: {
users: [
{user: admin, password: changeit}
]
imports: [
{service: {account: USERS, subject: "test"}, to: "$SYS.ACCOUNT.USERS.DISCONNECT"},
]
},
}
system_account: SYS
>
```
|
https://github.com/nats-io/nats-server/issues/1372
|
https://github.com/nats-io/nats-server/pull/1387
|
9a6c840039e636a7ee99cc11d3c76a677ca220ad
|
09a58cbada86a61fa15b8f8744636814f868c3ae
| 2020-05-05T01:15:38Z |
go
| 2020-05-12T22:09:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,371 |
["server/monitor.go", "server/monitor_test.go", "server/sublist.go"]
|
Add missing metadata keys to monitoring port subsz JSON
|
Unlike the other `*z` (e.g., `connz` and `varz`), `subsz` is missing the `server_id` key.
It is also missing `now`.
There are probably other metadata fields that could be added, but these are the notable omissions.
Here is a recent `subsz` payload:
```
{
"num_subscriptions": 0,
"num_cache": 0,
"num_inserts": 0,
"num_removes": 0,
"num_matches": 0,
"cache_hit_rate": 0,
"max_fanout": 0,
"avg_fanout": 0
}
```
|
https://github.com/nats-io/nats-server/issues/1371
|
https://github.com/nats-io/nats-server/pull/1377
|
b1f1e878c6819c8b246edcea2e3a9179c735468c
|
7e60769f9e1c175212366c18ec2fa9da7bff47aa
| 2020-05-04T19:34:43Z |
go
| 2020-05-06T20:06:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,357 |
["server/monitor.go", "server/monitor_test.go", "server/sublist.go"]
|
/subs monitoring endpoint does not work when accounts are enabled.
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
The example below shows starting the server without and with accounts, then subscribing on ">", followed by querying /subsz.
When started with accounts, the subscription does not appear.
(I tried /connz etc.; these worked. /connz?subs=1 worked as expected.)
With accounts:
```bash
> cat X_BAD.cfg
listen: localhost:4222
http: localhost:8000
accounts: {
USERS {
users: [
{user: "user", password: "pwd"}
]
},
}
> nats-server -c X_BAD.cfg -D &
[1] 1865
[1865] 2020/04/27 11:51:54.350485 [INF] Starting nats-server version 2.1.6
...
[1865] 2020/04/27 11:51:54.352773 [INF] Server is ready
> nats -s nats://user:pwd@localhost:4222 sub ">"
[1865] 2020/04/27 11:52:00.470127 [DBG] 127.0.0.1:49657 - cid:1 - Client connection created
11:52:00 Subscribing on >
[1865] 2020/04/27 11:52:02.661034 [DBG] 127.0.0.1:49657 - cid:1 - Client Ping Timer
^Z
[2] + 1874 suspended nats -s nats://user:pwd@localhost:4222 sub ">"
> bg
[2] - 1874 continued nats -s nats://user:pwd@localhost:4222 sub ">"
> curl "http://localhost:8000/subsz"
{
"num_subscriptions": 0,
"num_cache": 0,
"num_inserts": 0,
"num_removes": 0,
"num_matches": 0,
"cache_hit_rate": 0,
"max_fanout": 0,
"avg_fanout": 0
}%
>
```
Without accounts:
```bash
> cat X_GOOD.cfg
listen: localhost:4222
http: localhost:8000
> nats-server -c X_GOOD.cfg -D &
[1] 1921
[1921] 2020/04/27 11:52:35.951091 [INF] Starting nats-server version 2.1.6
...
[1921] 2020/04/27 11:52:35.953342 [INF] Server is ready
> nats -s nats://user:pwd@localhost:4222 sub ">" &
[2] 1930
[1921] 2020/04/27 11:52:45.226069 [DBG] 127.0.0.1:49671 - cid:1 - Client connection created
11:52:45 Subscribing on >
[1921] 2020/04/27 11:52:47.350221 [DBG] 127.0.0.1:49671 - cid:1 - Client Ping Timer
> curl "http://localhost:8000/subsz"
{
"num_subscriptions": 1,
"num_cache": 0,
"num_inserts": 1,
"num_removes": 0,
"num_matches": 0,
"cache_hit_rate": 0,
"max_fanout": 0,
"avg_fanout": 0
}%
>
```
|
https://github.com/nats-io/nats-server/issues/1357
|
https://github.com/nats-io/nats-server/pull/1377
|
b1f1e878c6819c8b246edcea2e3a9179c735468c
|
7e60769f9e1c175212366c18ec2fa9da7bff47aa
| 2020-04-27T16:11:02Z |
go
| 2020-05-06T20:06:12Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,344 |
["server/leafnode.go", "test/leafnode_test.go"]
|
service responses seem to not be delivered after leaf node restart.
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
I am starting two server that are connected via a leaf node connection.
I then start `./nats-rply -s "nats://evil:[email protected]:4222" foo leaf`, connected to the server started with `nats-server -c evilinc.cfg -DV`.
I issue a request and get a response across account and leaf node connection.
```
./nats-req -s "nats://good:[email protected]:5222" from_evilfoo hub
Published [from_evilfoo] : 'hub'
Received [_INBOX.ztXHx36DOCOpVHfkGDmmq3.r2bQvqdu] : 'leaf'
```
If I restart `nats-server -c evilinc.cfg -DV` as well as the nats-rply, and start nats-req again, it times out. The reply side, however, has received the request.
```bash
./nats-req -s "nats://good:[email protected]:5222" from_evilfoo hub
nats: timeout for request
```
This persists until I restart the server that nats-req is connected to.
For this to happen, at least one service invocation must have been done prior to the restart.
**Perhaps the _R_. subject changes due to the restart and the server kept running is not updating accordingly.** (see recorded traces)
```bash
[23357] 2020/04/13 18:49:28.067296 [DBG] 127.0.0.1:54470 - cid:5 - Client connection created
[23357] 2020/04/13 18:49:28.067723 [TRC] 127.0.0.1:54470 - cid:5 - <<- [CONNECT {"verbose":false,"pedantic":false,"user":"good","pass":"[REDACTED]","tls_required":false,"name":"NATS Sample Requestor","lang":"go","version":"1.9.1","protocol":1,"echo":true}]
[23357] 2020/04/13 18:49:28.067753 [TRC] 127.0.0.1:54470 - cid:5 - <<- [PING]
[23357] 2020/04/13 18:49:28.067757 [TRC] 127.0.0.1:54470 - cid:5 - ->> [PONG]
[23357] 2020/04/13 18:49:28.067924 [TRC] 127.0.0.1:54470 - cid:5 - <<- [SUB _INBOX.4CkeI5F4bYENm2qxaTb3U9.* 1]
[23357] 2020/04/13 18:49:28.067943 [TRC] 127.0.0.1:54470 - cid:5 - <<- [PUB from_evilfoo _INBOX.4CkeI5F4bYENm2qxaTb3U9.FsJybq3H 3]
[23357] 2020/04/13 18:49:28.067949 [TRC] 127.0.0.1:54470 - cid:5 - <<- MSG_PAYLOAD: ["hub"]
[23357] 2020/04/13 18:49:28.067970 [TRC] 192.168.1.111:4223 - lid:3 - ->> [LMSG evilfoo _R_.RVpdqh.nx4mLr 3]
[23357] 2020/04/13 18:49:30.072216 [DBG] 127.0.0.1:54470 - cid:5 - Client Ping Timer
[23357] 2020/04/13 18:49:30.072251 [TRC] 127.0.0.1:54470 - cid:5 - ->> [PING]
[23357] 2020/04/13 18:49:30.072393 [TRC] 127.0.0.1:54470 - cid:5 - <<- [PONG]
[23357] 2020/04/13 18:49:38.074052 [DBG] 127.0.0.1:54470 - cid:5 - Client connection closed
[23357] 2020/04/13 18:49:38.074093 [TRC] 127.0.0.1:54470 - cid:5 - <-> [DELSUB 1]
```
```bash
23373] 2020/04/13 18:49:28.068038 [TRC] 192.168.1.111:54463 - lid:2 - <<- [LMSG evilfoo _R_.RVpdqh.nx4mLr 3]
[23373] 2020/04/13 18:49:28.068049 [TRC] 192.168.1.111:54463 - lid:2 - <<- MSG_PAYLOAD: ["hub"]
[23373] 2020/04/13 18:49:28.068060 [TRC] 127.0.0.1:54462 - cid:1 - ->> [MSG foo 1 _R_.EnM6YO.wRf9tF 3]
[23373] 2020/04/13 18:49:28.068175 [TRC] 127.0.0.1:54462 - cid:1 - <<- [PUB _R_.EnM6YO.wRf9tF 4]
[23373] 2020/04/13 18:49:28.068182 [TRC] 127.0.0.1:54462 - cid:1 - <<- MSG_PAYLOAD: ["leaf"]
```
Config used
```bash
> cat good.cfg
listen: 127.0.0.1:5222
accounts: {
INTERNAL: {
users: [
{user: good, password: pwd}
]
exports: [{service: "foo", response: singleton}]
imports: [
{
service: {
account: EXTERNAL
subject: "evilfoo"
}, to: from_evilfoo
}
]
},
EXTERNAL: {
users: [
{user: evil, password: pwd}
]
exports: [{service: "evilfoo", response: singleton}]
imports: [
{
service: {
account: INTERNAL
subject: "foo"
}, to: goodfoo
}
]
},
}
leafnodes {
remotes = [
{
url:"nats://127.0.0.1:4223"
account:"EXTERNAL"
},
]
}
>cat evilinc.cfg
listen: 127.0.0.1:4222
accounts: {
INTERNAL_EVILINC: {
users: [
{user: evil, password: pwd}
]
exports: [{service: "foo", response: singleton}]
imports: [
{
service: {
account: EXTERNAL_GOOD
subject: "goodfoo"
}, to: from_goodfoo
}
]
},
EXTERNAL_GOOD: {
users: [
{user: good, password: pwd}
]
exports: [{service: "goodfoo", response: singleton}]
imports: [
{
service: {
account: INTERNAL_EVILINC
subject: "foo"
}, to: evilfoo
}
]
},
}
leafnodes {
port: 4223
authorization {
account:"EXTERNAL_GOOD"
}
}
>
```

|
https://github.com/nats-io/nats-server/issues/1344
|
https://github.com/nats-io/nats-server/pull/1345
|
dda10463059d03e5aa953938d5eb5d841ff1c338
|
9af6dcd19d8e24c07f3e53376749d61c22eeda95
| 2020-04-13T22:57:34Z |
go
| 2020-04-14T18:46:14Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,332 |
["server/client.go", "test/leafnode_test.go"]
|
inconsistent subscription forwarding behavior across accounts and leaf nodes
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Below are two server configurations.
When subscribing to `>` on the server running good.cfg (this subscriber uses account INTERNAL)
`nats-sub -s "nats://good:[email protected]:5222" ">"`
And publishing to the server running evil.cfg
`nats-pub -s "nats://0.0.0.0:4222" foo bar`
Even though the import/export are set up such that one could expect the message to end up at the subscriber, it does not.
If I also start another subscriber on the server running good.cfg (this subscriber uses account EXTERNAL)
`nats-sub -s "nats://evil:[email protected]:5222" ">"`
Now when publishing to the server running evil.cfg again
`nats-pub -s "nats://0.0.0.0:4222" foo bar`
**Both** subscribers get it. Now that the subscriber in the external account exists, the subscriber in the internal account gets the message.
This behavior is odd, as one subscriber receiving the message depends on the other subscriber being present.
The expected result would be that the subscription of the subscriber in the internal account propagates through the export and then is forwarded to the leaf node.
```Bash
>cat good.cfg
listen: 127.0.0.1:5222
leafnodes {
remotes = [
{
url:"nats://127.0.0.1:4223"
account:"EXTERNAL"
}
]
}
accounts: {
INTERNAL: {
users: [
{user: good, password: pwd}
]
imports: [
{
stream: {
account: EXTERNAL
subject: "foo"
}
}
]
},
EXTERNAL: {
users: [
{user: evil, password: pwd}
]
exports: [{stream: "foo"}]
},
}
> cat evil.cfg
listen: 127.0.0.1:4222
leafnodes {
port: 4223
}
```
|
https://github.com/nats-io/nats-server/issues/1332
|
https://github.com/nats-io/nats-server/pull/1335
|
84841a35bbc1561821f4e59b6c58ead5dd11fefe
|
84b634eb52b9b82b02a4145b60624b8203b73671
| 2020-04-10T00:47:16Z |
go
| 2020-04-10T18:19:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,325 |
["server/auth.go", "test/configs/certs/svid/ca.key", "test/configs/certs/svid/ca.pem", "test/configs/certs/svid/server.key", "test/configs/certs/svid/server.pem", "test/configs/certs/svid/svid-user-a.key", "test/configs/certs/svid/svid-user-a.pem", "test/configs/certs/svid/svid-user-b.key", "test/configs/certs/svid/svid-user-b.pem", "test/tls_test.go"]
|
Support SPIFFE x.509 SVIDs for client authentication
|
- [ ] Defect
- [X] Feature Request or Change Proposal
## Feature Requests
The SPIFFE specification places a URI entry in the certificate's SAN field, ref: https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md#7-conclusion
#### Use Case:
Integrate NATS authorization with SPIFFE identity documents
#### Proposed Change:
add ```hasSANs := len(cert.URIs) > 0``` and logic to https://github.com/nats-io/nats-server/blob/e126a1f9d83a96d8c48d06718cf392ca7fba6cbd/server/auth.go#L536
ref: https://golang.org/src/crypto/x509/x509.go#L744
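As a rough sketch of the proposed check (an illustrative helper only, not the server's actual auth code):
```go
package main

import "crypto/x509"

// spiffeIDs returns the SPIFFE URI SANs found on a verified client
// certificate, per the proposal above.
func spiffeIDs(cert *x509.Certificate) []string {
    hasSANs := len(cert.URIs) > 0
    if !hasSANs {
        return nil
    }
    var ids []string
    for _, uri := range cert.URIs {
        // SPIFFE IDs use the "spiffe" URI scheme, e.g. spiffe://trust-domain/workload.
        if uri.Scheme == "spiffe" {
            ids = append(ids, uri.String())
        }
    }
    return ids
}
```
The returned IDs could then be matched against configured user identities during client authentication.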
#### Who Benefits From The Change(s)?
Users using SPIFFE for identity (i.e. Istio users)
#### Alternative Approaches
Consume the client library (ref: https://github.com/spiffe/go-spiffe) for a SPIFFE-specific implementation
|
https://github.com/nats-io/nats-server/issues/1325
|
https://github.com/nats-io/nats-server/pull/1389
|
a34fb0836bcf808db6f1f5f27f73c58c0407c3a3
|
d72dff4e0f6fb93ca98daca425c151ea4f242250
| 2020-04-07T17:57:24Z |
go
| 2020-05-27T21:07:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,314 |
["server/accounts.go", "server/client.go", "server/monitor.go", "server/reload.go"]
|
data race with server.gacc
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Ran into this during a unit test.
accounts.go line 441, in `removeClient`: `if c != nil && c.srv != nil && a != c.srv.gacc && removed {`
reload.go line 961, in `reloadAuthorization`: `s.gacc = nil`
Code inspection revealed a missing lock.
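As a rough sketch of the kind of guard the missing lock implies (illustrative only, assuming the server mutex `s.mu` protects these fields; not the actual patch):
```go
// Read the global account pointer under the server lock instead of
// dereferencing s.gacc directly from removeClient.
func (s *Server) globalAccount() *Account {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.gacc
}
```
removeClient would then compare against `c.srv.globalAccount()` rather than `c.srv.gacc`.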
```test
=== RUN TestLoggingReload
==================
WARNING: DATA RACE
Write at 0x00c0004579f8 by goroutine 1333:
github.com/nats-io/nats-server/server.(*Server).reloadAuthorization()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload.go:961 +0xa0a
github.com/nats-io/nats-server/server.(*Server).applyOptions()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload.go:878 +0x20f
github.com/nats-io/nats-server/server.(*Server).reloadOptions()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload.go:679 +0x237
github.com/nats-io/nats-server/server.(*Server).Reload()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload.go:627 +0x39e
github.com/nats-io/nats-server/server.TestLoggingReload.func6()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload_test.go:4063 +0x13a
github.com/nats-io/nats-server/server.TestLoggingReload()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/reload_test.go:4111 +0x5a4
testing.tRunner()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:865 +0x163
Previous read at 0x00c0004579f8 by goroutine 942:
github.com/nats-io/nats-server/server.(*Account).removeClient()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/accounts.go:441 +0x1d8
github.com/nats-io/nats-server/server.(*client).teardownConn()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:3619 +0xd16
github.com/nats-io/nats-server/server.(*client).writeLoop()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/client.go:801 +0x23e
github.com/nats-io/nats-server/server.(*Server).createClient.func3()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1802 +0x41
Goroutine 1333 (running) created at:
testing.(*T).Run()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:916 +0x65a
testing.runTests.func1()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:1157 +0xa8
testing.tRunner()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:865 +0x163
testing.runTests()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:1155 +0x523
testing.(*M).Run()
/home/travis/.gimme/versions/go1.12.17.linux.amd64/src/testing/testing.go:1072 +0x2eb
github.com/nats-io/nats-server/server.TestMain()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/sublist_test.go:1095 +0x2a6
main.main()
_testmain.go:1214 +0x222
Goroutine 942 (finished) created at:
github.com/nats-io/nats-server/server.(*Server).startGoRoutine()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:2146 +0xb8
github.com/nats-io/nats-server/server.(*Server).createClient()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1802 +0x8b3
github.com/nats-io/nats-server/server.(*Server).AcceptLoop.func2()
/home/travis/gopath/src/github.com/nats-io/nats-server/server/server.go:1426 +0x58
==================
```
|
https://github.com/nats-io/nats-server/issues/1314
|
https://github.com/nats-io/nats-server/pull/1315
|
649af1b5c16957df10ff74ced7128e7c3472b03c
|
ff920c31b32a1199c88ee887ff4e4919fdcefb5b
| 2020-03-17T18:22:00Z |
go
| 2020-03-19T23:29:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,313 |
["server/ciphersuites.go", "server/server.go", "server/server_test.go"]
|
monitoring only shows up to TLS 1.2 info (version, cipher)
|
- [X] Feature Request or Change Proposal
## Component
Monitoring
## Feature Requests
I would love to see my monitoring showing correct info for TLS 1.3 connz.
### currently
TLS 1.3 version shown as Unknown [304]
TLS 1.3 cipher shown as Unknown [%x]
### proposed
TLS 1.3 version shown as 1.3
TLS 1.3 cipher shown as value from cipherMap
### changes
This would require few changes:
a) Extend the switch statement in `func tlsVersion(ver uint16) string` in server/server.go to also check for the TLS 1.3 version constant.
```
// from "tls"
VersionTLS13 = 0x0304
```
b) Extend cipherMap and cipherMapByID to include the new TLS 1.3 constants in server/ciphersuites.go.
```
// from "tls"
TLS_AES_128_GCM_SHA256 uint16 = 0x1301
TLS_AES_256_GCM_SHA384 uint16 = 0x1302
TLS_CHACHA20_POLY1305_SHA256 uint16 = 0x1303
```
c) Just a matter of taste: the TLS version 0x0304 represented as "Unknown [304]" in the JSON output of the connz route might be considered misleading. I would prefer it to be prefixed with some indication of base-16 encoding, like "0x"; see `func tlsVersion(ver uint16) string` in server/server.go.
From my point of view it seems these changes would not introduce new problems.
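As an illustration of changes a) and b), a rough, self-contained sketch (the function body and map are illustrative, not the server's actual code):
```go
package main

import (
    "crypto/tls"
    "fmt"
)

// a) extend the version switch to recognize TLS 1.3.
func tlsVersion(ver uint16) string {
    switch ver {
    case tls.VersionTLS10:
        return "1.0"
    case tls.VersionTLS11:
        return "1.1"
    case tls.VersionTLS12:
        return "1.2"
    case tls.VersionTLS13:
        return "1.3"
    }
    // c) prefix unknown values so the base-16 encoding is explicit.
    return fmt.Sprintf("Unknown [0x%x]", ver)
}

// b) TLS 1.3 suites to merge into cipherMap / cipherMapByID.
var tls13Ciphers = map[uint16]string{
    tls.TLS_AES_128_GCM_SHA256:       "TLS_AES_128_GCM_SHA256",
    tls.TLS_AES_256_GCM_SHA384:       "TLS_AES_256_GCM_SHA384",
    tls.TLS_CHACHA20_POLY1305_SHA256: "TLS_CHACHA20_POLY1305_SHA256",
}

func main() {
    fmt.Println(tlsVersion(tls.VersionTLS13), len(tls13Ciphers)) // 1.3 3
}
```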
|
https://github.com/nats-io/nats-server/issues/1313
|
https://github.com/nats-io/nats-server/pull/1316
|
6d92410a1feb98cab754968349be2bb285417643
|
b46a011c8779e138026244a1ce35e2f419062ffe
| 2020-03-14T11:21:19Z |
go
| 2020-03-19T23:30:51Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,310 |
["server/parser.go"]
|
pub traced message can be traced twice sometimes
|
- [X] Defect
## Defects
It seems that, especially with `telnet`-based connections, the `PUB` message can end up twice in the logs:
```
# client
$ telnet 127.0.0.1 4222
pub hello 5
world
# server
$ go run main.go
[50109] 2020/03/12 15:23:42.830153 [DBG] 127.0.0.1:62509 - cid:1 - Client connection created
[50109] 2020/03/12 15:23:46.682241 [TRC] 127.0.0.1:62509 - cid:1 - <<- [PUB hello 5]
[50109] 2020/03/12 15:23:46.682288 [TRC] 127.0.0.1:62509 - cid:1 - <<- [PUB hello 5]
[50109] 2020/03/12 15:23:47.370417 [TRC] 127.0.0.1:62509 - cid:1 - <<- MSG_PAYLOAD: ["world"]
[50109] 2020/03/12 15:23:47.370461 [TRC] 127.0.0.1:62509 - cid:1 - ->> [OK]
```
Think this may have something to do with the handling of `clonePubArg` here: https://github.com/nats-io/nats-server/commit/6a1c3fc29b0cf1991ba5ea1157528d293b6234ff#r37800363
Interestingly, this does not happen when using tooling like `nats-pub hello world`:
```sh
$ go run main.go -DV &
[51904] 2020/03/12 16:01:05.529460 [DBG] 127.0.0.1:62988 - cid:1 - Client connection created
[51904] 2020/03/12 16:01:09.627714 [TRC] 127.0.0.1:62988 - cid:1 - <<- [PUB hello 5]
[51904] 2020/03/12 16:01:09.627746 [TRC] 127.0.0.1:62988 - cid:1 - <<- [PUB hello 5]
[51904] 2020/03/12 16:01:10.489145 [TRC] 127.0.0.1:62988 - cid:1 - <<- MSG_PAYLOAD: ["world"]
[51904] 2020/03/12 16:01:10.489199 [TRC] 127.0.0.1:62988 - cid:1 - ->> [OK]
[51904] 2020/03/12 16:01:33.135830 [TRC] 127.0.0.1:62988 - cid:1 - ->> [-ERR Unknown Protocol Operation]
[51904] 2020/03/12 16:01:33.135912 [ERR] 127.0.0.1:62988 - cid:1 - Client parser ERROR, state=0, i=0: proto='"\xff\xf4\xff\xfd"...'
[51904] 2020/03/12 16:01:33.136141 [DBG] 127.0.0.1:62988 - cid:1 - Client connection closed
[51904] 2020/03/12 16:01:39.165754 [DBG] 127.0.0.1:62992 - cid:2 - Client connection created
[51904] 2020/03/12 16:01:39.167807 [TRC] 127.0.0.1:62992 - cid:2 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS Sample Publisher","lang":"go","version":"1.9.1","protocol":1,"echo":true}]
[51904] 2020/03/12 16:01:39.168404 [TRC] 127.0.0.1:62992 - cid:2 - <<- [PING]
[51904] 2020/03/12 16:01:39.168444 [TRC] 127.0.0.1:62992 - cid:2 - ->> [PONG]
[51904] 2020/03/12 16:01:39.168886 [TRC] 127.0.0.1:62992 - cid:2 - <<- [PUB hello 5]
[51904] 2020/03/12 16:01:39.168909 [TRC] 127.0.0.1:62992 - cid:2 - <<- MSG_PAYLOAD: ["world"]
[51904] 2020/03/12 16:01:39.168924 [TRC] 127.0.0.1:62992 - cid:2 - <<- [PING]
[51904] 2020/03/12 16:01:39.168930 [TRC] 127.0.0.1:62992 - cid:2 - ->> [PONG]
[51904] 2020/03/12 16:01:39.169270 [DBG] 127.0.0.1:62992 - cid:2 - Client connection closed
```
#### Versions of `nats-server` and affected client libraries used:
master
#### OS/Container environment:
Confirmed this in OS X
|
https://github.com/nats-io/nats-server/issues/1310
|
https://github.com/nats-io/nats-server/pull/1312
|
6d5ac0a5e3944a4744c5da27e9c2322fcbb0c8b5
|
649af1b5c16957df10ff74ced7128e7c3472b03c
| 2020-03-12T23:04:50Z |
go
| 2020-03-14T17:02:28Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,305 |
["server/accounts.go", "server/events_test.go", "server/leafnode.go", "server/leafnode_test.go", "test/leafnode_test.go"]
|
leaf node loop detection does not work
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
trunk
#### OS/Container environment:
macOS
#### Steps or code to reproduce the issue:
leafA.cfg:
listen: 127.0.0.1:4222
leafnodes {
port: 4223
}
leafB.cfg:
listen: 127.0.0.1:5222
leafnodes {
remotes = [{url:"nats://127.0.0.1:4223"}]
}
leafC.cfg :
listen: 127.0.0.1:6222
leafnodes {
port: 6223
remotes = [{url:"nats://127.0.0.1:4223"}]
}
nats-server -c leafA.cfg -DV
[67739] 2020/03/06 19:22:24.022131 [INF] Starting nats-server version 2.1.4
[67739] 2020/03/06 19:22:24.022367 [DBG] Go build version go1.13.6
[67739] 2020/03/06 19:22:24.022378 [INF] Git commit [not set]
[67739] 2020/03/06 19:22:24.022829 [INF] Listening for leafnode connections on 0.0.0.0:4223
[67739] 2020/03/06 19:22:24.022863 [DBG] Get non local IPs for "0.0.0.0"
[67739] 2020/03/06 19:22:24.023286 [DBG] ip=2604:2000:1201:87a8:cd3:9436:a312:dde9
[67739] 2020/03/06 19:22:24.024101 [INF] Listening for client connections on 127.0.0.1:4222
[67739] 2020/03/06 19:22:24.024118 [INF] Server id is NAK4DAATZTM6DAMLMQLV6J674IG7AJQ4Y4ZPWTVOWUQLVPGIKSQ6MMJ7
[67739] 2020/03/06 19:22:24.024130 [INF] Server is ready
[67739] 2020/03/06 19:22:24.953636 [DBG] 127.0.0.1:60458 - lid:1 - Leafnode connection created
[67739] 2020/03/06 19:22:24.954744 [TRC] 127.0.0.1:60458 - lid:1 - <<- [CONNECT {"tls_required":false,"name":"NBDTS3OTNDM455B7FCLGOGO5BQVK2WVEF2RFILSVPS5UM2MGM3WE2EB3"}]
[67739] 2020/03/06 19:22:24.955099 [TRC] 127.0.0.1:60458 - lid:1 - ->> [LS+ lds.TpnN2Ff590FtAOFHv9xjrb]
[67739] 2020/03/06 19:22:25.950019 [DBG] 127.0.0.1:60461 - lid:2 - Leafnode connection created
[67739] 2020/03/06 19:22:25.950652 [TRC] 127.0.0.1:60461 - lid:2 - <<- [CONNECT {"tls_required":false,"name":"NDR2WP46OQC6VPJCLFH26BWSWK64NW4ZONE7JE7AQ4NFD47DK7DRF5MG"}]
[67739] 2020/03/06 19:22:25.950760 [TRC] 127.0.0.1:60461 - lid:2 - ->> [LS+ lds.TpnN2Ff590FtAOFHv9xjrb]
[67739] 2020/03/06 19:22:25.988505 [DBG] 127.0.0.1:60458 - lid:1 - LeafNode Ping Timer
[67739] 2020/03/06 19:22:25.988605 [TRC] 127.0.0.1:60458 - lid:1 - ->> [PING]
[67739] 2020/03/06 19:22:25.988998 [TRC] 127.0.0.1:60458 - lid:1 - <<- [PONG]
[67739] 2020/03/06 19:22:26.148146 [TRC] 127.0.0.1:60458 - lid:1 - <<- [PING]
[67739] 2020/03/06 19:22:26.148185 [TRC] 127.0.0.1:60458 - lid:1 - ->> [PONG]
[67739] 2020/03/06 19:22:27.118495 [DBG] 127.0.0.1:60461 - lid:2 - LeafNode Ping Timer
[67739] 2020/03/06 19:22:27.118691 [TRC] 127.0.0.1:60461 - lid:2 - ->> [PING]
[67739] 2020/03/06 19:22:27.119322 [TRC] 127.0.0.1:60461 - lid:2 - <<- [PONG]
[67739] 2020/03/06 19:22:27.145222 [TRC] 127.0.0.1:60461 - lid:2 - <<- [PING]
[67739] 2020/03/06 19:22:27.145249 [TRC] 127.0.0.1:60461 - lid:2 - ->> [PONG]
nats-server -c leafB.cfg -DV
[67740] 2020/03/06 19:22:24.952269 [INF] Starting nats-server version 2.1.4
[67740] 2020/03/06 19:22:24.952485 [DBG] Go build version go1.13.6
[67740] 2020/03/06 19:22:24.952496 [INF] Git commit [not set]
[67740] 2020/03/06 19:22:24.952774 [DBG] Trying to connect as leafnode to remote server on "127.0.0.1:4223"
[67740] 2020/03/06 19:22:24.952966 [INF] Listening for client connections on 127.0.0.1:5222
[67740] 2020/03/06 19:22:24.952985 [INF] Server id is NBDTS3OTNDM455B7FCLGOGO5BQVK2WVEF2RFILSVPS5UM2MGM3WE2EB3
[67740] 2020/03/06 19:22:24.952995 [INF] Server is ready
[67740] 2020/03/06 19:22:24.954200 [DBG] 127.0.0.1:4223 - lid:1 - Remote leafnode connect msg sent
[67740] 2020/03/06 19:22:24.954387 [DBG] 127.0.0.1:4223 - lid:1 - Leafnode connection created
[67740] 2020/03/06 19:22:24.954465 [INF] Connected leafnode to "127.0.0.1:4223"
[67740] 2020/03/06 19:22:24.955347 [TRC] 127.0.0.1:4223 - lid:1 - <<- [LS+ lds.TpnN2Ff590FtAOFHv9xjrb]
[67740] 2020/03/06 19:22:25.988865 [TRC] 127.0.0.1:4223 - lid:1 - <<- [PING]
[67740] 2020/03/06 19:22:25.988899 [TRC] 127.0.0.1:4223 - lid:1 - ->> [PONG]
[67740] 2020/03/06 19:22:26.147778 [DBG] 127.0.0.1:4223 - lid:1 - LeafNode Ping Timer
[67740] 2020/03/06 19:22:26.147975 [TRC] 127.0.0.1:4223 - lid:1 - ->> [PING]
[67740] 2020/03/06 19:22:26.148347 [TRC] 127.0.0.1:4223 - lid:1 - <<- [PONG]
nats-server -c leafC.cfg -DV
[67741] 2020/03/06 19:22:25.946618 [INF] Starting nats-server version 2.1.4
[67741] 2020/03/06 19:22:25.946859 [DBG] Go build version go1.13.6
[67741] 2020/03/06 19:22:25.946874 [INF] Git commit [not set]
[67741] 2020/03/06 19:22:25.947386 [INF] Listening for leafnode connections on 0.0.0.0:6223
[67741] 2020/03/06 19:22:25.947427 [DBG] Get non local IPs for "0.0.0.0"
[67741] 2020/03/06 19:22:25.947921 [DBG] ip=2604:2000:1201:87a8:cd3:9436:a312:dde9
[67741] 2020/03/06 19:22:25.948927 [INF] Listening for client connections on 127.0.0.1:6222
[67741] 2020/03/06 19:22:25.948947 [INF] Server id is NDR2WP46OQC6VPJCLFH26BWSWK64NW4ZONE7JE7AQ4NFD47DK7DRF5MG
[67741] 2020/03/06 19:22:25.948958 [INF] Server is ready
[67741] 2020/03/06 19:22:25.949058 [DBG] Trying to connect as leafnode to remote server on "127.0.0.1:4223"
[67741] 2020/03/06 19:22:25.950151 [DBG] 127.0.0.1:4223 - lid:1 - Remote leafnode connect msg sent
[67741] 2020/03/06 19:22:25.950383 [DBG] 127.0.0.1:4223 - lid:1 - Leafnode connection created
[67741] 2020/03/06 19:22:25.950542 [INF] Connected leafnode to "127.0.0.1:4223"
[67741] 2020/03/06 19:22:25.950952 [TRC] 127.0.0.1:4223 - lid:1 - <<- [LS+ lds.TpnN2Ff590FtAOFHv9xjrb]
[67741] 2020/03/06 19:22:27.118994 [TRC] 127.0.0.1:4223 - lid:1 - <<- [PING]
[67741] 2020/03/06 19:22:27.119070 [TRC] 127.0.0.1:4223 - lid:1 - ->> [PONG]
[67741] 2020/03/06 19:22:27.144911 [DBG] 127.0.0.1:4223 - lid:1 - LeafNode Ping Timer
[67741] 2020/03/06 19:22:27.144991 [TRC] 127.0.0.1:4223 - lid:1 - ->> [PING]
[67741] 2020/03/06 19:22:27.145348 [TRC] 127.0.0.1:4223 - lid:1 - <<- [PONG]
#### Expected result:
Warning that a loop was detected followed by disconnect
#### Actual result:
All servers connect, no warning.
When subscribing/publishing, results differ.
|
https://github.com/nats-io/nats-server/issues/1305
|
https://github.com/nats-io/nats-server/pull/1308
|
ff920c31b32a1199c88ee887ff4e4919fdcefb5b
|
6d92410a1feb98cab754968349be2bb285417643
| 2020-03-07T00:25:05Z |
go
| 2020-03-19T23:29:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,290 |
["server/opts.go"]
|
Triggering a config warning in the server causes -DV to be ignored
|
- [X] Defect
- [ ] Feature Request or Change Proposal
## Defects
Triggering a warning in the server can cause some flags to be ignored (such as -DV), for example with config:
```
ping_interval = 1
```
Will result in debug being disabled:
```
nats-server -c foo.conf -DV
foo.conf:2:1: invalid use of field "ping_interval": ping_interval should be converted to a duration
[9136] 2020/02/28 10:58:40.230949 [INF] Starting nats-server version 2.1.4
[9136] 2020/02/28 10:58:40.231099 [INF] Git commit [not set]
[9136] 2020/02/28 10:58:40.231494 [INF] Listening for client connections on 0.0.0.0:4222
[9136] 2020/02/28 10:58:40.231508 [INF] Server id is NANZ5MC2MGS624DJFYZBFMNWNBGSVGAHWTP4QO5Z4KEFLKB3EGE5NCDI
[9136] 2020/02/28 10:58:40.231514 [INF] Server is ready
^C[9136] 2020/02/28 10:58:41.567036 [INF] Initiating Shutdown...
[9136] 2020/02/28 10:58:41.567319 [INF] Server Exiting..
```
#### Versions of `nats-server` and affected client libraries used:
2.0.0 series
|
https://github.com/nats-io/nats-server/issues/1290
|
https://github.com/nats-io/nats-server/pull/1291
|
08d1da459bb617e18c24d364dbdc750e93524641
|
6465afd06230fd349904a84b2150911bd104fff5
| 2020-02-28T19:12:09Z |
go
| 2020-03-03T16:11:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,275 |
["main.go", "server/server.go"]
|
nats-server does not exit after lame duck duration is over
|
- [X] Defect
- [ ] Feature Request or Change Proposal
After putting the server in lame duck mode and after lame_duck_duration is over, the server should exit. (Without clients it should exit right away)
#### Versions of `nats-server` and affected client libraries used:
trunk, nats-2.1.4
commit: c0b97b7a224a3b54488d042c917e7f5a5d38181f
#### OS/Container environment:
mac os: 10.15.2
#### Steps or code to reproduce the issue:
start nats-server
execute: nats-server --signal ldm
#### Expected result:
server exited
#### Actual result:
server does not exit
$ nats-server -DV
[60157] 2020/02/10 17:19:18.916627 [INF] Starting nats-server version 2.1.4
[60157] 2020/02/10 17:19:18.916729 [DBG] Go build version go1.13.6
[60157] 2020/02/10 17:19:18.916732 [INF] Git commit [not set]
[60157] 2020/02/10 17:19:18.916956 [INF] Listening for client connections on 0.0.0.0:4222
[60157] 2020/02/10 17:19:18.916960 [INF] Server id is NBHZ6POHLZFQNR4MSR4SMWLVCFKR3U2IJ3R5WAU527RQC4TBG5Q5A2UU
[60157] 2020/02/10 17:19:18.916962 [INF] Server is ready
[60157] 2020/02/10 17:19:18.916968 [DBG] Get non local IPs for "0.0.0.0"
[60157] 2020/02/10 17:19:18.917146 [DBG] ip=192.168.1.111
[60157] 2020/02/10 17:19:18.917150 [DBG] ip=2604:2000:1201:87a8:cd3:9436:a312:dde9
[60157] 2020/02/10 17:19:18.917152 [DBG] ip=2604:2000:1201:87a8:c844:2c76:85d3:9a3b
[60157] 2020/02/10 17:19:18.917155 [DBG] ip=2604:2000:1201:87a8::353
^Z
[1] + 60157 suspended nats-server -DV
$ bg
[1] + 60157 continued nats-server -DV
$ nats-server --signal ldm
[60157] 2020/02/10 17:19:45.044331 [DBG] Trapped "user defined signal 2" signal
[60157] 2020/02/10 17:19:45.044386 [INF] Entering lame duck mode, stop accepting new clients
[60157] 2020/02/10 17:19:45.044457 [INF] Initiating Shutdown...
[60157] 2020/02/10 17:19:45.044470 [INF] Server Exiting..
$ sleep 130
$ fg
[1] + 60157 running nats-server -DV
^\SIGQUIT: quit
PC=0x7fff67dedce6 m=0 sigcode=0
goroutine 0 [idle]:
runtime.pthread_cond_wait(0x19ad928, 0x19ad8e8, 0x7ffe00000000)
/usr/local/go/src/runtime/sys_darwin.go:378 +0x39
runtime.semasleep(0xffffffffffffffff, 0x7ffeefbff528)
/usr/local/go/src/runtime/os_darwin.go:63 +0x85
runtime.notesleep(0x19ad6e8)
/usr/local/go/src/runtime/lock_sema.go:173 +0xe0
runtime.stoplockedm()
/usr/local/go/src/runtime/proc.go:2068 +0x88
runtime.schedule()
/usr/local/go/src/runtime/proc.go:2469 +0x485
runtime.park_m(0xc00010c480)
/usr/local/go/src/runtime/proc.go:2610 +0x9d
runtime.mcall(0x105e7a6)
/usr/local/go/src/runtime/asm_amd64.s:318 +0x5b
goroutine 18 [syscall]:
os/signal.signal_recv(0x1655ea0)
/usr/local/go/src/runtime/sigqueue.go:144 +0x96
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
rax 0x104
rbx 0x2
rcx 0x7ffeefbff348
rdx 0x1b00
rdi 0x19ad928
rsi 0x1c0100001d00
rbp 0x7ffeefbff3e0
rsp 0x7ffeefbff348
r8 0x0
r9 0xa0
r10 0x0
r11 0x202
r12 0x19ad928
r13 0x16
r14 0x1c0100001d00
r15 0xf1bddc0
rip 0x7fff67dedce6
rflags 0x203
cs 0x7
fs 0x0
gs 0x0
$
|
https://github.com/nats-io/nats-server/issues/1275
|
https://github.com/nats-io/nats-server/pull/1276
|
59f99c1233e4272bcf320bd4e01f4d371f38e4d0
|
c22b2c097da065c63156ff705fca7c336bb9f88c
| 2020-02-10T22:39:42Z |
go
| 2020-02-11T01:13:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,271 |
["server/jwt_test.go", "server/opts.go", "server/reload.go", "server/reload_test.go", "server/server.go"]
|
Proposal: Usage of a different TLS certificate for Account Server Communication
|
- [ ] Defect
- [ x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
I would like to be able to set up a different TLS certificate for client usage than for Account Server communication. Currently I have set up a Let's Encrypt certificate so my clients can easily connect to the cluster without using any .crt files. I want my Account Server to communicate with my NATS cluster as well, preferably over LAN, but they both live inside a private network and I cannot communicate over LAN.
If I try to communicate across the Kubernetes namespaces, using a nats-0.nats.namespace.svc.cluster.local DNS name, it fails to validate the certificate because Let's Encrypt cannot issue certificates for this kind of DNS name. So my account server is forced to communicate using the public DNS name, routing traffic outside the private network.
#### Proposed Change:
I won't go into deep proposals, but I'll try something:
Maybe a different port to ingest messages from the NATS Account Server, let's say 9222.
And a different TLS block in the NATS cluster configuration file for this port, which people in my position would feed with a "Local Issuer" certificate.
Example:
```
pid_file: "/var/run/nats/nats.pid"
http: 8222
debug: true
cluster {
port: 6222
cluster_advertise: $CLUSTER_ADVERTISE
connect_retries: 30
routes [
nats://nats-0.nats.default.svc.cluster.local:6222
nats://nats-1.nats.default.svc.cluster.local:6222
nats://nats-2.nats.default.svc.cluster.local:6222
]
}
# No need for CA file as we use a public authority
tls {
cert_file: "/tls/client-tls.crt"
key_file: "/tls/client-tls.key"
}
management {
port: 9222
tls {
cert_file: "/tls/local-tls.crt"
key_file: "/tls/local-tls.key"
}
}
```
#### Who Benefits From The Change(s)?
Everyone, I believe, considering it's probably better security-wise not to route the requests outside the private network.
#### Alternative Approaches
I don't know because my knowledge of Nats internals is limited.
Hopefully this make sense :)
Thanks !
|
https://github.com/nats-io/nats-server/issues/1271
|
https://github.com/nats-io/nats-server/pull/1272
|
fb009afa8c434eb2f026d0c0d4b30aafbc618a9e
|
c0b97b7a224a3b54488d042c917e7f5a5d38181f
| 2020-01-31T16:42:15Z |
go
| 2020-02-04T00:17:14Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,229 |
["server/accounts.go", "server/jwt_test.go", "server/reload.go", "server/server.go"]
|
Reload fails when URL resolver request to account server fails temporarily
|
- [X] Defect
## Defects
In a NATS v2 decentralized auth setup using the account server and the HTTP account resolver, a lookup request can fail in some scenarios due to temporary network errors. If the lookup request fails during a config reload, the configuration won't be reloaded, and the reload has to be retried until the account server is available again.
#### Versions of `nats-server` and affected client libraries used:
NATS v2.0 series
##### OS/Container environment:
Kubernetes
#### Steps or code to reproduce the issue:
Reload server using URL resolver when the resolver is temporarily unavailable
```
[16] [ERR] Failed to reload server configuration: /etc/nats-config/nats.conf:81:4: could not fetch <"https://<account-server-url>/jwt/v1/accounts/">: Get https://<account-server-url>/jwt/v1/accounts/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
|
https://github.com/nats-io/nats-server/issues/1229
|
https://github.com/nats-io/nats-server/pull/1239
|
9fb55bf373e6d3c7f5ee19b88a107d8d3fc1762e
|
98e07a74bc0aac255541881cff7d78e8ac469560
| 2019-12-18T23:51:23Z |
go
| 2020-01-06T22:19:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,207 |
["server/jwt_test.go", "server/opts.go", "server/reload_test.go"]
|
Account Resolver TLS (Docker)
|
- [ ] Defect
- [ X ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
2.0.4
#### OS/Container environment:
Docker Image
#### Steps or code to reproduce the issue:
resolver: URL(http://<server>:<port>/jwt/v1/accounts/)
#### Expected result:
NATS should be able to connect to the account resolver using HTTPS with the Docker images.
#### Actual result:
/etc/stan/nats-secret.conf:2:1: could not fetch <"https://nats-account-service.eventhorizon.svc.cluster.local:9090/jwt/v1/accounts/">: Get https://nats-account-service.eventhorizon.svc.cluster.local:9090/jwt/v1/accounts/: x509: certificate signed by unknown authority
## Feature Requests
Allow configuration to pass in trusted cert in configuration
#### Use Case:
Using a Docker Image to deploy FT to K8S
#### Proposed Change:
Allow configuration to pass in trusted cert in configuration
#### Who Benefits From The Change(s)?
Deployment to K8S with Account Server HTTPS Traffic
#### Alternative Approaches
Create our own Docker image with a base image other than scratch that pulls in the static binaries and then places the trusted CAs in the OS.
|
https://github.com/nats-io/nats-server/issues/1207
|
https://github.com/nats-io/nats-server/pull/2483
|
cd258e73bd5042749afc5fcf0fca61fbb841bb0c
|
171e29f954672049650a15182cb46d2712b3cd63
| 2019-12-04T02:15:30Z |
go
| 2021-09-02T19:22:16Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,197 |
["logger/log.go", "logger/log_test.go", "server/log.go", "server/opts.go"]
|
Log size limit?
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `nats-server -DV` output
- [ ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
nats-server 2.1.0
#### OS/Container environment:
windows server 2012 R2
#### Steps or code to reproduce the issue:
Run the server for a long time.
#### Expected result:
The log is automatically truncated or rotated.
#### Actual result:
The log keeps increasing (it reached 88GB) and eats up all the disk space...
## Feature Requests
#### Use Case:
#### Proposed Change:
#### Who Benefits From The Change(s)?
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/1197
|
https://github.com/nats-io/nats-server/pull/1202
|
a49c8b5f6af7fa4e92535224f73a766c6c5e5a23
|
1930159087ead1fe70e0885af72e9426eebc555e
| 2019-11-18T03:19:51Z |
go
| 2019-11-22T02:11:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,176 |
["server/events.go"]
|
Data race in test
|
```
=== RUN TestSystemServiceSubscribersLeafNodesWithSystem
==================
WARNING: DATA RACE
Read at 0x00c0001cc6d8 by goroutine 236:
github.com/nats-io/nats-server/server.(*Server).debugSubscribers.func2()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1260 +0x1c1
Previous write at 0x00c0001cc6d8 by goroutine 154:
sync/atomic.AddInt32()
/home/ivan/tmp/go112linux/go/src/runtime/race_amd64.s:269 +0xb
github.com/nats-io/nats-server/server.(*Server).debugSubscribers.func1()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1231 +0x11c
github.com/nats-io/nats-server/server.(*Server).inboxReply()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1103 +0x1e6
github.com/nats-io/nats-server/server.(*Server).inboxReply-fm()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1093 +0xb1
github.com/nats-io/nats-server/server.(*Server).deliverInternalMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:979 +0x288
github.com/nats-io/nats-server/server.(*client).deliverMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2303 +0x410
github.com/nats-io/nats-server/server.(*client).processMsgResults()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2855 +0x78b
github.com/nats-io/nats-server/server.(*client).processInboundRoutedMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/route.go:310 +0x358
github.com/nats-io/nats-server/server.(*client).processInboundMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2568 +0x92
github.com/nats-io/nats-server/server.(*client).parse()
/home/ivan/go/src/github.com/nats-io/nats-server/server/parser.go:280 +0x26a4
github.com/nats-io/nats-server/server.(*client).readLoop()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:843 +0x482
github.com/nats-io/nats-server/server.(*Server).createRoute.func2()
/home/ivan/go/src/github.com/nats-io/nats-server/server/route.go:1202 +0x41
Goroutine 236 (running) created at:
github.com/nats-io/nats-server/server.(*Server).debugSubscribers()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1250 +0x9e9
github.com/nats-io/nats-server/server.(*Server).debugSubscribers-fm()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:1167 +0xb1
github.com/nats-io/nats-server/server.(*Server).deliverInternalMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/events.go:979 +0x288
github.com/nats-io/nats-server/server.(*client).deliverMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2303 +0x410
github.com/nats-io/nats-server/server.(*client).processMsgResults()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2855 +0x78b
github.com/nats-io/nats-server/server.(*client).checkForImportServices()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2749 +0x4db
github.com/nats-io/nats-server/server.(*client).processInboundClientMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2610 +0x12b3
github.com/nats-io/nats-server/server.(*client).processInboundMsg()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:2566 +0xbb
github.com/nats-io/nats-server/server.(*client).parse()
/home/ivan/go/src/github.com/nats-io/nats-server/server/parser.go:280 +0x26a4
github.com/nats-io/nats-server/server.(*client).readLoop()
/home/ivan/go/src/github.com/nats-io/nats-server/server/client.go:843 +0x482
github.com/nats-io/nats-server/server.(*Server).createClient.func2()
/home/ivan/go/src/github.com/nats-io/nats-server/server/server.go:1705 +0x41
Goroutine 154 (running) created at:
github.com/nats-io/nats-server/server.(*Server).startGoRoutine()
/home/ivan/go/src/github.com/nats-io/nats-server/server/server.go:2052 +0xb8
github.com/nats-io/nats-server/server.(*Server).createRoute()
/home/ivan/go/src/github.com/nats-io/nats-server/server/route.go:1202 +0x682
github.com/nats-io/nats-server/server.(*Server).routeAcceptLoop.func2()
/home/ivan/go/src/github.com/nats-io/nats-server/server/route.go:1553 +0x65
==================
--- FAIL: TestSystemServiceSubscribersLeafNodesWithSystem (0.90s)
```
|
https://github.com/nats-io/nats-server/issues/1176
|
https://github.com/nats-io/nats-server/pull/1178
|
636ff95627555a34855e7a05446b6f1201872d58
|
ef710923160906c7f1609d632a1ca2d777922cd4
| 2019-10-31T02:05:44Z |
go
| 2019-10-31T02:33:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,174 |
["server/client.go", "server/client_test.go", "server/monitor.go"]
|
During CONNECT, client may receive from server PING before initial PONG
|
During the connect process, many client implementations send a CONNECT protocol message followed by a PING, and expect the server to send the PONG (or -ERR) before they report to the user that the connection is created.
While the server is processing the client's CONNECT, and if Connz() is requested, it is possible that the server sends a PING (to compute the RTT) before the initial PONG (in response to the client's PING) is sent back, causing the client to report:
```
nats: expected 'PONG', got 'PING'
```
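For context, a hedged sketch of the post-CONNECT handshake pattern many clients use, which is what turns the out-of-order PING into this error (illustrative, not any specific client library):
```go
package main

import (
    "bufio"
    "errors"
    "fmt"
    "strings"
)

// expectPong reads one protocol line after CONNECT+PING has been sent and
// requires it to be PONG, as many clients do during connect.
func expectPong(r *bufio.Reader) error {
    line, err := r.ReadString('\n')
    if err != nil {
        return err
    }
    op := strings.TrimSpace(line)
    switch {
    case strings.HasPrefix(op, "PONG"):
        return nil
    case strings.HasPrefix(op, "-ERR"):
        return errors.New(op)
    default:
        // A server-initiated PING arriving first produces exactly this error.
        return fmt.Errorf("nats: expected 'PONG', got '%s'", op)
    }
}

func main() {
    r := bufio.NewReader(strings.NewReader("PING\r\n"))
    fmt.Println(expectPong(r)) // nats: expected 'PONG', got 'PING'
}
```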
|
https://github.com/nats-io/nats-server/issues/1174
|
https://github.com/nats-io/nats-server/pull/1175
|
2706a15590ca2c2a8ff6f577f88e000beff693fc
|
eb1c2ab72abe69262803e3b8c339331296bea2fe
| 2019-10-30T20:24:40Z |
go
| 2019-10-31T02:36:07Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,159 |
["server/accounts.go", "server/accounts_test.go", "server/errors.go"]
|
Runtime error when subscribing to an export
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `nats-server -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `nats-server` and affected client libraries used:
```
$ nats-server version
nats-server: v2.1.0
```
#### OS/Container environment:
MacOS
#### Steps or code to reproduce the issue:
```
# Creation of an operator
nsc add operator -n preprod
# Creation of an admin account
nsc add account --name admin
# Add an export to the account
nsc add export --account admin --name "client1" --subject "client1.>" --private
# Add user to the account
nsc add user --account admin --name admin --allow-pub ">" --allow-sub ">"
# Creation of a client account
nsc add account --name client1
CLIENT1_ACCOUNT_ID=$(nsc describe account client1 | grep 'Account ID' | cut -c 31-86)
# Add user to the account
nsc add user --account client1 --name user1 --deny-pub ">" --allow-sub "events.>"
# Securing the import
nsc generate activation -o /tmp/activation.jwt --account admin --target-account $CLIENT1_ACCOUNT_ID --subject "client1.>"
# Add an import to the account
nsc add import --account client1 --token /tmp/activation.jwt --local-subject "events.>"
# Running an account server
nats-account-server -nsc ~/.nsc/nats/preprod
```
Then I run a NATS server with the following configuration
```
$ cat ./server.conf
operator: /Users/luc/.nsc/nats/preprod/preprod.jwt
resolver: URL(http://localhost:9090/jwt/v1/accounts/)
$ nats-server -c server.conf
```
I got an error when trying to subscribe to the `events.>` subject:
```
$ nats-sub -creds ~/.nkeys/creds/preprod/client1/user1.creds "events.>"
Listening on [events.>]
Disconnected: will attempt reconnects for 10m
```
#### Expected result:
User connected to NATS and subscribed to the `events.>` subject.
#### Actual result:
```
$ nats-server -c server.conf -DV
[17544] 2019/10/16 15:31:58.066351 [INF] Starting nats-server version 2.1.0
[17544] 2019/10/16 15:31:58.066440 [DBG] Go build version go1.13.1
[17544] 2019/10/16 15:31:58.066443 [INF] Git commit [not set]
[17544] 2019/10/16 15:31:58.066446 [INF] Trusted Operators
[17544] 2019/10/16 15:31:58.066450 [INF] System : ""
[17544] 2019/10/16 15:31:58.066453 [INF] Operator: "preprod"
[17544] 2019/10/16 15:31:58.066472 [INF] Issued : 2019-10-16 11:30:57 +0200 CEST
[17544] 2019/10/16 15:31:58.066476 [INF] Expires : 1970-01-01 01:00:00 +0100 CET
[17544] 2019/10/16 15:31:58.066483 [WRN] Trusted Operators should utilize a System Account
[17544] 2019/10/16 15:31:58.066544 [INF] Listening for client connections on 0.0.0.0:4222
[17544] 2019/10/16 15:31:58.066547 [INF] Server id is NBCWFSGWL22DQKC4C75KPPMMQTOAMQJQE47BDFFECIQ3DAARRLK2UI5L
[17544] 2019/10/16 15:31:58.066549 [INF] Server is ready
[17544] 2019/10/16 15:31:58.066555 [DBG] Get non local IPs for "0.0.0.0"
[17544] 2019/10/16 15:31:58.066716 [DBG] ip=192.168.5.153
[17544] 2019/10/16 15:31:58.066772 [DBG] ip=192.168.99.1
[17544] 2019/10/16 15:31:58.066791 [DBG] ip=192.168.64.1
[17544] 2019/10/16 15:32:02.346307 [DBG] 127.0.0.1:62459 - cid:1 - Client connection created
[17544] 2019/10/16 15:32:02.347764 [TRC] 127.0.0.1:62459 - cid:1 - <<- [CONNECT {"verbose":false,"pedantic":false,"jwt":"eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJLSVUyVU83RU9BUlJOQVk2VVM3TEQySTM3SkhXUUlLUVpTUlVNTDdMVjZMNFY3UEFYNk1BIiwiaWF0IjoxNTcxMjI5MjEyLCJpc3MiOiJBRFZQQ1pTUUpCTU5LRE9HRTRUSUxFT0NaUlNHSTZDRlZMUEhXNDZYMkVSNktPT09OM01ZUUFQSiIsIm5hbWUiOiJ1c2VyMSIsInN1YiI6IlVEWVFSSklUREw2NVJYWE1OV1hUVFlKWU40RzVQSFhEUkRLUFdTRktPSlhaSlhUNjRaU0VQVTZHIiwidHlwZSI6InVzZXIiLCJuYXRzIjp7InB1YiI6eyJkZW55IjpbIlx1MDAzZSJdfSwic3ViIjp7ImFsbG93IjpbImV2ZW50cy5cdTAwM2UiXX19fQ.UgrcgQ07onJHrpgU5fKEhCJtLv4G_4oF1IjPdpBms1uMV3G-ad455G7nAnsLRnwahBy11vvsE71LZBWClQQMDQ","sig":"-AndA2UzFNoHxCPFaJtNL2e6YxrTFifQHhNoVF33D14LyN0pcc_AifYp2oY2c51LbmUCAD_mj5zE46dOW6oXBg","tls_required":false,"name":"NATS Sample Subscriber","lang":"go","version":"1.8.1","protocol":1,"echo":true}]
[17544] 2019/10/16 15:32:02.349315 [DBG] Account [ADVPCZSQJBMNKDOGE4TILEOCZRSGI6CFVLPHW46X2ER6KOOON3MYQAPJ] fetch took 1.144973ms
[17544] 2019/10/16 15:32:02.349905 [DBG] Updating account claims: ADVPCZSQJBMNKDOGE4TILEOCZRSGI6CFVLPHW46X2ER6KOOON3MYQAPJ
[17544] 2019/10/16 15:32:02.351296 [DBG] Account [ACMEIBWBQPU3P7CTIGZXP2JL6ZLPAKM72NCKFYZUIZOOL7UT2TV4XFF6] fetch took 1.372478ms
[17544] 2019/10/16 15:32:02.351504 [DBG] Updating account claims: ACMEIBWBQPU3P7CTIGZXP2JL6ZLPAKM72NCKFYZUIZOOL7UT2TV4XFF6
[17544] 2019/10/16 15:32:02.351511 [DBG] Adding stream export "client1.>" for ACMEIBWBQPU3P7CTIGZXP2JL6ZLPAKM72NCKFYZUIZOOL7UT2TV4XFF6
[17544] 2019/10/16 15:32:02.351535 [DBG] Adding stream import ACMEIBWBQPU3P7CTIGZXP2JL6ZLPAKM72NCKFYZUIZOOL7UT2TV4XFF6:"client1.>" for ADVPCZSQJBMNKDOGE4TILEOCZRSGI6CFVLPHW46X2ER6KOOON3MYQAPJ:"events.>"
[17544] 2019/10/16 15:32:02.352042 [TRC] 127.0.0.1:62459 - cid:1 - <<- [PING]
[17544] 2019/10/16 15:32:02.352047 [TRC] 127.0.0.1:62459 - cid:1 - ->> [PONG]
[17544] 2019/10/16 15:32:02.352277 [TRC] 127.0.0.1:62459 - cid:1 - <<- [SUB events.> 1]
panic: runtime error: slice bounds out of range [9:8]
goroutine 29 [running]:
github.com/nats-io/nats-server/server.(*client).addShadowSub(0xc000224000, 0xc0000c6ab0, 0xc0001c8180, 0xc000010100, 0x12, 0xc000024501, 0x12)
/Users/luc/Development/src/github.com/nats-io/nats-server/server/client.go:1951 +0x444
github.com/nats-io/nats-server/server.(*client).addShadowSubscriptions(0xc000224000, 0xc0001f0160, 0xc0000c6ab0, 0x0, 0xc00009fdc8)
/Users/luc/Development/src/github.com/nats-io/nats-server/server/client.go:1916 +0x9d6
github.com/nats-io/nats-server/server.(*client).processSub(0xc000224000, 0xc0000d2404, 0xb, 0x3fc, 0xc00027be00, 0xc000206080, 0xc000206098, 0x105fc12)
/Users/luc/Development/src/github.com/nats-io/nats-server/server/client.go:1827 +0x490
github.com/nats-io/nats-server/server.(*client).parse(0xc000224000, 0xc0000d2400, 0x17, 0x400, 0x17, 0x0)
/Users/luc/Development/src/github.com/nats-io/nats-server/server/parser.go:410 +0x1b06
github.com/nats-io/nats-server/server.(*client).readLoop(0xc000224000)
/Users/luc/Development/src/github.com/nats-io/nats-server/server/client.go:842 +0x2f6
github.com/nats-io/nats-server/server.(*Server).createClient.func2()
/Users/luc/Development/src/github.com/nats-io/nats-server/server/server.go:1698 +0x2a
created by github.com/nats-io/nats-server/server.(*Server).startGoRoutine
```
|
https://github.com/nats-io/nats-server/issues/1159
|
https://github.com/nats-io/nats-server/pull/1160
|
116ae2a9c2dafb7e151c2315c741b519ed30af0d
|
9ec4efa12ba312140e01280179beeacb73def19c
| 2019-10-16T14:25:27Z |
go
| 2019-10-16T17:48:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,144 |
["server/client.go", "test/bench_test.go", "test/client_cluster_test.go"]
|
Messages to queue subscriptions are not distributed evenly
|
- [ ] Defect
- [x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
A consumer that is part of a queue subscription should receive a (roughly) equal proportion of the messages sent to that subject.
#### Proposed Change:
Currently the code handles queue subscriptions by randomly picking a place in the list of queue subscriptions and starting there. This is fine if the nats instance is the first nats instance to receive this message (i.e. from a client). [See https://github.com/nats-io/nats-server/blob/d44b0dec51f6f2b332d02959dda83c544f69fba3/server/client.go#L2814] However, if the nats instance receives the message from another nats instance it will "prefer" a local client over sending to another nats instance. This is what causes the current problem. Take this example with two nats instances in a cluster and 12 subscribers that are part of the same queue group:
Assume two subscribers connect to nats instance A, then 6 subscribers connect to nats instance B, then 4 more subscribers connect to nats instance A. Since the list of subscribers is kept in a slice which is automatically ordered, you get this list structure on both nats instances:
0 - A
1 - A
2 - B
3 - B
4 - B
5 - B
6 - B
7 - B
8 - A
9 - A
10 - A
11 - A
If nats instance B receives a message for the queue, it will pick at random from the list; if it picks 0-1 or 8-11, it will route the message to nats instance A. Nats instance A will then redo the work done by nats instance B and pick at random from the list; however, it needs to end up at 0-1 or 8-11, and because it picks randomly it can pick 2-7. If it does, it will start iterating the list, and this will always lead to nats instance A picking subscriber 8 if it randomly starts at 2-8, effectively sending 1x traffic to subscribers 0-1 and 9-11 and 7x traffic to subscriber 8.
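A small simulation of the behavior described above (assuming a uniformly random starting index on the first instance, and the receiving instance preferring the first of its own clients at or after its own random starting index) reproduces the skew:
```go
package main

import (
    "fmt"
    "math/rand"
)

func main() {
    // Subscribers 0-1 and 8-11 are connected to instance A, 2-7 to instance B,
    // matching the ordering in the example above.
    onA := []bool{true, true, false, false, false, false, false, false, true, true, true, true}
    n := len(onA)
    counts := make([]int, n)

    const msgs = 120000
    for i := 0; i < msgs; i++ {
        // Instance B receives the message from a client and picks at random.
        pick := rand.Intn(n)
        if !onA[pick] {
            counts[pick]++ // local on B, delivered there
            continue
        }
        // Otherwise the message is routed to instance A, which re-picks a
        // random start and walks forward until it finds one of its own clients.
        start := rand.Intn(n)
        for j := 0; j < n; j++ {
            idx := (start + j) % n
            if onA[idx] {
                counts[idx]++
                break
            }
        }
    }
    for i, c := range counts {
        instance := "B"
        if onA[i] {
            instance = "A"
        }
        fmt.Printf("subscriber %2d (on %s): %d\n", i, instance, c)
    }
}
```
Under these assumptions, subscriber 8 ends up with roughly seven times the traffic of the other subscribers on instance A, matching the 7x skew described above.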
One possible solution for this is to have nats instance A honor the pick that nats instance B made (if possible) because nats instance B correctly uniformly distributed that message.
Another possible solution is to have all nats instances shuffle the list of queue subscribers before every message (or on a regular short interval or on a small number of messages).
#### Who Benefits From The Change(s)?
Anyone using a load balancer for connections and using queue subscriptions.
#### Alternative Approaches
One alternative approach is to not use a load balancer for connections and to manage balancing connections in some other way. However, all clients that wish to use the same queue subscription would need to use the same method for balancing connections, and this could get very tricky with many subscribers and many different queue subscriptions.
Another alternative approach is to have more members in the queue subscription. This only lessens the severity, because the multiplier from the above example becomes smaller; it doesn't really fix the distribution problem. It just means that when one subscriber gets 7x the messages compared to its peers, that 7x will be smaller.
|
https://github.com/nats-io/nats-server/issues/1144
|
https://github.com/nats-io/nats-server/pull/1215
|
5f110924c0938253afad24a9faa9d3310fa4e8cf
|
c87bf545e4ce9760624c1260cdcea6cad729a3c8
| 2019-09-26T17:45:36Z |
go
| 2019-12-09T21:34:50Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,118 |
["server/client.go", "server/client_test.go"]
|
Options.MaxPending is defined as int64 but used as int32
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `gnatsd -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used: v2.0.4
#### OS/Container environment: Mac OS X
#### Steps or code to reproduce the issue:
```
package main

import (
    "math"
    "time"

    "github.com/nats-io/nats-server/v2/server"
    natsc "github.com/nats-io/nats.go"
)

func main() {
    svc, _ := server.NewServer(&server.Options{
        MaxPending: math.MaxInt32 + 1,
    })
    svc.ConfigureLogger()
    go func() {
        time.Sleep(1 * time.Second)
        opts := natsc.GetDefaultOptions()
        conn, _ := opts.Connect()
        _, _ = conn.Subscribe("*", nil)
        conn.Flush()
    }()
    svc.Start()
}
```
#### Expected result:
It's less than math.MaxInt64 so it should run without any issues.
#### Actual result:
Exits with this error:
`[75524] [INF] 127.0.0.1:52013 - cid:1 - Slow Consumer Detected: MaxPending of -503316480 Exceeded`
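For illustration only (this shows the class of bug, not the server's exact arithmetic): narrowing an `int64` option into an `int32` field wraps large values to a negative number, which then trips the pending-bytes comparison immediately.
```go
package main

import (
    "fmt"
    "math"
)

func main() {
    maxPending := int64(math.MaxInt32 + 1) // a perfectly valid int64 option value
    asInt32 := int32(maxPending)           // wraps when narrowed to int32
    fmt.Println(asInt32)                   // prints a negative number, so any pending check fails
}
```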
## Feature Requests
#### Use Case:
#### Proposed Change:
Either change `Options.MaxPending` to `int32` or use it as `int64` in the codebase.
#### Who Benefits From The Change(s)?
Everyone
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/1118
|
https://github.com/nats-io/nats-server/pull/1121
|
d19b13d093bbe65efd905a8ab3f60e5ebc8de88c
|
d125f06eafc06c0ab00e6ae83c20431c867f8701
| 2019-09-10T17:21:04Z |
go
| 2019-09-11T20:48:11Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,117 |
["server/accounts.go", "server/events.go", "server/jwt_test.go", "server/server.go"]
|
mutual service import between two accounts creates an infinite loop
|
- [ X ] Defect
- [ ] Feature Request or Change Proposal
## Defects
When two accounts import each other's services, the server will go in an infinite loop trying to resolve the services.
- [ X ] Included `gnatsd -DV` output
- [ X ] Included a [Minimal, Complete, and Verifiable example] (https://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used:
2.0.4 - release
#### Steps or code to reproduce the issue:
```
➜ nsc add operator -n O
Generated operator key - private key stored "~/.nkeys/keys/O/CX/OCXB3WBUIH467O5M3RMFSETSZFCYOU2ZQFAIJIYIHMK7RTVOEVT5G35B.nk"
Success! - added operator "O"
~
➜ nsc edit operator --service-url nats://localhost:4222
Success! - edited operator
-----BEGIN NATS OPERATOR JWT-----
eyJ0eXAiOiJqd3QiLCJhbGciOiJlZDI1NTE5In0.eyJqdGkiOiJNS0ZKSEQ0WDNTTUZPUVFCNEVRSTJLNlJRUkpJQ0FIUU9TWjZUVlE0RE42SkhNNUZENEtBIiwiaWF0IjoxNTY4MTM1MzU1LCJpc3MiOiJPQ1hCM1dCVUlINDY3TzVNM1JNRlNFVFNaRkNZT1UyWlFGQUlKSVlJSE1LN1JUVk9FVlQ1RzM1QiIsIm5hbWUiOiJPIiwic3ViIjoiT0NYQjNXQlVJSDQ2N081TTNSTUZTRVRTWkZDWU9VMlpRRkFJSklZSUhNSzdSVFZPRVZUNUczNUIiLCJ0eXBlIjoib3BlcmF0b3IiLCJuYXRzIjp7Im9wZXJhdG9yX3NlcnZpY2VfdXJscyI6WyJuYXRzOi8vbG9jYWxob3N0OjQyMjIiXX19.BMfWx7flCNiZOZEpCLO1cuBevw5KMNmIgiZY3tB2GJHm1rqnEvxgG8SgmBr-YKCZSpw3p-g5G0ObocpZ_7GGDw
------END NATS OPERATOR JWT------
~
➜ nsc add account -n A
Generated account key - private key stored "~/.nkeys/keys/A/BW/ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG.nk"
Success! - added account "A"
~
➜ nsc add user -n a
Generated user key - private key stored "~/.nkeys/keys/U/BI/UBIY3WHUDEF7XLQRJUK7AAE7B2VAAML4I4UXFECCOPRDCKAPIO2WHENA.nk"
Generated user creds file "~/.nkeys/creds/O/A/a.creds"
Success! - added user "a" to "A"
~
➜ nsc add export --private --subject aaa --service
Success! - added private service export "aaa"
~
➜ nsc add account -n B
Generated account key - private key stored "~/.nkeys/keys/A/AE/AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF.nk"
Success! - added account "B"
~
➜ nsc add user -n b
Generated user key - private key stored "~/.nkeys/keys/U/A7/UA7FIUHAUGV73IR2CYFTS54RQNEJNHCMTDRWPISFVML6QEZZIQDIDAU5.nk"
Generated user creds file "~/.nkeys/creds/O/B/b.creds"
Success! - added user "b" to "B"
~
➜ nsc add export --private --subject bbb --service
Success! - added private service export "bbb"
~
➜ nsc add import -i
? select account A
? pick from locally available exports? Yes
? select the export <- bbb [!]
? name bbb
? local subject bbb
Success! - added service import "bbb"
~
➜ nsc add import -i
? select account B
? pick from locally available exports? Yes
? select the export <- aaa [!]
? name aaa
? local subject aaa
Success! - added service import "aaa"
~
➜ nsc generate config --mem-resolver --config-file /tmp/server.conf -F
Success!! - generated "/tmp/server.conf"
✘130 ➜ nats-server -c /tmp/server.conf -DV
[25770] 2019/09/10 12:13:36.274137 [INF] Starting nats-server version 2.0.4
[25770] 2019/09/10 12:13:36.274237 [DBG] Go build version go1.12.8
[25770] 2019/09/10 12:13:36.274244 [INF] Git commit [c8ca58e]
[25770] 2019/09/10 12:13:36.274248 [INF] Trusted Operators
[25770] 2019/09/10 12:13:36.274256 [INF] System : ""
[25770] 2019/09/10 12:13:36.274261 [INF] Operator: "O"
[25770] 2019/09/10 12:13:36.274284 [INF] Issued : 2019-09-10 12:09:15 -0500 CDT
[25770] 2019/09/10 12:13:36.274295 [INF] Expires : 1969-12-31 18:00:00 -0600 CST
[25770] 2019/09/10 12:13:36.274307 [WRN] Trusted Operators should utilize a System Account
[25770] 2019/09/10 12:13:36.275291 [INF] Listening for client connections on 0.0.0.0:4222
[25770] 2019/09/10 12:13:36.275301 [INF] Server id is ND26HAPTBVESECHXFCNMN2NVVMF6HBG27BI5SDRUGGS5PARBSB7D2A2M
[25770] 2019/09/10 12:13:36.275306 [INF] Server is ready
[25770] 2019/09/10 12:13:36.275312 [DBG] Get non local IPs for "0.0.0.0"
[25770] 2019/09/10 12:13:36.275568 [DBG] ip=192.168.86.52
✘130 ➜ nsc tool sub -a A aaa
server will go in an infinite loop:
[25770] 2019/09/10 12:13:54.317156 [DBG] Updating account claims: ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.317161 [DBG] Adding service export "aaa" for ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.317164 [DBG] Account [AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF] fetch took 147ns
[25770] 2019/09/10 12:13:54.317454 [DBG] Updating account claims: AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.317459 [DBG] Adding service export "bbb" for AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.317463 [DBG] Account [ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG] fetch took 173ns
[25770] 2019/09/10 12:13:54.317754 [DBG] Updating account claims: ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.317759 [DBG] Adding service export "aaa" for ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.317790 [DBG] Account [AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF] fetch took 180ns
[25770] 2019/09/10 12:13:54.318080 [DBG] Updating account claims: AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.318085 [DBG] Adding service export "bbb" for AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.318089 [DBG] Account [ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG] fetch took 164ns
[25770] 2019/09/10 12:13:54.318375 [DBG] Updating account claims: ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.318380 [DBG] Adding service export "aaa" for ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG
[25770] 2019/09/10 12:13:54.318383 [DBG] Account [AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF] fetch took 146ns
[25770] 2019/09/10 12:13:54.318670 [DBG] Updating account claims: AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.318676 [DBG] Adding service export "bbb" for AAEE3DHFULHRSP7ODW5MVFNC4VSVMN5JUNDDEUZI3G7KYRIGTAF2SHUF
[25770] 2019/09/10 12:13:54.318679 [DBG] Account [ABWYQPONJDQQY3GIPVQQLDMB2Z2MJMNMDS5GPJEIDMSTOA5MUYRIW7GG] fetch took 143ns
```
|
https://github.com/nats-io/nats-server/issues/1117
|
https://github.com/nats-io/nats-server/pull/1119
|
390afecd92e5d604187cee06c7d8aa6054fec2b9
|
d19b13d093bbe65efd905a8ab3f60e5ebc8de88c
| 2019-09-10T17:17:36Z |
go
| 2019-09-11T00:31:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,082 |
["server/client.go", "server/norace_test.go"]
|
memory increase in clustered mode
|
This is a follow on from https://github.com/nats-io/nats-server/issues/1065
While looking into the above issue I noticed memory growth; we wanted to focus on one issue at a time, so with 1065 done I looked at the memory situation. The usage patterns and so forth are identical to 1065.

Above is 12 hours. Now, as you know, I embed your broker into one of my apps and run a bunch of things in there. However, in order to isolate the problem I did a few things:
1. Same version of everything with the same usage pattern on a single unclustered broker does not show memory growth
1. Turning off all the related feature in my code where I embed nats-server when clustered I still see the growth
1. I made my code respond to SIGQUIT to write memory profiles on demand so I can interrogate a running nats server
The nats-server is `github.com/nats-io/nats-server/v2 v2.0.3-0.20190723153225-9cf534bc5e97`
From the memory dumps above, comparing dumps taken 6 hours apart I see:
8am:
```
(pprof) top10
Showing nodes accounting for 161.44MB, 90.17% of 179.04MB total
Dropped 66 nodes (cum <= 0.90MB)
Showing top 10 nodes out of 51
flat flat% sum% cum cum%
73.82MB 41.23% 41.23% 73.82MB 41.23% github.com/nats-io/nats-server/v2/server.(*client).queueOutbound
29.18MB 16.30% 57.53% 29.68MB 16.58% github.com/nats-io/nats-server/v2/server.(*Server).createClient
19.60MB 10.95% 68.48% 19.60MB 10.95% math/rand.NewSource
15.08MB 8.42% 76.90% 140.30MB 78.37% github.com/nats-io/nats-server/v2/server.(*client).readLoop
6.50MB 3.63% 80.53% 12MB 6.70% github.com/nats-io/nats-server/v2/server.(*client).processSub
5.25MB 2.93% 83.46% 11.25MB 6.28% github.com/nats-io/nats-server/v2/server.(*Sublist).Insert
4.01MB 2.24% 85.70% 65.85MB 36.78% github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg
3.50MB 1.95% 87.65% 3.50MB 1.95% github.com/nats-io/nats-server/v2/server.newLevel
2.50MB 1.40% 89.05% 2.50MB 1.40% github.com/nats-io/nats-server/v2/server.newNode
2MB 1.12% 90.17% 2MB 1.12% github.com/nats-io/nats-server/v2/server.(*client).addSubToRouteTargets
```
1pm
```
(pprof) top10
Showing nodes accounting for 185.64MB, 90.87% of 204.29MB total
Dropped 69 nodes (cum <= 1.02MB)
Showing top 10 nodes out of 46
flat flat% sum% cum cum%
86.33MB 42.26% 42.26% 86.33MB 42.26% github.com/nats-io/nats-server/v2/server.(*client).queueOutbound
30.19MB 14.78% 57.04% 30.69MB 15.02% github.com/nats-io/nats-server/v2/server.(*Server).createClient
25.75MB 12.60% 69.64% 165.05MB 80.79% github.com/nats-io/nats-server/v2/server.(*client).readLoop
19.60MB 9.59% 79.24% 19.60MB 9.59% math/rand.NewSource
6.50MB 3.18% 82.42% 12.55MB 6.14% github.com/nats-io/nats-server/v2/server.(*client).processSub
5.25MB 2.57% 84.99% 11.25MB 5.51% github.com/nats-io/nats-server/v2/server.(*Sublist).Insert
4.02MB 1.97% 86.95% 73.70MB 36.08% github.com/nats-io/nats-server/v2/server.(*client).processInboundClientMsg
3.50MB 1.71% 88.67% 3.50MB 1.71% github.com/nats-io/nats-server/v2/server.newLevel
2.50MB 1.22% 89.89% 2.50MB 1.22% github.com/nats-io/nats-server/v2/server.newNode
2MB 0.98% 90.87% 2MB 0.98% github.com/nats-io/nats-server/v2/server.(*client).addSubToRouteTargets
```
|
https://github.com/nats-io/nats-server/issues/1082
|
https://github.com/nats-io/nats-server/pull/1087
|
5d9ca4a919795035a03b42ef30dd2f9cd0bfdf9f
|
f5a6c0d476aa14ec714b912f8d073491da5c7cc5
| 2019-07-25T12:12:54Z |
go
| 2019-07-30T03:00:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,066 |
["server/sublist.go", "server/sublist_test.go"]
|
Subscriptions are not propagated correctly upon new leaf node joining
|
- [x] Defect
#### Versions of `gnatsd` and affected client libraries used:
nats-server 2.0.0 official released
### Consider the following setup

Publishing foo.bar will fail (the message is not received on L1) with the following order of operations:
Assuming operator, account, user have configured correctly
1. Setup `main cluster`
2. `L1` join to main cluster
3. Subscribe `foo.bar` from `L1`
4. Publish `foo.bar`
Result: the subscriber will receive nothing
I think this related to #982
|
https://github.com/nats-io/nats-server/issues/1066
|
https://github.com/nats-io/nats-server/pull/3440
|
434bc34713c0a05e6c86b7ef379e6513c250b8e4
|
35d7e9f2800763bf73911403f7c31031b1c9f9ad
| 2019-07-10T14:07:55Z |
go
| 2022-09-06T15:16:45Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,065 |
["server/accounts.go", "server/client.go", "server/route.go", "server/server.go", "test/new_routes_test.go"]
|
subscription count in subsz is wrong
|
Since updating one of my brokers to 2.0.0 I noticed a slow increase in subscription counts. I also did a bunch of other updates, like moving to the newly renamed libraries, so in order to find the cause I ruled those out and eventually concluded the server is just counting things wrongly.

Ignoring the annoying popup, you can see a steady increase in subscriptions.
Data below is from the below dependency embedded in another go process:
```
github.com/nats-io/nats-server/v2 v2.0.1-0.20190701212751-a171864ae7df
```
```
$ curl -s http://localhost:6165/varz|jq .subscriptions
29256
```
I then tried to verify this number, and assuming I have no bugs in the script below, I think the varz counter is off by a lot. Comparing snapshots of connz over time I see no growth reflected there, neither in connection counts nor in subscriptions:
```
$ curl "http://localhost:6165/connz?limit=200000&subs=1"|./countsubs.rb
Connections: 3659
Subscriptions: 25477
```
I also captured connz output over time 15:17, 15:56 and 10:07 the next day:
```
$ cat connz-1562685506.json|./countsubs.rb
Connections: 3657
Subscriptions: 25463
$ cat connz-1562687791.json|./countsubs.rb
Connections: 3658
Subscriptions: 25463
$ cat connz-1562687791.json|./countsubs.rb
Connections: 3658
Subscriptions: 25463
```
Using the script here:
```ruby
require "json"
subs = JSON.parse(STDIN.read)
puts "Connections: %d" % subs["connections"].length
count = 0
subs["connections"].each do |conn|
count += subs.length if subs = conn["subscriptions_list"]
end
puts "Subscriptions: %d" % count
```
|
https://github.com/nats-io/nats-server/issues/1065
|
https://github.com/nats-io/nats-server/pull/1079
|
d749954b7f42a88c515cbc1add61d0e290cbfe7f
|
9cf534bc5e97e38945646a26bebe98919897b5ea
| 2019-07-10T10:31:59Z |
go
| 2019-07-23T15:32:25Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,063 |
["server/leafnode.go", "server/leafnode_test.go", "server/monitor.go", "server/monitor_test.go", "server/opts.go", "server/opts_test.go", "server/server_test.go", "test/leafnode_test.go"]
|
support an array of url for a single leafnode remote
|
- [x] Feature Request or Change Proposal
#### Use Case:
Today leafnodes take a single seed url and then rely on discovery to find the rest of the cluster for failover. This dynamic nature is problematic for more enterprise-oriented setups, or simply for people who prefer that configuration == reality. Further, if the seed node is down for maintenance or due to failure, it's trouble and would then require config changes etc.
Gateways accept both a url and urls, the intention is that you'd give it a number of remote hosts in the same single leafnode remote and have it try to connect to any of those.
#### Proposed Change:
Support this:
```
leafnodes {
  remotes = [
    { urls: [nats-leaf://r2:4000, nats-leaf://r1:4000],
      credentials: /Users/synadia/.nkeys/O/accounts/A/users/leaf.creds
    },
  ]
}
```
|
https://github.com/nats-io/nats-server/issues/1063
|
https://github.com/nats-io/nats-server/pull/1069
|
32cfde9720d82d6395f3d6da9b78ea02595d87f3
|
79e69640bcffbe7dba6f0f1327ef68a267bba29c
| 2019-07-09T14:16:27Z |
go
| 2019-07-10T20:40:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,062 |
["server/gateway.go", "server/route.go", "server/server.go", "server/server_test.go", "test/cluster_tls_test.go"]
|
Support InsecureSkipVerify again
|
- [x] Feature Request or Change Proposal
#### Use Case:
Internal use, testing, enterprise systems running in black-box appliances we can't change, and countless other situations make having the option to disable TLS verification worth keeping.
Discussions with @derekcollison centered around "Ruby got burned, let's not reproduce those mistakes", but I think the situation is not the same. Ruby got burned because many widely used libraries DEFAULTED to essentially `InsecureSkipVerify=true`, thus making the whole internet more insecure. What I am asking is not to default to that, but also not to remove support for the feature. This does not make the world worse, and it gives people who make a specific decision to disable verification the ability to do so. Ruby internally also had issues where it would not do hostname verification even when OpenSSL::SSL::VERIFY_PEER was configured. Again, that is not the issue here: you are by default at an appropriate security level, but people should have the choice to opt out of that, especially those who vendor the server.
#### Proposed Change:
https://github.com/nats-io/nats-server/blob/94071d32a98d86d26431ee9405e6b0f19b1994c5/server/server.go#L312-L316
https://github.com/nats-io/nats-server/blob/94071d32a98d86d26431ee9405e6b0f19b1994c5/server/gateway.go#L234-L236
https://github.com/nats-io/nats-server/blob/94071d32a98d86d26431ee9405e6b0f19b1994c5/server/gateway.go#L244-L246
Remove that restriction.
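For context, an explicit opt-in could look roughly like the sketch below. This is illustrative only; the option name is an assumption rather than a statement of what the server supports:
```
cluster {
  tls {
    cert_file: "./certs/server-cert.pem"
    key_file:  "./certs/server-key.pem"
    insecure:  true   # deliberate, explicit decision to skip peer verification
  }
}
```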
#### Who Benefits From The Change(s)?
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/1062
|
https://github.com/nats-io/nats-server/pull/1071
|
b3f6997bc0eff63f2e48a0a1451171e4b72904c5
|
8458a093de00a5b3cc0493e4b465677ff4658dff
| 2019-07-09T13:25:04Z |
go
| 2019-07-12T20:29:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,061 |
["server/monitor.go", "server/monitor_test.go", "server/server.go"]
|
Statistics for gateways and leafnodes
|
- [x] Feature Request or Change Proposal
#### Use Case:
Monitoring of leaf node traffic and health
#### Proposed Change:
Expose leaf and gateway stats in the stats port.
Specifically: how many are connected (like the current routes gauge), bytes across the wire, and connection counts (a counter that increases with each connection, useful for monitoring frequent reconnects), etc. I am sure lots more would make sense.
#### Who Benefits From The Change(s)?
leaf and gateway users
#### Alternative Approaches
something to listen to the system publishes, but would prefer not to add a dependency
|
https://github.com/nats-io/nats-server/issues/1061
|
https://github.com/nats-io/nats-server/pull/1108
|
6c4a88f34e2ef93d686a7ffa244bced6c5751ae4
|
8a0120d1b88179a306140cc8b5cb0b31a61c9893
| 2019-07-09T11:57:09Z |
go
| 2019-08-26T18:46:44Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,039 |
["server/config_check_test.go", "server/const.go", "server/leafnode_test.go", "server/opts.go", "server/opts_test.go"]
|
Leafnode TLS set at 500ms
|
When a server tries to make a leafnode connection across a long RTT (such as on a plane using GoGo to NGS) the TLS handshake will time out. It's not easy to update the TLS timeout for a solicited leafnode connection.
@wallyqs could you take a look at making this easy to configure in the conf file?
|
https://github.com/nats-io/nats-server/issues/1039
|
https://github.com/nats-io/nats-server/pull/1042
|
436d955fc4a6867371a2c6880cd63df569ca536b
|
1bd5cedee766d96e13447a46d0965c9f33bcbf57
| 2019-06-12T02:50:41Z |
go
| 2019-06-14T20:06:36Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 1,006 |
["server/const.go", "server/gateway.go", "server/parser.go", "test/gateway_test.go"]
|
panic when using wrong protocol against gateway port
|
- [x] Defect
To reproduce:
```
$ echo '
"gateway": {
"name": "do-lon1-kubecon-eu",
"port": 4223,
}
' > conf/gateways.conf
# Run the server, then try to telnet and send:
# echo 'sub > 90' | nc 127.0.0.1 4223
#
$ nats-server -c conf/gateways.conf
[82250] 2019/05/22 09:15:29.094808 [INF] Starting nats-server version 2.0.0-RC12
[82250] 2019/05/22 09:15:29.094987 [INF] Git commit [not set]
[82250] 2019/05/22 09:15:29.095596 [INF] Gateway name is do-lon1-kubecon-eu
[82250] 2019/05/22 09:15:29.095781 [INF] Listening for gateways connections on 0.0.0.0:4223
[82250] 2019/05/22 09:15:29.096960 [INF] Address for gateway "do-lon1-kubecon-eu" is 172.16.48.18:4223
[82250] 2019/05/22 09:15:29.097596 [INF] Listening for client connections on 0.0.0.0:4222
[82250] 2019/05/22 09:15:29.097611 [INF] Server id is NADSGJ2DYU32GW3EFSIWA37KNYWFU4AFH6BBRGL3M4XHWY67FGYK2WUZ
[82250] 2019/05/22 09:15:29.097618 [INF] Server is ready
[82250] 2019/05/22 09:15:31.732195 [INF] 127.0.0.1:57115 - gid:1 - Processing inbound gateway connection
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1070ca6]
goroutine 22 [running]:
sync.(*Map).Load(0x0, 0x149aee0, 0xc000055d60, 0x1fc, 0x19264be, 0x1)
/usr/local/Cellar/go/1.12/libexec/src/sync/map.go:103 +0x26
github.com/nats-io/nats-server/server.(*client).processGatewayRSub(0xc00022a000, 0xc000238204, 0x4, 0x1fc, 0x0, 0x0)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/nats-server/server/gateway.go:1716 +0x342
github.com/nats-io/nats-server/server.(*client).parse(0xc00022a000, 0xc000238200, 0xa, 0x200, 0xa, 0x0)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/nats-server/server/parser.go:405 +0x1acd
github.com/nats-io/nats-server/server.(*client).readLoop(0xc00022a000)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/nats-server/server/client.go:752 +0x1d5
github.com/nats-io/nats-server/server.(*Server).createGateway.func2()
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/nats-server/server/gateway.go:737 +0x2a
created by github.com/nats-io/nats-server/server.(*Server).startGoRoutine
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/nats-server/server/server.go:1911 +0x91
```
|
https://github.com/nats-io/nats-server/issues/1006
|
https://github.com/nats-io/nats-server/pull/1008
|
698d9e642c5d68c4978d1d2f2d188bf7882252e1
|
68e3fb634480875b0d2b46609ae508839cbede2d
| 2019-05-22T07:23:30Z |
go
| 2019-05-22T18:40:23Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 988 |
["server/accounts.go", "server/leafnode.go", "server/server.go"]
|
Slow response when concurrent clients connect
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
It takes time (~ 3 seconds) for nats-server to establish connections when a lot of new connections come concurrently.
#### Versions of `gnatsd` and affected client libraries used:
2.0-RC11
#### OS/Container environment:
Ubuntu 16.04 (client: cnats 1.7.6)
#### Steps or code to reproduce the issue:
1. Build a nats-server cluster
2. Create a large amount of connections to the cluster (say 1000+) simultaneously (< 500ms)
#### Expected result:
The connection should be established within 1 second.
#### Actual result:
```
09:02:41.452738 IP 10.228.255.152.38288 > 10.228.193.29.4222: Flags [S], seq 3354597297, win 65535, options [mss 1420,nop,nop,sackOK,nop,wscale 11], length 0
09:02:41.453731 IP 10.228.193.29.4222 > 10.228.255.152.38288: Flags [S.], seq 1515096616, ack 3354597298, win 65535, options [mss 1420,nop,nop,sackOK,nop,wscale 11], length 0
09:02:41.453767 IP 10.228.255.152.38288 > 10.228.193.29.4222: Flags [.], ack 1, win 8192, length 0
# Timeout maybe?
09:02:43.451637 IP 10.228.255.152.38288 > 10.228.193.29.4222: Flags [F.], seq 1, ack 1, win 8192, length 0
09:02:43.455534 IP 10.228.193.29.4222 > 10.228.255.152.38288: Flags [.], ack 2, win 8192, length 0
# NATS responses INFO
09:02:44.100083 IP 10.228.193.29.4222 > 10.228.255.152.38288: Flags [P.], seq 1:4081, ack 2, win 8192, length 4080
09:02:44.100120 IP 10.228.255.152.38288 > 10.228.193.29.4222: Flags [R], seq 3354597299, win 0, length 0
```
|
https://github.com/nats-io/nats-server/issues/988
|
https://github.com/nats-io/nats-server/pull/990
|
19bb4863fbf9029f826d6e0b72b18e29b42c457d
|
034a9fd1e4bf79a25d137eaf936d3fe6e810c768
| 2019-05-09T09:26:40Z |
go
| 2019-05-10T01:12:29Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 977 |
["server/accounts.go", "server/events.go", "server/gateway.go", "server/server.go", "test/leafnode_test.go", "test/service_latency_test.go"]
|
LeafNode: need to remember to switch GW to interest-mode only
|
Creating this issue for tracking purposes.
Right now, when a leaf node connects, through system events, the super-cluster is asked to switch to interest-mode only (for gateways).
Need to cover the case where a server joins a cluster after the fact. It needs to know to perform the switch without the presence of that event.
|
https://github.com/nats-io/nats-server/issues/977
|
https://github.com/nats-io/nats-server/pull/1327
|
43d67424aaced10a6a5b814decb82af322291884
|
743c6ec7741f486c6deb6324d034cc078f9c0366
| 2019-05-01T20:17:43Z |
go
| 2020-04-08T17:57:49Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 969 |
["server/configs/test.conf", "server/const.go", "server/gateway.go", "server/leafnode.go", "server/opts.go", "server/opts_test.go", "server/route.go", "server/server_test.go", "server/util.go"]
|
Prevent that route errors fill up the logfile
|
- [ ] Defect
- [X ] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
In [https://github.com/nats-io/gnatsd/pull/611](https://github.com/nats-io/gnatsd/pull/611) the log level for route errors has changed from debug to error.
While I understand the reasoning behind this, it can pollute the logfile(s) significantly.
Example scenario:
Set up a cluster with 5 servers A, B, C, D and E and point the routes to each other (A has routes to B, C, D and E, B has routes to A, C, D and E, etc)
Now for some reason server A goes down (unexpected, maintenance, whatever).
This is not an issue, because the other 4 servers are still fully operational.
BUT the logfiles get filled with error messages:
> [11611] 2019/04/24 17:28:58.703831 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:00.703400 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:02.703647 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:04.703841 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:06.703472 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:08.703734 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:10.703986 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:12.703407 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:14.704059 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:16.704340 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:18.703541 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:20.703891 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:22.704361 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:24.703535 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:26.703876 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:28.704237 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:30.703526 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:32.703931 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:34.704280 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:36.704605 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:38.705062 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:40.703498 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:42.703865 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:44.704232 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:46.703502 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:48.704083 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:50.704420 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:52.703521 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:29:54.703846 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:56.704259 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:29:58.703730 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:00.704083 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:02.704461 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:04.703539 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:06.703901 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:08.704351 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:10.703530 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:12.703941 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:14.704365 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:16.704713 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:18.705014 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:20.703742 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:22.704378 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:24.704795 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:26.703463 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:28.703807 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:30.704256 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:32.704696 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:34.705111 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:36.703552 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:38.703925 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:40.704253 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:42.703487 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:44.703952 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:46.704419 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:48.703518 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:50.703890 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:52.704414 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:54.703560 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:30:56.704000 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:30:58.704400 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:00.704814 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:02.705217 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:04.703553 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:31:06.703952 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:08.704347 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:10.703568 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:31:12.703960 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:14.704315 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:16.703568 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:31:18.703909 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:20.704292 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:22.704658 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:24.705073 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:26.703550 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
[11611] 2019/04/24 17:31:28.703885 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:30.704262 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: i/o timeout
[11611] 2019/04/24 17:31:32.703518 [ERR] Error trying to connect to route: dial tcp 10.10.10.7:7244: connect: no route to host
As you can see, the error is logged every 2 seconds, polluting the logfiles significantly.
Note: a similar scenario also arises when, for example, you have a pool of several servers, let's say 6, of which at least 3 should be operational at any given time (the others *can* be down for maintenance, updates, ...).
Because you never know beforehand which servers are operational, all servers should be defined as possible routes in the cluster config.
#### Proposed Change:
I think this can be solved in several ways:
- introduce a new loglevel for route errors that sits between debug and error level so one can set the desired loglevel in accordance to whether route errors should be logged or not
- make the loglevel for route errors configurable
- Lower the loglevel for route errors again to debug (probably not acceptable considering [https://github.com/nats-io/gnatsd/issues/610](https://github.com/nats-io/gnatsd/issues/610) )
#### Who Benefits From The Change(s)?
Anyone using NATS clusters
#### Alternative Approaches
see multiple options provided under proposed changes.
|
https://github.com/nats-io/nats-server/issues/969
|
https://github.com/nats-io/nats-server/pull/1001
|
36fd47df5d0206098bc9867cd8732509ef7700a9
|
0144e27afdf0d4751d1f14eabdf2f0b3fcdaf821
| 2019-04-24T17:59:44Z |
go
| 2019-05-21T14:35:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 960 |
["server/gateway.go", "server/route.go", "server/server.go"]
|
Misleading references to gateway.go in stack
|
When getting a stack, the routines for readLoop and writeLoop reference gateway.go, even when no gateway is running.
For instance, starting `gnatsd` and having one client connected, then getting the stack shows something like this:
```
goroutine 24 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00013d210, 0x0)
/usr/local/go/src/runtime/sema.go:510 +0xeb
sync.(*Cond).Wait(0xc00013d200)
/usr/local/go/src/sync/cond.go:56 +0x92
github.com/nats-io/gnatsd/server.(*client).writeLoop(0xc0001ac000)
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:642 +0x12d
github.com/nats-io/gnatsd/server.(*client).writeLoop-fm()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/gateway.go:703 +0x2a
created by github.com/nats-io/gnatsd/server.(*Server).startGoRoutine
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:1872 +0x94
```
There should not be reference to `gateway.go` in that context.
|
https://github.com/nats-io/nats-server/issues/960
|
https://github.com/nats-io/nats-server/pull/961
|
b1d0ec10c6bff24e2ad840600910ec6a8610cf6b
|
f17c5142dd3d77add2f580a99f8ca7149fd53526
| 2019-04-18T01:32:15Z |
go
| 2019-04-18T15:59:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 955 |
["server/client.go", "server/routes_test.go"]
|
Panic when receiving message on cluster with large number of peers
|
Found the following panic while trying to reproduce https://github.com/nats-io/gnatsd/issues/954
#### Versions of `gnatsd` and affected client libraries used:
- master (https://github.com/nats-io/gnatsd/commit/a4af582dfb2a9a8a6dc7b0f11da91f25da763410)
#### Steps or code to reproduce the issue:
Create a very large cluster (28 nodes) with one server acting as the seed, then have at least one client connected to each of the servers and send a single message (`parallel` installed on Mac via `brew install parallel`).
```
$ gnatsd -m 8222 -p 4222 --cluster nats://0.0.0.0:6222 --routes nats://127.0.0.1:6222
$ seq 23 50 | parallel -j 27 -u 'gnatsd -p 42{} -m 82{} --cluster nats://0.0.0.0:62{} --routes nats://127.0.0.1:6222'
$ seq 23 50 | parallel -j 27 -u 'nats-sub -s nats://127.0.0.1:42{}' foo
Listening on [foo]
...
# Sending this message causes the panic
$ nats-pub -s nats://127.0.0.1:4222 foo 'Hello world'
```
#### Result:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13bce50]
goroutine 78 [running]:
github.com/nats-io/gnatsd/server.(*client).addSubToRouteTargets(0xc0009c8600, 0xc0036fe360)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/client.go:2346 +0x90
github.com/nats-io/gnatsd/server.(*client).processMsgResults(0xc0009c8600, 0xc000144000, 0xc00010eb40, 0xc001e2900c, 0xd, 0x1f4, 0xc001e29004, 0x3, 0x1fc, 0x0, ...)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/client.go:2398 +0x15b
github.com/nats-io/gnatsd/server.(*client).processInboundClientMsg(0xc0009c8600, 0xc001e2900c, 0xd, 0x1f4)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/client.go:2283 +0x345
github.com/nats-io/gnatsd/server.(*client).processInboundMsg(0xc0009c8600, 0xc001e2900c, 0xd, 0x1f4)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/client.go:2192 +0x95
github.com/nats-io/gnatsd/server.(*client).parse(0xc0009c8600, 0xc001e29000, 0x1f, 0x200, 0x1f, 0x0)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/parser.go:271 +0x14e1
github.com/nats-io/gnatsd/server.(*client).readLoop(0xc0009c8600)
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/client.go:740 +0x1d5
created by github.com/nats-io/gnatsd/server.(*Server).startGoRoutine
/Users/wallyqs/repos/nats-dev/src/github.com/nats-io/gnatsd/server/server.go:1872 +0x91
```
|
https://github.com/nats-io/nats-server/issues/955
|
https://github.com/nats-io/nats-server/pull/957
|
0c8bf0ee8b51f8fa691abcb2b1bb89dba3d34668
|
8f35c6451ff77e26a86fd9d501f1fae5c6d6b42b
| 2019-04-17T17:19:43Z |
go
| 2019-04-17T19:40:46Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 947 |
["README.md"]
|
Typo in README
|
See https://github.com/nats-io/gnatsd/blame/master/README.md#L308. I'm a bit busy getting NATS up and running, so I'm submitting small issues here. Sorry for not sending PRs for these things.
|
https://github.com/nats-io/nats-server/issues/947
|
https://github.com/nats-io/nats-server/pull/948
|
1777e2d1e2bf8c9cb424bf2f99dcdb6d5513a635
|
e2ee8127e604a2016c6d9117f59b05b68ede2859
| 2019-04-15T08:47:32Z |
go
| 2019-04-15T17:30:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 941 |
["server/sublist.go", "server/sublist_test.go"]
|
Performance issues with locks and sublist cache
|
- [ ] Defect
- [x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
We are using gnatsd 1.4.1 (compiled go 1.11.5).
During benchmarking, we observed non-trivial latency (500 ms+, usually seconds) at the gnatsd cluster.
As there are no slow consumers (with the default 2 second threshold), yet the OS receive buffer got full and the TCP window went to 0, it seems that the gnatsd server is somehow slow in its read loop. We are trying to slow down the sender for one connection, but we believe that gnatsd can also be improved. If you need more proof of the slowness of the read loop, we can provide some tcpdump snippets and tracing logs of gnatsd.
We also observe that parser errors occasionally happen when gnatsd is under a high read load.
The client is using cnats. However, we are not sure which party (cnats, the OS, or gnatsd) was not doing the right thing. Once we find out, we may open another issue to address that problem.
```
[8354] 2019/04/01 12:17:11.695815 [ERR] 10.228.255.129:44588 - cid:1253 - Client parser ERROR, state=0, i=302: proto='"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"...'
```
By the way, since gnatsd can detect slow consumers, is it possible for gnatsd to know when it itself becomes a slow consumer (a slow reader)?
The only idea I have come up with is to adjust the OS buffers and let the upstream know about the pressure.
If you have any suggestions, please let me know.
#### Proposed Change:
1. Improve locks.
https://github.com/nats-io/gnatsd/compare/branch_1_4_0...azrle:enhance/processMsg_lock
Comparison of read loops between high load and low load:

Sync blocking graph:

2. Ability to adjust sublist cache size or disable it.
https://github.com/nats-io/gnatsd/compare/branch_1_4_0...azrle:feature/opts-sublist_cache_size
Due to our application's characteristics, it does sub/unsub very frequently and most subjects are used only once. The cache hit rate is under 0.5%.
However, maintaining the sublist cache has a cost for gnatsd. Besides the locks for the cache, `reduceCacheCount` is noticeable: compared to other functions whose goroutine counts stay below 50, the number of goroutines for `server.(*Sublist).reduceCacheCount` can climb to nearly 18,000.
#### Who Benefits From The Change(s)?
Clients that send messages heavily to gnatsd, with subscriptions changing frequently.
Under our test cases (with enough servers), the 99.9th percentile latency drops from 1500ms to 500ms (it's still slow, though).
I noticed that gnatsd v2 is coming. And the implementation changes a lot. But I am afraid that we may not have time to wait for it to get production-ready.
I sincerely hope the performance can be improved for v1.4.
Thank you in advance!
|
https://github.com/nats-io/nats-server/issues/941
|
https://github.com/nats-io/nats-server/pull/967
|
4ff42224c15d2dd8d61e79e8f7e1599b8fdae1ee
|
2a7b2a957819c23b3e2c8d5c69851f095f7dc533
| 2019-04-12T05:22:05Z |
go
| 2019-04-23T01:42:52Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 934 |
["util/mkpasswd.go"]
|
mkpasswd utility does not work on Windows 10
|
- [X] Defect
- [ ] Feature Request or Change Proposal
#### OS/Container environment:
Windows 10
go version go1.11.5 windows/386
#### Steps or code to reproduce the issue:
go run mkpasswd.go -p
OR
mkpasswd.exe
#### Expected result:
Enter in password and obtain bcrypt hash
#### Actual result:
Does not allow entering a password and produces a random bcrypt hash
|
https://github.com/nats-io/nats-server/issues/934
|
https://github.com/nats-io/nats-server/pull/935
|
031267dfd6fe6389bc1e4610afcb363aa6ca2afe
|
58b21e392c8e8d33c40c662abba7396cf5645d42
| 2019-04-09T20:00:39Z |
go
| 2019-04-09T23:24:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 886 |
["conf/lex.go", "conf/lex_test.go"]
|
CPU hits 100% if there is a config parse error
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [x] Included `gnatsd -DV` output
- [x] Included a [Minimal, Complete, and Verifiable example] (http://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used:
#### OS/Container environment:
#### Steps or code to reproduce the issue:
Edit the config: `${PWD}/nats/gnatsd.conf` with a missing ` }`
```
# Client port of 4222 on all interfaces
port: 4222
debug: true
trace: true
log_file: "/logs/gnatsd.log"
# HTTP monitoring port
monitor_port: 8222
# This is for clustering multiple servers together.
cluster {
# Route connections to be received on any interface on port 6222
port: 6222
# Routes are protected, so need to use them with --routes flag
# e.g. --routes=nats-route://ruser:T0pS3cr3t@otherdockerhost:6222
authorization {
user: ruser
password: T0pS3cr3t
timeout: 2
}
# Routes are actively solicited and connected to from this server.
# This Docker image has none by default, but you can pass a
# flag to the gnatsd docker image to create one to an existing server.
routes = []
```
Run nats via docker
```
docker run --rm -p 4222:4222 -p 6222:6222 -p 8222:8222 -v "${PWD}/nats/gnatsd.conf:/gnatsd.conf" -v "${PWD}/nats/logs:/logs" -ti --name nats nats:1.4.0-linux --debug --trace
```
To fix add the missing `}` in the `gnatsd.conf` and re-run docker
#### Expected result:
`e6099dccacbe nats 0.05% 2.375MiB / 5.818GiB 0.04% 1.46kB / 0B 0B / 0B 10
`
#### Actual result:
`b5c510a8bdad nats 100.10% 6.129MiB / 5.818GiB 0.10% 1.32kB / 0B 12.3kB / 0B 7`
Doesn't output logs or debug info, so it's hard to know what is wrong
## Feature Requests
#### Use Case:
#### Proposed Change:
Config errors should be printed to stdout and the process should fail
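As a minimal sketch of the desired behavior (assuming the parser returns an error instead of spinning), an embedding program would then be able to do something like:
```go
package main

import (
    "log"

    "github.com/nats-io/gnatsd/server"
)

func main() {
    // A malformed config (e.g. the missing '}') should surface here as an error
    // rather than pegging the CPU in the lexer.
    if _, err := server.ProcessConfigFile("/gnatsd.conf"); err != nil {
        log.Fatalf("nats-server: configuration error: %v", err) // print the error and exit non-zero
    }
}
```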
#### Who Benefits From The Change(s)?
#### Alternative Approaches
|
https://github.com/nats-io/nats-server/issues/886
|
https://github.com/nats-io/nats-server/pull/887
|
ed94bd9f27d1b3a3203c2a1483c2812dcc63e24a
|
75a489a31b534fc88f8796c9d3140cf0ba1bafa1
| 2019-01-25T13:57:35Z |
go
| 2019-01-29T01:30:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 885 |
["conf/lex.go", "conf/lex_test.go", "conf/parse.go", "conf/parse_test.go"]
|
Values that begin with a number cannot be resolved from the environment
|
- [ X] Defect
## Defects
Variables in a configuration should work with any value when resolving against the environment. Looks like some values won't if they start with numbers.
For an example take:
```
authorization: {
token: $SECRET
}
```
```
➜ export SECRET=3x
~
➜ echo $SECRET
3x
~
➜ gnatsd -c /tmp/nats.config
nats-server: Variable reference for 'SECRET' on line 4 can not be found.
~
➜ export SECRET="3x"
~
➜ echo $SECRET
3x
~
➜ gnatsd -c /tmp/nats.config
nats-server: Variable reference for 'SECRET' on line 4 can not be found.
~
✘1 ➜ export SECRET=x3
~
➜ echo $SECRET
x3
~
➜ gnatsd -c /tmp/nats.config
[55550] 2019/01/24 12:14:06.955522 [INF] Starting nats-server version 2.0.0-beta.12
[55550] 2019/01/24 12:14:06.955616 [DBG] Go build version go1.11.2
[55550] 2019/01/24 12:14:06.955619 [INF] Git commit [not set]
[55550] 2019/01/24 12:14:06.955779 [INF] Listening for client connections on 0.0.0.0:4222
[55550] 2019/01/24 12:14:06.955785 [INF] Server id is NAMHLSL247POBUE5DAIEDIM6DTONRN4SYP56UMPB5EVJJEKBC4ANKMWI
[55550] 2019/01/24 12:14:06.955788 [INF] Server is ready
[55550] 2019/01/24 12:14:06.955799 [DBG] Get non local IPs for "0.0.0.0"
[55550] 2019/01/24 12:14:06.955975 [DBG] ip=192.168.86.52
```
|
https://github.com/nats-io/nats-server/issues/885
|
https://github.com/nats-io/nats-server/pull/893
|
36b902675f3f76efd8e5c839570395a7fc39ef3b
|
5a79df11dd1f53824d212695f090d6ffc0c28f95
| 2019-01-24T18:18:40Z |
go
| 2019-02-06T19:57:05Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 884 |
["server/parser.go", "server/parser_test.go", "server/split_test.go"]
|
Server reads whole PUB message even when length of payload is bigger than size arg
|
- Defect
## Defects
#### Versions of `gnatsd` and affected client libraries used:
all
#### Steps or code to reproduce the issue:
add more tests to https://github.com/nats-io/gnatsd/blob/master/server/parser_test.go#L212
```go
// Clear snapshots
c.argBuf, c.msgBuf, c.state = nil, nil, OP_START
// note this is the case when data has more bytes than expected by size
pub = []byte("PUB foo.bar 11\r\nhello world hello world\r")
err = c.parse(pub)
t.Logf("%s", err) // nil, no error means that it reads all the data from connection until \n
t.Logf("%d", len(c.msgBuf)) // 24
// Clear snapshots
c.argBuf, c.msgBuf, c.state = nil, nil, OP_START
// note this is the case when data has more bytes than expected by size,
pub = []byte("PUB foo.bar 11\r\nhello world hello world\r\n")
err = c.parse(pub)
// ensure that it reads all the data from connection until \n\r
t.Logf("%d", len(c.msgBuf)) // 25
t.Logf("%s", err) // Client parser ERROR, state=30, i=39: proto='""...'
```
#### Expected result:
The parser should detect an oversized payload before reaching the end of the payload.
#### Actual result:
The server reads the whole PUB message payload even when the payload is longer than the size argument, so it's easy to fill the server's memory with long PUB messages that contain no `\n`.
Here is simple example with increasing memory
```go
func TestMaxPayload2(t *testing.T) {
    opts := LoadConfig("./configs/override.conf")
    opts.MaxPayload = server.MAX_PAYLOAD_SIZE
    srv := RunServer(opts)
    _ = srv.StartMonitoring()
    endpoint := fmt.Sprintf("%s:%d", opts.Host, opts.Port)
    defer srv.Shutdown()
    size := opts.MaxPayload * 10
    big := sizedBytes(size)
    conn1, err := net.DialTimeout("tcp", endpoint, time.Millisecond*500)
    if err != nil {
        t.Fatalf("Could not connect to server: %v", err)
    }
    defer conn1.Close()
    t.Logf("MaxPayload: %d, Big: %d", opts.MaxPayload, len(big))
    pub := fmt.Sprintf("PUB bar %d\r\n", opts.MaxPayload)
    _, err = conn1.Write([]byte(pub))
    if err != nil {
        t.Fatalf("Could not publish event to the server: %s", err)
    }
    for {
        time.Sleep(time.Second)
        _, err = conn1.Write(big)
        if err != nil {
            break
        }
        varz, _ := srv.Varz(nil)
        log.Println(fmt.Sprintf("%d", varz.Mem))
    }
}
```
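For reference, the expected check can be sketched as a small standalone function (names and structure are assumptions, not the server's parser): once `size` payload bytes are buffered, the next two bytes must be CR LF, so there is no need to keep scanning for a newline.
```go
package main

import (
    "errors"
    "fmt"
)

// checkPayload fails as soon as the buffered payload runs past the declared size.
func checkPayload(buf []byte, declared int) error {
    if len(buf) < declared+2 {
        return nil // need more data before we can decide
    }
    if buf[declared] != '\r' || buf[declared+1] != '\n' {
        return errors.New("payload longer than declared size")
    }
    return nil
}

func main() {
    fmt.Println(checkPayload([]byte("hello world hello world\r"), 11)) // error: extra bytes after the 11-byte payload
    fmt.Println(checkPayload([]byte("hello world\r\n"), 11))           // <nil>
}
```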
|
https://github.com/nats-io/nats-server/issues/884
|
https://github.com/nats-io/nats-server/pull/889
|
75a489a31b534fc88f8796c9d3140cf0ba1bafa1
|
39fdcd9974e5707315cb193a92b6072f9e0db8ab
| 2019-01-23T12:59:54Z |
go
| 2019-01-31T04:03:06Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 883 |
["go.mod", "go.sum"]
|
Not handled error in test maxpayload_test.go
|
The test needs a correction to handle the error returned by `Write`:
```go
pub := fmt.Sprintf("PUB bar %d\r\n", size)
conn.Write([]byte(pub))
if err != nil {
t.Fatalf("Could not publish event to the server: %s", err)
}
```
https://github.com/nats-io/gnatsd/blob/master/test/maxpayload_test.go#L56-L60
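In the context of that test, the corrected form would capture the error returned by `Write` before checking it, for example:
```go
pub := fmt.Sprintf("PUB bar %d\r\n", size)
if _, err := conn.Write([]byte(pub)); err != nil {
    t.Fatalf("Could not publish event to the server: %s", err)
}
```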
|
https://github.com/nats-io/nats-server/issues/883
|
https://github.com/nats-io/nats-server/pull/4805
|
b306fe7ef889c5ea31a19da32b87fddd28774f8d
|
e8772d592cab4d503fe9744ca73e5af2ffd155d0
| 2019-01-23T07:36:15Z |
go
| 2023-11-20T16:43:20Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 870 |
["server/client.go", "server/route.go"]
|
Data race when running test
|
Seen during Travis run for test `TestServerRestartReSliceIssue`
```
==================
WARNING: DATA RACE
Read at 0x00c4202c1500 by goroutine 263:
github.com/nats-io/gnatsd/server.(*client).sendRouteSubOrUnSubProtos()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:965 +0x122a
github.com/nats-io/gnatsd/server.(*client).sendRouteSubProtos()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:928 +0x72
github.com/nats-io/gnatsd/server.(*Server).sendSubsToRoute()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:907 +0x853
github.com/nats-io/gnatsd/server.(*client).processRouteInfo()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:443 +0x748
github.com/nats-io/gnatsd/server.(*client).processInfo()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/client.go:955 +0x140
github.com/nats-io/gnatsd/server.(*client).parse()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/parser.go:710 +0x4c7e
github.com/nats-io/gnatsd/server.(*client).readLoop()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/client.go:688 +0x624
github.com/nats-io/gnatsd/server.(*client).(github.com/nats-io/gnatsd/server.readLoop)-fm()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/gateway.go:676 +0x41
Previous write at 0x00c4202c1500 by goroutine 170:
github.com/nats-io/gnatsd/server.(*client).registerWithAccount()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/client.go:440 +0x132
github.com/nats-io/gnatsd/server.(*Server).createClient()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:1374 +0x36d
github.com/nats-io/gnatsd/server.(*Server).AcceptLoop.func2()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:1119 +0x58
Goroutine 263 (running) created at:
github.com/nats-io/gnatsd/server.(*Server).startGoRoutine()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:1788 +0xba
github.com/nats-io/gnatsd/server.(*Server).createRoute()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:1148 +0x66e
github.com/nats-io/gnatsd/server.(*Server).routeAcceptLoop.func2()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/route.go:1449 +0x61
Goroutine 170 (finished) created at:
github.com/nats-io/gnatsd/server.(*Server).startGoRoutine()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:1788 +0xba
github.com/nats-io/gnatsd/server.(*Server).AcceptLoop()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:1118 +0xa6f
github.com/nats-io/gnatsd/server.(*Server).Start()
/home/travis/gopath/src/github.com/nats-io/gnatsd/server/server.go:921 +0xafa
==================
```
|
https://github.com/nats-io/nats-server/issues/870
|
https://github.com/nats-io/nats-server/pull/873
|
93f7deb6d77532d04d00e6d289dc022b302c264b
|
b53b71c683f9cd02080c69871962a62dda1b6a83
| 2019-01-10T00:05:34Z |
go
| 2019-01-10T16:09:48Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 862 |
["server/client.go", "server/client_test.go"]
|
Nkey user not logged during permissions violation
|
- [X] Defect
- [ ] Feature Request or Change Proposal
## Defects
When an nkey attempts to subscribe to an unauthorized subject, the user field is empty. We should log the nkey.
```[ERR] 127.0.0.1:62317 - cid:1 - Subscription Violation - User "", Subject "demo", SID 1```
|
https://github.com/nats-io/nats-server/issues/862
|
https://github.com/nats-io/nats-server/pull/894
|
a4741c52c5d4dd0ab75f04052ef0c46fb439e75c
|
ae80c4e98ba9a9d671cc8f7c39ec48cb4a24cb22
| 2018-12-19T14:32:46Z |
go
| 2019-02-02T16:54:55Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 859 |
[".github/workflows/stale-issues.yaml"]
|
Is it OK to send SUB with same topic twice?
|
Hi!
Is it OK to send a SUB message with the same topic twice from one connection? Will NATS ignore the second subscription request (expected)? Or should I manually avoid such collisions in the client?
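To make the question concrete, here is a small Go-client sketch (subject name made up). With the Go client, each `Subscribe` call sends its own SUB with a fresh SID, so the two subscriptions below are independent and both handlers receive a published message; my question is really about sending a raw SUB with the same subject (and possibly the same SID) directly.
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// Two subscriptions on the same subject from one connection: each gets its own SID.
	nc.Subscribe("updates", func(m *nats.Msg) { fmt.Println("handler 1:", string(m.Data)) })
	nc.Subscribe("updates", func(m *nats.Msg) { fmt.Println("handler 2:", string(m.Data)) })

	nc.Publish("updates", []byte("hello")) // delivered to both handlers
	nc.Flush()
	time.Sleep(100 * time.Millisecond) // give the async handlers time to run
}
```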
|
https://github.com/nats-io/nats-server/issues/859
|
https://github.com/nats-io/nats-server/pull/4869
|
3c48d0ea8118c1a5020240417dfb70acb6cfb2b8
|
1998a9ee281f3a53542509e2592cac719e71f61c
| 2018-12-14T14:56:24Z |
go
| 2023-12-11T16:09:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 841 |
["server/client.go", "server/events.go", "server/events_test.go", "server/server.go"]
|
Account connections limits
|
- [X] Defect
System account not properly tracking connection limits.
|
https://github.com/nats-io/nats-server/issues/841
|
https://github.com/nats-io/nats-server/pull/842
|
6162f14dcc5880394115736f4363f8882850cf01
|
c3a658e1f1a9338d08151c7323d2fd0a5b94454e
| 2018-12-06T14:39:09Z |
go
| 2018-12-06T17:09:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 835 |
["server/client.go", "test/tls_test.go"]
|
Misleading Slow Consumer error during TLS Handshake
|
When a TLS handshake fails, it is possible to see a Slow Consumer error which is misleading.
Here is an excerpt:
```
[ERR] ::1:62514 - rid:1 - TLS route handshake error: read tcp [::1]:6222->[::1]:62514: i/o timeout
[INF] ::1:62514 - rid:1 - Slow Consumer Detected: WriteDeadline of 2s Exceeded
[INF] ::1:62514 - rid:1 - Router connection closed
[ERR] ::1:62515 - rid:2 - TLS route handshake error: read tcp [::1]:6222->[::1]:62515: i/o timeout
```
The real reason is a TLS handshake timeout, not a Slow Consumer error.
|
https://github.com/nats-io/nats-server/issues/835
|
https://github.com/nats-io/nats-server/pull/836
|
05b6403a1333482831fc9e0be5898689118d550e
|
afc3a45a37e717fd8c0d60e999674a5e94081b77
| 2018-12-05T03:22:37Z |
go
| 2018-12-05T04:22:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 832 |
["server/auth.go", "server/client.go", "server/client_test.go"]
|
expose client connection info to the CustomClientAuthentication system
|
- [ ] Defect
- [x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
I would like to build a `CustomClientAuthentication` that sets permissions for clients based on their remote IP address; for this we need access to the IP in use on the connection.
The closest thing today is `client.String()` but this is not reachable, and it's kind of meh since it would need parsing and no doubt this isn't part of any kind of reasonable expectation of a public API. So a supported public API that exposes this information would be good.
#### Proposed Change:
I am not sure what the best way to go about this is: we could either expose this via `ClientAuthentication.GetOpts()` or add a new method to `ClientAuthentication` that returns the connection's `*net.TCPAddr`.
Exposing it in `GetOpts` is probably not the purest way to go, but it has the benefit of not changing the interface for anyone who has already built a `CustomClientAuthentication`, so it's something that could land in the current version series.
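To make the second option concrete, here is a hypothetical sketch of what an IP-scoped authorizer could look like if such a method existed. The interface and method names below are invented for illustration and are not the actual gnatsd API; in practice this would be wired in via `Options.CustomClientAuthentication`.
```go
package main

import (
	"fmt"
	"net"
)

// Hypothetical shapes for illustration only (not the real gnatsd interfaces).
type ClientAuthentication interface {
	RemoteAddress() *net.TCPAddr // the proposed addition
}

type ipScopedAuth struct {
	allowed *net.IPNet
}

// Check grants access only to clients connecting from the allowed network.
func (a *ipScopedAuth) Check(c ClientAuthentication) bool {
	addr := c.RemoteAddress()
	if addr == nil {
		return false
	}
	return a.allowed.Contains(addr.IP)
}

// fakeClient is a stand-in used only to exercise the sketch.
type fakeClient struct{ addr *net.TCPAddr }

func (f *fakeClient) RemoteAddress() *net.TCPAddr { return f.addr }

func main() {
	_, cidr, _ := net.ParseCIDR("10.0.0.0/8")
	auth := &ipScopedAuth{allowed: cidr}
	c := &fakeClient{addr: &net.TCPAddr{IP: net.ParseIP("10.1.2.3"), Port: 50000}}
	fmt.Println(auth.Check(c)) // true: client is inside 10.0.0.0/8
}
```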
#### Who Benefits From The Change(s)?
People who want to write IP based custom authorizers
#### Alternative Approaches
Cannot think of another approach, given that the client is not exported from the package and the interfaces do not expose this anywhere to the `CustomClientAuthentication`.
|
https://github.com/nats-io/nats-server/issues/832
|
https://github.com/nats-io/nats-server/pull/837
|
0bb8562930b89d1f17274f9a65153aead326645f
|
519d365ab9c463af9be46ee5b4e66c3581767b42
| 2018-12-04T08:41:12Z |
go
| 2018-12-06T16:42:41Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 801 |
["conf/lex.go", "conf/parse_test.go"]
|
`include` within quotes is treated as unknown field
|
- [x] Defect
- [ ] Feature Request or Change Proposal
## Defects
Currently when `include` is within quotes, the config parser does not recognize it and treats it as an unknown field:
```sh
...
"include": "foo/bar.conf"
...
# => /tmp/nats.conf:11:4: unknown field "include"
```
Note that this is in master; previous versions of the server ignore `include` if it is within quotes (since it is treated as an unknown field).
|
https://github.com/nats-io/nats-server/issues/801
|
https://github.com/nats-io/nats-server/pull/891
|
09a4b050177b200742656657d275a9bc2cdcd27e
|
46a2661114e3158650f312cfeea8176420d43cfa
| 2018-11-12T23:11:54Z |
go
| 2019-02-06T19:57:40Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 793 |
["server/accounts_test.go", "server/client.go", "server/sublist.go"]
|
Imported subjects not available on wildcard subscriptions
|
- [X] Defect
## Defects
When a stream is imported into another account and mapped to a prefixed subject etc, a wildcard subscription capturing that subject does not deliver the imported message.
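For reference, a configuration along these lines sets up the described scenario (account names, users, and subjects are made up). With it, a client in B subscribed to the wildcard `from_a.>` would be expected to receive messages published on A's `foo.*`, but does not; a literal subscription on the mapped subject works.
```
accounts {
  A {
    users [{user: a, password: a}]
    exports = [{stream: "foo.*"}]
  }
  B {
    users [{user: b, password: b}]
    imports = [{stream: {account: A, subject: "foo.*"}, prefix: "from_a"}]
  }
}
```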
|
https://github.com/nats-io/nats-server/issues/793
|
https://github.com/nats-io/nats-server/pull/796
|
96e2e0051013b207d37e57f944ef51be4bfdf394
|
2a7f23f61d49955dd7dfaccef83531d786590cf8
| 2018-11-05T23:34:24Z |
go
| 2018-11-08T17:26:21Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 792 |
[".github/workflows/stale-issues.yaml"]
|
Permission scoping for deny clauses
|
- [X] Defect
- [ ] Feature Request or Change Proposal
## Defects
For a deny clause of a permission, the server should ensure that no larger-scope, non-literal match can override it.
|
https://github.com/nats-io/nats-server/issues/792
|
https://github.com/nats-io/nats-server/pull/4869
|
3c48d0ea8118c1a5020240417dfb70acb6cfb2b8
|
1998a9ee281f3a53542509e2592cac719e71f61c
| 2018-11-04T20:24:36Z |
go
| 2023-12-11T16:09:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 789 |
[".github/workflows/stale-issues.yaml"]
|
Defect: logtime is reset on config reload
|
The 'logtime' configuration is set back to true when the config is reloaded.
#### Versions of `gnatsd` and affected client libraries used:
1.3.0
#### OS/Container environment:
docker images : nats:1.3.0
#### Steps or code to reproduce the issue:
nats-config.yml
```
logtime: false
```
1. Use the above config file
2. Send SIGHUP to nats
3. See that the log time is appended
#### Expected result:
To see no logtime after reload
#### Actual result:
```
Nov 01 11:31:53 f31997e82161 nats/1.3.0: [1] [INF] Starting nats-server version 1.3.0
Nov 01 11:31:53 f31997e82161 nats/1.3.0: [1] [INF] Git commit [eed4fbc]
Nov 01 11:31:53 f31997e82161 nats/1.3.0: [1] [INF] Starting http monitor on 0.0.0.0:8222
Nov 01 11:31:53 f31997e82161 nats/1.3.0: [1] [INF] Listening for client connections on 0.0.0.0:4222
Nov 01 11:31:53 f31997e82161 nats/1.3.0: [1] [INF] Server is ready
Nov 01 11:32:11 f31997e82161 nats/1.3.0: [1] [INF] Reloaded: logtime = true
Nov 01 11:32:11 f31997e82161 nats/1.3.0: [1] 2018/11/01 10:32:11.001413 [INF] Reloaded server configuration
```
|
https://github.com/nats-io/nats-server/issues/789
|
https://github.com/nats-io/nats-server/pull/4869
|
3c48d0ea8118c1a5020240417dfb70acb6cfb2b8
|
1998a9ee281f3a53542509e2592cac719e71f61c
| 2018-11-01T10:38:28Z |
go
| 2023-12-11T16:09:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 782 |
["conf/lex.go", "conf/lex_test.go"]
|
gnatsd can't launch with dangling quote in config file
|
- [X] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `gnatsd -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (http://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used:
nats-server version 1.3.1
#### OS/Container environment:
Mac OS
#### Steps or code to reproduce the issue:
Create a config file with the following:
```hcl
listen: "localhost:4242
http: localhost:8222
```
Note the dangling quote
#### Expected result:
An error from gnatsd, or possibly launching anyway.
#### Actual result:
hangs - no output with -DV
|
https://github.com/nats-io/nats-server/issues/782
|
https://github.com/nats-io/nats-server/pull/785
|
d33be3aaa3d4f7dc1bee2b582927c99a6bfd8d96
|
037acf131022cfdd5786abcdedab2ed55e452b66
| 2018-10-22T17:32:38Z |
go
| 2018-10-24T22:42:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 772 |
[".github/workflows/stale-issues.yaml"]
|
Dev: shadow sub not removed when original sub is
|
Suppose there are 2 accounts: A, which exports foo, and B, which imports foo.
If a client from B creates a subscription on foo, the server will create a shadow subscription in account A to allow messages published on A's foo to be routed to B's foo.
However, when B's foo subscription is removed, that shadow subscription is not.
|
https://github.com/nats-io/nats-server/issues/772
|
https://github.com/nats-io/nats-server/pull/4869
|
3c48d0ea8118c1a5020240417dfb70acb6cfb2b8
|
1998a9ee281f3a53542509e2592cac719e71f61c
| 2018-10-06T17:47:47Z |
go
| 2023-12-11T16:09:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 764 |
[".github/workflows/stale-issues.yaml"]
|
Account users may not be properly loaded if there are users without account
|
Suppose a simple configuration with user without account and user part of an account:
```
authorization {
users [
{user: ivan, password: bar}
]
}
accounts {
synadia {
users [{user: derek, password: foo}]
exports = [
{stream: "foo.*"}
]
  }
}
```
Start the server and try to connect with both users; it should work. Restart the server many times until you see that the user `derek` fails to connect.
|
https://github.com/nats-io/nats-server/issues/764
|
https://github.com/nats-io/nats-server/pull/4869
|
3c48d0ea8118c1a5020240417dfb70acb6cfb2b8
|
1998a9ee281f3a53542509e2592cac719e71f61c
| 2018-10-03T19:03:02Z |
go
| 2023-12-11T16:09:10Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 762 |
["server/client.go", "server/log_test.go"]
|
Password obfuscation in trace does not work properly
|
If the username and password are the same, or simply if the password text appears in the trace before the actual location of the password, that earlier occurrence is what gets replaced, leaving the password itself visible.
For instance, trying to connect with URL `nats://ivan:ivan@localhost:4222` would produce this trace:
```
[TRC] ::1:49797 - cid:1 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"[REDACTED]","pass":"ivan","tls_required":false,"name":"","lang":"go","version":"1.6.1","pr
```
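A minimal stand-alone illustration (not the server's actual redaction code) of why a first-occurrence replacement over the whole trace line redacts the wrong field when the user and password values collide:
```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Replacing the first occurrence of the password anywhere in the line hits the
	// user field first when both values are "ivan", leaving the password visible.
	trace := `CONNECT {"user":"ivan","pass":"ivan"}`
	fmt.Println(strings.Replace(trace, "ivan", "[REDACTED]", 1))
	// Output: CONNECT {"user":"[REDACTED]","pass":"ivan"}
}
```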
|
https://github.com/nats-io/nats-server/issues/762
|
https://github.com/nats-io/nats-server/pull/776
|
71eb6d80d29ca50bc0b44ccdf58fcc1827017c04
|
3b0b139158165e1801394a893d66fe7e80ca58dd
| 2018-10-03T18:36:57Z |
go
| 2018-10-13T01:28:18Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 719 |
["main.go", "server/reload.go", "server/reload_test.go", "server/route.go", "test/test.go"]
|
Possible cluster `Authorization Error` on config reload
|
Suppose server A and B have a cluster defined with authorization.
Servers are started and connect to each other without error.
If something in the cluster is modified, say the timeout value,
and the config is reloaded, you may see the following error:
```
[INF] Reloaded: cluster
[ERR] 127.0.0.1:4244 - rid:1 - Authorization Error
```
and the route between A and B is broken and never reconnects.
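For reference, a cluster block along these lines (credentials and ports made up) is enough to set up the scenario; changing e.g. the `timeout` value and reloading triggers the error:
```
# Server A; server B mirrors this with its own listen address and a route back to A.
cluster {
  listen: "127.0.0.1:4244"
  authorization {
    user: route_user
    password: s3cret
    timeout: 2
  }
  routes = [
    "nats-route://route_user:s3cret@127.0.0.1:4246"
  ]
}
```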
|
https://github.com/nats-io/nats-server/issues/719
|
https://github.com/nats-io/nats-server/pull/720
|
6608e9ac3be979dcb0614b772cc86a87b71acaa3
|
a2b5acfd3da826f484bae16707118bc7eaaf6be4
| 2018-08-15T23:59:56Z |
go
| 2018-08-16T05:44:59Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 702 |
[".travis.yml", "server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go", "server/server.go"]
|
ByUptime sort broke for closed connections
|
https://github.com/nats-io/nats-server/issues/702
|
https://github.com/nats-io/nats-server/pull/705
|
60bd35f552a689b6f051f38de99a22a65687d44c
|
43b991443f98e8c45b93dc2cc2dcb96805fe9f66
| 2018-06-29T20:58:13Z |
go
| 2018-06-30T01:08:09Z |
|
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 701 |
[".travis.yml", "server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go", "server/server.go"]
|
Add new sort options for closed connections
|
Add additional sort options that are applicable to closed connections, e.g. stop, reason.
|
https://github.com/nats-io/nats-server/issues/701
|
https://github.com/nats-io/nats-server/pull/705
|
60bd35f552a689b6f051f38de99a22a65687d44c
|
43b991443f98e8c45b93dc2cc2dcb96805fe9f66
| 2018-06-29T19:59:16Z |
go
| 2018-06-30T01:08:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 700 |
[".travis.yml", "server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go", "server/server.go"]
|
Sorting connections by Idle time is incorrect.
|
Currently does a string compare.
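A quick stand-alone illustration of why a string compare of idle times sorts incorrectly, versus comparing parsed durations:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Lexicographic compare of idle-time strings sorts "10s" before "9s".
	fmt.Println("9s" < "10s") // false: byte-wise, '9' > '1'

	// Comparing parsed durations gives the intended ordering.
	a, _ := time.ParseDuration("9s")
	b, _ := time.ParseDuration("10s")
	fmt.Println(a < b) // true
}
```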
|
https://github.com/nats-io/nats-server/issues/700
|
https://github.com/nats-io/nats-server/pull/705
|
60bd35f552a689b6f051f38de99a22a65687d44c
|
43b991443f98e8c45b93dc2cc2dcb96805fe9f66
| 2018-06-29T19:53:53Z |
go
| 2018-06-30T01:08:09Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 681 |
[".travis.yml", "server/configs/reload/srv_a_4.conf", "server/reload_test.go", "server/route.go", "server/server_test.go"]
|
Unable to remove route with config reload if remote server not running
|
Say you have server A with a route to server B. A connects to B. B is stopped. A's config is changed to remove the route to B. A is sent the signal to reload its config. A still continues to try to connect to B.
PS: If B were to be restarted just for a moment (until A connects to B), then after B is stopped again, A would indeed stop reconnecting to B.
|
https://github.com/nats-io/nats-server/issues/681
|
https://github.com/nats-io/nats-server/pull/682
|
5598d5c71162657cc16f244110a9a7d7874a54df
|
760e41d77814a4489ada3986b4bc1580648188ae
| 2018-06-18T18:24:27Z |
go
| 2018-06-20T05:54:13Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 662 |
["README.md"]
|
Authorization documentation ( _INBOX.* permission )
|
When using the request API, the reply-to is set to, for example, **_INBOX.OpIh338NswuirQdUdmrX77.OpIh338NswuirQdUdmrXX9**
Shouldn't the documentation mention permission for _INBOX.> or _INBOX.\*.\* instead of _INBOX.*?
```Note that _INBOX.* subscribe permissions must be granted in order to use the request APIs in Apcera supported clients. If an unauthorized client publishes or attempts to subscribe to a subject, the action fails and is logged at the server, and an error message is returned to the client.```
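For illustration, an authorization block granting the broader inbox scope that the request APIs actually need (user name, password, and subjects are made up):
```
authorization {
  users [
    {
      user: svc, password: s3cret
      permissions: {
        publish:   ["demo"]
        subscribe: ["demo", "_INBOX.>"]
      }
    }
  ]
}
```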
|
https://github.com/nats-io/nats-server/issues/662
|
https://github.com/nats-io/nats-server/pull/673
|
99b6bb30d00a225d6d0ecd5bc3a126ca5dfa4d1d
|
a45ef57aa376fe2baf539e45801f128f677a00ff
| 2018-04-02T13:34:27Z |
go
| 2018-05-22T21:47:02Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 640 |
["server/sublist.go", "server/sublist_test.go"]
|
DATA RACE when going over results of sublist match and sub removed from sublist
|
This was seen when working on PR #638 and showed up on an unrelated test. The new test added in the PR (TestRoutedQueueUnsubscribe) reports a race at almost every run when running locally.
```
IvanMBP:gnatsd ivan$ go test -race -v -run=TestRoutedQueueUnsubscribe -count 10 ./server/
=== RUN TestRoutedQueueUnsubscribe
==================
WARNING: DATA RACE
Write at 0x00c420496af8 by goroutine 31:
github.com/nats-io/gnatsd/server.removeSubFromList()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/sublist.go:440 +0x100
github.com/nats-io/gnatsd/server.(*Sublist).removeFromNode()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/sublist.go:460 +0x174
github.com/nats-io/gnatsd/server.(*Sublist).Remove()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/sublist.go:378 +0x80a
github.com/nats-io/gnatsd/server.(*client).unsubscribe()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:847 +0x27d
github.com/nats-io/gnatsd/server.(*client).deliverMsg()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:1002 +0x708
github.com/nats-io/gnatsd/server.(*client).processMsg()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:1142 +0x20b0
github.com/nats-io/gnatsd/server.(*client).parse()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/parser.go:226 +0x1fd1
github.com/nats-io/gnatsd/server.(*client).readLoop()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:284 +0xe93
github.com/nats-io/gnatsd/server.(*Server).createRoute.func2()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/route.go:407 +0x41
Previous read at 0x00c420496af8 by goroutine 43:
github.com/nats-io/gnatsd/server.(*client).processMsg()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:1198 +0x1296
github.com/nats-io/gnatsd/server.(*client).parse()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/parser.go:226 +0x1fd1
github.com/nats-io/gnatsd/server.(*client).readLoop()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/client.go:284 +0xe93
github.com/nats-io/gnatsd/server.(*Server).createClient.func2()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:805 +0x41
Goroutine 31 (running) created at:
github.com/nats-io/gnatsd/server.(*Server).startGoRoutine()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:1059 +0xba
github.com/nats-io/gnatsd/server.(*Server).createRoute()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/route.go:407 +0x62a
github.com/nats-io/gnatsd/server.(*Server).connectToRoute()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/route.go:784 +0x4cc
github.com/nats-io/gnatsd/server.(*Server).solicitRoutes.func1()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/route.go:798 +0x4f
Goroutine 43 (running) created at:
github.com/nats-io/gnatsd/server.(*Server).startGoRoutine()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:1059 +0xba
github.com/nats-io/gnatsd/server.(*Server).createClient()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:805 +0x673
github.com/nats-io/gnatsd/server.(*Server).AcceptLoop.func2()
/Users/ivan/dev/go/src/github.com/nats-io/gnatsd/server/server.go:475 +0x58
==================
```
If this test is brought to master, the same issue happens, so this is not specific to that branch.
|
https://github.com/nats-io/nats-server/issues/640
|
https://github.com/nats-io/nats-server/pull/641
|
1aef9730999f4dc2d80f5f960845e71167a76c86
|
982643ad44601a3164bfe0400920b817eab8b213
| 2018-03-10T02:07:40Z |
go
| 2018-03-15T21:48:30Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 632 |
["server/client.go", "server/route.go", "server/routes_test.go", "test/routes_test.go"]
|
Defect: messages lost with non-zero max_msgs in 2-node cluster (bi-directional routes)
|
### Versions of `gnatsd` affected:
Found in v0.9.4, reproduced in v1.0.4.
Most probably affects all versions.
### OS/Container environment:
Linux CentOS 7
### Steps or code to reproduce the issue:
1. Setup a NATS cluster of 2 (or more) nodes with bi-directional routes.
2. Connect 2 (or more) queue subscribers with max_msgs set to non-zero value to different NATS server nodes.
3. Have 2 (or more) publishers connect to different NATS server nodes.
4. Let publishers publish as many messages as needed to feed all subscribers.
E.g. with 2 nodes, 10 subscribers, and max_msgs set to 1, you need to publish 20 messages.
### Expected result:
All messages are delivered
### Actual result:
Some messages are lost.
The exact number differs from test to test; sometimes all messages are delivered.
### Test Code:
I've created the following bash script to reproduce this issue:
```
./gnatsd -p 4001 --cluster nats-route://localhost:4011/ --routes nats-route://localhost:4012/ &
./gnatsd -p 4002 --cluster nats-route://localhost:4012/ --routes nats-route://localhost:4011/ &
sleep 1
for i in `seq 100`; do
( echo "sub foo bar $i"; echo "unsub $i 1"; sleep 5; ) | nc localhost 400$(($i % 2 + 1)) -C &
done
sleep 1
for i in `seq 0 9`; do
( sleep 1; for n in `seq 0 9`; do echo 'pub foo 6'; echo "test$i$n"; done ) | nc localhost 400$(($i % 2 + 1)) -C &
done
sleep 5
kill %1 %2
```
The test script sends 100 messages from 10 publishers to 100 subscribers across 2 NATS server nodes.
|
https://github.com/nats-io/nats-server/issues/632
|
https://github.com/nats-io/nats-server/pull/638
|
8d539a4bb0239a407602366eb8fb1e0e7929758a
|
c16a1dbcc746821b551f1d80a623a47a3e9a9065
| 2018-03-05T16:32:23Z |
go
| 2018-03-16T22:01:38Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 630 |
["logger/log.go", "server/configs/reload/file_rotate.conf", "server/configs/reload/file_rotate1.conf", "server/log.go", "server/reload_test.go"]
|
Windows Log Rotation catch-22: cannot rename old file because it is locked
|
- [X] Defect
- [ ] Feature Request or Change Proposal
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [ ] Included `gnatsd -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (http://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used:
nats-server 1.0.4
#### OS/Container environment:
Windows Server 2008R2
#### Steps or code to reproduce the issue:
(1) Configure gnatsd as a Windows service, log to a file [WINDOWS ONLY]
(2) Attempt to rotate the log: ren (cmd.exe) or Move-Item (PowerShell) fails because the log file is in active use.
#### Expected result:
Rename or Move allowed; then gnatsd.exe -sl reload should restart logging to the (now empty) log file
#### Actual result:
No way to rotate the existing log file out of the way.
## Feature Requests
#### Use Case:
#### Proposed Change:
Possibility 1: Close log file through gnatsd.exe -sl command
Possibility 2: Include FILE_SHARE_DELETE flag when calling CreateFile. This might allow renaming of the file. See URL below.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx
#### Who Benefits From The Change(s)?
Anyone running gnatsd.exe on Windows logging to file
#### Alternative Approaches
Log to syslog (syslog:true in .cfg or --syslog on gnatsd.exe command line)
|
https://github.com/nats-io/nats-server/issues/630
|
https://github.com/nats-io/nats-server/pull/644
|
982643ad44601a3164bfe0400920b817eab8b213
|
4aa04a6697d835faaa78512aa82e264f65d0c293
| 2018-03-02T16:28:09Z |
go
| 2018-03-15T23:55:39Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 612 |
["server/monitor.go", "server/monitor_sort_opts.go", "server/monitor_test.go"]
|
improve access to Server stats
|
- [ ] Defect
- [x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
When embedding gnatsd into another program I want to be able to access its stats and perhaps expose them to Prometheus or similar.
#### Proposed Change:
Today those structures are all unexported, and the stats gathering is done in-line in the HTTP handlers. So it's very hard to access the stats; you more or less have to reinvent that wheel.
The methods at https://github.com/nats-io/gnatsd/blob/ee7b97e6ee3068900d39f1fe4ae7b75f358416ab/server/server.go#L868 seem to be about the only way to get to this information, and the comment there makes me reluctant to use them as that's probably not part of the supported API.
For my personal use case I just want a copy of the Varz structure populated, so extracting the code to build up that structure and making a getter for it would be amazing, but I think the monitoring in general can be greatly improved along these lines.
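To make the ask concrete, here is roughly what embedding code would like to write. This sketch assumes a later server version where a public `Varz` getter exists; the exact signature, import path, and field names may differ by release.
```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	s, err := server.NewServer(&server.Options{Port: 4222})
	if err != nil {
		panic(err)
	}
	go s.Start()
	defer s.Shutdown()
	if !s.ReadyForConnections(5 * time.Second) {
		panic("server did not become ready")
	}

	// Same data the /varz endpoint serves, ready to feed into Prometheus gauges.
	v, err := s.Varz(&server.VarzOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("connections:", v.Connections, "in_msgs:", v.InMsgs, "out_msgs:", v.OutMsgs)
}
```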
#### Who Benefits From The Change(s)?
people who embed gnatsd
#### Alternative Approaches
Today the only option is HTTP requests to /varz
|
https://github.com/nats-io/nats-server/issues/612
|
https://github.com/nats-io/nats-server/pull/615
|
4aa04a6697d835faaa78512aa82e264f65d0c293
|
8d539a4bb0239a407602366eb8fb1e0e7929758a
| 2017-12-13T21:11:00Z |
go
| 2018-03-16T21:46:01Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 610 |
["server/route.go"]
|
Log route creation errors at error level
|
- [ ] Defect
- [x] Feature Request or Change Proposal
## Feature Requests
#### Use Case:
Improve debugging route errors and general life cycle.
It's presently hard to know when routes go up and down and whether there are errors in this life cycle, since this is all logged at debug level. Routes are pretty important, so raising the visibility of these events will help people figure out when they have issues.
#### Proposed Change:
Log https://github.com/nats-io/gnatsd/blob/b002db4a1c66b883fe308838f8cea42d50aa83a7/server/route.go#L244 and https://github.com/nats-io/gnatsd/blob/b002db4a1c66b883fe308838f8cea42d50aa83a7/server/route.go#L378 at error level
And in https://github.com/nats-io/gnatsd/blob/b002db4a1c66b883fe308838f8cea42d50aa83a7/server/route.go#L317 once the route connection is established log an info message
#### Who Benefits From The Change(s)?
People who run nats clusters
|
https://github.com/nats-io/nats-server/issues/610
|
https://github.com/nats-io/nats-server/pull/611
|
b002db4a1c66b883fe308838f8cea42d50aa83a7
|
ee7b97e6ee3068900d39f1fe4ae7b75f358416ab
| 2017-12-10T14:11:06Z |
go
| 2017-12-11T16:04:07Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 600 |
["server/client.go", "server/monitor.go", "server/monitor_test.go", "server/server.go"]
|
Monitor /connz endpoint returns empty reply
|
When I try to query the /connz endpoint with an offset close to the end of the connections list, I get an empty response from the server. Unfortunately, I wasn't able to find any relevant log entries.
- [X] Defect
- [ ] Feature Request or Change Proposal
#### Versions of `gnatsd` and affected client libraries used:
gnatsd v1.0.4
#### OS/Container environment:
Debian 9.2 Stretch
#### Steps or code to reproduce the issue:
```
$ curl -i 'http://127.0.0.1:8222/connz?limit=1'
HTTP/1.1 200 OK
Content-Type: application/json
Date: Wed, 08 Nov 2017 08:46:19 GMT
Content-Length: 695
{
"now": "2017-11-08T08:46:19.287041042Z",
"num_connections": 1,
"total": 4778,
"offset": 0,
"limit": 1,
"connections": [
{
...
```
```
curl -i 'http://127.0.0.1:8222/connz?limit=10&offset=4710'
curl: (52) Empty reply from server
```
#### Expected result:
Proper response
#### Actual result:
Empty reply
|
https://github.com/nats-io/nats-server/issues/600
|
https://github.com/nats-io/nats-server/pull/604
|
7a4f7bbf03dc524061697085f46a40d543943ce7
|
cde2aa6d2f3dc7e9216566f250c6674de842e2e7
| 2017-11-08T08:59:32Z |
go
| 2017-11-17T19:31:58Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 591 |
["server/ciphersuites_1.5.go", "server/ciphersuites_1.8.go", "server/server.go"]
|
Name of some cipher suites missing on handshake completion
|
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `gnatsd -DV` output
- [X] Included a [Minimal, Complete, and Verifiable example] (http://stackoverflow.com/help/mcve)
#### Versions of `gnatsd` and affected client libraries used:
Server `1.0.2`, Go client `1.3.1`, Go version `1.8.3`
#### OS/Container environment:
Any
#### Steps or code to reproduce the issue:
Start server with TLS configuration. For instance, from repo's test directory:
```
gnatsd -c ./configs/tls.conf -D
```
From the client, connect to `tls://derek:boo@localhost:4443` (and make sure to add the `../gnatsd/test/configs/certs/ca.pem` CA certificate).
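For reference, a small sketch of that connect step with the Go client (import path as in current client releases; credentials and CA path as in the repro above):
```go
package main

import "github.com/nats-io/nats.go"

func main() {
	nc, err := nats.Connect("tls://derek:boo@localhost:4443",
		nats.RootCAs("../gnatsd/test/configs/certs/ca.pem"))
	if err != nil {
		panic(err)
	}
	defer nc.Close()
}
```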
#### Expected result:
Display the name of the cipher that has been selected
#### Actual result:
```
(...) [DBG] 127.0.0.1:55128 - cid:1 - TLS version 1.2, cipher suite Unknown [cca8]
```
|
https://github.com/nats-io/nats-server/issues/591
|
https://github.com/nats-io/nats-server/pull/592
|
26ff94c21ccd0d1a678c019fabf67ef33c93af92
|
b83c9d1ca40864614bc7cac9199b91734588aaf5
| 2017-09-25T16:29:57Z |
go
| 2017-09-25T19:01:43Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 583 |
["server/server.go"]
|
Elevate client connection errors
|
## Defect
When running at normal (non-debug) log levels, client accept errors are not logged; here's an example:
```
[16746] 2017/09/11 10:01:30.942690 [DBG] Temporary Client Accept Error(accept tcp [::]:4222: accept4: too many open files), sleeping 40ms
```
This is a serious error, one you should know about; it should not be logged only at debug level.
|
https://github.com/nats-io/nats-server/issues/583
|
https://github.com/nats-io/nats-server/pull/584
|
0c3d4ce7fa0f8d5367d7844a3b3f4539ce4c034e
|
c3e09519f9ef6cdd131479154729cac4fa18c04f
| 2017-09-11T08:05:22Z |
go
| 2017-09-11T15:04:08Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 577 |
["main.go", "server/opts.go", "server/opts_test.go", "server/server.go"]
|
Differences Between Configuration File and CLI Parameters
|
- [X] Defect
- [ ] ~~Feature Request or Change Proposal~~
## Defects
Make sure that these boxes are checked before submitting your issue -- thank you!
- [X] Included `gnatsd -DV` output
```
[53885] 2017/09/06 22:13:19.303170 [INF] Starting nats-server version 1.0.2
[53885] 2017/09/06 22:13:19.303297 [DBG] Go build version go1.7.6
[53885] 2017/09/06 22:13:19.303419 [INF] Listening for client connections on 0.0.0.0:4222
[53885] 2017/09/06 22:13:19.303503 [DBG] Server id is iO48Q6XRiHtMYpcZk6LzTj
[53885] 2017/09/06 22:13:19.303542 [INF] Server is ready
```
- [X] Included a [Minimal, Complete, and Verifiable example](http://stackoverflow.com/help/mcve)
```
$ cat minimal.conf
debug: true
cluster {
host: '0.0.0.0'
port: 6222
routes = [
nats://nats:6222
]
}
```
```
$ nslookup nats
Server: 192.168.1.1
Address: 192.168.1.1#53
** server can't find nats: NXDOMAIN
```
**NOTE:** The fact that `nats` doesn't (_initially_) resolve is on purpose.
#### Versions of `gnatsd` and affected client libraries used:
At least `v1.0.2`.
#### OS/Container environment:
OSX, Linux (x64), Docker.
#### Steps or code to reproduce the issue:
```
$ gnatsd -c minimal.conf
Error looking up host with route hostname: lookup nats: no such host
```
```
$ gnatsd --cluster nats://0.0.0.0:6222 --routes nats://nats:6222 --debug
[54226] 2017/09/06 23:20:11.465586 [INF] Starting nats-server version 1.0.2
[54226] 2017/09/06 23:20:11.465842 [DBG] Go build version go1.7.6
[54226] 2017/09/06 23:20:11.466074 [INF] Listening for client connections on 0.0.0.0:4222
[54226] 2017/09/06 23:20:11.466087 [DBG] Server id is 5bc9CwfjHPdcQcIJiAZFL3
[54226] 2017/09/06 23:20:11.466091 [INF] Server is ready
[54226] 2017/09/06 23:20:11.466397 [INF] Listening for route connections on 0.0.0.0:6222
[54226] 2017/09/06 23:20:11.466538 [DBG] Trying to connect to route on nats:6222
[54226] 2017/09/06 23:20:11.473372 [DBG] Error trying to connect to route: dial tcp: lookup nats: no such host
[54226] 2017/09/06 23:20:12.477985 [DBG] Trying to connect to route on nats:6222
[54226] 2017/09/06 23:20:12.479305 [DBG] Error trying to connect to route: dial tcp: lookup nats: no such host
[54226] 2017/09/06 23:20:13.483669 [DBG] Trying to connect to route on nats:6222
[54226] 2017/09/06 23:20:13.485205 [DBG] Error trying to connect to route: dial tcp: lookup nats: no such host
[54226] 2017/09/06 23:20:14.489127 [DBG] Trying to connect to route on nats:6222
[54226] 2017/09/06 23:20:14.490479 [DBG] Error trying to connect to route: dial tcp: lookup nats: no such host
```
#### Expected result:
I would expect both behaviours to be the same — if my understanding of configuration is correct, `minimal.conf` should produce the exact same behaviour as providing the `--cluster nats://0.0.0.0:6222 --routes nats://nats:6222 --debug` parameters to `gnatsd`. In the end, both commands would make `gnatsd` sit on a loop attempting to establish the route until `nats` resolves.
#### Actual result:
If the route is specified in the configuration file, `gnatsd` exits after checking only once that `nats` does not resolve.
|
https://github.com/nats-io/nats-server/issues/577
|
https://github.com/nats-io/nats-server/pull/578
|
c7fc87659aa638b1ebfe1698f69e8b15670f0c72
|
dbe9bc79ee148aee07e9b0b212c33f9c59166330
| 2017-09-06T22:28:38Z |
go
| 2017-09-07T15:46:41Z |
closed
|
nats-io/nats-server
|
https://github.com/nats-io/nats-server
| 564 |
["README.md"]
|
DOC: added a clarification about token usage
|
- [x] Feature Request or Change Proposal
#### Proposed Change:
Added a clarification to `README.md` about token usage.
#### Who Benefits From The Change(s)?
Anybody who uses token authentication in a client.
#### Alternative Approaches
None.
|
https://github.com/nats-io/nats-server/issues/564
|
https://github.com/nats-io/nats-server/pull/563
|
4246f83c7414ad13a33a1ca94c5db2413941f038
|
1ea25001a29429e8e0b4d03bdeb7569cc327fd95
| 2017-08-20T13:11:00Z |
go
| 2017-08-28T15:13:45Z |