| repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M) |
|---|---|---|
awslabs/amazon-kinesis-video-streams-webrtc-sdk-c | 759925946 | Title: [QUESTION] Potential SSL Read Error Again
Question:
username_0: We are also experiencing a similar problem when the bitrate gets too high in some network scenarios, actually under very good network conditions.
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/issues/410
We've also been able to replicate a similar error log:
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/issues/851
However, the fix for that issue was to default to using UDP.
There are traces of evidence pointing to potential data races:
- Is there an edge case where TCP TLS is needed?
- Could it be caused by data races on connectionListenerDataReceiveRoutine? https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/tst/suppressions/TSAN.supp
- We are using openssl, could mbedtls make a difference?
Answers:
username_0: I threaten with a friendly smile if anyone closes this issue.
username_1: @username_0 I am not sure what you are observing or what's causing the SSL error. This is a very generic error that can have many possible root causes. You will need some debug-level logs to start pinning it down.
If you have a repro scenario with stock assets, we can look at it with the highest priority. Other than that, I am not sure if any of this issue is actionable, and I would recommend closing it as we don't want to keep non-actionable issues open.
To answer some of your questions.
* Your application can use any of the TURN servers.
* We are in the process of going down the list of exceptions and removing them. Every one of the exceptions in the TSAN suppressions should fall into one of two categories: 1) issues that we know the tests are causing, as they need to access the internals of the objects, and 2) issues with dependent open-source libraries and their usage in our context - for example libWebSocket.
* mbedTLS and OpenSSL are interchangeable from the interface perspective. Implementation-wise there could be differences. Without knowing what's causing your read error, we won't be able to tell what could cause it.
username_0: - My first question was: is there a network topology scenario where TCP TLS is absolutely needed?
I will attempt to grab some verbose logs leading up to the SSL read error (tracking down the sequence of events that leads to this error is still hugely challenging at my skill level, but I don't mind giving it a shot, even if I will be struggling).
username_1: It's hard to think of a case where UDP on 443 would be disabled while TCP is allowed on the same port. Firewalls usually operate at the packet level, but perhaps some scenarios call for it. If this is the case, then you will need to filter the UDP URLs out in Common.c when submitting the URLs into the PeerConnection object: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/samples/Common.c#L363
Not sure how else to help you, but one thing that would be useful is for you to always run the stock sample applications to get the repro so we can be on the same page. We can't debug your own application.
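As a rough illustration of the filtering suggested above, here is a minimal sketch in C. The function name, the fixed-size URI array, and the `transport=udp` substring test are all illustrative assumptions, not the SDK's actual structures:
```c
#include <string.h>

/* Hypothetical helper: drop TURN URIs that request UDP transport before
 * the ICE server list is handed to the PeerConnection. */
void filterUdpTurnUris(char uris[][256], int *pCount) {
    int i, kept = 0;
    for (i = 0; i < *pCount; i++) {
        /* Keep anything that does not explicitly ask for transport=udp. */
        if (strstr(uris[i], "transport=udp") == NULL) {
            if (kept != i) {
                strcpy(uris[kept], uris[i]);
            }
            kept++;
        }
    }
    *pCount = kept;
}
```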
username_0: I think I found a pattern! DTLS has an sslLock protecting ssl read/write, whereas TLS does not!
username_1: Not entirely sure what you mean. The sending/receiving is interlocked by a higher-level lock. Do you have a theory of what's causing your ssl read failures?
username_0: When there are multiple TCP TURN candidate pairs during ICE negotiation, or in the initial phase of streaming and just before teardown, there might be a few races where multiple TCP connections are calling SSL_read at the same time. The odds of this are very small with a lower bitrate; increasing the bitrate increases the probability of two SSL_read calls racing.
username_0: In particular, a single TCP socket on the master side might be serving multiple candidates from the viewer side at some moment.
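For context on how one TCP socket can serve several candidates, here is a minimal sketch of TURN ChannelData framing per RFC 5766 (the channel numbers 16385, 16386, ... appearing in the logs below are 0x4001, 0x4002, ...). The struct and function names are illustrative, not the SDK's:
```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative types; the SDK's real demux code differs. */
typedef struct {
    uint16_t channelNumber; /* identifies one peer/candidate on the socket */
    uint16_t dataLen;
    const uint8_t *data;
} TurnChannelData;

/* Parse one ChannelData frame: 2-byte channel number (0x4000-0x7FFF),
 * 2-byte length, then payload. Returns 0 on success, -1 if the buffer
 * does not yet hold a complete, valid frame. */
int parseChannelData(const uint8_t *buf, size_t len, TurnChannelData *out) {
    if (len < 4) return -1;
    out->channelNumber = (uint16_t)((buf[0] << 8) | buf[1]);
    out->dataLen = (uint16_t)((buf[2] << 8) | buf[3]);
    if (out->channelNumber < 0x4000 || out->channelNumber > 0x7FFF) return -1;
    if (len < (size_t)4 + out->dataLen) return -1;
    out->data = buf + 4;
    return 0;
}
```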
username_0: From the preliminary logging, there are more SSL writes than SSL reads. So could SSL read and write interfere with each other?
username_0: Preliminary steps to reproduce on armv7, will try if this is reproducible on Mac.
Master side:
- remove stun servers
- ice transport policy set to relay
- choose ?transport=tcp relays, there should be only two
- configure a high bitrate
Browser side:
- stock example, forcing TURN.
username_1: All of these are interlocked properly - or at least that is the idea. Note that the TCP connection will call the same iceUtilsSendData, which then ends up in socketConnectionSendData, which interlocks at the connection level.
`MUTEX_LOCK(pSocketConnection->lock); `
So, sending is interlocked.
On the receiving side, socketConnectionReadData also interlocks on this lock.
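A minimal sketch of the per-connection interlocking described here, with simplified stand-in names (the real SDK types and functions differ):
```c
#include <pthread.h>

/* Simplified stand-in for the SDK's socket connection object. */
typedef struct {
    pthread_mutex_t lock; /* one mutex per socket connection */
    void *pSsl;           /* would be an SSL* in the real code */
} SocketConnection;

/* Send and receive paths take the SAME per-connection mutex, so reads
 * and writes on one connection cannot race each other. */
int socketConnectionSendData(SocketConnection *pConn, const void *buf, int len) {
    pthread_mutex_lock(&pConn->lock);
    /* ... SSL_write(pConn->pSsl, buf, len) ... */
    pthread_mutex_unlock(&pConn->lock);
    return 0;
}

int socketConnectionReadData(SocketConnection *pConn, void *buf, int len) {
    pthread_mutex_lock(&pConn->lock);
    /* ... SSL_read(pConn->pSsl, buf, len) ... */
    pthread_mutex_unlock(&pConn->lock);
    return 0;
}
```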
username_0: @username_1 Is there a case where multiple socket connections are using the same SSL_read at the same time?
For example, there are two socket connections
- socket1 is locked by pSocket1->lock to call SSL_read
- socket2 is locked by pSocket2->lock to call SSL_read
Since they have different locks, they are free to call SSL_read at the same time.
username_0: @username_1 On second thought, different sockets might have different SSL contexts.
username_1: Indeed, these are different sockets so the mutex is per socket - each with its own socket connection object
username_1: Any updates?
username_0: Took Friday off. Last Thursday, I was
- experimenting with a different version of SSL (1.1.1g instead of 1.1.1d)
- trying to follow the DTLS lock pattern.
So I should have something today.
username_0: Also trying to understand this article:
https://www.openssl.org/docs/man1.0.2/man3/threads.html
username_0: yikes ...
```
(Note that OpenSSL uses a number of global data structures that will be implicitly shared whenever multiple threads use OpenSSL.) Multi-threaded applications will crash at random if it is not set.
```
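For reference, this is what the quoted man page is asking for on OpenSSL 1.0.x: the application must install locking and thread-ID callbacks. A minimal sketch follows; note that OpenSSL 1.1.0 and later (including the 1.1.1d/1.1.1g builds mentioned in this thread) handle locking internally, so these calls are deprecated no-ops there:
```c
#include <openssl/crypto.h>
#include <pthread.h>

static pthread_mutex_t *locks;

/* OpenSSL 1.0.x calls this whenever it needs to take/release lock n. */
static void locking_cb(int mode, int n, const char *file, int line) {
    (void)file; (void)line;
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&locks[n]);
    else
        pthread_mutex_unlock(&locks[n]);
}

/* Gives OpenSSL a stable per-thread identifier. */
static unsigned long id_cb(void) {
    return (unsigned long)pthread_self();
}

void init_openssl_locking(void) {
    int i, n = CRYPTO_num_locks();
    locks = OPENSSL_malloc(n * sizeof(pthread_mutex_t));
    for (i = 0; i < n; i++)
        pthread_mutex_init(&locks[i], NULL);
    CRYPTO_set_id_callback(id_cb);
    CRYPTO_set_locking_callback(locking_cb);
}
```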
username_0: My thought: could one reason be that the read buffer is not big enough?
username_1: @username_0 I just added an experiment in the common library. https://github.com/awslabs/amazon-kinesis-video-streams-producer-c/pull/151
Can you see if this makes any difference for your crashes?
You need to apply the common library changes. Also, call initializeSslCallbacks at the start of your app and releaseSslCallbacks at the end. See if this makes any difference - I seriously doubt it.
username_0: @username_1 This looks amazing! Thanks!
username_0: This cannot be contained with an extra timeout or more retries, correct?
```
2020-12-15 06:50:57 INFO onConnectionStateChange(): New connection state 3
2020-12-15 06:50:57 DEBUG rtcPeerConnectionGetMetrics(): ICE local candidate Stats requested at 16080150572625975
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate IP Address: 192.168.3.11
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate type: relay
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate port: 59368
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate priority: 16777215
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate transport protocol: transport=tcp
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate relay protocol: transport=tcp
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Local Candidate Ice server source: 3-21-52-195.t-c01cb9af.kinesisvideo.us-east-2.amazonaws.com
2020-12-15 06:50:57 DEBUG rtcPeerConnectionGetMetrics(): ICE remote candidate Stats requested at 16080150572627573
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate IP Address: 172.16.17.32
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate type: relay
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate port: 60126
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate priority: 0
2020-12-15 06:50:57 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate transport protocol: transport=udp
2020-12-15 06:50:57 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000004, Next state: 0x0000000000000008
2020-12-15 06:50:57 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_CONNECTED to ICE_AGENT_STATE_NOMINATING.
2020-12-15 06:50:57 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 60107, channel number 16386
2020-12-15 06:50:57 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000008, Next state: 0x0000000000000010
2020-12-15 06:50:57 DEBUG iceAgentReadyStateSetup(): Selected pair i+mMePHUb_WF8t3dg9M, local candidate type: relay. Round trip time 0 ms
2020-12-15 06:50:57 DEBUG iceAgentReadyStateSetup(): Freeing Turn allocations that are not selected. Total turn allocation count 1
2020-12-15 06:50:57 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_NOMINATING to ICE_AGENT_STATE_READY.
2020-12-15 06:50:58 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_CREATE_PERMISSION to TURN_STATE_BIND_CHANNEL
2020-12-15 06:50:58 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_BIND_CHANNEL to TURN_STATE_READY
2020-12-15 06:50:58 DEBUG dtlsSessionChangeState(): DTLS init completed. Time taken 1301 ms
2020-12-15 06:50:58 INFO onSctpInboundPacket(): Unhandled PPID on incoming SCTP message 0
2020-12-15 06:50:58 DEBUG onDataChannel(): New DataChannel has been opened kvsDataChannel
2020-12-15 06:50:58 DEBUG onDataChannelMessage(): MESSAGE
2020-12-15 06:50:59 DEBUG sendClientInitiatedDataChannelMessage(): Sending the client the following message: 1s
2020-12-15 06:50:59 DEBUG socketSendDataWithRetry(): sendto() failed with errno Connection reset by peer
2020-12-15 06:50:59 DEBUG socketSendDataWithRetry(): Close socket 34
2020-12-15 06:50:59 DEBUG socketSendDataWithRetry(): Failed to send data. Bytes sent 0. Data len 1257. Retry count 0
2020-12-15 06:50:59 DEBUG socketSendDataWithRetry(): Warning: Send data failed with 0x5800001a
2020-12-15 06:50:59 ERROR iceUtilsSendData(): operation returned status code: 0x5800001a
2020-12-15 06:50:59 WARN tlsSessionProcessPacket(): SSL_read failed with error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2020-12-15 06:50:59 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x5800001a
2020-12-15 06:50:59 DEBUG socketConnectionSendData(): Warning: Failed to send data. Socket closed already
2020-12-15 06:50:59 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-15 06:50:59 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x58000022
```
username_0: Wait a sec, I will comment out the code that sends messages.
username_0: Eliminating the data channel:
```
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): create permission succeeded for peer 172.16.17.32
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): create permission succeeded for peer 172.16.17.32
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): create permission succeeded for peer 172.16.17.32
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): create permission succeeded for peer 172.16.17.32
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): create permission succeeded for peer 172.16.17.32
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 52038, channel number 16385
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 59140, channel number 16386
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 55749, channel number 16387
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 64669, channel number 16388
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 54823, channel number 16389
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 57108, channel number 16390
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 55480, channel number 16391
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 57980, channel number 16392
2020-12-15 07:12:32 DEBUG turnConnectionHandleStun(): Channel bind succeeded with peer 172.16.17.32, port: 57234, channel number 16393
2020-12-15 07:12:32 DEBUG handleStunPacket(): received candidate with USE_CANDIDATE flag, local candidate type relay.
2020-12-15 07:12:32 DEBUG handleStunPacket(): Ice candidate pair GalcwaAeP_4WZNm76N2 is connected. Round trip time: 179ms
2020-12-15 07:12:32 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000002, Next state: 0x0000000000000004
2020-12-15 07:12:32 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_CHECK_CONNECTION to ICE_AGENT_STATE_CONNECTED.
2020-12-15 07:12:32 INFO onConnectionStateChange(): New connection state 3
2020-12-15 07:12:32 DEBUG rtcPeerConnectionGetMetrics(): ICE local candidate Stats requested at 16080163527496688
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate IP Address: 172.16.17.32
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate type: relay
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate port: 56464
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate priority: 16777215
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate transport protocol: transport=tcp
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate relay protocol: transport=tcp
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Local Candidate Ice server source: 3-14-135-85.t-c01cb9af.kinesisvideo.us-east-2.amazonaws.com
2020-12-15 07:12:32 DEBUG rtcPeerConnectionGetMetrics(): ICE remote candidate Stats requested at 16080163527574506
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate IP Address: 172.16.17.32
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate type: relay
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate port: 52038
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate priority: 0
2020-12-15 07:12:32 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate transport protocol: transport=udp
2020-12-15 07:12:32 DEBUG handleStunPacket(): Ice candidate pair GalcwaAeP_dY/HCSoya is connected. Round trip time: 180ms
2020-12-15 07:12:32 DEBUG handleStunPacket(): Ice candidate pair GalcwaAeP_gKp92Zv2V is connected. Round trip time: 187ms
2020-12-15 07:12:32 DEBUG handleStunPacket(): Ice candidate pair GalcwaAeP_kvGHCiDTd is connected. Round trip time: 182ms
2020-12-15 07:12:32 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000004, Next state: 0x0000000000000008
2020-12-15 07:12:32 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_CONNECTED to ICE_AGENT_STATE_NOMINATING.
2020-12-15 07:12:32 DEBUG handleStunPacket(): received candidate with USE_CANDIDATE flag, local candidate type relay.
2020-12-15 07:12:32 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000008, Next state: 0x0000000000000010
2020-12-15 07:12:32 DEBUG iceAgentReadyStateSetup(): Selected pair GalcwaAeP_gKp92Zv2V, local candidate type: relay. Round trip time 0 ms
2020-12-15 07:12:32 DEBUG iceAgentReadyStateSetup(): Freeing Turn allocations that are not selected. Total turn allocation count 1
2020-12-15 07:12:32 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_NOMINATING to ICE_AGENT_STATE_READY.
2020-12-15 07:12:32 DEBUG handleStunPacket(): Ice candidate pair GalcwaAeP_PfYAKwreP is connected. Round trip time: 177ms
2020-12-15 07:12:33 DEBUG dtlsSessionChangeState(): DTLS init completed. Time taken 700 ms
2020-12-15 07:12:33 WARN tlsSessionProcessPacket(): SSL_read failed with error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2020-12-15 07:12:33 DEBUG socketConnectionClosed(): Close socket 34
2020-12-15 07:12:33 DEBUG connectionListenerReceiveDataRoutine(): recvfrom() failed with errno Connection reset by peer for socket 34
2020-12-15 07:12:33 DEBUG socketConnectionSendData(): Warning: Failed to send data. Socket closed already
2020-12-15 07:12:33 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-15 07:12:33 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x58000022
```
username_1: I have not seen this issue before
`2020-12-15 07:12:33 WARN tlsSessionProcessPacket(): SSL_read failed with error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
`
Looks like an intermediary has changed its MAC address or something while trying to establish a network connection.
I am drawing a blank at this stage, as I don't have too much insight into OpenSSL's implementation of TLS.
username_0: The record MAC is likely getting corrupted by data races? There is a correlation.
username_0: Another possibility is SSL version negotiation. Am I using an up-to-date root cert, or TLSv2 instead of v3? Or do my generated cert bits not align with the relay server?
username_1: I believe it's not related to the protocol itself, as there would be a simple negotiation error. This error in particular could happen when the peer's MAC address changes - it could be the device's MAC address or that of the immediate peer, like the wifi router gateway (not sure). It might, I guess, be caused by memory corruption. It's harder to think of a scenario where this would be corruption of bits on the network.
To simplify:
* run stock master on your hardware platform (latest commit)
* run your application on commodity platform like macOS
Look on OpenSSL forums for more info on this particular issue.
username_0: Experimenting with mbedTLS and a wider/narrower system port range.
username_1: If I might suggest, you need a methodical and systematic approach to your investigation. I have given the above suggestions multiple times, but for some reason no action has followed. As I mentioned, I have doubts that this has anything to do with the SSL implementation. If you are able to reproduce this issue on a stock sample, then we can be of more help. Otherwise, we suggest closing this issue as it is not actionable at all on our side - moreover, every indication is that this is specific to your application/platform.
username_0: That's a really valuable suggestion. I will try the stock example. By luck I've narrowed it down to port range settings, but to get a better understanding I will run the stock sample with debug logging, restricting traffic to TCP only.
username_0: With the latest commit and a few modifications to the gstreamer sample (I will create a branch):
```
2020-12-15 21:36:08 DEBUG handleStunPacket(): Ice candidate pair MXoVUkOhr_mzgV4Q8WT is connected. Round trip time: 178ms
2020-12-15 21:36:08 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000008, Next state: 0x0000000000000010
2020-12-15 21:36:08 DEBUG iceAgentReadyStateSetup(): Selected pair elCzM8l7l_4kbKlc4DY, local candidate type: relay. Round trip time 0 ms
2020-12-15 21:36:08 DEBUG iceAgentReadyStateSetup(): Freeing Turn allocations that are not selected. Total turn allocation count 2
2020-12-15 21:36:08 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_NOMINATING to ICE_AGENT_STATE_READY.
2020-12-15 21:36:08 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_CREATE_PERMISSION to TURN_STATE_CLEAN_UP
2020-12-15 21:36:08 WARN handleStunPacket(): Cannot find candidate pair with local candidate 0000:0000:0000:0000:0000:0000:0000:0000 and remote candidate 172.16.17.32. Dropping STUN binding success response
2020-12-15 21:36:08 DEBUG turnConnectionHandleStun(): TURN Allocation freed.
2020-12-15 21:36:08 DEBUG socketConnectionClosed(): Close socket 40
2020-12-15 21:36:08 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_CLEAN_UP to TURN_STATE_NEW
2020-12-15 21:36:08 INFO onConnectionStateChange(): New connection state 3
2020-12-15 21:36:08 DEBUG rtcPeerConnectionGetMetrics(): ICE local candidate Stats requested at 16080681688244864
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate IP Address: 172.16.58.3
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate type: relay
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate port: 62306
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate priority: 16777215
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate transport protocol: transport=tcp
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate relay protocol: transport=tcp
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Local Candidate Ice server source: 3-134-76-195.t-c01cb9af.kinesisvideo.us-east-2.amazonaws.com
2020-12-15 21:36:08 DEBUG rtcPeerConnectionGetMetrics(): ICE remote candidate Stats requested at 16080681688310106
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate IP Address: 172.16.17.32
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate type: relay
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate port: 55408
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate priority: 0
2020-12-15 21:36:08 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate transport protocol: transport=udp
2020-12-15 21:36:08 DEBUG dtlsSessionChangeState(): DTLS init completed. Time taken 704 ms
2020-12-15 21:36:09 INFO onSctpInboundPacket(): Unhandled PPID on incoming SCTP message 0
2020-12-15 21:36:09 INFO onDataChannel(): New DataChannel has been opened kvsDataChannel
2020-12-15 21:36:09 INFO onDataChannelMessage(): DataChannel String Message: mainstream
2020-12-15 21:36:09 WARN tlsSessionProcessPacket(): SSL_read failed with error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2020-12-15 21:36:09 DEBUG socketConnectionClosed(): Close socket 41
2020-12-15 21:36:09 DEBUG connectionListenerReceiveDataRoutine(): recvfrom() failed with errno Connection reset by peer for socket 41
2020-12-15 21:36:09 DEBUG socketConnectionSendData(): Warning: Failed to send data. Socket closed already
2020-12-15 21:36:09 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-15 21:36:09 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x58000022
2020-12-15 21:36:09 ERROR turnConnectionSendData(): operation returned status code: 0x58000022
2020-12-15 21:36:09 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-15 21:36:09 WARN iceAgentSendPacket(): iceUtilsSendData failed with 0x58000022
2020-12-15 21:36:09 WARN iceAgentSendPacket(): IceAgent connection closed unexpectedly
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_CREATE_PERMISSION to TURN_STATE_BIND_CHANNEL
2020-12-15 21:36:09 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_BIND_CHANNEL to TURN_STATE_READY
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:09 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:10 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-15 21:36:10 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
```
username_0: This might not be reproducible on a common platform.
username_0: Maybe I'll send another one without the data channel message.
username_0: My changes to the latest master
https://github.com/username_0/amazon-kinesis-video-streams-webrtc-sdk-c/commit/11f02704a473db172d3a874926425065509f0732
username_0: I will also forward an RTSP stream to a Mac to see if it's reproducible; I tried the stock samples on a Mac and did not reproduce.
username_0: I forwarded an RTSP stream to a Mac on the same network as the camera. I even cranked the bitrate up to 10 Mbps and still cannot reproduce it on the Mac. So, as you were saying, platform specific?
username_0: Random note I came across that should be on my checklist:
https://www.openssl.org/docs/man1.1.0/man3/CRYPTO_THREAD_run_once.html
```
You can find out if OpenSSL was configured with thread support:
#include <openssl/opensslconf.h>
#if defined(OPENSSL_THREADS)
// thread support enabled
#else
// no thread support
#endif
```
username_0: Found another pattern!
DTLS clears the error before read! TLS does not :)
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/src/source/Crypto/Dtls_openssl.c#L453
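For reference, a minimal sketch of the pattern being pointed at: clearing the per-thread OpenSSL error queue before a read so a stale error from an earlier call is not misattributed to this SSL_read (which, per the next comment, did not end up fixing the issue). The wrapper's name is illustrative:
```c
#include <openssl/err.h>
#include <openssl/ssl.h>

/* Illustrative read wrapper following the DTLS path's pattern. */
int sessionRead(SSL *pSsl, void *buf, int len) {
    ERR_clear_error(); /* what the DTLS code does before SSL_read */
    int n = SSL_read(pSsl, buf, len);
    if (n <= 0) {
        /* SSL_get_error now reflects THIS call, not a stale failure. */
        int err = SSL_get_error(pSsl, n);
        (void)err; /* handle SSL_ERROR_WANT_READ/WANT_WRITE/fatal cases */
    }
    return n;
}
```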
username_0: Nvm, resetting did not seem to work; no data channel opened.
```
2020-12-16 06:00:53 DEBUG dtlsSessionChangeState(): DTLS init completed. Time taken 708 ms
2020-12-16 06:00:53 DEBUG rtcPeerConnectionGetMetrics(): ICE local candidate Stats requested at 16080984533595277
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate IP Address: 172.16.17.32
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate type: relay
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate port: 56127
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate priority: 16777215
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate transport protocol: transport=tcp
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate relay protocol: transport=tcp
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Local Candidate Ice server source: 3-14-135-85.t-c01cb9af.kinesisvideo.us-east-2.amazonaws.com
2020-12-16 06:00:53 DEBUG rtcPeerConnectionGetMetrics(): ICE remote candidate Stats requested at 16080984533894579
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate IP Address: 192.168.3.11
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate type: relay
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate port: 51945
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate priority: 0
2020-12-16 06:00:53 DEBUG logSelectedIceCandidatesInformation(): Remote Candidate transport protocol: transport=udp
2020-12-16 06:00:54 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_CREATE_PERMISSION to TURN_STATE_BIND_CHANNEL
2020-12-16 06:00:54 DEBUG turnConnectionStepState(): TurnConnection state changed from TURN_STATE_BIND_CHANNEL to TURN_STATE_READY
2020-12-16 06:00:57 DEBUG socketSendDataWithRetry(): sendto() failed with errno Connection reset by peer
2020-12-16 06:00:57 DEBUG socketSendDataWithRetry(): Close socket 14
2020-12-16 06:00:57 DEBUG socketSendDataWithRetry(): Failed to send data. Bytes sent 0. Data len 121. Retry count 0
2020-12-16 06:00:57 DEBUG socketSendDataWithRetry(): Warning: Send data failed with 0x5800001a
2020-12-16 06:00:57 ERROR iceUtilsSendData(): operation returned status code: 0x5800001a
2020-12-16 06:00:57 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x5800001a
2020-12-16 06:00:57 WARN tlsSessionProcessPacket(): SSL_read failed with error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
2020-12-16 06:00:57 DEBUG socketConnectionSendData(): Warning: Failed to send data. Socket closed already
2020-12-16 06:00:57 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-16 06:00:57 WARN turnConnectionSendData(): iceUtilsSendData failed with 0x58000022
2020-12-16 06:00:57 ERROR turnConnectionSendData(): operation returned status code: 0x58000022
2020-12-16 06:00:57 ERROR iceUtilsSendData(): operation returned status code: 0x58000022
2020-12-16 06:00:57 WARN iceAgentSendPacket(): iceUtilsSendData failed with 0x58000022
2020-12-16 06:00:57 WARN iceAgentSendPacket(): IceAgent connection closed unexpectedly
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 WARN iceAgentSendPacket(): Invalid state for data sending candidate pair.
2020-12-16 06:00:57 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000010, Next state: 0x0000000000000040
2020-12-16 06:00:57 ERROR executeFailedIceAgentState(): IceAgent failed with 0x58000022
2020-12-16 06:00:57 DEBUG stepIceAgentStateMachine(): Ice agent state changed from ICE_AGENT_STATE_READY to ICE_AGENT_STATE_FAILED.
2020-12-16 06:00:57 INFO onConnectionStateChange(): New connection state 5
2020-12-16 06:00:57 DEBUG freeSampleStreamingSession(): Freeing streaming session with peer id: 4Z1PGSM1MIW
```
username_0: Should both relays have matching protocols udp/tcp?
username_1: Sorry for the delay, I am OOO.
re: openssl threads - the PR I sent has an ifdef to enable the callbacks or not.
re: udp/tcp - consider each peer separately when they are connected via TURN. The server can handle UDP on one side and TCP on the other.
Try to focus on what's different between the platforms, since you can't repro on a commodity OS.
Try to run the stock application on your platform per my earlier suggestion.
I assume the issue we are talking about is the SSL errors.
username_0: Very naive question: could these be too small?
net.core.rmem_default = 163840
net.core.rmem_max = 163840
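For what it's worth, the effective per-socket receive buffer (which those sysctls cap) can be inspected from C. A minimal sketch, assuming a valid socket descriptor:
```c
#include <stdio.h>
#include <sys/socket.h>

/* Print the kernel receive-buffer size for a socket; on Linux the
 * returned value is typically double the requested SO_RCVBUF. */
void printRcvBuf(int fd) {
    int val = 0;
    socklen_t len = sizeof(val);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &val, &len) == 0) {
        printf("SO_RCVBUF = %d bytes\n", val);
    }
}
```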
username_0: My sort-of-fuzzy logic was: this is bitrate related, so one possibility could be that the read buffer is being overwhelmed.
I will stop being annoying for the rest of today...
username_1: Oh no worries, it's just that we can't make any forward progress, and I am not even sure yet whether it's related to your platform or your application.
I can't really comment on the net settings, but those should work without modification.
Try to gather more data from the runs, run different combinations, and take notes. Try to draw parallels with the two other issues regarding the TURN connection and the MAC address issue. I am running out of ideas at this stage, so perhaps your platform folks could chime in?
username_0: With the latest master code
- Reproduced on the slightly modified stock example on armv7.
- Did not reproduce the same one on Mac.
- Will try to reproduce on Windows (I speculate it is not reproducible).
username_0: Probably related to this one https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/issues/1007
username_0: Actually, this seems to work. https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/issues/1007
Need more testing
username_1: Will take a look at it as soon as I can
username_2: MAC is Message Authentication Code in the SSL error message "bad record mac" - yup, it's confusing, since we usually interpret MAC as a NIC MAC address. SSL is complaining about data integrity. Bug #1007 caused this issue on my device: when a retry happens, garbage data gets sent into the TLS session, and of course the SSL data-integrity check then fails. This only happens on the devices, not on laptops. One thought is that the device is less powerful and cannot send TCP data fast enough before reaching the socket buffer limits, so retries happen more often, especially for I-frames in a high-bitrate video stream.
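To make the described failure mode concrete, here is an illustrative sketch (not the SDK's actual code) of a correct partial-send loop. The bug class described above is equivalent to restarting from offset 0, or resending a stale buffer, on retry, which injects duplicate bytes into the TLS record stream and trips the peer's integrity ("bad record mac") check:
```c
#include <errno.h>
#include <sys/socket.h>

/* Correct pattern: always resume from the current offset after a short
 * or interrupted send. Retrying from offset 0 would corrupt the stream. */
int sendAll(int fd, const char *buf, int len) {
    int off = 0;
    while (off < len) {
        int n = (int)send(fd, buf + off, (size_t)(len - off), 0);
        if (n < 0) {
            if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK) {
                continue; /* transient: retry from the SAME offset */
            }
            return -1; /* fatal socket error */
        }
        off += n;
    }
    return len;
}
```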
username_1: This might really be the issue. I have sent a PR for this issue: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/pull/1008
We will be merging this soon
username_0: Thanks again!
Status: Issue closed
|
jechasteen/gratuitous | 651458568 | Title: New Widget: awesome docs in a webview widget.
Question:
username_0: We'll have to do quite a few things to make this happen:
* We need to keep a local copy of the docs pages
* we should generate them ourselves
* custom skin to match the color scheme
* it should be kept up-to-date
* There needs to be a new webview widget
* If it is possible, it should tile with editors, or pin to the left or right side.
* it should have basic control buttons: Home, back, forward
* it should support search within page, maybe the built-in webview already has this? |
cu-mkp/m-k-manuscript-data | 408920489 | Title: Translation of "graver"
Question:
username_0: Graver = to engrave or etch depending on context – can it also be ‘carve’ as is currently translated eg. in Founders of small tin work (80v)?
Answers:
username_1: searched for "grav"
translated relevant words as "engrave" or "etch" depending on context.
See entry p004v_3 (fols. 4v-5r) for an instance in which both are used, and "engraving" takes the sense of "carving" into the varnish ground covering an iron object.
On 80v (founders of small tin works), I changed "carve.. on stones" to "engrave," since the latter includes the sense of incising and also has a (now obsolete though related) sense of cutting into hard material (See OED "engrave" 2a). Other instances of "carve" became "engrave" or "etch"
Status: Issue closed
|
jlippold/tweakCompatible | 651872465 | Title: `MilkyWay2` working on iOS 13.1.1
Question:
username_0: ```
{
"packageId": "jp.akusio.milkyway2",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "jp.akusio.milkyway2",
"deviceId": "iPhone8,4",
"url": "http://cydia.saurik.com/package/jp.akusio.milkyway2/",
"iOSVersion": "13.1.1",
"packageVersionIndexed": true,
"packageName": "MilkyWay2",
"category": "Tweaks",
"repository": "(null)",
"name": "MilkyWay2",
"installed": "0.2.0-alpha",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "jp.akusio.milkyway2",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "multitasking for iOS13.",
"latest": "0.2.0-alpha",
"author": "akusio",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Answers:
username_1: This issue is being closed because your review was accepted into the tweakCompatible website.
Tweak developers do not monitor or fix issues submitted via this repo.
If you have an issue with a tweak, contact the developer via another method.
Status: Issue closed
|
sudheerapte/popcornrelease | 406417379 | Title: Designer should control CSS file sequence
Question:
username_0: Instead of searching for CSS files underneath the machine directory and putting them in the <head> in whatever sequence they appear in, popcorn should allow the designer to specify a sequence for them in the <head>.
Probably requires us to define a new file called something like "head-frags.html" in which the designer can write a sequence of <link> tags, and put that entire fragment inside the <head>.
Answers:
username_0: Version 0.0.6 allows you to create a file called "head-frags.html", which can contain the exact sequence of <link> tags naming the CSS files you need. This entire file is interpolated into the <head> tag of the index.html served by popcorn. CSS files are no longer scanned from the machine directory.
Status: Issue closed
|
artemis-nerds/protocol-docs | 328365295 | Title: v1.65 Archaeology
Question:
username_0: Some of us have managed to get hold of v1.65 and v1.70 for archaeological purposes! :grin:
v1.65 Object types appear to be:
### Object Type 0x00 EOL
### Object Type 0x01 PlayerShip (79-bit field - yikes, really? suspect my reverse-engineering tool may be broken :cry: )
Missing Weapons / Eng console objects?
### Object Type 0x02 Enemy (39-bit field? "E75", "K43", etc)
### Object Type 0x03 Civilian NPCShip (35 bit field? "De14", "Tr43", etc)
### Object Type 0x04 Base (14 bit field, "DS1", "DS2", etc)
### Object Type 0x05 Mine? (3 bit field)
### Object Type 0x06 Anomaly (4 bit field, "ANOM")
### Object Type 0x07 not seen yet.
### Object Type 0x08 Nebula (6 bit field)
### Object Type 0x09 Torpedo (8 bit field)
### Object Type 0x0a BlackHole (3 bit field)
### Object Type 0x0b Asteroid (3 bit field)
### Object Type 0x0c not seen yet.
### Object Type 0x0d Crystalline Entity? (4 bit field, all named "???"?)
### Object Type 0x0e Xeno (13 bit field, all named "Xeno") |
googleprojectzero/fuzzilli | 950395942 | Title: Add Minimizer Tests
Question:
username_0: It would be great to have a way to write tests for the [Minimizer](https://github.com/googleprojectzero/fuzzilli/tree/main/Sources/Fuzzilli/Minimization) to catch issues such as the one fixed with https://github.com/googleprojectzero/fuzzilli/commit/555021d1b9f73d0201ca1629b8be482d0422cd2d earlier. |
rancher/rke2 | 1115731628 | Title: [Epic] Eliminate direct dependency on K3s as "engine" for RKE2
Question:
username_0: ### Summary:
Today, K3s and RKE2 are tightly coupled projects. K3s serves as the "engine" within RKE2 and is utilized as a sort of library.
Overall, maintenance of RKE2 has proven to be very difficult due to the use of K3s as a vendored "engine" within it. A majority of feature requests for RKE2 end up needing to be built into K3s, whether or not the features make sense in K3s, and require a (painful) vendoring process into RKE2.
In addition, user-created PRs for RKE2 are nearly impossible for a user to coordinate from start to finish, given that K3s is where the vast majority of functionality must be enhanced in most cases. A user is at the mercy of the K3s/RKE2 team to review their PRs in multiple stages, and is also at the mercy of both CI platforms.
This issue is an epic to encompass the effort of eliminating K3s as a vendored/go.mod'ed dependency of RKE2.
This is going to be performed through several steps, as outlined below:
1. Identify common requirements of engine code between K3s/RKE2.
2. Split/remove K3s/RKE2 engine code bits into vendor-able libraries that can be consumed by RKE2 and other projects which wish to utilize these components
3. Remove the direct `rancher/k3s`/`k3s-io/k3s` entry in `go.mod` |
clap-rs/clap | 1075772434 | Title: `no_version` have to be marked twice to hide version info of subcommand
Question:
username_0: <a href="https://github.com/loichyan"><img src="https://avatars.githubusercontent.com/u/73006950?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [loichyan](https://github.com/loichyan)**
_Friday Feb 12, 2021 at 10:36 GMT_
_Originally opened as https://github.com/TeXitoi/structopt/issues/465_
----
In the code below, I have to mark `no_version` at both **P1** and **P2** to hide the version info of `sub1`. For subcommands, the version info should be successfully hidden when I mark `no_version` in either place.
```rust
use structopt::clap::AppSettings::DisableVersion;

#[derive(structopt::StructOpt)]
enum Cmd {
    #[structopt(no_version)] // P1
    Sub1(Sub1),
}

#[derive(structopt::StructOpt)]
#[structopt(
    no_version, // P2
    global_settings = &[DisableVersion])]
struct Sub1 {}

#[paw::main]
fn main(cmd: Cmd) {
    match cmd {
        Cmd::Sub1(_s1) => {}
    }
}
```
BTW, this sounds a little similar to #324.
Answers:
username_0: <a href="https://github.com/TeXitoi"><img src="https://avatars.githubusercontent.com/u/5787066?v=4" align="left" width="48" height="48" hspace="10"></img></a> **Comment by [TeXitoi](https://github.com/TeXitoi)**
_Friday Mar 05, 2021 at 13:33 GMT_
----
There is no way to remove the version of a subcommand, as the different enums can't know about each other.
The real solution to this problem would be to have no default version handling, plus an argument-less `version` attribute that finds the version number automatically. It would be consistent with `authors`. But that's a breaking change, and this alone is not enough to justify a new major version.
username_0: Version is now opt-in instead of opt-out. We also only propagate populated versions. I believe this is now fixed.
Status: Issue closed
|
saltstack/salt | 227188202 | Title: salt-cloud libvirt provider: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Question:
username_0: Using salt-cloud -p to stand up a VM using libvirt I get this:
```
[DEBUG ] Clone XML '<Element 'domain' at 0x2feae90>'
libvirt: XML Util error : XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
[INFO ] Cleaning up after exception clean up items: [{'item': <libvirt.virStorageVol object at 0x3045050>, 'what': 'volume'}]
[ERROR ] There was a profile error: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
Traceback (most recent call last):
File "/root/src/salt/salt/cloud/cli.py", line 284, in run
self.config.get('names')
File "/root/src/salt/salt/cloud/__init__.py", line 1449, in run_profile
ret[name] = self.create(vm_)
File "/root/src/salt/salt/cloud/__init__.py", line 1279, in create
output = self.clouds[func](vm_)
File "/root/src/salt/salt/cloud/clouds/libvirt.py", line 463, in create
raise e
libvirtError: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
```
Couldn't figure out what it was complaining about, so I edited salt/salt/cloud/clouds/libvirt.py:
```
diff --git a/salt/cloud/clouds/libvirt.py b/salt/cloud/clouds/libvirt.py
index 2692ec9..23e5935 100644
--- a/salt/cloud/clouds/libvirt.py
+++ b/salt/cloud/clouds/libvirt.py
@@ -412,7 +412,8 @@ def create(vm_):
     log.debug("Clone XML '{0}'".format(domain_xml))
     clone_xml = ElementTree.tostring(domain_xml)
-    clone_domain = conn.defineXMLFlags(clone_xml, libvirt.VIR_DOMAIN_DEFINE_VALIDATE)
+    clone_domain = conn.defineXMLFlags(clone_xml, 0)
     cleanup.append({'what': 'domain', 'item': clone_domain})
     clone_domain.createWithFlags(libvirt.VIR_DOMAIN_START_FORCE_BOOT)
```
That allowed salt-cloud -p to do what I wanted it to do. Now that the VM created I did this, expecting it to tell me what was up (but it didn't, it said everything was fine):
```
# virt-xml-validate /etc/libvirt/qemu/lvnew10.xml domain
/etc/libvirt/qemu/lvnew10.xml validates
```
Provider, profile looks like this:
```
# cat /etc/salt/cloud.providers.d/libvirt.conf
libvirt-config:
driver: libvirt
url: qemu:///system
# cat /etc/salt/cloud.profiles.d/libvirt-admin.conf
libvirt-admin:
provider: libvirt-config
base_domain: centos7.0
ip_source: ip-learning
clone_strategy: quick
ssh_username: root
password: ############
deploy_command: sh /tmp/.saltcloud/deploy.sh
script_args: -F
```
[Truncated]
```
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.5 (default, Nov 6 2016, 00:28:07)
python-gnupg: Not Installed
PyYAML: 3.12
PyZMQ: 16.0.2
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.5.1
ZMQ: 4.1.6
System Versions:
dist: centos 7.3.1611 Core
machine: x86_64
release: 3.10.0-514.16.1.el7.x86_64
system: Linux
version: CentOS Linux 7.3.1611 Core
```
Answers:
username_1: @username_2 it looks like you wrote this portion of the libvirt driver, would you be able to help me understand this bug?
Thanks,
Daniel
username_2: Weird, last time I tested on CentOS 7.0 things worked. When I submitted the PR I tested with [libvirt 1.2.17 and qemu 1.5.3](https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/libvirt.py#L46), the versions then available in CentOS 7. @username_0 what version of libvirt are you running?
There may have been a libvirt update since then that might have made validation tighter. Disabling validation is indeed an option (as you did). Maybe it should be exposed in a configuration setting so people can disable it if it gives problems.
The defineXml python API maps onto: [this C API](http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainDefineXMLFlags). E.g. create the domain with validation.
Sadly, I screwed up the logging of the XML that went into the call; that might have shed some light on what is going on. I should have used clone_xml instead of domain_xml :( @username_0 Could you try the command once more with fixed logging? E.g.:
```
clone_xml = ElementTree.tostring(domain_xml)
log.debug("Clone XML '{0}'".format(clone_xml))
```
And report back what XML went into the call?
I'm not sure why the XML validates after creating the domain. Conjecture: libvirt strips incorrect XML after creating?
If time permits I may be able to test at work on our Centos 7 hosts.
@username_1 What is the time frame to get this fixed?
username_0: Hi - sorry for delay busy doing other stuff.
Here's my libvirt versions::
[root@xeon ian]# rpm -qa|grep libvirt|sort
libvirt-client-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-config-network-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-network-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.5.x86_64
libvirt-devel-2.0.0-10.el7_3.5.x86_64
libvirt-docs-2.0.0-10.el7_3.5.x86_64
libvirt-gconfig-0.2.3-1.el7.x86_64
libvirt-glib-0.2.3-1.el7.x86_64
libvirt-gobject-0.2.3-1.el7.x86_64
libvirt-python-2.0.0-2.el7.x86_64
```
username_2: I managed to find some time to test things. Sadly, I was unable to recreate the problem on a patched CentOS 7 system (a CentOS 7 VM connecting to a CentOS 7 KVM hypervisor).
It might help if you execute `virsh edit <base-domain>` and save it again, just to see if there is an issue with the base domain XML.
I implemented a flag to disable validation in a PR #41555.
username_0: Cheers @username_2 - I'll have a go tonight as I've got a new machine to put in my cluster anyway.
username_0: Saved the domain XML again with `virsh edit` as you suggested, @username_2.
Same issue when I tried to use salt-cloud -p:
```
salt-cloud -p libvirt-test-idell test-idell
libvirt: QEMU Driver error : Domain not found: no domain with matching name 'test-idell'
[WARNING ] /root/src/my_salt/salt/salt/cloud/clouds/libvirt.py:376: FutureWarning: The behavior of this method will change in future versions. Use specific 'len(elem)' or 'elem is not None' test instead.
if source_element and 'path' in source_element.attrib:
libvirt: XML Util error : XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
[ERROR ] There was a profile error: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
```
The file /usr/share/libvirt/schemas/domain.rng looks like this:
```
<?xml version="1.0"?>
<grammar xmlns="http://relaxng.org/ns/structure/1.0" datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
<!-- Grammar for accepting a domain element, both as top level, and
also suitable for inclusion in domainsnapshot.rng -->
<start>
<ref name="domain"/>
</start>
<include href='domaincommon.rng'/>
<define name='storageStartupPolicy' combine='choice'>
<!-- overrides the no-op version in storagecommon.rng -->
<ref name='startupPolicy'/>
</define>
<define name='storageSourceExtra' combine='choice'>
<!-- overrides the no-op version in storagecommon.rng -->
<ref name='diskspec'/>
</define>
</grammar>
```
username_2: @username_0 the domain.rng did not come through.
More importantly, did you use the newer version of the code that has the fixed logging of the domain XML used to provision the VM? Did you try the configuration flag to disable domain validation?
Or, alternatively, provide the domain definition itself as returned by virsh dump? |
aws/aws-sdk-cpp | 185253326 | Title: Async and Client lifetime
Question:
username_0: Hello,
Due to the token expiration, I am forced to recreate my clients every time I get new tokens. My question being: who is responsible for the lifetime of the Client objects? Does the executor keep a ref count of the object, or should I use the lambda capture operator to ref-count my shared_ptr?
Max
Answers:
username_1: Could you go into more detail about your use-case and token usage? Would implementing your own auto-refreshing credentials provider be a possible solution (similar to what we've provided with cognito in the identity-management sdk)?
username_0: Hey Bret, it's been a while!
The way I do things right now is that I assume a role through STS, but the tokens I get are valid for at most 1h. I use the tokens to create an Aws::Auth::AWSCredentials that I then use to create the clients.
For example:
`auto pClient = std::make_shared<Aws::DynamoDB::DynamoDBClient>(Aws::Auth::AWSCredentials(pIdentity->AccessKeyId, pIdentity->SecretAccessKey, pIdentity->SessionToken), config);`
To get new tokens I create a task that waits for 30 minutes, calls AssumeRole, and notifies listeners that new tokens are available, and then I recreate all the clients as seen above. But at that moment it is possible that an async call is in flight. Right now I capture the shared_ptr in my lambda callback to ref-count async tasks, but it doesn't seem efficient.
I will take a look at the credentials provider and see if that works for me.
Thanks,
Max
username_1: Hey Max,
Still crushing noobs at Dota I'm hoping? Jonathan mentioned that he'd like to add a refreshing STS-based credentials provider in the aws-cpp-sdk-identity-management library. If you want to take a crack at it, feel free, otherwise he'll probably crank it out one of these days.
username_2: I'll be adding this to the identity-management package. Feel free to write one and submit a PR, it will probably be a week until I get to it. Otherwise, I'll let you know when it's done.
username_0: I went ahead and implemented an STS provider, and so far everything works perfectly. Sadly, I doubt I will be able to contribute to the repository; I tried to get the company to approve releasing parts of what I work on as open source, and the answer is a clear no...
@username_1 Always, feel free to join me anytime!
username_2: We're hiring....
username_3: I know you are, sadly I am not a US citizen :/
username_2: We pay for visas.
Status: Issue closed
username_2: We'll add a story for an STS credentials provider in the identity-management project to our backlog.
username_2: I just pushed this out. It's in the identity-management project. |
EliasKotlyar/Monsieur-Cuisine-Connect-Hack | 641843245 | Title: Home Assistant on LidloMix :)
Question:
username_0: this is for fun only, but thanks to this great hack it works :)
Thanks @EliasKotlyar for this :+1:
Now we can control the lights, have a camera preview, and listen to YouTube or the radio in the kitchen (I need to add a better speaker).
this is full video how it works:
https://www.youtube.com/watch?v=X4uQ4EbO1Xg
and some pictures:




Answers:
username_1: Hello. After a few minutes, doesn't the normal MMC menu start? And how do you open it, minimize it, and use another application? I have icons at the bottom, but I cannot select anything other than the MMC program; when I enter the MMC settings, it creates a second active window, but there is nothing underneath.
Sorry for my English
username_1: Oh, I see you're from Poland :)) Could I contact you somehow for help with installing the application? `rm /system/recovery-from-boot.p` doesn't work for me at all. I've been at this for 4 days now, and I think I've already restored to factory settings 9 times and started everything over from the beginning. |
axios/axios | 638121570 | Title: Headers merge stragety for custom instance
Question:
username_0: ```js
const myAxios = axios.create({
  headers: {
    token: 'old',
  },
});
myAxios.defaults.headers.common.token = 'new';
myAxios.get(url);
```
With the code above, what is the resulting `token` header for the GET request? Should it be `'old'` or `'new'`? According to the [document](https://github.com/username_0/axios#custom-instance-defaults), it seems that `'new'` should win in theory, but it's `'old'` in fact. Is this a bug?
Answers:
username_1: Set it before `axios.create`.
Status: Issue closed
|
Atlantiss/NetherwingBugtracker | 393667751 | Title: [Quest][NPC] You, Robot/Negatron = Server Crash
Question:
username_0: **Description**: When Negatron gets to 1%, server crashes
**Current behaviour**: You, Robot quest=10248, Negatron npc=19851
**Expected behaviour**: No crash
**Server Revision**: 2483
Answers:
username_0: Unrelated, but Negatron will immediately lock aggro onto the player, not the robot controlled with the Scrap Reaver X6000 Controller item=28634.
Casting Feign Death will drop aggro, but upon standing up again Negatron will immediately aggro onto the player, even if no damage has been done, no heals have been cast, etc.
username_1: Negatron is bugged once again and locks aggro on the player doing the quest, completely ignoring the Scrap Reaver X6000. |
ether/etherpad-lite | 808627326 | Title: NPM issue with a fresh Manual Install on Windows?
Question:
username_0: **Describe the bug**
I first followed the "Manually install on Windows" directions to set up Etherpad on Windows for development, then ran into the issue that I'll describe below. I assumed that I had set something up wrong, so I trashed that and started from scratch with the "Prebuilt Windows package" steps, and the error happened again.
Below you'll find the command-line output from running start.bat as an administrator, using an installation that was freshly extracted from etherpad-lite-win.zip and left unmodified. Is there any other information I can supply that would help debug this issue?
```
C:\Windows\system32>cd c:\dev\W\VisualStudio.com\Etherpad
c:\dev\W\VisualStudio.com\Etherpad>start.bat
c:\dev\W\VisualStudio.com\Etherpad>node node_modules\ep_etherpad-lite\node\server.js
[2021-02-15 10:18:12.137] [DEBUG] console - Running on Node v12.20.2 (minimum required Node version: 10.17.0)
[2021-02-15 10:18:12.149] [DEBUG] AbsolutePaths - c:\dev\W\VisualStudio.com\Etherpad\node_modules\ep_etherpad-lite does not end with "src"
[2021-02-15 10:18:12.150] [INFO] console - All relative paths will be interpreted relative to the identified Etherpad base dir: c:\dev\W\VisualStudio.com\Etherpad
[2021-02-15 10:18:12.150] [INFO] console - Random string used for versioning assets: 3d78ab94
[2021-02-15 10:18:12.151] [DEBUG] AbsolutePaths - Relative path "settings.json" can be rewritten to "c:\dev\W\VisualStudio.com\Etherpad\settings.json"
[2021-02-15 10:18:12.151] [DEBUG] AbsolutePaths - Relative path "credentials.json" can be rewritten to "c:\dev\W\VisualStudio.com\Etherpad\credentials.json"
[2021-02-15 10:18:12.157] [INFO] console - settings loaded from: c:\dev\W\VisualStudio.com\Etherpad\settings.json
[2021-02-15 10:18:12.158] [INFO] console - No credentials file found in c:\dev\W\VisualStudio.com\Etherpad\credentials.json. Ignoring.
[2021-02-15 10:18:12.159] [INFO] console - Using skin "colibris" in dir: c:\dev\W\VisualStudio.com\Etherpad\src\static\skins\colibris
[2021-02-15 10:18:12.159] [INFO] console - Session key loaded from: c:\dev\W\VisualStudio.com\Etherpad\SESSIONKEY.txt
[2021-02-15 10:18:12.159] [WARN] console - DirtyDB is used. This is not recommended for production. File location: c:\dev\W\VisualStudio.com\Etherpad\var\dirty.db
[2021-02-15 10:18:12.483] [INFO] server - Starting Etherpad...
[2021-02-15 10:18:12.489] [ERROR] server - Error: spawn npm ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:268:19)
at onErrorNT (internal/child_process.js:470:16)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
[2021-02-15 10:18:12.490] [ERROR] server - TypeError: Promise resolve or reject function is not callable
at Promise.then (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
[2021-02-15 10:18:12.490] [ERROR] server - Error occurred while waiting to exit. Forcing an immediate unclean exit...
**To Reproduce**
Follow the steps to set up the Windows Prebuilt Package. The above error occurs when the start.bat script is executed in a command prompt with Administrator permissions.
https://github.com/ether/etherpad-lite#prebuilt-windows-package
Answers:
username_1: If you do node -v what do you get?
username_0: Thanks for your fast response! v12.20.2
username_1: Confirmed with 12.20.2
username_1: Good news is I can replicate, bad news is it has nothing to do with the built / compiled version, it's broken when running from source ;\
username_1: https://github.com/ether/etherpad-lite/blob/develop/src/node/server.js#L135 is offending line.
username_1: @username_2 any ideas on this? I think it's related to npm version but it looks like it's failing on 6.14.11 when I thought this functionality wouldn't fail until 7.*
username_0: @username_1, do you think if I downgraded to an older version of Node then I'd be able to continue with my set up and testing? If so, which version would you recommend?
username_2: I don't have access to a Windows machine so I have no idea what's going wrong, or why it's happening now. We can try upgrading npm to a newer 6.x (see #4788) but looking at the changelog I doubt that will help.
Are you sure it's line 135 that's causing the problem? Can you get any more information about where inside `npm.load()` it is failing?
username_2: I merged that PR. Please check out the latest `develop` branch and see if you can reproduce.
username_1: No dice w/ that PR. Any tips for how to dive into npm.load? Afaik that's an npm internal thing?
username_1: ```
'use strict';
const util = require('util');
const npm = require('npm/lib/npm.js');
(async () => {
console.log('does this crash?');
await util.promisify(npm.load)();
console.log('this works!');
})();
```
This does not crash...
username_1: ```
'use strict';
const util = require('util');
const npm = require('npm/lib/npm.js');
const pluginDefs = require('../static/js/pluginfw/plugin_defs');
const plugins = require('../static/js/pluginfw/plugins');
(async () => {
console.log('does this crash?');
await util.promisify(npm.load)();
console.log('this works!');
})();
```
crashes with
```
[2021-02-16 09:29:24.753] [INFO] console - does this crash?
events.js:292
throw er; // Unhandled 'error' event
^
Error: spawn npm ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:269:19)
at onErrorNT (internal/child_process.js:465:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
Emitted 'error' event on ChildProcess instance at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
at onErrorNT (internal/child_process.js:465:16)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn npm',
path: 'npm',
spawnargs: [ '--version' ]
```
username_1: So the problem is in ``pluginfw/plugins.js``
username_1: ```
await Promise.all([
(async () => { for await (const chunk of p.stdout) chunks.push(chunk); })(),
p, // Await in parallel to avoid unhandled rejection if p rejects during chunk read.
]);
```
is the offending line.
username_1: That's as far as I can get today, the contents of p.stdout is
```
[2021-02-16 09:46:23.494] [WARN] console - <ref *1> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: null,
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: true,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: true,
errorEmitted: false,
emitClose: false,
autoDestroy: false,
destroyed: false,
errored: null,
closed: false,
closeEmitted: false,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: false,
decoder: StringDecoder {
encoding: 'utf8',
[Symbol(kNativeDecoder)]: <Buffer 00 00 00 00 00 00 01>
},
encoding: 'utf8',
[Symbol(kPaused)]: false
},
_events: [Object: null prototype] {
end: [ [Function: onReadableStreamEnd], [Function (anonymous)] ],
close: [Function (anonymous)],
data: [Function (anonymous)]
},
_eventsCount: 3,
_maxListeners: undefined,
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
[Truncated]
},
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
[Symbol(async_id_symbol)]: 9,
[Symbol(kHandle)]: Pipe { reading: true, [Symbol(owner_symbol)]: [Circular *1] },
[Symbol(kSetNoDelay)]: false,
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
}
```
username_1: On linux p.stdout is
```
<ref *1> Socket {
connecting: false,
_hadError: false,
_parent: null,
_host: null,
_readableState: ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: [],
flowing: null,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
errorEmitted: false,
emitClose: false,
autoDestroy: false,
destroyed: false,
errored: null,
closed: false,
closeEmitted: false,
defaultEncoding: 'utf8',
awaitDrainWriters: null,
multiAwaitDrain: false,
readingMore: false,
decoder: null,
encoding: null,
[Symbol(kPaused)]: null
},
_events: [Object: null prototype] {
end: [Function: onReadableStreamEnd],
close: [Function (anonymous)]
},
_eventsCount: 2,
_maxListeners: undefined,
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: true,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
[Truncated]
},
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: null,
_server: null,
[Symbol(async_id_symbol)]: 8,
[Symbol(kHandle)]: Pipe { reading: true, [Symbol(owner_symbol)]: [Circular *1] },
[Symbol(kSetNoDelay)]: false,
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0
}
```
username_1: The notable values that change are
* resumeScheduled
* decoder
* The end event
I'd hazard a guess that the Windows Node.js stdout is behaving differently from Linux; there are some docs on this but I'm not sure which part is useful to us: https://nodejs.org/api/child_process.html
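One possibly relevant detail: `Error: spawn npm ENOENT` on Windows is often caused by `child_process.spawn('npm', ...)` failing to resolve `npm.cmd`. A hedged sketch of the usual workaround (not necessarily what the eventual fix does):
```js
const { spawn } = require('child_process');

// On Windows the npm executable is npm.cmd, which a bare spawn('npm') cannot
// resolve; name it explicitly (or pass { shell: true }).
const npmCmd = process.platform === 'win32' ? 'npm.cmd' : 'npm';
const p = spawn(npmCmd, ['--version'], { stdio: ['ignore', 'pipe', 'inherit'] });
p.stdout.on('data', (chunk) => process.stdout.write(chunk));
```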
username_2: Fixed by PR #4799. Please try the latest commit on the `develop` branch.
@username_1 Should we cut a new release? It'll give us an excuse to test out the `release.js` changes. :slightly_smiling_face:
Status: Issue closed
|
amrayach/PML | 755758560 | Title: Extra Text Preprocessing steps
Question:
username_0: what we have until now:
```python
preprocessing_steps = {
    'remove_hashtags': remove_hashtags,
    'remove_urls': remove_urls,
    'remove_user_mentions': remove_user_mentions,
    'lower': lower,
    'double_new_line': remove_double_new_line,
    'wordnet_augment_text': augment_text_wordnet,
}
```
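For comparison, a minimal sketch of how such a registry could be applied in order; the two step implementations and the `apply` helper here are illustrative placeholders, not code from this repo:
```python
import re

def remove_urls(text):
    # Strip anything that looks like an http(s) URL
    return re.sub(r"https?://\S+", "", text)

def lower(text):
    return text.lower()

preprocessing_steps = {
    'remove_urls': remove_urls,
    'lower': lower,
}

def apply(text, step_names):
    # Run the selected steps in the given order
    for name in step_names:
        text = preprocessing_steps[name](text)
    return text

print(apply("Check https://example.com NOW", ['remove_urls', 'lower']))  # -> "check  now"
```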
zalando/patroni | 926476284 | Title: Patroni doesn't start as leader, when try to restore using pgbackrest
Question:
username_0: Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.529 P01 INFO: restore file /mnt/disks/timescaledata/base/1/13260 (0B, 100%)
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.531 P01 INFO: restore file /mnt/disks/timescaledata/base/1/13255 (0B, 100%)
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.532 P01 INFO: restore file /mnt/disks/timescaledata/base/1/13250 (0B, 100%)
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.634 P01 INFO: restore file /mnt/disks/timescaledata/base/1/13245 (0B, 100%)
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.635 P00 INFO: write updated /mnt/disks/timescaledata/postgresql.auto.conf
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.651 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
Jun 21 17:42:58 patroni[17801]: 2021-06-21 17:42:58.656 P00 INFO: restore command end: completed successfully (2509331ms)
Jun 21 17:42:59 patroni[17801]: localhost:5432 - no response
Jun 21 17:42:59 postgres[21956]: [1-1] 2021-06-21 17:42:59.110 UTC [21956] LOG: ending log output to stderr
Jun 21 17:42:59 patroni[17801]: 2021-06-21 17:42:59.110 UTC [21956] LOG: ending log output to stderr
Jun 21 17:42:59 patroni[17801]: 2021-06-21 17:42:59.110 UTC [21956] HINT: Future log output will go to log destination "syslog".
Jun 21 17:42:59 postgres[21956]: [1-2] 2021-06-21 17:42:59.110 UTC [21956] HINT: Future log output will go to log destination "syslog".
Jun 21 17:42:59 postgres[21956]: [2-1] 2021-06-21 17:42:59.111 UTC [21956] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
Jun 21 17:42:59 postgres[21956]: [3-1] 2021-06-21 17:42:59.111 UTC [21956] LOG: listening on IPv4 address "0.0.0.0", port 5432
Jun 21 17:42:59 postgres[21956]: [4-1] 2021-06-21 17:42:59.113 UTC [21956] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Jun 21 17:42:59 postgres[21958]: [5-1] 2021-06-21 17:42:59.127 UTC [21958] LOG: database system was interrupted; last known up at 2021-06-21 16:15:49 UTC
Jun 21 17:42:59 postgres[21958]: [6-1] 2021-06-21 17:42:59.767 UTC [21958] LOG: restored log file "00000003.history" from archive
Jun 21 17:43:00 postgres[21958]: [7-1] 2021-06-21 17:43:00.030 UTC [21958] LOG: restored log file "00000004.history" from archive
Jun 21 17:43:00 postgres[21966]: [5-1] 2021-06-21 17:43:00.059 UTC [21966] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:00 postgres[21967]: [5-1] 2021-06-21 17:43:00.060 UTC [21967] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:00 patroni[17801]: localhost:5432 - rejecting connections
Jun 21 17:43:00 postgres[21969]: [5-1] 2021-06-21 17:43:00.075 UTC [21969] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:00 postgres[21970]: [5-1] 2021-06-21 17:43:00.076 UTC [21970] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:00 patroni[17801]: localhost:5432 - rejecting connections
Jun 21 17:43:00 postgres[21958]: [8-1] 2021-06-21 17:43:00.329 UTC [21958] LOG: starting point-in-time recovery to earliest consistent point
Jun 21 17:43:00 postgres[21958]: [9-1] 2021-06-21 17:43:00.642 UTC [21958] LOG: restored log file "00000004.history" from archive
Jun 21 17:43:01 postgres[21982]: [5-1] 2021-06-21 17:43:01.092 UTC [21982] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:01 postgres[21983]: [5-1] 2021-06-21 17:43:01.093 UTC [21983] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:01 patroni[17801]: localhost:5432 - rejecting connections
Jun 21 17:43:02 postgres[21995]: [5-1] 2021-06-21 17:43:02.115 UTC [21995] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:02 postgres[21996]: [5-1] 2021-06-21 17:43:02.117 UTC [21996] postgres@postgres FATAL: the database system is starting up
Jun 21 17:43:02 patroni[17801]: localhost:5432 - rejecting connections
Jun 21 17:43:02 postgres[21958]: [10-1] 2021-06-21 17:43:02.221 UTC [21958] LOG: restored log file "00000002000004F00000003F" from archive
Jun 21 17:43:02 postgres[21958]: [11-1] 2021-06-21 17:43:02.251 UTC [21958] FATAL: requested timeline 4 is not a child of this server's history
Jun 21 17:43:02 postgres[21958]: [11-2] 2021-06-21 17:43:02.251 UTC [21958] DETAIL: Latest checkpoint is at 4F0/862A0608 on timeline 2, but in the history of the requested timeline, the server forked off from that timeline at 455/AA983238.
Jun 21 17:43:02 postgres[21956]: [5-1] 2021-06-21 17:43:02.253 UTC [21956] LOG: startup process (PID 21958) exited with exit code 1
Jun 21 17:43:02 postgres[21956]: [6-1] 2021-06-21 17:43:02.253 UTC [21956] LOG: aborting startup due to startup process failure
Jun 21 17:43:02 postgres[21956]: [7-1] 2021-06-21 17:43:02.258 UTC [21956] LOG: database system is shut down
Jun 21 17:43:03 patroni[17801]: localhost:5432 - no response
Jun 21 17:43:03 patroni[17801]: Traceback (most recent call last):
Jun 21 17:43:03 patroni[17801]: File "/usr/local/bin/patroni", line 10, in <module>
Jun 21 17:43:03 patroni[17801]: sys.exit(main())
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/__init__.py", line 170, in main
Jun 21 17:43:03 patroni[17801]: return patroni_main()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/__init__.py", line 138, in patroni_main
Jun 21 17:43:03 patroni[17801]: abstract_main(Patroni, schema)
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/daemon.py", line 100, in abstract_main
Jun 21 17:43:03 patroni[17801]: controller.run()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/__init__.py", line 108, in run
Jun 21 17:43:03 patroni[17801]: super(Patroni, self).run()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/daemon.py", line 59, in run
Jun 21 17:43:03 patroni[17801]: self._run_cycle()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/__init__.py", line 111, in _run_cycle
Jun 21 17:43:03 patroni[17801]: logger.info(self.ha.run_cycle())
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/ha.py", line 1457, in run_cycle
Jun 21 17:43:03 patroni[17801]: info = self._run_cycle()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/ha.py", line 1351, in _run_cycle
Jun 21 17:43:03 patroni[17801]: return self.post_bootstrap()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/ha.py", line 1247, in post_bootstrap
Jun 21 17:43:03 patroni[17801]: self.cancel_initialization()
Jun 21 17:43:03 patroni[17801]: File "/usr/local/lib/python3.7/dist-packages/patroni/ha.py", line 1240, in cancel_initialization
Jun 21 17:43:03 patroni[17801]: raise PatroniFatalException('Failed to bootstrap cluster')
Jun 21 17:43:03 patroni[17801]: patroni.exceptions.PatroniFatalException: 'Failed to bootstrap cluster'
Jun 21 17:43:03 systemd[1]: patroni.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 17:43:03 systemd[1]: patroni.service: Failed with result 'exit-code'.
Jun 21 17:45:01 CRON[22019]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jun 21 17:46:03 collectd[745]: uc_update: Value too old: name = /processes-all/disk_octets; value time = 1624297563.287; last cache update = 1624297563.287;
Tried with full and incremental backup, with archive-copy on and off.
Can you please have a look into this?
Answers:
username_1: The error comes from postgres and has nothing to do with patroni. If you perform a restore and start postgres manually, you will get into the same situation.
It seems that the backup location contains history files that don't belong to the given cluster.
username_0:
```
mkdir -p /mnt/disks/timescaledata
pgbackrest --stanza=demo --pg1-path=/mnt/disks/timescaledata --log-level-console=info --type=immediate --target-action=promote --delta restore
```
If we restore a sample database of a few MBs, it works fine. The cluster gets created, a leader is in place, and we can add replicas.
But when we try to restore the TimescaleDB data, it fails.
username_1: Please re-read the [message](https://github.com/zalando/patroni/issues/1976#issuecomment-865283257), I have nothing more to add.
Status: Issue closed
|
jverzani/Mustache.jl | 625735428 | Title: scan until right tag with "{{{VAR}}}"
Question:
username_0: In closing issue #114, it was seen that scanning the above for a closing tag "}}" seems to find the first pair, not the last. This leaves the token "{VAR", which works for what is desired. It should probably leave the token as "{VAR}" though. There might be an edge case that exposes the difference. For now, this issue will be left open as a reminder.
Answers:
username_0: Here are two incorrectly parsed things:
```
julia> Mustache.parse("{{{variable}} stuff")
Mustache.MustacheTokens(Mustache.Token[Mustache.TextToken("text", ""), Mustache.TagToken("{", "variable", "{{", "}}", ""), Mustache.TextToken("text", " stuff")])
julia> Mustache.parse("[[{variable]] stuff", ("[[","]]"))
Mustache.MustacheTokens(Mustache.Token[Mustache.TextToken("text", ""), Mustache.TagToken("{", "variable", "[[", "]]", ""), Mustache.TextToken("text", " stuff")])
``` |
wee-slack/wee-slack | 596449909 | Title: RFE: allow searching history
Question:
username_0: Would it be possible to allow searching of channel histories, similar to how the slack web UI allows it?
Answers:
username_1: There are some limited options for searching available. You can either press ctrl-r to search the history in the current buffer (though, it won't search further back than what's in the buffer), or you can use the [grep.py script](https://weechat.org/scripts/source/grep.py.html/) to search your log files.
As for searching through messages you have not logged, that is not implemented. #485 is open for that, so I'll close this as a duplicate.
Status: Issue closed
|
liamdamato1997/acunetix360 | 920140578 | Title: Vulnerability - Referrer-Policy Not Implemented
Question:
username_0: **URL:** http://php.testsparker.com/
**Name:** Referrer-Policy Not Implemented
**Severity:** Best Practice
**Certainty:** 90%
You can see vulnerability details from the link below:
https://online.acunetix360.com/issues/detail/ba13e1c828b540c8e533ad4701b820b8 |
composer/composer | 195239899 | Title: Different branches for dev-test-prod
Question:
username_0: I cannot find a way to require different packages for three different stages in a convenient way. I would like to see this feature in the future - or, if possible, learn how to use it.
I use one repository, with three different branches (dev, test, prod).
On dev I can override dependencies by adding the repositories to the require-dev section to, overriding it to `@dev-master` for example.
On prod I run --no-dev, so it will only use the main packages (`@prod`).
On test I would like to use the intermediate `@test`. But I'm not sure how (besides manually editing it every time).
Obviously we use branching over releases.
Answers:
username_0: How am I supposed to use it with staging envs?
username_1: why should your staging environment run a different codebase than the prod ?
username_2: I do not want to question your motivation for the path you have chosen to take in regards to your development flow. It simply is not a path composer is very compatible with, because it falls outside of the norm. We cannot accommodate everyone, because some flows simply are not compatible. So we go with the 99% :-)
username_0: I am open to suggestions on how to use composer in a professional environment with different staging environments.
Status: Issue closed
username_3: You can script different environments using `composer require ...` to override some requirements in a given env, but I really don't think this is a good practice if you have a dev => staging => prod pipeline as you might end up changing the version of dependencies along the way. It can be fine to do this if it's about running say multiple staging envs with different branches of internal dependencies or something along those lines. But I don't think it's something we can support.. At best we can offer the `self.version` trick which you can use in your requires so that if you run a composer update on branch dev-foo it will require other packages in version dev-foo as well if you require them using `self.version`. |
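For reference, a minimal sketch of that `self.version` trick in composer.json, where `acme/internal-lib` is a placeholder for one of your internal packages:
```json
{
  "require": {
    "acme/internal-lib": "self.version"
  }
}
```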
Tencent/FeatherCNN | 332737192 | Title: sgemm.cpp:block_sgemm_external_pack_threading_8x8, block_sgemm_external_pack_threading
Question:
username_0:
```c
unsigned int tN = N / num_threads / factor;
tN = (tN + 7) & 0xFFFFFFF8;
```
For example, if N = 26 and num_threads = 3 (with factor = 1), then tN = 8, so the thread tasks are 8/8/8, covering 24 columns, but 26 - 24 = 2 columns are left unprocessed.
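A standalone sketch of the same partitioning arithmetic (`covered` and `tail` are added here for illustration) confirms the leftover columns:
```cpp
#include <cstdio>

int main() {
    unsigned int N = 26, num_threads = 3, factor = 1;
    unsigned int tN = N / num_threads / factor;           // 26 / 3 = 8
    tN = (tN + 7) & 0xFFFFFFF8;                           // round up to a multiple of 8 -> 8
    unsigned int covered = tN * num_threads;              // 8 * 3 = 24
    unsigned int tail = (covered < N) ? N - covered : 0;  // 2 columns left unassigned
    std::printf("tN=%u covered=%u tail=%u\n", tN, covered, tail);
    return 0;
}
```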
Answers:
username_1: All SGEMM related functions are under reconstruction, please wait for a few days and these things will be fixed.
Status: Issue closed
username_1: Close |
igvteam/igv.js | 152911456 | Title: BAM - Sort by base (alt-click) doesn't work for Capture Panels
Question:
username_0: Is this a known issue? Sorting by base doesn't seem to work for Capture Panels.
(It worked ok for Amplicon Panels)
Answers:
username_0: Amplicon Panels have many reads with the exact same start/finish, so the BAM looks like a bunch of blocks in IGV
Reads from Capture Panels are staggered, so the reads look like waves in IGV
Do you have a different name for these panels? Like, targeted panels vs whole genome panels?
username_0: Ok I'll prepare a small test bam that I can send out of the institute and get back to you.
username_0: Oh! I updated to the latest IGV.js and it seems to work now. I'm not sure exactly how old the other IGV.js I was using was, between 1-2 months I think.
Here's the [example bam.zip](https://github.com/igvteam/igv.js/files/251595/example.bam.zip) anyway. It might be useful in future.
The locus is: 18:19750692-19752307
Status: Issue closed
|
mymonero/mymonero-app-js | 225344580 | Title: Addtl 'require password' usages
Question:
username_0:
- Require password in order to send funds?
- Require password to delete everything?
- Require password to remove wallet?
- Require password to view wallet secrets?
Answers:
username_0: Authorize to send funds and view secrets are implemented.
Request password to delete everything and to remove wallet are both probably good targets to add to all the apps
Status: Issue closed
|
cns-iu/make-a-vis | 364575596 | Title: YAML Data Type and Scale Checks
Question:
username_0: - [x] Bruce shares link to YAML files for review. ISI and NSF formats
- [ ] Michael reviews YAML record set data scale and variable type mappings for
- [ ] Review in context of DVL Data scale, Variable Type definitions and R/Python/Java data type encodings.
Answers:
username_1: Example YML files are here:
https://github.com/cns-iu/make-a-vis/tree/develop/projects/dvl-fw/src/lib/examples
username_1: @username_0
This is still relevant. We need to nail down the different data variable types, data scales, graphic variable types, graphic symbol types, etc. And clearly link it to the dvl-fw proper. |
jonathangjertsen/pyboard-fdc1004 | 503086779 | Title: is_ready(FDC_ADDR) ... missing attribute
Question:
username_0: Hi,
Getting the error "AttributeError: 'I2C' object has no attribute 'is_ready'" while importing fdc1004stream.py on an ESP32 (running MicroPython)!
Any clue?
-Shanmuganathan
Answers:
username_1: This means that the `is_ready` method is not implemented in the I2C library for the ESP32.
The code that uses it is redundant (if the device is found when scanning, that means it's "ready" anyway) and can be removed. You could try replacing the whole loop in lines 81-94 with this:
```
while FDC_ADDR not in i2c.scan():
pyb.delay(1000)
```
That should be equivalent, waiting until the device responds. |
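Note that the `pyb` module is also unavailable on ESP32 MicroPython, so `pyb.delay` would fail there too. A self-contained sketch using only `machine` and `time` might look like this (0x50 is the FDC1004's 7-bit I2C address, and the SCL/SDA pins below are just common ESP32 choices; adjust for your board):
```python
from machine import I2C, Pin
import time

FDC_ADDR = 0x50  # FDC1004 7-bit I2C address

i2c = I2C(0, scl=Pin(22), sda=Pin(21))  # pins vary by board

# Replaces the is_ready() call: wait until the device answers on the bus
while FDC_ADDR not in i2c.scan():
    time.sleep(1)
```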
primefaces/primefaces | 216056358 | Title: Table not supported in Text Editor
Question:
username_0: 1) Environment
- PrimeFaces version: 6.1.RC1
- No
- No
- Glassfish 3.1.2.2
- Mozilla Firefox
2) There should be an option to insert a table, as there is in CKEditor.
3) Insert table is not available in the Text Editor.
4) Add a text editor to any xhtml page and test.
5)
```xml
<p:textEditor widgetVar="editor1" value="#{editorView.text}" height="300" style="margin-bottom:10px"/>
```
6)
```java
package org.primefaces.showcase.view.input;

import javax.faces.bean.ManagedBean;

@ManagedBean
public class EditorView {
    private String text;
    private String text2;

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public String getText2() {
        return text2;
    }

    public void setText2(String text2) {
        this.text2 = text2;
    }
}
```
Status: Issue closed
Answers:
username_1: It's not available in the QuillEditor, so we can't fix this.
You can just use the PF Extensions CKEditor ;) |
mondediefr/docker-rutorrent | 811957509 | Title: JS = TypeError: table is null
Question:
username_0: Hello,
Since the latest version, I've had a recurring error with the Docker image (the FileBot version).
In the ruTorrent log, I get this error:
`JS error: [https://rutorrent.domaine.fr/js/jquery.js line 2 > eval : 3717] TypeError: table is null`
I have to restart the container quite regularly to make the error go away.
There is no particular error related to this in the container logs.
Sometimes I even lose access to the GUI entirely, but in those cases I didn't think to look at the logs.
If it comes back, I'll post the logs.
Thanks
Answers:
username_1: There's a good chance the problem comes from the client side.
username_0: Hi,
Where on the client side do you think the problem could come from?
Latest version of Firefox, and I use Traefik as a reverse proxy.
I really can't see where this could be coming from.
I've looked around a bit, but I'm a bit stuck now.
No matter how often I restart the container, after a while that famous error message keeps coming back.
username_2: @username_0 Could you investigate a bit more so we can learn more?
username_0: Yes, of course.
I don't really know where to look, but I'll do some digging.
username_2: I couldn't tell you either, since I've never had this problem.
username_0: I deleted ruTorrent's "conf" folder to start over from a clean base.
The error reappears all the same, but I noticed that doing a "Ctrl + F5" to clear the cache makes the error disappear....
I admit I don't really understand why!!!
Do you have any idea?
username_2: Yes, I think there's an issue in ruTorrent and that it's not on our side.
username_0: Still, it's surprising to be the only one affected....
username_2: Yes, but who knows...
Try opening an issue on the rutorrent repo.
username_1: @username_0 when the error appears, open the developer tools and check whether there is an error message in the console.
username_0: @username_1 Here are the errors returned in the console:
```
This page uses the non-standard "zoom" property. Consider using calc() in the relevant property values, or using "transform" together with "transform-origin: 0 0". rutorrent.domaine.fr
The "browser_timezone" cookie will soon be rejected because its "SameSite" attribute is set to "None" or an invalid value and it does not have the "secure" attribute. To learn more about the "SameSite" attribute, see https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite common.js:1596:62
An empty string was passed to "getElementById()". jquery.js:2:24934
An empty string was passed to "getElementById()". 81 common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
An empty string was passed to "getElementById()". common.js:8:43
```
username_2: The first two lines look more like warnings. Don't you have a custom theme or a special theme?
username_0: @username_2 I'm using the theme shipped with the image (materialdesign)
Just now, when I went to check, I had no access at all anymore, and with a "Ctrl + F5" it came back:

No error message at all, even in the developer console, apart from the first two mentioned in my last message.
username_2: I really have no idea where this could come from. I also use this theme.
username_0: @username_2 Alright, well then, we'll wait and see whether there's a reply to the issue I opened on the ruTorrent GitHub.
Thanks for everything :)
username_0: No news so far; it seems I really am the only one with this problem, and it's still ongoing.
username_1: Can you test with another browser?
username_0: I followed your advice and used the only other browser installed on my PC (Microsoft Edge), and that was indeed a good call that I hadn't thought of: with Edge I get no error, while at the same moment Firefox shows the problem in question.
I'll try disabling my extensions one by one to see which one is causing the mess, since I have a few :)
Thanks @username_1
Status: Issue closed
|
blacktop/docker-ghidra | 954326437 | Title: Docker Hub missing version 10 / 10.0.1
Question:
username_0: Docker Hub (https://hub.docker.com/r/username_1/ghidra) is missing a tag for the version 10 branch, and the "latest" tag is pointing to 9.2.4.
I think this is because it is based off the `Dockerfile` file, whereas the only one that is being updated by @username_1 seems to be `Dockerfile.alpine`.
For people who have deployed the image from Docker Hub using the "latest" tag, they are not getting the latest stable version when running `docker pull`, as it is stuck on 9.2.4.
Please let me know if I've misunderstood something.
Answers:
username_1: Thank you for bringing this to my attention! I have swapped out the Dockerfile with the Dockerfile.alpine as the file read by the automated Docker Hub builds. They are building now; feel free to close once you have seen the correction work its way through Docker's servers.
username_0: @username_1 - that has fixed the issue with the `latest` tag, and an existing image now updates to 10.0.1 with `docker pull`. Thank you for the quick fix!
I see 2 other related issues with tags.
- The `9` tag is now pulling 10.0.1, whereas it should presumably stay with the latest version on the 9 branch
- There is no `10` tag available on Docker Hub (which should track the 10 branch but then not auto-update to the 11 branch when that comes out)
That way people can pick a major branch (like 9, or 10) and have the images auto-update on a regular basis for minor patches, but not auto-update to the next major version (so that they can do more testing before updating, read the changelog etc.).
username_1: Good catch! It looks like Docker won't build my images anymore unless I give them $5 a month :'(. Might have to move the images to github
username_2: For what it is worth, my experience with building using github actions was that it was an easy migration. It worked out of the box when following the documentation (think I used https://docs.github.com/en/actions/guides/publishing-docker-images)
username_1: @username_2 thank you for pointing me at that. The github actions workflows are REALLY nice. Hopefully microsoft can keep that service free unlike docker/mirantis 😩
username_1: @username_0 I have latest 9 and 10 building/tagging properly now. I'll try and back fill the others if you think you'd need those? Or we can just move forward now as is? Also I'll try and get the alpine/beta-nightly via cron/ and bindiff images working soon
username_1: So I believe now we'll have 9, 10, alpine AND nightly! Those will be built once a day. Feel free to re-open if you see fit or catch any more bugs/mislabeled images etc
Status: Issue closed
username_0: I've not tried the alpine images, but for the others, the tags `9`, `10`, and `latest` all seem to point to correct versions now.
Thanks! |
xerpi/vita-udcd-uvc | 596920314 | Title: Works fine for a few seconds then freezes
Question:
username_0: It connects to my PC fine and works for approximately 30 seconds before the video output freezes on my PC, while the game is still running on the Vita. The audio still comes through as well, but the video just freezes. I'm running 3.60 enso on a 2000-series Vita. The game is running off of an sd2vita memory card, if that matters.


(Images of issue)
Answers:
username_1: i have the same issue on my 2000 in both windows and linux
it seems to disconnect and reconnect occasionally, so you can remedy this by manually switching video sources

username_1: fix: put the plugin at the bottom of the *KERNEL part of your config
@xerpi you may want to add that to the readme |
rubygems/rubygems | 803993542 | Title: bundle lock --add-platform x86_64-linux fails with raygun-apm-rails
Question:
username_0: ### Describe the problem as clearly as you can
On macOS, I cannot add the linux platform with this Gemfile:
```rb
source "https://rubygems.org"
gem "raygun-apm-rails"
```
https://github.com/username_0/bundle-add-platform-bug-repro
### Post steps to reproduce the problem
Clone repo, run `bundle lock --add-platform x86_64-linux`
### Which command did you run?
```
bundle lock --add-platform x86_64-linux
```
### What were you expecting to happen?
No Error, platform added to lock
### What actually happened?
```
Fetching gem metadata from https://rubygems.org/..
Resolving dependencies...
Bundler found conflicting requirements for the Ruby version:
In Gemfile:
Ruby
raygun-apm-rails was resolved to 1.0.57, which depends on
raygun-apm (~> 1.0.53) was resolved to 1.0.78, which depends on
Ruby (< 2.8.dev, >= 2.5)
```
### If not included with the output of your command, run `bundle env` and paste the output below
## Environment
```
Bundler 2.2.8
Platforms ruby, x86_64-darwin-19
Ruby 2.7.2p137 (2020-10-01 revision 5445e0435260b449decf2ac16f9d09bae3cafe72) [x86_64-darwin19]
Full Path /Users/username_0/.asdf/installs/ruby/2.7.2/bin/ruby
Config Dir /Users/username_0/.asdf/installs/ruby/2.7.2/etc
RubyGems 3.2.8
Gem Home /Users/username_0/.asdf/installs/ruby/2.7.2/lib/ruby/gems/2.7.0
Gem Path /Users/username_0/.gem/ruby/2.7.0:/Users/username_0/.asdf/installs/ruby/2.7.2/lib/ruby/gems/2.7.0
User Home /Users/username_0
User Path /Users/username_0/.gem/ruby/2.7.0
Bin Dir /Users/username_0/.asdf/installs/ruby/2.7.2/bin
Tools
Git 2.28.0
RVM not installed
rbenv not installed
chruby not installed
```
## Bundler Build Metadata
```
Built At 2021-02-02
Git SHA 4015e550dc
Released Version true
```
Answers:
username_1: Can you post the output with `DEBUG_RESOLVER=1`? This doesn't repro on Linux and I don't have a MacOS machine, so I can't repro this myself :(.
username_0: ```
: 0: Starting resolution (2021-02-08 15:59:36 -0600)
: 0: User-requested dependencies: [#<Gem::Resolver::DependencyRequest:0x00007f9a746f7780 @dependency=<Gem::Dependency type=:runtime name="did_you_mean" requirements="= 1.4.0">, @requester=nil>, #<Gem::Resolver::DependencyRequest:0x00007f9a746f7758 @dependency=<Gem::Dependency type=:runtime name="bundler" requirements="= 2.2.8">, @requester=nil>]
Resolving dependencies...: 0: Creating possibility state for did_you_mean (= 1.4.0) (1 remaining)
: 1: Attempting to activate [did_you_mean-1.4.0]
: 1: Activated did_you_mean at [did_you_mean-1.4.0]
: 1: Requiring nested dependencies ()
: 1: Creating possibility state for bundler (= 2.2.8) (1 remaining)
: 2: Attempting to activate [bundler-2.2.8]
: 2: Activated bundler at [bundler-2.2.8]
: 2: Requiring nested dependencies ()
: 0: Finished resolution (2 steps) (Took 0.003558 seconds) (2021-02-08 15:59:36 -0600)
: 0: Unactivated:
: 0: Activated: did_you_mean, bundler
Fetching gem metadata from https://rubygems.org/.
BUNDLER: Starting resolution (2021-02-08 15:59:37 -0600)
BUNDLER: User-requested dependencies: [#<Bundler::DepProxy:0x00007f9a74a2fda8 @dep=<Bundler::Dependency type=:runtime name="raygun-apm" requirements=">= 0">, @__platform=#<Gem::Platform:0x00007f9a74a8d958 @cpu="x86_64", @os="darwin", @version="19">>, #<Bundler::DepProxy:0x00007f9a74a2fc18 @dep=<Bundler::Dependency type=:runtime name="raygun-apm" requirements=">= 0">, @__platform=#<Gem::Platform:0x00007f9a74a4e640 @cpu="x86_64", @os="linux", @version=nil>>, #<Bundler::DepProxy:0x00007f9a74a2e340 @dep=<Bundler::Dependency type=:runtime name="Ruby\u0000" requirements=">= 0">, @__platform=#<Gem::Platform:0x00007f9a74a8d958 @cpu="x86_64", @os="darwin", @version="19">>, #<Bundler::DepProxy:0x00007f9a74a2e250 @dep=<Bundler::Dependency type=:runtime name="Ruby\u0000" requirements=">= 0">, @__platform=#<Gem::Platform:0x00007f9a74a4e640 @cpu="x86_64", @os="linux", @version=nil>>, #<Bundler::DepProxy:0x00007f9a74a2e110 @dep=<Bundler::Dependency type=:runtime name="RubyGems\u0000" requirements="= 3.2.8">, @__platform=#<Gem::Platform:0x00007f9a74a8d958 @cpu="x86_64", @os="darwin", @version="19">>, #<Bundler::DepProxy:0x00007f9a74a2e020 @dep=<Bundler::Dependency type=:runtime name="RubyGems\u0000" requirements="= 3.2.8">, @__platform=#<Gem::Platform:0x00007f9a74a4e640 @cpu="x86_64", @os="linux", @version=nil>>]
Resolving dependencies...
BUNDLER: Creating possibility state for raygun-apm x86_64-darwin-19 (2 remaining)
BUNDLER(1): Attempting to activate [raygun-apm (1.0.78) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(1): Activated raygun-apm at [raygun-apm (1.0.78) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(1): Requiring nested dependencies (Ruby (< 2.8.dev, >= 2.5) x86_64-linux)
BUNDLER(1): Creating possibility state for raygun-apm x86_64-linux (2 remaining)
BUNDLER(2): Attempting to activate [raygun-apm (1.0.78) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(2): Found existing spec ([raygun-apm (1.0.78) (x86_64-darwin-19, x86_64-linux)])
BUNDLER(2): Creating possibility state for Ruby x86_64-darwin-19 (1 remaining)
BUNDLER(3): Attempting to activate [Ruby (172.16.17.32) (x86_64-darwin-19), Ruby (172.16.17.32) (x86_64-darwin-19)]
BUNDLER(3): Activated Ruby at [Ruby (172.16.17.32) (x86_64-darwin-19), Ruby (172.16.17.32) (x86_64-darwin-19)]
BUNDLER(3): Requiring nested dependencies ()
BUNDLER(3): Creating possibility state for RubyGems (= 3.2.8) x86_64-darwin-19 (1 remaining)
BUNDLER(4): Attempting to activate [RubyGems (3.2.8) (ruby), RubyGems (3.2.8) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(4): Activated RubyGems at [RubyGems (3.2.8) (ruby), RubyGems (3.2.8) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(4): Requiring nested dependencies ()
BUNDLER(4): Creating possibility state for Ruby x86_64-linux (1 remaining)
BUNDLER(5): Attempting to activate [Ruby (172.16.17.32) (x86_64-linux), Ruby (172.16.17.32) (x86_64-linux)]
BUNDLER(5): Found existing spec ([Ruby (172.16.17.32) (x86_64-darwin-19), Ruby (172.16.31.107) (x86_64-darwin-19)])
BUNDLER(5): Unsatisfied by existing spec ([Ruby (172.16.31.107) (x86_64-darwin-19), Ruby (172.16.17.32) (x86_64-darwin-19)])
BUNDLER(5): Unwinding for conflict: Ruby x86_64-linux to -1
BUNDLER: Finished resolution (5 steps) (Took 0.001343 seconds) (2021-02-08 15:59:37 -0600)
```
username_0: One thing that may be of note is that if I remove that gem from the `Gemfile`, I can add the platform, then add the gem back in and `bundle install` succeeds and the `Gemfile.lock` contains both platforms as I'd expect.
username_1: Thanks! I'll fix this tomorrow!
username_0: Great, thank you.
username_1: I didn't have time today, but will hopefully get to this tomorrow.
username_1: Ok, so I had a look at this, and it's trickier than I expected. I managed to write a failing spec for this:
```diff
+ it "supports adding new platforms edge case" do
+ next_minor = Gem.ruby_version.segments[0..1].map.with_index {|s, i| i == 1 ? s + 1 : s }.join(".")
+ two_minors_before = Gem.ruby_version.segments[0..1].map.with_index {|s, i| i == 1 ? s - 2 : s }.join(".")
+
+ build_repo4 do
+ build_gem "raygun-apm", "1.0.78" do |s|
+ s.platform = "x86_64-linux"
+ s.required_ruby_version = ["< #{next_minor}.dev", ">= #{two_minors_before}"]
+ end
+
+ build_gem "raygun-apm", "1.0.78" do |s|
+ s.platform = "universal-darwin"
+ s.required_ruby_version = ["< #{next_minor}.dev", ">= #{two_minors_before}"]
+ end
+ end
+
+ gemfile <<-G
+ source "https://localgemserver.test"
+
+ gem "raygun-apm"
+ G
+
+ lockfile <<-L
+ GEM
+ remote: https://localgemserver.test/
+ specs:
+ raygun-apm (1.0.78-universal-darwin)
+
+ PLATFORMS
+ x86_64-darwin-19
+
+ DEPENDENCIES
+ raygun-apm
+
+ BUNDLED WITH
+ #{Bundler::VERSION}
+ L
+
+
+ bundle "lock --add-platform x86_64-linux", :artifice => :compact_index, :env => { "BUNDLER_SPEC_GEM_REPO" => gem_repo4.to_s }
+ end
+
```
Like in other issues we've been getting, the problem is that the lockfile doesn't currently record any `required_ruby_version` information. So when the dependency information is picked up from the lockfile, it is incorrect in this particular case and assumes that the MacOS version has no required ruby version constraints.
I do have an idea to add this information to the lockfile in a backwards compatible manner, I'll give it a try later.
username_0: Okay, thanks for the update. Good luck with the fix.
I wanted to add on another issue just in case it's related. If you'd like me to open a separate issue, I can, but I found it when trying to work around this issue.
When adding a dependency that has more than one platform like `raygun-apm`, if you have the platform set already in the lockfile, *and* you have an additional remote with an empty block (which we do to work around the "multiple sources" warning, which is another topic), then only the current platform of the newly added gem will be installed.
e.g.:
Gemfile:
```rb
source "https://rubygems.org"
source "https://rubygems.pkg.github.com/username_0" do
end
```
`bundle install`
`bundle lock --add-platform x86_64-linux`
Update Gemfile to:
```rb
source "https://rubygems.org"
source "https://rubygems.pkg.github.com/username_0" do
end
gem "raygun-apm"
```
`bundle install`
Gemfile.lock result:
```
GEM
remote: https://rubygems.org/
remote: https://rubygems.pkg.github.com/username_0/
specs:
raygun-apm (1.0.78-universal-darwin)
PLATFORMS
x86_64-darwin-19
x86_64-linux
DEPENDENCIES
raygun-apm
BUNDLED WITH
2.2.8
```
This is the `DEBUG_RESOLVER` log, which to my untrained eye makes it look related to the original issue:
```
: 0: Starting resolution (2021-02-10 09:03:02 -0600)
: 0: User-requested dependencies: [#<Gem::Resolver::DependencyRequest:0x00007fb99156d838 @dependency=<Gem::Dependency type=:runtime name="did_you_mean" requirements="= 1.4.0">, @requester=nil>, #<Gem::Resolver::DependencyRequest:0x00007fb99156d810 @dependency=<Gem::Dependency type=:runtime name="bundler" requirements="= 2.2.8">, @requester=nil>]
Resolving dependencies...: 0: Creating possibility state for did_you_mean (= 1.4.0) (1 remaining)
: 1: Attempting to activate [did_you_mean-1.4.0]
: 1: Activated did_you_mean at [did_you_mean-1.4.0]
: 1: Requiring nested dependencies ()
[Truncated]
BUNDLER(5): Activated raygun-apm at [raygun-apm (1.0.78) (ruby)]
BUNDLER(5): Requiring nested dependencies ()
BUNDLER(5): Creating possibility state for raygun-apm x86_64-linux (181 remaining)
BUNDLER(6): Attempting to activate [raygun-apm (1.0.78) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(6): Found existing spec ([raygun-apm (1.0.78) (ruby)])
BUNDLER(6): Unsatisfied by existing spec ([raygun-apm (1.0.78) (ruby)])
BUNDLER(6): Unwinding for conflict: raygun-apm x86_64-linux to 5
BUNDLER(5): Creating possibility state for raygun-apm x86_64-linux (180 remaining)
BUNDLER(6): Attempting to activate [raygun-apm (1.0.78) (ruby)]
BUNDLER(6): Found existing spec ([raygun-apm (1.0.78) (ruby)])
BUNDLER: Finished resolution (190 steps) (Took 0.179297 seconds) (2021-02-10 09:03:04 -0600)
BUNDLER: Unactivated:
BUNDLER: Activated: raygun-apm, Ruby , RubyGems
Using bundler 2.2.8
Using raygun-apm 1.0.78 (universal-darwin)
Updating files in vendor/cache
Bundle complete! 1 Gemfile dependency, 2 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
```
username_1: Thanks for the extra report. When I get to this I'll have a look and open a separate issue if it's not related in the end.
As an update, since the fix here is not straightforward I'm postponing it for now to deal with some other issues, but I'll get to it. I think my idea should work, but requires work.
Just to leave my ideas written here, I see two ways that I think would be backwards compatible:
* Lock metadata dependencies like dependencies are locked now, just with the `\0` in the end used internally to distinguish them from regular dependencies. I think this would be backwards compatible, but it would make lockfiles look binary by default for editors, differs, and so on. And that bothers me and I think it will bother people.
* Add separate lockfile sections including metadata dependencies (for example, named, `GEM_V2`, `PATH_V2`, `GIT_V2` and so on), and also keep the current metadata-less sections. I think this would be backwards compatible too because `bundler` ignores unknown lockfile sections by default.
username_0: Yeah, I don't like option 1 there. Could you show what the Lockfile would need to look like in this particular scenario? I'm somewhat struggling to see what the issue actually is.
username_0: It looks like if I specify a Ruby version in my Gemfile and then add the lock, the resolver fails with this:
```
BUNDLER(4): Attempting to activate [RubyGems (3.2.8) (ruby), RubyGems (3.2.8) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(4): Activated RubyGems at [RubyGems (3.2.8) (ruby), RubyGems (3.2.8) (x86_64-darwin-19, x86_64-linux)]
BUNDLER(4): Requiring nested dependencies ()
BUNDLER(4): Creating possibility state for Ruby (~> 2.7.2.0) x86_64-linux (1 remaining)
BUNDLER(5): Attempting to activate [Ruby (172.16.17.32) (x86_64-linux), Ruby (172.16.17.32) (x86_64-linux)]
BUNDLER(5): Found existing spec ([Ruby (172.16.17.32) (x86_64-darwin-19), Ruby (172.16.17.32) (x86_64-darwin-19)])
BUNDLER(5): Unsatisfied by existing spec ([Ruby (172.16.17.32) (x86_64-darwin-19), Ruby (172.16.17.32) (x86_64-darwin-19)])
BUNDLER(5): Unwinding for conflict: Ruby (~> 2.7.2.0) x86_64-linux to -1
```
Are there scenarios where platform information is needed in the Ruby constraint? It looks like if the platform information wasn't considered when adding the lock, it would work just fine.
username_1: Yeah, the issue is super confusing, and the resolver logs don't help clarifying.
In the end the problem is that bundler is picking `raygun-apm-1,0.78-x86_64-darwin-19` dependency information from the lockfile and as a consequence is failing to consider the `>= 2.5, < 2.8.dev` dependency on ruby, because it's not present in the lockfile. For the other gem, `raygun-apm-1.0.78-x86_64-linux`, the dependency information is pulled from the remote dependency API, and thus correctly considers the ` >= 2.5, < 2.8.dev`. This mismatch is making the resolver end up in a conflict.
Adding platform information to dependencies (including ruby constraints) is expected. Bundler needs to do this in order to provide a correct resolution in general, since sometimes different variants of the same gem have different dependencies. For example, nokogiri-1.11.1 depends on "ruby >= 2.5" whereas nokogiri-1.11.1-x86_64-linux depends on "ruby >= 2.5, < 3.1.dev".
username_1: Well, I wasn't actually sure anymore and having a closer look, turns out that this is a regression from 1f7797a1b24b3aa5d14086415289d55b57be89e1 :grimacing:.
I'm not sure whether we can just suppress platform information from metadata dependencies and resolution candidates, maybe we actually can. I'll dig into this more tomorrow. Thanks for putting me back on the right track :sweat_smile:.
username_1: Alright, I think I got a fix! :tada:
Status: Issue closed
|
yiisoft/yii2 | 191126846 | Title: Possibly suboptimal behavior of the User::can method
Question:
username_0: ### What steps will reproduce the problem?
I have a social network, something like Habrahabr. Depending on whether the user has the permission to edit a post, a UI element with an edit button needs to be shown on that post. I'm using yii\rbac\DbManager.
The user's permission is checked by authorization status and by a user rule.
```php
// I use this method
Yii::$app->user->can(Rights::USER_POST_UPDATE, ['model' => $model])
```
### What is the expected result?
If the user is a guest: no DB queries should be generated
If authorized: 4 DB queries are generated
### What do you get instead?
If the user is a guest: 4 DB queries are generated
If authorized: 4 DB queries are generated
### Additional info
In other words, it seems to me that no queries should be generated when the user is not authorized. Maybe I'm wrong and misunderstanding something.
For now I'm using this construct:
```php
!Yii::$app->user->isGuest && Yii::$app->user->can(Rights::USER_POST_UPDATE, ['model' => $model])
```
| Q | A
| ---------------- | ---
| Yii version | 2.0.10?
| PHP version | 7.0.8
| Operating system | Windows NT K-USER 6.1 build 7601 (Windows 7 Professional Edition Service Pack 1) AMD64
Answers:
username_1: This will be fixed by pull request https://github.com/yiisoft/yii2/pull/12785
username_1: Duplicate of #12771
Status: Issue closed
|
matter-labs/zksync | 844018048 | Title: Question: how many circuit gates are generated for one normal transfer transaction?
Question:
username_0: HI, guys
I am new to zksync, but interested in some implementation details, such as how many circuit gates are generated for the proof of one normal transfer transaction.
Similarly: how many gates will an ECDSA signature and a hash operation take for the circuit to handle?
Thanks a lot!
Answers:
username_1: For zkSync 1.0, it's 2 chunks per transfer and some X chunks for a 2^26 circuit, so it can be computed from that.
Status: Issue closed
|
aliostad/CacheCow | 865103696 | Title: PUT operations do not invalidate cache (RFC7234 Section 4.4)
Question:
username_0: Hi Ali,
I'd like to bring an issue I'm seeing to your attention. Based on [Section 4.4 of RFC7234](https://tools.ietf.org/html/rfc7234#section-4.4), I'd expect my client cache to invalidate a resource if I make an unsafe request against that URI. However, that's not the behavior I'm experiencing with CacheCow.Client. Here's a quick repro block with its output:
```csharp
using System;
using System.Net.Http;
using System.Text;
using CacheCow.Client;
namespace CacheCowBugRepro
{
class Program
{
public static void Main(string[] args)
{
using (var client = new HttpClient(
new CachingHandler
{
InnerHandler = new HttpClientHandler()
})
{
BaseAddress = new Uri("https://chrome-caching-bug-repro.azurewebsites.net")
})
{
var myNumber = client.GetAsync("my-number").Result.Content.ReadAsStringAsync().Result;
Console.WriteLine($"GET my-number (origin server): {myNumber}");
myNumber = client.GetAsync("my-number").Result.Content.ReadAsStringAsync().Result;
Console.WriteLine($"GET my-number (from cache): {myNumber}");
myNumber = client.PutAsync("my-number", new StringContent("1", Encoding.UTF8, "application/json")).Result.Content.ReadAsStringAsync().Result;
Console.WriteLine($"PUT my-number (origin server): {myNumber}");
myNumber = client.GetAsync("my-number").Result.Content.ReadAsStringAsync().Result;
Console.WriteLine($"GET my-number (still from cache): {myNumber}");
}
}
}
}
```
Output:
```
GET my-number (origin server): 0
GET my-number (from cache): 0
PUT my-number (origin server): 1
GET my-number (still from cache): 0
```
If you deem this to be Pull Request worthy, feel free to tap me for the additional work.
Thanks,
Brett
Answers:
username_1: Hi, as far as I am concerned this is a missing feature, not a bug.
Reality is, the cases where this _instance_ of `CacheCow` client is the only client making unsafe calls are pretty rare in production, hence you need to design the cache in a way that either tolerates the cache expiry time or uses max-age=0 and simply validates all the time.
Point is, even if we invalidate the cache on this instance of CacheCow, other clients could be making similar calls and this instance would be none the wiser.
Also, this requires that I be able to clear a large part of my cache based on the URL, since I could have different representations of the same resource. A lot of the storage mechanisms supported in CacheCow are simple key-value ones and do not provide a search feature - currently Redis and File.
In the previous version of CacheCow I had such features with optional interfaces but caused a lot of headache. So I am not going to be able to provide this feature. Sorry
username_0: Thanks for the response and I understand that some features aren't worth the work/maintenance. For what it's worth, I'd like to give you a little bit more background on my use case and hopefully get your opinion. We would be tolerating cache expiration between clients. I'm not necessarily worried about updating all clients when a PUT request happens. However if I'm the user that is doing the PUT request, I'd like my own cache to be up to date so I can have confirmation that my PUT operation was successful. Does this sound reasonable and worth pursuing?
username_0: This use case is something I was planning to do in production. Even in the production scenario, why would the cache being stale for other clients be a concern, when I'm OK with their cache being stale? What am I missing? The only reason I want the local cache being invalidated for the single client, is so that I can provide feedback to that client that their update was successful.
For example, say my web API that I'm applying the caching to is serving a SPA browser application. The application works similar to a spreadsheet, lots of data on a page that's editable. Now if I edit a page, I'd like my cache to be invalidated so that when I refresh the page, I can see my updates. However, I don't necessarily care that other clients don't see my changes immediately.
username_1: In your particular case, I do not see a reason for caching. A single client makes a call, updating something that it already knows has been updated, hence the spreadsheet's cache is on the client anyway with its latest state - it does not need HTTP caching. Also, if you are saying that it needs to reload its state cleanly, then it is a single call: use no-cache in your request, which triggers validation. The server knows whether it has changed since, and either returns the new data or refreshes your cache.
I am afraid it is not a typical scenario where CacheCow is used or I would say HTTP Caching in general.
Hope this helps.
Status: Issue closed
username_0: CacheCow.Client doesn't seem to support the no-cache **_request_** header. When adding the no-cache header, CacheCow.Client still serves the response from cache in my scenario. It might be easy enough to add this functionality by extending the CachingHandler. Do you think something like this is safe to do?
```csharp
public class ExtendedCachingHandler : CachingHandler
{
public ExtendedCachingHandler()
{
var responseValidator = ResponseValidator;
ResponseValidator = response =>
{
var result = responseValidator(response);
if (result == ResponseValidationResult.OK
&& response.RequestMessage.Headers.CacheControl != null
&& response.RequestMessage.Headers.CacheControl.NoCache == true)
{
return ResponseValidationResult.MustRevalidate;
}
return result;
};
}
}
``` |
aztfmod/terraform-azurerm-caf | 919928625 | Title: Epic - VM Extensions Support
Question:
username_0: VM Extensions support:
- [X] Cloud Monitoring
- [X] Log Analytics Agent
- [X] AD Domain Join
- [X] DSC Extension for Azure Virtual Desktop
Proposed additions:
- [ ] Network Watcher [#340]
- [ ] App Insights [#285]
- [ ] Azure Monitor (preview) - [Instructions](https://docs.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc)
Answers:
username_1: Hi Arnaud, are there any immediate plans for developing support for the SqlVirtualMachine resource type? (I believe that's the correct underlying AzureRM resource type for the "SQL Extension" above)
It should correlate to the [Terraform Registry: Azure RM - MSSQL Virtual Machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_virtual_machine) resource.
I'm working on a project that will be migrating such objects from a manually-created landing zone to one built entirely with the CAF for Terraform supermodule over the next few weeks, so would ideally like to get or add that support if possible.
username_0: hi @username_1 - tagging was not done correctly :) the capability is in the module as per this: https://github.com/aztfmod/terraform-azurerm-caf/issues/646 and you can find an example here: https://github.com/aztfmod/terraform-azurerm-caf/blob/ef75c2783e83625027d6eb6e6af66dfef7958cb4/examples/compute/virtual_machine/108-mssql-vm/configuration.tfvars let us know if that works for you!
Status: Issue closed
username_1: Many thanks Arnaud, I'd not spotted that's where it lives :) |
polkadot-js/apps | 818708763 | Title: Fix eth private key derivation
Question:
username_0: Currently, when adding an account from a private key in the apps, the key is used as a seed to derive a pair using the HD derivation. Instead, it should just create a pair from this private key using keyring.addFromSeed, as tested in https://github.com/polkadot-js/common/pull/909
Please let me know how to add this function in ui-keyring, because it is not exposed yet.
Answers:
username_1: The ui-keyring is only to be used with `addFromUri` -
- it can take a mnemonic
- it can take a hex seed
In both cases there is a separate path that is applied for the conversion, see https://github.com/polkadot-js/common/blob/master/packages/keyring/src/keyring.ts#L179-L181
username_0: solved
Status: Issue closed
|
missaugustina/RhappyFUNblog | 274739127 | Title: Daily Devops for Data Sciencers Series
Question:
username_0: A series of posts containing one tip each on things data scientists can do to automate their daily workflow. This will be the foundation of a talk submission I've been putting together.
- [ ] Connect the dots (la, la, la-la) - managing dot files (like .Rprofile) across machines
- [ ] R package management and updates (packrat)
- [ ] Things you can use continuous integration for that I bet you didn't know you could use continuous integration for
- [ ] Hooked on git (hooks, setting up new projects)
- [ ] Shell script basics? not sure about this one... |
saltstack/salt | 497196045 | Title: Including a pillar multiple times will copy its keys only on the first include
Question:
username_0: Hello Salt-Team,
### Description of Issue
While implementing a composable pillar system, I ran into a problem: an included pillar can only be included once. Please note that there are two levels of includes.
### Setup
Pillar setup:
1. Composer pillars
**composer/database.sls**
```yaml
MS_DATABASE_PASSWORD: the password
MS_DATABASE_USERNAME: the username
MS_DATABASE_URL: theurl
```
I have as many composers as I have third parties, or "config groups"
2. Service aggregator
**service/addresses.sls**
```yaml
include:
  - composer.rabbitmq:
      key: env
  - composer.database:
      key: env
  - composer.service:
      key: env
      defaults:
        name: addresses
```
So far, everything works the way I expected. Each composed pillar is merged under the **env** key.
3. Service group
**service/group.sls**
```yaml
include:
  - service.addresses:
      key: addresses
  - service.countries:
      key: countries
```
This part groups the configured services. I expected to see the same behavior as before. However, if **addresses** and **countries** both include **composer.database**, only one of them has the database configuration under its keys.
4. Top file
I use this system in two ways :
- Single service machine :
```yaml
'addresses-machine-pattern-*':
  - service.addresses
```
- Multiple service machine :
```yaml
'group-machine-pattern-*':
  - service.group
```
Because I want to be able to manage both deployment types depending on whether or not the machine has a lot of resources.
The whole point was to prevent configuration duplicates.
### Steps to Reproduce Issue
To reproduce, run pillar.items on the **group-machine-pattern-*** minion and note that only one of the services' pillars has the configured keys.
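Conceptually, what I expect from nested includes is an ordinary recursive dictionary merge. Here is a rough Python sketch of the *expected* result (illustration only, not Salt's actual merge code):
```python
def merge(dest, src):
    """Recursively merge src into dest, the way nested pillar includes should."""
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            merge(dest[key], value)
        else:
            dest[key] = value
    return dest

addresses = merge({}, {"env": {"MS_DATABASE_URL": "theurl"}})
countries = merge({}, {"env": {"MS_DATABASE_URL": "theurl"}})
group = merge({"addresses": addresses}, {"countries": countries})

# Expected: *both* services carry the database keys under their own key.
assert "MS_DATABASE_URL" in group["addresses"]["env"]
assert "MS_DATABASE_URL" in group["countries"]["env"]
```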
[Truncated]
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: 0.23.1
Python: 2.7.5 (default, Jun 20 2019, 20:27:34)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.7.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: centos 7.6.1810 Core
locale: UTF-8
machine: x86_64
release: 3.10.0-957.el7.x86_64
system: Linux
version: CentOS Linux 7.6.1810 Core |
ddnionio/news | 253165088 | Title: @realDonaldTrump: We are in the NAFTA (worst trade deal ever made) renegotiation process with Mexico & Canada.Both being very difficult,may have to terminate?
Question:
username_0: @realDonaldTrump: We are in the NAFTA (worst trade deal ever made) renegotiation process with Mexico & Canada.Both being very difficult,may have to terminate?<br>
via Twitter http://twitter.com/realDonaldTrump/status/901804388649500672<br>
August 27, 2017 at 09:51PM |
topepo/caret | 118416827 | Title: add attribute in model list binary/multi-class...(enhancement).
Question:
username_0: Hello,
I am using different classification models over a set that is multi-class (has 3 levels).
The issue is that besides some more or less familiar models (randomForest, Adaboost, C5.0, gbm, XGB), when I try to use some others, the first challenge I face is knowing in advance whether the algorithm supports **multi-class**.
For instance, *"obliqueRF"* (oblique RandomForest) is only for binary classification, and this happens with many others.
Would it be possible to include that info in the model list table?:
https://username_2.github.io/caret/modelList.html
Thanks,
Carlos.
Answers:
username_1: I support this request. Additionally, it would be nice to see in the model list if a model accepts NAs in the predictors, if it directly accepts factors in the predictors (i.e. without explicit dummy expansion, as glinternet supports it, I believe), if it's a linear or a non-linear model, and if it supports the sub-model trick (since that will give you a free performance boost during model selection, so it's nice to know). I also suggest to add the ability to use class-weights during training to the list, but then that's not fully implemented in the caret resampling loop anyway, so there may not be a broad interest in it.
Additionally, there already is information on implicit feature (or sometimes grouped feature) selection and feature importance capabilities in the docs, but this information is not accessible in a sortable tabular format. Having all this in a sortable table would make dealing with the ever-increasing list of models (which is the great thing about caret, of course) much more convenient.
Otherwise I fear that people may not be able to make use of all the shiny new models you are adding, since when in doubt they will prefer to stick with what they know and never learn about the new stuff. And that would be a shame, wouldn't it!
username_2: * That is setup but requires the model-specific code to have weights implemented. If there are specific models that support weights but this isn't implemented in the model code, let me know.
* For the sub-model trick, you can check that via `unlist(lapply(mods, function(x) !is.null(x$loop)))`.
username_1: Yes, weights are supported but only as a global numeric vector, as far as I understood. Does _train_ do the right thing and match the appropriate weights to the samples actually present in each resampling iteration? The documentation wasn't very verbose in that respect. If yes, then this could be added to the docs on learning imbalanced classes, as it seems to be a suitable alternative approach to equalize the impact of less abundant classes, or am I mistaken?
username_2: Accepting missing data is hard to determine programatically. Does anyone have a list of models fitting these tags (beyond the ones that I tagged above)?
Status: Issue closed
username_0: Thanks.
`obliqueRF` is another one to include under the _accept 2 class outcomes_. |
vaadin/flow | 459060196 | Title: Button click shortcut listener does not init value change on text field
Question:
username_0: A simple use case: a form with some text fields and a save button. The user changes stuff and wants to submit the form by using the enter key.
The text fields are used as they are (value change mode ON_CHANGE), bound with a simple binder. To achieve the "on enter", we set a click shortcut.
Now, when filling all fields and using the mouse to click the button, everything works fine.
When filling the fields and hitting enter, the currently focused field does not get submitted. Seems like the blur / on_change event is not fired for that element, since it is still focused.
Using a self-defined shortcut listener and the button method `clickInClient` does not solve the problem
Setting the value change mode to EAGER makes it work as a workaround.
Status: Issue closed |
LegionDark/Issues | 213582816 | Title: aegis stage 5 CS not working
Question:
username_0: received my alcina but can't get the CS after
**Date & Time**:
march 12th 2017
**FFXI Client Version (use `/ver`)**:
/ver
**Server's Expected Client Version matches, yes/no? (use `$ecv`)**:
$ecv
**Character Name**:
**Nation**:
**Job(level)/Sub Job(level)**:
**NPC or Monster or item Name**:
**Zone name**:
**Coordinates (use `$where`)**:
**ffxiah.com link (for items issues only)**:
**Multi-boxing? (multiple clients on same connection)**:
**Steps To Reproduce / any other info**:
Answers:
username_1: Please fill out the template.
Status: Issue closed
username_0: im sey im bad with computers and my english is bad><
username_0: sry* |
ionide/ionide-fsgrammar | 1063091483 | Title: let f = (*) 2 is highlighted as a comment
Question:
username_0: **Describe the bug**
Using the multiplication operator as a prefix function (i.e. `(*)`) is highlighted as a comment.
**To Reproduce**
Steps to reproduce the behaviour:
1. Create a `.fs` file
1. Write `let double = (*) 2`
1. The `*)` is wrongly highlighted as a comment
**Expected behaviour**
It should be highlighted as a regular prefix operator, such as `(+)`
**Screenshots**

**Environment**
- OS: Windows 10
- VSCode version: 1.62.3
Answers:
username_1: Hum,
I suppose this is because the comment rules are placed before the keyword rules, like way higher...
The comment rules are at rank 2 while the keyword rules are at 16.
I am unsure if we can change the rank easily without breaking stuff.
username_2: I think all you need is a better regex. The `"begin"` pattern is already correct: it won't allow a closing paren right after the opening `(*`:

The problem is that the `"end"` pattern is too permissive:

It's just a matter of making the `"end"` pattern symmetrical with `"begin"`, like this:
```diff
diff --git a/grammars/fsharp.json b/grammars/fsharp.json
index 3c70cb8..527d097 100644
--- a/grammars/fsharp.json
+++ b/grammars/fsharp.json
@@ -547,7 +547,7 @@
{
"name": "comment.block.fsharp",
"begin": "(\\(\\*(?!\\)))",
- "end": "(\\*\\))",
+ "end": "((?<!\\()\\*\\))",
"beginCaptures": {
"1": {
"name": "comment.block.fsharp"
@@ -571,7 +571,7 @@
},
{
"name": "comment.block.markdown.fsharp.end",
- "match": "((\\*)+\\))",
+ "match": "(((?<!\\()\\*)+\\))",
"captures": {
"1": {
"name": "comment.block.fsharp"
````
I tried it locally and it works:

username_1: PR #162 fixes this specific issue.
However, it doesn't cover cases like `(****)` because the negative lookbehind needs to have a fixed length.
Status: Issue closed
|
denzilferreira/aware-client | 173615087 | Title: Settings to control hashing
Question:
username_0: Hi,
I'd like to propose some settings that can adjust hashing of sensitive information. In summary:
* An option to set the hash algorithm
* An option to salt the values hashed
* Abstracting out the hashing to another function
* Option to specify different hash functions for different types of data
In detail:
Option to set the hash algorithm + separate function
* Right now, in Android and iOS, I see sha1 hard-coded in different places. We only need to worry about reversals, and the difficulty of brute-forcing things like phone numbers doesn't differ much between them all.
* The abstracting out the hashing to another function is needed for other things, so may as well make it configurable.
* I would suggest options be at least md5, sha1, sha256. sha1 should be default for backwards compatibility. Maybe md5 could be excluded, but could be useful if researchers need to compare to other data sources.
* Controlled by variable `hash_function` which is the string of hash function names.
Option to salt the values hashed
* Add a new variable, `hash_salt`. It is a string defaulting to the null string `""`.
* This really should be done, there are websites which can reverse hashes of phone numbers, as students have found out. Salt doesn't protect against brute force, but at least our hashes should not be in rainbow tables!
* `hashed_value = H(value || salt_string)`. This is string concatenation. I don't think any of the extra security of HMAC is necessary here, since the salt_string is public. (A quick sketch follows below.)
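To make the proposal concrete, here is a small Python sketch of the intended behaviour (illustrative only - the real implementation would be Java in the Android client, and the function name here is made up):
```python
import hashlib

def hash_value(value, salt="", algorithm="sha1"):
    """Return H(value || salt); sha1 stays the default for compatibility."""
    digest = hashlib.new(algorithm)  # e.g. "md5", "sha1", "sha256"
    digest.update((value + salt).encode("utf-8"))
    return digest.hexdigest()

# With a study-specific salt, precomputed rainbow tables of plain phone
# numbers no longer apply, even though the salt itself is public.
assert hash_value("5551234") != hash_value("5551234", salt="study-42")
```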
Option to specify hash functions for different types of data
* This means that, for example, you could specify a different hash function for phone numbers as compared to other things.
* The motivation for this is that we are doing a study, and we need to connect app-recorded phone numbers with numbers collected by surveys. Asking for whole phone numbers is too invasive, so we settled on collecting only the last six digits. We need to be able to uniformly hash this
* I would expect implementation would be something like adding a variable `hash_function_phone`, and have some way to use this if it is set, otherwise use the default `hash_function`. People with special needs could define a custom `hash_function` that has their policy.
By the way, Purple Robot uses the function `android.telephony.PhoneNumberUtils.formatNumber` to standardize the number formatting. This was quite useful and important. We should consider this, but for the non-deprecated `formatNumber` functions, we need to specify a country code and I'm not sure what to use there.
What do you think?
- Richard
Answers:
username_1: Hi Richard,
Good points! It's been a while since we have gone through the privacy measures on the framework. At the time, SHA1 was the standard, but yes, we should move on to SHA256 or even SHA512.
As far as I know, it's impossible to get one's phone number programmatically on Android, except for rooted devices, but not on general population devices. Why not use the participants' email? We do have the Google Auth plugin that you could use to match these without being so invasive as you put it.
There are a few caveats regarding these changes:
- if you have already collected data with a specific participant and you recruit them again, if the hash changed, you won't know it's the same participant. This is important for longitudinal studies. Not a problem if we always pick the same hash algorithm.
- the non-existing uniform hash of phone numbers: this is actually intentional. Like this, you would get always the same hash for the same string on your device, but maybe not the same if on another device (if the number was input on the contact list differently). This blocks cross-device hash matching, as in, the same contacts are present in both devices, hence a common contact. This could have significant impact on social-engineering attacks.
username_0: Hi,
Regarding getting phone numbers: we do not need phone numbers of participants, we know who the participants are. We need to know something about who the subjects are calling. For that, we have the remote phone number.
As for the non-uniform hash: it seems there are better ways to prevent linking. They could be hashed with a random salt that never leaves the device, and then there is actual security. Or, say, the device ID if you want to be a bit less secure. (Even using device_id and if it was public, hash(data||device_id), there is no way to correlate things without brute-force reversing.) (Another random thought: are incoming and outgoing numbers formatted the same way? Does incoming use )
Still, in our use cases we need to be able to correlate remote contacts, so even if it is intentional we need to improve things. I'm well aware of re-creating the social network; that is related to our research.
[theoretical, for reference: As for hash functions, I have tried to see what the situation with SHA1 is as it applies to us. There is a risk of collision attacks, but this actually doesn't apply to us (it applies to things like digital signatures). We are concerned with first preimage attacks, which is much harder and still theoretical even for md5. Plus, even without standardizing the formats, there are so few bits of entropy that it could just be directly brute forced. So debating hash functions is probably mostly theoretical.]
Given the above, we can mostly choose what we want. sha256 seems like a reasonable option to provide.
As for changing the hash function during studies, that is indeed a problem. That is why it is server-configurable (and the researcher needs to beware), but it defaults to sha1 so the default case preserves compatibility.
As in all my changes, it would default to the existing behavior, and the options do not even need to be presented in the client interface or the upstream Aware server, so your studies are not affected. I would very much like as much of our changes to be accepted upstream as possible, to minimize the chance of a hard fork. Given this, would you like a PR even if you won't use the changes?
- Richard
username_1: Yes, PR so I can see the changes and merge upstream.
Denzil;
username_0: I just made the PR. The basic things work for me, and hashing is as expected. Let me know if there are stylistic or java-related things I don't know about.
It is configurable, with everything defaulting to the existing settings. Configuration is only by server; users don't see anything (if not using the webservice, these are probably not that useful). It is somewhat flexible, but not as generic as it could be. I took the view that there's no reason to over-engineer now, but at least if improvements are needed later, they are only needed in two functions. Let me know if there are any particular use cases you are interested in now.
Also, I noticed that my phone returns the number in a standard form: no spaces or anything else, even though in the contact list it is different. In this case, the goal of obfuscating things doesn't work. I added an option for the salts, hash_salt="device_id", which will use the device_id as a salt for the hash. This can be applied either only to phone numbers, or to all hashes. I'm not entirely happy using device_id for this, but if you can think of a better idea let me know.
Thanks,
- Richard
Status: Issue closed
username_0: Now merged, so closing. If someone has more requests for hashing, reply here. |
erlang/rebar3 | 180516812 | Title: 3.3.* fails to compile
Question:
username_0: ### Environment ###
A freshly checked out rebar3 from the upstream repo, `git checkout 3.3.0` for the version
### Current behaviour ###
Running bootstrap fails on a freshly checked out 3.3.*
```
./bootstrap
Dependency providers already exists
Dependency getopt already exists
Dependency cf already exists
Dependency erlware_commons already exists
Dependency certifi already exists
===> Verifying dependencies...
===> Compiling rebar
===> Cleaning out bbmustache...
===> Cleaning out certifi...
===> Cleaning out cf...
===> Cleaning out cth_readable...
===> Cleaning out erlware_commons...
===> Cleaning out eunit_formatters...
===> Cleaning out getopt...
===> Cleaning out providers...
===> Cleaning out rebar...
===> Cleaning out relx...
===> Cleaning out ssl_verify_fun...
===> Cleaning out ssl_verify_hostname...
===> Verifying dependencies...
===> Compiling getopt
===> Compiling providers
===> Compiling cf
===> Compiling erlware_commons
===> Compiling bbmustache
===> Compiling relx
===> Compiling eunit_formatters
===> Compiling cth_readable
===> Compiling certifi
===> Compiling ssl_verify_fun
===> Compiling rebar
===> Building escript...
escript: exception error: no match of right hand side value error
in function rebar_prv_escriptize:find_deps_of_deps/3 (src/rebar_prv_escriptize.erl, line 240)
in call from rebar_prv_escriptize:find_deps/2 (src/rebar_prv_escriptize.erl, line 234)
in call from rebar_prv_escriptize:escriptize/2 (src/rebar_prv_escriptize.erl, line 104)
in call from rebar_prv_escriptize:do/1 (src/rebar_prv_escriptize.erl, line 72)
in call from rebar_core:do/2 (src/rebar_core.erl, line 125)
```
### Expected behaviour ###
I would expect it to compile.
Answers:
username_1: This could have been the following thing fixed in 3.3.1:
- [Prevent crash when doing a hash check with missing index, and fetch the index instead](https://github.com/erlang/rebar3/pull/1315)
username_0: I doubt so, it also happens when using 3.3.1 or master. I'll try to kill the index and see if that changes things.
username_0: removing ~/.cache/rebar3 doesn't change it either.
username_0: adding full debug output: https://gist.github.com/username_0/06d84741b9340b6a7f0226e67ce15e38
username_1: From the debug output we've found that `sasl` is seen as a system dep, but not found in the root path, which triggers/fails the checks at https://github.com/erlang/rebar3/blob/master/src/rebar_prv_escriptize.erl#L243-L246 and tries to find it from local deps.
We should see if we can cope with that somehow or if the error is reasonable to expect. I'm thinking the SASL-but-not-in-a-root-lib is a fun edge case here.
username_0: This was created by having `code:root_dir/0` and the `ERL_LIBS` environment variable be two different things. (yay symlinks!)
username_1: ERL_LIBS can be different from root_dir(), it just happens that apps in ERL_LIBS shadowed those in root_dir() -- that's even edgier of a case!
username_0: I see your edge and raise you another edge: ERL_LIBS pointed to a directory that was a symlink to what root_dir() was.
Status: Issue closed
|
UNIZAR-30249-2017-CampusManager/CampusManagerWebapp | 216067653 | Title: Map marker images
Question:
username_0: There is a bug in some users' browsers that prevents the images for the Leaflet map markers from being displayed.
Answers:
username_0: Fixed by hand-writing the path to the image assets in the Leaflet source ([see commit](https://github.com/UNIZAR-30249-2017-CampusManager/CampusManagerWebapp/commit/ca47d1b2a7e03dd8b54d8ef916abb5e526164224)).
Status: Issue closed
|
learningequality/kolibri | 426246148 | Title: User added twice on the list to enroll in class
Question:
username_0: ### Observed behavior
When the admin first manually selects a learner with the checkbox, then searches for text using the `Search` field, and the already-selected learner happens to be among the results and gets selected a second time, this provokes a `Server Error (500)`.


### Expected behavior
Ignore the first selection?
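One way to honor this (a hypothetical sketch, not Kolibri's actual code) would be to deduplicate the selection client-side before submitting:
```python
def unique_selection(selected_ids):
    """Drop repeated learner ids while preserving first-seen order."""
    seen = set()
    return [i for i in selected_ids if not (i in seen or seen.add(i))]

assert unique_selection([3, 7, 3, 9]) == [3, 7, 9]
```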
### User-facing consequences
UI error
### Errors and logs
[kolibri.log](https://github.com/learningequality/kolibri/files/3015798/kolibri.log)
### Steps to reproduce
1. Go to Facility > Classes > Class N > Enroll Learners
2. Select a learner with checkbox
3. Use the `Search` field to filter with a string that gives as a result the previously selected learner and at least one other.
4. Check the `Select all` checkbox
5. Click `Confirm`
### Context
0.12.2.beta3
Ubuntu/Firefox
…
cc @username_1
Status: Issue closed
Answers:
username_1: Fixed in #5322 |
mamund/hal-forms | 1041059792 | Title: Typos in type definition
Question:
username_0: https://rwcbook.github.io/hal-forms/#_code_type_code
"Possible settings for the type value adn teh expected contents to be returned inthe are:"
Should be?:
"Possible settings for the type value and the expected contents to be returned in here are:"
Answers:
username_1: It might also be a good chance to add a note on whether the list of possible values for `type` is exhaustive. That list is very close to HTML input types except `checkbox`, `radio` and `file`. Are other types generally supported, too?
username_2: update got reverted somehow. will fix
and, yes, we should state that the "type" list is not exclusive, with a fallback for when clients don't recognize the value of "type"
username_1: Should we remove the sentence about `type` being an "enumerated attribute"? It's that one, that led me to assume exhaustiveness of the list given. Because, if arbitrary other types are allowed as well, there's no enumeration anymore, is there?
username_2: removing would be OK w/ me. AFAIK, the word's definition does not call for exhaustiveness. but happy to improve clarity here.
username_1: As we're following concepts of HTML pretty closely I had assumed it was a reference to the concept of an enumerated attribute like this: https://stackoverflow.com/questions/4104110/what-are-enumerated-attributes-in-html
username_2: that's a very reasonable assumption. i agree that we should drop "enumerated". it's a small change for a big win. |
robotframework/robotframework | 1178027114 | Title: Distributed Tests unclear
Question:
username_0: My Test Scenario:

Steps:
1. Test.exe shall be deployed on TestPC1 and TestPC2 from GatewayPC
2. The Test.exe shall be executed at the same time on both Test PCs (they rely on each other)
3. The Test.exe on TestPC1 connects to TestPC2 and vice versa to do their tests between each other.
4. Test results are printed on both PCs on stdout and shall be collected.
My Goals:
- Start a .robot test on the gateway pc to execute the test steps above.
My Problem:
- With the Robot-SSH-Library I can't set the necessary environment on the TestPCs like I would do in a .robot file with `Set environment variable`
- The `PythonRemoteServer` seems to be a dead project (last update 2 years ago)
Is there any solution with the Robot-Framework to accomplish my goal?
Status: Issue closed |
everythingtype/jsa | 146424023 | Title: Responsive: Homepage
Question:
username_0: From JSA:
Also, the homepage seems to "stick" while loading, preventing users from scrolling down. I understand that a pause for loading might be necessary, but it feels a little awkward. Can you recommend a way of making this feel more seamless? Perhaps a splash screen like on the desktop?
Answers:
username_1: @username_0 This doesn't have to do with loading. It's a bug-- it thinks the splash screen is visible, even though it's not. I'll work on a fix...
username_1: Fixed
Status: Issue closed
|
NativeScript/NativeScript | 285280071 | Title: Couple of issues on Mac OS Sierra after using NativeScript Setup Command
Question:
username_0: Hello:
I installed NativeScript with the following command:
ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"
And I have a couple of issues that I have been trying to fix, but still I'm a little stuck on that...
on iOS, the command 'tns run ios' works fine, but only if I open the emulator prior to running this command, or unless I retype this command twice.
the message I get on the terminal is this:
Searching for devices...
Unable to find applicable devices to execute operation. Ensure connected devices are trusted and try again.
now on android, after running the command 'tns run android' it shows the following message but it never opens... it kind of times out...
Searching for devices...
Starting Android emulator with image Nexus_5X_API_25
Cannot find connected devices.
Emulator start failed with: Cannot run your app in the native emulator. Increase the timeout of the operation with the --timeout option or try to restart your adb server with 'adb kill-server' command. Alternatively, run the Android Virtual Device manager and increase the allocated RAM for the virtual device.
To list currently connected devices and verify that the specified identifier exists, run 'tns device'.
To list available emulator images, run 'tns device <Platform> --available-devices'.
I was VERY excited to discover your Project, and I like the fact that is very well organized in all aspects. Hope I can be part of this community and look forward to hearing from you.
Happy coding on 2018!!!
Answers:
username_1: Hi @username_0,
Thank you for contacting us and for your interest in NativeScript,
Regarding the first issue with iOS, could you provide some more info about the problem and if there is an error which is thrown while running `tns run ios` for the first time.
About `tns run android`: this seems to be related to Android itself and not to the NativeScript installation script. Most probably the problem is caused by a missing HAXM on your machine, which is needed when starting an emulator.
To fix this issue you could install it manually by opening the following dir: `open $ANDROID_HOME/extras/intel`. There you will find the `Hardware_Accelerated_Execution_Manager` folder. Open it and install the `.dmg` file. Then restart the machine and try again to run the project with `tns run android`.
Let me know if the problem still persists.
Status: Issue closed
|
WayneLambert/portfolio | 1165704651 | Title: Change Docker Build
Question:
username_0: Address the following:
- [ ] Introduce multi-stage build
- [ ] Add permissions to container setup
- [ ] Remove superfluous `runserver` command in `Dockerfile`
- [ ] Update to Python 3.10.2
- [ ] Update any outdated packages |
accelad/blog | 324028807 | Title: Python Language Features Roundup
Question:
username_0: # `Python` Language Features
## Argument passing in `Python` functions
- The difference between passing by reference and passing by value
Essentially, argument passing in `Python` is an assignment operation: the actual argument's `object reference` is assigned to the formal parameter, and the formal parameter cannot change the actual argument's `object reference`.
For `immutable objects` (e.g., numbers, strings, tuples), the formal parameter receives the `object reference` but cannot modify the object, so the actual argument is completely unaffected.
For `mutable objects` (e.g., lists, dictionaries), the formal parameter may call the object's mutating methods. In that case, although the objective fact of which object the actual argument points to has not changed, the attributes of the object referenced by the outer actual argument really have been changed.
```python
def func(sequence):
    sequence.append(4)

def test_func():
    ns = [1, 2, 3]
    ns_ref = ns
    func(ns)
    assert ns is ns_ref
    assert ns == [1, 2, 3, 4]
```
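To make the immutable case above concrete, here is a small counterpart example (rebinding the parameter has no effect on the caller):
```python
def increment(n):
    n += 1  # rebinds the local name only; ints are immutable
    return n

def test_increment():
    x = 1
    assert increment(x) == 2
    assert x == 1  # the caller's binding still points at the original object
```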
On the other hand, note that when using `default arguments` it is not recommended (you should not) to use a `mutable object` as the default value; adopt the second form (`func2`) below instead.
```python
def func1(mutable=[]):
    mutable.append(1)
    mutable.append(2)
    return mutable

def func2(mutable=None):
    if mutable is None:
        mutable = []
    mutable.append(1)
    mutable.append(2)
    return mutable

def test_func():
    assert func1() == [1, 2]
    assert func1() == [1, 2, 1, 2]
    assert func2() == [1, 2]
    assert func2() == [1, 2]
```
## `is` and `==`
- `a == b`
Actually calls the `__eq__` method, checking whether the objects' values are equal.
[Truncated]
```
- `copy.deepcopy`
Recursively traverses all components of the object to achieve a complete copy (in practice this is rarely needed).
```python
import copy

def test_func():
    a = [1, 2, 3]
    b = [4, 5, 6]
    seq1 = [a, b]
    seq2 = copy.deepcopy(seq1)
    assert seq1 is not seq2
    assert seq1[0] is not seq2[0]
    assert seq1[1] is not seq2[1]
``` |
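For contrast, `copy.copy` makes only a shallow copy: the outer container is new, but the inner objects are shared.
```python
import copy

def test_shallow_copy():
    a = [1, 2, 3]
    seq1 = [a]
    seq2 = copy.copy(seq1)
    assert seq1 is not seq2     # new outer list
    assert seq1[0] is seq2[0]   # inner list is shared, unlike with deepcopy
```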
jrodal98/conf-edit | 533660471 | Title: source posthook doesn't work
Question:
username_0: This might not technically be a bug, but rather a limitation of how the source command works.
You can't have something like `source ~/.zshrc` as a posthook command, which kind of sucks. I should research to see if there's a way around this.
Answers:
username_0: I'm confident that this is simply a feature of the source command. Finding a workaround could introduce security vulnerabilities, which is why the source command doesn't work in this context in the first place.
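The underlying constraint: `source` mutates the calling shell's own state, while a posthook necessarily runs in a child process, and a child cannot modify its parent's environment. A quick Python illustration of the same limitation (POSIX shell assumed):
```python
import os
import subprocess

# The child shell can set and read its own variable...
subprocess.run(["sh", "-c", "export DEMO=1; echo $DEMO"], check=True)  # prints: 1

# ...but the parent process never sees it, which is exactly why a posthook
# cannot emulate `source ~/.zshrc` for the invoking shell.
assert "DEMO" not in os.environ
```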
Status: Issue closed
|
codeforberlin/tickets | 367898070 | Title: Visualize public transport coverage
Question:
username_0: The official [„Nahverkehrsplan“](https://www.berlin.de/senuvk/verkehr/politik_planung/oepnv/nahverkehrsplan/download/nvp2019-2023/NVP_2019-2023_Entwurf_Stand31Juli2018.pdf) (public transport scheme?) for Berlin specifies that 80% of homes in Berlin need to be located within a given distance of a public transport stop where services run at least every 20 minutes (page 100 of the document, page 116 of the PDF).
The official scheme also cites a study to verify that the given goals have (not) been met in the past. We could either try to explicitly validate this information using census data, openstreetmap, public transport APIs or - which might make more sense IMHO - create a map that simply displays those locations within the city for which the requirements made by the „Nahverkehrsplan“ are not met. |
fonttools/skia-pathops | 712741235 | Title: SkConic.computeQuadPOW2 method is not in public include/core/* headers
Question:
username_0: In `src/python/pathops/_skia/core.pxd` we import some symbols that are not exported in the Skia `include/` public headers.
This makes it hard to compile the pathops bindings with an external pre-built copy of the Skia library, without access to the skia src/ directory where SkGeometry.h is located.
In particular it's the `SkConic` struct and its `computeQuadPOW2` method, that are used inside `Path.convertConicsToQuads` to compute the number of quads needed to approximate a conic given a tolerance.
Given that function is relatively trivial, we should just reimplement it in cython
https://github.com/google/skia/blob/52a4379f03f7cd4e1c67eb69a756abc5838a658f/src/core/SkGeometry.cpp#L1198-L1231
Status: Issue closed |
cloud-gov/cg-site | 844923812 | Title: Better home for deprecation notices
Question:
username_0: In order to keep deprecations visible and incidents/maintenances discoverable, we want to keep deprecation notices around without cluttering "real" maintenance on StatusPage.
This might look like one of these:
- Keep posting on statuspage, but with a one- or two-sentence description and a link to a longer body on the website
- Just post on the website
- something else entirely |
crate/crate | 358731768 | Title: Add ability to sort by a percentile
Question:
username_0: **Use case**: If I am querying multiple percentiles, it would be great if I could order by a specific index
**Feature description**: When executing a query like:
```
SELECT count(*), PERCENTILE(field, [ .90, .95, .99 ]) as fieldper ORDER BY fieldper[0]
```
It'd be great if it provided me the ability to order the results by `.90, .95, or .99`
Answers:
username_1: Let me have a look at this for you!
username_2: Looks like we've an issue detecting that `fieldper` is an alias if a subscript expression is also there.
If you use the full function name in the ORDER BY it works:
```
select
count(*), percentile(x, [.90, .95, .99]) as fieldper
from
t
group by country
order by percentile(x, [.90, .95, .99])[1] desc;
```
We'll have a closer look at why addressing it by alias doesn't work.
username_2: This is unfortunately more complex to support than initially expected.
We'll have to change some internal representations. It's something we want to do but it has no immediate priority.
We're keeping this open as feature addition for the future. Meanwhile the workaround by addressing the expression directly without alias can be used. |
plantain-00/schema-based-json-editor | 438717780 | Title: npm install not found
Question:
username_0: I think the last version or change has a problem installing with npm.
unable to find a readme for [email protected]

Answers:
username_1: Only the README.md is not provided for the npm publish
It should be OK for v7.22.0
username_0: It was a problem on my end; now I am using it, thank you for your work :)
MaartenGr/BERTopic | 974061015 | Title: returning N of Words per topic in topics_over_time method
Question:
username_0: Hi there!
I'm trying to change the number of words the `topics_over_time` method returns
---
By default, you return 5 words (line 506):
https://github.com/username_1/BERTopic/blob/80c9fa174e4671b49f1697b44368e23ee0d9bd18/bertopic/_bertopic.py#L505-L508
I slightly changed the code (see here: https://github.com/hcss-utils/hcss-BERTopic/commit/081388a30524239d14f2085d525e6e156f76fbb1), but instead of, say, 15 words, the function only returns 10.
Could we return more than 10 words, and if so, what am I doing wrong?
Thanks!
Answers:
username_1: There are two things happening here. First, to increase the number of words per topic, you'll need to change `top_n_words` when initializing BERTopic. Next, you will indeed need to change the code as you'll have done before and it should work.
Do note that I purposefully did not allow users to change that value, as topic representations are typically best with a maximum of 5 words. Adding words does add information, but not significantly.
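For reference, a minimal sketch of the first step (the second step is your source change, so whether the extra words actually come through depends on that patched `topics_over_time`):
```python
from bertopic import BERTopic

# Keep 15 words per topic representation instead of the default,
# so a patched topics_over_time has enough words to slice from.
topic_model = BERTopic(top_n_words=15)
```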
Hopefully, this helps a bit. Let me know if you have any other questions!
username_0: ok, totally makes sense!
thanks for the reply
Status: Issue closed
|
thedaviddias/Front-End-Checklist | 893403308 | Title: Looking to navigate around issues with overlapping devices: what is primary, the Firefox browser versus the Apple product?
Question:
username_0: I guess it's a question of the hierarchy of devices versus software (the hardware versus software hierarchy): what takes precedence in terms of saving passwords, etc.? The operating system hierarchy? Sorry, thinking out loud here
Answers:
username_0: Awesome thanks! |
yapplabs/ember-radio-button | 71573051 | Title: Run loop buggy behaviour
Question:
username_0: Hello,
I found a "non-blocking" bug in the run loop (I guess it's there). In fact, it blocked my UT... but I found a "work around" fix.
I describe it in the way I made it happen.
I built a model containing a boolean:
```javascript
var MyModel = DS.Model.extend(EmberValidations.Mixin, {
  switcher: DS.attr('boolean')
});
MyModel.reopen({
validations : {
switcher: {
inclusion: {
in: ['true', 'false']
}
}
}
});
```
I use my value on a radio button. A radio button can't have boolean values; that's why my validation uses strings.
All this works, the type changes between boolean and string on the need and the validation works fine. And the REST part in Java gets the boolean value.
My problem is when I submit my page with no error.
If everything is fine, I do a forward to another page (a list of MyModels). Then, after some seconds, I have an error in the console:
```javascript
TypeError: this.$(...) is undefined
```
Following the error, it happens in this part of the generated code:
```javascript
_updateElementValue: (function () {
    Ember['default'].run.next(this, function () {
        this.$().prop("checked", this.get("htmlChecked"));
    });
}).observes("htmlChecked")
```
I use more than one radio button, but the only one making the error is the "true / false" one.
The "work around" fix I use is to define my property as string and not boolean. Everything works fine all the same, and I don't have the error anymore.
That's all, folks !
Answers:
username_0: When I do that manually, it works, no problem.
But when I do that in my UT, as it is automated, it submits the form while "run.next" has still not been performed, and I get the error.
Of course, all tests I do after "submit" are in an "andThen".
I guess it means QUnit doesn't consider the radio buttons' "run.next" before doing what is in the "andThen".
PS: I managed to find a "fix": I have to do something like this when I save my model:
```javascript
model.save().then(function () {
Ember.run.next(...[uses a flashMessages to show success]
```
I wonder why the radio buttons' run loop is not finished when I save my model, as it has already been validated by ember-validations.
username_1: Hi @username_0 - It sounds like you are using version 0.1.3 (or lower) of ember-radio-button.
We are now on version 1.0.5 and I think some of the changes we made could help with these issues. Are you using htmlbars? If so, can you try it out with the newer code?
username_0: As I checked the version, I noticed a HUGE mistake...
I use ember-radio-button**s**... I am so deeply sorry...
Status: Issue closed
username_1: No problem :) |
VisualDataWeb/WebVOWL | 188978723 | Title: Function template with simple example?
Question:
username_0: Hello,
Thank you very much for sharing and developing WebVOWL. I am not an expert in ontologies; my focus would be using the graph visualization for a slightly different purpose.
I was trying to create a new data file using the template.json but it isn't getting loaded. I tried filling in the expected values but I still don't understand what is wrong.
Do you think it would be possible to provide a template that has a simple functional example with 2~3 nodes?
Many thanks.
Status: Issue closed
Answers:
username_1: Hi,
I used [OntoBench](http://ontobench.visualdataweb.org/) to generate a simple ontology with an ObjectProperty and visualized it with WebVOWL: http://visualdataweb.de/webvowl/#iri=http://visualdataweb.de/ontobench/ontology/ontology.ttl?features=objectprop.
Just go to `Export` at the bottom and select `Export as JSON` and take a look at it.
You could also use the pregenerated ontologies like our [benchmark.json](https://github.com/VisualDataWeb/WebVOWL/blob/master/src/app/data/benchmark.json) as example.
Cheers
username_0: Thank you very much! That was exactly what was needed.
Perhaps worth putting that tiny example in the data folder too?
username_1: Glad it helped 😄
The data folder is only used for the *official* WebVOWL page located here: http://vowl.visualdataweb.org/webvowl/index.html. We suggest to use OWL2VOWL for the JSON creation because the manual creation is really error prone. Since it sounds like your problem is a special case I think it won't really help putting it there, but thanks for the suggestion!
BTW a tip if you create the JSON manually. You can skip objects in `classAttribute` and `propertyAttribute` and put everything in the object in the `class`/`property` array as seen here: [benchmark.json#L613](https://github.com/VisualDataWeb/WebVOWL/blob/master/src/app/data/benchmark.json#L613)
username_0: Thank you very much. It was possible to fully automate the generation thanks to your help. 👍 |
microsoft/playwright | 774383373 | Title: [BUG]
Question:
username_0: **Context:**
- Playwright Version: [1.6]
- Operating System: [CentOS Linux release 7.9.2009 (Core)]
- Node.js version: [12]
- Browser: [Chromium]
- Extra: [run with metamask extension]
```
browserType.launchPersistentContext: Protocol error (Browser.getVersion): Target closed.
=========================== logs ===========================
<launching> /home/centos/.cache/ms-playwright/chromium-823944/chrome-linux/chrome --disable-background-networking --enable-features=NetworkService,NetworkServiceInProcess --disable-background-timer-throttling --disable-backgrounding-occluded-windows --disable-breakpad --disable-client-side-phishing-detection --disable-component-extensions-with-background-pages --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=TranslateUI,BlinkGenPropertyTrees,ImprovedCookieControls,SameSiteByDefaultCookies,LazyFrameLoading --disable-hang-monitor --disable-ipc-flooding-protection --disable-popup-blocking --disable-prompt-on-repost --disable-renderer-backgrounding --disable-sync --force-color-profile=srgb --metrics-recording-only --no-first-run --enable-automation --password-store=basic --use-mock-keychain --user-data-dir=/home/centos/zed-automation/utils/browser/test-user-data-dir --remote-debugging-pipe --auto-open-devtools-for-tabs --disable-extensions-except=/home/centos/zed-automation/utils/browser/metamask-chrome-8.1.3 --load-extension=/home/centos/zed-automation/utils/browser/metamask-chrome-8.1.3 --start-maximized about:blank
<launched> pid=8914
[err] [8914:8914:1224/122032.062408:FATAL:zygote_host_impl_linux.cc(117)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux/suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.
[err] #0 0x5622db9afd89 base::debug::CollectStackTrace()
[err] #1 0x5622db921533 base::debug::StackTrace::StackTrace()
[err] #2 0x5622db931b60 logging::LogMessage::~LogMessage()
[err] #3 0x5622da3cdc6e content::ZygoteHostImpl::Init()
[err] #4 0x5622db8cc3b8 content::ContentMainRunnerImpl::Initialize()
[err] #5 0x5622db8ca47b content::RunContentProcess()
[err] #6 0x5622db8ca5cc content::ContentMain()
[err] #7 0x5622d8f03c4c ChromeMain
[err] #8 0x7fc70f12e555 __libc_start_main
[err] #9 0x5622d8f03a6a _start
[err]
[err] Received signal 6
[err] #0 0x5622db9afd89 base::debug::CollectStackTrace()
[err] #1 0x5622db921533 base::debug::StackTrace::StackTrace()
[err] #2 0x5622db9af92b base::debug::(anonymous namespace)::StackDumpSignalHandler()
[err] #3 0x7fc7148c8630 (/usr/lib64/libpthread-2.17.so+0xf62f)
[err] #4 0x7fc70f142387 __GI_raise
[err] #5 0x7fc70f143a78 __GI_abort
[err] #6 0x5622db9ae8b5 base::debug::BreakDebugger()
[err] #7 0x5622db931fd2 logging::LogMessage::~LogMessage()
[err] #8 0x5622da3cdc6e content::ZygoteHostImpl::Init()
[err] #9 0x5622db8cc3b8 content::ContentMainRunnerImpl::Initialize()
[err] #10 0x5622db8ca47b content::RunContentProcess()
[err] #11 0x5622db8ca5cc content::ContentMain()
[err] #12 0x5622d8f03c4c ChromeMain
[err] #13 0x7fc70f12e555 __libc_start_main
[err] #14 0x5622d8f03a6a _start
[err] r8: 0000000000000000 r9: 0000000000000000 r10: 0000000000000008 r11: 0000000000000206
[err] r12: 00007ffd6e6a3020 r13: aaaaaaaaaaaaaaaa r14: 00007ffd6e6a3030 r15: 00007ffd6e6a27b0
[err] di: 00000000000022d2 si: 00000000000022d2 bp: 00007ffd6e6a1f60 bx: 00007ffd6e6a27d0
[err] dx: 0000000000000006 ax: 0000000000000000 cx: ffffffffffffffff sp: 00007ffd6e6a1e28
[err] ip: 00007fc70f142387 efl: 0000000000000206 cgf: aaaa000000000033 erf: 0000000000000000
[err] trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[err] [end of stack trace]
[err] Calling _exit(1). Core file will not be generated.
<process did exit: exitCode=1, signal=null>
```
**Code Snippet**
```
const pathToExtension = require("path").join(__dirname,"metamask-chrome-8.1.3");
const userDataDir = __dirname + "/test-user-data-dir";
async init() {
[Truncated]
args: [
`--disable-extensions-except=${pathToExtension}`,
`--load-extension=${pathToExtension}`,
`--start-maximized`,
],
devtools: true,
chromiumSandbox: true,
timeout: 0,
});
const [metamaskPage] = await Promise.all([
this.browserContext.waitForEvent("page"),
this.browserContext.backgroundPages()[0],
]);
this.metamask = metamaskPage;
return this.metamask;
}
```
**I have an automation test project for the MetaMask extension. When I run it locally it passes, but when I push it to the server it does not. How can I fix this?**
Answers:
username_1: Could you try with `chromiumSandbox: false` if this fixes your problem? This disables the sandbox and its less secure, but seems like you are testing a chromium extension which is known, so you can trust it theoretically.
username_2: Closing as per lack of feedback, please file it again if this is still an issue!
Status: Issue closed
|
kimlimjustin/xplorer | 1013948171 | Title: markdown Link not rendered in docs
Question:
username_0: ### Description
the link in https://xplorer.vercel.app/docs/common_problems#xplorer-is-unstable-after-installing-it hasn't been rendered as an `<a>` tag
### Steps To Reproduce
_No response_
### Expected behavior
a link
### Xplorer Version
v0.1.0
### Operating System Version
Windows, android
### Additional Information
_No response_
Answers:
username_1: Which a tag do you mean?
username_0: `a` tag within the `xplorer is unstable after installing` detail tag.
username_1: Ah I see, I'll fix it together with the merging of branch `theme_customization`, thanks! Also, please focus on the bug or contribute to Xplorer itself.
username_0: sure thing @username_1
username_1: Fixed together via #127
Status: Issue closed
|
facebook/react-native | 188867752 | Title: add style opacity to animated.view in react native 0.37
Question:
username_0: this code works perfectly on React Native 0.35 or earlier, but after updating to React Native 0.36 and 0.37 it does not work
```js
class LoginViewComponent extends Component {
constructor(props) {
super(props);
this.state = {
scale: new Animated.Value(1.5),
offset: new Animated.Value((window.height / 2) - (125)),
opacity: new Animated.Value(0)
};
}
componentDidMount() {
setTimeout(() => {
Animated.timing(
this.state.offset,
{
toValue: 0,
duration: 1000
}
).start();
Animated.timing(
this.state.scale,
{
toValue: 1,
duration: 1000
}
).start();
setTimeout(() => {
console.log("it is really called");
Animated.timing(
this.state.opacity,
{
toValue: 1,
duration: 500
}
).start();
}, 800);
}, 1200);
}
render() {
var scale = this.state.scale;
return (
<View style={styles.container}>
{/* this animated view works */}
<Animated.View style={[{ alignItems: 'center', position: 'absolute', top: 0, left: 0, right: 0, bottom: 0 }, { transform: [{ scale }, { translateY: this.state.offset }] }]}>
<Image source={require('../Assets/Images/logotype.png')} style={styles.imageLogo} />
</Animated.View>
{/* this animated view do not works */}
<Animated.View style={{ opacity: this.state.opacity }}>
<FinalView />
</Animated.View>
{/* I have tried too */}
<Animated.View style={{ opacity: 1 }}>
<FinalView />
</Animated.View>
</View>
)
}
}
```
It only works if I remove the Animated.View, but then it works without the animation
Answers:
username_1: Is this on iOS, Android, or both?
username_0: @username_1 only on Android, on ios the styles of '<FinalView />' are a bit different
username_2: I am running into the same issue on iOS
username_1: cc @username_3 @username_4
username_2: Oops. Sorry I figured it out. I had some bad styles after updating. :(
username_1: Ok so this may still be Android only. @username_0, can you provide a simple example of this bug on rnplay.org ?
username_3: @username_0 What exactly happens when it does not work? Is there an error or just no animation? Could you also provide a small example that reproduces the issue?
username_0: @username_1 @username_3 this is the example https://rnplay.org/apps/u0mzZw , the animation just doesn't work even when I set opacity to 1
```js
<Animated.View style={{ opacity: 1 }}>
  <FinalView />
</Animated.View>
```
I am testing on a Nexus 5x with Android 7.0
username_4: I'm confused -- rnplay example seems to work for me?
username_5: Yeah I'm running into this too--views aren't visible on Android when animating opacity.
username_5: It won't happen on rnplay because Exponent uses 0.36, not 0.37
username_5: Got it--I think it has to do with `position: relative` and `position: absolute`: https://rnplay.org/apps/bOo6Gw
Here's a repo to play with: https://github.com/username_5/RNAndroidOpacityBug
username_6: This seems to still be happening in 0.40 (can anyone else confirm?). In my case I use an interpolated `Animated.Value` to set my opacity. The initial value doesn't seem to work. Once it is changed via `Animated.timing` the opacity starts working again.
Has this been fixed for anyone else?
username_7: @username_6
It's happening for me in 0.41.2. Is there a way to find out what version of RN will include this PR?
username_8: Hi there! This issue is being closed because it has been inactive for a while.
But don't worry, it will live on with Canny! Check out its new home: https://react-native.canny.io/feature-requests/p/error-adding-opacity-to-animatedview-in-react-native-037
Status: Issue closed
username_9: Hi,
There is still something wrong with setting opacity as an AnimatedValue:
```js
this.state = {
opacityValue: new Animated.Value(0),
}
animate() {
Animated.sequence([
Animated.timing(
this.state.opacityValue,
{
toValue: 1,
duration: 400,
easing: Easing.linear,
useNativeDriver: true,
}
),
Animated.timing(
this.state.opacityValue,
{
toValue: 0,
duration: 800,
easing: Easing.linear,
useNativeDriver: true,
}
),
Animated.delay(Math.floor(Math.random() * 10) * 100)
]).start(() => {
this.animate();
});
}
```
And then on render:
```js
<Animated.Image
style={{
opacity: this.state.opacityValue,
}}
source={{ uri: this.props.techImages[this.state.shownImageIndex] }}
resizeMode={'contain'}
/>
```
When I remove opacity: this.state.opacityValue or set it to a constant value (1), the view is rendered; otherwise I get this cryptic error:

This error is only reproduced on Android; I confirmed that on iOS everything works fine.
I had this error on react-native 0.51.0
username_8: Please open a new issue. |
anvaka/panzoom | 617916865 | Title: onTouch to have behavior like beforeMouseDown
Question:
username_0: Hi,
I am using beforeMouseDown to bypass panzoom when dragging an element; I want to be able to have the same behavior in onTouch.
Currently:
```js
function onMouseDown(e) {
  // if client does not want to handle this event - just ignore the call
  if (beforeMouseDown(e)) return;
```
Could something similar be done for onTouch?
```js
function onTouch(e) {
  // let the client override the touch behavior
  beforeTouch(e);
```
Is it possible to have not just event propagation, but selective disabling of the gesture?
Thank you for the code, I'm forking something here to cover my use case :) |
TimOliver/TOCropViewController | 787723520 | Title: xcode TOCropToolbar.m UIImage 7 error : Use of undeclared identifier 'UIImageSymbolConfiguration'
Question:
username_0: 7 errors in the TOCropToolbar.m file
Xcode: 10.1
1:
```
if (@available(iOS 13.0, *)) {
UIImage *image = button.imageView.image;
button.imageEdgeInsets = UIEdgeInsetsMake(0, 0, image.baselineOffsetFromBottom, 0);
}
```
error: Property 'baselineOffsetFromBottom' not found on object of type 'UIImage *'
2,3,4,5,6,7:
```
if (@available(iOS 13.0, *)) {
return [UIImage systemImageNamed:@"checkmark"
withConfiguration:[UIImageSymbolConfiguration configurationWithWeight:UIImageSymbolWeightSemibold]];
}
```
error: Use of undeclared identifier 'UIImageSymbolConfiguration'
?
Answers:
username_1: Ah. I guess when I added that, I set Xcode 11 to be the base version that this library supports. It should be easy enough to convert that to defines instead.
Is there any particular reason why you're still using Xcode 10?
username_0: hi!
How do I convert it?
Yes! My same library package only works in Xcode 10.
username_0: @username_1 Do you have any suggestions?
username_2: @username_0 did you solve that?
stargate/stargate | 923623036 | Title: Add possibility to disable parallel filtering during document search
Question:
username_0: The document search service should be extended to support a parameter that can turn off the in-parallel candidates filtering. Although the parallel filtering is optimized for latency, in some cases it produces extra load on the backing data store. The user should be able to disable it.
Here is a quick explanation of why the parallel filtering was introduced in the first place:
```
total expression (T) = 3
candidates expressions (C) = 2
candidates after first expression (N)
candidates query latency (l)
case 1: first C match all N, second C match all N (better latency, same load)
- sequentially we would do N + N queries to recognize all match, latency N * (l1 + l2)
- parallel we would do N + N queries, but as we are parallel latency is MAX(N * l1, N * l2)
case 2: first C match all N, second C match 0 (better latency, same load)
- sequentially we would do N + N queries to recognize that we have no matching document, latency N * (l1 + l2)
- parallel we would do N + N queries, but as we are parallel latency is N * l2
case 3: first C match 0, second C match all N (same latency, more load)
- sequentially we would do N queries to recognize that we have no matching document, latency N * l1
- parallel we would do N + N queries, but as we are parallel latency is N * l1
** the overhead for parallel processing is assumed to be 0
```
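To make the trade-off concrete, here is the same analysis with hypothetical numbers plugged into the formulas above (assumed values: N = 100 candidates, l1 = l2 = 10 ms):
```
case 1: sequential latency = 100 * (10 + 10) = 2000 ms
        parallel latency = MAX(100 * 10, 100 * 10) = 1000 ms  -> same 200 queries, half the latency
case 3: sequential = 100 queries, latency = 100 * 10 = 1000 ms
        parallel = 200 queries, latency = 1000 ms             -> same latency, 100 wasted queries
```
The parameter requested in this issue lets the user opt out of that extra load when it matters more than latency. |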
global-urban-datafest/grx-contracts | 58461956 | Title: Could you add city prefix to repo name?
Question:
username_0: Hey! Congrats on creating the repo!
Could you please add a prefix to the repo name? Since we've got teams from all over the world under this organization, adding a city prefix will make it easier to tell which project is from which city. GitHub will automatically track the renaming, so as long as nobody else creates a repo with your old repo name, nothing should be broken :D |
naser44/1 | 109356739 | Title: October 1: World Music Day.
Question:
username_0: <a href="http://ift.tt/1VssRXJ">October 1: World Music Day.</a> |
amoeba/decaldevbook | 1120333522 | Title: Chapter: Handling unsupported messages
Question:
username_0: It's helpful for developers to know how to handle messages that come out of ServerDispatch, including handling messages Decal doesn't parse because its messages.xml is out of date.
Resources for writing this up:
- https://gitlab.com/trevis/funkwerks/-/blob/master/FunkWerksPlugin/Plugin.cs
- https://gitlab.com/trevis/funkwerks/-/blob/master/ACMessageDefs/MessageHandler.cs |
i18next/i18next | 119809775 | Title: Typo in 2.0.0?
Question:
username_0: This line throws an error for me (I'm using [email protected], not sure if it makes any difference):
https://github.com/i18next/i18next/blob/2.0.0/lib/BackendConnector.js#L62
`Cannot read property 'allowMultiLoading' of undefined`
Should it be `this.backend.options.allowMultiLoading`? That's defined. Happy to open a PR if you can confirm this issue.
Answers:
username_1: It should be on `this.options.backend.allowMultiloading`. Not fully sure what's the issue. Did you add the XHR backend in your code but not add backend options in init:
```
i18next.init({
backend: {...}
});
```
While writing this I see the point... you're right... when I allow setting options directly on an instance of a backend and adding that via `i18next.use`, I need to use the options defined there.
Will publish an update asap. Meanwhile, as a fast fix, just add the backend options to the init options of i18next, like https://github.com/i18next/i18next-xhr-backend#backend-options --> the preferred way.
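For completeness, the instance-options path that triggered this would look roughly like the sketch below (the constructor signature follows i18next-xhr-backend's class; `loadPath` is just an illustrative option):
```js
import i18next from 'i18next';
import XHR from 'i18next-xhr-backend';

// options attached directly to the backend instance instead of via init options
const backend = new XHR(null, { allowMultiLoading: true, loadPath: '/locales/{{lng}}/{{ns}}.json' });

i18next.use(backend).init({ lng: 'en' });
```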
username_1: oh and thanks a lot for finding this...good to have this fixed in final release.
username_1: can you try with v2.0.0-beta.1...should fix this.
username_0: Hey, forgot to respond, that did fix it. Thanks for the quick fix!
Status: Issue closed
username_1: great...thanks for the confirm. |
enthought/traits | 672092664 | Title: test_enum fails if Pyface is installed but TraitsUI is not.
Question:
username_0: The tests in `test_enum.py` fail if Pyface is installed but TraitsUI is not. If that's fixed, it fails in a different way when Pyface is installed but Pyside2 (or PyQt5) is not.
```
mirzakhani:traits username_0$ python -m venv --clear ~/.venvs/traits
mirzakhani:traits username_0$ source ~/.venvs/traits/bin/activate
(traits) mirzakhani:traits username_0$ pip install -q --upgrade pip setuptools wheel
(traits) mirzakhani:traits username_0$ pip install -q pyface pyside2
(traits) mirzakhani:traits username_0$ pip install -q -e .
(traits) mirzakhani:traits username_0$ python -m unittest -f
........................................................................................................................................................................................................................................................................................................................................................ssssssssssss...s..sss.............................ssssss...............................................................................................................ssssssss............................................ssssssssssssssssE
======================================================================
ERROR: traits.tests.test_enum (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: traits.tests.test_enum
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/Users/username_0/Enthought/ETS/traits/traits/tests/test_enum.py", line 19, in <module>
GuiTestAssistant = pyface.toolkit.toolkit_object(
AttributeError: module 'pyface' has no attribute 'toolkit'
----------------------------------------------------------------------
Ran 580 tests in 7.428s
FAILED (errors=1, skipped=46)
```
Answers:
username_0: This should be a relatively easy fix for the next milestone.
Status: Issue closed
|
Working-Title-MSFS-Mods/fspackages | 1092827726 | Title: G1000 NXi - Direct To Navigation
Question:
username_0: DA62 unable to cancel direct-to navigation.
From G1000 Cockpit Reference Guide (page 41):
"CANCELLING A DIRECT-TO
1) Press the Direct-to Key to display the Direct-to Window.
2) Press the MENU Key.
3) With ‘Cancel Direct-To NAV’ highlighted, press the ENT Key. If a flight
plan is still active, the system resumes navigating the flight plan along the
closest leg."
When in Direct-to Window, Menu Key not active.
Answers:
username_1: @username_0 This GitHub is for our open source mods only, and we do not track NXi issues here. Please direct your feedback to our NXi channel on our Discord.
Status: Issue closed
|
hpi-swa-teaching/RichTextEditing | 619512869 | Title: Default values
Question:
username_0: As a casual user who writes texts sometimes, I want to have default values for all of the predefined structures, in order to reduce the overhead of configuring the structures for myself.
**Acceptance criteria:**
- The following structures have a default value that fits their names
- [ ] `footnote`
- [ ] `heading1`
- [ ] `heading2`
- [ ] `heading3`
Status: Issue closed |
jcc/blog | 201200740 | Title: Hit an error during installation; so far it doesn't seem to affect anything.
Question:
username_0: [Symfony\Component\Debug\Exception\FatalThrowableError]
Class 'Faker\Factory' not found
---
---
php artisan passport:install
Encryption keys generated successfully.
Personal access client created successfully.
Answers:
username_1: Drop the --no-dev flag when running composer install.
Status: Issue closed
|
sda17dev/sda-web | 377313364 | Title: Publication number application and issuance management
Question:
username_0: Publication number list
Publication number details
Publication number registration
Admin - publication number list
Admin - publication number details
Publication number history (changed under the same number)
Answers:
username_0: https://app.moqups.com/slowalk/ddevc2Sl0I/edit/page/a047aa8bd
https://app.moqups.com/slowalk/ddevc2Sl0I/edit/page/aeee2c5c4
https://app.moqups.com/slowalk/ddevc2Sl0I/edit/page/afd2f7913
https://app.moqups.com/slowalk/ddevc2Sl0I/edit/page/a9042a89b
https://app.moqups.com/slowalk/ddevc2Sl0I/edit/page/a5e794bee
username_0: I will update the screens a bit.
username_1: The publication number history (changed under the same number) doesn't seem to be in the screen design document.
Does it refer to the change history on the publication number details screen?
username_1: Publication number list - done
Publication number registration - 90% done; only the form validation script remains.
I will deploy the current state first and continue working next week.
The markup probably won't change, so Mojo can go ahead with their work.
username_0: 1. When editing or changing a publication number, the select button (publication number / organization name) in front of "search existing publication registration numbers" is missing.
- Other services open this in a popup, but I have been trying to avoid building a popup screen for adding items. (I designed it so the panel opens below; please take a look and share your opinion.)
2. I will update the text used and the guidance copy at the top.
3. Once a publication number has been issued, history keeps accumulating whenever it is modified again. Clicking the history at the bottom of the [UC38] government publication registration/delivery - application details page navigates to a separate details screen.
4. The details screen has delete and edit buttons at the bottom, but they are exposed without login, so I will confirm the policy and then apply it to the wireframes.
username_0: <img width="426" alt="2018-11-08 10 49 42" src="https://user-images.githubusercontent.com/6022883/48175217-784bfc00-e34e-11e8-8c24-ea227336e049.png">
The title area on the mobile side looks quite cramped.
username_1: "타 서비스에서는 팝업으로 열리는데, 아이템 추가하는 화면을 팝업으로 처리하는 화면을 안 만들고자 조금 고민하고 있었습니다. (아래로 화면이 열리는 방식으로 잡았는데, 보시고 의견주세요."
- 살펴보았는데요. 제 생각엔 검색결과에서 아이템 선택시에, 지금 와이어처럼 표가 나타나는게 좀 부자연스러워 보이네요. 일반적인 UX는 선택된 발간번호만 텍스트박스에 넣어주는거고 저도 그게 자연스러워 보입니다.
username_1: All the work is complete and deployed.
Status: Issue closed
username_0: - Edit/delete buttons may be added.
- A layer may be needed when "register file" is pressed. |
codeka/wwmmo | 384044848 | Title: Planets that have 10 food congeniality and set to 100% is glitchy
Question:
username_0: This glitch only happens when you colonize a planet that has 10 food congeniality and try to max out the food output. This causes the rest of the planets in the star system to flicker and lag continuously. It's understandable that you can't build anything on these planets, but they are still useful to have to make sure no one gets a foothold too close to your empire. Fixing this is minor, but the glitch may also be putting unnecessary strain on the server. |
Dawgma-1712/scouting-app | 906525799 | Title: JSON building Match Scouting
Question:
username_0: Build a JSON file using data entered from the match scouting screen.
Answers:
username_0: See #4 for match scouting screen
username_0: Also add documentation regarding file format
- make a docs folder in the repo
- make a MD file about the match scouting format
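A sketch of what one scouting record in that format could look like; every field name below is hypothetical and would need to match whatever the match scouting screen actually collects:
```json
{
  "matchNumber": 12,
  "teamNumber": 1712,
  "scoutName": "example-scout",
  "auto": { "crossedLine": true, "ballsScored": 2 },
  "teleop": { "ballsScored": 7, "penalties": 0 },
  "endgame": { "climbed": false },
  "notes": "free-form comments"
}
``` |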
immersive-web/webxr | 924410932 | Title: Expose a way to query a session about the supported features - need to reconsider?
Question:
username_0: I do not seem to be able to reopen the original issue (#952), so filing a new one now that we have independent sets of enabled features on an XR device & on an XRSession.
If my read of the spec is correct, the Permissions API will only return permission status for the device features (which may be persistent across sessions and do not really say anything about the state of the session), so the apps have no way of observing the contents of XRSession's set of enabled features.
My preferred way to solve this would be extending the `XRSession` to expose a set of enabled features. Spec-wise, it seems it could be achieved by turning the not-IDL-exposed `XRSession/set of enabled features` into an IDL-exposed attribute. `XRSession.enabledFeatures` of type `set<DOMString>` (or `set<any>`, although I'm not sure why `DOMString` wouldn't be sufficient here).
Related issues:
immersive-web/real-world-geometry#30
immersive-web/anchors#64
Answers:
username_1: +1 on that
username_2: Yeah, I agree, and when I was adding that field it was with the hope that we might expose it at some point.
username_0: Strawperson proposal:
```webidl
enum XRFeature {
"local",
"local-floor",
...
};
interface XRFeatureSet {
// Alternatively, this could be a DOMString, or any.
readonly setlike<XRFeature>;
};
partial interface XRSession {
// Probably should not be [SameObject] if we want to allow enabling features post-session-creation.
// Alternatively, FrozenArray (which would make it impossible to make it a [SameObject] IIUC).
readonly attribute XRFeatureSet enabledFeatures;
};
```
I don't think there's a big difference between an enum vs `DOMString` (maybe for editorial purposes, but we already have a "feature descriptor" that we keep extending in modules). IMO it should not be `any` (I'm not sure why we're accepting `any`s as required / optional features if we also spec they'll be `ToString()`'ed), but I don't have full context on this so I may be missing something.
Naming TBD, I went with `XRFeature` above but it may very well be `XRFeatureName` or `XRFeatureDescriptor`. This only matters if we want to go with the enum route. I also went with `enabledFeatures` for the attribute name, but this may be confusing if we ever have a feature that needs to be configured post-session-creation (so the feature is granted, but isn't yet enabled / active / etc) so it's probably better to call it `grantedFeatures`.
As for the attribute type, I think we should be returning something that's setlike as the main use case for the apps would be to check whether a specific feature was enabled, and this boils down to `enabledFeatures.has("feature")` check with a setlike.
As for semantics, I'd propose that all enabled features are returned (including the required ones). I think this would make the app logic simpler ("to check whether a feature is enabled, I check if it's in XRSession.enabledFeatures" vs "to check whether a feature is enabled, if it was optional, I check if it's in XRSession.enabledFeatures, otherwise it's enabled because I wouldn't have gotten a session"). As a bonus, the spec text becomes simpler since we'd just need to expose the [session's set of granted features](https://immersive-web.github.io/webxr/#xrsession-set-of-granted-features) to the app.
LMK if this looks reasonable, I can issue a PR if so.
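For illustration, app code against this proposal would read roughly like the following (the `requestSession` call is the existing API; `enabledFeatures` is the proposed attribute):
```js
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['local'],
  optionalFeatures: ['anchors']
});

// Proposed: all granted features are listed, required ones included.
if (session.enabledFeatures.has('anchors')) {
  // The optional feature was granted, so it is safe to use.
}
```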
username_2: Should we just return a set directly? I guess it could get more complex for cases of non-stringy descriptors.
username_3: Another thing to consider: can a feature become available/unavailable during session life? E.g. physical disconnect of hardware, or permissions change, etc?
username_0: Currently, I believe it's not possible, but I've seen discussions that we may want to be able to allow that, e.g. because an app could ask for more features after the session was created. If we ever allow it, we'll need to be certain to spec it so that UAs can still disallow it - I know there are some AR session features that can only be enabled at session creation in our implementation.
username_4: /agenda
username_5: Did we discuss this issue 2 weeks ago? If so does it still need discussing?
username_1: We discussed it but we didn't ask anyone to work on it so nothing happened :-) |
Perustaja/CentennialAircraftMaintenance | 588059955 | Title: Implements a more DDD-style event and service structure
Question:
username_0: Work from the bottom (domain model) up to the top (controller)
# Main concerns
- [ ] Add an Entity base for events
- [ ] Add an Events class
- [ ] Add methods in model to invoke events
- [ ] Implement Events in a switch statement in model
- [ ] Implement a non-generic service that calls a method which invokes
- [ ] Implement some sort of bus to notify other services
# Part (properties are publicly settable for Edit or add a wholesale Edit akin to a ctor?)
- [ ] AddQty(int q) => Apply(new PartQtyAdded)
- [ ] RemoveQty(int q) => Apply(new PartQtyRemoved)
- [ ] UpdateImage(str imgpath, str thumbpath) => Apply(new PartImageChanged)
- [ ] Add domain concerns and tests |
tensorflow/probability | 544193914 | Title: Suggestion for a simple "histogram" distribution
Question:
username_0: I would find it useful to be able to form a probability distribution from a set of samples, similarly to the Empirical distribution, but in a continuous fashion, like a histogram. Such a distribution would be better suited for MCMC inference than a pointlike Empirical distribution.
Does there already exist a canonical way to form such distributions? If not, would it be beneficial to have one?
Answers:
username_1: Sounds a bit like MixtureSameFamily with underlying Uniform?
username_0: I agree, a mixture of Uniforms could be used. However, there is some amount of work in computing the mixture probabilities and bin endpoints from samples (with tf.histogram_fixed_width, say), and my suggestion would be to have a distribution, like Empirical, that would accept samples directly. Moreover, a reasonable number of bins could be in the tens or even hundreds, which to me sounds a bit cumbersome to handle by means of a mixture.
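For reference, a minimal sketch of that construction (my own illustration, not an existing TFP API; it assumes 1-D float samples and equal-width bins):
```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def histogram_dist(samples, nbins=50):
    """Continuous 'histogram' over 1-D samples: a mixture of Uniforms,
    one component per equal-width bin, weighted by the bin counts."""
    samples = tf.convert_to_tensor(samples, dtype=tf.float32)
    low = tf.reduce_min(samples)
    high = tf.reduce_max(samples)
    counts = tf.histogram_fixed_width(samples, tf.stack([low, high]), nbins=nbins)
    probs = tf.cast(counts, tf.float32) / tf.cast(tf.size(samples), tf.float32)
    edges = tf.linspace(low, high, nbins + 1)
    return tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(probs=probs),
        components_distribution=tfd.Uniform(low=edges[:-1], high=edges[1:]))
```
The result is a proper piecewise-constant density rather than a set of point masses, which is what makes it friendlier to MCMC than Empirical; note, though, that gradient-based samplers would still see zero gradient inside a bin. |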
EGA-archive/ega-download-client | 1165742075 | Title: Failed to download
Question:
username_0: ````
(pyega3) [zhou@localhost EGA]$ pyega3 -cf ~/raid/EGA/zjg.zmu_credential_file.json files EGAD00001007575
[2022-03-11 05:04:48 +0800]
[2022-03-11 05:04:48 +0800] pyEGA3 - EGA python client version 4.0.0 (https://github.com/EGA-archive/ega-download-client)
[2022-03-11 05:04:48 +0800] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-11 05:04:48 +0800] Python version : 3.8.12
[2022-03-11 05:04:48 +0800] OS version : Linux #1 SMP Fri Jan 14 13:59:45 UTC 2022
[2022-03-11 05:04:48 +0800] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-11 05:04:48 +0800] Session-Id: 3070537591
````
````
[2022-03-11 05:20:16 +0800] Invalid username, password or secret key - please check and retry. If problem persists contact helpdesk on <EMAIL>
Traceback (most recent call last):
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn
conn.connect()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connection.py", line 416, in connect
self.sock = ssl_wrap_socket(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/ssl.py", line 1040, in _create
self.do_handshake()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/requests/adapters.py", line 440, in send
resp = conn.urlopen(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 785, in urlopen
retries = retries.increment(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn
conn.connect()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/connection.py", line 416, in connect
self.sock = ssl_wrap_socket(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
[Truncated]
(pyega3) [zhou@localhost ~]$ openssl s_client -connect ega.ebi.ac.uk:8443
CONNECTED(00000003)
````
but 8052 is not working.....
````
openssl s_client -connect ega.ebi.ac.uk:8052
write:errno=110
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 315 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
````
Answers:
username_1: Hello,
I'm having a similar problem and receiving the same error:
`(EGA)[apmfz5@lewis4-r630-login-node675 EGA_dc]$ pyega3 -cf APM_Credentials.json datasets
[2022-03-14 10:51:18 -0500]
[2022-03-14 10:51:18 -0500] pyEGA3 - EGA python client version 3.4.1 (https://github.com/EGA-archive/ega-download-client)
[2022-03-14 10:51:18 -0500] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-14 10:51:18 -0500] Python version : 3.6.13
[2022-03-14 10:51:18 -0500] OS version : Linux #1 SMP Wed Jun 3 14:28:03 UTC 2020
[2022-03-14 10:51:18 -0500] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-14 10:51:18 -0500] Session-Id: 2968141525
Traceback (most recent call last):
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connectionpool.py", line 706, in urlopen
chunked=chunked,
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connection.py", line 358, in connect
conn = self._new_conn()
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x2aaab3540588>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/connectionpool.py", line 756, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='ega.ebi.ac.uk', port=8443): Max retries exceeded with url: /ega-openid-connect-server/token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2aaab3540588>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/storage/hpc/data/apmfz5/Dissertation/EGA/bin/pyega3", line 8, in <module>
sys.exit(main())
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/pyega3/pyega3.py", line 646, in main
token = get_token(credentials)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/pyega3/pyega3.py", line 140, in get_token
r = requests.post(URL_AUTH, headers=headers, data=data)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/requests/api.py", line 117, in post
return request('post', url, data=data, json=json, **kwargs)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
[Truncated]
resp = self.send(prep, **send_kwargs)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/storage/hpc/data/apmfz5/Dissertation/EGA/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='ega.ebi.ac.uk', port=8443): Max retries exceeded with url: /ega-openid-connect-server/token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2aaab3540588>: Failed to establish a new connection: [Errno 111] Connection refused',))`
But instead of 8052 not working, 8443 is not working for me.
8052 ...
`(EGA)[apmfz5@lewis4-r630-login-node675 EGA_dc]$ openssl s_client -connect ega.ebi.ac.uk:8052
CONNECTED(00000003)`
8443 ...
`(EGA)[apmfz5@lewis4-r630-login-node675 EGA_dc]$ openssl s_client -connect ega.ebi.ac.uk:8443
46912496921600:error:0200206F:system library:connect:Connection refused:crypto/bio/b_sock2.c:110:
46912496921600:error:2008A067:BIO routines:BIO_connect:connect error:crypto/bio/b_sock2.c:111:
connect:errno=111`
Any help would be greatly appreciated! Thank you!
username_0: Hi @username_1, I found that the official website has posted this notice: "Our download client is currently down and we are working as hard as possible to resolve the issue. Please refrain from creating tickets for download issues at this time. When this banner is removed, normal download services will be resumed.
Thank you for patience and understanding. EGA Team.”
So please wait for the EGA team!
username_1: @username_0 ah, thank you for the update! Hopefully it will be back online soon!
Status: Issue closed
username_0: Hi @username_2, still getting the error....
````
(base) [zhou@localhost ~]$ conda activate pyega3
(pyega3) [zhou@localhost ~]$ openssl s_client -connect ega.ebi.ac.uk:8443
CONNECTED(00000003)
depth=2 C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3
verify return:1
depth=1 C = NL, O = QuoVadis Trustlink B.V., CN = QuoVadis Europe EV SSL CA G1
verify return:1
depth=0 jurisdictionC = GB, businessCategory = Government Entity, serialNumber = Government Entity, C = GB, ST = Essex, L = Saffron Walden, O = European Bioinformatics Institute, CN = ega.ebi.ac.uk
verify return:1
---
Certificate chain
0 s:jurisdictionC = GB, businessCategory = Government Entity, serialNumber = Government Entity, C = GB, ST = Essex, L = Saffron Walden, O = European Bioinformatics Institute, CN = ega.ebi.ac.uk
i:C = NL, O = QuoVadis Trustlink B.V., CN = QuoVadis Europe EV SSL CA G1
1 s:C = NL, O = QuoVadis Trustlink B.V., CN = QuoVadis Europe EV SSL CA G1
i:C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3
2 s:C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3
i:C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIH+DCCBeCgAwIBAgIUUNfgpHyIRZ1XErfmVXRlZ+ZvvuQwDQYJKoZIhvcNAQEL
BQAwVjELMAkGA1UEBhMCTkwxIDAeBgNVBAoMF1F1b1ZhZGlzIFRydXN0bGluayBC
LlYuMSUwIwYDVQQDDBxRdW9WYWRpcyBFdXJvcGUgRVYgU1NMIEN<KEY>4bfqBbeElWh1VhX0n7Ljc6osXbhczcH4sNJaOV8B6Nr6cdM
[Truncated]
(pyega3) [zhou@localhost ~]$
(pyega3) [zhou@localhost ~]$
(pyega3) [zhou@localhost ~]$ pyega3 -cf ~/raid/EGA/zjg.zmu_credential_file.json datasets
[2022-03-17 00:56:35 +0800]
[2022-03-17 00:56:35 +0800] pyEGA3 - EGA python client version 4.0.0 (https://github.com/EGA-archive/ega-download-client)
[2022-03-17 00:56:35 +0800] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-17 00:56:35 +0800] Python version : 3.8.12
[2022-03-17 00:56:35 +0800] OS version : Linux #1 SMP Fri Jan 14 13:59:45 UTC 2022
[2022-03-17 00:56:35 +0800] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-17 00:56:35 +0800] Session-Id: 1880068365
[2022-03-17 00:56:37 +0800]
[2022-03-17 00:56:37 +0800] Invalid username, password or secret key - please check and retry. If problem persists contact helpdesk on <EMAIL>
Traceback (most recent call last):
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/pyega3/libs/auth_client.py", line 38, in token
r.raise_for_status()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: for url: https://ega.ebi.ac.uk:8443/ega-openid-connect-server/token
````
username_2: Hi @username_0
Are you able to run pyega3 with the test credentials: `pyega3 -t datasets`?
username_0: @username_2 That's crazy....
````
(pyega3) [zhou@localhost ~]$ pyega3 -cf ~/raid/EGA/zjg.zmu_credential_file.json datasets
[2022-03-17 00:56:35 +0800]
[2022-03-17 00:56:35 +0800] pyEGA3 - EGA python client version 4.0.0 (https://github.com/EGA-archive/ega-download-client)
[2022-03-17 00:56:35 +0800] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-17 00:56:35 +0800] Python version : 3.8.12
[2022-03-17 00:56:35 +0800] OS version : Linux #1 SMP Fri Jan 14 13:59:45 UTC 2022
[2022-03-17 00:56:35 +0800] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-17 00:56:35 +0800] Session-Id: 1880068365
[2022-03-17 00:56:37 +0800]
[2022-03-17 00:56:37 +0800] Invalid username, password or secret key - please check and retry. If problem persists contact helpdesk on <EMAIL>
Traceback (most recent call last):
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/pyega3/libs/auth_client.py", line 38, in token
r.raise_for_status()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: for url: https://ega.ebi.ac.uk:8443/ega-openid-connect-server/token
(pyega3) [zhou@localhost ~]$
(pyega3) [zhou@localhost ~]$ pyega3 -d -t datasets
[2022-03-17 01:40:30 +0800]
[2022-03-17 01:40:30 +0800] pyEGA3 - EGA python client version 4.0.0 (https://github.com/EGA-archive/ega-download-client)
[2022-03-17 01:40:30 +0800] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-17 01:40:30 +0800] Python version : 3.8.12
[2022-03-17 01:40:30 +0800] OS version : Linux #1 SMP Fri Jan 14 13:59:45 UTC 2022
[2022-03-17 01:40:30 +0800] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-17 01:40:30 +0800] Session-Id: 2405447546
[2022-03-17 01:40:30 +0800] Starting new HTTPS connection (1): ipinfo.io:443
[2022-03-17 01:40:30 +0800] https://ipinfo.io:443 "GET /json HTTP/1.1" 200 None
[2022-03-17 01:40:30 +0800] Starting new HTTPS connection (1): ega.ebi.ac.uk:8443
[2022-03-17 01:40:34 +0800] https://ega.ebi.ac.uk:8443 "POST /ega-openid-connect-server/token HTTP/1.1" 200 None
[2022-03-17 01:40:34 +0800]
[2022-03-17 01:40:34 +0800] Authentication success for user '<EMAIL>'
[2022-03-17 01:40:34 +0800] Starting new HTTPS connection (1): ega.ebi.ac.uk:8052
[2022-03-17 01:40:38 +0800] https://ega.ebi.ac.uk:8052 "GET /elixir/data/metadata/datasets HTTP/1.1" 200 None
[2022-03-17 01:40:38 +0800] Request URL : https://ega.ebi.ac.uk:8052/elixir/data/metadata/datasets
[2022-03-17 01:40:38 +0800] Response :
[
"EGAD00001003338",
"EGAD00001006673"
]
[2022-03-17 01:40:38 +0800] Dataset ID
[2022-03-17 01:40:38 +0800] -----------------
[2022-03-17 01:40:38 +0800] EGAD00001003338
[2022-03-17 01:40:38 +0800] EGAD00001006673
````
But my password and ID have not changed at all.
username_0: ```
(pyega3) [zhou@localhost STAR]$ pyega3 -cf ~/raid/EGA/zjg.zmu_credential_file.json datasets
[2022-03-18 04:30:29 +0800]
[2022-03-18 04:30:29 +0800] pyEGA3 - EGA python client version 4.0.0 (https://github.com/EGA-archive/ega-download-client)
[2022-03-18 04:30:29 +0800] Parts of this software are derived from pyEGA (https://github.com/blachlylab/pyega) by <NAME>
[2022-03-18 04:30:29 +0800] Python version : 3.8.12
[2022-03-18 04:30:29 +0800] OS version : Linux #1 SMP Fri Jan 14 13:59:45 UTC 2022
[2022-03-18 04:30:29 +0800] Server URL: https://ega.ebi.ac.uk:8052/elixir/data
[2022-03-18 04:30:29 +0800] Session-Id: 1207189894
[2022-03-18 04:30:30 +0800]
[2022-03-18 04:30:30 +0800] Invalid username, password or secret key - please check and retry. If problem persists contact helpdesk on <EMAIL>
Traceback (most recent call last):
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/pyega3/libs/auth_client.py", line 38, in token
r.raise_for_status()
File "/home/zhou/miniconda2/envs/pyega3/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: for url: https://ega.ebi.ac.uk:8443/ega-openid-connect-server/token
```
username_2: Hi @username_0. Please email the EGA Helpdesk at [<EMAIL>](mailto:<EMAIL>) with this issue and attach the output log produced by pyega3 (pyega3_output.log). The Helpdesk team can then provide further assistance for your specific account. Thank you. |
go-playground/colors | 436997067 | Title: Add a Valid() function to check if a string is a valid color(ie parses)
Question:
username_0: Any thoughts on adding a `Valid(color string) bool` function to quickly check if a string is a valid color? Perhaps with `ValidRGB`, `ValidHex`, etc. siblings.
It would look something like this:
```
func Valid(color string) bool {
	if _, err := Parse(color); err != nil {
return false
}
return true
}
```
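The siblings would presumably be the same one-liner over each existing parser, e.g. (a sketch assuming the package's `ParseHex`):
```go
func ValidHex(color string) bool {
	if _, err := ParseHex(color); err != nil {
		return false
	}
	return true
}
```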
Answers:
username_1: Hey @epeic
Ya I’m good with that as a convenience method, make a PR and I’ll get it merged. |
apache/dubbo | 491526904 | Title: No matter how I set it, the version is always 0.0.0
Question:
username_0: ### Environment
* Dubbo version: 2.7.3
* Operating System version: windows10
* Java version: 8
### Steps to reproduce this issue
1.Declare service:
```xml
<dubbo:service interface="com.xxx.RemoteDefaultService" version="0.0.1" timeout="5000" ref="remoteDefaultServiceImpl" group="commonComponent"/>
```
2.Generalizing calls using API
```java
ReferenceConfig<GenericService> reference = new ReferenceConfig<GenericService>();
reference.setApplication(this.applicationConfig);
reference.setRegistry(this.registryConfig);
reference.setInterface("com.xxx.RemoteDefaultService");
reference.setGeneric(true);
reference.setConsumer(consumerConfig);
reference.setGroup("commonComponent");
reference.setVersion("0.0.1");
ReferenceConfigCache cache = ReferenceConfigCache.getCache();
GenericService genericService = cache.get(reference);
int len = params.size();
String[] invokeParamTyeps = new String[len];
Object[] invokeParams = new Object[len];
for (int i = 0; i < len; i++) {
GenericParamsEntity current = params.get(i);
invokeParamTyeps[i] = current.getTypeName();
invokeParams[i] = current.getValue();
}
try {
Object result = genericService.$invoke(method, invokeParamTyeps, invokeParams);
return result;
} catch (GenericException e) {
genericExceptionAnalytical(e);
throw e;
}
```
3.Throw an exception
```java
Not found exported service:
········································································
·········································································
interface=com.shxhome.hopps.common.core.microservice.defaultservice.RemoteDefaultService, version=0.0.0, generic=true}]
```
**No matter how I set it, the version is always 0.0.0**
Answers:
username_1: **the same issue:**
Env:
```
maven:
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-spring-boot-starter</artifactId>
<version>2.7.3</version>
</dependency>
OS: windows10
JDK:1.8
```
1.Provider source code:
```
@Service(version = "demo", interfaceClass = HelloWorldService.class)
public class HelloWorldServiceImpl implements HelloWorldService {
@Value("${dubbo.application.name}")
private String serviceName;
@Override
public String sayHello(String name) {
return String.format("[%s] : Hello, %s", serviceName, name);
}
}
```
2.Consumer source code:
```
@RestController
@RequestMapping("/sample")
public class HelloWorldController {
@Reference(version = "demo", url = "dubbo://127.0.0.1:12345")
private HelloWorldService helloWorldService;
@GetMapping(value = "/hello/{name}")
@ResponseBody
public String say(@PathVariable("name") String name) {
return helloWorldService.sayHello(name);
}
}
```
3.But get below error log
```
org.apache.dubbo.remoting.RemotingException: org.apache.dubbo.remoting.RemotingException: Not found exported service: com.example.dubbo.springboot.api.service.HelloWorldService:12345 in [com.example.dubbo.springboot.api.service.HelloWorldService:demo:12345], may be version or group mismatch , channel: consumer: /192.168.0.190:9410 --> provider: /192.168.0.190:12345, message:RpcInvocation [methodName=sayHello, parameterTypes=[class java.lang.String], arguments=[aa], attachments={path=com.example.dubbo.springboot.api.service.HelloWorldService, input=270, dubbo=2.0.2, interface=com.example.dubbo.springboot.api.service.HelloWorldService, version=0.0.0}]
org.apache.dubbo.remoting.RemotingException: Not found exported service: com.example.dubbo.springboot.api.service.HelloWorldService:12345 in [com.example.dubbo.springboot.api.service.HelloWorldService:demo:12345], may be version or group mismatch , channel: consumer: /192.168.0.190:9410 --> provider: /192.168.0.190:12345, message:RpcInvocation [methodName=sayHello, parameterTypes=[class java.lang.String], arguments=[aa], attachments={path=com.example.dubbo.springboot.api.service.HelloWorldService, input=270, dubbo=2.0.2, interface=com.example.dubbo.springboot.api.service.HelloWorldService, version=0.0.0}]
```
username_1: When I remove the version property, it works fine.
username_0: In a previous project, I managed it using the version property.
username_2: I have a demo locally. It works fine.
Do you have a demo git repository link?
username_1: @username_2 Yes, I created a public repository; you can reference this link: [dubbo-demo](https://github.com/username_1/dubbo-demo)
username_3: This is an interesting issue. In the first place, your demo didn't use any registry but connected directly via the annotation. That may be why some people failed to reproduce your case:
```java
@Reference(url = "dubbo://127.0.0.1:12345", version = "1.0.0")
private HelloWorldService helloWorldService;
```
I troubleshot the test cases, went through the source code, and figured out that Dubbo didn't pick up the URL parameters as the 'RpcInvocation.Attachment'.
No wonder you always get the default "0.0.0".
Here is my hotfix, which just passed Dubbo's unit tests, as an interim fix for the issue.
org.apache.dubbo.rpc.protocol.dubbo.DecodeableRpcInvocation.java
```java
@Override
public Object decode(Channel channel, InputStream input) throws IOException {
    ObjectInput in = CodecSupport.getSerialization(channel.getUrl(), serializationType)
            .deserialize(channel.getUrl(), input);
    String dubboVersion = in.readUTF();
    request.setVersion(dubboVersion);
    setAttachment(DUBBO_VERSION_KEY, dubboVersion);
    setAttachment(PATH_KEY, in.readUTF());
    // below is my current hotfix
    // get the service version from URL.
    String reqVersion = channel.getUrl().getParameter(VERSION_KEY);
    String defaultVersion = in.readUTF();
    // if empty, use the default version instead.
    setAttachment(VERSION_KEY, StringUtils.isEmpty(reqVersion) ? defaultVersion : reqVersion);
```
But I know that's not the ultimate solution; I am also still curious why the version inside the *InputStream input* is always *0.0.0*. Looking forward to any official maintainer's reply.
username_2: ```java
if (REGISTRY_PROTOCOL.equals(url.getProtocol())) {
    urls.add(url.addParameterAndEncoded(REFER_KEY, StringUtils.toQueryString(map)));
} else {
    urls.add(ClusterUtils.mergeUrl(url, map));
}
```
See org.apache.dubbo.rpc.cluster.support.ClusterUtils#mergeUrl.
If no registry was set, some params will be removed including `VERSION_KEY`.
@username_1 For your case, temp solution: you can use http://dubbo.apache.org/zh-cn/docs/user/references/registry/simple.html if you don't have any registry center.
I was wondering why these params would be removed if there is no registry.
@htynkn Any official answers?
username_4: Which registry center are you using? Please check what group/version values the provider URL registered in the registry actually carries.
Also, please post some logs so we can see what URL the registry actually pushes down; only then can we continue locating the problem.
Status: Issue closed
username_4: Reopen if necessary. |
microsoft/PowerToys | 624404512 | Title: Use and appearance of NumberBoxes
Question:
username_0: # Summary of the new feature/enhancement
_Came across #3375 and #3118 lately and already left a message with the first one. Was advised to create a new issue, so here it is._
I think the current design of most - or nearly all - of the numberboxes (in the Settings) is pretty bad. Most of them are waaaaaaaay too wide. Also, most of the time the input values have a lower and upper limit, but none of them give any indication of what their limits are, if any.
In order of appearance:
- FancyZones > "Zone highlight opacity (%)" --> Better to use a slider, since this setting has a logical and fixed minimum and maximum: 0 - 100.
- Power Rename > "Maximum numbers of items to show in recently used list for autocomplete dropdown" --> The label is too long and the NumberBox is way too wide; see the _picture_ below. Currently there's no indication of min/max (it turns out to be 0 - 20 here). A Slider or a DropDown menu (with or without the ability for user input?) would be better here. I mention DropDown because that's what Windows Settings uses for "number of contacts shown" on the Taskbar, next to the Contacts button. It also visually resembles the dropdown that is the subject.

- PowerToys Run > "Number of results" --> Same idea as the previous one. What is the current upper limit, though?
- Shortcut Guide > "Press duration before showing (ms)" --> Currently it does not seem to have an upper limit: a value like 91050 (ms) is accepted, which makes no sense at all. I would say a slider (values 0 - 2000) will do fine here.
Answers:
username_1: I agree.
We need to think about a UI model where there's a textfield next to the slider that shows the actual value. Currently sliders only show the value when using the slider. There's an earlier issue that mentions this #3234
username_2: NumberBoxes are great where you need a precise number - but Sliders are better if there are hard limits and the value is more perceptual than exact.
Opacity as a slider is good.
Dimensions are probably better as a Numberbox. _(With or without min and max limits)_
username_3: Accurate values for opacity are important; consider the effect of multiple transparent layers.
Hard limits can be applied to numberboxes. Scrolling works in numberboxes. Sliders are notoriously annoying when accuracy is the goal - particularly on inaccurate input devices such as touchscreens. However, I see where they can be useful for those who do not require accuracy, or do require a visual representation of a fixed scale, or for people with poorly implemented touchscreens that do not allow scrolling.
If the width of the label defines the width of the box, that's a fault with the widget. Changing the widget type would only be a workaround, and one that introduces other equally undesirable UX.
In fact, the more I think about all of this, the more it seems that this is a widget UX problem being approached as an app UX problem. I agree with your observations that the UX leaves some room for improvement, but the solutions provided do, too.
Assuming that your suggestions are based on limitations of the widgets, perhaps this should be taken upstream. If not, and the widgets are capable of providing the desired UX for the varying use cases, then perhaps other implementations of the existing widgets could be considered (eg moving arrows, textboxes next to sliders, etc), rather than changing widgets and running into new UX problems.
username_0: That doesn't sound like a bad idea, for those who want more precision.
People are already requesting a label next to a slider to display the current value while using it. (Windows' general Settings doesn't have this either, by the way.) Maybe the static label idea can be turned into the input box? Then we have them both at the same time. Or is that overkill?
username_3: And apparently that is true so it's gone upstream to the WinUI guys.
username_0: Lol, thanks for the compliment, haha.
I guess it just boils down to adding a few numbers in the code to give the controls a (fixed) width, adding a ceiling here and there, changing one or two of them into a Slider (I still think opacity would be a good one) and we're fine.
username_1: Thanks for all the input.
I think in general, we should be very practical in terms of the decision if it's Slider vs. Numberbox. Numberboxes are great for exact values and when there's not really an upper limit defined. Sliders work well for values that are not too big (e.g. opacity).
I'd propose the following:
**Zone highlight opacity (FancyZones)**
Slider | 0 - 100 %
**JPEG Quality Level (Image Resizer)**
Slider | 0 - 100
**Maximum number of items (PowerRename)**
NumberBox | 1 - *
**Maximum number of results (PT Run)**
NumberBox | 1 - *
**Press duration (Shortcut Guide)**
NumberBox | 0 - * ms
**Opacity of background (Shortcut Guide)**
Slider | 0 - 100 %
This means not too much logic to be rewritten (with ComboBoxes we need to convert stuff).
## Values next to slider
As tracked in #3234 we need to make sure that we define a way to show the value (and unit) next to the slider, so it's easy to check without moving the slider around.
## Long NumberBoxes / other UI elements
Some of our titles are pretty long, which results in NumberBoxes that are stretched out. This looks pretty bad, and we should put in a standard. I created an issue for that: #3733
username_0: Sounds good to me. I'll leave the implementations to the developers now. Think the situation is clear enough. Still think steps and ceilings are needed in some of the settings (like at the milliseconds) but I've mentioned that already. 🤐
username_4: Fixed in 0.19.0, please visit https://github.com/microsoft/PowerToys/releases/ for the latest release
Status: Issue closed
username_0: @username_1 In 0.19.2 the "JPEG quality" is (still) a numberbox. Was this one omitted or did somebody vote against the slider? |
FlorianMarckmann/42Cub3D | 1107131153 | Title: C test for <NAME>
Question:
username_0: You want to place a building in every cell of a 4 x 4 map, with only a few constraints:
- Building heights range from 1 to 4
- Two buildings in the same row or column cannot have the same number of floors
- A clue is the number of skyscrapers you can see in a row or column from the outside
- Taller buildings block the view of the shorter buildings behind them
The program must be able to solve this problem on a variety of maps.
- Example of a map with its solution:

----------------------

----------------------
- The maps attached below will be used while developing the program and always follow the same pattern. Other maps will be tested during the interview.
- The map is given in a *.txt file as follows:
``` text
N 4 3 2 1
O 4 3 2 1
E 1 2 2 2
S 1 2 2 2
```
This illustrates the following map:

- The result must be written to standard output; the output style is up to you as long as it allows a clear reading of the results.
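As a starting point, parsing the clue file could look like the sketch below (my own minimal illustration, not a required structure; note that in the clue file `O` stands for Ouest, i.e. West, and error handling is reduced to exit codes):
```c
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *f;
    char side;
    int  clues[4][4];
    int  i;
    int  j;

    if (argc != 2 || !(f = fopen(argv[1], "r")))
        return (1);
    for (i = 0; i < 4; i++)
    {
        if (fscanf(f, " %c", &side) != 1)
            return (1);
        for (j = 0; j < 4; j++)
            if (fscanf(f, "%d", &clues[i][j]) != 1)
                return (1);
        printf("side %c parsed\n", side);
    }
    fclose(f);
    return (0);
}
```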
### BONUS
- If time permits, you can write a Makefile to compile the source files.
Answers:
username_0: [test_remi2.txt](https://github.com/username_0/42Cub3D/files/7890967/test_remi2.txt)
[test_remi.txt](https://github.com/username_0/42Cub3D/files/7890968/test_remi.txt)
[test_remi1.txt](https://github.com/username_0/42Cub3D/files/7890969/test_remi1.txt)
Status: Issue closed
|
yuzhenmi/taleweaver | 429984789 | Title: Configurable document page size
Question:
username_0: Right now, it's just hard-coded. We should make this configurable at initialization. As a bonus, we can also make this configurable after initialization.
Status: Issue closed
Answers:
username_0: This is now a configuration implemented in https://github.com/username_0/taleweaver/pull/81 |
dataabc/weiboSpider | 403718166 | Title: IndexError: list index out of range
Question:
username_0: Could you advise me on how to solve the following problem? Thanks.
File "weibospider.py", line 58, in get_user_info
"//div[@class='tip2']/span[@class='tc']/text()")[0]
IndexError: list index out of range
共0条微博,其中0条为原创微博
Answers:
username_1: Thanks for the feedback. Could you provide the Weibo ID to make debugging easier?
username_1: Your Weibo ID may not be correct, which makes it impossible to fetch the user info.
For most users, the Weibo ID can be found in the URL of the target user's homepage as a string of digits. For some users, however, that string is not the Weibo ID. You need to click "资料" (profile) on the user's homepage; only the digits in that URL are the user's Weibo ID.
The reason is that Weibo once offered a "微号" (Weibo alias) feature: a user who applied for an alias got a personalized address of the form "https://weibo.cn/u/alias". The user info page, however, uses an address of the form "https://weibo.cn/weibo-id/info", so substituting the alias for the Weibo ID fails to fetch the user info, hence the error.
username_0: Thank you for the reply. I was careless and got the ID wrong; that part is solved now.
There is still one problem, though: recently the crawler stops automatically after crawling roughly one or two months of posts and cannot finish.
The situation is as follows, taking People's Daily (id: 2803301701) as an example:
Error: 'NoneType' object has no attribute 'xpath'
Traceback (most recent call last):
File "weibospider.py", line 89, in get_long_weibo
info = selector.xpath("//div[@class='c']")[1]
AttributeError: 'NoneType' object has no attribute 'xpath'
【河南安阳警方通报辅警收钱37段视频:基本属实,大队长和3民警停职,10名辅警被开除】近日,安阳一超限站交通协助交警向司机收受钱物的视频曝光。@安阳交警 通报:情况 基本属实。已对收受钱物的10名交通协助交警联合治超人员予以开除,对负管理责任的大 队长及3名民警停止执行职务并接受调查。 全
微博位置: 无
微博发布时间: 2018-12-03 21:11
微博发布工具: 微博 weibo.com
点赞数: 11025
转发数: 5471
评论数: 6337
username_1: This is probably a network issue. Because this weibo's text is long and cannot be fully displayed on the list page, the crawler first obtains this weibo's own address (https://weibo.cn/comment/H5yJ7CMa4) and then fetches the content from that page. The page content failed to load for network reasons, which is where the 'NoneType' comes from. Also, after the error message the program still continues to crawl the next weibo; it does not stop.
I tested the weibo above; it can be crawled and the full text content retrieved.
username_0: Thank you for the reply. It actually worked before, but now the same thing happens no matter whose weibo I crawl.
I understand your logic, but since I'm a beginner with code, how should the code be changed?
Sorry to trouble you.
username_0: Hello, I later retried crawling with cookies from different accounts and found that problem solved. However, it always only crawls roughly the first three months of posts before automatically stopping and finishing the archive. It used to be able to crawl everything, but now it stops after only a little while @@
username_1: You may be crawling too fast; slowing the crawl down should fix it. For example, sleep a few seconds every few pages; see #8.
username_0: Hello, I followed the suggested steps:
```python
# package dependency: numpy
# pip install numpy
import numpy as np

# code inserted in the first part:
page1 = 0
random_pages = 2

# code inserted in the second part:
random_sleep = np.random.randint(6, 10)
if page - page1 == random_pages:
    sleep(random_sleep)
    page1 = page
    random_pages = np.random.randint(1, 5)
```
The crawl speed doesn't look any slower, and the crawl stops at the same place. Taking People's Daily (ID: 2803301701) as an example.
I'm using Python 3.7.
username_1: Replace get_long_weibo with the code below and give it a try.
```python
def get_long_weibo(self, weibo_link):
    try:
        while True:
            html = requests.get(weibo_link, cookies=self.cookie).content
            selector = etree.HTML(html)
            if selector is not None:
                info = selector.xpath("//div[@class='c']")[1]
                wb_content = info.xpath("div/span[@class='ctt']")
                if len(wb_content) >= 1:
                    wb_content = wb_content[0].xpath("string(.)").replace(
                        u"\u200b", "").encode(
                        sys.stdout.encoding, "ignore").decode(
                        sys.stdout.encoding)
                    return wb_content
            else:
                print("123")
    except Exception as e:
        print("Error: ", e)
        traceback.print_exc()
```
爬虫速度没变慢可能是你的get_weibo_info没有改好。
username_2: Hello!
After downloading and running your code, I ran into the following situation:
E:\Englishwaypoint\GitDP\weiboSpider>python weiboSpider.py
用户名: 新浪
Error: list index out of range
Traceback (most recent call last):
File "weiboSpider.py", line 59, in get_user_info
"//div[@class='tip2']/span[@class='tc']/text()")[0]
IndexError: list index out of range
进度: 0%| | 0/1 [00:00<?, ?it/s]
Error: list index out of range
Traceback (most recent call last):
File "weiboSpider.py", line 240, in get_weibo_info
is_empty = info[0].xpath("div/span[@class='ctt']")
IndexError: list index out of range
微博写入文件完毕,保存路径:
E:\Englishwaypoint\GitDP\weiboSpider\weibo\2175582565.txt
信息抓取完毕
===========================================================================
And no matter how I change the target ID, the username always comes out as "新浪" (Sina). Any idea what could be causing this?
username_1: @username_2 That means the cookie isn't set correctly. Get the cookie as described in the README and replace 'your cookie' with it.
username_2: Thanks! Solved! Earlier, after logging in, I didn't go to weibo.cn to get the cookie; I took the mweibo one instead.
Status: Issue closed
username_3: Hello, I don't see any code related to long weibos in weibospider.py. Where can I find the get_long_weibo that should be replaced? Thanks.
username_4: @username_3 The current get_long_weibo is in weiboSpider/weibo_spider/parser/comment_parser.py.
That said, I suspect the problem in this issue has already been fixed.
username_1: @username_4
This issue has indeed been fixed. A long weibo is now fetched with up to 5 attempts; if any attempt succeeds the content is returned, and if all attempts fail, the truncated text from the corresponding weibo list page is used instead. |
PaloAltoNetworks/vistoq | 384362420 | Title: snippets with extends do not load all extended variables
Question:
username_0: The extends field will pull in other snippets, but those snippets will not have their variables loaded in the UI or set to a default value.
Answers:
username_0: The initial fix caused a crash when variables were not included in the baseline config; commit 502ba99 resolves this.
username_0: tested and pushed to master
Status: Issue closed
|
ant-design/ant-design | 320281315 | Title: Regression in onClick for Menu componenet
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Version
3.5.0
### Environment
any
### Reproduction link
[https://codepen.io/abramo/pen/mLMjVG](https://codepen.io/abramo/pen/mLMjVG)
### Steps to reproduce
Open the codepen link and click on nav 1 menu item
### What is expected?
onClick handler should be called once with menu item info as argument
### What is actually happening?
onClick handler is called twice: the first time with the correct menu item info as argument and a second time with React event
---
The browser console will show the two calls and a failed assertion related to the unexpected call.
Answers:
username_1: Your codepen seems to be OK
username_0: Sorry, I forgot to restore the settings after trying antd 3.4.5 to confirm it was a regression.
Please retry now.
username_2: +1, I just got the same problem
username_2: I just added
``` js
if (e.domEvent) {
e.domEvent.stopPropagation()
}
```
to the end of the function.
Problem solved (maybe).
username_3: +1
username_4: Should be fixed in `[email protected]`
Status: Issue closed
username_5: Problem is still valid for `[email protected]` when using `Menu.ItemGroup` wrapper.
username_4: @username_6
Status: Issue closed
username_6: fixed in https://github.com/react-component/menu/commit/9ff201073c17f9e3b132dfeb624d8353cd0717c5
username_1: Fixed in https://github.com/react-component/menu/commit/1bf0c8dc6dc6134316dd8f4ed433b0544b35330d?
username_6: @username_1 Yes. |
tangrams/tangram-play | 149484674 | Title: esc/enter keys to cancel/confirm "scene not saved" modal
Question:
username_0: "Your scene has not been saved. Continue?"
Answers:
username_1: The escape key should be canceling this modal - is this not the case for you?
username_0: It works! Magical!
username_1: Awesome. I've added a PR to automatically focus on the "Confirm" button on this modal so that the "Enter" key will confirm it.
username_0: Ooh, that's a nice trick.
Status: Issue closed
|
BlakeBr0/ExtendedCrafting | 290234543 | Title: Combination Process No Longer Working
Question:
username_0: Hey username_1, I had a large number of Combination Crafting recipes written for my pack that were working fine about a month ago (I think you were there during the preview stream). I recently revisited that portion of the pack to write some more recipes, and none of them seem to be working. I'm sure it's something silly I am missing, but I can't put my finger on it. I didn't see anything in the changelog that would make the recipes not work, and my recipe syntax still matches what's on the wiki.
As you can see in the picture below... the recipe shows up fine in JEI, the items on the pedestals match the orientation of the recipe in the picture, and there is almost 3 million RF in the core, which is more than enough for the recipe. But nothing is happening, and the GUI of the core says no recipe found. I tried this with multiple recipes, same result. I also tried using a lever, redstone signal, etc... but nothing. I went back and watched the stream where these recipes were used, and it worked fine back then.

Answers:
username_0: Also tried rolling the mod back through the last three versions, and updating to the latest Forge.
username_1: Does it work in a different instance? I just tested and it still works just fine for me. Haven't changed anything about it recently either.
username_1: Could also be an issue with a specific recipe(s). Does the first recipe registered work?
username_0: Ok, took me a few hours of beating my head against the wall, but apparently it is something to do with ore dictionary entries. For example, the recipe in the picture above specifies ore:ingotCopper, and shows in JEI as accepting any of the copper ingots. But in reality, your mod won't craft it with any ingot other than Mekanism copper. Same with the other recipes that involved ore dictionary entries: ore:dyeCyan would only accept the Pickle Tweaks dye, but not regular Minecraft dye. I guess your mod is only taking the first entry in the ore dictionary as acceptable? I would need to make a completely different recipe for each ore dictionary entry if I wanted them to work. :-/
username_0: Below are the 4 different types of copper ingots in the pack... TE, Mekanism, Immersive Engineering, and Futurepack. They are all registered in the ore dictionary as ore:ingotCopper, but as you can see, only the Mekanism one starts the Extended Crafting process.

username_0: The syntax I am using for the recipe is this...
```
mods.extendedcrafting.CombinationCrafting.addRecipe(<thermalfoundation:material:515>, 70000, 100, <thermalfoundation:material:512>, [<industrialforegoing:plastic>, <minecraft:redstone>, <immersiveengineering:material:20>, <ore:ingotCopper>]);
```
username_1: Yup. Turns out it only accepted the last entry, because the check always iterated over the entire list even if it had already found a valid item. Sorry for the trouble it caused.
I'm not entirely sure when I'll get an update out with the fix, because I'm actually in the middle of adding an Extra Utilities QED-like crafter. Shouldn't be too long though.
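For illustration, the class of bug username_1 describes looks like this. This is a JavaScript sketch, not the mod's actual Java, and `stacksEqual` is a stand-in for the real item comparison.
``` js
const stacksEqual = (a, b) => a.item === b.item && a.meta === b.meta;

// Buggy: the verdict is overwritten on every iteration,
// so only the last ore dictionary entry can ever match.
function matchesBuggy(stack, oreEntries) {
  let valid = false;
  for (const entry of oreEntries) {
    valid = stacksEqual(stack, entry);
  }
  return valid;
}

// Fixed: return as soon as any entry matches.
function matchesFixed(stack, oreEntries) {
  for (const entry of oreEntries) {
    if (stacksEqual(stack, entry)) return true;
  }
  return false;
}
```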
Status: Issue closed
username_0: Ok... I'll move on to other things in the pack. If you still haven't gotten an update out in a few weeks when I need to start prepping for beta, I'll just have to go through and make a recipe for every ore dictionary item.
LSSTDESC/gcr-catalogs | 280402119 | Title: automate integration tests that require NERSC resources
Question:
username_0: Now that GCRCatalogs has real users (aka people not on the DESCQA team) and people have started to make plots with these catalogs, it is important to test updates to readers and catalogs before releasing them.
While there are some very basic unit tests in place, most of the major issues are likely to come up at the integration stage (e.g., with interfacing with DESCQA or with actual catalog data). It would be nice if we can automatically trigger integration tests that require NERSC resources when a PR is submitted, and display the test results in GitHub.
During a brief conversation, @username_2 suggested that what we need is likely feasible. What is left to do is to figure out how to make it happen.
Answers:
username_0: - [ ] @username_0 @username_1 @tomuram figure out how to submit a job to run at NERSC from SLAC
username_0: Further conversation with @username_2 and @username_1 points to the need for a special NERSC account that can be used to ssh into NERSC and submit a batch job.
username_1: @tony-johnson and @brianv0 Jim suggested that we'd want to submit these jobs from SLAC via Jenkins using a special NERSC account. We're going to need your expertise :)
username_2: Capturing advice from Tony (via email) on how to do this:
```
The easiest way to get this to work is just to run a jenkins
agent at NERSC which creates a connection back to the jenkins
server at SLAC and asks for work. This is how we run most of the
agents at SLAC. If you do this I would suggest using some
account like desc to run the agent, and if you run it somewhere a
cron job can be used to ensure it is running (e.g. corigrid) it
is easy to setup.
A second way would be to set up an on-demand agent which is
started from SLAC, when it is needed, probably by storing NERSC
ssh credentials in the SLAC jenkins server. I think this would
have to use a real-users ssh credentials, I don't think it would
work with a service account like desc.
If you want to go with the first option, the steps would be:
a) Create an jenkins "job" at SLAC.
b) Create a script for running the agent at NERSC under desc or
similar account. I can create an example script if this is what
you would like to do.
```
For the first option, Tony provided
```
a basic set up scripts are in
~desc/jenkins (at NERSC)
But we need the jenkins agent set up to test them.
```
username_2: @username_0 Brian helped me set up the Jenkins job on the SLAC server, and I've made a cron job that ensures the jenkins agent at NERSC is running under the `desc` account. We triggered Jenkins to run a `hello, world` script at NERSC:
https://srs.slac.stanford.edu/hudson/job/LSST-DESC/job/nersc-helloworld/lastBuild/console
So we can start to discuss what github events to trigger on and where to put the scripts that you want executed for the integration tests.
username_0: That's great! Thanks @username_2! Several questions:
- Where do we put together the script to run, NERSC or SLAC? In your example I did not see a script on the NERSC side.
- What environment variables will we have in the script?
- I assume we also need to add something to the `.travis.yml` to trigger the Jenkins build? Or can Jenkins monitor GitHub on its own?
username_2: The script can live anywhere at NERSC where the `desc` account can access (so any `lsst` group area). You can set any environment you need within that script. The Jenkins configuration is accessible through the Jenkins interface. I should be able to grant you access to add and configure jobs in the LSSTDESC area via your SLAC userid.
username_0: @username_2 OK. For the NERSC-side script, the first thing it needs to do is clone the targeted PR or commit. So how does the NERSC-side script know which PR/commit to clone? I imagine that must come through Jenkins?
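One common pattern, noted here only as an assumption since the thread doesn't settle it, is to make the Jenkins job parameterized so it exports the target repo and ref as environment variables, which the NERSC-side script reads before cloning. A Node-flavored sketch follows; the variable names `TARGET_REPO` and `TARGET_REF` are hypothetical.
``` js
const { execSync } = require('child_process');

// Jenkins would export these when it dispatches the build to the NERSC agent.
const repo = process.env.TARGET_REPO || 'https://github.com/LSSTDESC/gcr-catalogs.git';
const ref = process.env.TARGET_REF; // e.g. a PR merge ref or a commit SHA

if (!ref) throw new Error('Jenkins did not pass a TARGET_REF');

execSync(`git clone ${repo} gcr-catalogs`, { stdio: 'inherit' });
execSync(`git checkout ${ref}`, { cwd: 'gcr-catalogs', stdio: 'inherit' });
// ...then set up the environment and launch the integration tests.
```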
username_0: I think this is now possible with Heather being SPIN certified? It's not super high priority, but we should revisit it sometime. |
google/blockly-devtools | 788568500 | Title: Error when selecting block in Workspace Factory
Question:
username_0: Whenever a block is selected in Workspace Factory, there is an error in the console:
<img width="594" alt="Screen Shot 2021-01-18 at 16 53 51" src="https://user-images.githubusercontent.com/31634240/104965930-c092ea00-59ad-11eb-9051-27347cf74ec0.png">
This seems to affect the ability to create shadow blocks.
I think this could be fixed by changing lines 348 and 349 of `wfactory_init.js` from this:
```javascript
if (e.type == Blockly.Events.BLOCK_MOVE ||
e.type == Blockly.Event.SELECTED) {
```
To this:
```javascript
if (e.type == Blockly.Events.BLOCK_MOVE ||
e.type == Blockly.Events.SELECTED) {
``` |
weishaaan/05_juillet_printWebPageHTML | 171921879 | Title: Standardize the design of the combo box
Question:
username_0: All of the controls on the screen must have the same visual rendering: base them on Bootstrap's native styles.
<issue_closed>
Status: Issue closed |
SEED-platform/seed | 393187511 | Title: Attach PDF (or other files) to a building record
Question:
username_0: ### Expected Behavior
Several users have suggested that they would like to be able to attach files (such as PDFs, image files, etc.) to building records. Kansas City said that they are currently accumulating a paper trail in addition to the digital trail, and it would be nice to be able to scan at least some of the paper documents and attach them to the SEED building records. Berkeley has also expressed interest in attaching audit reports (I think they are PDFs) to specific building records.
Investigate the data storage implications when working on this feature
From the November 2015 mid-month user call as well as other conversations with users (added by LBNL from previous list)
### Actual Behavior
n/a
### Steps to Reproduce
n/a
### Instance Information
n/a
Answers:
username_1: I think we should figure out how to work this in. It seems to be coming up more and more lately.
username_0: @username_1 -- do we want to add it to the project?
username_1: Not yet. I think we have plenty at the moment to focus on.
username_1: There is a request to support uploading OSM and OSW files as well.
username_0: This is still of interest. It came up when I was talking to Philadelphia
username_1: agreed, we still want this.
username_1: This needs to be done still. Moving to P-1 |
frappe/frappe | 195716512 | Title: Better styling for the calendar widget
Question:
username_0: - fix font sizes
- fix paddings
- better style for events
<img width="996" alt="screen shot 2016-12-15 at 10 15 58 am" src="https://cloud.githubusercontent.com/assets/140076/21212164/00674d68-c2b0-11e6-9506-5474695a5c8b.png">
Answers:
username_1: #2639
Status: Issue closed
|
mattzollinhofer/teachy | 201356568 | Title: Gradebook - Star Icon
Question:
username_0: In the gradebook view for the teacher, each assignment shows the possible points in parentheses. If it is a star option, it would be nice for it to say 2 and then show a star icon in the parentheses as well.<issue_closed>
Status: Issue closed |