| repo_name | issue_id | text |
|---|---|---|
| string (lengths 4–136) | string (lengths 5–10) | string (lengths 37–4.84M) |
lukehorvat/musicxml-to-speaker | 66172645 | Title: failed to install with 'npm install -g musicxml-to-speaker'
Question:
username_0: child_process: customFds option is deprecated, use stdio instead.
CC(target) Release/obj.target/output/deps/mpg123/src/output/coreaudio.o
LIBTOOL-STATIC Release/liboutput.a
libtool: unrecognized option `-static'
libtool: Try `libtool --help' for more information.
make: *** [Release/liboutput.a] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:267:23)
gyp ERR! stack at ChildProcess.emit (events.js:110:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:1067:12)
gyp ERR! System Darwin 14.1.0
gyp ERR! command "node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /usr/local/lib/node_modules/musicxml-to-speaker/node_modules/speaker
gyp ERR! node -v v0.12.0
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
npm ERR! Darwin 14.1.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "-g" "musicxml-to-speaker"
npm ERR! node v0.12.0
npm ERR! npm v2.5.1
npm ERR! code ELIFECYCLE
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the speaker package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls speaker
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /Users/chris/Documents/github/vexflow-musicxml/npm-debug.log
Answers:
username_1: Thanks for bringing this to my attention. The version of [node-speaker](https://github.com/TooTallNate/node-speaker) (a module that this one depends on) was a little outdated and doesn't work on Node v0.12.
I've updated it and pushed out a new version of musicxml-to-speaker. Let me know if the problem persists.
username_0: child_process: customFds option is deprecated, use stdio instead.
CC(target) Release/obj.target/output/deps/mpg123/src/output/coreaudio.o
LIBTOOL-STATIC Release/liboutput.a
libtool: unrecognized option `-static'
libtool: Try `libtool --help' for more information.
make: *** [Release/liboutput.a] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:267:23)
gyp ERR! stack at ChildProcess.emit (events.js:110:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:1067:12)
gyp ERR! System Darwin 14.1.0
gyp ERR! command "node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /usr/local/lib/node_modules/musicxml-to-speaker/node_modules/speaker
gyp ERR! node -v v0.12.0
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
npm ERR! Darwin 14.1.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install" "-g" "musicxml-to-speaker"
npm ERR! node v0.12.0
npm ERR! npm v2.5.1
npm ERR! code ELIFECYCLE
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the speaker package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls speaker
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /Users/chris/Documents/github/vexflow-musicxml/npm-debug.log
username_1: Hmmm. Maybe [this blog post](http://flummox-engineering.blogspot.ru/2014/04/libtool-unrecognized-option-static.html) will help you?
username_0: I'm unable to visit Blogspot in China; it's blocked :(
Thank you for your help. It's not your problem; I am going to figure it out by myself.
username_1: :disappointed: Well, assuming you're using Homebrew, all the blog post says to do is:
1. `brew unlink libtool`.
2. Install the npm package (i.e. `npm install musicxml-to-speaker`).
3. `brew link libtool`.
username_0: @username_1 thank you so much. I fixed it by removing the original libtool (`brew unlink` didn't work, so in the end I removed libtool with `rm`). Here is my solution:
1) Remove libtool from the system (don't worry, it can be installed back using brew).
2) `npm install -g musicxml-to-speaker` (should now succeed).
3) `brew install libtool` to put it back, if necessary.
Status: Issue closed
username_1: :+1: |
stryker-mutator/stryker-net | 860892922 | Title: Subfolders are not created on AzureStorageFile
Question:
username_0: **Describe the bug**
When specifying a subfolder in the azure-storage-url (https://storageaccountname.file.core.windows.net/azurefilesharename/my-project-key-01), it results in an error creating the directories on storage.
I'm not sure if it's a bug or a feature. Let me describe the use case we're looking for.
We have several projects which should upload their mutation test reports. As they are microservices or individual libraries, they are usually independent artifacts and have their own release cycles and their own baselines.
**Azure Storage / Azure Files**
Azure File Storage Immutable Url: https://storageaccountname.file.core.windows.net
Azure File Share Name: azurefilesharename
Resulting URL: https://storageaccountname.file.core.windows.net/azurefilesharename
So the resulting URL is the base for all projects.
Every project has a key like `my-project-key-01`, ...
Resulting URL: https://storageaccountname.file.core.windows.net/azurefilesharename/my-project-key-01, ...
**Predefined `_outputPath` directories from Stryker.NET**
OutputPath: `StrykerOutput/Baselines/<version>`
Parameter Version: 2.0.0-beta0001
Resulting URL: https://storageaccountname.file.core.windows.net/azurefilesharename/my-project-key-01/StrykerOutput/Baselines/2.0.0-beta0001
I'm happy to contribute a PR again on this, if this is expected behavior.
**Logs**
```
[16:25:43 DBG] Could not locate the current branch name, using project version instead: 2.0.0-beta0270
[16:25:44 DBG] No baseline was found at https://xxx.file.core.windows.net/yyy/the-sub-folder/StrykerOutput/Baselines/dashboard-compare/2.0.0-beta0270/stryker-report.json
[16:25:44 DBG] Creating directories for file https://xxx.file.core.windows.net/yyy/the-sub-folder/StrykerOutput/Baselines/dashboard-compare/2.0.0-beta0270/stryker-report.json
[16:25:44 DBG] Creating directory https://xxx.file.core.windows.net/yyy/the-sub-folder/StrykerOutput/
[16:25:44 ERR] Creating directory failed with status Forbidden and message <?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:ff76b1ea-401a-0066-0495-3300f5000000
Time:2021-04-17T14:25:44.3194645Z</Message><AuthenticationErrorDetail>sr is mandatory. Cannot be empty</AuthenticationErrorDetail></Error>
[16:25:44 INF] Time Elapsed 00:14:19.0889376
[16:25:44 INF] The final mutation score is 57.59 %
```
**Expected behavior**
All subfolders are created if they do not exist.
**Desktop (please complete the following information):**
- OS: Windows / Linux
- Type of project: Core
- Framework Version: .NET SDK 5.0.202
- Stryker Version: 0.22.3 (latest)
Answers:
username_1: I suppose the docs are unclear. The subfolder must already exist, Stryker does not create it. Stryker only creates its own folders within the subfolder.
username_0: OK, then the question is whether this has to stay like this. It is very cumbersome to create an Azure File Share upfront per project instead of creating just one.
We could add the `IStrykerOptions.ProjectName` to the URI, so that it results in one of the following:
Possible Azure File Storage URI:
- https://storageaccountname.file.core.windows.net/azurefilesharename (must exist, as is)
- https://storageaccountname.file.core.windows.net/azurefilesharename/custom-sub-folder (must exist upfront, as is)
And include the `IStrykerOptions.ProjectName`:
- Option a) `/{ProjectName}/StrykerOutput/Baselines/2.0.0-beta0001`
- Option b) `/StrykerOutput/{ProjectName}/Baselines/2.0.0-beta0001`
- Option c) `/StrykerOutput/Baselines/{ProjectName}/2.0.0-beta0001`
To retain backward compatibility we could say that if `ProjectName` is null or empty, we skip this URL part.
username_1: You won't have to create a whole new file share, sorry if that wasn't clear. Only the subfolder needs to be created inside the file share.
I am fine with allowing the project name to be used in the file share path. I think option a fits best, as it would be comparable to the Stryker dashboard reporter and baseline provider. I would say if the project name does not exist, use StrykerOutput; if it does exist, leave out the StrykerOutput part of the path.
`/{ProjectName|StrykerOutput}/Baselines/2.0.0-beta0001`
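A rough sketch of that fallback (hypothetical; `projectName` and `version` stand in for Stryker's real option values, and this is not Stryker's actual code):
```csharp
// Hypothetical sketch of the proposed fallback, not Stryker's implementation.
string BuildBaselinePath(string projectName, string version)
{
    // Use the project name when set; otherwise fall back to "StrykerOutput".
    var root = string.IsNullOrWhiteSpace(projectName) ? "StrykerOutput" : projectName;
    return $"{root}/Baselines/{version}";
}
```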
username_0: Ok, I will submit a PR.
Status: Issue closed
|
Shippable/admiral | 226350368 | Title: 8413.2 Change POST /api/db to return early
Question:
username_0: Shippable/pm#8413
This route will soon be called directly by the UI. Because it is a route that takes a long time to complete, it should return the response early - after `_setProcessingFlag` - and return the db object from systemSettings in the response. The response code should also be a 202, instead of 200.<issue_closed>
Status: Issue closed |
bcgit/bc-java | 1126863687 | Title: Incompatible GCM - Version 1.71b07
Question:
username_0: Testing the jars for issue #1100, I faced some incompatibility with other (D)TLS libraries. My [Eclipse/Californium Interoperability Tests](https://github.com/eclipse/californium/tree/master/californium-tests/californium-interoperability-tests) succeed with BC 1.70, but fail with 1.71b07.
- OpenSSL 1.1.1 11 Sep 2018
- GnuTLS 3.5.18
- MbedTLS 2.27.0
Failing handshakes:
openssl-client, bc-server:
[psk_gcm.pcapng.gz](https://github.com/bcgit/bc-java/files/8021449/psk_gcm.pcapng.gz)
bc-client, openssl-server:
[server_psk_gcm.pcapng.gz](https://github.com/bcgit/bc-java/files/8021450/server_psk_gcm.pcapng.gz)
For me the captures show that the GCM MAC is calculated wrongly. The next step from my side would be a test which compares the output of SunJCE GCM with BC. I hope that makes nailing down the problem easier.
Answers:
username_0: My current state is puzzling:
I added code to decrypt the record with SunJCE as well, to get the failing data set:
```
Provider BC version 1.7099 AES/GCM/NoPadding
javax.crypto.AEADBadTagException: mac check in GCM failed
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$AEADGenericBlockCipher.doFinal(BaseBlockCipher.java:1538)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(BaseBlockCipher.java:1173)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2260)
at org.eclipse.californium.scandium.dtls.cipher.AeadBlockCipher.jreDecrypt(AeadBlockCipher.java:134)
at org.eclipse.californium.scandium.dtls.cipher.AeadBlockCipher.decrypt(AeadBlockCipher.java:84)
at org.eclipse.californium.scandium.dtls.DtlsAeadConnectionState.decrypt(DtlsAeadConnectionState.java:176)
at org.eclipse.californium.scandium.dtls.Record.decodeFragment(Record.java:778)
at org.eclipse.californium.scandium.DTLSConnector.processRecord(DTLSConnector.java:1722)
at org.eclipse.californium.scandium.DTLSConnector$15.run(DTLSConnector.java:1600)
at org.eclipse.californium.elements.util.SerialExecutor$1.run(SerialExecutor.java:290)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Provider SunJCE version 11 AES/GCM/NoPadding
key: <KEY>
nonce: 7C3FF9DE76C289DC177EBD47
aad: 000100000000000117FEFD0017
e-data: F472A41406A6778C36522577C35ED70B54A179A0EAF0401C1CCAAD71FE2FA73453A13555A793EA
e-off: 8, e-len: 39
data: 4102B17D01B474657374FF48656C6C6F2C20436F415021
```
If I retest with that data in an isolated test, it works with BC 1.71b07.
Do the tars contain the sources? Maybe I can use that to step in with the debugger.
username_0: Within the Californium tests, the failure seems to depend on reuse of the Cipher when it is mixed between DECRYPT_MODE and ENCRYPT_MODE. But until now I was still not able to reproduce the effect in an isolated example.
Maybe you have an idea what may fail if a Cipher is reused for DECRYPT_MODE and ENCRYPT_MODE.
(Just to say: that reuse works reliably with SunJCE and BC 1.70, so I don't think the reuse itself is the issue. I rather guess that in some very special cases some data is kept and not replaced by `Cipher.init(int opmode, Key key, AlgorithmParameterSpec params)`.)
username_0: ```
import java.security.GeneralSecurityException;
import java.security.Provider;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
public class GCMTest {
public static void main(String[] args) throws GeneralSecurityException {
Provider provider = new BouncyCastleProvider();
Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding", provider);
// remove comment to use SunJCE
// cipher = Cipher.getInstance("AES/GCM/NoPadding");
byte[] kr = hex2byte("756230B7D9F5A530E6FE482C0A593A70");
byte[] n = hex2byte("44DE810CFE1DB4A1CA0580FD");
byte[] a = hex2byte("000100000000000016FEFD0018");
byte[] e = hex2byte("F6905CCCB9B621EC658EA459454DA1FA0A82EB53935B274CE317F09B09BB7C2F03D8D8DD68EFFD6F");
SecretKey secretKeyR = SecretUtil.create(kr, "AES");
decrypt(cipher, secretKeyR, n, a, e);
byte[] kw = hex2byte("BF994F88D17ECA6BC95C3D6B47555605");
n = hex2byte("3C123F990001000000000000");
a = hex2byte("000100000000000016FEFD0018");
byte[] d = hex2byte("1400000C000300000000000CE04106C294FEFB84F8AAD672");
SecretKey secretKeyW = SecretUtil.create(kw, "AES");
encrypt(cipher, secretKeyW, n, a, d);
n = hex2byte("44DE810CFE1DB4A1CA0580FE");
a = hex2byte("000100000000000117FEFD0017");
e = hex2byte("75EDAF6FF3BDB7235EC6FDBDB086D1E548340A18B452F2B32D5C3EF3392347288BD29AD9CADD38");
decrypt(cipher, secretKeyR, n, a, e);
}
public static byte[] encrypt(Cipher cipher, SecretKey key, byte[] nonce, byte[] aad, byte[] data)
throws GeneralSecurityException {
GCMParameterSpec parameterSpec = new GCMParameterSpec(16 * 8, nonce);
cipher.init(Cipher.ENCRYPT_MODE, key, parameterSpec);
cipher.updateAAD(aad);
return cipher.doFinal(data, 0, data.length);
}
public static byte[] decrypt(Cipher cipher, SecretKey key, byte[] nonce, byte[] aad, byte[] data)
throws GeneralSecurityException {
GCMParameterSpec parameterSpec = new GCMParameterSpec(16 * 8, nonce);
cipher.init(Cipher.DECRYPT_MODE, key, parameterSpec);
cipher.updateAAD(aad);
return cipher.doFinal(data);
}
private static byte[] hex2byte(String hex) {
[Truncated]
}
```
Yep! That reproduces the issue. If BC 1.71b07 is used, the following exception is thrown; with SunJCE, no exception occurs.
```
Exception in thread "main" javax.crypto.AEADBadTagException: mac check in GCM failed
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$AEADGenericBlockCipher.doFinal(BaseBlockCipher.java:1538)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(BaseBlockCipher.java:1173)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2202)
at io.cloudcoap.modem.GCMTest.decrypt(GCMTest.java:58)
at io.cloudcoap.modem.GCMTest.main(GCMTest.java:41)
```
username_0: I was just now also looking at the GCMCache. I think the `cipher.init` is missing when an entry is reused.
username_0: I tried to understand the idea of `GCMCache`. I'm not sure which usage pattern it is designed for.
Let me try to explain the considerations and implementation in the DTLS 1.2 layer of [Eclipse/Californium](https://github.com/eclipse/californium/tree/master/scandium-core).
In contrast to TCP, UDP comes with less state. That makes it possible to have many more "connections" (called "relations" there), even if each one is only rarely used. In order to achieve high throughput, [Eclipse/Californium](https://github.com/eclipse/californium/tree/master/scandium-core) uses a multi-threaded thread pool. A common setup for 3.x is to have as many threads as cores (e.g. 4-64) and a huge number of relations (e.g. 10000 up to 1000000). Many functions such as `Cipher.getInstance(String)` take too much time, therefore we reuse the instances. Years ago we therefore discussed two variants: one Cipher per thread with key pairs per relation, or one Cipher pair (including the key pair) per relation. We decided to go for the first (the performance penalty in our benchmark for that was about 10%). With that, we have a couple of Ciphers bound as thread-locals, and we map the relation keys onto them freely on each use. Encryption/decryption therefore always starts by looking up the thread-local cipher, then initializing the cipher with the relation key and nonce, updating the AAD, and encrypting/decrypting the data (doFinal); then the thread may return to the pool and is free for the next encryption/decryption operation with the next relation and a different relation key.
So, what is your intended usage pattern?
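For illustration, a minimal sketch of the pattern described above (a hypothetical class, not Californium's actual code):
```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: one Cipher per pool thread, re-initialized on every use
// with the current relation's key and nonce.
public final class ThreadLocalGcm {
    private static final ThreadLocal<Cipher> CIPHER = ThreadLocal.withInitial(() -> {
        try {
            return Cipher.getInstance("AES/GCM/NoPadding");
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    });

    static byte[] encrypt(SecretKey relationKey, byte[] nonce, byte[] aad, byte[] data)
            throws Exception {
        Cipher cipher = CIPHER.get(); // thread-local lookup instead of getInstance()
        cipher.init(Cipher.ENCRYPT_MODE, relationKey, new GCMParameterSpec(128, nonce));
        cipher.updateAAD(aad);
        return cipher.doFinal(data);
    }
}
```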
username_1: Your usage pattern should be fine, although keeping one Cipher per key has the potential to perform better as I'll discuss below.
Regarding GCMCache, our motivation was to avoid reinitialization costs for two things when the algorithm and key haven't changed: the underlying block cipher initialization (e.g.. the AES key schedule) and the GCMMultiplier (should be a function only of the cipher and key).
For quite a while we have supported re-initializing GCMBlockCipher with a null key - meaning re-use the existing key. However, this pattern is not valid for the provider Cipher implementations (AFAICT), so we also need to be able to try and get re-use if a repeated key is provided to Cipher.init. Then there are cases like yours (it sounds like), where a single Cipher might even be switching between a small set of keys. So we were looking at a sort of general auto-detection for these cases at the GCMBlockCipher level. It ought to cover all possible usages (if it were working correctly).
In our FIPS library our internal API makes it even harder to try and convert same-key Cipher initialization to null-key same-instance GCMBlockCipher initialization.
Of course the most obvious place where the performance issue arises (of not re-using keys) is (D)TLS with a GCM cipher suite (there are potentially many records using the same key), to the point that users have complained.
GCMCache is not set in stone; there's obviously some bug(s) and it may well get removed and replaced with a different approach in any case. We're happy to take suggestions along those lines.
username_0: Our decision not to use one cipher per key was mainly based on the consideration that we want to support the possibility of having "millions of keys". In fact, tests/benchmarks with about 1,500,000 relations work quite well. I'm not sure if 3,000,000 ciphers would work as well.
(We will see if the weak-reference/weak-hash-map works as intended, also for a large number of entries. My experience in the past with such GC dependencies has not been that good.)
username_1: I agree that won't hold up because of GC dependency and thread contention. I am leaning towards allowing re-use only within some "context" object, and that would likely correspond to a Cipher instance in the provider, and be very small and not GC-managed.
username_1: Can you clarify what is the biggest impact of holding extra Cipher instances for you? It might be something else to review.
username_0: Our consideration was the amount of required heap.
username_2: New beta now up. https://www.bouncycastle.org/betas 171b09.
username_0: Updated, tests are running. I will report the results as soon as I get them.
(For the reproducing example it works! Cool.)
username_0: Not sure if that was also the case before... now I have 20-40s startup time.
I will need to spend more time investigating. Unfortunately I'm busy today and tomorrow.
But I will try to analyze that on Friday or over the weekend.
I'll let you know if I have some results.
username_0: Apart from that startup time, the tests are successful.
Status: Issue closed
username_0: Some tests show that this is also the case for 1.70. There is a startup delay which occurs randomly, not always.
If I find some time to collect more details, and the details point to BC at all, I will open another new issue.
username_2: Just a suggestion on this one, it might be entropy exhaustion on the platform while it's seeding the internal DRBG.
username_0: Without BC this takes only milliseconds; with BC it's sometimes milliseconds as well and sometimes much more, like the 34s above. I guess for Californium I will find a work-around. But it's interesting that this varies so much, and of course, times as large as 34s I would consider a "bug".
I will collect some more info and open a new issue when I'm ready.
username_0: That's about 30s! On Ubuntu 18.04 running on an AMD® Ryzen 7 2700 eight-core processor × 16.
I have my doubts about 30s just for randoms. As written above, I am still collecting information. Using AES/ECB seems to eliminate it, AES/CBC as well. CCM is running, and GCM afterwards.
username_2: Actually at this point it sounds a bit like jar validation, but I've got no idea what would be causing such a wild variation in time...
username_0: Before that call of `Cipher.getMaxAllowedKeyLength("AES")`, a `org.bouncycastle.jce.provider.BouncyCastleProvider` and a `org.bouncycastle.jsse.provider.BouncyCastleJsseProvider` are created. A `KeyFactory.getInstance("EdDSA", provider)` is called as well.
I would guess some BC-internal thread sync issue. But we will see; it's still puzzling.
username_0: My first work-around idea, to use "AES/ECB/NoPadding" instead of "AES", didn't work.
Any recommendations from your side on how to enable logging in BC in order to get more internal details (and hopefully not too many ;-) )?
username_2: It's kind of in the system here. Try removing the SunJSSE.
username_0: is the root cause. That DRBG offers some configurability, but I wasn't able to overcome that random start penalty.
username_0: Removing
```
Security.removeProvider("SunJSSE");
Security.removeProvider("SunJCE");
```
doesn't help.
username_0: Replacing `new SecureRandom()` with `SecureRandom.getInstance("DEFAULT", "BC")` didn't help either.
(Scandium creates a reused thread-local instance for most crypto objects. So there are a couple of `SecureRandom`s, but not masses of them.)
What I found:
"/dev/random" is very, very slow on my machine, and `SecureRandom.getInstanceStrong()` is also very, very slow.
username_3: For most use cases, `/dev/urandom` should be sufficient. What happens if you configure `java.security` to use that instead of `/dev/random`?
username_0: I tried that with
```
Security.setProperty("securerandom.source", "file:///dev/urandom");
```
but that changed close to nothing. My interpretation is that this is caused by `SecureRandom.getInstanceStrong()` having precedence (see [DRBG.createInitialEntropySource()](https://github.com/bcgit/bc-java/blob/master/prov/src/main/java/org/bouncycastle/jcajce/provider/drbg/DRBG.java#L93-L133)).
For me it's not that clear how exactly that works on startup. My impression is that it sometimes gets stuck for seconds, especially if it's started over and over. That makes it very hard to really test, because it varies very much.
username_2: Okay, try:
securerandom.source=file:/dev/urandom
and
securerandom.strongAlgorithms=NativePRNGNonBlocking:SUN
You should do this in the java.security file; chances are that by the time your test is running it's too late. With BC 1.70 it's the second setting that makes the most difference.
Also if your platform supports rng-tools, you should find installing it will make a difference.
username_0: Yes, exactly; preventing the use of the really slow "NativePRNGBlocking" was also my idea.
Though I load BC only on demand/configuration via reflection, it works in my case also when I set it up from the code before creating the provider instances. At least "on my machine". My feeling is that this is also the root cause for issue #1100. The 3s startup time on Android which I get isn't that bad, but 10s and more, as a user reported, seems to be more related to the random.
I'm still not sure if I fully understand the implementation.
I guess that using the (really) strong random also to initialize the [HybridSecureRandom](https://github.com/bcgit/bc-java/blob/master/prov/src/main/java/org/bouncycastle/jcajce/provider/drbg/DRBG.java#L377-L389), and not only to reseed when enough randomness is available, causes "randomly" very long startup times. Maybe using the strong one only to gather the seed for the reseed is a better option.
username_2: It's tempting, but it's worth considering what this would say about the quality of any secret keys that get generated. If you're generating a secret key you want it to be a good one, because if it's not, and there's a predictable way of making that happen, someone will find a way to exploit it. It's the cost of security. That said, there might be a few ways to prime up the entropy on these platforms and it would be worth investigating those - you can never have too much!
username_0: The current implementation will randomly get stuck in initialization, in some cases for up to 60s.
From my point of view, that causes availability problems. Starting up with a random with less entropy, it cannot be excluded that this becomes insecure (but it is not that easy to break). And the proposal is to reseed from a "strong random" once enough entropy is available. So the difference would be "no function" versus "function whose insecurity cannot be excluded", during the startup phase only.
With the current implementation, one can only choose between randomly large startup times or "weaker reseeding".
One pain point seems to be the blocking behaviour of `SecureRandom.getInstanceStrong()`.
So, if I put it together, I think:
- a configurable "initial timeout", which waits that long to gather the entropy, but limits the blocking startup to that time
- a reseed using the strong entropy, performed frequently
would be a nice solution. A timeout of 0 would then be equivalent to the current behaviour.
username_3: Of course it depends on your use case, but I don't see a security problem with using non-blocking `/dev/urandom`. Yes, you can be even _more_ secure by waiting for "better" entropy, but what you get should be "good enough" from a cryptographic point of view.
I'm not aware of any attacks against keys obtained this way. If anybody knows differently, please post here.
username_2: There is a BC property org.bouncycastle.drbg.entropysource that can be used to provide a custom entropy source, you could try a "hybrid scheme" through that but I would be extremely careful and do some statistical analysis to make sure you're getting what you need. See org.bouncycastle.jcajce.provider.drbg.DRBG for details. Finding a way to temporarily boost entropy on startup (there's usually a way, although it may vary depending on environment), would be a better solution though.
I'm exaggerating the case a bit here, but it's worth keeping in mind that you can produce a 256-bit "random" key from a single byte by using SHA-256; the thing is, after a few keys, you might start noticing that some of the 256-bit "random" strings look a bit familiar.
username_0: Yes. For now I don't have a compile-time dependency and start BC only via reflection on startup.
I can add a separate module which then contains such a compile-time dependency on BC in order to implement such an entropy source.
But I still have the feeling that the trade-off between:
- a startup delay that cannot be ruled out (up to 60s)
- randomness issues that cannot be ruled out
could be better configured with an "initial timeout".
It's basically this line
[buildHMAC(new HMac(new SHA512Digest()), baseRandom.generateSeed(32), false)](https://github.com/bcgit/bc-java/blob/master/prov/src/main/java/org/bouncycastle/jcajce/provider/drbg/DRBG.java#L388)
where the `baseRandom.generateSeed(32)` may be replaced by a function which waits for such strong entropy with a timeout. E.g. if the startup delay could be limited to 3s, in many cases that will be enough to gather strong entropy; in some cases it may be less, but then the next reseed will provide better entropy 60s later. And those who want to be "safe" and want to "wait" may use a timeout of 0.
Such a function would not call `generateSeed(32)` but would instead wait to see whether the similar `EntropyGatherer` can provide some entropy.
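A minimal sketch of that proposal (hypothetical names; an illustration of the idea, not BC code):
```java
import java.security.SecureRandom;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: bound the blocking generateSeed() by a timeout and fall
// back to a weaker source; a later reseed would then use the strong entropy.
final class BoundedSeeding {
    static byte[] initialSeed(SecureRandom strong, SecureRandom fallback, long timeoutMillis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<byte[]> seed = executor.submit(() -> strong.generateSeed(32));
            return seed.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (Exception timeoutOrFailure) {
            return fallback.generateSeed(32); // weaker seed now, strong reseed later
        } finally {
            executor.shutdownNow();
        }
    }
}
```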
username_2: Actually, a better way of dealing with this would be to provide your own SecureRandom for getInstanceStrong(). That would probably be the best way to do something like this and would give you the most control. No need to change either the provider or craft something to inject.
For our part, we need to stick with the blocking behavior; we are in no position to make a call on whether it's safe not to. |
adobe/S3Mock | 379671789 | Title: s3mock 2.1.0 fails to start with
Question:
username_0: While s3mock 2.0.11 works well in our tests, updating to 2.1.0 makes the s3mock startup fail with
```
09:15:27.511 INFO o.s.boot.SpringApplication - Starting application on mescalin with PID 377 (started by username_0 in /path/to/project)
09:15:27.512 INFO o.s.boot.SpringApplication - No active profile set, falling back to default profiles: default
09:15:28.324 WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.support.BeanDefinitionOverrideException: Invalid bean definition with name 'httpRequestHandlerAdapter' defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]: Cannot register bean definition [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class]] for bean 'httpRequestHandlerAdapter': There is already [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration$EnableWebMvcConfiguration; factoryMethodName=httpRequestHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]] bound.
09:15:28.334 INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener -
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
09:15:28.336 ERROR o.s.b.d.LoggingFailureAnalysisReporter -
***************************
APPLICATION FAILED TO START
***************************
Description:
The bean 'httpRequestHandlerAdapter', defined in class path resource [org/springframework/data/rest/webmvc/config/RepositoryRestMvcConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class] and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true
```
For us 2.0.11 is sufficient right now (i.e. it's not a problem for us), but I still wanted to let you know about this. If you are sure that things are working and there's evidence that it's just a classpath issue on our side you can also just close this ticket as invalid.
Answers:
username_1: As the 2.1.0 release includes a dependency upgrade to Spring Boot 2.1.0, I would also guess that it's a classpath problem. Could you check if you have other (older) Spring dependencies on the classpath?
username_2: S3Mock 2.2.0 was just released that now includes support for [TestContainers](https://www.testcontainers.org/).
If there is a possibility of running Docker during your build, I would highly suggest migrating to TestContainers.
Using S3MockApplication directly or through one of the unit test modules will always carry the risk of breaking because of dependencies that are pulled in by the application that is being tested, and auto-configuration that is added through Spring Boot updates in the S3Mock.
Using Docker in your tests will insulate both applications' dependencies and application life cycles.
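For illustration, a minimal Testcontainers sketch (the image tag and port 9090 are assumptions based on S3Mock's defaults; adapt them to your setup):
```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// Hypothetical sketch: run S3Mock in Docker via Testcontainers' generic API.
class S3MockContainerSketch {
    public static void main(String[] args) {
        try (GenericContainer<?> s3mock =
                new GenericContainer<>(DockerImageName.parse("adobe/s3mock:2.2.0"))
                        .withExposedPorts(9090)) { // 9090 = assumed HTTP port
            s3mock.start();
            String endpoint = "http://" + s3mock.getHost() + ":" + s3mock.getMappedPort(9090);
            System.out.println("Point your S3 client at " + endpoint);
        }
    }
}
```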
Status: Issue closed
|
bioconda/bioconda-recipes | 163821950 | Title: bioconda-recipes/recipes/entrez-direct
Question:
username_0: When I install entrez-direct with conda
`conda install entrez-direct -c bioconda`
it installs the EDirect tools one folder above the bin folder. Same with creating an environment.
Answers:
username_1: My attempt at fixing it: https://github.com/bioconda/bioconda-recipes/pull/1906
Status: Issue closed
|
softprops/diffset | 632186514 | Title: Unexpected input 'scala_files', valid inputs are ['base']
Question:
username_0: ```
##[warning]Unexpected input 'scala_files', valid inputs are ['base']
##[warning]Unexpected input 'java_files', valid inputs are ['base']
##[warning]Unexpected input 'cfn_files', valid inputs are ['base']
Run username_1/diffset@v1
with:
scala_files: **/*.scala
java_files: **/*.java
cfn_files: **/*.cf.yml
env:
GITHUB_TOKEN: ***
```
Config
```
steps:
- name: Diffset
id: diffset
uses: username_1/diffset@v1
with:
scala_files: '**/*.scala'
java_files: '**/*.java'
cfn_files: '**/*.cf.yml'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
Answers:
username_1: 👋 Beyond the warning, are you seeing an absent list of files?
The warning is likely due to a recent change to the GitHub runners, presumably in an attempt to make best-effort inferences from [`action.yml`](https://help.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions) files more visible.
In the case of this action, there is one statically declared input named [`base`](https://github.com/username_1/diffset/blob/8320c0c4c0a484f48a69fe38b271476f1103f85f/action.yml#L10-L12), with the remainder being dynamically named declarations of file sets, essentially to let you give a name to a user-defined set of prefiltered files that have changed.
If the warning is doing no harm I don't think there is an action to take. This feels like a limitation of action declaration properties at the moment.
If the warning is causing harm, you can move from using inputs to direct env variables. Since `with` declarations are [shorthand for environment variables prefixed with `INPUT_`](https://github.com/username_1/diffset/blob/8320c0c4c0a484f48a69fe38b271476f1103f85f/src/util.ts#L14-L15), you could, if you wish, leverage that knowledge to do something like this.
```yaml
- name: Diffset
id: diffset
uses: username_1/diffset@v1
env:
INPUT_SCALA_FILES: '**/*.scala'
INPUT_JAVA_FILES: '**/*.java'
INPUT_CFN_FILES: '**/*.cf.yml'
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
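For illustration, this is roughly how such dynamically named inputs can be read (a simplified sketch of the convention, not the action's exact code):
```typescript
// Sketch: `with:` entries surface as INPUT_<UPPERCASED_NAME> environment
// variables, so setting those variables directly under `env:` is equivalent.
function getInput(name: string): string {
  return process.env[`INPUT_${name.toUpperCase()}`] ?? "";
}

// e.g. getInput("scala_files") reads INPUT_SCALA_FILES
```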
If there is no harm caused, I suspect the extra effort might be wasted.
username_0: I have not seen any issue so far, but I did not look much deeper; I just felt it was worth calling out the warnings.
username_1: Gotcha. I'll leave this open for now in case you find any actual behavioral issues.
username_1: There's a trail of similar issues in other action repos linking back to https://github.com/octokit/request-action/issues/26, so I'll leave that link here as a reference. |
fboyle2001/SixthFormApp | 339215381 | Title: Cleanup Date Displaying
Question:
username_0: Display dates in format dd/mm/yyyy and ignore time since the user cannot actually set it.
Answers:
username_0: Fixed in phonegap-experimental commit bc63e84f12c1165099040a4ade0e098160831317
Status: Issue closed
username_0: Resolved in commit da4e49438e339fc73ea46d7c16828d5de432f82e
Status: Issue closed
|
rust-lang/rust | 831286078 | Title: rustdoc: it should be possible to leave "Methods From Deref" empty, as was previously the default behaviour
Question:
username_0: For example, I would much rather have the documentation for my crate Staticvec look [as it did here](https://docs.rs/staticvec/0.10.5/staticvec/struct.StaticVec.html#deref-methods), as opposed to pulling in the docs for every single slice method [as it does now.](https://docs.rs/staticvec/0.10.6/staticvec/struct.StaticVec.html#deref-methods)
Answers:
username_0: Are there any details on when precisely this changed, also? I've noticed that even though the auto-generated docs on `docs.rs` previously left the `Methods From Deref` section empty, it was seemingly always there in stuff like the official `libstd` docs for `Vec`, so presumably there has always been some kind of way of controlling `rustdoc`'s behaviour in this regard, unless I'm missing something.
username_1: Vec derefs directly to slice - maybe the user crates you're talking about always deref to Vec?
username_1: Hmm, I'm a little concerned the methods will be harder to find that way. We get a lot of complaints that the "Trait Implementations" section is hard to read because you have to click on the name of the trait to see what methods are available.
username_0: Yeah, that makes sense. Honestly just showing `Trait Implementations` before `Methods From Deref` in any case where both exist would likely pretty much cover the kind of thing I was mostly concerned about by itself anyways.
username_0: Sure, I'll take a look at it tomorrow.
username_0: Got busy with some other stuff, but I've now opened a PR that make those changes, which do accomplish what you thought they would.
Status: Issue closed
|
ocornut/imgui | 223782272 | Title: Popup with rounded corners?
Question:
username_0: Popups get closed when clicking outside them, and modals have rounded corners. Is there a way to get both features? Click outside to close, and rounded corners?
Answers:
username_1: Sorry, it is currently hardcoded that BeginPopup() pushes a zero rounding. We could add another variable to the Style structure, but I was hoping to reduce the complexity of the styling data whenever possible, because everything we add we generally cannot remove. Is it something that matters that much to you? How does your style look?
username_0: No worries. I will manage by using normal windows, in which I can get rounded corners. I'll post a screenshot soon when our app is ready :)
Status: Issue closed
username_1: @username_0: This is now supported via the `style.PopupRounding` setting! |
mulesoft-labs/raml-for-jax-rs | 214406238 | Title: Generating toString, hashCode and equals for Java classes generated from RAML DataTypes
Question:
username_0: It is possible to generate Java classes with toString, hashCode and equals methods from JSON schemas (described [here](https://github.com/mulesoft-labs/raml-for-jax-rs/tree/release/2.0.0/raml-to-jaxrs/examples/maven-examples/simple-json-example)). Is this possible when Java classes are generated from RAML DataTypes?
Answers:
username_1: @username_0: you should be able to use RAML 1.0 annotations along with those [plugins](https://github.com/mulesoft-labs/raml-for-jax-rs/tree/release/2.0.0/raml-to-jaxrs/jaxrs-code-generator). I hope that helps.
username_2: I think this one is important. Otherwise, those generated files need to be modified, which defeats the purpose of generation a bit.
username_3: the raml-java-tools is coming with plugins to allow this
username_3: equals and hashCode are in for ObjectTypeDeclaration. I'll do unions later today.
username_3: unions are done.
The plugin can be set in the pom.xml, or you can set it on the API or type directly (where you can specify a list of field names to consider). I'll do toString() in a separate plugin.
username_3: The toString plugin for types is called core.toString.
username_1: FYI, those plugin options are documented here: https://github.com/mulesoft-labs/raml-java-tools/blob/19fe0a1b7e2dcbaa82a9d85eff57784f8db53779/raml-to-pojo/README.md
Status: Issue closed
|
JabRef/jabref | 242965216 | Title: The LibraryOfCongress test started failing recently
Question:
username_0: Currently it throws a 500 error.
We should check why, and try to fix it, if possible.
Refs. https://github.com/JabRef/jabref/pull/3012
Answers:
username_1: Seems like a problem with the lccn.loc.gov website, as I can't even get it to work manually:
http://lccn.loc.gov/2010045158
username_2: Should work again now?
Status: Issue closed
|
Exawind/amr-wind | 1041684033 | Title: AMReX_SetupCUDA is deprecated for CMake >= 3.20
Question:
username_0: I have been building AMR-Wind on Summit as part of my testing for Ascent. With CMake version 3.20, I am getting the warning
`AMReX_SetupCUDA is deprecated for CMake >= 3.20: it will not be processed!`
and ultimately have link issues with CUDA symbols.
AMReX appears to have changed their CMake-ry for CUDA with this change: https://github.com/AMReX-Codes/amrex/pull/2012
I tried modifying `cmake/amr-wind-utils.cmake` to remove the lines
```
if (AMR_WIND_ENABLE_CUDA)
include(AMReX_SetupCUDA)
endif()
```
but this did not work for me.
Ascent requires CMake 3.20 and above. That said, I have a workaround for now, which is to build Ascent with CMake 3.20 and AMR-Wind with 3.18 and the two still appear to work together.
Answers:
username_1: Hello. I'm using CMake 3.21.3 on Summit for AMR-Wind and I do not experience link issues. I always see the `AMReX_SetupCUDA` warning. Can you provide the link error and any more information? |
SublimeText/AFileIcon | 546776736 | Title: Override icons
Question:
username_0: Is there an easy way to override icons?
Say I want to override all PHP icons and replace them with some alternatives of my own. I have tried the override approach that is documented somewhere, where I place an `A File Icon` folder in `Packages` and then place my PNGs into the corresponding subfolder, but it does not change the icons; I guess it is still grabbing the icons from the packaged default package. Any way to work around this?
Answers:
username_1: Yes, you can. Copy icons from `Packages\zzz A File Icon zzz` to `Packages\User` and modify them there.
In your case the files are `file_type_php.png`, `[email protected]` and `[email protected]` (from `single` or from `multi` folder).
username_0: Thanks, works perfectly.
One last question: Is it important to copy the entire `Packages/zzz A File Icon zzz` folder to `Packages/User` or can I now delete all files in there (except my new PHP icons)?
username_1: You don't need to copy the entire folder. You can overwrite only some icons.
Status: Issue closed
username_0: Thanks a lot for your help. Problem solved! |
GSS-Cogs/family-trade | 1039360474 | Title: None
Question:
username_0: Updates made. The transformation has also been uplifted to use CSVCubed. Links to Staging and Beta below:
1. [Staging](https://staging.gss-data.org.uk/cube/explore?uri=http%3A%2F%2Fgss-data.org.uk%2Fdata%2Ftrade%2Fons-pink-book-trade-in-services%2Fthe-pink-book-trade-in-services.csv%23dataset-catalog-entry)
2. [Beta](https://beta.gss-data.org.uk/cube/explore?uri=http%3A%2F%2Fgss-data.org.uk%2Fdata%2Ftrade%2Fons-pink-book-trade-in-services%2Fthe-pink-book-trade-in-services.csv%23dataset-catalog-entry) |
huggingface/transformers | 775249141 | Title: No such file or directory: '/usr/local/lib/python3.6/dist-packages/tqdm-4.41.1.dist-info/METADATA'
Question:
username_0: - Platform: Colab
- Python version:
- PyTorch version (GPU?):GPU
- Tensorflow version (GPU?):GPU
- Using GPU in script?:Yes
examples/token-classification: @stefan-it
The problem arises when using:
* [ ] the official example scripts: I'm following the tutorial "23_Transfer_Learning_With_ChemBERTa_Transformers_Pt_2.ipynb" to reproduce the results. However, I got the error message "No such file or directory: '/usr/local/lib/python3.6/dist-packages/tqdm-4.41.1.dist-info/METADATA'". I cannot figure out the reason. The following is the snippet I used:
```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
from rdkit import Chem
!git clone https://github.com/NVIDIA/apex
!cd /content/apex
!pip install -v --no-cache-dir /content/apex
!pip install transformers
!pip install git+https://github.com/seyonechithrananda/simpletransformers.git@pip
!pip install wandb
!cd ..
!git clone https://github.com/seyonechithrananda/bert-loves-chemistry.git
%cd /content/bert-loves-chemistry
import os
import numpy as np
import pandas as pd
from typing import List
# import molnet loaders from deepchem
from deepchem.molnet import load_bbbp, load_clearance, load_clintox, load_delaney, load_hiv, load_qm7, load_tox21
from rdkit import Chem
# import MolNet dataloder from bert-loves-chemistry fork
from utils.molnet_dataloader import load_molnet_dataset, write_molnet_dataset_for_chemprop
tasks, (train_df, valid_df, test_df), transformers = load_molnet_dataset("clintox", tasks_wanted=None)
from simpletransformers.classification import ClassificationModel
import logging
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
```
Any suggestions and help are appreciated! Thank you!
Answers:
username_1: Hey @username_0,
This does not seem to be a bug in Transformers, but rather in `seyonechithrananda/simpletransformers.git`, so I'm not sure this is the correct place to post the issue. I think one has to change the line `from transformers.modeling_albert import ....` to `from transformers.models.albert.modeling_albert import ...` in the respective repo.
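For illustration, the import change looks like this (`AlbertModel` used here as an example symbol):
```python
# Old module layout (transformers < 4.x):
# from transformers.modeling_albert import AlbertModel

# New module layout (transformers >= 4.x):
from transformers.models.albert.modeling_albert import AlbertModel
```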
Status: Issue closed
|
nvs-abhilash/CorrectMe | 388984791 | Title: Adding tests for the package
Question:
username_0: This is a long-term objective, but it would be nice to start thinking about adding tests to the code. I think `pytest` seems easy and intuitive. Let's use this thread to discuss how we would want to go about approaching this issue.
Status: Issue closed |
devmanorg/course | 575535877 | Title: The article about .env is unclear
Question:
username_0: In the third introductory lesson they encounter `.env`, which is explained using a token as the example, but they don't know what a token is.
Answers:
username_1: Wikipedia has a long [article](https://ru.wikipedia.org/wiki/%D0%A2%D0%BE%D0%BA%D0%B5%D0%BD_(%D0%B0%D0%B2%D1%82%D0%BE%D1%80%D0%B8%D0%B7%D0%B0%D1%86%D0%B8%D0%B8)#%D0%A2%D0%BE%D0%BA%D0%B5%D0%BD_%D0%B8_%D1%82%D0%B5%D1%85%D0%BD%D0%BE%D0%BB%D0%BE%D0%B3%D0%B8%D0%B8_%D0%B5%D0%B4%D0%B8%D0%BD%D0%BE%D0%B3%D0%BE_%D0%B2%D1%85%D0%BE%D0%B4%D0%B0) about tokens.
username_0: IMHO it's rubbish, because if you don't already know what it is about, it is impossible to understand anything. That's a common disease of many Wikipedia articles.
username_1: I suggest simply adding a mention of what a token is to the [gist](https://gist.github.com/dvmn-tasks/22b18aafb24a6be5213eb5c6532eaef8) at [step 8](https://dvmn.org/modules/meeting-python/lesson/friend-invitation/#8).
username_1: I removed any mention of it entirely.
Status: Issue closed
|
predixdesignsystem/px-datetime-field | 304540847 | Title: Timezone UTC dropdown: Update from "string" order to numerical order
Question:
username_0: The Etc/GMT+“X” dropdown numbers are not sorted/listed in numerical order - e.g., 1, 10, 11, 12, 2. Seems to be listed in “string” order, might just be the default of the dropdown or how the list was built?
**Screenshot:** *(screenshot image not preserved)*
Status: Issue closed
Answers:
username_1: Fixed in latest datetime-common to order the timezones in order from -14 to +12 instead. Should be picked up by all components on next rebuild. |
aragon/nest | 395415613 | Title: Aragon Nest Proposal: Open Badges
Question:
username_0: # Aragon Nest Proposal: Open Badges
## Abstract
Connected, verifiable credentials represented in portable image files.
Open Badges are visual tokens (not blockchain tokens, but more like a seal, emblem, diploma, or badge) of achievement, affiliation, authorization, or another trust relationship, sharable across the web. Open Badges represent a more detailed picture than a CV or résumé, as they can be presented in ever-changing combinations, creating a constantly evolving picture of a person's lifelong learning.
NEW - [See which products are Open Badges v2.0 (OBv2) certified by IMS Global!](https://www.imsglobal.org/cc/statuschart/openbadges)
Thousands of organizations across the world issue Open Badges, from non-profits to major employers to educational institutions at every level.
The above text is from [openbadges.org](https://openbadges.org/)
Note: I don't have the skills to make it work, only to propose it; you're then free to begin implementing it in a pull request and receive your grants.
## Deliverables
1. An aragonOS-compatible application (the open badges platform) that lets people [Understand](https://openbadges.org/get-started/understanding-badges), [Issue](https://openbadges.org/get-started/issuing-badges), [Earn](https://openbadges.org/get-started/earning-badges) and [Display](https://openbadges.org/get-started/displaying-badges) their badges
2. A template/API where other apps can display badges in a user's profile
3. Compatibility with other apps, to send open badges according to events
4. Voting to create/design new badges, and to send to an user
5. The link to the evidence of a badge's task completion is immutable
6. A badge can be revoked, e.g. if the user needs to update their skills, or for another reason
7. Other needed things; open for suggestions (also, Open Badges may need some updates to be more usable in a decentralized manner)
## Grant size
Funding: from $50k up to $100k in ETH, split into chunks paid out over achieved deliverables.
Success reward: Up to $50k in ANT, given out when all deliverables are ready.
## Application requirements
- Proof of concept of the smart contracts for the ERC721-compatible token. Alternatively, a whitepaper researching the implementation of the whole protocol
- Details of the team members, alongside with their willingness in terms of implication
- Estimated average burn rate for completing the deliverables
- Legal structure to be adopted, if any
## Development timeline
To be discussed or to be proposed by the team requesting funds.
Answers:
username_0: @username_1
username_1: Hi @username_0, thanks a lot for submitting your proposal.
We will issue a post on the things that the Nest program will focus on in 2019. Meanwhile, we are not approving this proposal since open badges are not a priority for the Aragon project.
Thank you for your interest and participation!
Status: Issue closed
|
codalab/chahub | 557654508 | Title: Impossible to sign up
Question:
username_0: When trying to create an account on Chahub, I get the following message:
`Server Error (500)`
This may be the cause of this issue on Codalab academic instance: [#2744](https://github.com/codalab/codalab-competitions/issues/2744) |
gookit/validate | 495273235 | Title: Some validation rules don't take effect
Question:
username_0: # 使用方法
```go
type SetProfileReq struct {
Nickname string `json:"nickname" validate:"" filter:"trim"`
Avatar string `json:"avatar" validate:"required|url" filter:"trim"`
}
func checkSetProfileReq(ctx echo.Context, req *SetProfileReq) (*response.NO, error) {
if err := ctx.Bind(req); err != nil {
return response.NewNO(
response.InvalidArgument,
locale.Message(ctx, "invalid_argument"),
), errors.Wrapf(err, "bind param err")
}
fmt.Println("bind:", req)
if err := ctx.Validate(req); err != nil {
return response.NewNO(
response.InvalidArgument,
locale.Message(ctx, "invalid_argument"),
), errors.Wrapf(err, "validate param err")
}
fmt.Println("validated:", req)
return nil, nil
}
// implementation of the Validate method
type Validate struct {
}
func (Validate) Validate(i interface{}) error {
v := validate.Struct(i)
if !v.Validate() {
return v.Errors
}
return v.BindSafeData(i)
}
```
Avatar uses two rules, required and url, but the url rule does not take effect.
The test result:
```console
bind: &{123nickname111 1}
validated: &{123nickname111 1}
```
Validation passes; the URL format is not checked.
If I switch to another rule, such as required|ip, that one does take effect and the IP format is checked.
# Version info
1.1.3
Answers:
username_1: @username_0 I had a look; the url check uses `url.Parse(s)`.
But something like `url.Parse("123")` does not return an error either...
username_0: @username_1 Yes. I tried it too; it does not return an error and treats the value as a valid path. I'll change this to a regex check then. Thanks.
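For illustration, a stricter check than bare `url.Parse` (a sketch of the idea, not the validator's actual implementation): `"123"` parses fine as a relative reference, so requiring a scheme and host rejects it.
```go
package main

import (
	"fmt"
	"net/url"
)

// isURL additionally requires a scheme and a host, since url.Parse("123")
// succeeds by treating the input as a relative reference.
func isURL(s string) bool {
	u, err := url.Parse(s)
	return err == nil && u.Scheme != "" && u.Host != ""
}

func main() {
	fmt.Println(isURL("123"))                 // false
	fmt.Println(isURL("https://example.com")) // true
}
```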
Status: Issue closed
|
UTRS/utrs | 53569560 | Title: Only allow one appeal at a time
Question:
username_0: Currently more than one appeal can be filed by the same user while one is open. We need to close this gap so we don't overwhelm admins reviewing.
Answers:
username_1: Appeals filed from the same IP are automatically listed and should be possible to disallow; appeals made from different IPs aren't and would need to be matched by some other criteria (e-mail address, probably)
Status: Issue closed
|
pingcap/tispark | 447980843 | Title: cannot read when insert Long.MAX to BIGINT
Question:
username_0: ```
ERROR 1105 (HY000): Other(StringError("[src/coprocessor/codec/error.rs:176]: codec:Io(Custom { kind: UnexpectedEof, error: StringError(\"eof\") })"))
```
Answers:
username_1: Something is wrong with our current implementation of `writeUVarLong`; I am investigating it.
Status: Issue closed
|
nucleos/NucleosUserAdminBundle | 853722710 | Title: Symfony 5 Sonata Admin
Question:
username_0: <!--
Before you open an issue, make sure this one does not already exist.
-->
<!--
If you are reporting a bug, please try to fill in the following.
Otherwise remove it.
-->
### Environment
#### Packages
```
$ composer show --latest
# Put the result here.
```
#### PHP version
```
$ php -v
# Put the result here.
```
## Subject
## Steps to reproduce
## Expected results
## Actual results
Problem 1
- Root composer.json requires nucleos/user-admin-bundle 1.5.x-dev -> satisfiable by nucleos/user-admin-bundle[1.5.x-dev].
- nucleos/user-admin-bundle 1.5.x-dev requires sonata-project/admin-bundle ^3.90 -> found sonata-project/admin-bundle[3.90.0, ..., 3.x-dev] but it conflicts with your root composer.json require (dev-master).
Answers:
username_1: Relies on the next SonataAdminBundle major version: https://github.com/sonata-project/SonataAdminBundle/pull/6476
username_1: Done with #372
Status: Issue closed
|
acceptbitcoincash/acceptbitcoincash | 290282132 | Title: Add 'ZoneAlarm' to the 'Anti-Virus' category
Question:
username_0: Requesting to add 'ZoneAlarm' to the 'Anti-Virus' category.
Details follow:
```yml
- name: ZoneAlarm
url: https://www.zonealarm.com/
img: https://pbs.twimg.com/profile_images/839416101306007552/OfgeZCH5_400x400.jpg
twitter: zonealarm
facebook: ZoneAlarm2017
bch: No
btc: No
othercrypto: No
```
Status: Issue closed |
NVIDIA/DALI | 342549440 | Title: What is the input data type of DALI, and does it differ from the original data loading?
Question:
username_0: => creating model 'resnet50'
Traceback (most recent call last):
File "main.py", line 423, in <module>
main()
File "main.py", line 196, in main
pipe.build()
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/nvidia/dali/pipeline.py", line 124, in build
self._prepare_graph()
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/nvidia/dali/pipeline.py", line 115, in _prepare_graph
self._pipe.AddOperator(op.spec, op.name)
RuntimeError: [/opt/dali/dali/pipeline/operators/reader/loader/lmdb.h:73] Assert on "mdb_env_open(mdb_env_, db_path_.c_str(), mdb_flags, 0664) == 0" failed: LMDB Error: No such file or directory
Answers:
username_1: Hi,
Please make sure that the following paths exist:
- /workspace/dataset/pytorch/train
- /workspace/dataset/pytorch/val
And that they contain files in the appropriate format.
username_1: Please make sure that you are using the Caffe 2 LMDB format.
If you set the path correctly and are using files in the appropriate format, please reopen this issue.
Status: Issue closed
|
jpd002/Play-Compatibility | 1075800458 | Title: [SLES-52521] Adiboo and the Energy Thieves
Question:
username_0: **Last Tested On**
[09/12/2021] - https://github.com/jpd002/Play-/commit/695cb21e06e85c8eaef19e0c6adb4c05fa9cb2c7
**Known Issues & Notes**
In-game, but only the "Start" button works during gameplay (all the other buttons work only in the menus)
**Related**
**Screenshots**
 |
fedora-python/portingdb | 589778099 | Title: pypidb
Question:
username_0: Hi,
It looks like this project is almost complete, which is great to see.
A heads up: I am using the fedora.json as a dataset for a list of Fedora python packages to use in testing of https://github.com/username_0/pypidb , which locates the SCM for PyPI packages. The methodology is to not rely on the links provided in the PyPI metadata, and it also doesn't include explicit mappings, which would be brittle.
There are some packages I couldn't/didn't map to PyPI names in https://github.com/username_0/pypidb/blob/ba7740e/tests/datasets.py#L46 , and I am currently only looking at packages prefixed `python[23]-*` and `py*`. There is a mapping of Fedora names to PyPI names directly above that, which may be incomplete.
The list of those PyPI packages in Fedora that I do not yet find an SCM for is at https://github.com/username_0/pypidb/blob/master/tests/test_fedora.py#L50 . I suspect some of those will be missing mappings from Fedora names to PyPI names, which the team here might be able to spot quickly.
The resulting mappings to SCM might be helpful for this project in various ways, e.g.
- doing additional analysis (an area I would love to go into is checking whether downstream projects are running CI, esp on Python 3.8),
- remediation downstream (the obvious is updating setup.py/setup.cfg/etc to emit Python 3.x metadata), and
- linking directly to the SCM for packages which haven't been ported yet.
- also checking whether .spec files are using current URLs (the logic in pypidb is very often identifying readthedocs websites, but that isn't exposed via the API yet; c.f. https://github.com/username_0/pypidb/issues/29).
Answers:
username_1: Yes, this project is winding down; with <150 packages remaining it's getting to the point where having a database is unnecessary. And I already update it less and less often.
I'm not sure what you are asking here. Do you want help mapping Fedora packages to SCM URLs, or Fedora packages to PyPI packages? Or do you want some help with one of the possible projects you mention?
username_0: Hi @username_1 ,
Help mapping Fedora packages to PyPI packages would be very helpful, if the team here are able to use their knowledge or tools to address that, and find it beneficial to have that mapping.
Besides that, I created the issue to see if there is interest here in using pypidb to improve the linkage from Fedora packages to the source repositories for any reason that might be on the roadmap for Fedora. While this project is nearing completion, it surely has spawned other side-projects for improving Fedora Python that would also require systematic tasks that this project has facilitated.
username_1: I currently don't really have a task where a mapping to PyPI names or SCM URLs would help.
Usually you need the SCM when making a pull request upstream, but finding the repo is trivial compared to understanding the code, reading contribution guidelines, etc.
username_0: It seems you haven't looked at the list I provided yet, and don't seem to be interested, so I'll close it.
Status: Issue closed
username_1: Linking PyPI names or SCM URLs doesn't sound useful *to me*, currently. But I'll keep pypidb in mind if I find a project where it's needed.
username_0: They are manually maintained because they are names which can't be automatically matched using the semi-automated matching algorithms in [`get_pypi_name`](https://github.com/username_0/pypidb/blob/ba7740e/tests/datasets.py#L96), or the three fully-automated matching algorithms in https://github.com/username_0/pypidb/blob/0baa777/tests/test_fedora.py#L124 .
What would be really great is if Fedora undertook a project to rename their packages to match the PyPI names so there is a consistent naming convention, or even only for the subset of cases where the Fedora name clashes with a different PyPI project with the same name, which is very confusing.
Also working with upstream projects to have them published on PyPI. I haven't been doing that with openSUSE projects which were on the equivalent openSUSE list, and most upstream projects are happy to publish onto PyPI but often would like someone to help fix/test the setup.py changes needed.
username_1: Thanks for explaining! I see more clearly what you're trying to say. Even though my answer doesn't change, I can at least explain a bit more.
Both projects you mention would be nice. I'll certainly encourage packagers to use predictable component names. But I can't commit to driving a project to rename/publish them all. Especially the renaming would be a much bigger project than it might seem.
Some name clashes come with backwards compatibility issues. Keep in mind that Fedora is older than PyPI.
I don't think the two namespaces can realistically be joined; there'll always be some exceptions.
Two examples to think about:
* Should the [Ansible](https://pypi.org/project/ansible/) RPM be named `python-ansible`? I don't think it should. Same for all other tools – text editors, command-line helpers, anything that's not primarily an importable library.
* The `python-ldap` Fedora package is [python-ldap](https://pypi.org/project/python-ldap/) on PyPI. Should it be released as `ldap` on PyPI? (It would be nice but it turns out `pip` can't handle such a name change when updating.)
While systems with ad-hoc rules and exceptions might be useful to get general overviews (like portingdb), it would be hard to build on top of them in the long term, and it'll be hard to maintain them. (portingdb sure is pretty bad, but at least it's going away after Python 2 is gone.)
So instead of guessing names, automated tools around Fedora can use `python3dist(NAME)` virtual provides. We're working to make *that* reliable and useful, because it can become a solid standard to base other things on.
username_0: So we do have shared objectives. All of the names on my list will almost certainly be wrong data emitted by `python3dist(NAME)`. I've already provided a list of very high probability problems that need further analysis - or, `python3dist(NAME)` will identify a few cases where my mapping data is missing entries.
username_1: For `collectd_systemd`, `evic`, `hwdata` and possibly others: While `pypi_name` *is* a badly chosen name in this case, it's not actually used to generate `python3dist(NAME)`. That's taken from actual metadata (e.g. `setup.py`).
---
But you're right that the python3dist name should match the PyPI package, or be blocked on PyPI. Otherwise we'll get mismatches between what the RPM and pip metadata means.
@username_2, do you have an opinion? IMO this should be at least a SHOULD in the new packaging guidelines.
username_2: Where *do X* can be something like:
- talk to upstream about hosting the project on PyPI or at least namesquat the name there
- talk to upstream to rename their package if the PyPI package is a different software
- disable the provides generator if this conflicts with another Fedora package
Also note that when the RPM is installed, `pip` considers the package installed as well regardless of whether it actually is the same as on PyPI.
username_1: Yes, and that's a problem: the names in `setup.py` will mean different packages for pip and for Fedora.
username_2: New paragraph:
The name is derived from the Python package name (e.g. the `name` argument of the `setup()` function in `setup.py`). This means it does not necessarily correspond to the provided module name (i.e. how the package is imported) -- for example, the [djangorestframework Python package](https://pypi.org/project/djangorestframework/) would provide `python3dist(djangorestframework)` even when imported via `import rest_framework`. For packages from [PyPI](https://pypi.org/), this is the same name as used there. Packages hosted somewhere else sometimes may have names clashing with different packages from PyPI or names missing from PyPI entirely. In that case, packagers SHOULD contact upstream in order to resolve this situation (by adding the package to PyPI and renaming it if necessary).
---
PS Should we reopen this or take it elsewhere?
PS2 I've noticed the guidelines also say "Using a fictional module named 'example', the subpackage containing the Python 3 version must provide python3-example." Which obviously is not followed at all (see pkg_resources, rest_framework, etc...).
username_2: https://pagure.io/packaging-committee/issue/965
username_0: Perfect; thanks.
fwiw, the openSUSE naming policy does enforce the PyPI name, with some exceptions which might help guide any Fedora policy around certain possible problems; but I find it doesn't answer the hairy naming issues like `.` vs `-`, and it makes exceptions that are problematic (jupyter kernels are often dependencies, if only in test suites of other packages). Worth a quick read.
https://en.opensuse.org/openSUSE:Packaging_Python#Naming_policy
username_1: Hello again! We had some private discussions on topics like this. Sorry for silence.
Today we proposed a draft of new Python Packaging Guidelines for Fedora, which will try to synchronize the "PyPI name" between PyPI and Fedora (`python3dist(...)`)
https://hackmd.io/XzJe-sHUQvWK7cSrEH_aKg?both#PyPI-parity
We plan that Python dependencies will be specified using that, rather than importable module names.
As for dots vs. dashes, those can be put into a canonical form, which PyPI uses, using [an algorithm described in PEP 503](https://www.python.org/dev/peps/pep-0503/#normalized-names). All automated tools should convert to that form; the normalization is small enough to sketch below.
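A minimal Python sketch of that normalization (this is the substitution the PEP itself gives):
```python
import re

def normalize(name: str) -> str:
    # PEP 503: runs of '-', '_' and '.' collapse to a single '-',
    # and the result is lowercased.
    return re.sub(r"[-_.]+", "-", name).lower()

# Illustrative inputs only:
assert normalize("Django_REST.framework") == "django-rest-framework"
assert normalize("python-ldap") == "python-ldap"
```
|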
ply-ct/ply | 866851609 | Title: Bump js-yaml from 3.14.0 to 4.1.0
Question:
username_0: This is a breaking change because result yaml URL values are no longer enclosed in quotes.
old:
```
repositoryTopicsQuery:
request:
'url: https://api.github.com/graphql'
```
new:
```
repositoryTopicsQuery:
request:
url: https://api.github.com/graphql
```<issue_closed>
Status: Issue closed |
rust-lang/rust-clippy | 897764840 | Title: Suggest to change `std::iter::repeat(s).take(n).collect::<String>()` to `s.repeat(n)`
Question:
username_0: ### What it does
It suggests that the user change `std::iter::repeat(string).take(n).collect::<String>()` to `string.repeat(n)`. It should also work without the `String` type argument to `collect`, and clippy has to find out from context whether it collects to a `String`.
Additionally, when the argument to `std::iter::repeat` is a constant char like `'a'`, it should suggest changing that to `"a".repeat(n)`. I say only constant chars because I'm not sure how well it would work for variable chars as well.
### Categories (optional)
- Kind: `clippy::perf`
```rust
use criterion::{criterion_group, criterion_main, Criterion};
fn repeat1(count: usize) -> String {
std::iter::repeat("a").take(count).collect()
}
fn repeat2(count: usize) -> String {
"a".repeat(count)
}
fn criterion_benchmark(c: &mut Criterion) {
c.bench_function("std::iter::repeat", |b| b.iter(|| repeat1(100)));
c.bench_function("str::repeat", |b| b.iter(|| repeat2(100)));
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```
```
std::iter::repeat time: [433.62 ns 434.36 ns 435.09 ns]
Found 10 outliers among 100 measurements (10.00%)
4 (4.00%) low mild
5 (5.00%) high mild
1 (1.00%) high severe
str::repeat time: [52.540 ns 52.658 ns 52.857 ns]
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) high mild
2 (2.00%) high severe
```
It's faster and generates a lot fewer instructions, in the `char` case too.
It's also easier to read.
### Drawbacks
None.
### Example
```rust
std::iter::repeat("hello").take(10).collect::<String>();
std::iter::repeat('x').take(10).collect::<String>();
```
Could be written as:
```rust
"hello".repeat(10);
"x".repeat(10);
```
Answers:
username_1: https://doc.rust-lang.org/std/primitive.str.html#method.repeat
MSRV 1.16
There will be code that uses `iter::repeat` because it was written before `str::repeat` was added.
username_0: Then I suppose Clippy should check if the version is >=1.16.0 if that's a thing.
username_1: Yes. My last comment was supposed to be a note for the implementer to add that config.
Status: Issue closed
|
GoogleCloudPlatform/fda-mystudies | 775253659 | Title: [SB] [Audit Logs] "studyId" shows an incorrect value for these events
Question:
username_0: **Events:**
1. STUDY_NEW_RESOURCE_CREATED
2. STUDY_RESOURCE_SAVED_OR_UPDATED
Sample snippet for STUDY_NEW_RESOURCE_CREATED event
```
{
"insertId": "45wyvdg1rvvzyn",
"jsonPayload": {
"occurred": 1608028878213,
"mobilePlatform": null,
"userAccessLevel": "STUDY BUILDER ADMIN",
"participantId": null,
"appVersion": "1.0",
"userIp": "172.16.17.32",
"studyVersion": null,
"userId": "133",
"correlationId": "0e503df2-132d-4bf8-bbda-4793da5f409b",
"source": "STUDY BUILDER",
"siteId": null,
"destinationApplicationVersion": "1.0",
"eventCode": "STUDY_NEW_RESOURCE_CREATED",
"platformVersion": "1.0",
"appId": "STUDY BUILDER",
"description": "New Resource created (resource ID - 68).",
"destination": "STUDY DATASTORE",
"studyId": "1069",
"resourceServer": null,
"sourceApplicationVersion": "1.0"
},
"resource": {
"type": "global",
"labels": {
"project_id": "mystudies-open-impl-track1-dev"
}
},
"timestamp": "2020-12-15T10:41:18.213Z",
"severity": "INFO",
"logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log",
"receiveTimestamp": "2020-12-15T10:41:18.364702590Z"
}
```
Answers:
username_1: @username_0 Please test this issue in dev environment and update
Status: Issue closed
username_0: Issue is resolved now in Dev |
mholt/PapaParse | 1104811144 | Title: How to export nested
Question:
username_0: I'm working with JSON data which has nested fields, but PapaParse returns [object] for nested objects. How can I extract some fields from a nested JSON?
Status: Issue closed
Answers:
username_1: This is not supported (and we do not plan to support it). You should transform your data before passing it to PapaParse
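The transform step itself is independent of Papa Parse; as a language-agnostic illustration, here is a flattening pass sketched in Python (the same recursion translates directly to JavaScript before calling `Papa.unparse`; the dotted column-name convention is my assumption, not library behavior):
```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into one level with dotted keys,
    e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

rows = [{"id": 1, "user": {"name": "Ada", "address": {"city": "London"}}}]
print([flatten(r) for r in rows])
# [{'id': 1, 'user.name': 'Ada', 'user.address.city': 'London'}]
```
|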
fission/fission | 331369104 | Title: Missing implementation for mqtrigger get command
Question:
username_0: As above, the fission CLI does not support getting mqtriggers by name.
https://github.com/fission/fission/blob/master/fission/mqtrigger.go#L111
Answers:
username_1: Related/Subset of #681
username_2: The issue seems not relevant now. Feel free to reopen if required.
Status: Issue closed
|
bacca87/great2 | 479974082 | Title: Statistiche
Question:
username_0: In the "year Statistic" chart, include the Vacation hours.
Answers:
username_1: We are evaluating how to do it. In the year statistic chart it would be barely readable: the vacation bar would be a few mm tall compared to the hours bar.
username_0: ok, but even if it's not exactly to scale I don't think that's a problem....
could a minimum size be set?
after all, the overtime bar is also tiny when few overtime hours are worked.
but at least this way I'd find it in the statistics, in the popup that appears when I hover over it with the mouse....
username_2: The "year statistics" bar chart (which will be renamed, because the current name means nothing) is meant to give an idea of the hours **worked**, broken down by type. Vacation or leave hours cannot be put in that chart because they would skew its values. Vacation hours cannot be summed with working hours.
To get a summary of hours by type, a separate chart is needed, or simply a total as text. In any case, as corra already replied, we are redesigning and expanding the statistics, and in upcoming versions they will be more complete.
username_1: New statistics page:
Hours tab:

Renamed the charts and added a chart of hours by type
Factories tab:

I still have to finish laying it out properly. I added a map of the most visited countries. No legend exists yet for that chart type; the darkest country is the most visited one.
I also added an indicator for the number of days on business trips
username_2: what monitor do you have? an ultra-wide screen?

anyway, the "hours", "factories", "Kilometers" etc. tabs should be hidden and their buttons moved to the menu above.
username_2: anyway, the official thread for the statistics is #19; this ticket should be closed because the problem has already been answered.
Status: Issue closed
|
syroegkin/swagger-markdown | 268095038 | Title: Reference only parameters are ignored.
Question:
username_0: This is valid swagger 2.0, but parameters specified this way are ignored.
```yaml
/{clientId}:
  get:
    description: |
      Get the client.
    parameters:
      - $ref: '#/definitions/clientId'
```
There is no error when the file is processed, but the Name, Located in, and Description columns in the generated parameters table are blank.
Andy
Answers:
username_1: Just started using this library and I'm seeing this as well
Status: Issue closed
username_2: Well not really.
I was trying this with https://editor.swagger.io/ and it always reports a mistake.
It will definitely work when parameters are in the parameters section, but not in definitions.
So this one will work just fine:
```yaml
swagger: '2.0'
info:
title: Some API
description: Some description
version: "1.0.0"
paths:
/{clientId}:
get:
description: |
Get the client.
parameters:
- $ref: '#/parameters/clientId'
responses:
200:
description: All good
parameters:
clientId:
name: clientId
in: path
required: true
type: integer
format: int32
``` |
cdli-gh/Framework | 259789312 | Title: search count can be off
Question:
username_0: _From @username_0 on March 3, 2017 17:28_
Search count seems to take into account only some instances of the search string in texts.
eg : "SZU+LAGAB" => 28,571 instances
counting them manually => 31,000+
_Copied from original issue: cdli-gh/cdli1#10_
Answers:
username_0: Bob: I checked and that is certainly due to the counter and highlighter not picking up multiple instances of a searched sign in a single line
username_0: No time to fix before redesign.
Status: Issue closed
|
Automattic/mongoose | 78422432 | Title: Mongoose: select method does not return the include field
Question:
username_0: I have one document like that
```javascript
var ConfigSchema = new Schema({
basicConfig: {
levelId: {
type: Number,
required: true
},
hostId: {
type: Number,
required: true
},
Name: {
type: String,
trim: true
},
Settings: {
Type1: {
// some types here...
},
Type2: {
// some types here...
}
}
},
enrolls: {
name: {type: String},
list: [String]
},
awards: {
enable: {type: Boolean},
Primary: {
amount: {type: Number},
type: {type: String}
}
}
});
```
Now I want to find configs whose hostId matches 60, selecting the basicConfig field.
```javascript
Config.findOne({ 'basicConfig.hostId': 60 })
.select('basicConfig').exec(function(err, configs) {
if (err) {
console.error(err);
return ;
}
console.log(configs);
});
```
However, all fields of this document will be returned. It seems that the select does NOT work? Why?
Also, the following code has been tested; it does not work.
```javascript
BonusConfig.findOne({ 'basicConfig.hostId': 60 }, 'basicConfig', function(err, configs) {
if (err) {
[Truncated]
```
But when excluding the basicConfig field with select, as in the following code, it works well.
```javascript
BonusConfig.findOne({ 'basicConfig.hostId': 60 })
.select('-basicConfig').exec(function(err, configs) {
if (err) {
console.error(err);
return ;
}
console.log(configs);
});
```
What's wrong with my code?
Mongoose version: 3.8.24
Mongodb version: 2.6.7
Answers:
username_1: Hmm so the first way should work. Can you enable mongoose debug mode and show me the query that's being sent to the server? `require('mongoose').set('debug', true);`
As a workaround,
```javascript
Config.findOne({ 'basicConfig.hostId': 60 }, { basicConfig: 1 }).exec(function(err, configs) {
if (err) {
console.error(err);
return ;
}
console.log(configs);
});
```
should work
username_0: hi @username_1
Here is the query log under mongoose debug mode.
`Mongoose: configs.findOne({ 'basicConfig.hostId': 60 }) { fields: { basicConfig: 1 } }`
username_0: **Update**
After further investigation.
The output result of `Config.findOne({ 'basicConfig.hostId': 60 }).select('basicConfig')`:
```javascript
{ _id: 555c4144c0bff1541d0e4059,
enrolls: {},
awards: { primary: { PrimarySettings: {}, primaryAck: {} } },
basicConfig:
{ levelId: 24,
hostId: 60,
poolName: 'LC' } }
```
The other fields are empty values, except `basicConfig`. However, I want the result to be:
```javascript
{ _id: 555c4144c0bff1541d0e4059,
basicConfig:
{ levelId: 24,
hostId: 60,
poolName: 'LC' } }
```
username_1: Ah-hah, that makes sense, this is just another manifestation of #2503. This behavior is by design, but admittedly it's not very well-designed. Mongoose is over-eager when it comes to creating sub-docs when loading from the database. Planning on changing that in v5.
username_1: Upon further investigation, this is not something mongoose can support without introducing some baffling behavior. See https://github.com/Automattic/mongoose/issues/5310#issuecomment-307666824 for more detail. https://github.com/Automattic/mongoose/issues/5369 will be the workaround we go with.
Status: Issue closed
|
microsoft/playwright | 1182625697 | Title: [Question] - is PlaywrightTestConfig -> use -> video -> 'on' working?
Question:
username_0: Hi
thanks for this awesome project.
i'm using since playwright 1.18.
the video recording feature is awesome.
i'm able to use it in a Electron application, with this property in the launcOptions
```
recordVideo: {
dir:"./electron-test-run-videos",
size:{
width:1280,
height:720
}
}
```
I'm able to use it in a browser application, with this property in the newContext options:
```
const browserContext = await webBrowser.newContext({
recordVideo: {
dir: 'browser-videos/',
size: { width: 640, height: 480 },
}
})
```
but it does NOT work if I remove these configurations and put them in the [playwrightTestConfig](https://playwright.dev/docs/test-configuration#record-video): the video is never recorded.
I tried both, with this syntax:
```
const config: PlaywrightTestConfig = {
testDir: './tests/e2e',
timeout: 60000,
retries: 0,
use: {
locale: 'en-GB',
ignoreHTTPSErrors: true,
video: {
mode:"on",
size:{
width:1280,
height:720
}
}
},
webServer: {
command: 'http-server dist -p 8080 -a localhost',
port: 8080,
timeout: 120 * 1000,
},
expect: {
toMatchSnapshot: { threshold: 1 },
timeout: 5 * 1000,
}
}
[Truncated]
test("tempative to record a video " , async () => {
const webBrowser = await chromium.launch({ headless: true })
const browserContext = await webBrowser.newContext({
recordVideo: {
dir: 'browser-videos/',
size: { width: 640, height: 480 },
}
})
const page = await browserContext.newPage()
await page.goto("https://www.google.com")
await browserContext.close()
await webBrowser.close()
})
```
Does someone know if this functionality is working?
thanks
Answers:
username_1: Playwright Test has built-in fixtures, and many of the configs will only work if you use the built-in fixtures: https://playwright.dev/docs/test-fixtures#built-in-fixtures
The built-in fixtures will be set up properly based on your configs, and you will not need to manually close the page/context/browser as the built-in fixtures will take care of that after your test code runs.
Your test above can simply be written as:
```
test("tempative to record a video " , async ({ page }) => {
await page.goto("https://www.google.com");
});
```
Does this work for you?
username_0: @username_1 it works like a charm, thanks :)
Can I make a pull request updating the documentation with something like this?

Since my application uses both web and Electron, maybe I can create a custom fixture for Electron. What do you think?
thanks for your help
username_1: It should be better once https://github.com/microsoft/playwright/issues/13104#issuecomment-1081019968 is resolved! As the video settings will even be applied if you do:
```
test('…', async ({ context }) => {
await context.newPage();
…
});
```
Let's hold off adding a note to the docs until the linked issue is resolved as it may quickly fix the situation.
Cheer! 🎉
Status: Issue closed
|
envoyproxy/envoy | 272617853 | Title: Script to automate envoyproxy/data-plane-api SHA updates to envoyproxy/envoy
Question:
username_0: Whenever `data-plane-api` is updated, we need to put in a commit to the main Envoy repo to update `envoy_api_deps()`. This is a bit of a futz, in particular when we move docs to `data-plane-api` (likely today/tomorrow). This could easily be automated away, by writing a script that automatically creates a PR in the main Envoy repo to do the SHA bump.
An example of how it could work would be to live in `data-plane-api/tools`, do a shallow Envoy clone in a tmp dir, create PR, push to GH and teardown. Or, it could be in `envoy/tools` and do some stateful stuff on the current git tree, creating and removing branches as needed.
I think this should only be a couple hundred lines of Python/Go/Fortran/Cobol.
Answers:
username_1: I'm pretty sure you can do this using only the GitHub API; I'll have a play around and see if it's possible
username_1: It is possible: https://gist.github.com/username_1/db79fc0695e4c1567a5933e1be306aa2
#2364 was created with this.
username_1: This is going to create a PR on *every* commit to `data-plane-api`, let me know if that's overly enthusiastic
username_2: @username_1 I will be honest I'm kind of ambivalent about doing this. I don't think lack of auto update has been a big pain point for us (not to mention we would need to find a place to run the bot code). @username_0 WDYT? I'm inclined to just close this. (@username_1 sorry that you potentially did work that won't be used, though we will have the gist if we want to go back to this at some point).
username_0: I like the use of GH APIs for this. My main concern is that sometimes the update needs manual intervention (e.g. if some non-frozen field names changes), then you need to do a local PR. I think if you could manually run the script to generate the PR and kick it off when you've made a change it would be a nice convenience.
username_1: It's currently running on one of my servers, I can turn it off if it's not helpful
username_2: @username_1 I think we have decided to turn the bot off for now. If possible do you think you could submit the code as a script that could be run by someone manually? cc @envoyproxy/maintainers
username_1: No probs, it's turned off now. I'll clean up the script and submit it later
username_1: I pushed it over here: https://github.com/envoyproxy/data-plane-api/pull/419
username_2: Thanks @username_1! Going to close this one out for now.
Status: Issue closed
|
tomchavakis/nuget-license | 654810783 | Title: Transitive packages
Question:
username_0: Hello, it would be very useful to print out transitive package licenses as well. Is this something planned already?
Answers:
username_0: I have the change locally to do this. I could create a pull request if you like.
username_1: Merged your PR. Thank you 👍
Status: Issue closed
username_0: Will you release the new version? :)
username_2: Version v2.2.1 was released with your changes!!
username_0: Great, thank you! |
nicoriff/ORMi | 350077604 | Title: Conversion fails for Nullable ValueType
Question:
username_0: It seems [Convert.ChangeType at L76 of TypeHelper](https://github.com/username_1/ORMi/blob/master/ORMi/Helpers/TypeHelper.cs#L76) chokes on a nullable value type when the value is `null`. Since a `null` value will result in the default value of the property anyway, we can do an early return out of `_SetPropertyValue` and not run into an `InvalidCastException`.
Property tested: `IPConnectionMetric` as `uint?` on `NetworkAdapterConfiguration`
StackTrace:
```
at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
at ORMi.Helpers.TypeHelper.LoadObject(ManagementObject mo, Type t)
at ORMi.WMIHelper.Query[T]()
at UserQuery.Main()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
```
Proposed fix @[L76](https://github.com/username_1/ORMi/blob/master/ORMi/Helpers/TypeHelper.cs#L76): Simplify to `p.SetValue(o, a, null);`
Is there a specific scenario that you're using `Convert.ChangeType` for? IMHO, if there's a mismatch of types between what WMI was expecting and what end-user wanted, end-user should be doing an explicit cast rather than the library.
It might be worth looking into a Converter attribute for more custom behaviour like going from a `string` IPAddress to `IPAddress`. I've done something similar in my powershell-converter repo for the same reason: [Serialize](https://gitlab.com/maverik/powershell-converter/blob/master/PsConverter/PsConvert.cs#L37) & [Deserialize](https://gitlab.com/maverik/powershell-converter/blob/master/PsConverter/PsConvert.cs#L49)
Thank you very much for your work. This makes working with WMI much more pleasant.
Answers:
username_1: Hi @username_0. Thanks for using ORMi in the first place.
I've already solved the issue. You can download or pull the new version of the repo.
The `SetValue` method aims to make development simpler. The DateTime format in WMI is a pain; SetValue solves that so you don't have to try to guess the WMI DateTime format.
It is indeed a good idea to make a Converter for other classes of types such as IpAddress. I will take that to the backlog.
Regards!.
username_0: Perfect! thank you for such a quick fix!
Re: Converter, just to be clear: I meant a generic `IValueConverter<TInput, TOutput>` sort of thing that we could feed via a `ConverterAttribute`, and not literally creating multiple converters (since that'd just be a fair bit of work and bloat the library).
Status: Issue closed
|
integrations/slack | 300563230 | Title: Subscribing randomly fails with "Could not find resource"
Question:
username_0: Experienced this morning:

Didn't change anything about the state of the installation of `integrations/test`, so this seems kinda weird.
Answers:
username_1: Probably related: I see messages being missed randomly as well. I created several test issues and commits, saw only some of them in the subscribed channel while others were not shown.
username_0: @username_1 thanks for confirming.
This seems to be an issue with the GitHub API timing out for a certain input. We're investigating
username_0: Hey folks! I've deployed a change that should stop message deliveries being missed.
The app will continue to be a little slow until the fix in the GitHub API goes out.
username_2: We started using Slack yesterday (2018-02-26) and this was already happening. After many, many tries, we got a repo to subscribe. All of our repos are private.
username_0: @username_2 thanks, that's very useful info! Are you still seeing this problem now?
username_2: yes
username_0: A fix has gone out for the GitHub API, and we haven't seen this error for the past 15min.
@username_2 if you're still having problems, could you open another issue on this repo? 🙇
Status: Issue closed
username_3: I've been seeing this again today, with my public repos
username_3: Thanks for the quick reply, signin fixed it!! Thanks! |
markfinal/BuildAMation | 117433778 | Title: Multi-config Xcode builds for application bundles result in exception
Question:
username_0: ERROR: (Bam.Core.Exception) Error during non-threaded build
ERROR: (Bam.Core.Exception) Product type has already been set to ApplicationBundle. Cannot change it
ERROR:
at XcodeBuilder.Target.SetProductType (EProductType type) [0x0003b] in /Users/mark/dev/BuildAMation/packages/XcodeBuilder/bam/Scripts/Target.cs:135
at XcodeBuilder.Target.EnsureOutputFileReferenceExists (Bam.Core.TokenizedString path, EFileType type, EProductType productType) [0x00003] in /Users/mark/dev/BuildAMation/packages/XcodeBuilder/bam/Scripts/Target.cs:161
at C.XcodeLinker.C.ILinkingPolicy.Link (C.ConsoleApplication sender, Bam.Core.ExecutionContext context, Bam.Core.TokenizedString executablePath, System.Collections.ObjectModel.ReadOnlyCollection`1 objectFiles, System.Collections.ObjectModel.ReadOnlyCollection`1 headers, System.Collections.ObjectModel.ReadOnlyCollection`1 libraries, System.Collections.ObjectModel.ReadOnlyCollection`1 frameworks) [0x00065] in /Users/mark/dev/BuildAMation/packages/C/bam/XcodeBuilder/XcodeLinker.cs:52
at C.ConsoleApplication.ExecuteInternal (Bam.Core.ExecutionContext context) [0x0005e] in /Users/mark/dev/BuildAMation/packages/C/bam/Scripts/ConsoleApplication.cs:235
at Bam.Core.Module.Bam.Core.IModuleExecution.Execute (Bam.Core.ExecutionContext context) <0x37c8950 + 0x00143> in <filename unknown>:0
at Bam.Core.Executor+<Run>c__AnonStorey2.<>m__0 () <0x37c80f0 + 0x00266> in <filename unknown>:0
Answers:
username_0: This was actually a fix for issue #65.
username_0: This exception only happens when there are modules that are configuration specific, e.g. only building an installer in non-debug builds. This causes the modules for each configuration to be in a skewed order, which may result in collation policies (which change the product type) being executed before the link step for the same executable (in the eyes of Xcode) in another configuration. Thus the product type goes from NA->Executable->Application Bundle (first config)->Executable (second config), and that's the issue.
Status: Issue closed
|
ministryofjustice/cloud-platform | 837914132 | Title: Update guide to clarify position on supported TLS
Question:
username_0: Currently the user guide says - https://user-guide.cloud-platform.service.justice.gov.uk/documentation/concepts/migrate-from-td.html#ssl:
`TLS version: By default, Kubernetes supports a minimum TLS version of 1.2 If you have users with browsers that do not support TLS 1.2, they will not be able to access your service.`
This isn't strictly true.
We should update the guide accordingly.
Answers:
username_0: https://mojdt.slack.com/archives/C57UPMZLY/p1603205916133700
username_1: https://github.com/ministryofjustice/cloud-platform-user-guide/pull/701
Status: Issue closed
|
godotengine/godot-cpp | 936070571 | Title: Cross-compiling on Linux for Windows throws an error when it arrives at StyleBox
Question:
username_0: Message:
```
x86_64-w64-mingw32-ar: src/gen/StyleBoxLine.o: No such file or directory
scons: *** [bin/libgodot-cpp.windows.debug.64.a] Error 1
scons: building terminated because of errors.
```
Command:
`scons platform=windows -j4 generate_bindings=yes`
PS:
It was compiling everything fine; it just threw this error when StyleBoxLine wasn't found (using the master branch and building for 3.3)
Compiling for linux works fine!
Answers:
username_0: I solved the problem: I was compiling both the debug and release targets for Windows at the same time, so doing them one by one works fine!
Status: Issue closed
|
spotify/scio | 315642861 | Title: Give better help, documentation and error messages for pipeline options
Question:
username_0: Currently if we forget parameters when running a pipeline Scio just throws an exception.
For example:
```
Exception in thread "main" java.lang.IllegalArgumentException: Missing value for property 'textOutput'
at com.spotify.scio.Args.required(Args.scala:103)
at com.spotify.scio.Args.apply(Args.scala:86)
at com.spotify.data.goldenpath.TopTracksJob$.main(TopTracksJob.scala:43)
at com.spotify.data.goldenpath.TopTracksJob.main(TopTracksJob.scala)
```
`--help` is also not supported, and there's no way to interactively get documentation about the pipeline supported options (I'm thinking about both required and optional parameters).
It would be nice if a scio pipeline provided better errors and documentation about its options.
Answers:
username_1: Not sure what we can do with `--help` without changing the `Args` API, since we don't know what flags will be queried until runtime. Any ideas?
username_0: scopt has an API that is not that different. We may do something similar?
`Args` could keep an internal state of all supported options and use it to generate a help text?
username_0: Hmm, so we do have to change the API, since as you said help is impossible to implement in the current API. Most command-line parsing libraries in Scala are boilerplate heavy, which I'm sure would raise a lot of complaints from our users (see: [scopt](https://github.com/scopt/scopt), [ciris](https://cir.is/docs/supported-sources), [scallop](https://github.com/scallop/scallop)).
There's [case-app](https://github.com/alexarchambault/case-app), which is fairly close to Beam's PipelineOptions and even provides the [`CaseApp` trait](https://github.com/alexarchambault/case-app#whole-application-with-argument-parsing), which seems to be fairly straightforward to use.
Do you think deprecating `Args` to improve help / error messages is worth it @username_1 ?
username_0: Note that one added benefit would be failing faster in pipelines like [this example](https://github.com/spotify/scio/blob/master/scio-examples/src/main/scala/com/spotify/scio/examples/extra/BeamExample.scala#L98) which read their arguments at the very end.
username_2: It would be great if this was supported out of the box, here's what we're doing to circumvent the issue at the moment:
https://github.com/snowplow/snowplow/blob/0a1005181b4f86b44f7c9b24a7a741fdb19fe26f/3-enrich/beam-enrich/src/main/scala/com.snowplowanalytics.snowplow.enrich.beam/config.scala#L45-L78
username_1: @username_0 TBH I don't think it'll benefit our internal use cases much since we don't often run scio jobs ad-hoc via the command line but instead wrap them in luigi/flo. So once wired up, one almost never worries about command line interfaces.
OTOH, is there a way to make `PipelineOptions` interface easier to use, so that we can tap into existing tooling? Some macro to convert case classes to `PipelineOptions`?
But TBH not sure if it's worth the extra work.
username_0: `PipelineOptions` is really not Scala-ish, and working on a macro sounds like a hard-to-implement and brittle solution...
Status: Issue closed
|
tensorflow/datasets | 527192382 | Title: NonMachtingChecksumError for Kitti dataset
Question:
username_0: The following code fails with `NonMatchingChecksumError`
Code:
```
import tensorflow_datasets as tfds
import tensorflow as tf
kitti_builder = tfds.builder('kitti', data_dir='/data')
# Download the dataset
kitti_builder.download_and_prepare()
```
Output error:
```
NonMatchingChecksumError: Artifact https://s3.eu-central-1.amazonaws.com/avg-kitti/devkit_object.zip, downloaded to /data/downloads/s3.eu-central-1_avg-kitti_devkit_objectLwMAO1uIjU-eBJEp605N-Wtd3-25ncIu5j-rD605uW0.zip.tmp.523afd5a748243559a2ccd57ba87102b/devkit_object.zip, has wrong checksum
```
Answers:
username_1: Is the issue still here? This might indicate that the file has been silently updated or is momentarily unavailable. If the issue persists, it means that we need to find a non-corrupted mirror of this file and update the url.
If it's the dataset which has been updated, the version should be increased with the new checksum (and we should make sure the dataset still generates correctly).
username_0: The workaround I found is to run the script `download_and_prepare` with `register_checksums=True` as follows:
```
python -m tensorflow_datasets.scripts.download_and_prepare --register_checksums=True --datasets=kitti
```
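If I recall the TFDS API correctly, the same registration can also be requested from Python; a sketch against the builder API used earlier in this issue (treat the exact config surface as an assumption):
```python
import tensorflow_datasets as tfds

kitti_builder = tfds.builder('kitti', data_dir='/data')
# Recompute and record checksums instead of failing on a mismatch.
download_config = tfds.download.DownloadConfig(register_checksums=True)
kitti_builder.download_and_prepare(download_config=download_config)
```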
username_2: This is a good workaround - but running with 'register_checksums=True' basically means that you 'ignore' the built-in checksum and simply download the file.
@username_0 - could you do us a favor and
a) verify that data is still correct and not corrupted (same number of elements etc)
b) send the pull request with the new checksum + version change?
Thanks!
username_3: I get the same error and used a similar workaround.
The resulting checksum is:
`https://s3.eu-central-1.amazonaws.com/avg-kitti/devkit_object.zip 63794 ce0b76b69c0c5f89690a0d65b7302bbbdb962a0c7e8aba6efc7050d1b04b4cf1`
username_4: Fixed by PR https://github.com/tensorflow/datasets/pull/1507
Status: Issue closed
|
lttkgp/C-3PO | 280709448 | Title: Minor change in KWOC.md
Question:
username_0: The first heading says "Why, hello there!". I am not sure if you wrote "Why" over there or if it is a typo.
If it is a typo please let me correct it
Answers:
username_0: I submitted a PR for this issue. Please notify me if I am wrong.
Status: Issue closed
|
AcademySoftwareFoundation/OpenColorIO | 1181896974 | Title: errno checks in 'from_chars' function are wrong
Question:
username_0: In the 'from_chars' function, it's first checked if errno is 0 and immediately returns with std::errc::result_out_of_range aka ERANGE.
https://github.com/AcademySoftwareFoundation/OpenColorIO/blob/4fa94918c2cf572dcaf61ca07016f3b5c231c14c/src/utils/NumberUtils.h#L71-L74
According to `strtod(3p)`, if no conversion could be performed, errno may be set to EINVAL.
Unlike glibc which doesn't do this, musl does. This leads to an incorrect return value (ERANGE != EINVAL).
I discovered this through failing tests on musl. |
ChalkyBrush/roshpit-bug-tracker | 304063028 | Title: Serengard Sun Crystall
Question:
username_0: The Serengard Sun Crystall's all-elemental damage amplification doesn't work (the DoT doesn't deal any damage and is also not shown in the hero description window). Tested on a training dummy twice in different games.


Answers:
username_1: check valdun helm too
username_0: Valdun helm works properly

username_2: fixed
Status: Issue closed
|
yukimochi/CM3D2_Plugin_Merge | 211885124 | Title: Merging the "VR&LIVE EVENT2017 アイテムセット" (item set) causes a forced termination.
Question:
username_0: ## Symptom
Attempting to merge the "VR&LIVE EVENT2017 アイテムセット" (item set) causes a forced termination.
## Cause
The update files use an update.lst written to be shared with the Chu-B Lip update, and handling such files is not supported at this time.
## Outlook for a fix
The fix is expected to take some time.
Note: once this fix is in place, merging of "追加性格" (additional personalities), "ビジュアルパック" (visual pack) and "プラス" (plus) will also be supported.
Status: Issue closed
Answers:
username_0: Fixed at [This Commit](https://github.com/username_0/CM3D2_Plugin_Merge/commit/df60b38254402531474645a8652dad529f1970de). |
1drop/shopware-sentry | 277501785 | Title: Error 500 after installation, Backend not useable
Question:
username_0: System:
====
* Shopware 5.2.21
* Sentry Plugin 1.1.1
* PHP 7.0.11
* `uname -a`
Output: `FreeBSD qidone 10.2-RELEASE-p14 hostBSD 10.2-RELEASE-p14 #3: Sat Mar 19 18:10:30 CET 2016 <EMAIL>:/usr/obj/usr/src/sys/DMRKERNEL amd64`
Plugin settings:
====

Logfiles
====
apache error log
---
```
[Tue Nov 28 19:58:50.169539 2017] [fcgid:warn] [pid 41348] [client 93.228.xx.xxx:64334] mod_fcgid: stderr: PHP Fatal error: Uncaught Symfony\\Component\\DependencyInjection\\Exception\\InvalidArgumentException: The parameter "shopware.sentry" must be defined. in /shopware/var/cache/production_201703211201/proxies/ShopwareProductione928cc560381633e6f727b3a89e61338d3e91c3aProjectContainer.php:2160, referer: https://www.flooridoo.com/shop/backend/
```
Error during deaktivation from commandline:

Answers:
username_1: I think the entry for sentry in the config.php is required in the current version.
username_2: The CompilerPass sets a default configuration if it is missing. I guess there is something wrong while building the container....
Have you tried clearing the complete cache folder in /var/cache?
username_0: you are right, after adding the following lines to config.php the error 500 is gone.

So it seems to me that the defaults are faulty.
__PS: If you look into the description in the shopware store, there is no hint about that setting ...__
username_3: username_2 is right, the default configuration (if not explicitly set in config.php => which is NOT required) is added at a very early stage of the plugin bootstrapping which did not cause any problems until now.
username_2: Could you provide also a stack Trace?
username_3: Hmm, I just tried it with a fresh installed demo shop:

And can't reproduce it.
username_0: for sure, added it into the issue description
maybe the problem is, that i did installation in the following steps:
* download via shopware store (directly in shopware)
* install the plugin (automated step)
* configure the plugin (add dsn, etc.)
* activate the plugin
after activation the installation stopped working
username_3: Oh, I never got the installation of a plugin inside of Shopware working. Always get some weird response codes.
I need to figure out a way to test this.
username_0: Ahh, your installation needs to be registered @shopware and reachable from the shopware servers ... strange stuff going on with it
username_3: Actually it is registered
username_3: I installed a new shop on which the registration worked and installed the plugin via the integrated shopware store. Can't reproduce the issue.
username_0: Mhmm so let’s close it for now, I’m not so much into bsd maybe a platform specific problem 🤔
Atleast it works with the config.php settings, maybe it makes sense to point to the readme on github from the shopware store. 😉
Thanks a lot for your effort.
Status: Issue closed
|
AtlasOfLivingAustralia/spatial-service | 184975675 | Title: Requested ID has an extra cl prefix when editing an existing field
Question:
username_0: See screenshot below for example of the requested ID having an extra cl prefix when editing an existing field:
http://spatial-test.ala.org.au/spatial-service/manageLayers/field/cl10841
<img width="1082" alt="screen shot 2016-10-25 at 9 46 29 am" src="https://cloud.githubusercontent.com/assets/82365/19666879/e2920920-9a98-11e6-85cb-58f0e6dc7742.png">
Status: Issue closed
Answers:
username_1: Editing fields is now working. |
GoogleCloudPlatform/professional-services | 1048206226 | Title: tools/bq-visualizer failed to build
Question:
username_0: Command :
` gcloud builds submit --config cloudbuild.yaml --substitutions="_BQVISUALISER_BUCKET=<test-bucket-name>`
```
Step #0: g++ '-DNODE_GYP_MODULE_NAME=binding' '-DUSING_UV_SHARED=1' '-DUSING_V8_SHARED=1' '-DV8_DEPRECATION_WARNINGS=1' '-DV8_DEPRECATION_WARNINGS' '-DV8_IMMINENT_DEPRECATION_WARNINGS' '-D_LARGEFILE_SOURCE' '-D_FILE_OFFSET_BITS=64' '-D__STDC_FORMAT_MACROS' '-DOPENSSL_NO_PINSHARED' '-DOPENSSL_THREADS' '-DBUILDING_NODE_EXTENSION' -I/builder/home/.node-gyp/14.10.0/include/node -I/builder/home/.node-gyp/14.10.0/src -I/builder/home/.node-gyp/14.10.0/deps/openssl/config -I/builder/home/.node-gyp/14.10.0/deps/openssl/openssl/include -I/builder/home/.node-gyp/14.10.0/deps/uv/include -I/builder/home/.node-gyp/14.10.0/deps/zlib -I/builder/home/.node-gyp/14.10.0/deps/v8/include -I../../nan -I../src/libsass/include -fPIC -pthread -Wall -Wextra -Wno-unused-parameter -m64 -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++1y -std=c++0x -MMD -MF ./Release/.deps/Release/obj.target/binding/src/create_string.o.d.raw -c -o Release/obj.target/binding/src/create_string.o ../src/create_string.cpp
Step #0: ../src/create_string.cpp: In function 'char* create_string(Nan::MaybeLocal<v8::Value>)':
Step #0: ../src/create_string.cpp:17:37: error: no matching function for call to 'v8::String::Utf8Value::Utf8Value(v8::Local<v8::Value>&)'
Step #0: v8::String::Utf8Value string(value);
Step #0: ^
Step #0: In file included from /builder/home/.node-gyp/14.10.0/include/node/node.h:67:0,
Step #0: from ../../nan/nan.h:53,
Step #0: from ../src/create_string.cpp:1:
Step #0: /builder/home/.node-gyp/14.10.0/include/node/v8.h:3287:5: note: candidate: v8::String::Utf8Value::Utf8Value(v8::Isolate*, v8::Local<v8::Value>)
Step #0: Utf8Value(Isolate* isolate, Local<v8::Value> obj);
Step #0: ^~~~~~~~~
Step #0: /builder/home/.node-gyp/14.10.0/include/node/v8.h:3287:5: note: candidate expects 2 arguments, 1 provided
Step #0: binding.target.mk:131: recipe for target 'Release/obj.target/binding/src/create_string.o' failed
Step #0: make: Leaving directory '/workspace/node_modules/node-sass/build'
Step #0: make: *** [Release/obj.target/binding/src/create_string.o] Error 1
Step #0: gyp ERR! build error
Step #0: gyp ERR! stack Error: `make` failed with exit code: 2
Step #0: gyp ERR! stack at ChildProcess.onExit (/workspace/node_modules/node-gyp/lib/build.js:262:23)
Step #0: gyp ERR! stack at ChildProcess.emit (events.js:314:20)
Step #0: gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
Step #0: gyp ERR! System Linux 5.4.0-1052-gcp
Step #0: gyp ERR! command "/usr/local/bin/node" "/workspace/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
Step #0: gyp ERR! cwd /workspace/node_modules/node-sass
Step #0: gyp ERR! node -v v14.10.0
Step #0: gyp ERR! node-gyp -v v3.8.0
Step #0: gyp ERR! not ok
Step #0: Build failed with error code: 1
Step #0: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
Step #0: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
Step #0: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/node-sass):
Step #0: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] postinstall: `node scripts/build.js`
Step #0: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 1
Step #0:
Step #0: added 1263 packages from 1258 contributors and audited 1390 packages in 202.003s
Step #0: found 259 vulnerabilities (1 low, 74 moderate, 146 high, 38 critical)
Step #0: run `npm audit fix` to fix them, or `npm audit` for details
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/npm
Step #1:
Step #1: > [email protected] build /workspace
Step #1: > ng build "--prod"
Step #1:
Step #1: Browserslist: caniuse-lite is outdated. Please run next command `npm update`
Step #1:
Step #1: Date: 2021-11-09T05:16:06.383Z
Step #1: Hash: 1a29f1733d1aa8d43184
Step #1: Time: 230037ms
Step #1: chunk {0} runtime.ec2944dd8b20ec099bf3.js (runtime) 1.41 kB [entry] [rendered]
Step #1: chunk {1} main.4676ba3fae7420b1a81f.js (main) 2.02 MB [initial] [rendered]
Step #1: chunk {2} polyfills.59d692cfbbd23ff7e836.js (polyfills) 61.7 kB [initial] [rendered]
Step #1: chunk {3} styles.baa94c1066afa16a868e.css (styles) 61.6 kB [initial] [rendered]
Finished Step #1
PUSH
Artifacts will be uploaded to gs://techm-test-nikunj using gsutil cp
tools/bq-visualizer/dist/*: Uploading path....
CommandException: No URLs matched: tools/bq-visualizer/dist/*
CommandException: 1 file/object could not be transferred.
ERROR
```
Answers:
username_1: @username_0 There is a PR against that repo to upgrade it to Angular v12: https://github.com/GoogleCloudPlatform/professional-services/pull/715
Let's wait for it to be merged and we will see what's wrong with the tool
@username_2 Please consider making sure the tool builds.
username_2: Fix is on on branch master |
MicrosoftDocs/dynamics-365-customer-engagement | 496852287 | Title: The provisioning step might need to be updated
Question:
username_0: I tried the provisioning step after September update, it may need to be updated here is a blog post I wrote https://www.linkedin.com/pulse/provisioning-portal-fitsum-mekuria-
Answers:
username_1: Thanks @username_0. The provisioning steps are updated: https://docs.microsoft.com/en-us/powerapps/maker/portals/provision-portal-add-on.
Closing this issue.
Status: Issue closed
|
php/php-langspec | 325312355 | Title: DateTime::createFromFormat and date() should support same format letters
Question:
username_0: Differences in format letters (specifically "u" and "v") make the DateTime format constants unusable for createFromFormat.
Example:
`DATE_RFC3339_EXTENDED` has the value `"Y-m-d\TH:i:s.vP"`.
This predefined format will work for `date()`, but not for `DateTime::createFromFormat`, as milliseconds require the format symbol "u" instead of "v".<issue_closed>
Status: Issue closed |
nodejs/diagnostics | 466007361 | Title: diagnostics wg meeting Jul 10 2019
Question:
username_0: The issue didn't get automatically generated. @mhdawson
If this meeting is still happening tomorrow, I'd like to show off a short demo of my diagnostic reporting tool
Answers:
username_0: n/m, I have no idea why this is on my calendar scheduled for the 10th. Next meeting is 17th.
Status: Issue closed
|
BioSchemas/specifications | 591261928 | Title: Add legal_awareness dataset for leisure sea fishing in France to live deploy list
Question:
username_0: I have used bioschemas.org/Taxon to qualify some legal_awareness datasets for leisure sea fishing in France: fishes, shells ... This has been in test for about a year now for all species that are targeted by some legislation.
https://search.google.com/structured-data/testing-tool#url=https%3A%2F%2Fwww.opalesurfcasting.net%2Fconnaissances%2Ffaune-et-flore%2Fle-saumon-de-l-atlantique-salmo-salar.html
https://search.google.com/test/rich-results?id=EnYla7r4hYxRyh_uyf6XgA
_Originally posted by @username_1 in https://github.com/BioSchemas/specifications/issues/321#issuecomment-570834625_
Answers:
username_0: @username_1 Thanks for adding this to our list of live deploys. I'm sorry I hadn't managed to do this for you.
The markup looks to be mostly correct. The one thing is that the [Taxon Profile](https://bioschemas.org/profiles/Taxon/) requires a link to a taxon rank.
If you are having difficulties knowing what to do with this, @username_2 would be the person to ask.
username_1: Thanks @username_0,
I have added taxonRank as text for the moment, using 'species'. I will look at use TDWG TaxonRank ontology in next days.
username_2: Just for information, some TDWG ontologies are deprecated today, although new work is on track to come up with an updated version thereof.
However the idea of taxonRank is to give users flexibility wrt. to what they use to denote the ranks: URIs, or text.
In an ideal world where semantic web practices would be adopted everywhere, we would only need URIs. And we have many of them: if you lookup the [species rank](http://taxref.mnhn.fr/lod/taxrank/Species) in TAXREF-LD, you'll see I have aligned it with TDWG, NCBI, Geospecies, TaxonConcept and the Taxonomic rank vocabulary.
But this is no ideal world, and many people still use strings for ranks. Typically because lots of data come from Darwin Core archives that are basically XML files where the rank is a string.
So this is the reason why I proposed to have the two options, text or URI.
username_1: Thanks @username_2 |
home-assistant/core | 730638727 | Title: MQTT_json device_tracker not working in HA 117
Question:
username_0: <!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/core/releases
- Do not report issues for integrations if you are using custom components or integrations.
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks.
DO NOT DELETE ANY TEXT from this template! Otherwise, your issue may be closed without comment.
-->
## The problem
<!--
Describe the issue you are experiencing here to communicate to the
maintainers. Tell us what you were trying to do and what happened.
-->
Since [this](https://github.com/home-assistant/core/issues/42203) was closed but 2 of the reported templates are still not working, I am reposting them separately.
## Environment
<!--
Provide details about the versions you are using, which helps us to reproduce
and find the issue quicker. Version information is found in the
Home Assistant frontend: Configuration -> Info.
-->
- Home Assistant Core release with the issue: 117
- Last working Home Assistant Core release (if known): 116.4
- Operating environment (OS/Container/Supervised/Core): OS
- Integration causing this issue: mqtt_json
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/mqtt_json/
## Problem-relevant `configuration.yaml`
<!--
An example configuration that caused the problem for you. Fill this out even
if it seems unimportant to you. Please be sure to remove personal information
like passwords, private URLs and other credentials.
-->
```yaml
action:
service: mqtt.publish
data:
topic: location/marijn
retain: true
payload: >-
{
"latitude": "{{state_attr('zone.home','latitude')}}",
"longitude": "{{state_attr('zone.home','longitude')}}",
"battery_level": {{states('sensor.calltheboss_battery_level')|int}},
"gps_accuracy": 0
}
action:
service: mqtt.publish
data:
topic: location/marijn
retain: true
payload: >-
{
"latitude": "{{trigger.to_state.attributes.latitude}}",
"longitude": "{{trigger.to_state.attributes.longitude}}",
"battery_level": {{states('sensor.calltheboss_battery_level')|int}},
"gps_accuracy": {{trigger.to_state.attributes.gps_accuracy|int}}
}
[Truncated]
.....
TypeError: payload must be a string, bytearray, int, float or None.
```
## Additional information
tried with this:
```
action:
service: mqtt.publish
data:
topic: location/marijn
retain: true
payload:
latitude: "{{state_attr('zone.home','latitude')}}",
longitude: "{{state_attr('zone.home','longitude')}}",
battery_level: {{states('sensor.calltheboss_battery_level')|int}},
gps_accuracy: 0
```
but that seems incorrect too..
Answers:
username_1: What beta are you using?
username_1: You've also left out the stack trace and replaced it with `.....`… |
pbrod/numdifftools | 109580874 | Title: Example of using Jacobian with args and kwds?
Question:
username_0: Can you provide an example of using Jacobian with args?
I have a function

```python
def f(x, args):
    pass
```

I tried

```python
x0
jac = Jacobian(f)
jac(x0, args)
```
This produces the error:

```
TypeError Traceback (most recent call last)
<ipython-input-66-86e601a44004> in <module>()
----> 1 jac(q0,params)
/usr/local/lib/python2.7/dist-packages/numdifftools/core.pyc in __call__(self, x, *args, **kwds)
1112
1113 def __call__(self, x, *args, **kwds):
-> 1114 return super(Gradient, self).__call__(np.atleast_1d(x), *args, **kwds)
1115
1116
/usr/local/lib/python2.7/dist-packages/numdifftools/core.pyc in __call__(self, x, *args, **kwds)
747 def __call__(self, x, *args, **kwds):
748 xi = np.asarray(x)
--> 749 results = self._derivative(xi, args, kwds)
750 derivative, info = self._extrapolate(*results)
751 if self.full_output:
/usr/local/lib/python2.7/dist-packages/numdifftools/core.pyc in _derivative(self, xi, args, kwds)
945 steps = self._get_steps(xi)
946 fxi = self._eval_first(f, xi, *args, **kwds)
--> 947 results = [diff(f, fxi, xi, h, *args, **kwds) for h in steps]
948 step_ratio = self._compute_step_ratio(steps)
949
/usr/local/lib/python2.7/dist-packages/numdifftools/core.pyc in _central(f, fx, x, h, *args, **kwds)
1067 increments = np.identity(n) * h
1068 partials = [(f(x + hi, *args, **kwds) - f(x - hi, *args, **kwds)) / 2.0
-> 1069 for hi in increments]
1070 return np.array(partials).T
1071
TypeError: unsupported operand type(s) for -: 'list' and 'list'
```
Answers:
username_1: In [95]: def f(x, *args): return np.exp(-x**2-args[0]**2)
In [96]: df = nd.Derivative(f)
In [97]: x = np.linspace(-2,2)
In [98]: X,Y = np.meshgrid(x,x)
In [99]: z = df(x,y)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-99-026854c54f18> in <module>()
----> 1 z = df(x,y)
NameError: name 'y' is not defined
In [100]: z = df(X,Y)
In [101]: plt.contourf(x,x,z)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-101-51c00ce29d86> in <module>()
----> 1 plt.contourf(x,x,z)
NameError: name 'plt' is not defined
In [102]: import pylab as plt
In [103]: plt.contourf(x,x,z)
Out[103]: <matplotlib.contour.QuadContourSet instance at 0x00000000022180C8>

In [104]:
username_1: array([ 0., -16., -32.])
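For anyone landing here, a self-contained sketch of the working pattern (the function and values are made up; the key points are that extra positional arguments after `x` are forwarded to `f`, as the call signature in the traceback shows, and that `f` must return an ndarray rather than a list, which is what triggered the `'list' and 'list'` TypeError above):
```python
import numpy as np
import numdifftools as nd

def f(x, a):
    # Return an ndarray, not a list, so central differences can subtract.
    return np.array([np.exp(-x[0]**2 - a**2), x[0] * x[1]])

jac = nd.Jacobian(f)
x0 = np.array([1.0, 2.0])
print(jac(x0, 0.5))  # extra args after x are passed through to f
```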
username_1: closing this issue
Status: Issue closed
|
sxs-collaboration/spectre | 293980033 | Title: Precompiled header fails to recompile when necessary
Question:
username_0: After modifying Abort.hpp running make gives
```
[ 1%] Built target pch
[ 6%] Built target SPHEREPACK
[ 6%] Building CXX object src/ApparentHorizons/CMakeFiles/ApparentHorizons.dir/SpherepackIterator.cpp.o
fatal error: file '/home/username_0/spectre/src/Parallel/Abort.hpp' has been
modified since the precompiled header
'/home/username_0/spectre-work/SpectrePch.hpp.gch' was built
note: please rebuild precompiled header
'/home/username_0/spectre-work/SpectrePch.hpp.gch'
1 error generated.
```
Status: Issue closed
Answers:
username_0: This still happens when system headers are modified.
username_2: Yes, I'm not planning on changing that, but if anyone has a patch that would be great :)
Status: Issue closed
|
att/rcloud | 70178272 | Title: Record IP address of 'nobody' users
Question:
username_0: Please record/log the IP address of a user who accesses a notebook as 'nobody' in order to capture usage statistics by unique user of applications created in RCloud.
@username_1
Answers:
username_1: I'm not sure I understand this request. The IP landing at RCloud is typically not useful since it will be the IP of the proxy - logging of the actual user IP address should be done by the proxy since it is the only place that actually knows the external IP address of the user. Alternatively, we could define a way by which the proxy can relay that information to RCloud (e.g. a HTTP header), but that require cooperation of the proxy and RCloud.
So I'll interpret this as "record a defined header in HTTP requests to ulog".
username_2: @username_1 We are forwarding the user's real IP using the proxy_set_header directive in the proxy.
It's just not available in the ulogs.
<code> proxy_set_header X-Real-IP $remote_addr; </code>
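For reference, a sketch of the proxy side (assuming an nginx reverse proxy; the upstream name is hypothetical):
```nginx
location / {
    proxy_pass http://rcloud_backend;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```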
username_1: Added to the WS proxy:
https://github.com/username_1/Rserve/commit/2db1c39f1e9a52d2b92a73cd4a3b982f410a53fa
Status: Issue closed
|
cgoldsby/TvOSMoreButton | 273233762 | Title: Error Type 'NSAttributedStringKey' (aka 'NSString') has no member 'foregroundColor'
Question:
username_0: We have found these errors:
1) Type 'NSAttributedStringKey' (aka 'NSString') has no member 'foregroundColor'
2) Type 'NSAttributedStringKey' (aka 'NSString') has no member 'foregroundColor'
in "TvOSMoreButton.swift".
Can you help me to resolve them?
Answers:
username_1: I have the same issue
username_2: Hi @username_0 and @username_1, here is some information that might help:
Release 1.1.0+ now requires Xcode 9 but there are a few things you _may_ have to do depending on whether your project is running Swift 3.2 or Swift 4.
The [release page](https://github.com/username_2/TvOSMoreButton/releases/tag/1.1.0) and [changelog](https://github.com/username_2/TvOSMoreButton/blob/master/CHANGELOG.md) have more information.
If your main project has been migrated to Swift 4, i.e. SWIFT_VERSION is 4.0 in "Build Settings", then updating your Podfile to use `TvOSMoreButton` 1.1.0+ should work without any changes.
However, if your project has not been migrated to Swift 4 and is using **Swift 3.2**, then you need to update the SWIFT_VERSION to 4.0 just for the `TvOSMoreButton` pod (see [release page](https://github.com/username_2/TvOSMoreButton/releases/tag/1.1.0) for instructions.)
What version of Swift are you using?
Hopefully, one of these steps will solve these compile issues. Let me know how things go.
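For reference, pinning the Swift version of a single pod is usually done with a `post_install` hook in the Podfile; a sketch, assuming only the pod name from this thread:
```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    next unless target.name == 'TvOSMoreButton'
    target.build_configurations.each do |config|
      # Compile only this pod as Swift 4 while the app stays on Swift 3.2.
      config.build_settings['SWIFT_VERSION'] = '4.0'
    end
  end
end
```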
username_0: Thank you, I have fixed it.
username_2: That's fantastic @username_0! I am glad things are working for you. Can this issue be closed?
username_0: Yes. Thank you for your reply.
Status: Issue closed
|
nci-hcmi-catalog/portal | 695923703 | Title: Model Detail page: TSV file names
Question:
username_0: When the user downloads the TSV from the model page variant tabs, please make the names as follows:
- [ ] modelname-research-somatic-variants.tsv
- [ ] modelname-clinical-variants.tsv
- [ ] modelname-histopathological-biomarkers.tsv
<img width="805" alt="2020-09-08_10-24-04" src="https://user-images.githubusercontent.com/18424138/92489886-48761580-f1be-11ea-8391-c0d350c94abd.png">
<img width="796" alt="2020-09-08_10-24-16" src="https://user-images.githubusercontent.com/18424138/92489894-4c099c80-f1be-11ea-9671-152732667f5a.png">
Answers:
username_1: lgtm me on the site! thank you @mistryrn !!
Status: Issue closed
|
geommer/yabar | 142464479 | Title: spectrwm issue?
Question:
username_0: Hello - using spectrwm v2.7.2 on Fedora 23. I am having trouble getting yabar to display on all workspaces. Maybe I don't understand the documentation. I've tried reserving screen space in my .spectrwm.conf, not reserving it, and launching yabar both with the baraction in .spectrwm.conf and as a script that calls yabar.
Calling yabar in .xinitrc starts yabar fine on WS 1. Any idea what I could be doing wrong (other than using spectrwm :)
Answers:
username_1: @username_2 did you solve your problem? If so, how?
username_2: I did not solve the problem. I moved on and should've closed the issue.
Status: Issue closed
|
ClickHouse/ClickHouse | 526480061 | Title: Failed to do integration test on CentOS 7
Question:
username_0: **Describe the bug or unexpected behaviour**
Here's my development environment:
-1 CentOS 7 physical machine, checkout Clickhouse code
-2 Ubuntu 18.04 LTS container, mount Clickhouse code into container
-3 Build Clickhouse binaries inside container
-4 Copy clickhouse executable from container to /usr/bin outside. Run integration test outside container.
I failed at step 4.
```
[root@gpu07 integration]# ./runner --clickhouse-root /bigdata/zhichyu/2/ClickHouse/ 'test_partition'
clickhouse_integration_tests_volume
Start tests
ImportError while loading conftest '/ClickHouse/dbms/tests/integration/conftest.py'.
ImportMismatchError: ('conftest', '/bigdata/zhichyu/2/ClickHouse/dbms/tests/integration/conftest.py', local('/ClickHouse/dbms/tests/integration/conftest.py'))
Traceback (most recent call last):
File "./runner", line 110, in <module>
subprocess.check_call(cmd, shell=True)
File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --net=host -it --rm --name clickhouse_integration_tests --privileged --volume=/usr/bin/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/usr/bin/clickhouse:/clickhouse --volume=/bigdata/zhichyu/2/ClickHouse/dbms/programs/server:/clickhouse-config --volume=/bigdata/zhichyu/2/ClickHouse/:/ClickHouse --volume=clickhouse_integration_tests_volume:/var/lib/docker -e PYTEST_OPTS='test_partition' yandex/clickhouse-integration-tests-runner ' returned non-zero exit status 4
```
I managed to run integration test on an Ubuntu 18.04 physical machine. However I prefer CentOS since it's more stable.
Answers:
username_1: Most likely it happened because `.dockerignore` does not affect `--volume` bindings, so python cache files created outside container was used inside it. Try remove all `__pycache__` dirs and `*.pyc` files in `/bigdata/zhichyu/2/ClickHouse/dbms/tests/integration/` and sub-directories, it should help.
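A sketch of that cleanup, using the path from this thread:
```sh
cd /bigdata/zhichyu/2/ClickHouse/dbms/tests/integration
find . -type d -name '__pycache__' -prune -exec rm -rf {} +
find . -name '*.pyc' -delete
```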
username_0: @username_1 Your suggestion works! Thanks.
It would be better to set `PY_IGNORE_IMPORTMISMATCH=1` and remove all `__pycache__` dirs and `*.pyc` files in `dbms/tests/integration/image/dockerd-entrypoint.sh` or `runner.py`.
And it would be even better to simplify the integration framework. Python dependencies and container-in-container make things complicated.
Status: Issue closed
|
sphinx-contrib/redirects | 613774758 | Title: exception: No module named 'sphinxcontrib.redirects'
Question:
username_0: I am using sphinx 2.1.2 and followed the readme and getting error like
`Running Sphinx v2.1.2
Extension error:
Could not import extension sphinxcontrib.redirects (exception: No module named 'sphinxcontrib.redirects') `
Status: Issue closed |
chriscdn/RHManagedObject | 168854299 | Title: Get object by objectID?
Question:
username_0: I need to save an `objectID` and retrieve it again some time later (or get nil if it doesn't exist) by ID. The problem is that I should store the `objectID` as a string, but its type is `NSManagedObjectID`. Could you explain how to save this unique id and get an object with this id using this library?
Answers:
username_1: @username_0
It's no different from using core data. You can store the absolute string of an objectID's URI representation. An example of using a category on RHCoreDataTableViewController is one example accomplishing your goal.
```objc
@implementation RHCoreDataTableViewController (KJMAdditions)
- (NSString *)kjm_objectIDstringRepresentation
{
return [self kjm_selectedObjectID].URIRepresentation.absoluteString;
}
- (NSManagedObjectID *)kjm_selectedObjectID
{
NSManagedObject *selectedObject = [self.fetchedResultsController objectAtIndexPath:(self.tableView).indexPathForSelectedRow];
return selectedObject.objectID;
}
@end
```
Also, you should make sure the ObjectID is a permanentID before saving it.
```objc
- (NSManagedObjectID *)kjm_permanentObjectID
{
NSError *error = nil;
if (![self.managedObjectContext obtainPermanentIDsForObjects:@[self] error:&error]) {
NSLog(@"Couldn't obtain a permanent ID for object %@", error);
}
return self.objectID;
}
```
username_0: I did it slightly differently:
```
NSManagedObjectID *objID = [managedObjectContext.persistentStoreCoordinator managedObjectIDForURIRepresentation:[NSURL URLWithString:objIDString]];
id obj = [managedObjectContext existingObjectWithID:objID error:&err];
```
Could you explain the last step with permanentID?
username_1: The last step is when saving a managed object id used to find objects later on, as you said in the first post. From apple docs:
"New objects inserted into a managed object context are assigned a temporary ID which is replaced with a permanent one once the object gets saved to a persistent store."
username_0: Ok, thanks, it may help to solve another problem of mine. Additionally, maybe you know how to work with objectID in the overridden `willSave` and `didSave` methods if this id may be invalid?
username_1: Not sure I exactly understand what you're trying to do but keep this in mind:
```objc
// It never returns nil, and never performs I/O. The object specified by objectID is assumed to exist, and if that assumption is wrong the fault may throw an exception when used.
- (NSManagedObject *)objectWithID:(NSManagedObjectID *)objectID;
// vs.
// Returns nil if it does not exist
- (nullable NSManagedObject *)existingObjectWithID:(NSManagedObjectID*)objectID error:(NSError**)error;
```
Also, if your passing the objectID, using a segue, for example, you can check whether the objectID exists --
```objc
if (self.yourObjectIDProperty) { // fetch the object using the objectID };
```
username_0: ok. My task is to save/delete objects from Core Spotlight. The problem is that Core Spotlight is "write-only". So I decided to create/update/delete data in Core Spotlight simultaneously with Core Data (`willSave` and `didSave` methods). Now it works somehow, but if this problem occurs again I'll have duplicate entries in Core Spotlight because I use the entity ID as the unique identifier.
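A rough, untested sketch of what the `didSave` side of that approach could look like; it assumes the Core Spotlight identifier is the permanent objectID's URI string, as discussed above:
```objc
#import <CoreSpotlight/CoreSpotlight.h>

// In your RHManagedObject subclass:
- (void)didSave
{
    [super didSave];
    NSString *identifier = self.objectID.URIRepresentation.absoluteString;
    if (self.isDeleted) {
        // Remove the corresponding Core Spotlight entry when the object is deleted.
        [[CSSearchableIndex defaultSearchableIndex]
            deleteSearchableItemsWithIdentifiers:@[identifier]
                               completionHandler:nil];
    }
}
```
|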
vanvught/rpidmx512 | 597960260 | Title: Multiple RDM devices were not found.
Question:
username_0: Multiple RDM devices have been searched and tested at the same time, and now I'm testing that it's easy to find if it's a single, but none of the rdM devices together are discovered.
Test environment: opi_emac_artnet_dmx for orangepi_zero
Answers:
username_1: When opening an issue, then please provide detailed information. There is not much information to work with.
1. - Which DMX/RDM hardware are you using?
2. - What about the signal termination used?
3. - Are the fixtures RDM compliant? For example RDM Test results?
4. - What are the UUID's for the RDM fixtures?
5. - Can you provided RDM traces?
username_1: I cannot replicate the issue. I am using the DMX/RDM boards from [http://bitwizard.nl/shop/dmx](http://bitwizard.nl/shop/dmx)
```
arjanusername_1@MacBook-Air ~ % ola_rdm_discover -u 1
29aa:02001420
5000:081e0bd3
5000:1c34020e
arjanusername_1@MacBook-Air ~ % ola_rdm_discover -u 1 -f
29aa:02001420
5000:081e0bd3
5000:1c34020e
arjanusername_1@MacBook-Air ~ % ola_rdm_discover -u 1 -i
29aa:02001420
5000:081e0bd3
5000:1c34020e
arjanusername_1@MacBook-Air ~ %
olad/plugin_api/Universe.cpp:523: Full RDM discovery triggered for universe 1
plugins/artnet/ArtNetNode.cpp:1778: Artnet RDM discovery complete
olad/plugin_api/Universe.cpp:523: Full RDM discovery triggered for universe 1
plugins/artnet/ArtNetNode.cpp:1778: Artnet RDM discovery complete
olad/plugin_api/Universe.cpp:525: Incremental RDM discovery triggered for universe 1
plugins/artnet/ArtNetNode.cpp:1778: Artnet RDM discovery complete
```

username_0: 1. UID: 111a:3c7a738f
2. I am using a network cable, not wireless.
3. I search with the DMX-Workshop software; the RDM fixture runs a self-written program, so it may also be that my RDM fixture is not correct.
4. Can you provide detailed instructions on using opi_dmx_usb_pro with OLA?
username_1: http://www.orangepi-dmx.org/raspberry-pi-rdm-controller
Note: you can also run `rdm_test_server.py` with the Art-Net firmware.
username_1: Open items:
1. Which DMX/RDM hardware are you using?
2. What about the signal termination used?
username_1: Closing as there is no activity, the issue is not related to the firmware.
Status: Issue closed
|
JuliaParallel/ClusterManagers.jl | 745607089 | Title: htcondor manager: failure when listening to a telnet commu
Question:
username_0: See recent comment on the unduly closed issue #107 !
In a nutshell: telnet connection between worker node and master node fails:
` telnet: connect to address 192.168.1.3: Connection refused `
Is anyone able to run addprocs_htc() on a cluster running htcondor scheduler??
The issue was posted when I was running Julia version <= 1.1, but it is still here with v1.4 or v1.5.
Answers:
username_1: Hi, I ran into this issue too. Based on the MPI change mentioned in https://github.com/JuliaParallel/ClusterManagers.jl/issues/107, I made a modification here that allows connections from remote machines
https://github.com/username_1/ClusterManagers.jl/commit/f91789be45336b0c4ca949ffd9853ba283cbccdf#diff-54c957b90c04bed63e172caa4efa42b072b2e0aef85562ece656d68f8bc8337bL45-R57
In my case, I switched to `nc` since `telnet` wasn't available in my worker node environment.
If it works out for you too, I can clean this up and make a PR
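For reference, the probe boils down to something like this (host taken from this thread; the port is a placeholder, since workers pick theirs at startup):
```sh
nc -z 192.168.1.3 9009 && echo reachable
```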
username_0: Managed to test it finally. It seems to work on my cluster! I still get some erratic connection issues with some particular nodes on the cluster... but this may not be related to ClusterManagers !! I like the additional options, too! Thanks!
username_1: Glad it works for you! :)
username_2: @tanmaykm
probably can close this? and make a new breaking release maybe? for both HTCondor and `qsub` related overhaul in #153 |
prometheus-community/helm-charts | 721347353 | Title: [prometheus-kube-stack] High Availability
Question:
username_0: Hello,
Please, I need more details about the HA implementation using the chart prometheus-kube-stack.
What is recommended for an HA setup?
In my topology, I am using InfluxDB as external storage.
If I create multiple replicas of the same Prometheus instance, will each replica scrape the metrics independently from the others? So, if the scrape interval is 20 seconds and I have 4 replicas, will we get the metrics from the target 4 times within 20 seconds?
What is the best solution for scalability while avoiding heavy data duplication and a high number of scrapes of the targets' metrics within seconds?
Please advise, as I need to set up the most appropriate architecture, scalable and with optimized data storage.
For the external storage, InfluxDB is not a requirement if you would propose an alternative solution.
Regards,
Answers:
username_1: The question here really is about how do you want to implement HA for Prometheus itself.
You can take a look at:
- Thanos, https://github.com/thanos-io/thanos
- VictoriaMetrics, https://github.com/VictoriaMetrics/VictoriaMetrics
among other projects aiming to solve this issue.
These all can be used with the Prometheus-operator shipped by kube-prometheus-stack to achieve HA (and many other features as well).
Status: Issue closed
username_0: Thank you for your reply |
markmercedes/krudmin | 288109703 | Title: Relations are not displayed properly in show page
Question:
username_0: Ex.

Status: Issue closed
Answers:
username_1: 
👍 |
RasaHQ/rasa | 602734609 | Title: Encountered invalid end-to-end format
Question:
username_0: **Rasa version**: 1.9.6
**Python version**: 3.6.9
**Operating system** (windows, osx, ...): windows
**Issue**: Cannot test my model because of an error in my stories, but they look good, and I've been using the same stories with no problems.
**Error (including full traceback)**:
```
Error in line 2: Encountered invalid end-to-end format for message time_remaining. Please visit the documentation page on end-to-end testing at https://rasa.com/docs/rasa/user-guide/testing-your-assitant/#end-to-end-testing/
```
**Command or request that led to error**:
```
rasa test
```
stories.md:
```
## time_remaining
* time_remaining
- action_time_remaining
## current_dose
* current_dose
- action_current_dose
## treatment_end
* treatment_end
- action_treatment_end
## required_dose
* required_dose
- action_required_dose
```
Answers:
username_1: Hi, look at your **e2e stories** file, not the stories.md file, and read the doc on how to test the model.
username_0: Hi, where do I find my e2e stories? I read the docs, I followed it for testing my model
username_1: I believe they reside in the tests folder that comes with `rasa init`?
username_2: Thanks for raising this issue, @alwx will get back to you about it soon✨
###### Please also check out the [docs](https://rasa.com/docs/) and the [forum](https://forum.rasa.com/) in case your issue was raised there too 🤗 |
gin-gonic/gin | 497167924 | Title: POST request not consuming all of body
Question:
username_0: - go version:
`1.12`
- gin version (or commit ref):
`b75d67cd51eb53c3c3a2fc406524c940021ffbda`
- operating system:
Linux
## Description
I have a fairly simple login request:
Handler:
```golang
var requestData LoginRequest
err := c.ShouldBindBodyWith(&requestData, binding.JSON)
if err != nil {
logger.Log.WithError(err).Error("INVALID_REQUEST_DATA")
utils.HandleError(c, "INVALID_REQUEST_DATA", err)
return
}
```
Contract:
```golang
type LoginRequest struct {
AuthType string `json:"authType" form:"authType" binding:"required"`
Email string `json:"email" form:"email" binding:"required"`
Password string `json:"password" form:"password" binding:"required"`
}
```
I am sending this request:
```bash
curl -X POST \
http://localhost:3000/v1/users/login/ \
-H 'Accept: */*' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Cache-Control: no-cache' \
-H 'Connection: keep-alive' \
-H 'Content-Length: 80' \
-H 'Content-Type: application/json' \
-H 'Host: localhost:3000' \
-H 'Postman-Token: <PASSWORD>,<PASSWORD>,8<PASSWORD>' \
-H 'User-Agent: PostmanRuntime/7.15.2' \
-H 'cache-control: no-cache,no-cache' \
-d '{
"email": "<EMAIL>",
"authType": "password",
"password": "<PASSWORD>"
}'
```
`ShouldBindWith` always returns `unexpected EOF`
When I tried to find out the root cause, it looked like gin was stripping off some of the JSON at the end, and this is what the handler ends up receiving:
```
"{\n \"email\": \"<EMAIL>\",\n \"authType\": \"password\",\n \"password\": \"ab"
```
Not sure what is going on here
Answers:
username_0: Duplicate
Status: Issue closed
|
xenia-project/game-compatibility | 1000462090 | Title: 565707D0 - Lolipop Chainsaw
Question:
username_0: [Marketplace](https://marketplace.xbox.com/en-US/Product/Lollipop-Chainsaw/66acd000-77fe-1000-9115-d802565707d0)
Tested on https://github.com/xenia-canary/xenia-canary/commit/9c74b4caba5a2f35f40c20ac555abe9e21e02b93
# Issues:
I have yet to complete the game but from the 2 hours or so of playtime I have in it the game is very playable. There are of course bugs such as audio glitching and graphical pop ins from time to time. The first screenshot I have below is of the graphical pop in. It only pops up for a second or two and then disappears. Aside from that the game plays well and I'm able to get around 30fps even on my lower end system.
# Log:
[xenia.zip](https://github.com/xenia-project/game-compatibility/files/7193173/xenia.zip)
# Screenshot(s):


# Labels:
state-gameplay
gpu-drawing-corrupt
Answers:
username_1: Duplicate of #983
Do not report canary status in official compatibility report
Status: Issue closed
username_0: As I responded in the Resident Evil 4 issue report, I test all my games with both canary and master so I can see whether either has differing issues, and I was not sure which one to use in the report. And from what you said about the duplicate, I think you should look into https://github.com/xenia-project/game-compatibility/issues/983, because if you search the name "lolipop chainsaw" the only thing that will show up is the issue made by me; however, if you search "983" it will show up. Not sure if that's a glitch or something else.
username_2: Not to sound rude but you might not have found the old issue because you only used one 'L' in lollipop. Spelling it with 2 allows you to find the original issue |
rossfuhrman/_why_the_lucky_markov | 387523658 | Title: They walked that line kids are tough as a courtesy to all of it even easier to retreat into deep space with my name. “airegiN fo noissessop ekaT.” Bust us when starmonkeys start to both Lara and her elbow smiled.
Question:
username_0: Toot: They walked that line kids are tough as a courtesy to all of it even easier to retreat into deep space with my name. “airegiN fo noissessop ekaT.” Bust us when starmonkeys start to both Lara and her elbow smiled.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
EXALAB/AnLinux-App | 377025467 | Title: Desktop Environment Support
Question:
username_0: If you have any tweaks or suggestions about Desktop Environments, please post them below.
Currently Supported Desktop Environments:
1. [LXDE](https://lxde.org)
2. [Xfce4](https://xfce.org) (Failed for Arch Linux)
Testing List:
1. [LXQt](https://lxqt.org)
2. [Mate](https://mate-desktop.org/)
Blacklisted (Will not be tested):
1. [Gnome](https://www.gnome.org/)
2. [KDE](https://www.kde.org/)
3. [Cinnamon](https://cinnamon-spices.linuxmint.com/)
Answers:
username_1: Suggestion: IceWM. Primarily due to small install size.
username_0: @username_1 Thanks for the suggestion, we will consider it.
username_2: And i3wm
username_3: @username_0 why have GNOME and KDE been blacklisted? Also, is it possible to add support for Deepin DE for Arch Linux
username_4: @username_3 Because "systemd" can't run in chroot (proot) mode... but it's required.
username_5: I'm facing this problem when going for the desktop view:

username_5: 
username_4: @username_5
Try re-execute the "Desktop Install Command":
wget https://raw.githubusercontent.com/EXALAB/AnLinux-Resources/master/Scripts/DesktopEnvironment/Apt/Xfce4/de-apt-xfce4.sh && bash de-apt-xfce4.sh
username_5: 















username_5: The same output was coming.
username_6: That most likely has something to do with the VNC you're running; of course I'm not 100% sure, I'm just replying because I see this was 21 days ago. I would check if I knew which VNC you use; I run the same desktop on one of my phones, I'm sure.
username_7: Are there other alternatives besides TigerVNC according to the guidelines of the AnLinux application?
I could not find the intended application in the Play Store.
I experienced the same thing, which was the error
"line 4: command not found"
username_8: Another solution for a Termux desktop environment: [AidLearning Framework](https://github.com/username_8/AidLearning-FrameWork)
username_9: I tested KDE; it works (lagged).
username_2: @username_9 can you describe how you did it, and where you needed to hack around?
username_10: KDE works on Debian 9/10.
username_11: How do I run vncserver as a user (not root)?
I have installed Ubuntu and Xfce4; vncserver runs fine as root,
but when I try it as a normal user, I get this:

Status: Issue closed
username_12: Thank you so very much for spamming us with self-serving, off-topic content. |
jcoelho93/personal-website | 586928093 | Title: Education section missing?
Question:
username_0: Great project!
However, it seems an education section should be added, since you have the information in resume.json?
Answers:
username_1: Thanks!
There’s a small how-to in the README.
But I’m accepting PRs if you’re interested in collaborating.
username_2: @username_0 I also forked the work from @username_1 and added, among other sections, the education section. You can take a look at my site as well:
https://github.com/username_2/javascript-cv
leon-gregori.com |
aandergr/kspalculator | 166854148 | Title: Support bi-, tri-, quad- -couplers and - adapters
Question:
username_0: These give the option to use more than one stack-mounted engine, and offers different choices of where to put fuel tanks.
Answers:
username_1: This also has interesting interactions with:
* SFB staging
* possible radial geometries
* small to large radius converters (e.g. bi-coupler to c7 slanted works to get side-by-side large) |
openmodelingfoundation/openmodelingfoundation.github.io | 1072474896 | Title: Voting by email
Question:
username_0: In organisations like OMF, with voting members distributed over the world, it can be very hard to meet quorum requirements in meetings. This can effectively stop all decision-making. Thus there needs to be some mechanism for off-line voting, with certain time limits and means of notification of a vote occurring.
Answers:
username_1: I think this has been taken care of by https://github.com/openmodelingfoundation/openmodelingfoundation.github.io/pull/373
Status: Issue closed
|
KDAB/android_openssl | 794051868 | Title: QT_VERSION is empty in cmakelists.txt
Question:
username_0: In recent versions of Qt, ANDROID_EXTRA_LIBS must be defined before the first call of FindPackage(Qt5 ...), this is at least what the example projects suggest.
So it is very likely that users have an include(android_openssl/cmakelists.txt) before any calls to FindPackage(Qt5). In this case, QT_VERSION is not defined yet when the include file is parsed.
Anyhow, the version detection currently works, even if it doesn't look like it: all comparisons with the undefined variable return false, and this way `if (NOT (QT_VERSION LESS 5.14.0))` is the branch that finally succeeds.
I believe that the following code at the beginning of cmakelists.txt would solve the issue:
```cmake
if(NOT DEFINED QT_VERSION)
    find_program(QMAKE_EXE NAMES qmake)
    execute_process(COMMAND ${QMAKE_EXE} -query QT_VERSION OUTPUT_VARIABLE QT_VERSION)
endif()
```
Note that find_program will work here because CMAKE_PREFIX_PATH must contain the base directory of Qt according to the Qt documentation, so this solution is independent of Qt Creator.
I do not have the old versions of Qt available here so I cannot test this code, and I didn't want to create a PR for code that I could not try out. |
Rigellute/spotify-tui | 531231573 | Title: Under WSL, `spt` does nothing on invocation
Question:
username_0: I have installed spotify-tui both through `cargo install spotify-tui` and by building from git master, in both cases running `spt` results in nothing happening and the program exiting.
```
spotify-tui $ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.20s
Running `target/debug/spt`
```
On the first run I was prompted for my credentials, which I entered. I have tried deleting `~/.config/spotify-tui` and re-authenticating to see if that fixed it, which it did not.
Let me know if any further information is needed!
Answers:
username_0: Some additional information:
After entering the Client ID and Client Secret on the first run or after deleting the config directory, the following message is displayed in the program output
```
Error Os { code: 2, kind: NotFound, message: "No such file or directory" };Please navigate here ["https://accounts.spotify.com/authorize?[some redacted params]"]
```
username_1: Sorry to hear this @username_0, which operating system are you on?
username_0: Hi there username_1, thanks for a timely response.
/etc/lsb-release lists the following;
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
```
This is in Windows Subsystem for Linux, with a full `uname` of
```
Linux ahostname 4.4.0-18362-Microsoft #476-Microsoft Fri Nov 01 16:53:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
```
Windows specifications are as follows:

I have tried running the program both in and out of tmux to no avail. `xorg-dev` is installed, alongside the libxcb* dependencies... `xorg-dev is already the newest version (1:7.7+19ubuntu7.1)`
I am running the program in a headless environment, perhaps that could be related to it as I assume an X server is needed for the X clipboard functionality to work? I will test later with X11 support through Xming if needed.
username_1: Hmm... thanks for this extra information.
If it is not too much bother, would you mind reproducing this in dev mode? i.e. clone the repo and `cargo run`?
That might give you the error causing the app to exit immediately.
username_0: I've worked it out!
With Xming (a Windows X server) running, invoking `spt` like follows causes it to work fine:
```env DISPLAY=localhost:0.0 ~/.cargo/bin/spt```
Removing the environment variable with a valid X11 display causes the "silent fail" behavior I initially reported to occur. Since you mentioned X support being a requirement for the clipboard functionality, would it be possible to make this able to be disabled through a flag?
I would be more than happy to contribute this change if desirable, but it may take me a week or so to have the time for it.
username_0: Just to add to the above: `xvfb-run spt` also works, by running spt in the context of an in-memory X server, and is useful when there isn't actually an X server set up to connect to.
username_1: Great that you've figured it out! And thanks for the write up.
You are right, we should probably gracefully handle the error when the clipboard can't be set up. This should be easy to implement. Will investigate setting up a flag to toggle the clipboard behaviour.
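A sketch of what such graceful handling could look like, based on the `clipboard` crate's API (not the actual patch):
```rust
use clipboard::{ClipboardContext, ClipboardProvider};

fn main() {
    // Keep None instead of crashing when no X11 display is available.
    let mut clipboard: Option<ClipboardContext> = ClipboardProvider::new().ok();

    // Copying then becomes a no-op without a clipboard:
    if let Some(ctx) = clipboard.as_mut() {
        let _ = ctx.set_contents("spotify:track:example".to_owned());
    }
}
```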
username_1: This was fixed in #217
Status: Issue closed
|
pandas-dev/pandas | 298914311 | Title: NameError: name '_converter' is not defined
Question:
username_0: #### Code Sample, a copy-pastable example if possible
```python
import pandas
d = pandas.DataFrame({"lat": [-6.081690, -5.207080], "lon": [145.789001, 145.391998]})
d.plot(kind='scatter', x='lat', y='lon')
```
#### Problem description
```
NameError Traceback (most recent call last)
<ipython-input-4-f42fef061f30> in <module>()
1 import pandas
2 d = pandas.DataFrame({"lat": [-6.081690, -5.207080], "lon": [145.789001, 145.391998]})
----> 3 d.plot(kind='scatter', x='lat', y='lon')
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in __call__(self, x, y, kind, ax, subplots, sharex, sharey, layout, figsize, use_index, title, grid, legend, style, logx, logy, loglog, xticks, yticks, xlim, ylim, rot, fontsize, colormap, table, yerr, xerr, secondary_y, sort_columns, **kwds)
2675 fontsize=fontsize, colormap=colormap, table=table,
2676 yerr=yerr, xerr=xerr, secondary_y=secondary_y,
-> 2677 sort_columns=sort_columns, **kwds)
2678 __call__.__doc__ = plot_frame.__doc__
2679
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in plot_frame(data, x, y, kind, ax, subplots, sharex, sharey, layout, figsize, use_index, title, grid, legend, style, logx, logy, loglog, xticks, yticks, xlim, ylim, rot, fontsize, colormap, table, yerr, xerr, secondary_y, sort_columns, **kwds)
1900 yerr=yerr, xerr=xerr,
1901 secondary_y=secondary_y, sort_columns=sort_columns,
-> 1902 **kwds)
1903
1904
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in _plot(data, x, y, subplots, ax, kind, **kwds)
1685 if isinstance(data, DataFrame):
1686 plot_obj = klass(data, x=x, y=y, subplots=subplots, ax=ax,
-> 1687 kind=kind, **kwds)
1688 else:
1689 raise ValueError("plot kind %r can only be used for data frames"
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in __init__(self, data, x, y, s, c, **kwargs)
835 # the handling of this argument later
836 s = 20
--> 837 super(ScatterPlot, self).__init__(data, x, y, s=s, **kwargs)
838 if is_integer(c) and not self.data.columns.holds_integer():
839 c = self.data.columns[c]
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in __init__(self, data, x, y, **kwargs)
802
803 def __init__(self, data, x, y, **kwargs):
--> 804 MPLPlot.__init__(self, data, **kwargs)
805 if x is None or y is None:
806 raise ValueError(self._kind + ' requires and x and y column')
~/.local/lib/python3.6/site-packages/pandas/plotting/_core.py in __init__(self, data, kind, by, subplots, sharex, sharey, use_index, figsize, grid, legend, rot, ax, fig, title, xlim, ylim, xticks, yticks, sort_columns, fontsize, secondary_y, colormap, table, layout, **kwds)
98 table=False, layout=None, **kwds):
99
--> 100 _converter._WARN = False
101 self.data = data
102 self.by = by
NameError: name '_converter' is not defined
```
[Truncated]
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: 0.999999999
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
Answers:
username_0: Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username_0/.local/lib/python3.6/site-packages/pandas/plotting/_converter.py", line 8, in <module>
import matplotlib.units as units
ModuleNotFoundError: No module named 'matplotlib'
```
The error should be more useful.
username_1: So this is a duplicate of https://github.com/pandas-dev/pandas/issues/19340, and the problem is that we cannot reproduce it. It clearly shows up for multiple people, but we need to diagnose why this happens.
Can you run the code in https://github.com/pandas-dev/pandas/issues/19340#issuecomment-359416261 and check if that fails?
username_0: ... deregister as deregister_matplotlib_converters
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username_0/.local/lib/python3.6/site-packages/pandas/plotting/_converter.py", line 8, in <module>
import matplotlib.units as units
ModuleNotFoundError: No module named 'matplotlib'
```
username_1: Ah, I actually *can* reproduce this, and I suppose this is due to
https://github.com/pandas-dev/pandas/blob/f8dfcfb35ca975575dcfba625eac6c9f231c0e5e/pandas/plotting/_core.py#L44-L50
where we import `_converter`, which is later on used in the file, but then don't raise a good error message when matplotlib is not available.
Yes, this should certainly be solved.
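A hypothetical sketch of such a guard (not the actual pandas patch):
```python
try:
    from pandas.plotting import _converter
except ImportError:
    _converter = None

class MPLPlot:
    def __init__(self, data, **kwds):
        # Fail with an actionable message instead of a NameError.
        if _converter is None:
            raise ImportError("matplotlib is required for plotting.")
        _converter._WARN = False
        ...
```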
username_2: Experienced this in Jupyter Notebook. Installed `matplotlib`, imported it, and reimported `pandas` afterwards, the exception did not go away. Restarted the notebook, and it was fixed. Feels like some global state doesn't get reinitialized.
username_1: @username_2 to reliably check imports with python you always need to restart the python process.
But anyway, this is a bug, and we have to fix it for the next release.
username_3: @tonyshare I met the same question, how do you solve?
username_1: @username_3 By installing matplotlib. This will be fixed in the next version of pandas.
Status: Issue closed
username_4: The solution is to restart the kernel or restart the Jupyter server.
username_5: restarting kernel does not work for me |
serverless/serverless | 124401248 | Title: Cannot find module
Question:
username_0: I am just starting out with Serverless framework. I followed the instructions in "Installing Serverless" and then "Introducing Modules". Seemed pretty straight forward. I successfully did an endpoint deploy and then tested the URL. I received the following error:
{"errorMessage":"Cannot find module 'modules\\greetings\\hello\\handler'",,"errorType":"Error","stackTrace":["Function.Module._resolveFilename (module.js:338:15)","Function.Module._load (module.js:280:25)","Module.require (module.js:364:17)","require (module.js:380:17)"]}
I also followed another set of directions at:
Answers:
username_1: Looks like you are working on Windows? There are other people having the same issue here: https://github.com/serverless/serverless/issues/235
username_0: Yes, I am working on Windows. Thanks for pointing me to issue #235. Do you have any knowledge of a work-around or if this will be addressed in v0.1.0 release? It seems like a pretty significant issue.
username_1: @username_0 nope sorry, I have not heard of any workaround yet. But I can imagine it has something to do with differences in path resolution (`/` vs `\`).
username_0: Thanks for the info. I am very new to Serverless. What would be the recommended process or who to reach out to in order to determine if it will be addressed in v0.1.0 release?
username_1: @username_0 I think I figured out what's going wrong, although I don't have a Windows machine to test on.
Would you be so kind to test my `windows` branch on your environment? It's based on the `master` branch, because `v0.1.0` is quite unstable at the moment. If this fix works, I will submit it for both `master` and `v0.1.0`.
https://github.com/username_1/serverless/tree/windows
username_1: Oh, forgot to mention, my change only affects newly generated functions. If you want to test this for existing functions, you should change all backslashes in the `handler` property in your `s-function.json` files to forward slashes.
username_0: Thanks so much for working on this! I will test. I am just in learning/experimentation mode so I will start out with new functions - not a problem.
username_2: @username_1 if you go and reference my work in #219, you'll see that all AWS paths should be generated using `path.posix.*` (or make use of `path.posix.sep`). All local paths should be generated using `path.*` (or make use of `path.*`). Simply using `path.join` everywhere or including the path separator explicitly will eventually lead to failures.
username_1: @username_2 :+1: exactly, that is the fix I added in my branch. (was not familiar with the `posix` mode though, will change that if the fix seems to work)
username_2: `path.posix.*` and `path.win32.*` allow you to generate paths for known platforms, they're just platform specific versions of `path.*`.
username_0: Great news Joost! I was able to successfully deploy and then hit the website successfully. Thanks again for taking the time to look at this and provide a resolution. Much appreciated!
username_1: Great! Have submitted the PRs! :+1:
username_0: Awesome! Thanks again.
Status: Issue closed
|
quynhbud/Kid-Monitoring-Application | 842735712 | Title: Design the Sequence Diagram
Question:
username_0: [Nhom08_SequenceDiagram.pdf](https://github.com/username_0/Kid-Monitoring-Application/files/6639542/Nhom08_SequenceDiagram.pdf)
Status: Issue closed
Answers:
username_1: Diagram link: https://drive.google.com/drive/folders/1ju-y43xEbW5qDvq3lQ8KM_OoLngeeTH-?usp=sharing

username_2: Links to the sequence diagrams:
https://drive.google.com/drive/folders/1JV7-4Mp2hswBuAD6AHBNWIYqcgmxndQt?usp=sharing
username_0: https://drive.google.com/file/d/1JE515bo0NoBFllOlwwPBmbSQQJDDw6-S/view?usp=sharing
Status: Issue closed
|
ClickHouse/ClickHouse | 729965804 | Title: toTimeZone does not throw an error about non-constant timezone
Question:
username_0: ```
select materialize('America/Los_Angeles') t, toTimeZone(now(), t)
┌─t───────────────────┬─toTimeZone(now(), materialize('America/Los_Angeles'))─┐
│ America/Los_Angeles │ 2020-10-26 22:58:42 │
└─────────────────────┴───────────────────────────────────────────────────────┘
desc (select materialize('America/Los_Angeles') t, toTimeZone(now(), t))
┌─name──────────────────────────────────────────────────┬─type────────────────┬
│ t │ String │
│ toTimeZone(now(), materialize('America/Los_Angeles')) │ DateTime('Etc/UTC') │
└───────────────────────────────────────────────────────┴─────────────────────┴
example with toString
SELECT
materialize('America/Los_Angeles') AS t, toString(now(), t)
Received exception from server (version 20.11.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type of argument #2 'timezone' of function toString, expected const String, got String.
```
Answers:
username_0: related https://github.com/ClickHouse/ClickHouse/issues/6948 |
AdguardTeam/dnsproxy | 426041402 | Title: Integrate with urlfilter library
Question:
username_0: https://github.com/AdguardTeam/urlfilter
Check the example here:
https://github.com/AdguardTeam/urlfilter/blob/master/dns_engine_test.go
Here's what needs to be done:
1. [ ] Allow configuring blocklists
2. [ ] In the case of the console tool, filter list ID is not really important
3. [ ] Mobile API should allow receiving events with the following data: DNS request, response, data of the filtering rule applied (if any)
Status: Issue closed |
pivorakmeetup/pivorak-web-app | 338136540 | Title: Float type for homework marks
Question:
username_0: As a result of using the float type for homework marks, we get an incorrect total mark

That column needs to be migrated to the decimal type.
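A sketch of the migration this implies (the table and column names are assumptions):
```ruby
class ChangeHomeworkMarkToDecimal < ActiveRecord::Migration[5.2]
  def change
    # decimal avoids the binary-float rounding that skews the total mark
    change_column :homeworks, :mark, :decimal, precision: 5, scale: 2
  end
end
```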
Answers:
username_1: Do you want to fix that? In the admin area we have this problem too.
username_0: @username_1 yes, I want to try to fix that
username_1: @username_0 Go on! Good luck & Have fun
Status: Issue closed
|
nanomsg/nanomsg | 157417884 | Title: Possible lost events in nn_ctx_leave()
Question:
username_0: When doing the nn_ctx_leave() stuff, we check the events and the eventsto. But there is a problem: if an event is queued, either as a result of a cycle or from other activity, we might wind up not running the code. We really want to make sure that both event queues are empty before finally leaving.
This is a subtle race caused by dropping the lock and not re-verifying that prior assumptions still hold true when the lock is reacquired.
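An illustrative pattern only, with hypothetical helper names rather than nanomsg's actual API:
```c
/* Re-verify both queues under the lock before leaving. */
lock (&ctx->sync);
while (!queue_empty (&ctx->events) || !queue_empty (&ctx->eventsto)) {
    /* Processing may drop and re-acquire the lock, so loop until both
       queues are observed empty while the lock is actually held. */
    process_pending_events (ctx);
}
unlock (&ctx->sync);
```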
Status: Issue closed
Answers:
username_0: Well, it seems that handling the events recursively causes other problems. We really need to drop these else we get a hang. |
kazupon/vue-i18n | 177105801 | Title: Event listener for "Cannot translate the value of keypath"
Question:
username_0: I had to build this to track the missing translations, which is quite hacky. I would prefer a simple event listener.
```js
var _warn = console.warn;
console.warn = function()
{
// track the missing translation
if (arguments[0].indexOf("i18n") !== -1)
{
var missingPhrase = arguments[0].replace('[vue-i18n] Cannot translate the value of keypath "', "");
missingPhrase = missingPhrase.replace('". Use the value of keypath as default', "");
WebService._call('translation/add-phrase', {phrase: missingPhrase});
return;
}
return _warn.apply(console, arguments);
};
```
Answers:
username_1: We should probably support the error handling I/F like `Vue.config.errorHandler`.
http://rc.vuejs.org/api/#errorHandler
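A sketch of what such a hook could look like, as a hypothetical option modeled on `Vue.config.errorHandler` and reusing the tracking call from above:
```js
const i18n = new VueI18n({
  locale: 'en',
  // hypothetical handler, invoked for every keypath that cannot be translated
  missing (locale, key) {
    WebService._call('translation/add-phrase', { phrase: key });
  }
});
```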
username_0: ¡That sounds cool!
Status: Issue closed
|
dannysanchez559/2021_React_Apprenticeship | 929691445 | Title: Movie Details Modal
Question:
username_0: **Is your feature request related to a problem? Please describe.**
User needs to know more details about the selected movie.
**Describe the solution you'd like**
Create a modal to provide the user with more information about the selected movie.
Status: Issue closed |
rook/rook | 416337563 | Title: Publish Rook to OperatorHub.io
Question:
username_0: <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://Rook-io.slack.com).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Feature Request
**What should the feature do:**
Publish rook operator on operatorhub.io
**What is use case behind this feature:**
Ease of distribution
**Environment**:
<!-- Specific environment information that helps with the feature request -->
Answers:
username_1: https://github.com/operator-framework/community-operators/pull/78
Status: Issue closed
username_2: Fixed via https://github.com/operator-framework/community-operators/pull/348 |
SpriteStudio/SS5PlayerForCocos2d-x | 205070715 | Title: After playing animation A with play(), playing animation B with the same ss::Player breaks the animation display
Question:
username_0: ss::Player is the latest one from master
cocos2d-x 3.14.1 (the current latest)
When only a single animation is played, the display does not break,
but when multiple animations are played with the same ss::Player, the display breaks.
I investigated this on my side but the cause is unknown, so I would appreciate any guidance if there is something wrong with the spec or my usage.
Answers:
username_0: I have sent you the animation files in question.
Extract and convert the crane_anime.zip file, then with the same player,
after playing
ssplayer->play("ball_anim_default/ball_default_a0",0);
playing ball_anim_change/ball_change_0a_0 breaks the display.
ssplayer->play("ball_anim_change/ball_change_0a_0",1);
ssplayer->play("ball_anim_change/ball_change_0a_0",1);
The display result differs from when it is played on its own.
username_1: Thank you for the report.
We have received the data, so we will check the behavior and report back here.
username_1: We checked the animations you sent in a verification project newly created with cocos2d-x 3.14.1, but we could not reproduce the reported phenomenon where playing a different animation with the same player breaks the display.
Specifically, what does the breakage look like?
- The part positions and sizes are correct, but the UVs are not (a different region of the texture is displayed)
- The part positions are wrong
- Parts from the previously played animation remain on screen
We suspect it is something like one of the above, but we would appreciate more information. |
allegro/allegro-api | 419965735 | Title: Number of units sold in an offer
Question:
username_0: Hello,
Will the number of units sold in an offer be added and made available through the new API for offers that do not belong to the logged-in user? Such data is available in the old API, and I hope it will also be carried over to the new one.
Answers:
username_1: We are working on a resource that will return public information about a given offer; I have passed along your suggestion to also include the number of units sold in it.
For now you can use the offers/listing resource, where the popularity field gives you:
```
"popularity": 0 -- popularity of the offer. For a BUY_NOW offer
                   this is the number of purchases by unique
                   users over the last 30 days. For an AUCTION
                   offer this is the number of users taking
                   part in the bidding.
```
Of course I know this is not exactly the data you are asking for, but at the moment this is the only data of this kind that the REST API exposes for an offer you did not list yourself.
username_0: Is it known when this resource will be available? Roughly? Will it be, for example, in the second quarter of 2019?
username_1: As soon as we know an approximate deployment date for such resources, we will announce it in a dedicated communication. We mentioned our latest plans here, among other places: #1043 |
wso2/product-is | 716852488 | Title: Getting an error message and missing elements when configuring the primary user store as DB2
Question:
username_0: **Describe the issue:**
An error message is shown when trying to click on "Users" in the side panel, Console > Manage.
Also, some UI elements are missing when configuring DB2 as the primary user store.
**How to reproduce:**
Configure deployment.toml and put the JDBC driver for the version into the <IS_HOME>/repository/components/lib folder
Start the IS server
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: 5.11.0 alpha4 Snapshot
- OS: Windows & Mac
- Database: DB2
- Userstore: JDBC
```toml
[user_store]
type = "database_unique_id"
[database.identity_db]
url = "jdbc:db2://localhost:50000/alpha33"
username = "db2inst1"
password = "<PASSWORD>"
driver = "com.ibm.db2.jcc.DB2Driver"
[database.identity_db.pool_options]
maxActive = "80"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"
validationInterval="30000"
defaultAutoCommit=false
[database.shared_db]
url = "jdbc:db2://localhost:50000/alpha33"
username = "db2inst1"
password = "<PASSWORD>"
driver = "com.ibm.db2.jcc.DB2Driver"
[database.shared_db.pool_options]
maxActive = "80"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"
validationInterval="30000"
defaultAutoCommit=false
```


Answers:
username_1: Will be fixed with https://github.com/wso2/carbon-kernel/pull/2818
Status: Issue closed
|
kriasoft/react-starter-kit | 457841054 | Title: How to disable SSR completely
Question:
username_0: Hi, I have developed one of my applications using this starter kit.
Now we want our application to not perform SSR
I know SSR has many benefits but we need it to render on the client side
Can anyone please help?
I tried many things but had no luck.
Answers:
username_1: @username_0 maybe you shouldn't use RSK for your approach? https://github.com/kriasoft/react-starter-kit/issues/984#issuecomment-261797529
But also you can check https://github.com/kriasoft/react-starter-kit/pull/833
username_0: I have removed this line. It seems to work.
let me know if this is fine
```
data.children = ReactDOM.renderToString(
<App context={context}>{route.component}</App>,
);
```
```
data.children = "" //working
```
username_1: @username_0 yup! You can just comment out **data.children = ...** and it won't do any server rendering
username_0: But it calls APIs on the server side
username_2: @username_0 thank you very much for creating this issue! Unfortunately, we have closed it due to inactivity. Feel free to re-open it or [join](https://discord.com/invite/2nKEnKq) our [Discord](https://discord.com/invite/2nKEnKq) channel for [discussion](https://github.com/kriasoft/react-starter-kit/discussions/1950).
NOTE: The `main` branch has been updated with React Starter Kit v2, using JAM-style architecture.
Status: Issue closed
|
kmcgill88/admob_flutter | 515262678 | Title: Crash when used with Google Maps
Question:
username_0: Great work on this package! This is the only package that offers a truly fluttery way to incorporate ads as a widget.
I have tried it out and it works fine on android and on the iOS simulator. However, when run on a physical iOS device, my app crashes. I did some testing and found out that the crash only occurs when the Google Map widget is present in the same Scaffold as the banner. I have tried placing the banner in both a stack and a column yielding the same results. This seems like a very strange behaviour and I am not able to get an debug info on the crash either.
Answers:
username_1: @username_0 Can you do flutter doctor -v and give me the output.
username_0: Also, I just switched to the master channel (1.10.15-pre.368) and the crash is not happening anymore. Previously, I was on the dev channel when I was experiencing this issue. So I am not sure whether it's a problem with this package or flutter itself.
And yes I tested it on both platforms on both simulators and physical devices. The crash was consistently occurring only on physical iOS devices on both iOS 12.4 and iOS 13.1.3.
username_0: To add on, even in 1.10.15-pre.351 the crash is still present. So maybe that narrows down and helps with identifying what changed in flutter with the last few commits.
username_1: @username_0 It could be a issue with flutter itself and the fix is currently only in the master channel.
Status: Issue closed
username_0: Since the stable channel is now 1.12.13, the issue seems to be fixed. |
dolphindb/Tutorials_CN | 723979338 | Title: Is it possible to insert concurrently?
Question:
username_0: When inserting into the db concurrently, this error appears:
RuntimeError: <Server Exception> in run: <ChunkInTransaction>filepath 'xxx/xxx/20051013' has been owned by transaction 4717
Is it possible to insert concurrently?
Answers:
username_1: Concurrent inserts are possible, but they need to go to different partitions, for example one insert into 0~5 and another into 5~10. A single partition cannot be written to by more than one process.
Status: Issue closed
username_0: Then is there a function that can tell whether a table is currently being written to? That would make it easier to avoid concurrent inserts.
username_1: There is no such function; however, you can check whether writes are happening by querying for record changes.
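For illustration, one way to poll for such changes (DolphinDB script; the database and table names are placeholders):
```
t = loadTable("dfs://mydb", "mytable")
select count(*) from t
```
|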
webhintio/hint | 533704563 | Title: Unclosed span element in scan-category-summary.ejs
Question:
username_0: There's a `<span>` where a `</span>` should be in https://github.com/webhintio/hint/blob/68a2280a45070eb52807c5a739e04292b0cd599f/packages/formatter-html/src/views/partials/scan-category-summary.ejs#L9
Answers:
username_1: Good catch. @sarvaje is this fixed in your PR #3422?
In in mobile so can't check at the moment. If not can you add it or @username_0 can you make a PR with the fix?
Thanks 🙏
username_2: It wasn't part of #3422, I added the fix. Thanks @username_0!
Status: Issue closed
|