repo_name | issue_id | text
---|---|---
miketheman/pytest-socket | 476431296 | Title: Is there a "disable_socket" mark?
Question:
username_0: Hi,
Thanks for your hard work and useful plugin.
What if there's a _minority_ of X tests out of Y tests that I want to _disable_ connections for? Is there a `@pytest.mark.disable_socket` so we can selectively disable sockets on specific tests? I couldn't find anything for that in the docs. It would be much more useful in my case than adding `@pytest.mark.enable_socket` to my Y-X tests (which are the majority of my tests).
Answers:
username_0: @atugushev Thank you!
Status: Issue closed
|
kalkih/mini-graph-card | 817799474 | Title: [Feature request] Group by hour number
Question:
username_0: Hour | Day 1 | Day 2 | Day 3 | | Min | Avg | Max
-- | -- | -- | -- | -- | -- | -- | --
0:00 | 65 | 72 | 73 | | 65 | 70 | 73
1:00 | 64 | 71 | 70 | | 64 | 68.33333 | 71
2:00 | 63 | 69 | 67 | | 63 | 66.33333 | 69
3:00 | 60 | 69 | 65 | | 60 | 64.66667 | 69
4:00 | 60 | 69 | 63 | | 60 | 64 | 69
5:00 | 60 | 66 | 61 | | 60 | 62.33333 | 66
6:00 | 59 | 65 | 61 | | 59 | 61.66667 | 65
7:00 | 56 | 63 | 58 | | 56 | 59 | 63
8:00 | 57 | 65 | 58 | | 57 | 60 | 65
9:00 | 57 | 67 | 61 | | 57 | 61.66667 | 67
10:00 | 59 | 67 | 64 | | 59 | 63.33333 | 67
11:00 | 62 | 70 | 66 | | 62 | 66 | 70
12:00 | 64 | 72 | 67 | | 64 | 67.66667 | 72
13:00 | 67 | 72 | 69 | | 67 | 69.33333 | 72
14:00 | 68 | 72 | 69 | | 68 | 69.66667 | 72
15:00 | 68 | 72 | 69 | | 68 | 69.66667 | 72
16:00 | 70 | 72 | 69 | | 69 | 70.33333 | 72
17:00 | 70 | 73 | 70 | | 70 | 71 | 73
18:00 | 71 | 75 | 70 | | 70 | 72 | 75
19:00 | 69 | 73 | 68 | | 68 | 70 | 73
20:00 | 68 | 73 | 67 | | 67 | 69.33333 | 73
21:00 | 67 | 73 | 67 | | 67 | 69 | 73
22:00 | 65 | 72 | 65 | | 65 | 67.33333 | 72
23:00 | 62 | 72 | 65 | | 62 | 66.33333 | 72

Answers:
username_0: Looks like the text got cut off. Feature request is to be able to aggregate by hour number - i.e., all the hour 0s together; all the hour 1s, hour 2s, etc.
The use case is to be able to see time-of-day trends over time.
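For illustration, the grouping logic would look something like this (a sketch in plain JavaScript, independent of the card's actual YAML configuration; the `history` shape is an assumption):
```javascript
// Group readings by hour-of-day, then compute min/avg/max per hour.
// history: array of { time: Date, value: number } (assumed shape).
function aggregateByHour(history) {
  const buckets = new Map();
  for (const { time, value } of history) {
    const hour = time.getHours();
    if (!buckets.has(hour)) buckets.set(hour, []);
    buckets.get(hour).push(value);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([hour, values]) => ({
      hour,
      min: Math.min(...values),
      avg: values.reduce((sum, v) => sum + v, 0) / values.length,
      max: Math.max(...values),
    }));
}
```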
username_0: Bump
username_1: How do you see this in practice? How would you configure it, for example?
username_0: It would be functionally identical to the aggregation that occurs on, say, a seven-day basis, but instead of being the average, min, and max per day, it would reflect the average, min, and max in the 0:00 hour for the past seven days. |
ionic-team/stencil | 529560447 | Title: Add api to check event detail that isn't the last event
Question:
username_0: Note, I do plan on opening a PR for this.
**Stencil version:**
```
@stencil/[email protected]
```
**I'm submitting a:**
[ ] bug report
[x] feature request
[ ] support request => Please do not submit support requests here, use one of these channels: https://stencil-worldwide.herokuapp.com/ or https://forum.ionicframework.com/
**Current behavior:**
Right now, the `toHaveReceivedEventDetail` jest matcher only checks the [`lastEvent`](https://github.com/ionic-team/stencil/blob/master/src/testing/matchers/events.ts#L64). Jest provides [`toHaveBeenNthCalledWith`](https://jestjs.io/docs/en/expect#tohavebeennthcalledwithnthcall-arg1-arg2-) for the same purpose and stencil could provide the same for the events.
Currently, to check an event at some index, you'd have to do:
```js
const someEvent = await page.spyOnEvent('someEvent');
expect(someEvent.events[2].detail).toEqual({....});
```
**Expected behavior:**
It'd be nice to have:
```js
const someEvent = await page.spyOnEvent('someEvent');
expect(someEvent).toHaveNthReceivedEventDetail(2, {...});
```
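For reference, a rough sketch of how such a matcher could be implemented on top of the event spy (hypothetical code, not the actual PR; the `events` array shape is taken from the snippet above):
```javascript
// Hypothetical custom matcher registered via expect.extend().
expect.extend({
  toHaveNthReceivedEventDetail(eventSpy, index, expectedDetail) {
    const event = eventSpy.events[index];
    const pass = event !== undefined && this.equals(event.detail, expectedDetail);
    return {
      pass,
      message: () =>
        `expected event at index ${index} to have detail ` +
        `${this.utils.printExpected(expectedDetail)}, received ` +
        `${this.utils.printReceived(event && event.detail)}`,
    };
  },
});
```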
Status: Issue closed |
MicrosoftDocs/msteams-docs | 467863951 | Title: URL markdown breaks on mobile with parentheses
Question:
username_0: Rendering the following card works as expected in the desktop app, but not the mobile app.
```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    {
      "type": "TextBlock",
      "text": "Check out [(555) 194-9393](tel://5551949393)"
    }
  ]
}
```
It seems that the markdown breaks if the URL text contains parentheses.
Tested on Android version 1416/1.0.0.2019061803
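A possible client-side workaround (hypothetical and untested here) would be to backslash-escape the parentheses in the link text before building the card:
```javascript
// Hypothetical workaround: escape parentheses in markdown link text.
const escapeMdParens = (text) => text.replace(/([()])/g, "\\$1");
escapeMdParens("(555) 194-9393"); // => "\(555\) 194-9393"
```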
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 40f35d71-c372-14ab-17d3-f4fc2a241a62
* Version Independent ID: f94dbfa0-9e4f-31cc-26ab-48dc3dda993a
* Content: [Text formatting in cards - Teams](https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/cards/cards-format#feedback)
* Content Source: [msteams-platform/concepts/Cards/cards-format.md](https://github.com/MicrosoftDocs/msteams-docs/blob/master/msteams-platform/concepts/Cards/cards-format.md)
* Product: **msteams**
* GitHub Login: @o365devx
* Microsoft Alias: **o365devx**
Answers:
username_1: @username_0 Thanks for reporting this. We have raised a bug for this.
Status: Issue closed
|
NativeScript/nativescript-unit-test-runner | 178489563 | Title: [socket.io.js] document.createElement is not a function
Question:
username_0: I was super pleased to discover that the karma test runner could be used in conjunction with the interactive debugger:
`tns test android --debug-brk`
However, I did want to raise one small issue..
```text
The application crashed because of an uncaught exception. You can look at "stackTrace" or "nativeException" for more detailed information about the exception.
com.tns.NativeScriptException:
Calling js method run failed
TypeError: document.createElement is not a function
File: "/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js, line: 201, column: 25
StackTrace:
Frame: function:'JSONPPolling.doPoll', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 1085, column: 25
Frame: function:'Polling.poll', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 1740, column: 8
Frame: function:'Polling.doOpen', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 1684, column: 8
Frame: function:'Transport.open', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 827, column: 10
Frame: function:'Socket.open', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 248, column: 13
Frame: function:'Socket', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 129, column: 8
Frame: function:'Socket', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 55, column: 41
Frame: function:'Manager.open.Manager.connect', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 4549, column: 17
Frame: function:'', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/socket.io.js', line: 4859, column: 12
Frame: function:'ZoneDelegate.invokeTask', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 323, column: 38
Frame: function:'Zone.runTask', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 223, column: 48
Frame: function:'ZoneTask.invoke', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 391, column: 34
Frame: function:'ZoneDelegate.invoke', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 290, column: 29
Frame: function:'Zone.runGuarded', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 197, column: 48
Frame: function:'', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/zone.js/dist/zone-node.js', line: 173, column: 30
Frame: function:'java.lang.Runnable.run', file:'/data/data/org.nativescript.ExampleProject/files/app/tns_modules/timer/timer.js', line: 17, column: 13
```
The file `nativescript-unit-test-runner/socket.io.js` is pretty much littered with references to the DOM, and attempts to perform DOM updates.
In particular:
* `JSONPPolling.prototype.doPoll`
* `JSONPPolling.prototype.doWrite`
* `useColors`
* `localstorage`
* various references to: `navigator.userAgent`
I haven't done any digging into this.
I'm not aware of the low-level details of precisely how socket.io is used by the debugger,
or why this error didn't occur while debugging "app/main.js" (a few days ago),
and only occurs while debugging "app/tests/*.js" unit tests.
Maybe this is nothing more than pilot error (on my part)..
I wouldn't rule it out, though I don't think that I've done anything wrong.
In any case, I just wanted to share my observations..
in case somebody who knows the code and how things are glued together..
might read this and mutter: "oh shoot, yep.. easy fix"
Answers:
username_0: wait.. there's a chance this is the result of something I did wrong (pilot error)..
I'm re-running some tests now, and will report back shortly..
username_0: ok.. it's a legit issue.. has nothing to do with a bit of `zone.js` error-catching trickery that I had in place.
* commented the trickery out entirely.. so `zone.js` wasn't even included under `tns_modules`
* added a single breakpoint in `socket.io.js` within the function `JSONPPolling.prototype.doPoll` at the line where `document.createElement` is first used
* ran the tests until the breakpoint was reached (it was)
* stepped over it.. and sure enough.. uncaught exception
username_0: incidentally.. and feel free to completely ignore this suggestion.. which has very little to do with the issue at hand (though it may actually help to catch it and conditionally ignore it).. but is just a feature that you may want to consider including (maybe that could be toggled on/off with a boolean flag in `karma.conf.js`)
and please keep in mind that this code is very rough.. and I'm still working on getting it right.. but something along the lines of:
```javascript
/* ------------------------------------------------------------------
* summary:
* --------
* all tests are passed to the testing framework
* using the function signature:
* it('Do this and then do that', function (done) {...}
*
* when timers (ex: setTimeout) are called by the code under test,
* Exceptions that may occur when the timer triggers will NOT
* be caught by the testing framework,
* and will result in the test suite ending prematurely,
* with a terse error message about an "uncaught Exception".
*
* the purpose of this file is to use "zone.js"
* to wrap the "it" function,
* in such a way that Exceptions raised by asynchronous timers
* will NOT result in an "uncaught Exception".
*
* the Zone will report the error to mocha,
* which will then fail the particular test,
* and then continue processing the remainder of the test suite.
* ------------------------------------------------------------------
* references:
* -----------
* https://github.com/angular/zone.js/
* https://github.com/angular/zone.js/issues/418
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/length
* ------------------------------------------------------------------
*/
require('zone.js/dist/zone-node');
var old_it = global.it;
var new_it = function(desc, test){
// it.skip()
if (test === undefined){
return old_it(desc);
}
var my_done;
var my_zone = global.Zone.current.fork({
onHandleError: function(){
var error = arguments[3];
my_done(error);
// zone-node.js, lines: 201, 292
// return value serves as a boolean flag for whether or not to throw the exception.
// - truthy: throw, which will be uncaught and crash the test runner.
return false;
}
});
return old_it(desc, function(done){
my_done = done;
my_zone.run(function(){
test(done);
if (test.length === 0){
done();
}
});
});
};
new_it.skip = old_it.skip;
new_it.only = old_it.only;
new_it.retries = old_it.retries;
global.it = new_it;
```
username_0: sorry for being so verbose, but i have a few additional observations regarding the issue..
* base directory for filepaths:
`ExampleProject/platforms/android/src/main/assets/app/tns_modules/nativescript-unit-test-runner`
* file: `config.js`<br>
contents: `module.exports = {"port":"9876","ips":["192.168.1.104","127.0.0.1"],"options":{"debugTransport":false,"debugBrk":true,"watch":false}}`
* file: `main-view-model.js`<br>
contents:
```javascript
...
function enableSocketIoDebugging() {
console.log('enabling socket.io debugging');
global.localStorage = {
debug: "*"
};
global.window = global;
}
var config = require('./config');
...
if (config.options.debugTransport) {
enableSocketIoDebugging();
}
...
var io = require('./socket.io');
var socket = this.socket = io.connect(this.baseUrl, {forceBase64: true});
...
if (config.options.debugBrk) {
debugger;
}
...
```
* file: `socket.io.js`<br>
contents:
```javascript
module.exports = exports = lookup;
function lookup(uri, opts) {
...
io = Manager(source, opts);
...
return io.socket(parsed.path);
}
Manager.prototype.socket = function(nsp){
...
socket = new Socket(this, nsp);
...
};
function Socket(uri, opts){
...
this.transports = opts.transports || ['polling', 'websocket'];
...
}
```
* this makes me think that the fix is as simple as passing an additional option in..<br>
file: `main-view-model.js`<br>
code: `var socket = this.socket = io.connect(this.baseUrl, {forceBase64: true, transports: ['websocket']});`
* testing now..
username_0: yeah, looks good now.
I'm stepping through an enormous test suite..
giving it plenty of time to poll or do whatever else might cause a problem..
I left the breakpoint in "socket.io.js" but it hasn't been hit at all..
it may be premature to call this patch a fix, but it certainly appears to be.
**short tldr; version**
* file: `nativescript-unit-test-runner/main-view-model.js`
* code:
* _current value_:<br>`var socket = this.socket = io.connect(this.baseUrl, {forceBase64: true});`
* _new/updated value_:<br>`var socket = this.socket = io.connect(this.baseUrl, {forceBase64: true, transports: ['websocket']});`
username_0: _update:_
I have another observation, which is loosely related and most-likely a fairly minor issue..
* I walked away from my debug session for a long time, leaving execution halted at a breakpoint
* I returned and attempted to continue the session
* the following log message appeared in the output console:
```text
NSUTR-socket.io: transport close
/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/main-view-model.js:90
NSUTR-socket.io: 1
/data/data/org.nativescript.ExampleProject/files/app/tns_modules/nativescript-unit-test-runner/main-view-model.js:90
```
* then execution stopped at the breakpoint that I had set in `socket.io.js` at the line in `JSONPPolling.prototype.doPoll` that touches the DOM and triggers the uncaught exception
so..
* the DOM is no-longer touched during an active session by periodic polling, which is the important thing
* however, when the web socket is closed by the server and the test runner attempts to re-open it.. things blow up, which is less important.. since this shouldn't happen during normal usage
username_0: ok, this is embarrassing..
* turns out that there's a side-effect to running the command: `tns test android --debug-brk`
* the file:<br>`ExampleProject/platforms/android/src/main/assets/app/tns_modules/nativescript-unit-test-runner/main-view-model.js`
* is over-written by the file:<br>`ExampleProject/node_modules/nativescript-unit-test-runner/main-view-model.js`
* so any changes made to the contents of the former are lost before the test runner loads
* and, consequently, all of my "observations" thus far are wrong..
here's what I'm seeing now:
* using the options: `forceBase64: true, transports: ['websocket']`
* "socket.io" cannot connect to "karma"
```text
NSUTR: connecting to karma at http://192.168.1.104:9876
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 1
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 2
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 3
NSUTR: socket.io error on connect: timeout
```
* using the options: `forceBase64: true, transports: ['websocket'], jsonp: false`
* "socket.io" __cannot__ connect to "karma"
```text
NSUTR: connecting to karma at http://192.168.1.104:9876
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 1
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 2
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 3
NSUTR: socket.io error on connect: timeout
```
* using the options: `forceBase64: true, jsonp: false`
* "socket.io" connects to "karma"
```text
NSUTR: successfully connected to karma
...
NSUTR: beginning test run
```
* interactive debugging works well..
* ok, now I have some real proof that __this__ is the option that prevents the uncaught exception:<br>
a few minutes into my session, the following log messages appeared in the console:
```text
NSUTR-socket.io: transport close
NSUTR-socket.io: 1
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 2
NSUTR: socket.io error on connect: timeout
NSUTR-socket.io: 3
```
* however, the breakpoint in `socket.io.js` at the line in `JSONPPolling.prototype.doPoll` that touches the DOM and triggers the uncaught exception was __not__ reached
* so.. the option `{jsonp: false}` is the one that prevents attempts at intra-session DOM updates (ie: adding `jsonp <script> tags`)
took a while, but we got there :)
username_0: _updated_:
**short tldr; version**
* file: `nativescript-unit-test-runner/main-view-model.js`
* code:
* _current value_:
```javascript
var socket = this.socket = io.connect(this.baseUrl, {
forceBase64: true
});
```
* _new/updated value_:
```javascript
var socket = this.socket = io.connect(this.baseUrl, {
forceBase64: true,
jsonp: false
});
```
username_0: a few closing comments regarding reconnect attempts..
* I have no idea why the connection closes
* I have no idea why attempts to reconnect fail
* in my `karma.conf.js`, I had changed the value
* from: `singleRun: false`
* to: `singleRun: true`
and thought that maybe that was the reason that the server closed its connection and became unavailable, but I changed the value back to test that assertion and the behavior was unchanged
* after the connection is closed, the debugger continues to function correctly.<br>as such..
* if reconnection is __not__ important, then I would recommend that we add the additional option:
```javascript
var socket = this.socket = io.connect(this.baseUrl, {
forceBase64: true,
jsonp: false,
reconnection: false
});
```
* if completely disabling reconnect attempts is too extreme, then maybe we could set a finite limit on the number of attempts?
```javascript
var socket = this.socket = io.connect(this.baseUrl, {
forceBase64: true,
jsonp: false,
reconnectionAttempts: 5
});
```
the default behavior is to make infinite attempts, at intervals determined by a backoff strategy. this results in log messages pertaining to the reconnect attempts being scattered throughout the console output.
username_1: This issue was moved to NativeScript/nativescript-cli#2770
Status: Issue closed
|
android/ndk | 766687643 | Title: [BUG] NDK ignore the APP_STL variable
Question:
username_0: #### Description
The prefab requires `c++_static`, so I added `APP_STL := c++_static` to `Application.mk`. However, the NDK ignored this variable when no cpp file existed in `LOCAL_SRC_FILES`, making the build fail: `error: undefined reference to 'operator delete(void*)'`
test case: https://github.com/username_0/XposedDetector/commit/334279e7176563956e48e676dba98875dc62abc2 https://github.com/username_0/XposedDetector/commit/5cca1de8ed83a1bb52a5bc64ffaa09e6686641c6
log: https://github.com/username_0/XposedDetector/runs/1551353201?check_suite_focus=true#step:5:60
#### Environment Details
* NDK Version: 21.3.6528147
* Build system: ndk-build
* Host OS: Mac and Ubuntu
* ABI: ALL
* NDK API level: 21
* Device API level: N/A
Answers:
username_1: Just to make sure I've understood the test case, the imported library uses C++ but the importing code does not?
username_0: The prefab uses C++; LOCAL_MODULE only has a `.c` file: `LOCAL_SRC_FILES := username_0.c`
username_1: Gotcha. That's a prefab bug then. Refiled as https://github.com/google/prefab/issues/125
Status: Issue closed
|
lengyijun/gene | 430273757 | Title: Development notes
Question:
username_0: - Go chaincode must be built with `go build` before it is deployed to the chain
- Go chaincode must pass `go test`
- After the chaincode is modified, the Docker network must be restarted, i.e.:
```
docker ps -aq | xargs docker rm -f
docker images -q "dev-*" | xargs docker rmi # remove old chaincode docker images
```
Answers:
username_0: Keys of a Go struct must start with an uppercase letter (lowercase fields are unexported, so encoding/json skips them when the struct is serialized), e.g.:
```
type FileDescriptor struct {
Id string
Name string
CreateTime string
UpdateTime string
Owner string
Description string
Level int `0(the lowest,all available),1,2,3(the highest,only few people can access it)`
Requested bool
}
```
and not:
```
type FileDescriptor struct {
id string
name string
createTime string
updateTime string
owner string
description string
level int `0(the lowest,all available),1,2,3(the highest,only few people can access it)`
requested bool
}
```
username_0: If you've forgotten Go syntax, you can have a look here:
https://github.com/skywind3000/awesome-cheatsheets/blob/master/languages/golang.go |
maieul/ledmac | 95430817 | Title: Multiple series of endnotes not working on different places
Question:
username_0: http://tex.stackexchange.com/q/255598/7712
Answers:
username_0: The problem is the following: when we close a file in order to read its contents, it means that when we open it again afterwards to write to it, we erase all of its contents. Conclusion: here the \doendnotes{A} causes the stored B notes to be lost. We could re-inject the A notes, but no, because our author may want different A notes for the second text. The only solution: separate files for the notes, one per series... (phew, that is going to eat up resources again...)
username_0: was asked by @username_1
username_0: @username_1 : could you try branch "issue348". It is based on reledmac, that means you should maybe changes some little things in your preamble.
username_1: @username_0 thank you for writing code for this! The MWE works fine! It is the first time I have heard of your reledmac efforts, which I understand and which are very welcome. Nevertheless, I obtain several errors in my real document, and I was wondering if you keep a list where I can find the old commands next to the new ones, or do I have to go through all the examples?
username_0: of course, there is a list ;-). It will be included in the handbook. You can download the list of modifications to make here:
http://geekographie.username_0.net/IMG/pdf/migration-example.pdf
I am waiting for a mwe from @ralessi, and I think reledmac could be done
Status: Issue closed
username_0: according to @ralessi in #353, and verified by me, the MWE did not work. Reopening.
username_0: http://tex.stackexchange.com/q/255598/7712
Status: Issue closed
username_0: false alarm, closing again. Testing with the right package works better ;-) |
invertase/react-native-firebase | 355137237 | Title: Error: Firestore: The service is currently unavailable. (firestore/unavailable).
Question:
username_0: <!---
BEFORE YOU MAKE AN ISSUE
The issue list of this repo is exclusively for bug reports.
1) For feature requests please visit our [Feature Request Board](https://boards.invertase.io/react-native-firebase).
2) For questions and support please use our Discord chat: https://discord.gg/C9aK28N or Stack Overflow: https://stackoverflow.com/questions/tagged/react-native-firebase
3) If this is a setup issue then please make sure you've correctly followed the setup guides, most setup issues such as 'duplicate dex files', 'default app has not been initialized' etc are all down to an incorrect setup as the guides haven't been correctly followed.
-->
### Issue
<!--- Please write your issue here, provide as much detail as you can, code snippets, key files which will help us to debug such as your `Podfile` and/or `app/build.gradle` file). -->
### Environment
1. Application Target Platform:
Android
2. Development Operating System:
macOs HighSierra
3. Build Tools:
4. `React Native` version:
0.56.0
5. `React Native Firebase` Version:
4.3.7
6. `Firebase` Module:
Firestore
7. Are you using `typescript`?
no
---
After a while I get the following error:
Error: Firestore: The service is currently unavailable. (firestore/unavailable).
at createErrorFromErrorData (NativeModules.js:146)
at NativeModules.js:95
at MessageQueue.__invokeCallback (MessageQueue.js:392)
at MessageQueue.js:128
at MessageQueue.__guard (MessageQueue.js:291)
at MessageQueue.invokeCallbackAndReturnFlushedQueue (MessageQueue.js:127)
at t (RNDebuggerWorker.js:1)
Then I have to uninstall and reinstall the app to get the connection again.
I tried the new function `firebase.firestore().enableNetwork()` but it doesn't work.
I have no ideas anymore...
Best Qung
Answers:
username_1: I encountered this error when I used a fresh new Firebase project. The Firestore database also needs to be activated in the Firebase console.
username_0: So I have to activate Firebase too?
username_0: Hi, I couldn't solve the issue so I changed to Parse as a backend. It works like a charm. Now I just use the Push Server for my App.
Maybe I will try it later again.
Best Qung
username_3: If you are located in China, FireStore / Google is blocked there.
It'll error out with `firestore/unavailable`
username_4: @Salakar We are having this issue in our production apps. Please check this

username_5: Any update on this issue?
username_6: Same here, it is a problem.
username_7: +1
username_8: Someone help! This happens suddenly at times (not every time)

username_9: [https://medium.com/@chonnaronghanyawongse/coding-diary-why-my-firestore-doesnt-work-2db41fb82121] Helped me, by disabling and enabling the token service.
username_9: Previously, this was an intermittent problem(happens 2-3x every night). But I'm getting this error since 3 days ago.
username_10: I've got the same problem in two different applications in production.
username_3: One possibility is that it's an error-message issue.
When I'm doing `.get({source: 'cache'})`, the firebase SDK will throw
```
Error Domain=FIRFirestoreErrorDomain Code=14 "Failed to get document from cache. (However, this document may exist on the server. Run again without setting source to FirestoreSourceCache to attempt to retrieve the document " UserInfo={NSLocalizedDescription=Failed to get document from cache. (However, this document may exist on the server. Run again without setting source to FirestoreSourceCache to attempt to retrieve the document }
```
But code 14 will be translated into `FIRFirestoreErrorCodeUnavailable`
with error message of
```
The service is currently unavailable. This is a most likely a transient condition and may be corrected by retrying with a backoff.
```
which is incorrect in this case.
username_11: Any solution here?
It's really blocking production release.
username_12: Did you guys figure this one out? Please update if anyone did. Thanks!
username_13: I'm also having the same problem :/
username_14: I'm facing the same issue on my App in production now and then.
```text
Non-fatal Exception: UnhandledPromiseRejection
[firestore/unavailable] The service is currently unavailable. This is a most likely a transient condition and may be corrected by retrying with a backoff.
<unknown>
```
Would be great to find a workaround soon.
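For what it's worth, the backoff that the message recommends is straightforward to sketch (generic JavaScript, not tied to react-native-firebase's actual API):
```javascript
// Generic retry with exponential backoff plus jitter:
// waits ~500ms, 1s, 2s, ... between attempts; rethrows on final failure.
async function retryWithBackoff(op, maxAttempts = 5, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err;
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```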
username_10: Have the same bug
```[firestore/unavailable] The service is currently unavailable. This is a most likely a transient condition and may be corrected by retrying with a backoff.```
username_15: In my case I had an array with 75 items in a document. I reduced the array to 50 items and the error was gone.
username_16: I get these errors randomly, if I restart the phone they tend to go away.
username_17: Also having this issue. Any updates?
username_18: Are there any updates on this issue?
username_19: Why would there be updates on a closed issue?
username_20: Maybe @username_19 because it should have never been closed in the first place :)
Like @username_3, I can reproduce this issue when I try to get the data from the cache.
On a new device or by deactivating the persistence:
```
await firestore().settings({
persistence: false,
});
```
username_19: @username_20 this issue is so old, I'd recommend opening a new issue with 100% up to date versions (that is firebase-ios-sdk 6.28.1 and firebase-android-sdk BoM 25.6.0 - it's documented on rnfirebase.io how to override) to make sure we're not chasing phantoms.
You can't imagine the number of "I have the exact same problems" we get that are actually the same problem about 1% of the time, so we work with the info we have available, doing our best, and close things when they seem like the *original* issue is closed. With the dates on these comments so old, definitely go for a new issue with up to date versions, necessary info provided, and a repro App.js if at all possible
username_21: I had the same problem. In my case it was the settings on google-services.json that were wrong. Maybe it will help someone.
username_22: It's relevant to **`firestore.disableNetwork()`**.
When you disable the network and then enable it again, it does not work; it stays disabled. That's why you get this error.
And also I could not find any solution.
username_19: @username_22 there have been several upstream issues related to this, out of our control. If you instrument the code and verify we are calling the native disable/enable network APIs then the react-native-firebase module is doing what it can. You might verify with firebase-android-sdk 25.9.0 and firebase-ios-sdk pod 6.31.0 to make sure that with up to upstream SDKs this is still happening, and check their issues lists (they are on github as well under those repo names)
username_23: I had the same issue. It was a problem because the time on the test device was set one month in the past. After correcting the time, all the problems were solved.
username_19: Interesting @username_23 that info may help some people. It's important to note that if you are a parent with children, it's currently a "thing to try" for children to get around various parental restrictions by setting the time on their borrowed parent's phones back in time, so having incorrect times/dates set on devices happens more than you would think.
username_24: Still have this problem. I tried changing the timezone and signing out at the beginning of App.js, but no luck :( @username_19 do you have any idea how to fix it for good? Thanks
username_25: Hello, we have the same problem. Once the device's network connection is lost and then comes back, Firestore keeps returning old data. It keeps using the cache without synchronizing until the app is reinstalled (the local storage is deleted). Please share some workarounds if you have some.
P.S. That's happening on iOS/Android simulator/emulator and devices as well.
username_19: The best thing to do is make sure you are using the most up to date firebase-android-sdk underneath, which is I think 25.13.0 (26.0.0 won't be possible to use until #4471 lands and forward ports to the new major version of the upstream library). Information on how to override the underlying SDK version (in case you do not have the most current) is on the rnfirebase.io main page I believe
username_26: It is still happening on the latest version.
username_19: @username_26 does not move the issue forward but does generate notifications for everyone. How did it go when you tried each of detailed items in the comments above?
username_26: @username_19 I'm sorry
This is my environment.
1. React Native 0.63.3
2. @react-native-firebase/app" 10.1.0
3. @react-native-firebase/firestore 10.1.1
4. Firebase region Europe London
5. Issue experienced in Pakistan
6. Happens when network is 2G or slower( everything else works )
username_19: So, you have outdated versions (likely not a cause, but still, why chase issues on old versions?) and on likely unreliable networks firestore is unavailable? That seems like expected behavior? There is all sorts of information you do not address about caching, network availability network state etc above.
username_27: I got the same and here is the way I fixed it.
I just ran `yarn add @react-native-firebase/app` and `yarn add @react-native-firebase/firestore`.
I also closed my simulator and wiped the data before turning it on again, and it worked for me. Hopefully it works for you guys.
username_28: This is a really strange issue. Yesterday everything was working just fine; now this error keeps showing:
[cloud_firestore/unavailable] The service is currently unavailable. This is a most likely a transient condition and may be corrected by retrying with a backoff.
It's really a blocker and there is no valid explanation or solution!
Why is everything related to Firebase always buggy and unstable?
username_19: Everything related to Firebase is pretty stable and works great for me.
Sorry you're having this experience, I can understand if you suffer frequent problems it is frustrating.
I imagine you are either attempting to use a VPN that has been flagged for abuse by google and blocked, or you don't have your project set up correctly to generate tokens (like, the device check API is not enabled or is failing for some reason), so firestore no longer believes it has sufficient privileges to authenticate to the cloud backend and fails you
username_29: In my case I was getting the error from the Android emulator running on a Mac.
I didn't notice that the **Android emulator didn't have an internet connection**.
I fixed it by adding a DNS Server as suggested in this [StackOverflow answer](https://stackoverflow.com/questions/44535500/internet-stopped-working-on-android-emulator-mac-os).
That solved the emulator-not-having-internet issue, which then solved the `firestore/unavailable` issue.
username_30: I am having this issue on a Mac, on the Android emulator only.
The iOS simulator is running fine and is able to read the exact same data.
It's not a DNS or VPN issue, as it occurs randomly. Mostly it runs fine, but suddenly one rainy day it gives the error.
username_31: Having this issue on some Android devices, and it seems that this happens quite often on some of the users' phones. I believe it is related to the refresh token, but don't know how to fix it.
username_19: @username_31 my first suspicion for firestore unavailable is always a VPN. If it's not that, then if your hunch about refresh tokens is right, check all the google cloud platform API access around auth tokens and device check APIs etc
username_31: Hi @username_19, I already checked the API settings; it all looks good and Firebase works most of the time. All of the users who reported this issue are in Australia and I believe they don't use any VPN, and their network works normally since they can use other apps without any problem. If the unavailable issue occurs, the Firebase services including Cloud Functions, Authentication (network request failed) and Firestore will fail in a row even though I already implemented some sort of retry; then the users need to restart the app to fix the problem. I don't think the app itself has logical bugs that lead to this issue, since even the very simple Firestore query on the login page fails randomly. It only happens on Android.
username_19: This may need closer work with a friendly user, a native reproduction tester app from firebase-android-sdk and some careful testing / display of Auth token / refresh token status along with network availability testing / display that user could run / send results with. No idea how we could reproduce without being in situ on the affected network
username_31: Yea actually my client is one of the affected users. She did run some tests for me on her phone; I told her to switch networks and make sure the network was working well, and she tried wifi/4g/another guy's hotspot, but the same issue kept happening. According to her report, when this bug occurred she restarted the app, but the app asked her to sign in again, so it seems that the refresh token expired somehow, which I believe is the same as the issue discussed in this post https://medium.com/@chonnaronghanyawongse/coding-diary-why-my-firestore-doesnt-work-2db41fb82121
I added some code to catch the unavailable exception and tried to get the refresh token from the cloud, but in some cases (roughly 50% of the time) it still failed 3 times in a row with roughly 3 sec retry delay for each.
username_19: This sounds really frustrating, it's clear you've put some effort in to this. Your retry attempts sound reasonable, but doesn't it seem like the most likely path is still a network error of some sort?
I'd make 100% sure you are on the most up to date versions of everything (there are changes in this area periodically...firebase-android-sdk 28.3.1 is most recent, firebase-ios-sdk is 8.5.0 most recent) and if your client is affected that's a good opportunity to go for a native reproduction. This will *have* to be pursued upstream with the firebase team, we have no ability at the react-native level to affect these things, it's all in the underlying firebase-android-sdk and they'll need a native reproduction. They do provide quickstarts to make it almost painless to generate a little side-loadable debug app though: https://github.com/firebase/quickstart-android/tree/master
username_31: Yea I think this is more likely a network error or something wrong with the native implementation because the same issue is happening on Flutter. I'll try, thanks for your help: )
username_32: Not resolved
```
[Error: [firestore/unavailable] The service is currently unavailable. This is a most likely a transient condition and may be corrected by retrying with a backoff.]
```
React Native And Firebase versions
```
react-native-cli: 2.0.1
react-native: 0.64.2
```
```json
"@react-native-firebase/app": "^13.0.1",
"@react-native-firebase/firestore": "^13.0.1",
```
React Native Info
```
System:
OS: Linux 5.11 Ubuntu 20.04.3 LTS (Focal Fossa)
CPU: (4) x64 Intel(R) Core(TM) i3-2377M CPU @ 1.50GHz
Memory: 285.96 MB / 11.49 GB
Shell: 5.0.17 - /bin/bash
Binaries:
Node: 16.13.0 - /usr/bin/node
Yarn: 1.22.11 - /usr/bin/yarn
npm: 8.1.0 - /usr/bin/npm
Watchman: 4.9.0 - /usr/bin/watchman
SDKs:
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Languages:
Java: Not Found
npmPackages:
@react-native-community/cli: Not Found
react: 17.0.2 => 17.0.2
react-native: 0.64.2 => 0.64.2
npmGlobalPackages:
*react-native*: Not Found
```
username_19: There are a variety of reasons this can happen. One typical reason is VPNs, for instance.
This is definitely not a problem with the module; it's the underlying cloud service's network expectations plus the native SDKs. There is nothing actionable in this repo, sorry
username_32: Please make sure you have enabled Firestore on the Firebase console.
username_33: Can anyone solve this error? My app runs very well and then it suddenly gives an error like this.
username_19: @username_33 all sorts of reasons it can happen. My (not-)favorite is apparently firebase won't service requests from many VPNs. Another is if an auth token failied, that chains into a firestore error, and auth tokens can fail for various reasons my (not-)favorite one there being if in google cloud platform console the correct APIs aren't enabled or are restricted or something. How you troubleshoot it depends on what your hunches are but it always boils down to slowly bisecting environment and code. Try a completely new throwaway project and hook to that if you're not sure your google cloud stuff is right. Check a different network path to end device if that's a thing. Keep brainstorming and testing.
username_33: Thankyou for the tips @username_19. I hope this will works.
username_33: I tried updating my firebase packages by running `yarn add @react-native-firebase/app` (same with firestore) and enabling the API in the google cloud platform console. It works! But I don't know whether it will happen again. Thanks guys
username_19: It will stop working if you try certain VPNs or disable certain APIs in google cloud platform console :-). Don't do that :laughing: ? Glad it's working!
username_34: Solved the issue in Flutter, simply toggle airplane mode and try again
username_35: Thanks, solved for me! ASHUHSUASUHSAS |
quarkusio/quarkus | 554085218 | Title: JWT responds with "401 Unauthorized"
Question:
username_0: **Describe the bug**
After upgrade to 1.2.0.CR1, all requests are responded with 401 Unauthorized.
**Expected behavior**
With the correct JWT token, a request should be processed.
**Actual behavior**
All requests are answered with 401 Unauthorized.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure Keycloak client to map "User Client Role" to "groups" (was necessary for 1.1)
2. Using Postman, obtain JWT token.
3. Send a request with a token attached.
**Configuration**
```yaml
mp:
  jwt:
    verify:
      publickey:
        location: classpath:META-INF/certificate.json
quarkus:
  ...
  smallrye-jwt:
    enabled: true
  ssl:
    native: true
```
or
```properties
quarkus.ssl.native=true
quarkus.smallrye-jwt.enabled=true
mp.jwt.verify.publickey.location=keycloak:8080/auth/realms/1/protocol/openid-connect/certs
mp.jwt.verify.issuer=keycloak:8080/auth/realms/1
```
**Screenshots**
(If applicable, add screenshots to help explain your problem.)
**Environment (please complete the following information):**
- Output of `uname -a` or `ver`: Microsoft Windows [Version 10.0.19041.21]
- Output of `java -version`: java version "1.8.0_231"
- GraalVM version (if different from Java): N/A
- Quarkus version or git rev: 1.2.0.CR1
**Additional context**
(Add any other context about the problem here.)
Answers:
username_1: @username_0 Thanks. Unfortunately the log content was lost as part of [this PR](https://github.com/smallrye/smallrye-jwt/commit/52fb7d4b8f4d85f3b2d7cea5a8e897cfb7f4bc22#diff-5f4020e440d6d2dd7b7ff27fbe83dd25).
Can you please try to see what is being logged [here](https://github.com/quarkusio/quarkus/blob/1.1.1.Final/extensions/smallrye-jwt/runtime/src/main/java/io/quarkus/smallrye/jwt/runtime/auth/MpJwtValidator.java#L72) ?
username_0: With the file-based key location `-Dmp.jwt.verify.publickey.location=classpath:META-INF/certificate.json` I get
```
2020-01-23 12:10:37,016 DEBUG [asc.api-service] () [vert.x-eventloop-thread-14] [JWTAuthContextInfoProvider.java:241] - init, mpJwtPublicKey=NONE, mpJwtIssuer=keycloak:8080/auth/realms/1, mpJwtLocation=classpath:META-INF/certificate.json
2020-01-23 12:10:37,017 DEBUG [asc.api-service] () [vert.x-eventloop-thread-14] [AbstractBearerTokenExtractor.java:55] - tokenHeaderName = Authorization
2020-01-23 12:10:37,030 DEBUG [asc.api-service] () [vert.x-eventloop-thread-14] [KeyLocationResolver.java:243] - Trying to create a key from the encoded PEM key...
2020-01-23 12:10:37,032 DEBUG [asc.api-service] () [vert.x-eventloop-thread-14] [KeyLocationResolver.java:247] - Failed to create a key from the encoded PEM key: java.lang.IllegalArgumentException: Illegal base64 character 7b
at java.util.Base64$Decoder.decode0(Base64.java:714)
at java.util.Base64$Decoder.decode(Base64.java:526)
at java.util.Base64$Decoder.decode(Base64.java:549)
at io.smallrye.jwt.KeyUtils.decodePublicKey(KeyUtils.java:169)
at io.smallrye.jwt.auth.principal.KeyLocationResolver.tryAsPEMPublicKey(KeyLocationResolver.java:245)
at io.smallrye.jwt.auth.principal.KeyLocationResolver.initializeKeyContent(KeyLocationResolver.java:172)
at io.smallrye.jwt.auth.principal.KeyLocationResolver.<init>(KeyLocationResolver.java:82)
at io.smallrye.jwt.auth.principal.DefaultJWTTokenParser.getKeyResolver(DefaultJWTTokenParser.java:227)
at io.smallrye.jwt.auth.principal.DefaultJWTTokenParser.parse(DefaultJWTTokenParser.java:72)
at io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator.authenticate(MpJwtValidator.java:55)
at io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator.authenticate(MpJwtValidator.java:28)
at io.quarkus.smallrye.jwt.runtime.auth.MpJwtValidator_ClientProxy.authenticate(MpJwtValidator_ClientProxy.zig:134)
at io.quarkus.security.runtime.QuarkusIdentityProviderManagerImpl.handleProvider(QuarkusIdentityProviderManagerImpl.java:121)
...
```
username_0: With HTTP `mp.jwt.verify.publickey=http://keycloak:8080/auth/realms/1/protocol/openid-connect/certs` I get
```
2020-01-23 12:49:51,738 DEBUG [asc.api-service] () [vert.x-eventloop-thread-21] [JWTAuthContextInfoProvider.java:241] - init, mpJwtPublicKey=http://keycloak:8080/auth/realms/1/protocol/openid-connect/certs, mpJwtIssuer=http://keycloak:8080/auth/realms/1, mpJwtLocation=keycloak:8080/auth/realms/1/protocol/openid-connect/certs
2020-01-23 12:49:51,749 DEBUG [asc.api-service] () [vert.x-eventloop-thread-21] [JWTAuthContextInfoProvider.java:307] - mpJwtPublicKey failed as JWK(S), Illegal base64 character 3a
2020-01-23 12:49:51,750 ERROR [asc.api-service] () [vert.x-eventloop-thread-21] [QuarkusErrorHandler.java:60] - HTTP Request to /atries-rx/1/asc/api-service/v1/shifts?importFrom=2019-08-11T00:00:00Z&importTo=2019-08-12T23:59:59Z failed, error id: 43f76a3e-0d07-47e6-ae8c-7756de829aba-1: javax.enterprise.inject.spi.DeploymentException: java.lang.IllegalArgumentException: Illegal base64 character 3a
at io.smallrye.jwt.config.JWTAuthContextInfoProvider.decodeMpJwtPublicKey(JWTAuthContextInfoProvider.java:313)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider.getOptionalContextInfo(JWTAuthContextInfoProvider.java:258)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider.getContextInfo(JWTAuthContextInfoProvider.java:390)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_Bean.create(JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_Bean.zig:154)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_Bean.create(JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_Bean.zig:206)
at io.quarkus.arc.impl.AbstractSharedContext.createInstanceHandle(AbstractSharedContext.java:80)
at io.quarkus.arc.impl.ComputingCache$CacheFunction.lambda$apply$0(ComputingCache.java:99)
at io.quarkus.arc.impl.LazyValue.get(LazyValue.java:26)
at io.quarkus.arc.impl.ComputingCache.getValue(ComputingCache.java:41)
at io.quarkus.arc.impl.AbstractSharedContext.get(AbstractSharedContext.java:25)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_ClientProxy.arc$delegate(JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_ClientProxy.zig:1057)
at io.smallrye.jwt.config.JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_ClientProxy.getTokenHeader(JWTAuthContextInfoProvider_ProducerMethod_getContextInfo_21d111677ae04ef1a7cf911100af3482e4c3b30a_ClientProxy.zig:474)
at io.smallrye.jwt.auth.AbstractBearerTokenExtractor.getBearerToken(AbstractBearerTokenExtractor.java:54)
at io.quarkus.smallrye.jwt.runtime.auth.JWTAuthMechanism.authenticate(JWTAuthMechanism.java:47)
```
username_1: Hi @username_0 thanks, what is `certificate.json`, is it a 'base64-encoded' JWK key? The only thing that has changed in `KeyLocationResolver` related to it is that an extra Base64-URL decoding try was added, which was not even there in 2.0.10.
https://github.com/smallrye/smallrye-jwt/blob/2.0.12/implementation/src/main/java/io/smallrye/jwt/auth/principal/KeyLocationResolver.java
Can you please post this certificate.json - it is a public key so should be OK and I can try locally; or if you could put a breakpoint in `KeyLocationResolver` then it would also help
username_1: @username_0 Also, FYI, `publickey` cannot point to a URL-based location; it should be a PEM or JWK or JWK set. Try `publickey.location`
username_1: @username_0 Another question, have you changed to a different Keycloak version recently?
username_0: In the first example, with a file, I used `publickey.location`:
`mp.jwt.verify.publickey.location=classpath:META-INF/certificate.json`
In the second example, with a URL, I used just `publickey`:
`mp.jwt.verify.publickey=http://keycloak:8080/auth/realms/1/protocol/openid-connect/certs`
The Keycloak version is baked into docker-compose; not the latest one, but the one required by the customer:
```
keycloak:
image: jboss/keycloak:6.0.1
```
In any case, the file worked with 1.1 and now it does not. The only thing that changed is the Quarkus version.
username_1: @username_0 sure, so somehow we need to identify the problem. The smallrye-jwt tests in Quarkus (extension specific) and MP JWT TCK tests are passing.
As I said, either paste a public certificate here if it is not a sensitive info and I will check it or otherwise please put a breakpoint in that resolver code and check locally what exactly is happening.
username_0: In the end, it was a misconfiguration on my side and confusing error messages on the MP JWT side. Even now, with everything working, I get
```
2020-01-23 15:33:29,991 DEBUG [asc.api-service] () [vert.x-eventloop-thread-0] [KeyLocationResolver.java:253] - Trying to create a key from the encoded PEM certificate...
2020-01-23 15:33:29,991 DEBUG [asc.api-service] () [vert.x-eventloop-thread-0] [KeyLocationResolver.java:257] - Failed to to create a key from the encoded PEM certificate: java.lang.IllegalArgumentException: Illegal base64 character 7b
```
I could not find it without a debugger. I understand that security frameworks are intentionally silent about problems; I just wish there were an easier way to find the issue.
What puzzles me most is that everything worked when I reverted to 1.1.
On the bright side, the problem with Windows and HTTP certificates is gone. Thank you for your support!
Status: Issue closed
username_1: Hi @username_0 thanks a million, yeah, it was a bit tricky :-). It just logs the options it tries there, but it needs to be made less confusing; I will open an issue in smallrye-jwt
username_1: @username_0 FYI, https://github.com/smallrye/smallrye-jwt/issues/173
username_0: I subscribed immediately :) |
tensorflow/tensorflow | 282829870 | Title: Tensorflow installation error
Question:
username_0: gopi@gp:~$ source activate tensorflow
(tensorflow) gopi@gp:~$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl
Collecting tensorflow-gpu==1.4.0 from https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl
Using cached https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl
Collecting enum34>=1.1.6 (from tensorflow-gpu==1.4.0)
Using cached enum34-1.1.6-py2-none-any.whl
Collecting six>=1.10.0 (from tensorflow-gpu==1.4.0)
Using cached six-1.11.0-py2.py3-none-any.whl
Collecting protobuf>=3.3.0 (from tensorflow-gpu==1.4.0)
Using cached protobuf-3.5.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
Collecting numpy>=1.12.1 (from tensorflow-gpu==1.4.0)
Using cached numpy-1.13.3-cp27-cp27mu-manylinux1_x86_64.whl
Collecting wheel (from tensorflow-gpu==1.4.0)
Using cached wheel-0.30.0-py2.py3-none-any.whl
Collecting backports.weakref>=1.0rc1 (from tensorflow-gpu==1.4.0)
Using cached backports.weakref-1.0.post1-py2.py3-none-any.whl
Collecting tensorflow-tensorboard<0.5.0,>=0.4.0rc1 (from tensorflow-gpu==1.4.0)
Using cached tensorflow_tensorboard-0.4.0rc3-py2-none-any.whl
Collecting mock>=2.0.0 (from tensorflow-gpu==1.4.0)
Using cached mock-2.0.0-py2.py3-none-any.whl
Collecting setuptools (from protobuf>=3.3.0->tensorflow-gpu==1.4.0)
Downloading setuptools-38.2.4-py2.py3-none-any.whl (489kB)
100% |████████████████████████████████| 491kB 30kB/s
Collecting futures>=3.1.1; python_version < "3.2" (from tensorflow-tensorboard<0.5.0,>=0.4.0rc1->tensorflow-gpu==1.4.0)
Downloading futures-3.2.0-py2-none-any.whl
Collecting werkzeug>=0.11.10 (from tensorflow-tensorboard<0.5.0,>=0.4.0rc1->tensorflow-gpu==1.4.0)
Downloading Werkzeug-0.13-py2.py3-none-any.whl (311kB)
100% |████████████████████████████████| 317kB 31kB/s
Collecting html5lib==0.9999999 (from tensorflow-tensorboard<0.5.0,>=0.4.0rc1->tensorflow-gpu==1.4.0)
Collecting markdown>=2.6.8 (from tensorflow-tensorboard<0.5.0,>=0.4.0rc1->tensorflow-gpu==1.4.0)
Downloading Markdown-2.6.10.zip (414kB)
100% |████████████████████████████████| 419kB 43kB/s
Collecting bleach==1.5.0 (from tensorflow-tensorboard<0.5.0,>=0.4.0rc1->tensorflow-gpu==1.4.0)
Using cached bleach-1.5.0-py2.py3-none-any.whl
Collecting funcsigs>=1; python_version < "3.3" (from mock>=2.0.0->tensorflow-gpu==1.4.0)
Using cached funcsigs-1.0.2-py2.py3-none-any.whl
Collecting pbr>=0.11 (from mock>=2.0.0->tensorflow-gpu==1.4.0)
Using cached pbr-3.1.1-py2.py3-none-any.whl
Building wheels for collected packages: markdown
Running setup.py bdist_wheel for markdown ... done
Stored in directory: /home/gopi/.cache/pip/wheels/1e/5a/55/a80b200d12e234d575ad68c1528593d1ce488720b65b24e48c
Successfully built markdown
Installing collected packages: enum34, six, setuptools, protobuf, numpy, wheel, backports.weakref, futures, werkzeug, html5lib, markdown, bleach, tensorflow-tensorboard, funcsigs, pbr, mock, tensorflow-gpu
Exception:
Traceback (most recent call last):
File "/home/gopi/.local/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/home/gopi/.local/lib/python2.7/site-packages/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/home/gopi/.local/lib/python2.7/site-packages/pip/req/req_set.py", line 784, in install
**kwargs
File "/home/gopi/.local/lib/python2.7/site-packages/pip/req/req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/home/gopi/.local/lib/python2.7/site-packages/pip/req/req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "/home/gopi/.local/lib/python2.7/site-packages/pip/wheel.py", line 345, in move_wheel_files
clobber(source, lib_dir, True)
File "/home/gopi/.local/lib/python2.7/site-packages/pip/wheel.py", line 329, in clobber
os.utime(destfile, (st.st_atime, st.st_mtime))
OSError: [Errno 1] Operation not permitted: '/home/gopi/anaconda2/lib/python2.7/site-packages/enum/README'
(tensorflow) gopi@gp:~$
Answers:
username_1: Seems like an authorization issue rather than TensorFlow's. Your user is probably not authorized to access a directory necessary for the installation. Using `sudo` will work, but be sure you know what you're doing.
username_0: Thanks for the reply.
Still error. Please check
(tensorflow) gopi@gp:~$ sudo pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl
The directory '/home/gopi/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/gopi/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
(tensorflow) gopi@gp:~$ sudo -H pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl
tensorflow_gpu-1.4.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
--------------------------
FYI:
gopi@gp:~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1070"
CUDA Driver Version / Runtime Version 9.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8113 MBytes (8506769408 bytes)
(15) Multiprocessors, (128) CUDA Cores/MP: 1920 CUDA Cores
GPU Max Clock rate: 1835 MHz (1.84 GHz)
Memory Clock rate: 4004 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS
--------------------------
gopi@gp:~$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 5
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 10
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
username_1: Now it's a different error. Could you perhaps have created an environment for Python 3 rather than 2?
This error occurs when the binary doesn't correspond with the local environment, e.g. OS, Python version, etc.
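One quick way to check (a minimal diagnostic, not TensorFlow-specific) is to inspect the interpreter inside the `tensorflow` environment:
```python
# Print what pip would be installing for: Python version and word size.
import platform
import sys

print(sys.version)                  # 2.7.x vs 3.x
print(platform.architecture()[0])   # '32bit' or '64bit'
print(sys.maxsize > 2**32)          # True on a 64-bit interpreter
```
A `cp27` wheel only installs on a matching CPython 2.7 build, so any mismatch here would explain the "not a supported wheel on this platform" message.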
username_1: @tensorflowbutler hey
username_1: Yes, Linux packages are right there for TensorFlow v1.4.1. You can install as I mentioned before.
Status: Issue closed
|
nesbox/TIC-80 | 278884256 | Title: sync() raises "RangeError: invalid stack index 0" in JS
Question:
username_0: I've not been able to solve this problem.
Version of TIC80: `TIC-80 tiny computer 0.50.1 Pro`
OS: `macOS 10.13.1 (17B1003)`
(I've not tested this on other OSes)
Answers:
username_1: I made a fix, pls check
[tic80_pro_437_fix.dmg.zip](https://github.com/username_1/TIC-80/files/1526114/tic80_pro_437_fix.dmg.zip)
Thanks
Status: Issue closed
|
smartdevicelink/sdl_ios | 93379239 | Title: iOS 9 breaks app launching
Question:
username_0: In an effort to stop apps from scanning what other apps are installed on the phone (as a result of apps using this for advertising and marketing), Apple has made major changes to `canOpenURL` and `openURL` for iOS URL schemes.
**Note:** It appears that the `openURL` limitations are a bug. We will be able to do app launching, but its abilities will be severely limited. We will be able to know what apps are SDL connected and launch them, but not which among iOS SDL connected apps are on the user's phone.
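For context, iOS 9 requires apps to declare any schemes they will query with `canOpenURL:` under `LSApplicationQueriesSchemes` in Info.plist; a sketch (the scheme below is only a placeholder):
```xml
<key>LSApplicationQueriesSchemes</key>
<array>
    <!-- placeholder: one entry per SDL app scheme to query -->
    <string>example-sdl-app</string>
</array>
```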
**References:**
http://www.macstories.net/linked/ios-9-bringing-changes-to-url-schemes/
http://awkwardhare.com/post/121196006730/quick-take-on-ios-9-url-scheme-changes<issue_closed>
Status: Issue closed |
ropensci/worrms | 443520000 | Title: WORMS API changes
Question:
username_0: * 2019-05-10
* AphiaRecord object now exposes its taxonRankID
* Added functions AphiaTaxonRanksByID & AphiaTaxonRanksByName to get information about the available ranks in the system
* Added function AphiaRecordsByTaxonRankID to get all entries of a specific rank. Can be limited to a certain AphiaID and its descendants (e.g.: get all species in a specified family). See the example after this list.
* matchAphiaRecordsByNames (soap) & AphiaRecordsByMatchNames (rest): Now both return the current (final) accepted name (similar to the update from 2018-10-01)
* 2018-02-01 Added offset parameter to support fetching more than 50 results getAphiaSynonymsByID
* 2018-11-27 lowercase the parameter 'aphiaids' for the function AphiaRecordsByAphiaIDs
* 2018-11-06 added function AphiaRecordsByAphiaIDs to get an AphiaRecord for multiple AphiaIDs in one go<issue_closed>
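For reference, the rank endpoint can be exercised directly against the WoRMS REST service (a sketch; the taxonRankID and belongsTo AphiaID below are placeholders):
```
curl "https://www.marinespecies.org/rest/AphiaRecordsByTaxonRankID/220?belongsTo=1080&offset=1"
```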
Status: Issue closed |
alco/benchfella | 41022890 | Title: --mem-stats breaks
Question:
username_0: I don't know if memory stats are actually fully implemented yet, but:
$ mix bench --mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}
$ mix bench --sys-mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}
Answers:
username_1: Seems mem_stats has been removed, so I guess this can be closed? :)
Status: Issue closed
username_2: @username_1 indeed. |
apache/mynewt-nimble | 443532406 | Title: Extended Advertising with 2M secondary PHY
Question:
username_0: I am trying to receive Extended Advertisements from a Nordic nRF52 using the Nordic SDK (ble_app_rscs) with another Nordic nRF52 device with Nimble (using btshell and blehci). It works when the Nordic SDK advertiser sends with 1M on both the primary and the secondary PHY, but when it has BLE_GAP_PHY_2MBPS on the secondary, the advertisements are not seen on the Nimble side...
Is 2M not implemented yet for extended advertisements? Do I need to activate a configuration option? Or am I missing something?
Answers:
username_1: 2M and Coded are disabled by default in the NimBLE controller; you can enable them by setting `BLE_LL_CFG_FEAT_LE_2M_PHY: 1` or `BLE_LL_CFG_FEAT_LE_CODED_PHY: 1` respectively
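For example, in the target's syscfg (a sketch; the exact file and target layout depend on your project):
```yaml
# targets/<your-target>/syscfg.yml (illustrative)
syscfg.vals:
    BLE_LL_CFG_FEAT_LE_2M_PHY: 1
    BLE_LL_CFG_FEAT_LE_CODED_PHY: 1
```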
username_0: yes, that works! great! :-)
Status: Issue closed
|
thunkable/thunkable-issues | 358372373 | Title: Issue with Stack Navigator Screen
Question:
username_0: Hi Thunkable Team,
I have created an example application here of the problem I am experiencing https://x.thunkable.com/copy/493095682b9dc2faee9eea3f24f77aa4
Basically I am using a stack navigator layout in modal mode with no headers in my app. What I expected to happen (and what was happening previously) was that, when triggered, the child screen of the stack navigator would appear from bottom to top.
Now when I do it the screen does not appear at all. I'm not sure if this is a recent layout issue, but it has affected multiple apps of mine.
Cheers
Jacob
Status: Issue closed
Answers:
username_1: AFAICT, this is working as intended. You cannot navigate forward between screens in different navigators. One way to fix your issue would be to move `Screen1` into your Screen Navigator. Alternatively, you could have your button in Screen1 navigate to `Stack Navigator1` rather than `Screen2`
I'm not sure why this might have worked previously. If so, it was not supposed to ;-) |
ioBroker/ioBroker.opcua | 875701243 | Title: Think about fixing the issues found by the adapter checker
Question:
username_0: I am an automatic service that looks for possible errors in ioBroker and creates an issue for it. The link below leads directly to the test:
https://adapter-check.iobroker.in/?q=https://raw.githubusercontent.com/ioBroker/ioBroker.opcua
- [ ] [E016] No SPDX license found in package.json. Please use one of listed here: https://spdx.org/licenses/
- [ ] [E114] No adapter are allowed in the repo without admin3 support
- [ ] [E116] No SPDX license found. Please use one of listed here: https://spdx.org/licenses/
- [ ] [E150] No common.connectionType found in io-package.json (see the sketch after this list)
- [ ] [E152] No common.dataSource found in io-package.json (see the sketch after this list)
- [ ] [E124] Main file not found under URL: https://raw.githubusercontent.com/ioBroker/ioBroker.opcua/master/main.js
- [ ] [E504] "main.js found in package.json, but not found as file
- [ ] [E300] Not found on travis. Please setup travis
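Regarding E150/E152: these fields live in the `common` section of io-package.json. A minimal sketch (values are placeholders; pick the ones matching the adapter, e.g. `local`/`cloud` and `poll`/`push`/`assumption`):
```json
{
  "common": {
    "connectionType": "local",
    "dataSource": "poll"
  }
}
```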
I have also found warnings that may be fixed if possible.
- [ ] [W400] Cannot find "opcua" in latest repository
- [ ] [W801] .npmignore not found
Thanks,
your automatic adapter checker. |
appium/appium | 15901069 | Title: Does appium support integrating with an existing running native application?
Question:
username_0: Hi,
I want to break my test into several steps. Can I start the test by invoking the application and then, after a while, continue by attaching to that already-running application?
How does it work?
While the server is up and running, does it save the application state?
tensorflow/tensorflow | 372248419 | Title: Failed to reimplement MobileNet performance on Pixel Phone
Question:
username_0: **System information**
Host: Mac OS High Sierra / Ubuntu 16.04
Phone: Google Pixel, Android 8.1
Measure tool: build from [TF-Benchmark](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark)
Model: [mobilenet_v2_1.0_224 from TF-Slim](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)
**Describe the current behavior**
Net | Paper Report | Measured
--- | --- | ---
MobileNet v2 1.0 224 | 73.8ms | 82.9ms
**Code to reproduce the issue**
```
bin=benchmark_tflite_model
model=model.tflite
adb push $bin /data/local/tmp
adb shell chmod +x /data/local/tmp/$bin
adb shell rm /data/local/tmp/$model
adb push $model /data/local/tmp
adb shell /data/local/tmp/$bin \
--graph=/data/local/tmp/$model \
--input_layer="input" \
--input_layer_shape="1,224,224,3" \
--input_layer_type="float" \
--num_runs=200 --warmup_runs=50
```
**Other info / logs**
```
➜ mobilenet_v2_1.0_224 adb shell
sailfish:/ $ getprop | grep -e 'model' -e 'version.sdk' -e 'manufacturer' -e 'hardware' -e 'platform' -e 'revision' -e 'serialno' -e 'product.name' -e 'brand'
[media.recorder.show_manufacturer_and_model]: [true]
[ro.board.platform]: [msm8996]
[ro.boot.hardware]: [sailfish]
[ro.boot.hardware.color]: [SLV00]
[ro.boot.hardware.ddr]: [4096MB,Samsung,LPDDR4]
[ro.boot.hardware.revision]: [PVT]
[ro.boot.hardware.ufs]: [32GB,Samsung]
[ro.boot.serialno]: [FA68X0302645]
[ro.build.version.sdk]: [28]
[ro.frp.pst]: [/dev/block/platform/soc/624000.ufshc/by-name/frp]
[ro.hardware]: [sailfish]
[ro.hardware.power]: [marlin]
[ro.product.brand]: [google]
[ro.product.manufacturer]: [Google]
[ro.product.model]: [Pixel]
[ro.product.name]: [sailfish]
[ro.product.vendor.brand]: [google]
[ro.product.vendor.manufacturer]: [Google]
[ro.product.vendor.model]: [Pixel]
[ro.revision]: [0]
[ro.serialno]: [FA68X0302645]
[Truncated]
CONV_2D 73.134 4.064 3.489 4.207% 28.071% 0.000 1 [MobilenetV2/expanded_conv_16/project/BatchNorm/FusedBatchNorm]
CONV_2D 19.677 3.217 3.177 3.831% 31.901% 0.000 1 [MobilenetV2/expanded_conv_2/expand/Relu6]
CONV_2D 27.468 3.104 3.113 3.754% 35.655% 0.000 1 [MobilenetV2/expanded_conv_3/expand/Relu6]
CONV_2D 25.126 2.359 2.280 2.749% 38.405% 0.000 1 [MobilenetV2/expanded_conv_2/project/BatchNorm/FusedBatchNorm]
DEPTHWISE_CONV_2D 22.854 2.329 2.271 2.739% 41.144% 0.000 1 [MobilenetV2/expanded_conv_2/depthwise/Relu6]
CONV_2D 62.441 2.469 2.175 2.623% 43.766% 0.000 1 [MobilenetV2/expanded_conv_14/expand/Relu6]
============================== Summary by node type ==============================
[Node type] [count] [avg ms] [avg %] [cdf %] [mem KB] [times called]
CONV_2D 36 68.106 82.152% 82.152% 0.000 36
DEPTHWISE_CONV_2D 17 14.538 17.536% 99.689% 0.000 17
ADD 10 0.179 0.216% 99.905% 0.000 10
AVERAGE_POOL_2D 1 0.054 0.065% 99.970% 0.000 1
SOFTMAX 1 0.024 0.029% 99.999% 0.000 1
RESHAPE 1 0.001 0.001% 100.000% 0.000 1
Timings (microseconds): count=250 first=96023 curr=83115 min=79219 max=96023 avg=82932.5 std=1625
Memory (bytes): count=250 curr=0(all same)
66 nodes observed
```
Answers:
username_0: Hi @harshini-gadige , any thoughts?
username_1: Nagging Assignees @tofulawrence, @harshini-gadige: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
username_1: Nagging Assignee @username_2: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
username_2: Can you try again with a single big core, as said in the instructions?
adb shell taskset f0 /data/local/tmp/$bin ...
username_0: Hi @username_2 , Pixel 1 does not have big/small cores. This command only works for Pixel 2 and later phones.
username_2: Looks like @shashishekhar has resolved this in a private email thread. Pasting the response here for public reference:
I see that the attached benchmark_model binary is a 32-bit build. You should build it with --config=android_arm64; this should help in resolving the discrepancy. If you have any difficulties when building with 64 bit, try this [workaround](https://github.com/tensorflow/tensorflow/issues/20192#issuecomment-404971539) for upgrading the version of the NDK.
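A sketch of that 64-bit build (the exact Bazel target name can vary between TF versions; this uses the contrib/lite path referenced above):
```
bazel build -c opt --config=android_arm64 \
    //tensorflow/contrib/lite/tools/benchmark:benchmark_model
```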
username_0: Thank you!
username_0: Thanks for your help!
Status: Issue closed
|
geopandas/geopandas | 311310804 | Title: read_file on an empty layer in a gdb results in a KeyError
Question:
username_0: I'm not sure if this is expected behavior, but I was trying to read a specific layer in a geodatabase which happened to be empty (has no rows, but has columns) using `geopandas.read_file()` and it resulted in a `KeyError`. The same gdb and the layer within the gdb can be obtained from here: https://coast.noaa.gov/htdata/Inundation/SLR/SLRdata/Pacific/HI_Lanai_slr_data_dist.zip
Full stack trace below:
```
gdf = geopandas.read_file('../noaa/sea_level_rise/hawaii/HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb', layer='HI_Lanai_low_2ft')
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-7-513ea4fddcc2> in <module>()
1 gdf = geopandas.read_file('../noaa/sea_level_rise/hawaii/HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb',
----> 2 layer='HI_Lanai_low_2ft')
/usr/local/lib/python3.6/site-packages/geopandas/io/file.py in read_file(filename, **kwargs)
28 # re-order with column order from metadata, with geometry last
29 columns = list(f.meta["schema"]["properties"]) + ["geometry"]
---> 30 gdf = gdf[columns]
31
32 return gdf
/usr/local/lib/python3.6/site-packages/geopandas/geodataframe.py in __getitem__(self, key)
396 GeoDataFrame.
397 """
--> 398 result = super(GeoDataFrame, self).__getitem__(key)
399 geo_col = self._geometry_column_name
400 if isinstance(key, string_types) and key == geo_col:
/usr/local/lib/python3.6/site-packages/pandas/core/frame.py in __getitem__(self, key)
2051 if isinstance(key, (Series, np.ndarray, Index, list)):
2052 # either boolean or fancy integer index
-> 2053 return self._getitem_array(key)
2054 elif isinstance(key, DataFrame):
2055 return self._getitem_frame(key)
/usr/local/lib/python3.6/site-packages/pandas/core/frame.py in _getitem_array(self, key)
2095 return self.take(indexer, axis=0, convert=False)
2096 else:
-> 2097 indexer = self.ix._convert_to_indexer(key, axis=1)
2098 return self.take(indexer, axis=1, convert=True)
2099
/usr/local/lib/python3.6/site-packages/pandas/core/indexing.py in _convert_to_indexer(self, obj, axis, is_setter)
1228 mask = check == -1
1229 if mask.any():
-> 1230 raise KeyError('%s not in index' % objarr[mask])
1231
1232 return _values_from_object(indexer)
KeyError: "['Id' 'grid_code' 'Shape_Length' 'Shape_Area' 'geometry'] not in index"
```
Are there any workarounds? Or do I have to try/except the KeyError, which seems a little broad?
Answers:
username_1: This is hacky, and does not solve the problem, but might at least improve your `KeyError` `try .. catch` logic:
```
import fiona
import geopandas as gpd

try:
    gdf = gpd.read_file(filepath)
except KeyError as exception:
    with fiona.open(filepath, driver="OpenFileGDB") as src:
        # _ensure_that_there_are_zero_rows is your own helper
        if _ensure_that_there_are_zero_rows(src):
            pass
        else:
            raise exception
```
username_2: You can use `listlayers` from `fiona` to check:
```python
import fiona
import geopandas as gpd
gdf = gpd.read_file('HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb')
for layer in fiona.listlayers('HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb'):
    gdf = gpd.read_file('HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb', layer=layer)
    print(layer, len(gdf))
```
This will give you
```
HI_Lanai_slr_0ft 11
HI_Lanai_slr_1ft 4
HI_Lanai_slr_2ft 8
HI_Lanai_slr_3ft 17
HI_Lanai_slr_4ft 42
HI_Lanai_slr_5ft 31
HI_Lanai_slr_6ft 31
HI_Lanai_low_0ft 0
HI_Lanai_low_1ft 0
HI_Lanai_low_3ft 5
HI_Lanai_low_4ft 1
HI_Lanai_low_5ft 5
HI_Lanai_low_6ft 4
HI_Lanai_low_2ft 0
```
I guess your layer is empty.
username_0: Interesting @username_2, thanks for your note. I ran into the same error, which makes me think this is a versioning issue.
```
HI_Lanai_slr_0ft 11
HI_Lanai_slr_1ft 4
HI_Lanai_slr_2ft 8
HI_Lanai_slr_3ft 17
HI_Lanai_slr_4ft 42
HI_Lanai_slr_5ft 31
HI_Lanai_slr_6ft 31
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-18-a190d840970d> in <module>()
1 for layer in fiona.listlayers('../data-cache/noaa/sea_level_rise/hawaii/HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb'):
----> 2 gdf = geopandas.read_file('../data-cache/noaa/sea_level_rise/hawaii/HI_Lanai_slr_data_dist/HI_Lanai_slr_final_dist.gdb', layer=layer)
3 print(layer, len(gdf))
4
/usr/local/lib/python3.6/site-packages/geopandas/io/file.py in read_file(filename, **kwargs)
28 # re-order with column order from metadata, with geometry last
29 columns = list(f.meta["schema"]["properties"]) + ["geometry"]
---> 30 gdf = gdf[columns]
31
32 return gdf
/usr/local/lib/python3.6/site-packages/geopandas/geodataframe.py in __getitem__(self, key)
396 GeoDataFrame.
397 """
--> 398 result = super(GeoDataFrame, self).__getitem__(key)
399 geo_col = self._geometry_column_name
400 if isinstance(key, string_types) and key == geo_col:
/usr/local/lib/python3.6/site-packages/pandas/core/frame.py in __getitem__(self, key)
2051 if isinstance(key, (Series, np.ndarray, Index, list)):
2052 # either boolean or fancy integer index
-> 2053 return self._getitem_array(key)
2054 elif isinstance(key, DataFrame):
2055 return self._getitem_frame(key)
/usr/local/lib/python3.6/site-packages/pandas/core/frame.py in _getitem_array(self, key)
2095 return self.take(indexer, axis=0, convert=False)
2096 else:
-> 2097 indexer = self.ix._convert_to_indexer(key, axis=1)
2098 return self.take(indexer, axis=1, convert=True)
2099
/usr/local/lib/python3.6/site-packages/pandas/core/indexing.py in _convert_to_indexer(self, obj, axis, is_setter)
1228 mask = check == -1
1229 if mask.any():
-> 1230 raise KeyError('%s not in index' % objarr[mask])
1231
1232 return _values_from_object(indexer)
KeyError: "['ID' 'GRIDCODE' 'Shape_Length' 'Shape_Area' 'geometry'] not in index"
```
I'm on geopandas `0.3.0`, pandas `0.19.2` and fiona `1.7.6`. Can you share yours?
username_3: We recently fixed a bug for reading empty files (https://github.com/geopandas/geopandas/issues/649, PR https://github.com/geopandas/geopandas/pull/653), I suppose this will be the same.
username_0: Ah yes, that looks like the same issue. Closing. Thanks!
Status: Issue closed
|
kubernetes/org | 849407686 | Title: REQUEST: New membership for <your-GH-handle>
Question:
username_0: ### GitHub Username
@username_0
### Organization you are requesting membership in
kubernetes
kubernetes-sigs
### Requirements
- [x] I have reviewed the community membership guidelines (https://git.k8s.io/community/community-membership.md)
- [x] I have enabled 2FA on my GitHub account (https://github.com/settings/security)
- [x] I have subscribed to the kubernetes-dev e-mail list (https://groups.google.com/forum/#!forum/kubernetes-dev)
- [x] I am actively contributing to 1 or more Kubernetes subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @username_2
- @username_1
### List of contributions to the Kubernetes project
- PRs reviewed / authored
https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+author%3Ausername_0+
- Issues responded to
https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+username_0+
- SIG projects I am involved with
sig-api-machinery
wg-api-expression
kubebuilder/controller-runtime/controller-tools
Answers:
username_1: I confirm my sponsorship.
username_2: Same
username_3: Hi, @username_0 👋 Thank you for all your contributions!
I've created PR #2626 to add you to the @kubernetes & @kubernetes-sigs organization. Once that gets merged, you should get a membership invite notification.
Welcome to @kubernetes! 🎉
/assign |
ppy/osu-framework | 326783789 | Title: PassThroughInputManager blocking event from PlatformActionContainer
Question:
username_0: `PlatformActionContainer` events are blocked by `PassThroughInputManager` and will not reach children that may be expecting them. This is because `PassThroughInputManager` is unaware of the incoming actions and therefore cannot pass them on.
Not sure what the best way to fix this is.
Answers:
username_1: Are there any concrete steps to reproduce the problem?
username_0: - Place a textbox inside a `PassThroughInputManager`
- Try to backspace
This can be seen on the `Lounge` VisualTest (osu)
Status: Issue closed
|
OAID/Tengine | 1011994109 | Title: Problem with Tengine's convert tool on the VIM3
Question:
username_0: With the openpose_coco model from the Tengine model zoo, converting and quantizing on the VIM3 fails whenever the specified input image size is not square. For example:
./quant_tool_uint8 -m openpose-caffe.tmfile -i images/ -o pose_uint83.tmfile -g 3,**640,640** -w 0,0,0 -s 0.0039,0.0039,0.0039 -z 1 works fine.
But
./quant_tool_uint8 -m openpose-caffe.tmfile -i images/ -o pose_uint83.tmfile -g 3,**540,960** -w 0,0,0 -s 0.0039,0.0039,0.0039 -z 1 fails with an error:
khadas@Kh<EMAIL>:~/etahAiTest/posedetect$ sudo ./tmpose -m pose_uint8.tmfile -i pose1.jpg
tengine-lite library version: 1.5-dev
init tengind ok
create_graph ok
get_graph_input_tensor beg
set_tensor_shape beg
set_tensor_buffer beg
prerun_graph_multithread beg
E [op_check:333]Concat output dims size(68 vs 67)
E [setup_node:481]Check node[180] CONCAT fail
Tengine: Model compile to bin failed.Tengine Fatal: Pre-run subgraph(0) on TIMVX failed.
Tengine: Scheduler(sync) prerun failed.
Link: https://pan.baidu.com/s/13uk8NZ36cwOw6X43wtWpwg
Extraction code: 1234
-- shared from a Baidu Netdisk Super Member V5
Answers:
username_1: Does the same example work fine with the convert tool on an x86 PC? Conversion and quantization are generally not done on the VIM3 itself, are they?
phanxgames/AetherStory | 1046982261 | Title: Volume settings randomly change
Question:
username_0: **Describe the bug**
"The volume settings randomly change every time I load the game. I obviously always keep the music at 100% (and sound effects a little quieter):"
**To Reproduce**
Steps to reproduce the behavior:
1. Unknown
**Expected behavior**
Volume should stay consistent
**Screenshots**

**Context:**
- Character Name: Gregg
Status: Issue closed
Answers:
username_0: 15% is the default setting when the volume setting can't be found or hasn't been set. This might have occurred due to the flag not being saved correctly.
Fix?: Save the flag after setting it
Should be fixed |
arnaud-lb/php-rdkafka | 671777802 | Title: Brokers are down; I try to catch the connection failure message, but it doesn't work
Question:
username_0: PHP version: 7.1.9
librdkafka version: 0.9.1
php-rdkafka version: 3.1.2
kafka version: 2.5.0
my php code
```
$kafkaBrokers = 'debian-server:9092';
$kafkaTopic = 'test';
$producer = new \RdKafka\Producer();
$producer->addBrokers($kafkaBrokers);
$topicConfig = new \RdKafka\TopicConf();
$topic = $producer->newTopic($kafkaTopic, $topicConfig);
// this call blocks my web API;
// I try to catch an exception here, but it doesn't work
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'i am message');
```
i get error message
```
%3|1596426822.744|FAIL|rdkafka#producer-1| debian-server:9092/bootstrap: Failed to resolve 'debian-server:9092': %3|1596426822.
756|ERROR|rdkafka#producer-1| debian-server:9092/bootstrap: Failed to resolve 'debian-server:9092': %3|1596426822.767|ERROR|rdk
afka#producer-1| 1/1 brokers are down
```
I want to be able to return a response to the API after the connection fails
Answers:
username_1: Use `logCb`:
https://arnaud.le-blanc.net/php-rdkafka-doc/phpdoc/rdkafka-conf.setlogcb.html
and anything with fatal level, you can react accordingly.
Or alternatively flush would return a timeout in this case too if you are doing this directly in your api and you can react on this.
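A minimal sketch of that callback (signature per the linked docs; the handling inside is just a placeholder):
```php
<?php
// Register a log callback and react to error-level client messages.
$conf = new \RdKafka\Conf();
$conf->setLogCb(function ($kafka, $level, $facility, $message) {
    if ($level <= LOG_ERR) { // syslog-style severity: lower is more severe
        // placeholder: set a flag, log, or abort the request here
        error_log("rdkafka [$facility] $message");
    }
});
$producer = new \RdKafka\Producer($conf);
```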
username_0: @username_1 Can an exception be thrown in the callback to interrupt the connection and return the response? I see this is an event callback, and the code is not synchronous.
username_2: Also, I'd highly suggest upgrading your librdkafka dependency. 0.9 as far as I know is no longer supported by phprdkafka and versions before 0.11.6 are known to have issues.
username_1: @username_0 if you are sending from the API directly, i would just call `flush` at the end with a reasonable timeout as parameter and react to the return code of `flush`
the alternative would be throwing and catching an exception from the callback and polling for all events, but it's more work and flush does pretty much the same :smile:
username_0: ```
$kafkaBrokers = 'debian-server:9092';
$kafkaTopic = 'test';
$producer = new \RdKafka\Producer();
$producer->addBrokers($kafkaBrokers);
$topicConfig = new \RdKafka\TopicConf();
$topic = $producer->newTopic($kafkaTopic, $topicConfig);
// this call blocks my web API;
// I try to catch an exception here, but it doesn't work
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'i am message');
while ($producer->getOutQLen() > 0) {
$producer->poll(10);
}
```
I mean, the code is not blocking at `flush`; it blocks inside the `$topic::produce` method.
username_1: so you can replace this:
```
while ($producer->getOutQLen() > 0) {
$producer->poll(10);
}
```
with this:
```
$result = $producer->flush(5000);
if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
// throw exception or return appropriate api response, whatever suits your usecase
}
```
username_1: Yeah and i forgot, also absolutely do what @username_2 said, upgrade librdkafka to a more current version :v:
username_0: no, the code never reaches `flush`.
It blocks at `produce`.
The block times out, so my API request always times out.
username_1: Yeah probably because the default message timeout for produce is 5s:
```
request.timeout.ms: 5000
```
and with flush also taking max. 5s (if broker is down), that is a total of 10s which is probably letting your request timing out.
So you should probably not let your request time out and probably also adjust the timeout.
To be clear though, during these 5s, librdkafka is retrying to send your message. In most cases where Kafka is setup properly, it will not take that long even if a broker goes offline, in a proper cluster setup this will have no to minimal impact.
username_0: I tried running the code in the CLI.
I get this notice:
Even after 10s, the program cannot be stopped.
```
%3|1596436852.140|FAIL|rdkafka#producer-1| debian-server:9092/bootstrap: Failed to resolve 'debian-server:9092': %3|1596436852.
144|ERROR|rdkafka#producer-1| debian-server:9092/bootstrap: Failed to resolve 'debian-server:9092': %3|1596436852.148|ERROR|rdk
afka#producer-1| 1/1 brokers are down
```
username_1: So just to be clear, you are running this code:
```
$kafkaBrokers = 'debian-server:9092';
$kafkaTopic = 'test';
$producer = new \RdKafka\Producer();
$producer->addBrokers($kafkaBrokers);
$topicConfig = new \RdKafka\TopicConf();
$topic = $producer->newTopic($kafkaTopic, $topicConfig);
// this call blocks my web API;
// I try to catch an exception here, but it doesn't work
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'i am message');
$result = $producer->flush(5000);
if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
// throw exception or return appropriate api response, whatever suits your usecase
}
```
Also i noticed you run `php-rdkafka:3.1.2` which doesn't have flush yet.
I suggest you upgrade to the following versions:
- rdkafka:4.0.3
- librdkafka:1.5
username_0:
```
// 1
$kafkaBrokers = 'debian-server:9092';
$kafkaTopic = 'test';
// 2
$producer = new \RdKafka\Producer();
$producer->addBrokers($kafkaBrokers);
// 3
$topicConfig = new \RdKafka\TopicConf();
$topic = $producer->newTopic($kafkaTopic, $topicConfig);
// 4
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'i am message');
// 5
$result = $producer->flush(5000);
if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
// throw exception or return appropriate api response, whatever suits your usecase
}
```
The code blocks and times out at step 4, and never reaches step 5.
username_1: I cannot reproduce this with:
- `php:7.1.33`
- `rdkafka:4.0.3`
- `librdkafka:1.5.0`
With the following code you provided:
```
// 1
$kafkaBrokers = 'kafka:9097';
$kafkaTopic = 'test';
// 2
$producer = new \RdKafka\Producer();
$producer->addBrokers($kafkaBrokers);
// 3
$topicConfig = new \RdKafka\TopicConf();
$topic = $producer->newTopic($kafkaTopic, $topicConfig);
// 4
$topic->produce(RD_KAFKA_PARTITION_UA, 0, 'i am message');
// 5
$result = $producer->flush(5000);
if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
// throw exception or return appropriate api response, whatever suits your usecase
}
```
I get the following output and i the script exits successfully from cli:
```
%5|1596439067.083|CONFWARN|rdkafka#producer-1| [thrd:app]: No `bootstrap.servers` configured: client will not be able to connect to Kafka cluster
bash-5.0# php issue.php
%5|1596439092.178|CONFWARN|rdkafka#producer-1| [thrd:app]: No `bootstrap.servers` configured: client will not be able to connect to Kafka cluster
%3|1596439092.182|FAIL|rdkafka#producer-1| [thrd:kafka:9097/bootstrap]: kafka:9097/bootstrap: Connect to ipv4#172.20.0.4:9097 failed: Connection refused (after 2ms in state CONNECT)
%3|1596439092.182|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: kafka:9097/bootstrap: Connect to ipv4#172.20.0.4:9097 failed: Connection refused (after 2ms in state CONNECT)
%3|1596439092.183|ERROR|rdkafka#producer-1| [thrd:kafka:9097/bootstrap]: 1/1 brokers are down
%3|1596439093.178|FAIL|rdkafka#producer-1| [thrd:kafka:9097/bootstrap]: kafka:9097/bootstrap: Connect to ipv4#172.20.0.4:9097 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
%3|1596439093.179|ERROR|rdkafka#producer-1| [thrd:app]: rdkafka#producer-1: kafka:9097/bootstrap: Connect to ipv4#172.20.0.4:9097 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
```
username_1: I can only reproduce this if i revert `flush` back to:
```
while ($producer->getOutQLen() > 0)
{
$producer->poll(100);
echo 'stuck in poll';
}
```
This will run forever when your broker is down, since it cannot send the message; this is why I advised upgrading to `php-rdkafka:4.x` and using flush :v:
username_0: thanks,
Status: Issue closed
|
yeatmanlab/pyAFQ | 340349536 | Title: Examine assumptions encoded here
Question:
username_0: https://github.com/yeatmanlab/pyAFQ/blob/02c68a2452efc4354e258a75431e7cde9fd95dc3/AFQ/segmentation.py#L265
Take [0,0,0] in MNI (that should be roughly the middle voxel of the MNI brain) and use that to determine where the center of the brain is in the data (that should be an inverse transform away, but is it? ...)
Status: Issue closed
Answers:
username_0: This is handled in changes around https://github.com/yeatmanlab/pyAFQ/commit/6627bf3634d64851e4df1783ebd8b9253c0533d2 |
MicrosoftDocs/azure-docs | 561305727 | Title: Azure Multi-Factor Authentication Server is now deprecated
Question:
username_0: There are a number of references on this page to Azure Multi-Factor Authentication Server.
However, as per https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfaserver-deploy :
"As of July 1, 2019, Microsoft will no longer offer MFA Server for new deployments".
So this is no longer an option and the documentation should be updated to reflect this.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d577f70e-9632-48ee-8e00-cead8e3cbd5b
* Version Independent ID: 44ebe4c6-0e6a-5ce4-554c-d4ed856e09c9
* Content: [Azure Multi-Factor Authentication FAQ - Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/authentication/multi-factor-authentication-faq)
* Content Source: [articles/active-directory/authentication/multi-factor-authentication-faq.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/authentication/multi-factor-authentication-faq.md)
* Service: **active-directory**
* Sub-service: **authentication**
* GitHub Login: @username_2
* Microsoft Alias: **iainfou**
Answers:
username_1: @username_0
Thanks for your feedback! We will investigate and update as appropriate.
username_1: Thanks for this feedback. MFA Server is not supported for new deployments, but it is still supported for existing customers.
"Existing customers who have activated MFA Server prior to July 1 will be able to download the latest version, future updates and generate activation credentials as usual."
There are still a number of customers who use this product because they had an existing environment prior to July 1st of last year.
username_1: @username_0 your feedback seems more relevant to the deployment article, rather than the FAQ, so I'm moving this issue there.
username_1: MFA server is still supported for old customers, but as new deployments are no longer supported I agree that the deployment instructions are less relevant. I'm assigning your feedback to the content author to look into.
username_2: Thanks for the feedback, @username_0. I'm a little unsure which article this feedback is meant to be addressing at this point though, as the top of the linked article is noting the deprecation status for MFA Server:

As @username_1 then notes though, as there are still customers using MFA Server, those existing docs and references need to remain. That's why each doc should have that disclaimer at the top.
Can you clarify what you feel is missing here?
username_0: Apologies for delay.
Yes, rereading the documentation, there is a disclaimer at the top that clarifies things.
Please close.
username_2: Thanks for the clarification, @username_0
#please-close
Status: Issue closed
|
pydata/xarray | 666896781 | Title: intersphinx trouble with implementation modules
Question:
username_0: This is a widespread issue caused by the pattern of defining objects in private module and then exposing them to the final user by importing them in the top-level ``__init__.py``, vs. how intersphinx works.
Exact same issue in different projects:
- https://github.com/aio-libs/aiohttp/issues/3714
- https://jira.mongodb.org/browse/MOTOR-338
- https://github.com/tkem/cachetools/issues/178
- https://github.com/AmphoraInc/xarray_mongodb/pull/22
If a project
1. uses xarray, intersphinx, and autodoc
2. subclasses any of the classes exposed by ``xarray/__init__.py`` and documents the new class with the ``:show-inheritance:`` flag
3. Starting from Sphinx 3, **has any of the above classes anywhere in a type annotation**
Then Sphinx emits a warning and fails to create a hyperlink, because intersphinx uses the ``__module__`` attribute to look up the object in objects.inv, but ``__module__`` points to the implementation module while objects.inv points to the top-level ``xarray`` module.
# Workaround
In conf.py:
```python
import xarray
xarray.DataArray.__module__ = "xarray"
```
# Solution
Put the above hack in ``xarray/__init__.py`` |
raspberrypi/userland | 817966422 | Title: Level 4.2 encoding
Question:
username_0: **Describe the bug**
Level 4.2 encoding increases encoding latency when comparing 720p60 and 720p120, even though this is contradictory: with
a higher framerate, any frames the encoder holds on to should take less time to make it through the pipeline.
**To reproduce**
Compare the encoding time when running raspivid at 720p60 vs 720p120, by either measuring the end-to-end latency or measuring the encoding time like this:
https://github.com/raspberrypi/userland/compare/master...
Answers:
username_1: I don't follow your logic - higher frame rate means more frames means more work. You might want them to be processed quicker but I don't see how that can be the case.
username_2: The H264 block is specified as level 4.0.
It allows the configuration to be set to level 4.2 predominantly to allow for transcoding to that level (not real time).
Raspivid requests `3 + vcos_max(0, (state->framerate-30)/10)` frames on the output of the ISP which are also passed to the input of the video encoder.
https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiVid.c#L1584
What you are seeing is that that pool of images is exhausted because the encoder can't keep up.
username_0: I'm gonna close the issue and come back after I've done more testing.
Status: Issue closed
|
newrelic/newrelic-ruby-agent | 887834652 | Title: DT with Server Side Config support for Ruby Agent
Question:
username_0: ### Is your feature request related to a problem? Please describe.
The Ruby agent's default configuration for DT lives in default.rb. We need to change the default for SSC to true now that the UI is available.
### Feature Description
Turn on SSC for DT to default to true.
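For context, this is the user-facing setting whose default is in question; in newrelic.yml it takes the form (sketch):
```yaml
distributed_tracing:
  enabled: true
```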
### Additional context
Please do the following:
- [ ] End to end testing of DT with SSC
- [ ] Review the cross agent spec
- [ ] Communicate to GTS the support of this feature and how it will change for them, so that they are ready to field any questions that may come along.
### Priority
[Must Have]<issue_closed>
Status: Issue closed |
w3c/aria-at | 626929504 | Title: Tester issue report for: "Navigate to a checked checkbox"
Question:
username_0: ### Test file at exact commit
[tests/checkbox/test-06-navigate-to-checked-checkbox-interaction.html](https://github.com/w3c/aria-at/blob/2a3c46f1cbd20e7f2f0a706052d2d82aaf207a29/tests/checkbox/test-06-navigate-to-checked-checkbox-interaction.html)
### Cycle:
Test Pilot (2020-05-27)
### AT:
VoiceOver for macOS (version macOs 10.15.4)
### Browser:
Safari (version 13.1)
### Description
Despite the fact that I indicated the first assertion passed, when I submitted the test, the submission output indicated I had marked it as FAILING. I attempted to edit the form by clicking the Edit button and then resubmit the test, but the submission output still indicated FAILING. |
InnovaLangues/CollecticielBundle | 118389673 | Title: Add an "Accusé de réception" (acknowledgement of receipt) column
Question:
username_0: In the "paramètres" (settings) tab, when creating a collecticiel, an "Accusé de réception" (acknowledgement of receipt) column needs to be added.
Answers:
username_0: See the start of the work here: https://github.com/InnovaLangues/CollecticielBundle/commit/aa65773c4635b95213fb509173df34d0f6d7a5db
Status: Issue closed
username_0: Delivered with this version and tag 6.1.2: https://github.com/InnovaLangues/CollecticielBundle/releases/tag/v6.1.2
monarch-initiative/phenol | 307049825 | Title: DbXref
Question:
username_0: There are two classes with this name in phenol: org.monarchinitiative.phenol.io.obo.DbXref and
org.monarchinitiative.phenol.ontology.data.DbXref (an interface).
This is confusing my IDE, and it is unclear to me that the duplication is necessary. Figure out what to do here.
Answers:
username_1: From my observation, it seems that these two classes, [DbXref](https://github.com/monarch-initiative/phenol/blob/master/phenol-io/src/main/java/org/monarchinitiative/phenol/io/obo/DbXref.java) and [ImmutableDbxref](https://github.com/monarch-initiative/phenol/blob/master/phenol-core/src/main/java/org/monarchinitiative/phenol/ontology/data/ImmutableDbxref.java), are almost identical except the part that handles TrailingModifier (the ImmutableDbxref is the one that implements Dbxref interface). In fact, DbXref instance is later transformed into ImmutableDbxref as the process of construcing term object (e.g., [GoOboFactory](https://github.com/monarch-initiative/phenol/blob/630ced8bf36fe113f86ebef0f3a7b83b68ef4e4e/phenol-io/src/main/java/org/monarchinitiative/phenol/io/obo/go/GoOboFactory.java#L173)). I guess we could remove one of them. Let me check and refactor these codes.
username_0: I think this is working now.
Status: Issue closed
|
lindenb/jvarkit | 433433755 | Title: Problem : CHROM header line
Question:
username_0: Dear Pierre
I have a problem using fixvcfmissinggenotypes
I am using (just for a test) two exome vcf samples, merging with vcf-merge and indexing the file resulting in a vcf ver4.2
I have indexed the bam files (in the same directory) and them I tried to identify the missing values
my bam list is :
/home/path/bam_files/V21_R1.fastq.bam
/home/path/bam_files/V23_R1.fastq.bam
when I run
java -jar ./dist/fixvcfmissinggenotypes.jar -f list_2bams_list.txt < twosamples_test.vcf.gz > 2samples_test_nonmissing.vcf
I received the following error:
[SEVERE][Launcher]Your input file has a malformed header: We never saw the required CHROM header line (starting with one #) for the input VCF file
htsjdk.tribble.TribbleException$InvalidHeader: Your input file has a malformed header: We never saw the required CHROM header line (starting with one #) for the input VCF file
at htsjdk.variant.vcf.VCFCodec.readActualHeader(VCFCodec.java:115)
at htsjdk.variant.vcf.VCFIteratorBuilder$VCFReaderIterator.<init>(VCFIteratorBuilder.java:177)
at htsjdk.variant.vcf.VCFIteratorBuilder.open(VCFIteratorBuilder.java:97)
at com.github.username_1.jvarkit.util.vcf.VCFUtils.createVCFIteratorFromInputStream(VCFUtils.java:289)
at com.github.username_1.jvarkit.util.vcf.VCFUtils.createVCFIterator(VCFUtils.java:327)
at com.github.username_1.jvarkit.util.jcommander.Launcher.openVCFIterator(Launcher.java:514)
at com.github.username_1.jvarkit.util.jcommander.Launcher.doVcfToVcf(Launcher.java:561)
at com.github.username_1.jvarkit.util.jcommander.Launcher.doVcfToVcf(Launcher.java:583)
at com.github.username_1.jvarkit.tools.misc.FixVcfMissingGenotypes.doWork(FixVcfMissingGenotypes.java:346)
at com.github.username_1.jvarkit.util.jcommander.Launcher.instanceMain(Launcher.java:736)
at com.github.username_1.jvarkit.util.jcommander.Launcher.instanceMainWithExit(Launcher.java:894)
at com.github.username_1.jvarkit.tools.misc.FixVcfMissingGenotypes.main(FixVcfMissingGenotypes.java:358)
------
I tried to repeated the experiment with the examples of the program for rotavirus but the error is still the same.
Can you help me with that?
Thank you for your time
Answers:
username_1: that's because the example is outdated (sorry), look at the options at the top. The option is not `-f` but `-B`
```
find src/test/resources/ -name "S*.bam" > jeter.list
java -jar dist/fixvcfmissinggenotypes.jar -B jeter.list src/test/resources/rotavirus_rf.vcf.gz
```
username_0: Dear Pierre,
Thank you for the response. I have success with the example, but I have a problem with my data
Following that command, with my BAM files named the same as the samples of the VCF and the index files named <name>.bam.bai, I have this new issue:
**java -jar /home/path/jvarkit/dist/fixvcfmissinggenotypes.jar -B list_2bams.txt twosamples.vcf.gz**
[INFO][FixVcfMissingGenotypes]Reading header for list_2bams.txt
[SEVERE][FixVcfMissingGenotypes]No BAM index available for list_2bams.txt
[INFO][Launcher]fixvcfmissinggenotypes Exited with failure (-1)
Do you have any recommendation?
Thank you for your time
username_1: you have to index your bams with 'samtools index '
username_0: they are indexed and they are in the same path of the bam
using samtools index sample_R1_fastq.bam
samtools index sample_R3_fastq.bam
and generating the sample_R1_fastq.bam.bai and sample_R3_fastq.bam
but I still have the same error.
username_1: use 'list_2bams.list', not 'list_2bams.txt'
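i.e. (a quick sketch), rename the list file so the extension marks it as a list of BAMs:
```
mv list_2bams.txt list_2bams.list
java -jar /home/path/jvarkit/dist/fixvcfmissinggenotypes.jar -B list_2bams.list twosamples.vcf.gz
```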
username_0: now it works!
Thank you so much for your time!
Status: Issue closed
|
cordery/django-languages-plus | 66218384 | Title: associate_countries_and_languages does not correctly associate all languages/countries
Question:
username_0: For example the United States has the languages u'en-US,es-US,haw,fr'
Currently associate_countries_and_languages only adds "french". This is because there is a line missing from the culture code processing section of that function that would add the language in addition to the culture code.
Sidenote: haw, the Hawaiian Language (http://en.wikipedia.org/wiki/Hawaiian_language) is missing from Languages. |
openjournals/joss | 150695123 | Title: Will the review process be open? Will reviews be "published" with DOI? Will we allow continuous commenting/review of projects/papers?
Question:
username_0: Perhaps I missed this and we are all clear on this already but I was wondering about how "open" we have our review process.
Will we have private and anonymous peer review without publishing peer review data (comments and replies)? Or, do we publish/maintain submission history and peer review comments? If so, do we collect all review stuff into a single document and assign a DOI to this? Alternatively we could keep it online and open but not assign DOI so it is "cheaper". If we choose open peer-review I think I might be in favour of just showing the stuff rather than assigning DOI's to this.
Also if we show the full review process will it remain anonymous or not?
Will we allow continuous commenting/review of accepted works? i.e. post-publication review. Alternatively once we are recognized as an open journal people interested in post publication reviewing could do so on other platforms like https://www.scienceopen.com/. I'm in favour of just doing it that way.
Kevin<issue_closed>
Status: Issue closed |
openfarmcc/OpenFarm | 55966282 | Title: Shared Gardens
Question:
username_0: My roommates and I share the same physical Garden, it only makes sense that we should be able to share the same virtual one too.
This may be a feature for way down the road as it starts to make OpenFarm somewhat of a social network, which, I think is something that could be really powerful.
Thoughts?
Answers:
username_1: As an outside observer, my first thought is: What happens when people
"break up"? Can one of them trash the data that the other one cares about?
How do you determine who should actually control the garden?
Perhaps you can make it easy to clone a shared garden into a private one or
something. Or perhaps I'm just paranoid :)
username_0: Haha no that's a super valid point! Maybe there doesn't really need to be an owner, only people who access it? Like in real life: there are Gardens, and people can access them, but the garden never really goes away if everybody leaves town.
Perhaps someone could vandalize the garden though. A break up happens and one person deletes all the plants in the garden ;( But perhaps that is not for us to manage...
Though there could be some basic access controls. For example a garden could be viewed or edited by: anyone, a specific group of people, or just one person.
username_1: Perhaps it would actually be worth looking at something like
`mongoid-history` to deal with vandalism concerns, though you have to think
seriously about storage concerns when you start storing the history of
every change. Having a history would be good for just being able to recover
from screwups too.
username_0: Yes I was thinking about that dangerous delete garden button this morning. Maybe it should be an 'Archive Garden' button, and once a garden is archived it can either be deleted or restored.
There could also be an undo button for any changes made? Not sure how hard that is... |
HatScrubs/GameAccess | 120219893 | Title: trying to get in steam group
Question:
username_0: Submitter: <NAME>
Email: <EMAIL>
So the link page says this:
Steam Group!
Server Type
Steam Group
Description
Invite Only Steam Group so you can play games with us on Twitch. Post your SteamID on the forums so we can invite you!
.. and my question is: which forum, and where? I have linked my Steam account here, but I really have no idea about that forum :D Sorry if this is a stupid question.
Answers:
username_0: Handled in email
Status: Issue closed
|
schmittjoh/JMSTranslationBundle | 142819264 | Title: Truncate Filter
Question:
username_0: I am constantly getting this error when I clear cache or warmup the cache.
```
[assetic] The template "JMSTranslationBundle:Translate:messages.html.twig" contains an error: Unknown "truncate" filter. Did you mean "easyadmin_truncate" in "JMSTranslationBundle:Translate:messages.html.twig" at line 14?
```
Is there a known solution for this? I am on Symfony 2.8.3 and the latest Twig.
Answers:
username_1: You get this error because you have not enabled Twig's Text extension: http://twig.sensiolabs.org/doc/extensions/text.html
We should see if there is a workaround so we do not have to use the truncate filter.
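If you do want the `truncate` filter, a sketch of enabling the Text extension in Symfony 2.x (assuming the `twig/extensions` package is installed; adjust the service id/file to your setup):
```yaml
# app/config/services.yml (illustrative)
services:
    twig.extension.text:
        class: Twig_Extensions_Extension_Text
        tags:
            - { name: twig.extension }
```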
username_2: I've just submitted #313 which is a proposition to get rid of the twig extension
Status: Issue closed
|
SublimeText-Markdown/MarkdownEditing | 107207181 | Title: Is it possible to use another Syntax Highlighter?
Question:
username_0: When I write in markdown I use the triple backticks along with js. Like this without the spaces:
\`\`\`js
code here...
\`\`\`
But - it uses the Sublime Text JavaScript syntax highlighter, which isn't great. I use the Babel Syntax Highlighter which also supports JSX.
Problem is, my code ends up all weird:

Is it possible to invoke another syntax highlighter when `js` is used?
Answers:
username_1: https://github.com/SublimeText-Markdown/MarkdownEditing/blob/master/Markdown.tmLanguage#L1815-L1830
MarkdownEditing maps fenced JS code blocks to the syntax `source.js` (L1827), which is defined by the Babel package.
To achieve your goal, you need a `.tmLanguage` that defines your syntax (say `source.myjs`, possibly forked from Babel) and then a `fenced-myjs` block that maps ```myjs fences to `source.myjs`.
I would also recommend [facelessuser/ScopeHunter](https://github.com/facelessuser/ScopeHunter). It helps you analyse the color defined by the syntax.
username_0: Hrm okay thanks - I just raised an issue on their GH. I'm not the only one with this issue so it would be great if we could get the two packages to play along.
username_2: In MDE, we just reference the scope source.js. If you disable the default JavaScript package and have babel-sublime installed, you will have babel syntax definition instead.
Status: Issue closed
username_3: @username_0 Have you figured out the solution :) ?
username_0: Yes - delete the default JS package all together
username_3: Are you sure? There is a disable package entry in the command palette, we can disable it from there. |
cfpb/hmda-platform | 139351133 | Title: Validation (2017): Q057 (Action Taken – Type)
Question:
username_0: **Edit #:** Q057
**Type:** Macro
**Category:** The Macro Quality Edit Report contains the following edits and is generated by the FFIEC
**Column:** Action Taken – Type
If the total number of loan applications is ≥ 50, then the total number of denied loan applications should be > zero.
Additional resources:
* [FFIEC's 2016 HMDA File Spec](https://www.ffiec.gov/hmda/pdf/spec2016.pdf)
* [FFIEC's 2016 HMDA Edits](https://www.ffiec.gov/hmda/pdf/edit2016.pdf)
* [CFPB's 2017 HMDA File Spec](http://www.consumerfinance.gov/hmda/static/for-filers/2017/2017-HMDA-File-Specifications.pdf)<issue_closed>
Status: Issue closed |
frontendbr/vagas | 453249423 | Title: [Remote] Front-end Developer at Trio
Question:
username_0: <!--
==================================================
PLEASE ONLY POST IF THE JOB IS FOR FRONT-END!
Do not make gender distinctions in the job title.
Use: "Front-End Developer" instead of
"Desenvolvedor Front-End" \o/
Example: `[São Paulo] Front-End Developer at COMPANY NAME`
==================================================
-->
## Job description
We are a software development company that provides services to companies all over the world.
We are looking for a talented front-end developer who has mastered JavaScript and has good experience with React JS and React Native.
If you are detail-oriented and enjoy writing quality code, come work with us.
## Location
Remote
## Requirements
- Languages: TypeScript, Redux, React Native, ReactJS, Javascript
- Minimum of 4 years of experience in software development
- Minimum of 3 years of experience in development with React
- Good communication and enjoys working in a team
- Fluent English
## Hiring
Contractor (PJ), terms to be agreed
Salary: up to R$ 12,000
## Our company
We are a software house that works with companies in the United States.
## How to apply
To apply and get more information about the opportunity, just visit: http://bit.ly/2K25V7a
## Labels
- PJ
- Remote
- Senior<issue_closed>
Status: Issue closed |
ErikEJ/EFCorePowerTools | 518171266 | Title: Reverse Engineering adds UseNetTopologySuite
Question:
username_0: When using Reverse Engineering to create a database Context, the optionsBuilder.UseSqlServer statement is generated with UseNetTopologySuite.
### Steps to reproduce
Reverse Engineer a database.
See the error that occurs because NetTopologySuite isn't being used but is still called.
### Further technical details
EF Core Power Tools version: 2.3.76.0
Database engine: SQL Server 2016
Visual Studio version: Visual Studio Enterprise 2019 v16.3.7
Answers:
username_1: Reverse engineer without a connection string?
username_1: Could you share some sample code?
username_0: I had "Include connection string in generated code" checked. I had "Use Data Annotation attributes to configure the model" checked. I think "Install the EF Core provider package in the project" was checked automatically.
The generated DB Context class had the following method in it:
```
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) {
    if (!optionsBuilder.IsConfigured) {
        optionsBuilder.UseSqlServer("connection string", x => x.UseNetTopologySuite());
    }
}
```
username_1: Maybe avoid the connectionstring in the generated code?
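One way to do that (a sketch with placeholder names; requires `using Microsoft.EntityFrameworkCore;`) is to build the options in application code so the generated `OnConfiguring` branch is skipped:
```csharp
// IsConfigured is true here, so the generated OnConfiguring does nothing extra.
var options = new DbContextOptionsBuilder<MyDbContext>()
    .UseSqlServer(connectionString) // add x => x.UseNetTopologySuite() only if you use spatial types
    .Options;

using (var context = new MyDbContext(options))
{
    // ... queries
}
```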
Status: Issue closed
|
Azure/azure-cli-extensions | 410676485 | Title: Need az webapp up to support --ServicePlanName
Question:
username_0: ### Extension name (the extension in question)
az webapp up
### Description of issue (in as much detail as possible)
1. Create and deploy existing local code to the web app.
az webapp up --name "jackywebappupdj1" --location "westus" --sku "S1"
2. Move the webapp and **ServicePlan** to an existing resource group
3. Create and deploy existing local code to the **second** web app.
az webapp up --name "jackywebappupdj2" --location "westus" --sku "S1"
Resource group 'appsvc_rg_Linux_westus' already exists.
Creating App service plan 'appsvc_asp_Linux_westus' ...
Server Farm with name 'appsvc_asp_Linux_westus' already exists for subscription '30365905-2ed4-410b-aa9f-3cc0bd354520'.
If we can't assign the Service Plan name, we can't use "az webapp up" to create the second web app.
Status: Issue closed
Answers:
username_1: this is complete & available in core; see https://docs.microsoft.com/en-us/cli/azure/webapp?view=azure-cli-latest#az-webapp-up
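For reference, current core CLI versions accept an explicit plan, e.g. (a sketch; resource names are placeholders):
```
az webapp up --name jackywebappupdj2 --resource-group my-rg --plan my-plan --location westus --sku S1
```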
nodemcu/nodemcu-firmware | 402342121 | Title: [esp32] MQTT mutual auth
Question:
username_0: Trying to get MQTT mutual authentication (client cert) work on esp32.
[Here is my attempt](https://github.com/username_0/nodemcu-firmware/commit/bea2c4813306d3c4ebe5a025df66a981120a12fb), trying to pass the PEM's to esp-mqtt.
And the lua call is:
```lua
mqttClient:connect(url, port, 1, 0, cacert, client_cert, private_key, on_ok, on_nok);
```
However, I am getting this error when connecting:
```
I (11446) MQTT_CLIENT: Sending MQTT CONNECT message, type: 1, id: 0000
E (11546) MQTT_CLIENT: Invalid MSG_TYPE response: 9, read_len: 0
I (11546) MQTT_CLIENT: Error MQTT Connected
I (11546) MQTT_CLIENT: Reconnect after 10000 ms
E (11556) MQTT_CLIENT: Client has not connected
```
`read_len` is zero meaning the response (9 in this case) is basically rubbish data in the buffer, we could ignore that.
[`select()`](https://github.com/espressif/esp-idf/blob/e2ae69f/components/tcp_transport/transport_ssl.c#L89) tells us the fd is ready to be read but [actually reading](https://github.com/espressif/esp-idf/blob/e2ae69f/components/tcp_transport/transport_ssl.c#L129) it yields `0` length. I guess that means the socket is closed. But I am not sure why.
I am not a C expert, I would like to have some guide on this problem. Thanks.
Answers:
username_1: I'm not sure about the overall status of mqtt over SSL, never tried that myself. Just to check basic things:
* you selected 'ESP-MQTT Configurations' --> 'Enable MQTT over SSL' in menuconfig?
* mqtt SSL connection works *without* client cert?
username_0: Turns out it is working fine. I was using AWS IOT and there are some permission settings there disallowing connection. Fixing them fixed the problem.
Now I wonder what is the expected Lua API for this?
username_1: Resolved with #2628, #2657.
Status: Issue closed
|
ngl4/prj-rev-bwfs-tea-cozy | 247579477 | Title: Positioning
Question:
username_0: You're right that you need to adjust frequently to re-position. That is just a fact of CSS, but also you use absolute and relative positioning frequently which makes everything messier. Many areas of your website don't need this type of positioning and could be done simply with padding and margin. Your website after all, just flows from top to bottom in an orderly way. So, you can default to letting everything order itself top to bottom (which by default it will) and then simply use centering, padding, and margin to move things vertically and horizontally in the ways you want. Ultimately, just more practice will make this easier! |
Azure/azure-cli | 795139414 | Title: Trying to connect to ACI
Question:
username_0: ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az container exec`
**Errors:**
```
[Errno 110] Connection timed out
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 233, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 659, in execute
raise ex
cli/core/commands/__init__.py, ln 722, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 715, in _run_job
six.reraise(*sys.exc_info())
...
python3.6/site-packages/websocket/_http.py, ln 120, in connect
sock = _open_socket(addrinfo_list, options.sockopt, options.timeout)
python3.6/site-packages/websocket/_http.py, ln 189, in _open_socket
raise error
python3.6/site-packages/websocket/_http.py, ln 172, in _open_socket
sock.connect(address)
TimeoutError: [Errno 110] Connection timed out
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az container exec --ids {} --exec-command {}`
## Expected Behavior
## Environment Summary
```
Linux-4.19.128-microsoft-standard-x86_64-with-debian-bullseye-sid
Python 3.6.10
Installer: DEB
azure-cli 2.18.0
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
Answers:
username_1: container |
iuap-design/tinper-bee | 443181925 | Title: Nesting a Menu.Item inside a Menu.Item throws "onItemHover is not a function" on hover
Question:
username_0: ## Environment and version info
- `tinper-bee` version: @2.0.9
<!-- Please fill in the exact version number -->
- If you are using a single component, please state that component's version:
<!-- Please fill in the exact version number -->
- `react` version in the current project:
<!-- Please fill in the exact version number -->
- Operating system:
<!-- Windows/Mac -->
- Browser:
<!-- Browser and version -->
## Describe the problem:
When a Menu.Item is nested inside another Menu.Item, hovering throws "onItemHover is not a function"; there should probably be a short-circuit check, e.g.
`onItemHover && onItemHover()`
### Code
<!-- Please describe the problem in detail -->
<!-- Screenshots -->
### Error message
`Uncaught TypeError: onItemHover is not a function`
<!-- Please describe the problem in detail -->
<!-- Screenshots -->
## Current behavior: the effect (screenshots welcome) and the actions taken
<!-- Please describe the current behavior in detail so we can reproduce and locate the problem -->
<!-- Screenshots -->


## Expected behavior:
<!-- Please describe the expected behavior and effect in detail so we can understand the requirement accurately -->
Answers:
username_0: After nesting there is also a warning:
`Warning: validateDOMNesting(...): <li> cannot appear as a descendant of <li>. See Registration > li >`
Looking into it, a li tag is nested inside another li tag.
Nesting `<Item>` inside `<Item>` is probably not the right approach; I'll hand-write the implementation instead.
Status: Issue closed
username_1: For multi-level nested menus, use SubMenu nesting.
[Reference example](https://github.com/tinper-bee/bee-menus/blob/master/demo/demolist/Demo9.js)
longitachi/ZLPhotoBrowser | 582825207 | Title: Problem with video editing
Question:
username_0: Thanks for submitting an issue. Before opening one, please search existing or resolved issues by keyword to avoid duplicates.
Since the framework has a relatively large number of parameters, if you want to achieve a certain behavior:
* please read the relevant feature descriptions in the README to see whether it is covered;
* please look through the properties defined in the `ZLPhotoConfiguration` class; every property has detailed comments;
- [ ] I have checked the existing issues and found no duplicate
- [ ] I want to make a suggestion or report a bug, not ask a question
#### About the issue
1. Current cocoapods version: 3.1.3
2. The problem encountered (please describe in as much detail as possible, ideally with screenshots):
Current settings are maxSelectCount = 1; allowSelectVideo = YES; editAfterSelectThumbnailImage = YES;
The problem is that selecting a video longer than 10s goes straight to the video editing page, while selecting a video shorter than 10s gets no response.
With editAfterSelectThumbnailImage = YES, is it possible to have videos longer than 10s go to the editing page, and videos shorter than 10s be returned directly?
Answers:
username_1: That is not possible for now; videos shorter than 10s cannot be edited. I'll check whether it would be easy to change so that videos under 10s can be edited as well.
username_1: `ZLPhotoConfiguration` restricts videos under 10s from being edited. For now you can modify the code yourself: make the 10 smaller, then set a value:
```
- (void)setMaxEditVideoTime:(NSInteger)maxEditVideoTime
{
_maxEditVideoTime = MAX(maxEditVideoTime, 10);
}
```
Status: Issue closed
|
rigetti/pyquil | 996739019 | Title: Saving important metadata during QPU runs
Question:
username_0: Pre-Request Checklist
---------------------
- [x] I am running the latest versions of pyQuil and the Forest SDK
- [x] I checked to make sure that this feature has not already been requested
Currently, the performance of my quantum algorithm varies greatly with the different QPU runs I perform. To track this, I started saving the `native.native_quil_metadata` output to a file so I can compare the program fidelity between runs.
It would be nice if `native.native_quil_metadata` could also output information such as the latest retune results for the select qubits in the algorithm and not the whole chip.
Moreover, if there were functionality to automatically save all of this information to a PDF, together with an image of the decomposed circuit that actually ends up being executed, it would give the user a holistic set of information for later analysis.
ktorio/ktor | 357615338 | Title: Netty warning: Failed to find a usable hardware address
Question:
username_0: I experience a `netty` warning when running the latest `ktor` version `0.9.4` on `Debian 8.1.1`:
```
14:28:36.230 [main] WARN io.netty.util.internal.MacAddressUtil - Failed to find a usable hardware address from the network interfaces; using random bytes: fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b
```
I've created sample hello-world project for easy reproduction: https://github.com/username_0/ktor-helloworld
Answers:
username_1: I tried this:
`gradle shadowJar`
`Dockerfile`
```
FROM smartentry/debian:8.1-0.4.1
RUN echo "deb http://http.debian.net/debian jessie-backports main" >> /etc/apt/sources.list
RUN apt-get update && apt-get install -t jessie-backports -y openjdk-8-jdk
VOLUME ["/app"]
```
```
docker build . -t debian-java-demo
```
`./run.sh`
```
BASEDIR="$( cd "$(dirname "$0")" ; pwd -P )"
docker run -v$BASEDIR/build/libs:/app -p 9090:8082 -it debian-java-demo java -jar /app/ktor-helloworld-1.0.0-SNAPSHOT.jar
```
```
wget http://127.0.0.1:9090
```
Works fine.
Was not able to reproduce your problem. I'm afraid that might be a problem with the configuration of your machine. And sadly I can't help there.
I'm closing the issue. Feel free to reopen it if you are able to reproduce it with a VM or something so I can reproduce it.
PS this is the output I get:
```
smartentry> running main program(UID=0 GID=0 USER=root)
13:52:37.904 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
13:52:37.909 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 8
13:52:37.942 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
13:52:37.943 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
13:52:37.958 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
13:52:37.958 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 8
13:52:37.960 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
13:52:37.961 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
13:52:37.962 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
13:52:37.963 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: available
13:52:37.964 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
13:52:37.964 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
13:52:37.964 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
13:52:37.964 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
13:52:37.965 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
13:52:37.965 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
13:52:37.966 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
13:52:37.966 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: 466092032 bytes
13:52:37.967 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
13:52:37.967 [main] DEBUG io.netty.util.internal.CleanerJava6 - java.nio.ByteBuffer.cleaner(): available
13:52:37.970 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
13:52:37.993 [main] INFO Application - No ktor.deployment.watch patterns specified, automatic reload is not active
13:52:38.292 [main] INFO Application - Responding at http://0.0.0.0:8082
13:52:38.304 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 19 (auto-detected)
13:52:38.306 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
[Truncated]
13:52:38.310 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: fdf8:f53e:61e4::18 (auto-detected)
13:52:38.313 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
13:52:38.313 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
13:52:38.317 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
13:52:38.317 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
13:52:38.335 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 4
13:52:38.335 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 4
13:52:38.335 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
13:52:38.335 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
13:52:38.336 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
13:52:38.337 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
13:52:38.344 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
13:52:38.344 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
13:52:38.345 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
```
Status: Issue closed
|
Ogurijay/diary | 614517390 | Title: A Brief Look at Data Structures ----- 1. Queues
Question:
username_0: *Preface: if you are still in university or fresh out of it, try not to code only to requirements, mindlessly learning frameworks and calling interfaces; otherwise you may find after a few years of work that all you can be is an API caller. This is my own experience: a solid foundation is always in demand. I did not value it enough, and only after three or four years of work did I realize how much time I had wasted in university, so I had to spend several times the hours I once spent having fun to shore up the basics.*
This post is my notes and reflections on algorithms from working through problems on LeetCode; I hope it helps whoever reads it.
First, a recommended introductory card:
https://leetcode-cn.com/explore/learn/card/queue-stack/ (from LeetCode)
### Queue
A queue is a FIFO **(First In First Out)** data structure. Programming aside, queues are everywhere in daily life, for example taking a number and waiting in line: the person who queued first is called first and leaves the line after being called, and the next person waits for their number. That is the principle of a queue.
In programming, we need enqueue and dequeue operations to implement insert and delete on a queue.
For example: five children are lining up to play a game. Each child who joins is given a chair to rest on, and each child in line gets a numbered ticket to wait for their turn. The situation now is (represented as an array):
[1,2,3,4,5]
Now a new child arrives. By the queue principle this child goes to the back of the line and is given a chair:
[1,2,3,4,5] ==> [1,2,3,4,5,6]
When the first child's number is called, by the FIFO rule the element that entered first leaves first:
[1,2,3,4,5,6] ==> [2,3,4,5,6]
In reality, though, even after a dequeue operation the memory allocated for the array does not go away (the empty slot is the chair in our example), so the actual queue looks like this (empty slots shown as #):
[#,2,3,4,5,6]
You will notice that as more and more elements are inserted, the array takes up more and more space and lookups get slower and slower; that is the drawback of this kind of queue. When that happens, we can design a bounded space and optimize it by cycling through it; this bounded form is called a **circular queue**.
As in the example above, five children are queuing, but now no extra chairs are provided for newcomers; there are only five fixed chairs:
[#,#,#,#,#]
The children sit according to their numbers. When the first child is called, the next child in line can enter the queue:
[1,2,3,4,5] ==> [#,2,3,4,5] ==> [6,2,3,4,5]
In this case we both save space and improve lookup efficiency.
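To make the wraparound concrete, here is a minimal Python sketch of such a fixed-capacity circular queue (my own illustration for this post):
```python
class CircularQueue:
    """Fixed-capacity FIFO queue backed by a plain list."""

    def __init__(self, capacity):
        self.data = [None] * capacity   # the five "chairs"
        self.capacity = capacity
        self.head = 0                   # index of the front element
        self.size = 0

    def enqueue(self, value):
        if self.size == self.capacity:
            raise OverflowError("queue is full")
        tail = (self.head + self.size) % self.capacity  # wrap around
        self.data[tail] = value
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        value = self.data[self.head]
        self.data[self.head] = None     # free the chair
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return value

q = CircularQueue(5)
for kid in [1, 2, 3, 4, 5]:
    q.enqueue(kid)
q.dequeue()      # child 1 is called
q.enqueue(6)     # child 6 takes the freed chair
```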
The next post will briefly record my understanding of stacks.
**To Be Continued...**
sheffieldnlp/deepQuest | 380548326 | Title: bug for eval_word_qe
Question:
username_0: In evaluation.py,eval_word_qe should not be passed four parameters.
https://github.com/sheffieldnlp/deepQuest/blob/01529fbd75c7d059a20a1aef35b3817134f2f669/quest/keras_wrapper/extra/evaluation.py#L156
After I changed this, I encountered this problem:
```
Traceback (most recent call last):
File "main.py", line 602, in <module>
train_model(parameters, args.dataset, trainable_est=True, trainable_pred=True, weights_path=parameters.get('PRED_WEIGHTS', None))
File "main.py", line 191, in train_model
nmt_model.trainNet(dataset, training_params)
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/cnn_model.py", line 737, in trainNet
self.__train(ds, params)
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/cnn_model.py", line 946, in __train
initial_epoch=params['epoch_offset'])
File "/data2/somebody/source/QE/deepQuest-master/quest/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/data2/somebody/source/QE/deepQuest-master/quest/keras/engine/training.py", line 2205, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/data2/somebody/source/QE/deepQuest-master/quest/keras/callbacks.py", line 73, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/extra/callbacks.py", line 238, in on_epoch_end
self.evaluate(epoch, counter_name='epoch')
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/extra/callbacks.py", line 414, in evaluate
split=s, ds = self.ds, set=self.gt_id[0])
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/extra/evaluation.py", line 156, in qe_metrics
final_scores = eval_word_qe(ref, pred_list[0], ds.vocabulary['word_qe'])
File "/data2/somebody/source/QE/deepQuest-master/quest/keras_wrapper/extra/evaluation.py", line 210, in eval_word_qe
pred_word = y_init[i][j]
IndexError: index 70 is out of bounds for axis 0 with size 70
`
I think a detailed word-level instruction is necessary. :)
Answers:
username_1: Hi @username_0, could you provide us more information regarding this pb? what data and its format are you using, steps to reproduce, etc. ? so we could better understand the origins of the pb.
Best,
username_0: pb1: ` final_scores = eval_word_qe(ref, pred_list[0], ds.vocabulary['word_qe'], 'Word')` should be replaced with ` final_scores = eval_word_qe(ref, pred_list[0], ds.vocabulary['word_qe'])` . Because 'eval_word_ge' is defined as `def eval_word_qe(gt_list, pred_list, vocab):`
pb2: how do I apply this system to WMT18 word level, because the new data has a new format which introduces the 'tag' flag.
@username_1
username_0: By the way, it seems that deepQuest puts all train, dev, and test data into a single Dataset_*.pkl. So if I want to try other test data, what should I do?
username_2: Hi @username_0, thanks a lot for your feedback. I am fixing this WordQE issue in my commit as shown above. Testing on new data is implemented for the moment only for Sentence QE (see documentation: Tutorial-Scoring). The procedure is unfortunately not tested for WordQE yet (ongoing). For WMT2018 data, please filter tags (take only target tags, for example). And, no, we do not support multi-gpu for now. I am closing this issue. Feel free to contact me directly on my email for any other questions.
Status: Issue closed
|
ubports/ubuntu-touch | 263299857 | Title: Provide daily builds of rootfs on cdimage.ubports.com
Question:
username_0: Currently people can obtain builds of Ubuntu Touch (vivid and xenial) for testing their ports on https://ci.ubports.com. There is also a great folder, http://cdimage.ubports.com/rootfs/, for providing these builds. However, there is only an outdated xenial build over there. The CI server should be set up to upload images to cdimage so that we have one place to point people to.
Answers:
username_1: Ok I will take a look for this one...
username_1: For the time being, I provide this with a simple pull script on the web server. It will copy the builds daily in case something has changed (we can decide on the time). It will also keep a copy of the previous version. Does this satisfy our needs?
username_1: 
Current state
Status: Issue closed
|
rocky-linux/rockylinux.org | 787760137 | Title: Update copyright date on website footer to 2021
Question:
username_0: Currently has 2020 in there:
© 2020 The Rocky Linux Project. All rights reserved.
Answers:
username_1: This would need a bit of `sed` magic, as it's currently in the i18n files. `sed -i 's/© 2020/© 2021/g' i18n/*.json` or something like that
Status: Issue closed
|
pytest-dev/pytest | 134417244 | Title: yield_fixture incompatible with other decorators
Question:
username_0: I'd like to be able to use other decorators on my yield_fixtures, but doing so causes the error below regardless of the decorator ordering.
```python
from mock import patch
import pytest
@pytest.yield_fixture
@patch('os.path')
def brokenfixture(mockpath, request):
with open('/tmp/foobar', 'wb') as foobar:
yield
def test_brokenfixture(brokenfixture):
pass
```
```
$ py.test example.py -v -s
====================================================== test session starts ======================================================
platform linux2 -- Python 2.7.11, pytest-2.8.5, py-1.4.31, pluggy-0.3.1 -- /home/jeff/.virtualenvs/wsm/bin/python
cachedir: .cache
rootdir: /home/jeff/jlconsulting/clients/wdsolutions/wsm_phpstorm/services, inifile: pytest.ini
plugins: cov-2.2.1
collected 1 items
example.py::test_brokenfixture ERROR
============================================================ ERRORS =============================================================
ERROR at setup of test_brokenfixture
yield_fixture requires yield statement in function:
@wraps(func)
def patched(*args, **keywargs):
extra_args = []
entered_patchers = []
exc_info = tuple()
try:
for patching in patched.patchings:
arg = patching.__enter__()
entered_patchers.append(patching)
if patching.attribute_name is not None:
keywargs.update(arg)
elif patching.new is DEFAULT:
extra_args.append(arg)
args += tuple(extra_args)
return func(*args, **keywargs)
except:
if (patching not in entered_patchers and
_is_started(patching)):
# the patcher may have been started, but an exception
# raised whilst entering one of its additional_patchers
entered_patchers.append(patching)
# Pass the exception to __exit__
exc_info = sys.exc_info()
# re-raise the exception
raise
finally:
for patching in reversed(entered_patchers):
patching.__exit__(*exc_info)
/home/jeff/jlconsulting/clients/wdsolutions/wsm_phpstorm/services/example.py:4
==================================================== 1 error in 0.03 seconds ====================================================
```
Answers:
username_1: Hi @username_0,
I don't have an answer to your specific question, but I see you are using `@patch`. As an alternative you might consider using [pytest-mock](https://github.com/pytest-dev/pytest-mock/) instead, which lets you use `mock` with pytest more easily. One of the [reasons](https://github.com/pytest-dev/pytest-mock/#why-bother-with-a-plugin) for me to write it was that using `@patch` doesn't mix nicely with pytest's fixtures.
For illustration your example using `pytest-mock` would be:
```python
import pytest
@pytest.yield_fixture
def brokenfixture(mocker, request):
mockpath = mocker.patch('os.path')
with open('/tmp/foobar', 'wb') as foobar:
yield
def test_brokenfixture(brokenfixture):
pass
```
username_2: mock.patch severely messes up signatures, and needs to be explicitly supported by support code to work
username_0: It's a common pattern for decorators to add leading arguments to functions.
mock.patch is hardly unique in that respect.
Status: Issue closed
username_3: Closing because `yield_fixture` is deprecated and will be removed in pytest 4.0
username_1: @username_3 just a clarification: `yield_fixture` and `fixture` are functionally the same, so I issues about `yield_fixture` will probably apply to `fixture` functions that use `yield` too. Also we don't plan to actually remove the `yield_fixture` decorator anytime soon, as it doesn't incur any maintenance overhead (it is just an alias to `fixture`).
Having said that, we have since in recent versions improved how signatures are handled, so the original example actually works in recent pytest versions. 👍 |
CenturyLinkCloud/clc-java-sdk | 101592172 | Title: Design API for shared Anti-Affinity Policies operations
Question:
username_0: It would be great to design set of API calls and structures that will allow to provide possibilities to manage Anti-Affinity Policies.
Proposed set of required to implementation APIs:
1. Create Anti-Affinity Policy
2. Update Anti-Affinity Policy
3. Search Anti-Affinity Policies
4. Delete Anti-Affinity Policy
5. Add support of Anti-Affinity Policy to Create Server Operation<issue_closed>
Status: Issue closed |
typeorm/typeorm | 203064195 | Title: Raw sql query
Question:
username_0: For my application I would need to create a `Union` between two tables.
I do not expect that typeorm would support the union query, but instead is it possible to execute an arbitrary sql string and get back raw json objects?
Answers:
username_1: right, you can use `repository.query` or `entityManager.query` methods which return you raw database results.
username_2: @username_1 How should I pass parameters to the query method in order to avoid SQL injection attack?
Status: Issue closed
username_1: @username_2 added support of parameters in `0.0.9-alpha.2`
username_2: Wow that was fast! Thanks for the great work! =D
username_3: how to reference a parameter in the query string? Cant find something in the docs.
username_1: @username_3 using `query` method its second argument: `entityManager.query("SQL", [params]);`, using QueryBuilder its `setParameter` method
username_3: yes but how to reference the parameter inside "SQL"
username_1: it depends on the underlying driver. For mysql it's `:param`, for postgres it's `$param`.
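For illustration, a quick sketch of the QueryBuilder route (assuming an existing `connection` and a `User` entity; names are made up):
```typescript
const users = await connection
    .getRepository(User)
    .createQueryBuilder("user")
    .where("user.name = :name")
    .setParameter("name", "John")
    .getMany();
```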
username_4: I thought that Postgres does not have named params in queries I'm using parametrized queries like below
```sql
entityManager.query('SELECT u.name FROM users AS u WHERE u.name = $1 AND u.lastName = $2', ['John', 'Doe']);
```
username_1: @username_4 yes you are right, thats a correct way of doing it for postgres. I just confused with multiple drivers 😖
username_5: It is also possible to perform raw queries like `INSERT DELETE UPDATE` using `entityManager.query`?
username_1: yeah any sql query
username_5: Very nice! thanks @username_1
username_6: ...but it seems `escapeQueryWithParameters` already takes care of translating a common `:param` into, e.g., `$1`, `@1`, `name` or `?`, depending the database dialect?
https://github.com/typeorm/typeorm/search?q=escapeQueryWithParameters seems to show all dialects use the same line to find the common `:name` placeholders (with `\b` to match whole words only):
```ts
const keys = Object.keys(parameters).map(parameter => "(:" + parameter + "\\b)").join("|");
```
So, apparently this is not used for `repository.query` and `entityManager.query`?
username_7: `query(query: string): Promise<any>`
So I expected this to fail:
Entity:
```
@Entity()
@Index(["country","iso3","region","name","pollutant"], { unique: true })
export class ExposureRegion implements Region {
@PrimaryColumn()
@Column({ length: 56 })
country?: string;
@PrimaryColumn()
@Column({ length: 6})
iso3?: string;
@PrimaryColumn()
@Column({ length: 12})
region?: string;
@PrimaryColumn()
@Column({ length: 56})
name?: string;
@PrimaryColumn()
@Column({ length: 5})
pollutant?: string;
constructor(args: ExposureRegion){
Object.assign(this, args);
}
}
```
Service
```
public async findByExposureRegionAndGroupBy(exposureRegion:ExposureRegion, groupBy:string): Promise<ExposureRegion[]>{
return this.exposureRegionRepository.query(`
select name
from exposure_region
where region = $1 and pollutant = $2
group by name
`,[exposureRegion.region,exposureRegion.pollutant]);
}
```
The return type of the query should be any, but typescript is allowing me to set the signature of my method to a promise with a real type (and without any casting). In the controller that calls this service method, I can even do:
```
let results = this.regionService.findByExposureRegionAndGroupBy(new ExposureRegion({region:region, pollutant:pollutant}),groupBy);
results.then(exposureRegions => {
console.log(exposureRegions[0].name);
});
```
I can see that the object is indeed not a true instance of ExposureRegion, but for all intents and purposes, the result is the same, and the json response still matches, so even though the query method is returning a raw db result, it looks like we can still fudge it and keep all our types in order too. This is quite nice!
username_1: Be careful with such an approach because you are actually lying to the compiler. This can cause problems, for example if your entity has methods and you pass the entity to some function that uses those methods: in that case you are passing your fake object to the method, which will cause runtime errors.
username_7: Hi, thanks for the reply, yes, I could certainly see the problem that would present (although in this case, the entity doesn't have methods so maybe its ok).
So would you recommend just doing a conversion like this to handle the raw data mapping then?
```
public async findByExposureRegionAndGroupBy(exposureRegion: ExposureRegion, groupBy: string): Promise<ExposureRegion[]> {
    return this.exposureRegionRepository.query(`
        select name
        from exposure_region
        where region = $1 and pollutant = $2
        group by name
    `, [exposureRegion.region, exposureRegion.pollutant]).then(exposureRegions => {
        return exposureRegions.map(val => new ExposureRegion(val));
    });
}
```
username_1: conversion is better than nothing. But you may have issues anyway. For example, if you use custom database column names it will break, because raw results return database column names. And your relations won't be instances of real entities either - you may need to perform the conversion in a loop, and maybe recursively if needed.
username_7: Right, yes, thank you for pointing that out. As far as names go, that's not too bad since we can include the entity name as an alias for the column in the query. But hydrating the rest of the entity relations could definitely get hairy. Fortunately, I the majority of times that I turn to these raw custom queries is mainly for analytic queries that aren't trying to actually include joined table data. Thanks for all the clarification around this issue!
username_1: @username_7 also in most cases when you are loading your entities you don't really need to use raw queries. I'm sure `QueryBuilder` functionality is enough for 90% of use cases, and maybe even the remaining 10% you can rethink and bring into `QueryBuilder` functionality. So please make sure to try QueryBuilder first, before resorting to raw SQL
username_7: Yes, I whole heartedly agree, applications that reduce the amount of native sql will be easier to maintain down the line, easier to add new features to, etc., etc. I've been using hibernate/jpa and then querydsl (all within spring) for many years, and when I could use those technologies for my queries I did. It did help though that jpa added support for mapping native queries to entities for those queries that couldn't be done without native (for a while, this was primarily queries that used the db's spatial functions).
Looking ahead however, I can see that graphql is eating into the orm market, and it does look like a great fit for the types of queries it supports (partial/whole selects with where criteria and optional fetching of child entities). I especially liked hearing your response here:
https://github.com/typeorm/typeorm/issues/934#issuecomment-331779235
because I think that kind of support would really shine as features you can't get with graphql. I would say that custom selections like that were probably the #1 reason in the past I've had to return to raw sql-
username_7: I did also notice that sequelize let's you return entities from raw sql as well - http://docs.sequelizejs.com/manual/tutorial/raw-queries.html
username_8: when i pass parameter like this :
```
const status = await getConnection().manager.query(
`
SELECT "status"
FROM "order"
where "id" = $1`,
[id]
);
```
username_9: Wondering why typeorm chose not to return entities from raw sql queries, it really makes life harder than needed for the cases where custom / direct interface with the RDBMs is needed... any hints?
username_10: At least it should provide generic type to query function so developer can add the return type by himself.
username_11: How to pass array in queryparam in query method of typeorm dynamically using IN operator in sql query
astropy/astropy | 418723496 | Title: Make it so that running pytest directly doesn't require an install
Question:
username_0: At the moment, the following doesn't work in the astropy repository:
```
python setup.py build_ext --inplace
pytest astropy
```
unless ``pip install -e .`` has been run beforehand, because of an issue related to asdf. The error in the above case is:
```
$ pytest astropy
Traceback (most recent call last):
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 430, in _importconftest
return self._conftestpath2mod[conftestpath]
KeyError: local('/Users/tom/Dropbox/Code/Astropy/astropy/astropy/conftest.py')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/tom/python/dev/bin/pytest", line 11, in <module>
sys.exit(main())
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 61, in main
config = _prepareconfig(args, plugins)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 196, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/manager.py", line 68, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/manager.py", line 62, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/helpconfig.py", line 93, in pytest_cmdline_parse
config = outcome.get_result()
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 655, in pytest_cmdline_parse
self.parse(args)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 841, in parse
self._preparse(args, addopts=addopts)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 797, in _preparse
early_config=self, args=args, parser=self._parser
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/manager.py", line 68, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/manager.py", line 62, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/Users/tom/python/dev/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 699, in pytest_load_initial_conftests
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
File "/Users/tom/python/dev/lib/python3.7/site-packages/_pytest/config/__init__.py", line 373, in _set_initial_conftests
[Truncated]
File "/Users/tom/python/dev/lib/python3.7/site-packages/asdf/asdf.py", line 122, in __init__
self._process_extensions(extensions)
File "/Users/tom/python/dev/lib/python3.7/site-packages/asdf/asdf.py", line 206, in _process_extensions
self._extensions = default_extensions.extension_list
File "/Users/tom/python/dev/lib/python3.7/site-packages/asdf/extension.py", line 233, in extension_list
self._extension_list = AsdfExtensionList(self.extensions)
File "/Users/tom/python/dev/lib/python3.7/site-packages/asdf/extension.py", line 226, in extensions
self._load_installed_extensions()
File "/Users/tom/python/dev/lib/python3.7/site-packages/asdf/extension.py", line 197, in _load_installed_extensions
ext = entry_point.load()
File "/Users/tom/python/dev/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2286, in load
self.require(*args, **kwargs)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2303, in require
items = working_set.resolve(reqs, env, installer)
File "/Users/tom/python/dev/lib/python3.7/site-packages/pkg_resources/__init__.py", line 849, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'astropy' distribution was not found and is required by the application
```
@username_2 - could we make this a bit more robust so that e.g. we skip the ASDF tests if astropy is not installed?
Answers:
username_1: I mean as soon as you use things like entry points you really need to be installing the package. I am not sure this is really a usecase to be supported?
username_0: Looking at this closer, the issue is not about entry points per se, it's about the fact that currently the asdf pytest plugin is auto-registered if asdf is present but not necessarily if it is importable (it's only importable if astropy is installed).
@username_2 - is there any reason for using ``find_spec`` instead of just doing a ``try...except`` of importing asdf in ``conftest.py``?
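For illustration, a minimal sketch of the `try...except` variant (a hypothetical guard; the plugin module path below is a placeholder, not the real one):
```python
# conftest.py - hypothetical sketch of guarding the plugin import
try:
    import asdf  # noqa: F401
except ImportError:
    asdf = None

if asdf is not None:
    pytest_plugins = ['asdf.schema_tester']  # placeholder module path
```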
username_0: A related issue is that ``find_spec`` sometimes gives weird results, for example I uninstalled asdf but ``find_spec`` is still returning not ``None`` so that I get the following error when running the tests:
```
$ pytest astropy
ImportError while loading conftest '/Users/tom/Dropbox/Code/Astropy/astropy/astropy/conftest.py'.
astropy/conftest.py:22: in <module>
from asdf import __version__ as asdf_version
E ImportError: cannot import name '__version__' from 'asdf' (unknown location)
```
I do think that for fast development of packages unrelated to entry points it would be good to avoid having to install the package, even in develop mode.
Related to this, I wonder whether adding the ASDF schema tester should be done via a pytest command-line argument rather than some magic in ``conftest.py``? Why not just make it opt-in with an ``--asdf-schema`` flag and add that in CI?
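A hedged sketch of what such an opt-in flag could look like in `conftest.py` (the flag and marker names here are invented):
```python
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--asdf-schemas",
        action="store_true",
        default=False,
        help="run the ASDF schema tests",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--asdf-schemas"):
        return
    skip_asdf = pytest.mark.skip(reason="need --asdf-schemas option to run")
    for item in items:
        if "asdf_schema" in item.keywords:  # hypothetical marker
            item.add_marker(skip_asdf)
```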
username_2: The ASDF schema tester is a `pytest` plugin. The reason for the logic here is we can't make reference to the plugin if `asdf` is not installed (I believe this will cause a `pytest` error).
It's possible that using `try/except` is better in this case. I have seen weird issues with `find_spec` as well but assumed it was due to a degenerate state in my environment (e.g. installing a package with `conda` but removing it with `pip`).
username_2: My comment above only addressed one aspect of this issue. My opinion is that we do not want to support this. Packages should be installed before being tested. This is a big part of the motivation behind the `src`/`test` directory discussion that came up on Slack recently. I'm not necessarily advocating for that particular change, but I would draw a hard line about requiring the package to be installed before testing.
username_0: Ok I think I'm convinced an install is necessary, but I would still advocate that the asdf schema tester be declared as a proper pytest plugin via entry points and enabled with a command-line flag rather than being auto-included based on whether it's installed or not.
username_2: I've opened an issue to track this in `asdf`: https://github.com/spacetelescope/asdf/issues/657
Should this issue be closed now?
Status: Issue closed
username_0: Yep, I'm happy with that option. |
bisq-network/support | 912324467 | Title: Reimbursement for Cycle 25
Question:
username_0: I paid out 0.48265664 to traders. 30D BSQ/BTC rate is 0.00003930.
I ask DAO to reimburse me 35387.23BSQ.
Role report: https://github.com/bisq-network/roles/issues/93#issuecomment-855257876
Answers:
username_0: fce333682c82aaf10205037385e69ab24617cd6d910754a9a4ce0ebd57bac46a
username_1: This does not seem right. At the given rate the reimbursement should be 0.482655664/0.00003930 = 12281.31BSQ. It looks like the BSQ amount requested is the same as from the previous cycle, https://github.com/bisq-network/support/issues/925 where the number did add up.
@username_0 could you please check on this discrepancy?
username_0: I used the old reimbursement request as a template but I forgot to change the final amount of BSQ requested as reimbursement.
Please reject this reimbursement; I'll add the 0.4826 BTC to the next cycle's reimbursement request, using the 30D average at that moment.
username_0: Was accepted.
Status: Issue closed
|
Himmelt/FoshanVirusKiller | 290516318 | Title: New virus
Question:
username_0: Found a new virus!!
[Uploading autorun.zip…]()
Answers:
username_0: [bingdu.zip](https://github.com/username_0/FoshanVirusKiller/files/1652593/bingdu.zip)
[DeviceConfigManager.zip](https://github.com/username_0/FoshanVirusKiller/files/1652596/DeviceConfigManager.zip)
Status: Issue closed
|
glasklart/hd | 115962604 | Title: CyDown
Question:
username_0: **App Name:** CyDown
**Bundle ID:** com.julioverne.cydown
**Version:** 6.1.9
**iTunes ID:** none (Cydia app)
**Original Artwork:**
<img src="https://lh3.googleusercontent.com/-yPxACJnm-Mc/AAAAAAAAAAI/AAAAAAAAAEQ/_bA604PJkJE/photo.jpg" width="150" height="150" />
**Accepted Artwork:**
\#\#\# THIS IS FOR GLASKLART MAINTAINERS DO NOT MODIFY THIS LINE OR WRITE BELOW IT. CONTRIBUTIONS AND COMMENTS SHOULD BE IN A SEPARATE COMMENT. \#\#\#
Answers:
username_1: @username_0 The same icon as Cydia? Really?
username_1: @username_0 Isn't it more like this one?:

username_0: Sorry, wrong link. The OP has been updated, thanks for the heads up!
username_1: @username_0 OK, perfect! :+1:
username_1: 
https://cloud.githubusercontent.com/assets/2068130/11047431/a5755e36-8732-11e5-8dbd-aab39597506d.png
--- ---
Source:
https://cloud.githubusercontent.com/assets/2068130/11047439/b47bd216-8732-11e5-9350-9a563f45e21e.png
username_0: That was really fast, THANKS !!!!!!!
Status: Issue closed
|
sfztools/sfizz | 827160865 | Title: MacOS Logic Pro Auto-Reload Bug
Question:
username_0: Using latest sfizz with MacOS Logic Pro, the auto-reload feature of sfizz works sporadically.
Sometime it will reload when I save the `file.sfz`- and other times I will need to close/open sfizz plugin. It also works sometimes when I load another SFZ script- and come back to the one in question.
Answers:
username_1: Close/open the GUI, or reload the plugin?
If you can run in debug build, there's a console message when the file reloads.
username_2: Automatic reloading is dependent on the `process` function to send a message every period of a fixed number of frames.
There are some cases where the `process` function is not working in a periodic way, or not called at all.
One is the processor being suspended of course, or performing offline rendering, or in these save/load situations (where the processor performs a special `process` call whose purpose is parameter flush. see https://steinbergmedia.github.io/vst3_doc/vstsdk/faq.html#faqCommunication6).
No doubt, a more appropriate way would be that the work thread does this ticking by itself on the basis of some independent timer.
username_0: Closing/opening the GUI tends to work. I am not reloading the plugin.
It is very sporadic, no knowing when it will reload- and what will trigger it. (On Logic Pro MacOS)
Status: Issue closed
|
sveltejs/language-tools | 1087868508 | Title: Make semantic tokens more performant
Question:
username_0: We should add the following logic to semantic tokenization and see if it improves performance:
When a document is updated, check if was a added/deleted alphanumerical character and if the previous character is also alphanumerical. In that case the resulting word will be treated the same semantic-wise. Example: `const fo` -> `const foo` -> we know that `fo/foo` are the same semantic type. Therefore skip computation and mapping entirely and reuse the previous tokens and only adjust/update that result.
Answers:
username_0: I tested this out, turns out this doesn't help us at all unfortunately. VS Code's highlighting works on word boundaries already so if we type out something it will keep the same color. The proposed algorithm also has a fundamental flaw: When a variable name changes, its semantic meaning could change as well. `fo` could be a `const` but `foo` could be a `function`, which should result in different coloring which would no longer be the case when applying the outlined algorithm.
So what could be other ways to make this faster?
- somehow only let TS encode semantic tokens on the text change range. This would be faster since this would mean only encoding one variable most of the time. The danger is that we need to find out if the changed variable in question was at the declaration site, not usage site, and so maybe its semantic meaning could have changed (someone did `const foo` but now decided to do `let foo`). How to find out? We probably need to check the TS AST for that ("is modifying declaration site? -> play it safe and don't do optimzations").
- turn the logic around and adjust mappings after changes where we know that they can't have changed a semantic token - basically non-word characters.
username_1: What makes TypeScript fast at semantic tokens and this extension slow? Is it due to `svelte2tsx` transformations or something like that?
Presuming it is due to `svelte2tsx`, my naive first idea is to simply extract `<script>` content and parse for semantic tokens separately there. Everywhere else, the current approach is used. This could probably be sped-up using a worker, if that's possible. My instincts tell me the issue with this is that certain transformations need to be done in order for the semantic highlighting to be correct in `<script>` tags, but what are those transformations? I would imagine semantic highlighting wouldn't need "valid" code.
Although, if it was workerized, I would actually put the more complicated `svelte2tsx` semantic highlighting in the worker.
username_0: The priority queue was added in #1144 as a way to give more important computations like getCompletions priority. At that time I had the impression that semantic tokens was holding up the rest of the computations. But I did some more profiling recently, too and found that most of the time semantic tokens are not that much of a bottleneck as I thought. I still would need to test this out on some bigger projects first - for example in https://github.com/sveltejs/language-tools/issues/1139#issuecomment-904526552 it looks like semantic tokens is the bottleneck (but maybe that log is flawed since it doesn't take into account the other stuff that has run before returning stuff; or the TS got more performant in the meantime). Maybe we should also add a third option, `mid`, which only waits 200ms.
username_1: If a delay is needed, I would make it so that the semantic tokens don't get indefinitely delayed if you keep typing. As in, throttle semantic token requests rather than debounce them. That should make them feel a lot more responsive despite having a delay. |
tinymce/tinymce | 263659358 | Title: Feature request: allow addContextToolbar to dynamically generate items
Question:
username_0: I propose allowing the second parameter of addContextToolbar to be a function which receives the clicked element as parameter and returns a string or array representing the items to be included in the toolbar. This would allow showing different toolbar items based on the element attributes or other application state. The first parameter would still determine whether the toolbar should be displayed at all, and the second would allow customizing it when it is displayed.
Currently it's only possible to have a function as the first parameter, so in order to implement a dynamic toolbar, I would have to call addContextToolbar for each possible permutation of the toolbar items, which is less than ideal both in terms of code organization and performance.
Answers:
username_1: That sounds like a good idea, but really, we might be rebuilding the contextmenu very soon for a few different reasons.
1. It is a plugin... a bit overkill, should be in core.
2. It isn't really very context aware, the contextual changes are a bit of a hack now and can be done a lot better, and allow for what you're suggesting.
username_2: Hi!
Any news here? :) Is it possible to do in TinyMCE 5? |
synapse-wireless-labs/component-lab | 264806944 | Title: Problem with visualization components from another module
Question:
username_0: Hello. After installing and including library i have next problem, where i import component from another module (creating in module ui-components) in console write next message: 'app-search-input' is not a known element:
1. If 'app-search-input' is an Angular component, then verify that it is part of this module.
2. If 'app-search-input' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. ("
[ERROR ->]<app-search-input></app-search-input>
"): ng:///ExperimentModule/ExperimentCaseComponent.html@1:12
How i can use this tool with another module component?
P.S. Included module working with app.module.
P.P.S. app and modules was generated by angular-cli
Answers:
username_1: Can you provide your lab file? If you're importing an existing NgModule into your `LabModule` in your lab file, it must expose its components through the `exports` property in the NgModule metadata. |
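For illustration, a minimal sketch of what the imported module would need (the component and module names are guesses based on the `app-search-input` selector):
```typescript
import { NgModule } from '@angular/core';
import { SearchInputComponent } from './search-input.component';

@NgModule({
  declarations: [SearchInputComponent],
  // Without this, modules importing UiComponentsModule (like a LabModule)
  // can't render <app-search-input>.
  exports: [SearchInputComponent],
})
export class UiComponentsModule {}
```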
psf/black | 476394267 | Title: Unstable formatting on long assert and binary op with parenthesized operand.
Question:
username_0: **Operating system:** Ubuntu 16.04
**Python version:** Python 3.7.4
***Black* version:** e664517
**Does also happen on master:** Yes
I've verified that this is a regression from 19.3b0. This is as much as I can minimize.
It doesn't matter why the line is too long. It could be any combination of the lengths of the three elements and the indentation level. It also doesn't matter what the operator is.
```diff
--- source
+++ first pass
@@ -1,2 +1,4 @@
-assert X, X % (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
+assert (
+ X
+), X % XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--- first pass
+++ second pass
@@ -1,4 +1,4 @@
-assert (
- X
-), X % XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+assert X, (
+ X % XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+)
```
Status: Issue closed
Answers:
username_1: Just rechecked, Black on stable and master no longer crash so I'll be closing this. @username_0 if you find the current output not ideal:
```python3
assert X, X % (
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
)
```
... please open a new design issue.
Thanks for reporting and sorry for taking so long to get back to you! |
pyj9293/iothome | 199140264 | Title: Outputting TTS responses to speech recognition queries
Question:
username_0: Since the speech recognition API is now working properly, we plan to receive the speech recognition results as string values.
Using these values, we intend to output a response for each specific query; once text output is complete, we will add TTS voice output on top of it.
Answers:
username_1: 1. Use the Naver speech synthesis API
https://developers.naver.com/docs/labs/tts
username_1: Please review.
username_2: I've reviewed it.
What voice are you planning to use?
username_1: There are two options:
1. Add a separate voice setting so users can choose the voice they want
2. Decide on the voice in a meeting
But right now, which voice to use does not seem important.
username_0: 2. I additionally coded it so that the recognized speech value is saved to a txt file.
3. The generated txt file is read by nvoice.py (Naver speech synthesis + mp3 playback) and turned into an mp3 file via speech synthesis.
4. The generated mp3 file is played back immediately.
Things currently work as described above, and for now we can make use of the speech recognition results.
Going forward, I plan to insert code that selects an appropriate response for each query.
That's all.
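As a rough illustration of the nvoice.py flow described above (a sketch under assumptions: the request parameters follow the linked Naver TTS docs as I recall them, and the file names, env vars, and player are made up):
```python
# nvoice.py-style sketch: read recognized text, synthesize with Naver TTS, play it
import os
import urllib.parse
import urllib.request

CLIENT_ID = os.environ["NAVER_CLIENT_ID"]          # assumed env vars
CLIENT_SECRET = os.environ["NAVER_CLIENT_SECRET"]

# file written by the speech recognition step
with open("recognized.txt", encoding="utf-8") as f:
    text = f.read().strip()

data = urllib.parse.urlencode({"speaker": "mijin", "speed": "0", "text": text})
req = urllib.request.Request(
    "https://openapi.naver.com/v1/voice/tts.bin",
    data=data.encode("utf-8"),
)
req.add_header("X-Naver-Client-Id", CLIENT_ID)
req.add_header("X-Naver-Client-Secret", CLIENT_SECRET)

with urllib.request.urlopen(req) as resp, open("response.mp3", "wb") as out:
    out.write(resp.read())

os.system("mpg123 response.mp3")  # any command-line mp3 player works
```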
Status: Issue closed
|
screwdriver-cd/screwdriver | 237051764 | Title: Job Templates - Second Pass
Question:
username_0: *Background* See https://github.com/screwdriver-cd/screwdriver/issues/470
Answers:
username_1: ## Template tagging
For template tagging, we decided to have a new table called `templateTags`.
This will be a map of `tag` and `version`.
1 version can be mapped to multiple tags, but 1 tag can only be mapped to 1 version.
**Example:** `[email protected]` can be tagged as `stable` and `latest`
The templateTags table will look like this
| id | name | tag | version |
|---|---|---|---|
| 1 | mytemplate | stable | 1.2.0 |
| 2 | mytemplate | latest | 1.2.0 |
If they retag `stable` with version `1.1.0`, then the table will look like
| id | name | tag | version |
|---|---|---|---|
| 1 | mytemplate | stable | 1.1.0 |
| 2 | mytemplate | latest | 1.2.0 |
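In SQL terms, the retag described above would be something like (a sketch against the table layout shown, not the actual implementation):
```sql
UPDATE templateTags
SET version = '1.1.0'
WHERE name = 'mytemplate' AND tag = 'stable';
```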
username_2: In order to determine the exact version that was published, we need to:
* save the output of the publish to a file
* parse the file for the exact version that was published
* save that version to be used with the tagging operation
This problem kind of goes away if the template owner explicitly defines a `major.minor.patch` version in their `sd-template.yaml`. The oddities with this method is that you have to be sure to change it in two places, and you can't run more than one build without a code change.
2. Defining the `--name` value should be implied
Having an option for `--name` to be provided is nice to explicitly define the template to tag. It would be more convenient to have that value determined by the `sd-template.yaml` file. The publish operation determines the name based on the `sd-template.yaml` file so I would expect that the tagging operation would infer the same value.
Status: Issue closed
|
jfc3/atehere | 505641070 | Title: Add Gibson's Donuts to the MEM and TN JSON Files
Question:
username_0: Need to add Gibson's Donuts to the MEM and TN JSON files.
https://www.google.com/search?source=hp&ei=c2-fXbmQMbaT0PEP8NuA8A0&q=+Gibson’s+donuts+&oq=+Gibson’s+donuts+&gs_l=mobile-gws-wiz-hp.3..0j0i22i30l7.17975.21370..22432...0.0..0.234.1342.0j6j1....2..0....1.......3..41j41i275j41i22i30j46j0i10.3FFRL-c8BFY
Status: Issue closed
Answers:
username_0: Added Gibson's Donuts to the MEM and TN JSON files. |
elishacloud/Silent-Hill-2-Enhancements | 1067495570 | Title: Large mouse pointer on menu screens
Question:
username_0: I'd previously played Silent Hill Enhanced Edition in 2020 without this issue. I recently re-installed the game from scratch (following the steps precisely and correctly, just as before). I am using all of the same hardware as before. I am playing with a PC controller via XinputPlus (just as before). Now I have a large mouse pointer in the upper-left corner of my menu screens. Importantly, it does NOT correspond to my actual mouse cursor (which shows up on the screen as a small pointer and can be moved around normally).
https://imgur.com/tQUGoUJ
[d3d8.log](https://github.com/elishacloud/Silent-Hill-2-Enhancements/files/7627598/d3d8.log)
Answers:
username_1: The large mouse pointer is correct and part of the game, and will appear in the game regardless if using a controller or not.
The game should take full exclusivity of your mouse when the game is in focus, so you shouldn't be able to see your normal Windows cursor while playing. While playing, your mouse cursor is replaced by the large, in-game mouse cursor you've mentioned.
If you Alt + Tab out of the game you will then see your normal mouse cursor, and can even hover your normal mouse cursor over the game window. Once you click on the game window again, the game will take exclusivity of your mouse once more.
username_1: PS - Looking at your log file, you have some old files we no longer use in our project. I'd recommend **removing** the following files/folders from the game's directory:
```
\reshade-shaders\
d3d8.dat
d3d9.dll.bak
d3d9.log
ReShade.ini
ReShade_sh2pc.ini
```
username_0: "The game should take full exclusivity of your mouse when the game is in focus, so you shouldn't be able to see your normal Windows cursor while playing. While playing, your mouse cursor is replaced by the large, in-game mouse cursor you've mentioned."
Except that this is **not** what happens for me. My mouse cursor shows up as the small default Windows cursor in the game; the large pointer remains in the upper-left corner and never moves. Alt-Tabbing has no effect. The game is most definitely in focus and I am able to play as normal.
username_1: I understand now. This is the first instance of someone reporting an issue like this.
Temporarily remove the `d3d8.dll` file from the game's directory. This will more-or-less revert the game back to vanilla. Does the game's mouse cursor function correctly for you then?
username_0: Interestingly the issue persists after removing d3d8.dll. The large pointer in the upper left is now replaced by a bloody knife, and my windows mouse pointer is still visible and still non-functional.
username_1: Okay, that at least confirms that the project files shouldn't be the cause of this. Since this isn't happening to any of your other games, I'm wondering if there's some PC setting preventing exclusivity with sh2pc.exe.
Make a copy of sh2pc.exe and rename it to something else like sh2pc_test.exe. Launch the game through this renamed executable. The game should still run. Does the issue persist?
username_0: Renamed to sh2pc_test.exe and the issue persists.
FWIW, I have over 100 other PC games on various platforms (Steam, GOG, Origin, etc) and this doesn't happen in any other game.
username_1: I'm honestly not sure...
The last thing I can suggest to try is 100% reverting the game back to vanilla. To do this, temporarily move these files/folders someplace else (the list below is assuming you've already removed the files/folders I mentioned previously):
```
\sh2e\
alsoft.ini
d3d8.dll
d3d8.ini
d3d8.log
d3d8.res
Dinput.dll
Dinput8.dll
dsoal-aldrv.dll
dsound.dll
XInput1_3.dll
XInputPlus.ini
```
You can also try unplugging any/all controllers and gamepad peripherals then launching the game to see if you regain mouse exclusivity. Maybe a certain peripheral is doing wonky things with the game? At this point, I'm totally out of ideas.
username_0: FIXED!
Went my taskbar, right-clicked the "NVIDIA settings" icon, and clicked "Exit." Started the game and now experiencing intended functionality; my mouse is represented by the large pointer, can be moved around, and is fully functional.
Weirdly enough, when this Nvidia settings icon reappears after I close the game, the issue doesn't come back even if I don't close the icon again. Not sure what was going on there. Driver 472.12 fwiw.
Thank you your timely comments here all the same, and thanks for the outstanding work on this project! You've really brought this classic back to life, and it's deeply appreciated.
username_1: We should've started you off by restarting your PC. 🤣 Years of working in an office and talking to folks in IT have taught me this, yet I didn't even think to suggest it!
Anyway, glad it's all sorted out for you now. I'll close this ticket then. |
Pilen/legedatabasen | 220032466 | Title: Filters and categories
Question:
username_0: Right now you cannot look more closely at categories from the filters, and from the category view you can only see one category at a time.
When you use filters, it is typically because you want something very specific.
What if the selector box were shown below the filter box, but instead of only being able to pick one, you could pick as many as you want? Then you could find games that take 10 minutes, for "pilte" (a scout age group), that are either tag games or roughhousing games.
The selector would then work as checkboxes instead of radio buttons.
It should work so that if all categories are selected, or none, games from all categories are shown.
It will probably take up so much space that you can no longer see the games update below the menus right away. One solution could be to show, below the strainer ("dørslaget") in the filter box, how many results were found.
I can imagine this would make the filter feature much more useful.
Like at the championship, where you either use the simple named settings or really fine-tune everything yourself.
storycoder/joeSTOCK | 88185761 | Title: FriendlyID integration with Devise
Question:
username_0: 1. Merged the Devise branch to master and cloned it to integrate slugs via friendly_id.
2. Installed the friendly_id gem
3. entered the following in terminal:
rails g friendly_id
rails g migration AddSlugToUsers slug:string:uniq (also tried AddSlugToUser)
rake db:migrate
4. Got the following error message:
/Users/josephmargolis/Documents/tts/rails/joestock_add_slug_to_devise/joeSTOCK/db/migrate/20150611020936_create_users.rb:5: Can't assign to false
t.string :email null: false, default: ""
This is exactly what I did to get it to migrate before. The only difference is that I wasn't working with Devise.
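For what it's worth, the error message points at plain Ruby syntax rather than anything friendly_id- or Devise-specific: the flagged line is most likely missing the comma after `:email`. A minimal sketch of the corrected migration (the surrounding body is assumed, not taken from the repo):
```ruby
# db/migrate/20150611020936_create_users.rb -- hypothetical sketch of the fix
class CreateUsers < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :email, null: false, default: ""  # note the comma after :email
      # ... remaining Devise columns ...
      t.timestamps
    end
  end
end
```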
bagisto/bagisto-bulk-upload | 945070677 | Title: Uploading simple products records chrome crash
Question:
username_0: When we try to upload more than 1,000 simple products, it crashes the browser, and the products are not uploaded properly.

As you can see in the screenshot, most of the products don't have a name field.
opsgenie/opsgenie-go-sdk-v2 | 730329315 | Title: No field for url in APIBasedIntegrationRequest
Question:
username_0: In order to create an API integration of type ```Webhook```, the REST API expects a ```url``` field, which basically contains the endpoint where Opsgenie will make requests on alerts. However, this field is currently missing from ```APIBasedIntegrationRequest```. As a result, the SDK cannot create Webhook integrations.<issue_closed>
Status: Issue closed |
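For context, a sketch of what the missing field could look like on the request struct; the field name, JSON tag, and neighboring fields are assumptions for illustration, not the SDK's actual definition:
```go
// Hypothetical sketch only -- not the real opsgenie-go-sdk-v2 source.
type APIBasedIntegrationRequest struct {
	client.BaseRequest
	Name string `json:"name"`
	Type string `json:"type"` // e.g. "Webhook"
	// ... other existing fields ...
	Url string `json:"url,omitempty"` // endpoint Opsgenie should call on alerts
}
```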
FStarLang/FStar | 262380773 | Title: quoting a binder yields tm_unknown
Question:
username_0: The following fails:
```
module Minimal
open FStar.Tactics
#set-options "--ugly --print_full_names --print_bound_var_types"
let test =
assert_by_tactic (True ==> True)
(b <-- implies_intro;
qb <-- quote b;
qf <-- quote (fun (b: binder) -> print "f"); // f: tactic unit
let q_fofb = mk_app qf [(qb, Q_Explicit)] in
print ("::: " ^ term_to_string q_fofb);;
unquote #(tactic unit) q_fofb;;
fail "A")
```
It says `Failed to resolve implicit argument of type 'FStar.Reflection.Types.binder' introduced in (?410 uu___#713907)`
I chatted about this briefly with @username_1; CC @nikswamy.
Answers:
username_1: This now works in my branch. The problem is that we were not able to typecheck the embedded alien terms again, and since they are just a Tm_unknown + meta, you were getting this error. The fix is to carry their type and use it when typechecking. There's also a fix needed in the normalizer, but I'm not super convinced of it, so not merging for now.
username_1: A partial fix now in master. Your example works, and you can unquote the tactic, but if you try to call it the normalizer will get stuck. I think the memoization logic is to blame, but fixing it made CI time rise by 50%; that's the part I'm dubious about, so that did not go into master yet.
Status: Issue closed
username_1: This is the test
```
module Aliens
open FStar.Tactics
(* Testing that aliens are typechecked/unquoted properly *)
let test =
assert_by_tactic (True ==> True)
(b <-- implies_intro;
qb <-- quote b;
qf <-- quote (fun (b: binder) -> print "f"); // f: tactic unit
let q_fofb = mk_app qf [(qb, Q_Explicit)] in
print ("::: " ^ term_to_string q_fofb);;
tac <-- unquote #(tactic unit) q_fofb;
tac;;
trivial)
```
which fails with
```
Failure("Expected Result.Success or Result.Failed applied to a single argument, got (match
match reify ((fun b -> FStar.Tactics.Builtins.print \"f\") _ ()) _ with
| FStar.Tactics.Result.Success #uu___566985 (#uu___566981, #uu___566982, #a, #q) ->
(let () = FStar.Tactics.Types.tracepoint q in
reify (FStar.Tactics.Builtins.trivial ()) (FStar.Tactics.Types.decr_depth q))
<:
FStar.Tactics.Result.__result Prims.unit
| FStar.Tactics.Result.Failed #uu___567613 (#uu___567609, #uu___567610, #msg, #q) ->
FStar.Tactics.Result.Failed (FStar.Pervasives.Native.Mktuple2 msg q)
with
| FStar.Tactics.Result.Success #uu___566985 (#uu___566981, #uu___566982, #a, #q) ->
(let () = FStar.Tactics.Types.tracepoint q in
reify (FStar.Tactics.Effect.return a ()) (FStar.Tactics.Types.decr_depth q))
<:
FStar.Tactics.Result.__result Prims.unit
| FStar.Tactics.Result.Failed #uu___567613 (#uu___567609, #uu___567610, #msg, #q) ->
FStar.Tactics.Result.Failed (FStar.Pervasives.Native.Mktuple2 msg q))
<:
FStar.Tactics.Result.__result Prims.unit")
```
The commit to fix memoization is 7e32a4e59785d51b78a38210f9440211a4c59e8f, but seems to make things much slower (which I think is a symptom that there are many examples where we're memoizing not-yet-normal-forms).
Status: Issue closed
username_1: Amazingly, this is also fixed by other normalizer and reification fixes. The programming gods are smiling upon us. Merging it soon, so closing this. |
biobricks/bionet-new | 356088135 | Title: Lab Editor - New Item Form
Question:
username_0: When the user clicks the red delete button, nothing happens. Perhaps we can replace this delete with a cancel button that clears the new form and hides it from view.

Answers:
username_0: Inventory Refactor - commit <PASSWORD>
Status: Issue closed
|
openstax/os-webview | 690475550 | Title: Enter and display star ratings for Tech Scout Partners
Question:
username_0: # Description
[Hi fi design](https://projects.invisionapp.com/share/KDY97ME3BHF#/screens/426790129)
Logged-in users should be able to rate Partners on a 1-to-5 star scale.
# Acceptance criteria
- [ ] Add "Rate this resource" tab to the lightbox
- [ ] Require sign-in
- [ ] Accept star rating
- [ ] Allow update of user's star rating
- [ ] Display star average and distribution
- [ ] Display star rating on Tech Scout cards
- [ ] Display star rating on Partners list on the Book Details page
- [ ] Display star rating in the title bar of the lightbox
- [ ] Add sort by rating to the sort menu
- [ ] ?? May have to do custom handling for advanced filter by "# stars and up" |
dotnet/roslyn | 1128797767 | Title: Error format string not filled for Error ENC0088 Modifying the body of '{0}' will prevent the debug session from continuing due to internal error: {1}
Question:
username_0: **Notes**
When discussing the behaviour in the C# Discord, @username_1 mentioned that the error string isn't in the Roslyn sources...
Answers:
username_1: I've done a [repo search](https://grep.app/search?q=debug%20session%20from%20continuing) and yeah, I can't find it.
cc @username_3 @username_4
username_2: [GitHub Code Search found it](https://cs.github.com/?scopeName=All+repos&scope=&q=%22will+prevent+the+debug+session+from+continuing+due+to+internal+error%22) in various older forks. The message was reworded in https://github.com/dotnet/roslyn/commit/911be131f6e14b71880244520c1a9c4bffb83750. The new wording is:
https://github.com/dotnet/roslyn/blob/82094cf4d4648a4d5fe308a3665993dfcaa63e77/src/Features/Core/Portable/FeaturesResources.resx#L1219-L1222
(FYI, `#error version` will tell you what commit Roslyn was built from, it's nice for situations like this.)
username_1: Thanks @username_2!
It looks like the message needs two arguments, but only one is passed:
https://github.com/dotnet/roslyn/blob/82094cf4d4648a4d5fe308a3665993dfcaa63e77/src/Features/Core/Portable/EditAndContinue/AbstractEditAndContinueAnalyzer.cs#L1198
Probably nothing is replaced at all if the number of passed arguments doesn't match the number of placeholders.
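To illustrate the arity mismatch (this is not Roslyn's actual formatting code): a template with a `{1}` placeholder can never be filled by a single argument; `string.Format` throws, and a formatter that defensively swallows the `FormatException` would surface the raw, unfilled template, which matches the reported behavior:
```csharp
using System;

class FormatArityDemo
{
    static void Main()
    {
        var template = "Modifying the body of '{0}' will prevent the debug session " +
                       "from continuing due to internal error: {1}";

        string message;
        try
        {
            // Only one argument for two placeholders: string.Format throws.
            message = string.Format(template, "M()");
        }
        catch (FormatException)
        {
            // A lenient formatter might fall back to the unfilled template.
            message = template;
        }

        Console.WriteLine(message); // prints the template with {0}/{1} intact
    }
}
```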
username_3: @username_4
username_4: Hilariously the message for the diagnostic has been broken since the day it was added (for the record, only 1 month after I joined the team 😛): https://github.com/dotnet/roslyn/pull/46793/files#diff-a61167b90b09bc3d02049a4ab4e309ad36e6b5a3a60f6f988be10272418007f0R1182-R1185
The real fix is to try to work out _why_ the internal error occurred, but "doesn't repro in a console app" does not fill me with hope
username_4: The internal error does not repro in VS 2022, so the root cause is at least something we've fixed already.
username_5: Microsoft seems to put less consideration into stabilizing their product when releasing.
Often, when debug errors appear that didn't appear previously, rebooting your computer resolves the issue.
Status: Issue closed
|
lizhongzhen11/dailyGain | 391728133 | Title: A neat trick in the Vue source: res.push.apply(res, c) -- 2018-12-17
Question:
username_0: I learned at least two things today.
1. While reading the `normalizeArrayChildren` method in the Vue source I came across the code from the title. I couldn't understand it at first, so I experimented in the console:
```js
var a = [1]
var b = [2, 3]
a.push.apply(a, b) // 3
a // [1, 2, 3]
```
- This achieves the same effect as `concat`, but by using `apply()` it avoids the overhead of `concat` allocating a new array in heap memory. I remember reading comments online saying that `concat` doesn't perform well. (See the spread-operator note after this list.)
- I blame my own lack of skill.
2. For the notes on backslashes, see my blog.
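As an aside: since ES2015 the same in-place append can be written with the spread operator, which avoids the `apply` indirection (both forms pass the source array as call arguments, so both can hit the engine's argument-count limit for very large arrays):
```js
var a = [1];
var b = [2, 3];

// Same effect as a.push.apply(a, b): appends in place, no new array allocated
a.push(...b);
console.log(a); // [1, 2, 3]
```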
dart-lang/sdk | 221343070 | Title: Support noreturn annotation on functions
Question:
username_0: This should have similar effect as GCC:
https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes
It is helpful in two cases:
1. To ensure that a noreturn function actually does not return any value, even void. It should terminate by calling other noreturn functions.
2. To eliminate dead code in the same basic block after the call to a noreturn function.
Answers:
username_0: #29213
username_1: How does this relate to the `@alwaysThrows` annotation (#17999)?
Status: Issue closed
username_2: With the upcoming null-safety feature, the properties mentioned [here](https://github.com/dart-lang/sdk/issues/29334#issue-221343070) are obtained with a function whose return type is `Never`. It is a compile-time error for such a function to `return`, and also to be able to reach the end of the function body (which implicitly means `return;` at that point). So it's coming!
This makes `@alwaysThrows` obsolete as well.
I'll close this issue, because there is no doubt that the null-safety feature bundle is coming, and it will deliver these features. |
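A small sketch of what that looks like in practice (names are illustrative; requires Dart with null safety):
```dart
// A function returning `Never` must not return normally; it has to throw
// (or loop forever). Reaching the end of its body is a compile-time error.
Never fail(String message) {
  throw StateError(message);
}

int parsePositive(String input) {
  // Flow analysis knows `fail` never returns, so the expression still has
  // type `int`, and no missing-return diagnostics appear after the calls.
  final value = int.tryParse(input) ?? fail('"$input" is not an integer');
  if (value <= 0) fail('expected a positive value, got $value');
  return value;
}

void main() {
  print(parsePositive('42')); // 42
}
```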
communicationartetmarges/art-et-marges | 335755753 | Title: add artists http://www.artetmarges.be/en/expo.html
Question:
username_0: Add at the bottom of the page:
Exhibited Artists :
<NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> – COOREMAN - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME> - <NAME><issue_closed>
Status: Issue closed |
Facepunch/garrysmod-issues | 16933144 | Title: error in my game, NPCs appear wrong
Question:
username_0: I'd like to report that my game was OK, but now it has a problem. The characters appear standing up wrong.
this is 1 photo of my game

Answers:
username_1: This bug still exists on dedicated servers, and it affects not just NPCs but also props and other entities...
pinetree408/VFT | 154945501 | Title: Making parser class for architecture xml and test log file
Question:
username_0: May be needed to discuss about how the parser module send parsed data to filter module.
Answers:
username_0: I found that the log files (PO_log_20160510_2024, PO_log_20160510_2024_testBankName) provided by the TA have problems, as in the following examples.
Problem 1 :
-----------------------------------------------------------
<PO5::CBanking->CAtm>
<13,Money.java,48,call(public java.lang.String java.lang.StringBuilder.toString()),method-call,java.lang.StringBuilder,toString,[]>
[$0.00]>
-----------------------------------------------------------
"[$0.00]>" is incomplete form.
Problem 2 :
Is the following log correct? Normally, almost every log entry has a pair of
one <PO#: .. -> ..> and one <12, ~.java, .. , .. , .. >,
but sometimes one <PO#: .. -> ..> has two <12, ~.java, .. , .. , .. > entries.
-----------------------------------------------------------
<PO5::CBanking->CAtm>
<12,CashDispenser.java,28,set(private com.atmsimulation.banking.Money com.atmsimulation.atm.physical.CashDispenser.cashOnHand),field-set,com.atmsimulation.atm.physical.CashDispenser,cashOnHand,<PO3::CSimulation->CAtm>
<40,Money.java,46,execution(public java.lang.String com.atmsimulation.banking.Money.toString()),method-execution,com.atmsimulation.banking.Money,toString,[]>
-----------------------------------------------------------
Problem 3:
We don't know the meaning of each element in the
"<40,Money.java,46,execution(public java.lang.String com.atmsimulation.banking.Money.toString()),method-execution,com.atmsimulation.banking.Money,toString,[]>" form.
Status: Issue closed
|
sdnhm-vertnet/sdnhm-birds | 175013650 | Title: Monthly VertNet data use report for 2016-8, resource sdnhm_birds
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of the reports with this link:
http://tools-usagestats.vertnet-portal.appspot.com/reports/84b26828-f762-11e1-a439-00145eb45e9a/201608/
Raw text and JSON-formatted versions of the report are also available for
download from this link. In addition, a copy of the text version has been
uploaded to your GitHub repository, under the "Reports" folder. Also, a full
list of all reports can be accessed here:
http://tools-usagestats.vertnet-portal.appspot.com/reports/84b26828-f762-11e1-a439-00145eb45e9a/
You can find more information on the reporting system, along with an
explanation of each metric, here:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
debois/elm-mdl | 326578596 | Title: Library continuity
Question:
username_0: Hi, I'm just wondering if you plan on maintaining this library?
I see a lot of issues opened and a few PRs, also there's a new revision from Google to Material.
Thanks
Answers:
username_1: This library is unofficially unmaintained. The new version, which is a port of Material Components Web, can be found here: https://github.com/username_4/elm-mdc
username_2: Really? Wouldn't there be a way to release v9?
I took a quick look at elm-mdc; unfortunately, neither the docs nor the demo site are up to the level of elm-mdl's, which is not very engaging. I was interested in the "Select" component, and its behaviour in the demo was very buggy while it seems nice in mdl's demo, but that part of the code has not been released yet.
username_1: @username_2 you could, conceivably, fork this repo and publish your own version. There are actually quite a few forks so someone may have done that already. But ideally, this repo would be updated to the new version. Maybe @username_5 or @username_4 can comment on future plans?
username_3: @username_1 I think it would be a good idea to mention this in the repo description and as the first line of the readme. Might prevent people from doing what I did: implemented this library, found some bugs, came here and found out there's a successor.
Thanks
username_4: My focus shifted to https://github.com/username_4/elm-mdc. Apologies.
@username_5 Can you do something about it? Can you archive/ deprecate this repository, seeing that there is no activity?
username_3: @username_4 It would also be great to mention this on the demo site, as that's what attracts many people to this library in the first place
Status: Issue closed
username_5: Done. @username_4, let me know if I should add any additional info on elm-mdc. |
littleflute/blog | 474738259 | Title: From the muddy banks of the Wishkah [CD] by Nirvana (Musical group)
Question:
username_0: https://library.ci.corvallis.or.us/?id=26928003301282#section=resource&resourceid=13050083¤tIndex=0&view=fullDetailsDetailsTab
Answers:
username_1: https://username_0.github.io/blcd2/cd06
username_1: https://mp.weixin.qq.com/s/DvrY0RMSSwR42pLOGRE20Q
Status: Issue closed
|
nodejs/help | 408184157 | Title: Nodejs halted/stopped client to access the site or crashing browser
Question:
username_0: * **Node.js Version**: v8.15.0
* **OS**: Ubuntu 18.10
* **Scope (install, code, runtime, meta, other?)**:
* **Module (and version) (if relevant)**: socket.io, react redux, server NGINX
Database : mongodb
Yesterday, we ran a beta test on our chat site with 20+ users. This is not the first time we have run such a beta test; we used to have bugs, but we have fixed everything now. So we ran another beta test to make sure the chat can hold more concurrent users.
Technically, it works like a charm when we have 10 or 15 users at a time, but when we reach 20+ or 30+ it crashes on the client side. Attaching a screenshot of the chat, the HTOP result, and the console logs.

I have increased the ulimit system-wide from 1024 to 90000 for all processes.
Open-file limits were also increased system-wide for root and nginx.
Our developer is also confused and has no idea what is causing this issue, because there is no error log when this happens.
Just take a look at the HTOP result in the screenshot and tell me what is going on: is Node.js causing this issue?
Answers:
username_1: @username_0 - can you clarify:
- when you say crash, did you mean the client just terminates? or becomes un-usable?
- what is the data you want to highlight from the `HTOP` tool? memory / CPU / I/O ?
high CPU or high I/O are not expected to crash the process, they can potentially slow down things. So it is important to know what happened to the process - termination or hang, and what you see in the `top` output.
username_1: no followup, closing. feel free to reopen if the issue resurfaces
Status: Issue closed
|
shader-slang/slang | 705004158 | Title: Compiling Slang to other shading languages (Metal, ESSL)
Question:
username_0: Is the Slang compiler able to generate Metal OR ESSL shaders instead of GLSL?
If the Slang compiler doesn't already have this feature, then it might be possible to generate Metal or ESSL shaders using [ShaderConductor](https://github.com/microsoft/ShaderConductor).
Answers:
username_1: Support for more targets is always desirable, and we'd welcome pull requests that work toward support for the Metal shading language, ESSL, etc.
The existing GLSL output from Slang can sometimes be used on OpenGL, but we currently don't have logic to avoid/detect Vulkan-specific features. A back-end to support the OpenGL ES shading language (and/or WebGL) would probably need to start with making our GLSL output more compatible with desktop OpenGL.
Using another stage of translation on the output from Slang (as you describe) could be useful as a short-term solution for users who need support for platforms other than the current ones. The risk on that path is that the output for Metal, ESSL, etc. becomes limited to the “lowest common denominator” features of what is exposed in SPIR-V.
In the long run we believe that a dedicated Metal back-end will be worthwhile because then it will be possible for Slang to expose and take advantage of features that are only available on that API.
Our goal with Slang is that it allows for portable code when you want it, but also supports writing code that uses API-specific features when you need to. |
hyperledger/iroha-python | 528671847 | Title: Transport Closed
Question:
username_0: Hi all,
I am trying to execute the example usage script demonstrated in the iroha-python repo README:
```python
from iroha import Iroha, IrohaCrypto, IrohaGrpc
iroha = Iroha('<EMAIL>')
net = IrohaGrpc('127.0.0.1:50051')
alice_key = IrohaCrypto.private_key()
alice_tx = iroha.transaction(
[iroha.command(
'TransferAsset',
src_account_id='<EMAIL>@test',
dest_account_id='bob@test',
asset_id='bitcoin#test',
description='test',
amount='1'
)]
)
IrohaCrypto.sign_transaction(alice_tx, alice_key)
net.send_tx(alice_tx)
for status in net.tx_status_stream(alice_tx):
print(status)
```
And I get the following error:
```
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Transport closed"
debug_error_string = "{"created":"@1574768622.626196952","description":"Error received from peer ipv4:127.0.0.1:50051","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Transport closed","grpc_status":14}"
```
Any ideas??
Many thanks
Answers:
username_1: Are you sure your Iroha node is running on your computer on port 50051?
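One quick way to check username_1's question independently of the SDK is to test whether anything is listening on that port at all (a generic sketch, not Iroha-specific):
```python
import socket

# Fails fast with ConnectionRefusedError (or a timeout) if no Iroha node,
# or anything else, is listening on 127.0.0.1:50051.
with socket.create_connection(("127.0.0.1", 50051), timeout=3):
    print("port 50051 is reachable")
```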
argoproj/argo-rollouts | 514411689 | Title: ExperimentStep specRef does not support preview use case
Question:
username_0: I was trying to come up with an example where an experiment could be used as part of a rollout step, to support a preview/staging replicaset. The requirement is that a preview replicaset should be able to run in the production namespace, but without receiving any traffic (because it does not have the same selector labels).
```yaml
strategy:
canary:
steps:
- experiment:
duration: 30
templates:
- name: preview
specRef: canary
```
The problem with the above example is that `specRef: canary` causes the selector labels from the canary pod template spec to be carried forward. This means that it also includes labels like `app: guestbook` from the pod template, which means that traffic will be directed to the preview -- the opposite of what I'm trying to achieve.
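To make the failure mode concrete, a sketch of the label overlap (illustrative fragments, not a real manifest): the experiment's pods inherit the Rollout pod template's labels via `specRef`, so the Service selector matches them too.
```yaml
# Rollout pod template (illustrative)
template:
  metadata:
    labels:
      app: guestbook        # carried into the experiment's ReplicaSet via specRef
---
# Service (illustrative)
apiVersion: v1
kind: Service
spec:
  selector:
    app: guestbook          # also matches the preview pods, so they receive traffic
```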
While specRef works well in addressing the baseline vs. canary for mann-whitney analysis, it does not work well for the preview pod use case. We need to come up with syntax which allows for a canary replicaset to be brought up, but with a different set of selectors.<issue_closed>
Status: Issue closed |
fieldpapers/fieldpapers | 150803928 | Title: Idea: Move QR and ruler out of map area
Question:
username_0: The QR code and ruler take up some space on the map, and if they cover an area where objects need to be drawn, there is no way to draw those objects precisely. The QR code and the ruler with the north mark could be moved to the headline outside of the map area.
Answers:
username_1: Yes, this is a known problem. See #101 for some ideas. Sorry we haven't fixed it yet!
Status: Issue closed
|
googleanalytics/ga-dev-tools | 718687270 | Title: error
Question:
username_0: Hello, I have been using the UTM tag builder service https://ga-dev-tools.appspot.com/campaign-url-builder/ for quite a while. Yesterday the links stopped working. Could you help get this error fixed?<issue_closed>
Status: Issue closed |
SilurianYang/uni-simple-router | 1022312705 | Title: Browser back button problem in Edge's mobile emulation mode
Question:
username_0: **问题描述**
edge浏览器手机模式下 使用浏览器的返回按键返回上一页后上一页的滚动位置没有保持,不用uni-simple-router的情况是正常或者使用uniapp自带的头部返回按钮正常。
**复现步骤**
[复现问题的步骤]
1. 使用edge浏览器手机模式打开某一个页面,页面滚动一下
2. 进入到另一个页面
3. 使用浏览器的返回按键返回上一页
[或者可以直接贴源代码]
**预期结果**
返回上一页可以保持页面滚动位置
**实际结果**
返回上一页没有保持页面滚动位置
**系统信息:**
- 发行平台: [如 微信小程序、H5平台、5+ App等]
- 操作系统 [如 iOS 12.1.2、Android 7.0]
- HBuilderX版本 [如使用HBuilderX,则需提供 HBuilderX 版本号]
- 项目创建方法 [如使用Vue-cli创建/HBuilderX]
- 设备信息 [如 iPhone8 Plus]
- uni-simple-router版本 [如 v1.5.4]
**补充信息**
[可选]
[根据你的分析,出现这个问题的原因可能在哪里?]
Answers:
username_1: Implement the following method yourself; here is an example:
[h5-scrollTo-demo.zip](https://github.com/username_1/uni-simple-router/files/7319981/h5-scrollTo-demo.zip) |
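The attached demo isn't reproduced here, but the general shape of such a fix is the usual SPA scroll-restoration pattern (a generic sketch with plain web APIs; the hook names and how it wires into uni-simple-router are assumptions):
```js
// Remember each page's scroll offset and restore it when navigating back.
const savedPositions = new Map();

// Call before leaving a page (e.g. from a router "before each" guard).
function rememberScroll(path) {
  savedPositions.set(path, window.pageYOffset);
}

// Call after returning to a page (e.g. from a router "after each" hook).
function restoreScroll(path) {
  const y = savedPositions.get(path);
  if (y != null) {
    // Wait one frame so the page has rendered at its full height first.
    requestAnimationFrame(() => window.scrollTo(0, y));
  }
}
```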
ServiceInnovationLab/eggsroyale-frontend | 329633005 | Title: As a service provider, I want to be able to list my CSC service on Eggs Royale, so that CSC holders can learn about it and access it
Question:
username_0: This includes:
- [ ] Name of offer (e.g. Free curtains)
- [ ] Name of provider (e.g. Sustainability Trust)
- [ ] Information on the service/deal (body copy)
- [ ] Contact details for me including phone, email, website, and physical address<issue_closed>
Status: Issue closed |
mpv-android/mpv-android | 1147005223 | Title: [Feature Request] Add button to open keyboard
Question:
username_0: It would be handy if mpv could open the on-screen keyboard.
Example:
In my input.conf I have
P cycle_values screenshot-format "png" "webp" "jpg"
To use this, I currently need third-party apps.
It would be better if mpv had a button to open the keyboard so that we don't have to use third-party apps like "Hacker's Keyboard".
wso2/docs-is | 568320131 | Title: Use email address as the username content is confusing
Question:
username_0: **Description:**
https://is.docs.wso2.com/en/next/learn/using-email-address-as-the-username/

To enable "email address as the username" user only needs to follow the 1st and 2nd steps in H2 environment. But there are more steps without mentioning that those are optional and for different environments. Which is confusing. |
PaddlePaddle/PaddleDetection | 752601417 | Title: The example reports the same error during eval and infer
Question:
username_0: ``(paddle) E:\PyCharm_Projects\Paddle\PaddleDetection>python tools/eval.py -c configs/yolov3_mobilenet_v1_roadsign.yml -o use_gpu=true
2020-11-28 13:07:27,009-WARNING: config YOLOv3Loss.batch_size is deprecated, training batch size should be set by TrainReader.batch_size
W1128 13:07:27.579334 71964 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 11.0, Runtime API Version: 10.0
W1128 13:07:27.586342 71964 device_context.cc:260] device: 0, cuDNN Version: 7.6.
2020-11-28 13:07:29,355-WARNING: output/yolov3_mobilenet_v1_roadsign_coco_template/.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]
D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\executor.py:1070: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\io.py", line 1891, in load_program_state
filename=file_name)
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\io.py", line 805, in load_vars
executor.run(load_prog)
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\executor.py", line 1071, in run
six.reraise(*sys.exc_info())
File "D:\Anaconda3\envs\paddle\lib\site-packages\six.py", line 703, in reraise
raise value
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\executor.py", line 1066, in run
return_merged=return_merged)
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\executor.py", line 1154, in _run_impl
use_program_cache=use_program_cache)
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\executor.py", line 1229, in _run_program
fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
Windows not support stack backtrace yet.
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\io.py", line 785, in load_vars
attrs={'file_path': os.path.join(dirname, new_var.name)})
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\io.py", line 1891, in load_program_state
filename=file_name)
File "E:\PyCharm_Projects\Paddle\PaddleDetection\ppdet\utils\checkpoint.py", line 91, in _load_state
state = fluid.io.load_program_state(path)
File "E:\PyCharm_Projects\Paddle\PaddleDetection\ppdet\utils\checkpoint.py", line 126, in load_params
state = _load_state(path)
File "tools/eval.py", line 140, in main
checkpoint.load_params(exe, startup_prog, cfg.weights)
File "tools/eval.py", line 179, in <module>
main()
----------------------
Error Message Summary:
----------------------
InvalidArgumentError: tensor version 1522126346 is not supported, Only version 0 is supported
[Hint: Expected version == 0U, but received version:1522126346 != 0U:0.] at (D:\1.8.4\paddle\paddle\fluid\framework\lod_tensor.cc:287)
[operator < load > error]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tools/eval.py", line 179, in <module>
main()
File "tools/eval.py", line 140, in main
checkpoint.load_params(exe, startup_prog, cfg.weights)
File "E:\PyCharm_Projects\Paddle\PaddleDetection\ppdet\utils\checkpoint.py", line 126, in load_params
state = _load_state(path)
File "E:\PyCharm_Projects\Paddle\PaddleDetection\ppdet\utils\checkpoint.py", line 91, in _load_state
state = fluid.io.load_program_state(path)
File "D:\Anaconda3\envs\paddle\lib\site-packages\paddle\fluid\io.py", line 1894, in load_program_state
"Failed to load model file , please make sure model file is saved with the "
RuntimeError: Failed to load model file , please make sure model file is saved with the following APIs: save_params, save_persistables, save_vars
```
Answers:
username_1: @username_0
In the quick start, you need to train and obtain a model first before you can run eval and infer.
Please check whether output/yolov3_mobilenet_v1_roadsign_coco_template/.pdparams exists.
username_0: I have already trained it, but there is no template file; only a yolov3_mobilenet_v1_roadsign folder was generated, and it does contain the trained model, but eval and infer later cannot read the model from that folder.
username_1: @username_0 Then set the weights path correctly in the yml, or set the model path on the command line via `-o weights=` and try again.
username_2: It seems adding weights=./output/yolov3_mobilenet_v1_roadsign/best_model.pdmodel fixes it...
username_3: Where exactly should that be added?
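For anyone with the same question: it goes either on the eval/infer command line as part of the `-o` overrides, or as a top-level `weights:` key in the config yml. A sketch (the exact output path, and whether the file extension is needed, depend on your run and PaddleDetection version):
```bash
# Command-line form, appended to the -o overrides:
python tools/eval.py -c configs/yolov3_mobilenet_v1_roadsign.yml \
    -o use_gpu=true weights=output/yolov3_mobilenet_v1_roadsign/best_model
```
The yml form is simply a line such as `weights: output/yolov3_mobilenet_v1_roadsign/best_model` in the config file.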
Status: Issue closed
|
linuxdeepin/developer-center | 210093731 | Title: After logging in, the mouse cursor keeps spinning on the desktop
Question:
username_0: This only happens on the desktop. It does not happen when Firefox, Deepin File Manager, etc. are open, but after minimizing the window the cursor goes back to constantly spinning.
Answers:
username_1: Do you have any applications set to start automatically?
username_0: @username_1 Yes, although autostart is currently basically broken for some apps. These are the .desktop files I put in autostart:
```
[Desktop Entry]
Comment[zh_TW]=xrandr --output HDMI1 --set "Broadcast RGB" "Full"
Exec=xrandr --output HDMI1 --set "Broadcast RGB" "Full"
Icon=application-default-icon
Name[zh_TW]=FullRangeRGB
Type=Application
```
```
[Desktop Entry]
Version=1.0
Encoding=UTF-8
Name=SVP 4 Linux
GenericName=Real time frame interpolation
Type=Application
Categories=Multimedia;AudioVideo;Player;Video;
MimeType=video/x-msvideo;video/x-matroska;video/webm;video/mpeg;video/mp4;
Terminal=false
StartupNotify=true
Exec="/home/laichiaheng/SVP 4/SVPManager" %f
Icon=svp-manager4.png
Hidden=false
```
username_2: Has this problem been solved?
I'm running into the same issue.
Status: Issue closed
username_3: This question has not been answered for too long and will be closed. If there is a need to continue the discussion, it will be reopened. |