repo_name (string, 4–136 chars) | issue_id (string, 5–10 chars) | text (string, 37–4.84M chars) |
---|---|---|
Lord-Takeda/Discord-Selfbot | 809482864 | Title: no femboy command
Question:
username_0: no femboy command, i am mad!
Answers:
username_1: At the time my stash wasn't big enough I'm sorry :pensive:
username_2: My disappointment is immeasurable and my day is ruined.
username_3: broski stfu you retarded skid you probably already stole and modified the script |
gocd/gocd | 53105044 | Title: [task plugin] param substitution for fields should be optional
Question:
username_0: Causes this otherwise:

Hard to debug as to why it happened.
Answers:
username_1: Though not directly related, the issue would get resolved as part of #854
After single hash is allowed the above error wouldn't come.
Status: Issue closed
username_1: Closing this.
username_1: Reopening it for now. Will close it once the PR is merged.
I have referenced the issue so that it can be closed together with it.
username_1: Causes this otherwise:

Hard to debug as to why it happened.
Status: Issue closed
username_2: Closing as stale. |
megajanlott/cbor-decoder | 218688290 | Title: Major Type
Question:
username_0: ## `MajorType`
### Action
- [ ] determines, from the first byte of the tape, which major type its first 3 bits describe
### Returns
- [ ] returns the first state of the major type that was read
Additional info about states can be found [here](https://docs.google.com/document/d/1tvQJtJbYUcM2<KEY>WqNYs<KEY>FFVsV8).
<issue_closed>
Status: Issue closed |
SocialiteProviders/Generators | 184046041 | Title: How to use generated provider in app for testing?
Question:
username_0: I feel like the readme is missing the crucial step for using the provider in an app. How is this accomplished?
Answers:
username_1: You can just add the Providers `src` directory to `psr-4` in `autoload` in `composer.json` and then run `composer dumpautoload` and from there on proceed as usual.
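For illustration, a minimal sketch of that `composer.json` addition (the namespace and path are placeholders, not from this thread):
```json
{
    "autoload": {
        "psr-4": {
            "SocialiteProviders\\YourProvider\\": "providers/YourProvider/src/"
        }
    }
}
```
After editing the file, `composer dumpautoload` regenerates the autoloader so the classes resolve.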
Status: Issue closed
username_2: Still, it would be helpful if this was part of the readme.
username_1: I would say this is assumed to be common knowledge if you start to develop packages for composer.
username_2: I think the point to make is that the readme only takes you so far, and does not let you get up and running as quickly.
Even if you have read the composer docs, someone might assume that it should just work, which I actually did when I started.
I would also say that the philosophy of the main Laravel docs is to be as complete and friendly as possible, with as few assumptions as possible. "Common knowledge" for one person isn't the same for another.
Is there anything problematic about adding the note?
username_1: You are free to send a PR. |
ardatan/graphql-import-node | 832552652 | Title: Cannot set property '.graphql' of undefined after Webpack build
Question:
username_0: Hi,
I'm trying to build my project with webpack.
Everything is good, but when I'm running my project I see this error:
```
webpack-internal:///./node_modules/graphql-import-node/register.js:13
(void 0)[`.${ext}`] = handleModule;
```
Do you have a clue about what is going on?
Thank you for your time :)
Status: Issue closed
Answers:
username_1: `graphql-import-node` is not compatible with bundlers like Webpack, Rollup or Parcel. |
asrob-uc3m/operadores | 281802018 | Title: Operator request
Question:
username_0: Hi. I'm from ASROB's RoboFactory group. I would need to become a printer operator so I can print the parts needed for the association's projects and so on.
Answers:
username_1: Since there is another RoboFactory member waiting to receive the training, I think the logical thing would be for you both to receive it at the same time, to save time.
username_0: Perfect, no problem on my end. I have quite a lot of schedule availability right now, so whenever you prefer.
username_1: Tomorrow I'm meeting @wenflehecanario at the workshop at 14:30; we'll be there until about 4. Drop by if you want.
username_0: Perfect, I'm not sure I can be there at exactly 14:30 but I will come by, thanks.
username_2: @username_0 is now a certified operator, #29, validated by @jorfru.
Rubén's validation happened during a RoboFactory meeting, and I didn't know there was another interested person, nor did they show up that day.
Shall we close this issue and wait for the other interested person to open theirs?
username_3: It doesn't make sense for the same person to have one training issue open and another closed at the same time. What would have been a good idea is to reuse the same issue instead of opening a different one.
Given that there is much more documentation in #29 than in this one, I think this one can be considered invalidated, so as not to confuse the system between the two.
Status: Issue closed
username_2: We didn't realize one was already open, sorry. |
franzholz/patchlayout | 780525632 | Title: Another way to avoid TYPO3 rendering a content element
Question:
username_0: I'm not sure what you want to achieve, but I would say:
```
tt_content.your_CType >
tt_content.your_CType = TEXT
```
should be the same in the frontend: no output for this CType.
Or define your column (which is used for the avoided content) and don't fetch that column in your frontend rendering.
Greetings,
Answers:
username_1: This extension is only for the column colPos. Some extensions allow special values such as -1 here. However, since version 9.5 TYPO3 does not allow this and shows an error message. |
ispras/lingvodoc-react | 1083020148 | Title: Edit interface translations: By default the first tab (Perspective) should be active.
Question:
username_0: Login as admin
Tools > Edit interface translations
Expected: By default the first tab (Perspective) should be active.
Actual: Nothing is selected. No information.

Answers:
username_1: Fixed
Status: Issue closed
username_0: Verified |
gradle/gradle | 413332623 | Title: Modifying a dependency in eachDependency() disables improved pom support
Question:
username_0: IMPROVED_POM_SUPPORT, which should be active for everyone now on Gradle 5, is actually deactivated if a dependency is modified within eachDependency.
### Expected Behavior
Improved pom support should always be active for maven poms.
### Current Behavior
Improved pom support is disabled if a dependency is modified with eachDependency, even simply via .because(). Legacy maven configuration selection is used instead.
### Context
I noticed this because some of our maven poms were pulling in the `default` configuration of their dependencies, while (seemingly randomly) others were pulling in `runtime`.
This is because `alwaysUseAttributeMatching()` is always called for maven modules in https://github.com/gradle/gradle/blob/5e65884e365e643c270f98a606bd36f744dbab82/subprojects/dependency-management/src/main/java/org/gradle/internal/component/external/model/maven/RealisedMavenModuleResolveMetadata.java#L148, but then that flag is lost when the metadata is cloned on https://github.com/gradle/gradle/blob/6be5f69a972a128cdc1a5ebc45765223590530b2/subprojects/dependency-management/src/main/java/org/gradle/internal/component/external/model/ConfigurationBoundExternalDependencyMetadata.java#L137 (and the two methods following)
### Your Environment
Gradle 5.2.1
Answers:
username_1: Thanks for the report and taking time to pinpoint the bug.
Status: Issue closed
|
baggepinnen/FluxOptTools.jl | 1068824926 | Title: License
Question:
username_0: Hi @username_1 -- Would you be willing to put a license in your repository and register it? This is a really useful package. We'd love to integrate it into our work, but we're not sure we're allowed to. Thank you!
Status: Issue closed
Answers:
username_1: Hello!
Sure, the LICENSE is added, I'll trigger registration if tests pass with added compat bounds
username_1: @username_2 register
username_2: Registration pull request created: [JuliaRegistries/General/49746](https://github.com/JuliaRegistries/General/pull/49746)
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the [Julia TagBot GitHub Action](https://github.com/marketplace/actions/julia-tagbot) is installed, or can be done manually through the github interface, or via:
```
git tag -a v0.1.0 -m "<description of version>" 8c57dd7f7d045b6c7c048ea3b44611d65a8a74f4
git push origin v0.1.0
```
username_1: @username_2 register
username_2: Registration pull request updated: [JuliaRegistries/General/49746](https://github.com/JuliaRegistries/General/pull/49746)
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the [Julia TagBot GitHub Action](https://github.com/marketplace/actions/julia-tagbot) is installed, or can be done manually through the github interface, or via:
```
git tag -a v0.1.0 -m "<description of version>" f88409a29786bcaba6683ef1d9ca8ce080f7aee0
git push origin v0.1.0
```
username_0: fantastic - thanks a lot!! |
miekg/redis | 993217634 | Title: [WARNING] An external plugin (/home/USERNAME/go/src/github.com/miekg/redis/setup.go line 127) is using the deprecated function Normalize. This will be removed in a future versions of CoreDNS. The plugin should be updated to use OriginsFromArgsOrServerBlock or NormalizeExact instead.
Question:
username_0: This error is shown after compiling CoreDNS to include the `redisc` plugin. This is on v1.8.4 of CoreDNS, deployed via the Helm chart.
Is this a major issue going forward, or can we live with it?
Thank you
Answers:
username_1: no, that should be fixed at some point, otherwise it will stop compiling |
Hacker0x01/react-datepicker | 279352158 | Title: Double date choosing in specific cases
Question:
username_0: Depending on where the datepicker frame opens, choosing a date requires two actions instead of one.
Observe the date frame position:
Here, choosing requires 1 click

But here it takes 2 clicks, because the date frame changes its position:

The issue occurs only if the date frame opens on top of the input field. |
guoyang9/Relation-Fact-Detector | 551134424 | Title: Hello, bro!!
Question:
username_0: Thank you so much for your efforts!
I'm confused about this part:

How do I get the VG.h5 file? It seems this project doesn't have the code to train the VG.h5 file?
Answers:
username_1: I would recommend that anyone who uses this repo read the README first.
Status: Issue closed
|
kubeflow/pipelines | 635760531 | Title: [Test infra] Sample/e2e tests are not testing against pending SDK changes.
Question:
username_0: PR https://github.com/kubeflow/pipelines/pull/3934 is intended to fix a bug introduced by https://github.com/kubeflow/pipelines/pull/3861, which will cause pipelines that download prebuilt components to fail (e.g., XGBoost training sample).
IIUC, our presubmit e2e/sample test should have blocked https://github.com/kubeflow/pipelines/pull/3861 in that case, which did not happen.
Answers:
username_0: /assign @username_0
/assign @Bobgy
username_0: /close
It turns out that this issue can only manifest in the sample tests which use prebuilt components. However, only post-submit tests have those samples as test cases, so it won't block the PR. |
nasa/openmct | 379761273 | Title: CSS overflow-x visible with an element at far right in the demo
Question:
username_0: It's an older technique and baaad style.
Chrome version: 70.0.3538.102
OS: Windows 10


Status: Issue closed
Answers:
username_1: Thanks for the report, have moved this to the proper repository: https://github.com/nasa/openmct-demo/issues/9 |
wavetogether/wave_algorithm_challenge | 549106071 | Title: Moving problem setting to a different site
Question:
username_0: [hackerrank](hackerrank.com) has these limitations:
1. after submitting a problem you cannot see useful information such as execution speed or memory usage, and
2. there are only 20-30 tests, so they do not cover all the corner cases.
How about trying [leetcode](https://leetcode.com/), [Baekjoon](https://www.acmicpc.net/), or another site?
Answers:
username_1: I looked through the hackerrank test cases too, and they seem poorly made and messy. It's also a shame that you can't see execution speed or memory usage.
If we only have to give up Haskell, leetcode looks good.
username_0: @artechventure is it okay to give up Haskell?
Status: Issue closed
username_0: Starting from the problem for the 16th, I'll set problems on leetcode! |
vega/vega-lite | 645783373 | Title: Support view encoding?
Question:
username_0: Right now we lack the ability to move enclosing groups (like in Vega's group mark).
Since view maps to group properties, it might make sense to support view encoding somehow.
The tricky part is how we map the default scale range, etc.
// TODO: add line chart with two-line text in a lollipop
Answers:
username_1: Would view support just `x`/`y` encodings, or would you envision support for other encodings like `rotation` or `color`?
username_0: I think we can support whatever Vega group marks support.
username_2: ![Uploading 85815768...]() |
llakssz/CIAngel | 149566112 | Title: Can't compile.
Question:
username_0: When I compile, I get this output:
```
main.cpp
arm-none-eabi-g++ -MMD -MP -MF /home/username_0/CIAngel/build/main.d -g -Wall -O2 -mword-relocations -fomit-frame-pointer -ffast-math -march=armv6k -mtune=mpcore -mfloat-abi=hard -I/home/username_0/CIAngel/include -I/opt/devkitPro/libctru/../libhbkb/include -I/opt/devkitPro/libctru/include -I/home/username_0/CIAngel/build -DARM11 -D_3DS -fno-rtti -fno-exceptions -std=c++11 -c /home/username_0/CIAngel/source/main.cpp -o main.o
/home/username_0/CIAngel/source/main.cpp:17:18: fatal error: hbkb.h: No such file or directory
compilation terminated.
make[1]: *** [main.o] Error 1
make: *** [build] Error 2
```
And I don't end up with any build files.
DevKitPro is installed to /opt/devkitPro/ .
Ubuntu 14.04.
Answers:
username_1: The project now requires HBKB: https://gbatemp.net/threads/hbkblib-a-3ds-keyboard-library.397568/
username_1: Also you'll need to modify the Makefile for HBKB as it hard-codes the DEVKITARM directory as /usr/local/devkitArm or some such.
username_0: I managed to change the makefile (the first line hardcodes DEVKITPRO, which in turn hardcodes DEVKITARM), but now I can't run `sudo -E make install`.
Error I get when running `sudo -E make install`
`make: *** No rule to make target `install'. Stop.`
Do I need to copy the files over to a different directory?
username_1: Yeah. There's two files you'll need to copy. I did it the lazy way and just put them in libctru's folders:
From one directory up from the Makefile (should contain "hbkb" and "hbkb_include_header"):
```
cp hbkb/lib/libhbkb.a $DEVKITPRO/libctru/lib
cp hbkb_include_header/hbkb.h $DEVKITPRO/libctru/include/
```
Give or take a path.
username_0: Now I don't get any issues with hbkb anymore. Now the error is this:
```
username_0@username_0:~/CIAngel$ make
main.cpp
arm-none-eabi-g++ -MMD -MP -MF /home/username_0/CIAngel/build/main.d -g -Wall -O2 -mword-relocations -fomit-frame-pointer -ffast-math -march=armv6k -mtune=mpcore -mfloat-abi=hard -I/home/username_0/CIAngel/include -I/opt/devkitPro/libctru/../libhbkb/include -I/opt/devkitPro/libctru/include -I/home/username_0/CIAngel/build -DARM11 -D_3DS -fno-rtti -fno-exceptions -std=c++11 -c /home/username_0/CIAngel/source/main.cpp -o main.o
/home/username_0/CIAngel/source/main.cpp: In function 'Result DownloadFile(std::__cxx11::string, std::ofstream&)':
/home/username_0/CIAngel/source/main.cpp:44:32: error: 'HTTPC_METHOD_GET' was not declared in this scope
httpcOpenContext(&context, HTTPC_METHOD_GET, (char *)url.c_str(), 1);
^
/home/username_0/CIAngel/source/main.cpp: In function 'int main()':
/home/username_0/CIAngel/source/main.cpp:312:16: error: too many arguments to function 'Result httpcInit()'
httpcInit(0);
^
In file included from /opt/devkitPro/libctru/include/3ds.h:42:0,
from /opt/devkitPro/libctru/include/hbkb.h:71,
from /home/username_0/CIAngel/source/main.cpp:17:
/opt/devkitPro/libctru/include/3ds/services/httpc.h:23:8: note: declared here
Result httpcInit(void);
^
/home/username_0/CIAngel/source/main.cpp:315:15: error: 'sslcInit' was not declared in this scope
sslcInit(0);
^
/home/username_0/CIAngel/source/main.cpp:377:14: error: 'sslcExit' was not declared in this scope
sslcExit();
^
/home/username_0/CIAngel/source/main.cpp:326:10: warning: unused variable 'refresh' [-Wunused-variable]
bool refresh = true;
^
make[1]: *** [main.o] Error 1
make: *** [build] Error 2
```
username_1: Best guess is you're running an old version of libctru. I'd recommend grabbing/building the latest version: https://github.com/smealum/ctrulib/tree/master/libctru
That one should work properly with make install
username_0: Now it works. The second thing was my fault, but the hbkb part should really be included in the README.
username_1: I only figured it out a few minutes before you, and am in the middle of working on another fix for stuff. Hopefully cearp or someone else gets around to fixing that up in the README.
username_2: HBKB isn't anywhere on github, so do you guys think we should just bundle it with CIAngel and attribute it in the readme?
username_1: I would say get in touch with the original author, and start a new GitHub project for it (Fixing up any issues there). If it were to be added to the CIAngel source it makes any fixes made to it harder for other projects to integrate.
username_3: Ok glad you compiled it. :dart:
I'll add something about libraries in the readme a bit later.
username_1: I'm waiting to hear back from the original author of HBKB about whether I can just throw it up as a new project on GitHub, and put a proper license on it. Hopefully he gets back to me soon.
username_1: I've updated the README to have better instructions on compiling, hopefully it clears some things up.
Status: Issue closed
username_4: Just to add a little note, so Windows users understand it more easily:
-------------------------------------------
Helreizer543 said: ↑
I wanted to add a little something for users that are trying to compile the library on windows.
1. comment out the first 2 lines in the make file if your ctrulib is installed in the default location.
2. In the download, rename the folder "Library Source" to just "Library".
After that the library should compile.
-------------------------------------------
Modify makefile in "\hbkb\Library Source\hbkb\":
- export DEVKITPRO=/usr/local/devkitPro
- export DEVKITARM=${DEVKITPRO}/devkitARM
to
- #export DEVKITPRO=/usr/local/devkitPro
- #export DEVKITARM=${DEVKITPRO}/devkitARM
Rename "\hbkb\Library Source\" to \hbkb\Library\"
Now open a commandline window and go to \hbkb\Library\hbkb\ and run make
Now when it's done copy or move manually the needed files to ctrulib:
-> \hbkb\Library\hbkb_include_header\hbkb.h
to
-> c:\devkitPro\libctru\include\
-> \hbkb\Library\hbkb\lib\libhbkb.a
to
-> c:\devkitPro\libctru\lib\ |
ucb-bar/testchipip | 638232002 | Title: SerialAdapter should require minLatency > 0
Question:
username_0: There is a state machine in the SerialAdapter that seems to assume that d_valid will not be asserted on the same cycle as a_valid when writing. It can't accept the ack on the same cycle that the data is being written. The state machine won't advance if this is the case, even though that is valid TileLink behavior.
There should be a `require(serial.minLatency > 0)` e.g. here to detect this case, or the state machine should be corrected to handle it:
https://github.com/ucb-bar/testchipip/blob/2797a6c1c17b90dd415619c3b5c5b731868a3adb/src/main/scala/SerialAdapter.scala#L51
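For concreteness, a sketch of the suggested guard (the exact placement and message are illustrative):
```scala
// reject configurations where a D response can arrive in the same cycle as the A request
require(serial.minLatency > 0,
  "SerialAdapter state machine cannot handle same-cycle TileLink D responses")
```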
This is my understanding: the state machine won't accept the write ack (by asserting d_ready) unless it's in the write-ack state:
https://github.com/ucb-bar/testchipip/blob/2797a6c1c17b90dd415619c3b5c5b731868a3adb/src/main/scala/SerialAdapter.scala#L105
But it won't go into write-ack state unless the write is actually accepted:
https://github.com/ucb-bar/testchipip/blob/2797a6c1c17b90dd415619c3b5c5b731868a3adb/src/main/scala/SerialAdapter.scala#L178
But in TileLink it is legal to have a_ready = d_ready.
Answers:
username_1: ...LOW while a_valid is HIGH in order to delay a concurrent response message until the following cycle. However, this represents an indefinite delay on Channel D that is not allowed by any of the forward progress ready rules. Indeed, a TL-UL-conforming slave interface may have connected d_valid and d_ready to a_valid and a_ready respectively. Thus, the non-conforming master interface has introduced a deadlock.

If a master interface cannot deal with receiving a response message on the same cycle as its request message, then it can instead put a buffer after its Channel D input. The buffer absorbs a concurrent Channel D response message and presents d_ready HIGH until it has been filled.
I've encountered a case where a deadlock does occur. I'd be happy to submit a PR that at least adds buffering in front of the TL interface if that would be helpful.
username_2: Thanks for pointing this out, Megan. We do this a lot throughout our TileLink code. I'll scan through all the existing code and see where else we need to add requires. |
webpack/webpack-dev-server | 54805505 | Title: Is there a simple way to run a command when build become INVALID?
Question:
username_0: Can't find anything in the docs about that.
I would like to lint my files when the build is invalid (a file has been changed) and think it should be simple to do.
I don't want to use a loader for that (I'm trying to avoid wrappers and [have had issues with peerDeps](https://github.com/webpack/webpack/issues/570#issuecomment-65760396) like I had when using gulp + pipe wrappers).
Is there an easy way to use some CLI hook or similar?
Answers:
username_1: You can write a plugins for the `invalid` hook.
``` js
plugins: [
  function() {
    // the "invalid" hook fires when a watched file changes and the bundle becomes stale
    this.plugin("invalid", function() {
      // e.g. run your lint command here
    });
  }
]
```
Status: Issue closed
username_0: Thanks for the tip. |
Software-Engineering-Group-9/Back-End-API | 751954513 | Title: Update createBusy table
Question:
username_0: - Update the busy table so it matches what the front end is sending back.
- Busy table schema:
```
CREATE TABLE busyschedule(aid NUMBER(15) NOT NULL,
title varchar2(40),
start_time varchar2(19),
end_time varchar2(19),
bgColor varchar2(7),
dragBgColor varchar2(7),
userid varchar2(50),
PRIMARY KEY(aid),
FOREIGN KEY (userid) REFERENCES user(uuid)
);
```
Answers:
username_0: Update to the table:
```
CREATE TABLE busyschedule(aid NUMBER(15) NOT NULL,
title VARCHAR2(40),
start_time VARCHAR2(19),
end_time VARCHAR2(19),
color VARCHAR2(7),
userid VARCHAR2(50),
PRIMARY KEY(aid),
FOREIGN KEY (userid) REFERENCES user(uuid)
);
```
username_0: Update the busyschedule insert SQL command as well.
username_0: - I just updated the schema for creating the busyschedule table and updated the insert command. I also tested the endpoint with Postman.
- Still needs testing with the actual front end.
username_0: The create-busyschedule feature was tested with the front end on 27-Nov-2020 and it works with no errors.
- Needs testing for retrieving the busyschedule when the user logs in.
username_0: When the user logs in, the calendar now displays the user's busy events as well; tested with the front end on 28-Nov-2020, works with no errors.
Status: Issue closed
|
micronaut-projects/micronaut-data | 908739895 | Title: Database migration not working
Question:
username_0: ### Task List
- [x] Steps to reproduce provided
- [ ] Stacktrace (if present) provided
- [x] Example that reproduces the problem uploaded to Github
- [x] Full description of the issue provided (see below)
### Steps to Reproduce
1. Create a database called shop_customers in MariaDB server.
2. Pull and run the sample application provided in the link below.
### Expected Behaviour
The database migration should be run and the tables should be created.
### Actual Behaviour
Migration was not executed and no logs were available in the console.
Note: I tried Flyway and Liquibase (not at the same time of course) but none of them worked.
### Environment Information
- **Operating System**: Ubuntu 20.04.2 LTS
- **Micronaut Version:** 2.5.4
- **JDK Version:** 11
### Example Application
- https://github.com/username_0/micronaut_db_migration
<issue_closed>
Status: Issue closed |
melpon/wandbox-builder | 247300563 | Title: How to install external shared library?
Question:
username_0: I'm trying to build the SML# compiler on Wandbox.
However, the compiler needs some libraries: LLVM 3.7.1, [YAJL 2](https://github.com/lloyd/yajl) and [MassiveThreads 0.99](https://github.com/massivethreads/massivethreads), which are not installed in `username_1/wandbox:test-server`. They are required not only to build the compiler, but also to run it.
Therefore, when I tested the compiler, it outputted some errors like the following.
```
/opt/wandbox/smlsharp-3.3.0/bin/smlsharp: error while loading shared libraries: libLLVM-3.7.so.1: cannot open shared object file: No such file or directory
```
These libraries except for MassiveThreads are indeed necessary; the details are written in [§5.3 of the document of SML#](http://www.pllab.riec.tohoku.ac.jp/smlsharp/docs/3.3.0/en/Ch5.S3.xhtml).
How should I handle this problem? For example, should I compile the libraries myself, the way the Boost library is handled?
Answers:
username_1: I want to avoid installing additional libraries on the running server whenever possible. So link the library statically.
If SML# has a static link option, specify one. Otherwise, force link statically like [this](https://github.com/username_1/wandbox-builder/blob/74efd700d3c146d35c5aec466531125018ac1b44/build/ldc/install.sh#L34).
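As a generic illustration (not from this thread; the library and file names are placeholders), GNU ld can force individual libraries to be linked statically:
```
# -Bstatic/-Bdynamic toggle how the -l libraries that follow them are linked
gcc main.o -Wl,-Bstatic -lyajl -Wl,-Bdynamic -o smlsharp
```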
Status: Issue closed
username_0: Thank you for your quick reply. I'll try it. |
jlippold/tweakCompatible | 599889165 | Title: `Artemis` partial on iOS 13.3
Question:
username_0: ```
{
"packageId": "com.joey-gm.artemis",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.joey-gm.artemis",
"deviceId": "iPhone8,4",
"url": "http://cydia.saurik.com/package/com.joey-gm.artemis/",
"iOSVersion": "13.3",
"packageVersionIndexed": true,
"packageName": "Artemis",
"category": "Tweaks",
"repository": "Joey GM's repo",
"name": "Artemis",
"installed": "0.7.21~beta5",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "com.joey-gm.artemis",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Springboard UI customization",
"latest": "0.7.21~beta5",
"author": "<NAME>",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "partial",
"notes": "Not hiding LS items at all"
}
```
<issue_closed>
Status: Issue closed |
qiwihui/pocket_readings | 977837031 | Title: Deep dive into the CPython interpreter, part 33: why obj == obj can be False while [obj] == [obj] is True
Question:
username_0: static PyObject * do_richcompare(PyThreadState *tstate, PyObject *v, PyObject *w, int op) { // richcmpfunc f essentially declares a comparison function, since Python abstracts every comparison operation into a magic method, such as __ge__, __eq__, and so on // although in Python the different comparison operations correspond to...
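The behavior in the title can be reproduced with NaN, which compares unequal to itself, while container comparisons short-circuit on object identity before calling ==; a minimal sketch:
```python
x = float("nan")
print(x == x)      # False: NaN is not equal to itself
print([x] == [x])  # True: the list comparison checks object identity first
```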
Tags: python
via Pocket https://ift.tt/2WlIjlu original site
August 24, 2021 at 03:49PM |
terrakok/Cicerone | 331884074 | Title: ResultListener not setting to navigator.
Question:
username_0: Trying to set result listener using this code:
```kotlin
INSTANCE.cicerone.navigatorHolder.setNavigator((activity as MainActivity).navigator)
getRouter().setResultListener(Utils.NEW_PROJECT_RESULT_CODE, newProjectResultListener)
getRouter().navigateTo(FragmentIdentifiers.DETAILED_PROJECT_FRAGMENT)
```
but when I try to use exitWithResult nothing happens, because the set of result listeners is empty.
How can I fix it?
<issue_closed>
Status: Issue closed |
NMBGMR/wellpy | 280231508 | Title: Drag-select manual measurements for faster omit
Question:
username_0: Just want to be able to select and remove multiple measurements quicker. Some records have dozens of manual measurements that take place before the data logger is installed. Really painful clicking on each and every one.
Answers:
username_0: Yup! working great so far
Status: Issue closed
|
stellar/stellar-core | 421121813 | Title: Artifical load bugged or provided config flags in repository are wrong
Question:
username_0: I am using stellar-core v10.2.0 on Ubuntu Server. I am not sure if I am doing something wrong, but it seems that either the configuration file currently provided in the repository is not up to date (I created my own config based on that file) or the full-node logging is somewhat broken.
I am starting my node with:
```
/home/xlm/node_core/src/stellar-core --conf /home/xlm/.stellar-core/stellar-core.cfg
```
The `/home/xlm/.stellar-core/stellar-core.cfg` contains these values:
```
RUN_STANDALONE=false
INVARIANT_CHECKS = []
MANUAL_CLOSE=false
ARTIFICIALLY_GENERATE_LOAD_FOR_TESTING=false
ARTIFICIALLY_ACCELERATE_TIME_FOR_TESTING=false
ARTIFICIALLY_SET_CLOSE_TIME_FOR_TESTING=0
ALLOW_LOCALHOST_FOR_TESTING=false
```
I checked multiple times: those are not overwritten anywhere, and the config file is also correct, as other values set there are respected by the full node when I run it.
Although I still get following log:
```
2019-03-14T15:56:47.387 <startup> [default INFO] ARTIFICIALLY_ACCELERATE_TIME_FOR_TESTING enabled in configuration file - node will not function properly with most networks
2019-03-14T15:56:47.387 <startup> [default INFO] ALLOW_LOCALHOST_FOR_TESTING enabled in configuration file - node may not be configured for production use
2019-03-14T15:56:47.387 <startup> [default INFO] ARTIFICIALLY_SET_CLOSE_TIME_FOR_TESTING enabled in configuration file - node will not function properly with most networks
2019-03-14T15:56:47.387 <startup> [default INFO] RUN_STANDALONE enabled in configuration file - node will not function properly with most networks
2019-03-14T15:56:47.387 <startup> [default INFO] MANUAL_CLOSE enabled in configuration file - node will not function properly with most networks
2019-03-14T15:56:47.387 <startup> [default INFO] ARTIFICIALLY_GENERATE_LOAD_FOR_TESTING enabled in configuration file - node will not function properly with most networks
```
Despite these log lines, I don't see the node behaving wrongly. The machine usage is normal, the sync works correctly, and I am able to observe and broadcast transactions. I connect the node to mainnet.
What's the reason for those logs showing?
Answers:
username_1: It's an error in the code. I'll fix it quickly.
Status: Issue closed
|
denizyuret/Knet.jl | 352338085 | Title: Add broadcasted for identity
Question:
username_0: The following gives an error in Julia 0.7:
```julia
a = KnetArray(rand(3,3))
identity.(a)
ERROR: MethodError: no method matching broadcasted(::typeof(identity), ::KnetArray{Float64,2})
```
Answers:
username_1: This is currently true for functions which are not defined in `src/unary.jl`, not only for `identity`:
```jl
julia> f(a) = a + 1
f (generic function with 1 method)
julia> f.(a)
ERROR: MethodError: no method matching broadcasted(::typeof(f), ::KnetArray{Float64,2})
Closest candidates are:
broadcasted(::Any, ::KnetArray) at /home/rene/.julia/packages/Knet/OPYRf/src/karray.jl:1175
broadcasted(::Any, ::Any, ::AutoGrad.Rec) at /home/rene/.julia/packages/AutoGrad/KCOxA/src/broadcast.jl:43
```
I guess you are using this for a linear output layer - as a quick workaround I would suggest something like this:
```jl
if activation == :linear
r = x
else
r = relu.(x)
end
```
username_2: #342
Status: Issue closed
username_0: Thanks @denizyuret. Thanks @username_1 for your suggestion, I actually did similarly. |
netzwerg/theme-nemo | 159811017 | Title: Upgrade?
Question:
username_0: Hi @netzwerg
I'm no longer maintaining omf, and would appreciate it a ton if you could upgrade to [fisherman](https://github.com/fisherman/fisherman):
* Why? fisherman/fisherman#69 (comment)
Everything, including legacy plugins/themes, works great with fisherman, but new content is _no_ longer compatible with oh-my-fish. |
GoogleCloudPlatform/fda-mystudies | 957787636 | Title: [SB] [Copy/Import last published version] Active tasks are getting duplicated for the original study in a scenario
Question:
username_0: **Steps:**
1. Create new study
2. Add all contents and add active tasks
3. Launch the study
4. Edit the study again and delete the active task
5. Let study be in draft
6. Copy/Export the last published version for the study. Successfully copied/imported
7. Check API response for GetStudyActivityList for the original study
**Actual:** Study activities are duplicated for the original study after copying the last published version of the study
**Expected:** Activities should not be duplicated
1. Issue is observed for both Copy and Export last published version
2. Issue not observed for Questionnaires

Status: Issue closed
Answers:
username_0: The issue is fixed in the latest QA instance. |
CreemosEnLaRed/entregas | 633644088 | Title: Submission of Activity 5 - <<NAME>>
Question:
username_0: Activity URL: https://username_0.github.io/actividad-5/index.html
Am I an auditor? No
If you feel like it, answer some or all of the questions (the answers are public on the internet; if you prefer to send us private feedback, you can do so via social media or our email):
**What did you like about the activity?**
...
**What wasn't clear about the activity?**
...
**How could we improve it?**
... |
tdwg/cd | 528781399 | Title: Property:cardinality
Question:
username_0: | <!-- --> | <!-- --> |
| ---- | ---- |
| **Definition** | An indication of whether identifiers are linked to single specimen, or may cover multiple specimens within the collection. |
| **Dimension** | |
| **Existing property** | |
| **Existing class** | |
| **Existing property identifier** | |
| **Format** | Text |
| **Required** | |
| **Repeatable** | |
| **Constraints** | Controlled vocabulary |
| **Examples** | |
| **Notes** | | |
alexdobin/STAR | 594623876 | Title: Low RNAseq mapping speed
Question:
username_0: Hi,
I'm experiencing quite a low mapping speed (17.8-30 M/hr). The read length is 297, the mapped length is 290.9, and the unmapped fraction is ~13%, which is not too bad.
This is the command line I'm using:
```
STAR --runMode genomeGenerate --genomeDir $SPECIES.STAR.Index --runThreadN $THREADS \
--genomeFastaFiles $SPECIES.assembly.fasta --genomeSAindexNbases 13 --genomeChrBinNbits 8
STAR --genomeDir $SPECIES.STAR.Index $limitBAMsortRAM --twopassMode Basic \
--readFilesIn $RNAR1.qf.fastq.gz $RNAR2.qf.fastq.gz --outFilterType BySJout \
--outSAMattributes All --outSAMtype BAM SortedByCoordinate \
--runThreadN 16 --alignEndsType Local --outStd Log --readFilesCommand zcat \
--outFileNamePrefix $SPECIES.RNA.
```
and this is the processing log
```
Time Speed Read Read Mapped Mapped Mapped Mapped Unmapped Unmapped Unmapped Unmapped
M/hr number length unique length MMrate multi multi+ MM short other
Apr 05 17:22:33 Started 1st pass mapping
Apr 05 17:24:43 2.4 86656 297 83.9% 290.9 1.1% 3.4% 0.1% 0.0% 12.6% 0.1%
Apr 05 17:27:31 17.8 1472571 297 83.0% 290.5 1.2% 3.4% 0.1% 0.0% 13.4% 0.1%
Apr 05 17:30:10 22.5 2855666 297 83.2% 290.6 1.2% 3.4% 0.1% 0.0% 13.3% 0.1%
Apr 05 17:31:14 28.1 4065778 297 83.3% 290.7 1.2% 3.4% 0.1% 0.0% 13.2% 0.1%
Apr 05 17:32:42 25.1 4238781 297 83.3% 290.7 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:33:42 28.9 5362744 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:35:16 26.5 5622138 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:36:24 29.2 6746071 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:37:50 27.5 7005339 297 83.4% 290.7 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:38:58 29.4 8042817 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:40:24 28.2 8388670 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:41:30 29.6 9339605 297 83.3% 290.6 1.2% 3.4% 0.1% 0.0% 13.1% 0.1%
Apr 05 17:43:20 28.2 9771728 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.3% 0.1%
Apr 05 17:44:37 29.4 10806538 297 83.0% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:45:41 29.8 11496064 297 83.0% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:46:45 29.4 11840747 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:47:51 29.7 12530501 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:48:51 29.6 12961482 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:49:52 29.8 13564800 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.4% 0.1%
Apr 05 17:50:59 30.1 14254508 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.3% 0.1%
Apr 05 17:52:00 29.6 14513223 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.3% 0.1%
Apr 05 17:53:01 30.1 15289124 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.3% 0.1%
Apr 05 17:54:09 29.8 15720204 297 83.1% 290.5 1.2% 3.4% 0.0% 0.0% 13.3% 0.1%
```
Why is it so low?
Thanks a lot
F
Answers:
username_1: Hi Francesco,
the mismatch rate seems to be quite high, at 1.2%, compared to the usual Illumina error rate of <0.3%. Are you mapping to a divergent genome? Have you checked the sequencing qualities?
To increase speed, you can try to reduce --seedPerWindowNmax to 30 or even less - there is a good discussion about it here: https://groups.google.com/d/msg/rna-star/7ZUTnk8_bEI/BKFobC46CgAJ
Cheers
Alex
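For illustration, a sketch of the mapping command from above with the suggested flag added (30 is just the starting value suggested here; tune as needed):
```
STAR --genomeDir $SPECIES.STAR.Index $limitBAMsortRAM --twopassMode Basic \
 --readFilesIn $RNAR1.qf.fastq.gz $RNAR2.qf.fastq.gz --outFilterType BySJout \
 --seedPerWindowNmax 30 \
 --outSAMattributes All --outSAMtype BAM SortedByCoordinate \
 --runThreadN 16 --alignEndsType Local --outStd Log --readFilesCommand zcat \
 --outFileNamePrefix $SPECIES.RNA.
```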
username_0: Hi Alex,
yes, reads were quality-trimmed with Trimmomatic (SLIDINGWINDOW:5:10 MINLEN:100). It's probably a divergent genome.
I've seen the thread, I'll try with a smaller value and see if that's the case.
I'll let you know.
Thanks a lot
F
username_0: It's actually worse...
F |
r-lib/testthat | 325947916 | Title: Idea: use as arg-checker
Question:
username_0: It occurred to me that many of the checks I'd use as arg-checking code at the beginning of exported functions are the same things I'd use the `expect_*` functions for in unit tests. Is this a use case you've thought about? It would be nice to get all the semantics of the expectation functions and their more readable error messages. In general, unit-test libraries tend to be more feature-rich and have smoother interfaces than arg-checking libraries, and it would be nice to have a unified model for both.
I don't like to go overboard with arg-checking, but for some high-level entry points (e.g. R code that we expose in Java APIs) it helps us fail fast when arg shapes aren't aligned, or aren't [coercible to] the right class, etc.
It probably wouldn't really make sense to actually do `library(testthat)` in application code, but a simple repackaging of the same functions into an arg-checking package could be cool.
Answers:
username_1: You can use the `expect_*()` functions in package code now; unless they are in a `test_that()` block they signal an error, so you can just use them directly.
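A minimal sketch of that pattern (the function name and check are illustrative):
```r
check_input <- function(x) {
  # outside of test_that(), a failing expectation signals an error
  testthat::expect_true(is.numeric(x))
  invisible(x)
}
```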
Status: Issue closed
|
jupyter-widgets/ipywidgets | 520223244 | Title: disable sub window long display
Question:
username_0: Hi, I am using @interact to display many plots and I do not want the sub-window to scroll; I did not find any solution for this. For example:
```
from ipywidgets import widgets, interact, interactive
@interact
def p(n=200):
for i in range(n):
print(i)
```
Thanks
Answers:
username_1: Thanks for raising this.
I'm not sure I fully understand what you want. Could you upload a screenshot of the behaviour, with maybe an annotation of what you expect?
username_0: I would like to disable this and be able to take a screenshot of the full output:
 |
rage/tmc-vscode | 952610175 | Title: Remove FileSystemWatcher
Question:
username_0: FileSystemWatcher is reported to cause relatively high CPU usage on multiroot workspaces.
Currently the editor only uses the watcher to detect if the `.tmc` folder is missing. This is a rare enough issue that it can be handled during the startup.
Remove the watcher here: https://github.com/rage/tmc-vscode/blob/317c1bfe9cbb265deb90a88879a175c98acebaac/src/api/workspaceManager.ts#L47
Check that the `.tmc` folder is on the top of TMC workspace on startup and show a warning notification with one click fix to resolve the issue (since it causes a restart). |
robsontenorio/vue-api-query | 948221178 | Title: Adding offset as an additional default parameterNames.
Question:
username_0: For pagination, we have page and limit as parameter names. But limit offset pagination is also something most people use. It would be nice to have this as one of the parameterNames.
Answers:
username_1: This not only affects parameter names; the package has dedicated functions as well (`.page()`, `.limit()`, [see pagination docs](https://robsontenorio.github.io/vue-api-query/building-the-query/#paginating)).
The JSON:API spec has several ideas [how to do pagination](https://jsonapi.org/format/#fetching-pagination) (number/size, offset/limit, cursor). Should all of those be functions and parameters as well?
As for now, you can always set parameters manually in a model query:
```javascript
// Assuming we have a pagination object in data or equivalent...:
.params({
"page[size]": this.pagination.rowsPerPage,
"page[number]": this.pagination.page
})
.get()
.then(async response => {
// ...
});
```
Status: Issue closed
username_0: Understood. Closing this issue now. |
bitnami/charts | 1154164741 | Title: [bitnami/kafka-exporter] Error Init Kafka Client: kafka server: SASL Authentication failed.
Question:
username_0: ### Name and Version
bitnami/kafka-exporter 1.4.2-debian-10-r158
### What steps will reproduce the bug?
Use the official Kafka helm chart: https://github.com/bitnami/charts/tree/master/bitnami/kafka
### Are you using any custom parameters or values?
_No response_
### What is the expected behavior?
A valid connection to the Kafka brokers
### What do you see instead?
I can connect to the brokers using the Explorer 2 client, or Golang API, however doesnt work with the exporter. Only difference I can see is that the exporter is using mechanism "scram-sha256", and in my own connection I'm using "plain".
I0228 14:19:07.890049 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:19:08.531165 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
I0228 14:19:24.916419 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:19:25.225427 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
I0228 14:19:49.021280 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:19:49.330182 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
I0228 14:20:37.500029 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:20:37.860031 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
I0228 14:21:58.970259 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:21:59.293920 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
I0228 14:24:44.933412 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:24:45.238001 1 kafka_exporter.go:865] Error Init Kafka Client: kafka server: SASL Authentication failed.
When setting the Kafka exporter SASL mechanism to "plain", I get the following error.
name: kafka-exporter
image: docker.io/bitnami/kafka-exporter:1.4.2-debian-10-r158
command:
- kafka_exporter
args:
- '--kafka.server=kafka-0.kafka-headless.kafka.svc.cluster.local:9092'
- '--kafka.server=kafka-1.kafka-headless.kafka.svc.cluster.local:9092'
- '--kafka.server=kafka-2.kafka-headless.kafka.svc.cluster.local:9092'
- '--sasl.enabled'
- '--sasl.username="$SASL_USERNAME"'
- '--sasl.password="${<PASSWORD>USER_PASSWORD%%,*}"'
- '--sasl.mechanism=plain'
- '--web.listen-address=:9308'
This produces the following error:
I0228 14:31:18.361992 1 kafka_exporter.go:769] Starting kafka_exporter (version=1.4.2, branch=non-git, revision=non-git)
F0228 14:31:22.931747 1 kafka_exporter.go:865] Error Init Kafka Client: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
### Additional information
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
[Truncated]
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
## @param zookeeper.persistence.storageClass Persistent Volume storage class
## @param zookeeper.persistence.accessModes Persistent Volume access modes
## @param zookeeper.persistence.size Persistent Volume size
##
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
## External Zookeeper Configuration
## All of these values are only used if `zookeeper.enabled=false`
##
externalZookeeper:
## @param externalZookeeper.servers List of external zookeeper servers to use
##
servers: []
Answers:
username_1: Hi,
Thank you for using Bitnami. About the values.yaml file above, could you let us know which exact values you changed? This will be easier for us to reproduce the issue.
username_0: Hello @username_1, I have set the following mechanisms to SASL (https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml#L231-L244).
When setting to plaintext I confirm the exporter is working.
username_0: Thank you for the reply.
image:
registry: docker.io
repository: bitnami/kafka
tag: 3.1.0-debian-10-r29
and
image:
registry: docker.io
repository: bitnami/kafka-exporter
tag: 1.4.2-debian-10-r158
username_2: And for the chart ? You can find it [in the Chart.yaml file](https://github.com/bitnami/charts/blob/c2182e3e3c5193b18bec925305aab6513c523338/bitnami/kafka/Chart.yaml#L32)
username_0: @username_2 I used the latest one.
username_2: Could you also check if the other kafka nodes are able to connect to `node-0` ? |
weilephp/blogNote | 547973066 | Title: (macOS) What to do if you forget the MySQL 8.0+ password
Question:
username_0: ### mysql8.0以上版本忘记密码(macos)
1. 首先停止mysql服务,通常在系统偏好设置那里停止mysql服务,也可以命令行停止
2. 然后打开终端输入sudo /usr/local/mysql/bin/mysqld_safe --skip-grant-tables,让我们可以不用密码就可以登录mysql
3. 新开一个tab终端,mysql -uroot -p登录mysql
4.mysql的密码是放在mysql数据库的user表里面,好像是5.7版本后(具体哪个版本之后忘了)的的密码字段是authentication_string,不再是password,所以一般可以用
`update mysql.user set authentication_string=PASSWORD('<PASSWORD>') where User = 'root'`
去修改,但是由于8.0没有了PASSWORD()语句,于是还是报错
5.查看8.0的身份验证插件变成了caching_sha2_password
`
mysql>
mysql> SHOW VARIABLES LIKE 'default_authentication_plugin';
+-------------------------------+-----------------------+
| Variable_name | Value |
+-------------------------------+-----------------------+
| default_authentication_plugin | caching_sha2_password |
+-------------------------------+-----------------------+
1 row in set, 1 warning (0.00 sec)
mysql>
`
所以可以用
`alter user 'root'@'localhost' identified with caching_sha2_password by '<PASSWORD>';`
去修改新密码,如果修改的时候遇到错误
`ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement`
可以先刷新一下权限
`mysql> flush privileges;`
再执行上面的修改语句,然后执行完再刷新一下权限,然后就可以用新的密码登录了
`mysql -uroot -p`然后输入密码登录 |
fullcalendar/fullcalendar | 474841831 | Title: Subsequent property declarations must have the same type. Property 'fullCalendar' must be of type 'Calendar', but here has type 'object'
Question:
username_0: I have this problem using fullcalendar-scheduler in Angular:

this is my repository for the demonstration
https://github.com/username_0/Error-fullcalendar-scheduler
Answers:
username_1: this will no longer be an issue in v4/5
Status: Issue closed
|
FiloSottile/whoami.filippo.io | 566669821 | Title: Call out SSH Agent Forwarding and X11 Forwarding in README
Question:
username_0: `server.go` returns some warnings to users who have SSH Agent Forwarding and/or X11 Forwarding settings enabled (possibly universally).
It would be nice to add those to the *How do I stop it?* section of the README for posterity and completeness.
I can send a PR if you agree. |
socketio/engine.io-client-java | 144882933 | Title: RejectedExecutionException Occured in onMessage.
Question:
username_0: [](https://github.com/socketio/engine.io-client-java/blob/master/src/main/java/io/socket/engineio/client/transports/WebSocket.java)
Answers:
username_1: Hi, thanks for your report. Let me know how to reproduce if possible.
username_0: Hello, sorry, but for the moment I can't reproduce it.
If you look at the OkHttp3 websocket implementation, you can see the ThreadPoolExecutor is created with a pool size limited to 1:
[https://github.com/square/okhttp/blob/master/okhttp-ws%2Fsrc%2Fmain%2Fjava%2Fokhttp3%2Fws%2FWebSocketCall.java#L162](https://github.com/square/okhttp/blob/master/okhttp-ws%2Fsrc%2Fmain%2Fjava%2Fokhttp3%2Fws%2FWebSocketCall.java#L162)
I think this is the source of the issue.
username_2: Hi, I'm experiencing the same exception, but in onClose(), not onMessage(), of RealWebSocket's reader implementation, because the ThreadPoolExecutor (replyExecutor) is not used in onMessage() at all, but in onPing() and onClose(). My only assumption is that the ping-response runner is already in the pool at the point of the onClose request from the server, so it throws an exception, since the pool can only hold one runner at a time.
This issue definitely is occurring while using engine.io/socket.io but I rather think it is in the OkHttp library itself and not here...
username_3: Just saw this thread after creating the issue https://github.com/socketio/engine.io-client-java/issues/81 . I think what @username_0 said contributes to Out of Memory exceptions as well as mentioned there.
username_4: Closed due to inactivity, please reopen if needed.
Status: Issue closed
|
yiisoft/yii2 | 56617291 | Title: dropDownList validator
Question:
username_0: I have
...
`<?= $form->field($model, 'filters[]')->dropDownList([0=>'Mobile',1=>'Brow',2=>'Win'] ,['class'=>'form-control chosen-select','multiple'=>true,'size'=>3] ) ?>`
...
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js" type="text/javascript"></script>
<script src="http://tdskub.com/chosen/chosen.jquery.js" type="text/javascript"></script>
<script type="text/javascript">
var config = {
'.chosen-select' : {},
'.chosen-select-deselect' : {allow_single_deselect:true},
'.chosen-select-no-single' : {disable_search_threshold:10},
'.chosen-select-no-results': {no_results_text:'Oops, nothing found!'},
'.chosen-select-width' : {width:"95%"}
}
for (var selector in config) {
$(selector).chosen(config[selector]);
}
</script>
```
The JavaScript works fine,
but if I do an update then I get this error:

rules() contains `[['filters'], 'string', 'max' => 1024]`
I want to have the multiple select saved in the DB; how can I achieve that?
Answers:
username_0: I changed rules() like so:
```php
[['filters'], 'filter', 'filter' => function ($value) { return json_encode($value); }]
```
Now I can save my array,
but how can I get my array back from the DB?
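As a hedged sketch (not from this thread): in a Yii2 ActiveRecord model, one common way to decode on read is to override afterFind():
```php
public function afterFind()
{
    parent::afterFind();
    // decode the JSON string stored in the DB back into a PHP array
    $this->filters = json_decode($this->filters, true);
}
```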
username_1: please use the forum to ask questions, github is a bug/feature tracker.
Status: Issue closed
|
ephtracy/ephtracy.github.io | 506354091 | Title: Does not work on MacOS High Sierra 10.13.6
Question:
username_0: Would love to use this on CryptoVoxels but just get a white screen on starting up.
I've tried version 0.99.4.2 and 0.99.4
Really hope you can fix this... heard lots of good things... I'm getting major FOMO
Answers:
username_1: I also got a white screen on macOS 10.13.6 with 0.99.4.
MagicaVoxel-0.99.4.2-alpha-macos/MagicaVoxel.app/Contents/MacOS/MagicaVoxel
Executing the binary directly from the path above seems to work; no idea if there were missing dependencies.
username_2: From https://mvc.wiki/m/MagicaVoxel
(you should close the issue if it works)
Status: Issue closed
username_0: @username_2 thank you so much for your help... I've sent you a Brave tip (BAT); I don't know if you use the Brave browser, but thank you...
@username_1 the answer is here https://mvc.wiki/m/MagicaVoxel - Installing MagicaVoxel - Apple Mac OS section |
prompt-toolkit/ptpython | 851714690 | Title: Creating a ptpython pane inside another prompt_toolkit UI
Question:
username_0: Hi @username_1 I'm hoping you can give me some advice here. We are trying to build a prompt_toolkit app for interacting with devices over [pyserial](https://github.com/pyserial/pyserial). We'd like it to be a full screen app and have at least two panes: one for displaying log messages from pyserial connected devices and one for an interactive python repl.

Here's the layout:
```python
self.python_pane = PythonPane()
self.root_container = FloatContainer(
content=HSplit(
[
self.top_toolbar,
self.log_pane,
self.split_window,
self.python_pane.window,
self.bottom_toolbar,
],
),
floats=[
# Message Echo Area
Float(bottom=1, left=0, right=0, height=1,
content=MessageToolbarBar(self)),
# Floating Help Window
Float(
content=HelpWindow(self),
right=2, top=2,
),
],
)
```
To kick off the pt app I have:
```python
async def run(self):
"""Start the prompt_toolkit UI."""
background_log_task = asyncio.create_task(self.log_forever())
try:
unused_result = await self.application.run_async(
set_exception_handler=True)
finally:
background_log_task.cancel()
print("Quitting event loop. Bye.")
```
The `PythonPane()` class is a bunch of stuff copy pasted (for experimentation) from [ptpython/repl.py](https://github.com/prompt-toolkit/ptpython/blob/master/ptpython/repl.py) and [ptpython/python_input.py](https://github.com/prompt-toolkit/ptpython/blob/master/ptpython/python_input.py) but without the bits to create a new a new `prompt_toolkit` application (from [here](https://github.com/prompt-toolkit/ptpython/blob/master/ptpython/python_input.py#L382)). I also moved a bunch of logic from `run()` and `eval()` [here in repl.py](https://github.com/prompt-toolkit/ptpython/blob/master/ptpython/repl.py#L93) and moved it into the `_accept_handler` [here](https://github.com/prompt-toolkit/ptpython/blob/master/ptpython/python_input.py#L387)
The goal is to allow embedding into an existing app. Some questions:
1. Is this a good way to do this? If yes would you accept a pull request to support this use case? I think it would have to refactor the parts that create a new app and layout outside of the `PythonInput` class and provide an example of how to override the `accept_handler`.
2. Am I better off writing my own repl like https://github.com/prompt-toolkit/python-prompt-toolkit/blob/master/examples/full-screen/calculator.py
3. Can ipython support rendering other `prompt_toolkit` containers in its UI that are updated asynchronously?
Thanks for any feedback you can provide!
Answers:
username_0: Ok so I have this somewhat working with this change:
https://github.com/username_0/ptpython/commit/1e9508655cad46595d66245afd842bf46d306a2d
It allows to us to create our own `PwPtPythonRepl(repl.PythonRepl)` class. Like this: https://pigweed-review.googlesource.com/c/pigweed/pigweed/+/41580/9/pw_console/py/pw_console/pw_ptpython_repl.py Which can be part of our prompt_toolkit app.
Currently it just logs the results instead of printing them. I still need to figure out how to make [this bit](https://github.com/username_0/ptpython/blob/ptpython-library/ptpython/repl.py#L524) not print to stdout and just return the formatted text instead.
Status: Issue closed
|
bakape/thumbnailer | 341699263 | Title: Ghostscript crashing program
Question:
username_0: Thumbnailing this PDF crashes the program somehow with
`Magick: "gs" "-q" "-dBATCH" "-dSAFER" "-dMaxBitmap=50000000" "-dNOPAUSE" "-sDEVICE=pnmraw" "-dTextAlphaBits=4" "-dGraphicsAlphaBits=4" "-r72x72" "-sOutputFile=/tmp/gmizbT7x" "--" "/tmp/gm7SOzEA" "-c" "quit" (child process quit due to signal 9).`
[sO_o.pdf](https://github.com/username_0/thumbnailer/files/2199516/sO_o.pdf)
Answers:
username_0: Possibly due to incorrect use or implementation of Image::subrange() and/or Image::subImage().
username_0: This could also explain GIF-related crashes.
username_0: Can not reproduce for some fucking reason. |
spring-projects/spring-batch | 538705328 | Title: isFragmentRootElementName method access modifier should be protected [BATCH-2583]
Question:
username_0: **[Balan](https://jira.spring.io/secure/ViewProfile.jspa?name=balanjpm)** opened **[BATCH-2583](https://jira.spring.io/browse/BATCH-2583?redirect=false)** and commented
I was looking to customize the method moveCursorToNextFragment in StaxEventItemReader and I noticed this method uses isFragmentRootElementName, which is currently a private method. In my opinion, if the caller is 'protected', the callee should have been made 'protected' as well.
---
**Affects:** 3.0.7
Answers:
username_1: That's not necessarily true, because if that private method calls another private method which in turn calls yet another private method, we would transitively need to make all of them protected and end up opening the entire class for extension.
As a second example, the [jumpToItem](https://github.com/spring-projects/spring-batch/blob/a7092a21e428cf904f210aff2682b518dc3649c5/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/xml/StaxEventItemReader.java#L273) is protected and calls [readToStartFragment](https://github.com/spring-projects/spring-batch/blob/a7092a21e428cf904f210aff2682b518dc3649c5/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/xml/StaxEventItemReader.java#L295) and [readToEndFragment](https://github.com/spring-projects/spring-batch/blob/a7092a21e428cf904f210aff2682b518dc3649c5/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/xml/StaxEventItemReader.java#L310) which are private. I don't think these two methods should necessarily be made protected.
That said, I see an added value in making `StaxEventItemReader#isFragmentRootElementName` protected to be able to override the logic in there.
Status: Issue closed
|
sophie-glk/bang | 494541960 | Title: Use branchs to keep your code clean
Question:
username_0: More a suggestion than an issue:
I recommend you work with gitflow in order to keep your code clean and working.
- The branch master will host the version of the code used in production (aka the code available on the store)
- The branch develop contains the next version of the code before it is put in production. This branch contains code that is always working, not code currently being developed.
- The branch feature/name_of_the_feature will contain the code currently in development related to a specific feature.
When the feature is done and fully tested the branch feature/name_of_the_feature is merged on develop.
When the new version of the code is put in production the branch develop is merged into master.
This is a "simplified" version of gitflow that I have described here.
For more information about gitflow I strongly recommend you look at these two links:
- https://danielkummer.github.io/git-flow-cheatsheet/index.html
- https://gist.github.com/JamesMGreene/cdd0ac49f90c987e45ac
Using branches on GitHub is definitely the best practice to follow if you want to avoid losing code or introducing bugs.
Anyway, if you don't care you can just ignore this issue. 😄<issue_closed>
Status: Issue closed |
MicrosoftDocs/dynamics-365-customer-engagement | 402486717 | Title: Linking to form using extraqs parameter not working in v9 Unified Interface
Question:
username_0: I'm appending the extraqs to the RecordURL(Dynamic) value to populate a custom URL text field. The resulting link appears to open the most recently used form, not the one specified in the extraqs parameter. Is there a workaround?
Answers:
username_0: I found that the extraqs parameter is no longer needed. The correct format is https://<instance URL>/main.aspx?appid=<app guid>&pagetype=entityrecord&etn=<entity schema name>&formid=<form guid>&id=<record guid>
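For illustration, assembling such a link in code (all GUIDs and the instance URL below are placeholders, not real values):
```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
instance = "https://myorg.crm.dynamics.com"
params = {
    "appid": "00000000-0000-0000-0000-0000000000aa",
    "pagetype": "entityrecord",
    "etn": "account",  # entity schema name
    "formid": "00000000-0000-0000-0000-0000000000bb",
    "id": "00000000-0000-0000-0000-0000000000cc",  # record GUID
}
record_url = f"{instance}/main.aspx?{urlencode(params)}"
print(record_url)
```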
Status: Issue closed
|
jlippold/tweakCompatible | 413935007 | Title: `Moveable` not working on iOS 12.1.2
Question:
username_0: ```
{
"packageId": "net.tateu.moveable",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "net.tateu.moveable",
"deviceId": "iPhone10,5",
"url": "http://cydia.saurik.com/package/net.tateu.moveable/",
"iOSVersion": "12.1.2",
"packageVersionIndexed": false,
"packageName": "Moveable",
"category": "Tweaks",
"repository": "tateu's repo",
"name": "Moveable",
"installed": "0.9~beta-6",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "net.tateu.moveable",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Arrange statusbar icons.",
"latest": "0.9~beta-6",
"author": "tateu (<NAME>)",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": "Features Do-not work"
}
```<issue_closed>
Status: Issue closed |
psf/requests | 718772948 | Title: Add Python 3.9 to CI
Question:
username_0: Python 3.9 was released on October 5, 2020. Add Python 3.9 to CI, tox configuration and update classifiers in setup.py
## Expected Result
What you expected.
## Actual Result
What happened instead.
## Reproduction Steps
Tests pass on Python 3.9.0
```
tox -e py39
GLOB sdist-make: /root/checked_repos/requests/setup.py
py39 create: /root/checked_repos/requests/.tox/py39
py39 inst: /root/checked_repos/requests/.tox/.tmp/package/1/requests-2.24.0.zip
py39 installed: certifi==2020.6.20,chardet==3.0.4,idna==2.10,requests @ file:///root/checked_repos/requests/.tox/.tmp/package/1/requests-2.24.0.zip,urllib3==1.25.10
py39 run-test-pre: PYTHONHASHSEED='2412316346'
py39 run-test: commands[0] | python setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
running egg_info
writing requests.egg-info/PKG-INFO
writing dependency_links to requests.egg-info/dependency_links.txt
writing requirements to requests.egg-info/requires.txt
writing top-level names to requests.egg-info/top_level.txt
reading manifest file 'requests.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'requirements.txt'
writing manifest file 'requests.egg-info/SOURCES.txt'
running build_ext
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.9.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
cachedir: .tox/py39/.pytest_cache
rootdir: /root/checked_repos/requests, configfile: pytest.ini
plugins: forked-1.3.0, httpbin-0.0.7, cov-2.10.1, mock-3.3.1, xdist-2.1.0
gw0 [555]g 0 items / 1 error
................................................................................................................................................................ [ 28%]
...................................................................x............................................................................................ [ 57%]
.....................s......s................................................................................................................................... [ 86%]
............................................................ssssssssss..... [100%]
================================================================================ ERRORS ================================================================================
ERROR collecting docs/_themes/flask_theme_support.py
docs/_themes/flask_theme_support.py:2: in <module>
from pygments.style import Style
E ModuleNotFoundError: No module named 'pygments'
======================================================================= short test summary info ========================================================================
ERROR docs/_themes/flask_theme_support.py - ModuleNotFoundError: No module named 'pygments'
=================================================== 542 passed, 12 skipped, 1 xfailed, 1 error in 132.95s (0:02:12) ====================================================
ERROR: InvocationError for command /root/checked_repos/requests/.tox/py39/bin/python setup.py test (exited with code 1)
summary
ERROR: py39: commands failed
```
Answers:
username_1: Can I try solving this issue ?
username_2: This should be resolved in #5652.
Status: Issue closed
|
capnkirok/animania | 233382709 | Title: [Suggestion] Sleeping Animals
Question:
username_0: Since you want this to be realistic (and might I say you are doing a FANTASTIC job!), why not have animals sleep at night and maybe take naps throughout the day? I'm pretty sure that animals don't stay awake 24/7! If this is already planned, then awesome! Can't wait! ^^
Answers:
username_1: Yep, this is planned :)
And if you have a Rooster, it will wake them up :)
username_0: Oh, cool! Looking forward to it!
username_0: Would it be possible to have the chickens go back to their coop at night to sleep? I'm not sure how you could do this... Maybe something with the nests? Like perhaps for hens, if they've laid an egg in the nest that's their new 'home' (it could be a radius though so a ton of hens don't try to go in the exact same spot) and for roosters it could still be the nests but they've helped hatch an egg from it? I'm not a coder so I'm not sure if that's possible or not... Either way, I think this would be a nice feature so we could have free-range chickens that don't run away forever.
username_1: Well... could do a Roost, which is where Chickens and Roosters are supposed to sleep. And if you put this inside a barn/coop, they would find it and walk there. Nest could be a backup.
username_2: This would be cool.
username_0: Oh I've been making roosts for my chickens! (But of course they don't use them XD) That would be perfect!
Status: Issue closed
|
builderscon/conf.builderscon.io | 167184886 | Title: Overlapping nav bar with contents, for the sponsor and the news pages
Question:
username_0: 私が修正するつもりです(権限上自分にassignできないです)。
[PC (screenshot)](http://res.cloudinary.com/dlze0abrr/image/upload/v1469281226/2016-07-23_22h30_59_m7u3sc_aiytbu.png)で見た時にnavバーと、コンテンツが重なっています。よろしくない。
[mobile (screenshot)](http://res.cloudinary.com/dlze0abrr/image/upload/v1469280954/image1_rxujiq.png)では同様の問題は発生していません。
Answers:
username_0: The PR has been merged, so closing!
Status: Issue closed
username_1: From next time, it's a good idea to put `fixes $issue_number` in the PR. It will be closed automatically.
username_0: I see, thank you!
saltstack/salt | 348682824 | Title: file.get_diff does not work on version 2018.3.2
Question:
username_0: ### Description of Issue/Question
Running file.get_diff on version 2018.3.2 gets "No such file or directory".
### Setup
master:2018.3.2
minion:2018.3.2(other version minion)
### Steps to Reproduce Issue
[root@master-host salt]# salt minion-host cp.cache_file salt://aio/archive/archive.tar.md5
minion-host:
/var/cache/salt/minion/files/base/aio/archive/archive.tar.md5
[root@master-host salt]# salt minion-host file.get_diff /.archive.tar.md5 salt://aio/archive/archive.tar.md5
minion-host:
ERROR: Failed to read salt://aio/archive/archive.tar.md5: No such file or directory
on minion node,debug log:
[INFO ] Starting a new job with PID 7373
[DEBUG ] LazyLoaded cp.cache_file
[DEBUG ] LazyLoaded direct_call.execute
[DEBUG ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506', u'aes')
[DEBUG ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506')
[DEBUG ] Connecting the Minion to the Master URI (for the return server): tcp://10.1.1.100:4506
[DEBUG ] Trying to connect to: tcp://10.1.1.100:4506
[DEBUG ] In saltenv 'base', looking at rel_path 'aio/archive/archive.tar.md5' to resolve 'salt://aio/archive/archive.tar.md5'
[DEBUG ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/aio/archive/archive.tar.md5' to resolve 'salt://aio/archive/archive.tar.md5'
[DEBUG ] Minion return retry timer set to 7 seconds (randomized)
[INFO ] Returning information for job: 20180808190340722939
[DEBUG ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506', u'aes')
[DEBUG ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506')
[DEBUG ] Connecting the Minion to the Master URI (for the return server): tcp://10.1.1.100:4506
[DEBUG ] Trying to connect to: tcp://10.1.1.100:4506
[DEBUG ] minion return: {u'fun_args': [u'salt://aio/archive/archive.tar.md5'], u'jid': u'20180808190340722939', u'return': u'/var/cache/salt/minion/files/base/aio/archive/archive.tar.md5', u'retcode': 0, u'success': True, u'fun': u'cp.cache_file'}
[INFO ] User root Executing command file.get_diff with jid 20180808190346684060
[DEBUG ] Command details {u'tgt_type': u'glob', u'jid': u'20180808190346684060', u'tgt': u'minion-host', u'ret': u'', u'user': u'root', u'arg': [u'/.archive.tar.md5', u'salt://aio/archive/archive.tar.md5'], u'fun': u'file.get_diff'}
[INFO ] Starting a new job with PID 7387
[DEBUG ] LazyLoaded file.get_diff
[DEBUG ] LazyLoaded direct_call.execute
[DEBUG ] LazyLoaded cp.cache_file
[DEBUG ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506', u'aes')
[DEBUG ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506')
[DEBUG ] Connecting the Minion to the Master URI (for the return server): tcp://10.1.1.100:4506
[DEBUG ] Trying to connect to: tcp://10.1.1.100:4506
[DEBUG ] In saltenv 'base', looking at rel_path 'aio/archive/archive.tar.md5' to resolve 'salt://aio/archive/archive.tar.md5'
[DEBUG ] In saltenv 'base', ** considering ** path '/var/cache/salt/minion/files/base/aio/archive/archive.tar.md5' to resolve 'salt://aio/archive/archive.tar.md5'
[ERROR ] A command in 'file.get_diff' had a problem: Failed to read salt://aio/archive/archive.tar.md5: No such file or directory
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/minion.py", line 1606, in _thread_return
return_data = minion_instance.executors[fname](opts, data, func, args, kwargs)
File "/usr/lib/python2.7/site-packages/salt/executors/direct_call.py", line 12, in execute
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/salt/modules/file.py", line 4990, in get_diff
exc.strerror
CommandExecutionError: Failed to read salt://aio/archive/archive.tar.md5: No such file or directory
[DEBUG ] Minion return retry timer set to 8 seconds (randomized)
[INFO ] Returning information for job: 20180808190346684060
[DEBUG ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506', u'aes')
[DEBUG ] Initializing new AsyncAuth for (u'/etc/salt/pki/minion', u'minion-host', u'tcp://10.1.1.100:4506')
[DEBUG ] Connecting the Minion to the Master URI (for the return server): tcp://10.1.1.100:4506
[DEBUG ] Trying to connect to: tcp://10.1.1.100:4506
[DEBUG ] minion return: {u'fun_args': [u'/.archive.tar.md5', u'salt://aio/archive/archive.tar.md5'], u'jid': u'20180808190346684060', u'return': u'ERROR: Failed to read salt://aio/archive/archive.tar.md5: No such file or directory', u'success': False, u'fun': u'file.get_diff', u'out': u'nested'}
[Truncated]
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.5 (default, Aug 4 2017, 00:39:18)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: centos 7.4.1708 Core
locale: UTF-8
machine: x86_64
release: 3.10.0-693.el7.x86_64
system: Linux
version: CentOS Linux 7.4.1708 Core
Answers:
username_1: looks like i'm able to bisect this to commit <PASSWORD>
ping @username_2 any ideas here?
username_2: Fixed in https://github.com/saltstack/salt/pull/49033.
Status: Issue closed
|
sosy-lab/benchexec | 986691866 | Title: Tool info module for validators
Question:
username_0: I'm preparing a PR to add dartagnan as a (violation) validator and I have some doubts about how to retrieve the witness path in the Python code.
While I can use `task.property_file` to retrieve the path of the property file, I'm not sure how to do the same for the path of the witness. I suspect somehow this information is given by `input_files`, but that can also be used for the path of the program, so I'm not sure how to do this properly.
I checked the modules of other validators but I could not find any lead.
Also, AFAIK, there is no naming convention for validators. I might follow the `witness2test` naming because we are basically converting a violation witness into an execution (i.e. test) and encoding the execution into SMT to reduce the search space of our verification encoding.
Answers:
username_1: BenchExec does not know anything about witnesses, so these do not exist in its data model.
`input_files` is just what the user configures as `input_files` in their task definitions. For software verification, this would typically be the program sources and not the witness (but one could do this in principle from the point of view of BenchExec, it would just be inconvenient probably).
For the existing validators, paths to witnesses are just configured as command-line arguments ([example](https://gitlab.com/sosy-lab/sv-comp/bench-defs/-/blob/a9bfd363f1c222165e09708585c5fdcfa1d47495/benchmark-defs/cpa-seq-validate-correctness-witnesses.xml#L16)). This would also be my recommendation.
We do not have any strict naming convention for tool-info modules in BenchExec, so just pick one that suits you, is intuitive for users, and not too general (to avoid collisions).
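To make that concrete, a rough sketch of a tool-info module where the witness path simply arrives through `options` (the layout follows BenchExec's `BaseTool2` template, but the `--property` flag and other specifics are assumptions, not Dartagnan's actual interface):
```python
from benchexec.tools.template import BaseTool2


class Tool(BaseTool2):
    """Hypothetical tool-info sketch; flag names are illustrative."""

    def executable(self, tool_locator):
        return tool_locator.find_executable("dartagnan")

    def name(self):
        return "Dartagnan"

    def cmdline(self, executable, options, task, rlimits):
        cmd = [executable]
        if task.property_file:
            cmd += ["--property", task.property_file]
        # The witness path is expected inside `options`, e.g.
        # ["-witness", ".../witness.graphml"] from the benchmark definition.
        return cmd + options + list(task.input_files)
```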
username_0: Ohhh then I guess this is much easier than I thought: I don't really need to add a new tool module but just create a new `benchmark-defs` using `dartagnan` as a tool and setting the corresponding option `-witness` with the value `../../results-verified/LOGDIR/${rundefinition_name}.${taskdef_name}/witness.graphml` (I guess this is fixed and depends on the configuration of the server where the competition is run), right?
username_1: Yes. The path just comes from how the competition organizer has his local setup, actually. This is basically defined and documented in [this repository](https://gitlab.com/sosy-lab/sv-comp/bench-defs), but not necessary to worry about. |
okviz/free-visuals | 639066547 | Title: Data labels not working
Question:
username_0: I'm working on a custom map, but I've hit a weird issue. Synoptic panel pulls in my data areas correctly and highlights the zones, but the data labels on some of the cells won't show up.
If I hover over the affected cells, the data is displayed correctly, but there's no data label.
Answers:
username_0: 
I forgot to include a screenshot. Here's what I'm seeing
username_1: Please, attach the report.
username_0: Here's a copy of the report and relevant files:
https://drive.google.com/file/d/1twgmuQ0k0YbhPTOdEr2O065W_bR54Qr2/view?usp=sharing
username_1: Sorry, I think you should create a smaller map (8MB is too much for any browser), I can't even inspect the single areas...
username_2: I have the same problem. Some fields are displayed correctly, others are not.
 |
DouglasAmarelo/github-api-test | 456406820 | Title: Project Scope
Question:
username_0: ### Home page
- [ ] Field to search for a GitHub user.
#### When the field is filled in
- [ ] Fetch the user's information
- [ ] Display a "List repositories" button
### Repository listing
- [ ] Page that displays all of the user's public repositories
- [ ] All repositories must be clickable and fetch the commits for that repository
- [ ] Keep the user's profile on the page
- [ ] Option to search for another user
### Commit listing
- [ ] Page that displays all commits of the previously chosen repository
- [ ] Each commit must include relevant information such as: (...)
- [ ] Keep the user's profile on the page
- [ ] Option to go back to the repository listing
davidhalter/jedi-vim | 242545711 | Title: jedi#goto() uses `python_host_prog` not current virtualenv
Question:
username_0: ### Issue
I'm using `jedi-vim` for _goto_ functionality alongside [deoplete-jedi](https://github.com/zchee/deoplete-jedi) for completions. I have disabled `jedi-vim`'s completions to accomplish this, as seen in my [vimrc here](https://github.com/username_0/dotfiles/blob/2cbadb126a13deb4969b1d3f003882a0ae20c8af/neovim/.config/nvim/config/plugins.vim#L66). Autocompletion appears to work in my use case, but goto does not.
I recently used [pyenv](https://github.com/pyenv/pyenv) and [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv) as described [here](https://github.com/zchee/deoplete-jedi/wiki/Setting-up-Python-for-Neovim#using-virtual-environments) to set up a virtualenv specific to neovim, named `neovim3`. You can see I have this set in my [g:python3_host_prog](https://github.com/username_0/dotfiles/blob/2cbadb126a13deb4969b1d3f003882a0ae20c8af/neovim/.config/nvim/config/builtins.vim#L2).
I also have a virtualenv set for each project I manipulate. For the sake of example let's say I have a Tornado app that I install in a virtualenv named `myapp`. Installing this via my `setup.py` will install all the requirements, such as Tornado.
When editing my code, pyenv has correctly activated the virtualenv named `myapp` when I switched to that directory. I load up vim, which uses the Python interpreter found in the `neovim3` virtualenv that has the neovim Python client and other tools.
Now, when I try autocompletion with something like the following
```Python
from tornado import gen
gen.<tab>
```
I get completions for everything defined and imported in the scope of that `gen` module, from `absolute_import` through to `YieldPoint`. I pick one of these classes or methods, such as `gen.sleep` and want to jump to its definition by hitting `<leader>d` and I get this error: `jedi-vim: Couldn't find any definitions for this.`
At this point I'm confused, since I thought `deoplete-jedi` uses `jedi` for its completion list. As an experiment I install `myapp` in the `neovim3` virtualenv, and suddenly `goto` starts working for all the Tornado packages/modules.
So it looks like `jedi` looks for definitions only in the scope of what's available in `neovim3`.
Since I've recently started trying to adopt best practices relating to virtualenvs, I would prefer not to install all of my projects into the `neovim3` virtualenv, and only keep neovim/plugin requirements in there. What then is my best option? Am I doing something wrong in my configuration or is this simply not a supported case?
### Steps to reproduce
As above
### Output of “:verbose JediDebugInfo”
#### Jedi-vim debug information
Using Python version: 3
- sys.version: `3.6.1 (default, Jul 11 2017, 10:53:38), [GCC 7.1.1 20170621]`
- site module: `/home/mpelikan/.pyenv/versions/3.6.1/lib/python3.6/site.py`
Jedi path: `/home/mpelikan/dotfiles/neovim/.config/nvim/plugged/jedi-vim/jedi/jedi/__init__.py`
- version: 0.10.2
- sys_path:
- `/home/mpelikan/.pyenv/versions/3.4.2/envs/pvc_appliance/lib/python3.6/site-packages`
- `/home/mpelikan/dotfiles/neovim/.config/nvim/plugged/jedi-vim`
- `/home/mpelikan/.pyenv/versions/3.6.1/lib/python36.zip`
- `/home/mpelikan/.pyenv/versions/3.6.1/lib/python3.6`
- `/home/mpelikan/.pyenv/versions/3.6.1/lib/python3.6/lib-dynload`
- `/home/mpelikan/.pyenv/versions/neovim3/lib/python3.6/site-packages`
- `_vim_path_`
- jedi-vim git version: 6411de0
- jedi git submodule status: 5427b02712828b2875d35b5ee1c8b5e58f820537 jedi (v0.10.2)
##### Settings
```
g:jedi#force_py_version = '3' (default: 'auto')
g:jedi#completions_enabled = 0 (default: 1)
omnifunc=pythoncomplete#Complete
Last set from /usr/share/nvim/runtime/ftplugin/python.vim
completeopt=menuone,longest,preview
Last set from ~/dotfiles/neovim/.config/nvim/plugged/jedi-vim/plugin/jedi.vim
```
[Truncated]
121: ~/dotfiles/neovim/.config/nvim/plugged/ale/ale_linters/python/mypy.vim
122: ~/dotfiles/neovim/.config/nvim/plugged/ale/ale_linters/python/pylint.vim
123: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/events.vim
124: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/cursor.vim
125: ~/dotfiles/neovim/.config/nvim/plugged/vim-airline/autoload/airline/extensions/tabline/buflist.vim
126: ~/dotfiles/neovim/.config/nvim/plugged/vim-devicons/autoload/airline/extensions/tabline/formatters/webdevicons.vim
127: ~/dotfiles/neovim/.config/nvim/plugged/vim-airline/autoload/airline/extensions/tabline/formatters/default.vim
128: ~/dotfiles/neovim/.config/nvim/plugged/tagbar/autoload/tagbar.vim
129: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/statusline.vim
130: ~/dotfiles/neovim/.config/nvim/plugged/deoplete.nvim/autoload/deoplete/custom.vim
131: ~/dotfiles/neovim/.config/nvim/plugged/vim-signify/autoload/sy/sign.vim
132: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/engine.vim
133: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/python.vim
134: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/path.vim
135: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/sign.vim
136: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/list.vim
137: ~/dotfiles/neovim/.config/nvim/plugged/ale/autoload/ale/highlight.vim
138: /usr/share/nvim/runtime/autoload/provider/clipboard.vim
```
</details>
Answers:
username_1: What is the Python version in your neovim3 virtualenv? (likely different than the host (3.6))
Does it match (MAJOR.MINOR) the one in your project?
Jedi will use `$VIRTUAL_ENV`, but only if it matches..
username_0: Thanks @username_1 , you're right, `3.6.1` in the neovim3 environment and `3.4.2` in the development environment.
Since I want to develop against the same version as what that code will be deployed with, would it make sense to move the `neovim3` version to `3.4.2` (can you think of any limitations of doing this even outside of Jedi?)
I think in my desperate attempts I did see mention of pending work that will fix this on the Jedi side. Is there some more general way to deal with this for the time being? Did I miss any documentation about this behaviour available in the README or wiki for Jedi?
For example, have a neovim version `neovim342`, `neovim361`, etc. for every version of development venv, detect the current VENV's python version and set the appropriate neovim venv?
username_1: I made a PR for jedi to always use VIRTUAL_ENV, but it was rejected. You might want to use that patch.
Apart from that, you can install `neovim` also in the work/development environment, of course.
The best is to have the current Python (3.6) by default, and move your projects over there, too.. ;)
username_0: @username_1 You're referring to https://github.com/davidhalter/jedi/pull/829? Shame it wasn't merged and put behind some kind of feature flag or non-default option... Hopefully https://github.com/davidhalter/jedi/issues/385 addresses this in the near future then. Thank you.
As much as I'd like to have everything on latest Python, that is unfortunately not an option :(. Other than your patch, I'm still tempted to see if it's possible to have a neovim venv for every project version, but not sure if this is viable/practical.
username_1: Why not install `neovim` just per project then?
username_0: I don't have a very compelling answer. I only really need to deal with two Python versions, the latest for internal/personal projects and `3.4.2` for shippable ones. That would mean I could pay the upfront cost of creating two venvs and have vim determine which one to use at startup. I thought maybe I could do this via shelling out to `python --version`, and just doing some naive string manipulation. That is, if I had any idea of how to script in `VimL` and assuming my idea even makes sense to have this work.
Looks like neovim has no real dependencies or potential for conflict, so installing in every venv is probably the next best thing then.
Status: Issue closed
username_1: Yes, it is not that heavyweight.
You only have to do it for the non-default (i.e. old version) also.
Closing the issue then for now. |
rwatts3/vscode-svn | 249438005 | Title: v0.1.0 Planning & Discussions
Question:
username_0: Planning & Discussions for v0.1.0 Release
Below are a few things that will need to be ironed out in order to get the ball rolling on this project.
- [ ] Model / Replicate `scm-hg` or `scm-git`? *this depends on which implementation is more closely related to how we will need to structure `scm-svn`*
Answers:
username_1: Not too familiar with either Mercurial (Hg) or Git. But from what I understand, Hg is fairly simple and to the point. So perhaps less fuss?
What would be the benefits of using TortoiseSVN? Speed of implementation and a familiar GUI? I say we go for the CLI and aim to match the experience of Git and Hg. Built-in diffs, icons, etc.
I propose we start with supporting the basic commit, update, add and delete. This should include indicating when files are uncommitted, added, etc. Then add branch and tag functionalities, diffing, log viewing, etc.
username_2: I'd like to propose not to rely on TortoiseSVN, because it's not platform independent and I would use an svn extension both on Windows and Linux.
There are a couple of svn implementations available for node.js (e.g. spawn-svn), but I'm not familiar with which one would suit best.
username_0: I agree, I want to have the workflow very much in line with the existing git scm
username_0: Just posting an update.
Now that I've received some feedback, I will probably start the first round of development next week.
username_2: +1
username_3: +1 for a cross platform solution that doesn't require specific versions of SVN installed. Would like to contribute/fork if time and code structure permits... currently this Cinderella-like SVN support is the only thing holding me back from doing more work "cross platform" with minimal effort.
username_4: 👍 for platform independent solution
I would like to contribute as well, I can't wait for SVN support 😃
username_2: I'd be willing to support and contribute despite my limited typescript experience.
username_5: Hi, has there been any progress? |
HJReachability/ilqgames | 681311688 | Title: Termination conditions / variable time length games
Question:
username_0: I apologize if it is already covered in the documentation and I missed it, but I was wondering if there is a straightforward/existing approach for using termination conditions to end a game instead of a fixed-time endpoint or fixed-length horizon.
For example, take the basic target guarding problem (a pursuit-evasion, zero-sum game discussed in Isaacs' Differential Games book, pg 19, example 1.9.2). The game terminates when the distance between the pursuer and evader is less than a given threshold and the cost function is only evaluated at the moment of termination.
I could see how to use an "indicator function" to only evaluate the cost when the P-E distance is below the threshold (as described in section V.A of the [ilqgames paper](https://arxiv.org/pdf/1909.04694.pdf)), but this doesn't terminate the game.
Thank you!
Answers:
username_1: Hi @username_0,
Very good question. I've wondered the same thing, though I'm afraid I'm not sure I see how it could be done cleanly. The first idea that comes to mind is to start with a set of strategies, then integrate forward a new trajectory as usual, then check and see when the game would terminate and compute the cost only at that time, then treat the game as finite time with that horizon and final cost, and somehow handle the fact that the final time will change at each iteration with some sort of heuristic. That sort of approach could totally work in this implementation, but I certainly haven't tried it and have no idea how well it would work. If you'd like to try something like that, I'd be happy to discuss further, and needless to say but if you do implement something for this problem based on this repository, I'd be happy to review a pull request and merge it in.
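Sketched as pseudocode, that loop might look like this (all helper names are placeholders for the corresponding solver steps, not APIs from this repository):
```python
def solve_free_final_time(x0, strategies, dt, max_horizon,
                          integrate, terminated, lq_resolve, max_iters=20):
    """Heuristic loop for a variable-horizon game, as described above.

    `integrate`, `terminated`, and `lq_resolve` stand in for the forward
    rollout, the termination check, and the fixed-horizon LQ re-solve.
    """
    horizon = max_horizon
    for _ in range(max_iters):
        traj = integrate(x0, strategies, dt, horizon)
        # First time at which the termination condition fires, if any.
        horizon = next(
            (k * dt for k, x in enumerate(traj) if terminated(x)), horizon)
        # Treat the game as fixed-time with this horizon, evaluating the
        # terminal cost at the termination state.
        strategies = lq_resolve(traj, horizon)
    return strategies
```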
Best,
David
username_0: @username_1 thank you for the insights. I'm still getting spun up with the library and I don't know how much time I'll be able to commit to it, but if I make any progress on this front, I'll discuss on this thread.
username_0: @username_1 I'm coming back to this work after a few months focusing on other projects. It looks like there has been a significant overhaul of some of the timing mechanics (e.g. https://github.com/HJReachability/ilqgames/commit/b3a45e9c3445026f855a60d242861fd9e9974504, https://github.com/HJReachability/ilqgames/commit/9b77c3873afa956d1a761392f2a6eb85ec5b2cb8).
It sounds simple, but I can't figure out how to even specify a fixed final time anymore; let alone a variable final time. Previously I used an expressions like
```
static constexpr Time kTimeHorizon = 30.0;
...
solver_.reset(new ILQSolver(dynamics, {p1_cost, p2_cost}, kTimeHorizon, params));
```
But that functionality seems deprecated. Now it seems that the time horizon is strictly hardcoded to be 10.0 in `types.h` ([see here for reference](https://github.com/HJReachability/ilqgames/blob/master/include/ilqgames/utils/types.h#L138))
Is there a simple way to change the time horizon of my problem to a different fixed value that I am missing?
username_1: Good catches. I realized that it was repetitive to force every new example to declare a separate time horizon, so I just moved timing constants into `include/utils/types.h` under the namespace `::ilqgames::time::`. If you want to change it, just change that number. If you want a variable horizon, then you'll have to be very careful. To the best of my recollection, the only places time horizon shows up are in the core ILQ solver, the innermost loop LQ solver, and the operating point / strategy in each example. To make time horizon vary in each solve, you'll have to adjust each of these things very carefully... but you were always going to have to be very careful with these things and all the refactor has done is move a bunch of replicated member variables into a single float in a shared global namespace. |
altimetrik-onboarding-uy/asalas | 333288377 | Title: as a writer I want to be able to type content in a markdown implementation
Question:
username_0: Generate new inner tab in Post's Detail Page to write content in a markdown implementation
Status: Issue closed
Answers:
username_0: Generate new inner tab in Post's Detail Page to write content in a markdown implementation
Status: Issue closed
username_0: Added a 2nd text area to have the correct setting to use whenever the HTML code with the markdown is needed
username_0: Generate new inner tab in Post's Detail Page to write content in a markdown implementation
username_0: Refactor of the concept.
It now has its own inner tab.
As the person writes in the text area, the content is displayed on the right
Lightning component "Markdown_Preview" is used for this.
Status: Issue closed
|
ANGSD/angsd | 325729638 | Title: PCA single read sampling approach
Question:
username_0: Hi!
I'm trying this new function that was added in the latest versions to do PCA only by sampling one single read per site. I was hoping you could clarify this information that is in the wiki:
"For the PCA / MDS methods you should called SNP sites (use PCA if you do not want to call SNPs). SNPs can be called based on genotype likelihoods (see SNP_calling)".
What kind of information exactly is used to produce the covariance matrix? Is it the information about the precise allele inferred in each site (e.g., A, C, T, G information)? SNPs are inferred based on genotype likelihoods, but the likelihood information is not used?
What is the major difference from this method to the NGStools method that you recommend in the wiki? NGStools is based on genotype likelihoods?
I would really appreciate your help because I'm new to this kind of analyses!
Answers:
username_1: hi
you have the option to use a single sampled read at each site or to use all information using genotype likelihoods
The single read sampling is described here
http://www.popgen.dk/angsd/index.php/PCA_MDS
For the genotype likelihood approach you can use either PCangsd or NGStools.
1) NGStools can be used without calling variable sites but it cannot handle large differences in depth between samples
2) PCAngsd handles any depth differences but you need to call variables. PCAngsd is the method I use.
Both methods have a article that you can read.
-Anders
username_2: I'm closing this issue; feel free to reopen if needed.
Status: Issue closed
|
json-c/json-c | 843342507 | Title: Linking to libjson-c Issue
Question:
username_0: I have tried various ways. But I couldn't understand what the linking really meant.

This is what I'm trying to run.

Answers:
username_1: you should use cmake to build this;
there's a guide here https://github.com/json-c/json-c/wiki#building using cmake + msbuild
but you should already be familiar with how to setup cmake + visual studio on windows;
username_0: Thanks for your response. I opened the link you provided.

I went to README.md from there and did the following

Now I have a "json-c-build" folder in my C Drive. Then I tried this below


Should I skip this?
username_2: By default, cmake will not create a Makefile when you're on Windows, so you need to use the windows-specific instructions in the _next_ paragraph of the README.md file.
Status: Issue closed
username_3: Thank You. It's working.
username_0: Thank you. It's working.
sanmiguel/websocket_client | 986831203 | Title: Error decoding fragmented frames
Question:
username_0: NodeJS websocket server
```
var server = require('websocket').server;
var http = require('http');
var app = http.createServer();
app.listen(8080, function() {
console.log('running on port 8080');
});
var ws = new server({
httpServer: app,
autoAcceptConnections: false,
keepalive: false,
disableNagleAlgorithm: false
});
ws.on('request', function(req) {
if (req.httpRequest.url.indexOf('/0/session/') === 0) {
console.log('new request');
var connection = req.accept('ws', req.origin);
connection.on('message', function(message) {
if (message.type === 'utf8') {
/* Receive integer, send as many bytes as JSON */
var bytes = Number(message.utf8Data);
if (!Number.isNaN(bytes)) {
console.log('sending ' + bytes + ' bytes of data');
connection.send(new Array(bytes + 1).join('.'));
}
}
});
}
});
```
When asked for more than a few tens of kB (exact threshold to be determined), the client crashes with:
```
** (FunctionClauseError) no function clause matching in :websocket_client.disconnected/3
(websocket_client 1.4.2) /home/jean/ws_cli_test/deps/websocket_client/src/websocket_client.erl:291: :websocket_client.disconnected(:cast, :connect, {:context, {:websocket_req, :ws, 'localhost', 8080, '/0/session/', 5000, #Reference<0.1445969634.340525062.178319>, 1, #Port<0.10>, {:transport, :gen_tcp, :tcp, :tcp_closed, :tcp_error, [mode: :binary, active: true, packet: 0]}, "zkOLuxAroVzrOKg721ev0Q==", :undefined, 1, :undefined, :undefined, :undefined}, {:transport, :gen_tcp, :tcp, :tcp_closed, :tcp_error, [mode: :binary, active: true, packet: 0]}, [{"Sec-WebSocket-Protocol", "ws"}], {:ws, 'localhost', 8080, '/0/session/'}, {WsCliTest.Socket, %{}}, "", false, 0})
(stdlib 3.15.2) gen_statem.erl:1194: :gen_statem.loop_state_callback/11
(stdlib 3.15.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
Initial Call: :websocket_client.init/1
Ancestors: [#PID<0.230.0>, #PID<0.229.0>]
Message Queue Length: 0
Messages: []
Links: [#PID<0.230.0>]
Dictionary: []
Trapping Exits: false
Status: :running
Heap Size: 1598
Stack Size: 29
Reductions: 8702
``` |
caicloud/ciao | 368643078 | Title: [UX] non-kubeflow backend
Question:
username_0: /kind feature
Now we are required to install kubeflow to use ciao. To make it easier for people to get started, I think we can do:
- use kubernetes job backend
- only install required operators (can be installed via helm charts)
wdyt? @username_1
Answers:
username_0: yeah, but the installation guide asks me to install kubeflow..
username_1: I am not sure if we will support katib in the future, which is why I wrote that.
username_1: /priority p3 |
pytest-dev/pytest | 568211259 | Title: Use a specific PytestAssertionError class?
Question:
username_0: I think it would be good to have a specific `PytestAssertionError(AssertionError)` class maybe.
That way it would be possible to distinguish between `assert 0` in user code (outside of tests), and e.g. `assert 0, "Pattern {!r} not found in {!r}".format(regexp, str(self.value))` (used by `ExceptionInfo.match` - although that could/should use `Failed` in the first place probably?
The new class could then also be used from within assertion rewriting, so that "assert 0" there gets turned into `PytestAssertionError`.
This is just an idea / asking for feedback, but would allow for custom additions to it then, e.g. with regard to terminal representation (highlighting).
Answers:
username_1: Sounds good for use in pytest assertion helpers like `ExceptionInfo.match`, but I'd want to be careful about rewriting the exception type.
Many users would have trouble predicting which assertions get re-written, and this would now lead to behaviour differences. Would it be possible to improve the terminal representation without using a custom subclass?
(it would also cause some weird side effects in Hypothesis, where we deduplicate bugs based on the exception type and location - you'd get flaky-test warnings if you ran via pytest and then directly) |
stm32-rs/stm32f4xx-hal | 578210054 | Title: add instrutions to README
Question:
username_0: Instructions on how to integrate this crate into your project should be added to the README.
For example, in [this](https://github.com/stm32-rs/stm32f1xx-hal) crate they are documented.
Answers:
username_1: @username_0 Sure, please feel free to PR anything you feel noteworthy.
Status: Issue closed
|
rust-lang/cargo | 119269103 | Title: `cargo doc` ignores build script
Question:
username_0: I have the libcore in a custom location because I'm cross-compiling to a custom target. I specify the link path with a build script. My library builds fine but generating documentation with `cargo doc --target=thumbv7em-none-eabi` fails because it can't find libcore. From `--verbose` I see cargo doesn't run my build script and doesn't pass my link directory.
Answers:
username_0: Still an issue.
I also forgot to say it has worked fine somewhen before Nov 28, 2015.
username_0: Found a workaround. `rustc` and `rustdoc` flags can be set using `RUSTFLAGS` and `RUSTDOCFLAGS` respectively.
So in my case, I used the following command to specify my library directory:
```
RUSTFLAGS="-L lib/thumbv7em-none-eabi" RUSTDOCFLAGS="-L lib/thumbv7em-none-eabi" cargo doc --target=thumbv7em-none-eabi
``` |
openmaplt/vector-map | 257965326 | Title: Cross domain ajax requests
Question:
username_0: ```
XMLHttpRequest cannot load https://tiles.osm.pauliaus.com/all/9/290/162.pbf. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://osm.pauliaus.com' is therefore not allowed access. The response had HTTP status code 504.
```
Answers:
username_1: I try this code: https://jsfiddle.net/0u86r16d/26/
Everything works for me.
Status: Issue closed
|
richfelker/musl-cross-make | 740227837 | Title: Target directory in output folder
Question:
username_0: Is it possible for the header files, libraries, and linker to be placed in the root output directory rather than in the target directory in the sysroot?
So from this...
```
output
|
+ - bin
+ - lib
+ - ...
+ - x86_64-linux-musl
+ - bin
+ - lib
+ - ...
```
To this...
```
output
|
+ - bin
+ - lib
+ - ...
```
Thank you in advance.
Answers:
username_1: You would use `--with-sysroot` not `--with-build-sysroot` for this, but I don't have any idea if it will work - mcm is not setup to be installed like that, and it's rather going against logical organization, since the top-level output dir is binaries/libraries/etc. for the *host* (the system you run the cross toolchain on) and the `$target`-named directory under that is binaries/libraries/etc. for the *target* (the system you're cross-compiling for).
username_0: I attempted to compile with `--with-sysroot`, however to no avail; the target directory was still present in the output directory. Is there any other option that could possibly build the toolchain in the way that I've described?
Also, why is there no compiler in the target directory, e.g. `output/x86_64-linux-musl/bin/cc`?
username_1: Because building a cross compiler and cross-compiling a native compiler for the target are two completely different tasks, and doing them both when you don't need the latter would double the time and space required to build.
username_0: That's fair. Is there not a `NATIVE` option I can set in config.mak?
username_1: Yes, mcm can also cross-compile native compilers for your target, and in this case they don't have the cross directory structure you're unhappy with. You need to already have the cross compiler in your PATH, then run mcm with NATIVE=y, and it should just work.
username_0: I added the compiler to my PATH and set `NATIVE=1` in `COMMON_CONFIG`, however my output directory still has the x86_64-linux-musl directory. Is this the intended result for building a native compiler? When looking into the bin dir in the x86_64... dir I can't seem to find the compiler.
username_0: Upon adding `NATIVE=y` to `config.mak` I receive the following error:
```
make[1]: Entering directory '/home/ben/musl-cross-make/build/x86_64-linux-musl/x86_64-linux-musl'
mkdir -p obj_musl
ln -sf ../../../musl-1.2.1 src_musl
cd obj_musl && ../src_musl/configure --prefix= --host=x86_64-linux-musl
checking for C compiler...
../src_musl/configure: cannot find a C compiler
make[1]: *** [Makefile:226: obj_musl/.lc_configured] Error 1
make[1]: Leaving directory '/home/ben/musl-cross-make/build/x86_64-linux-musl/x86_64-linux-musl'
make: *** [Makefile:182: all] Error 2
```
I've checked to make sure that the compiler exists in the output directory
username_0: After some playing around with the environment I was able to compile a native Musl GCC compiler. Thanks @username_1 !
username_2: It would be really helpful to have a section in the readme on how to build a native compiler. Also it would be nice if it was scripted so it could be built in a single run of `make`. |
postmanlabs/postman-app-support | 612019496 | Title: Release notes has spurious characters in GitHub URLs
Question:
username_0: **Describe the bug**
The JSON coming back from the Release Notes endpoint contains spurious characters (escaping?) in the GitHub URLs, causing some web crawlers to see these as 404s on the rendered page.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'view-source:https://www.postman.com/downloads/release-notes/'
2. Search for 'https:\/\/github'
**Expected behavior**
The rest of the URLs coming back are properly formed. All URLs coming back from this endpoint should be properly formed at the source as `https://github.com/*`
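Worth noting: `\/` is a legal JSON escape for the solidus, so conforming JSON parsers already decode these URLs correctly; it is mainly regex-based consumers such as crawlers that trip over it. A quick demonstration:
```python
import json

# The escaped solidus is valid JSON and decodes to a normal URL.
raw = '{"url": "https:\\/\\/github.com\\/postmanlabs"}'
print(json.loads(raw)["url"])  # https://github.com/postmanlabs
```
Still, emitting plain `/` at the source would avoid the crawler false positives.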
**Screenshots**
<img width="1055" alt="Screen Shot 2020-05-04 at 9 42 21 AM" src="https://user-images.githubusercontent.com/4358288/80991082-57058800-8dec-11ea-9c6d-06ac35e699d5.png">
As seen by web crawler:
<img width="983" alt="Screen Shot 2020-05-04 at 9 33 00 AM" src="https://user-images.githubusercontent.com/4358288/80991104-61278680-8dec-11ea-85f1-5e07bdc0032b.png">
**App information (please complete the following information):**
- n/a (not in app, specifically)
**Additional context**
Status: Issue closed
Answers:
username_1: This has been resolved. Closing. |
plotly/dash | 341140021 | Title: Determining Which Input Has Been Fired
Question:
username_0: We need a way to determine which input has changed. We've provided a temporary hack with `n_clicks_timestamp` but we need something that is more general.
I'm not worried about the implementation, it fits into the `dash-renderer` architecture nicely.
What's not as clear to me is what the `app.callback` decorated functions should look like. How does this interface scale when we want to add more things like:
- Previous state
- Multiple outputs
- Keyword-style arguments instead of list-style arguments
- Which input changed
- Timestamps when inputs changed
- Nested properties (e.g. subscribing to `figure.layout.title`)
- Error handling (e.g. updating an error div container when an exception is thrown)
So, let's use this thread to just propose different interfaces to the callbacks. Of course, doing this in a way that would be backwards compatible is preferred.
***
cc @plotly/dash
Answers:
username_1: Just spitballing here, not actually sure how much I'm into this. One approach could be to use a request object that is passed into the callback functions, as is common with many web frameworks, and then retrieve the component registrations from the request using `element_id.prop_name` identifiers. The request object could track which element fired and the individual component objects could track things like `has_changed` and `prev_value`. eg
```python
app.layout = html.Div([
html.Div(id='target'),
dcc.Input(id='my-input', type='text', value=''),
    html.Button(id='submit1', n_clicks=0, children='Submit 1'),
html.Button(id='submit2', n_clicks=0, children='Submit 2')
])
@app.callback([Output('target', 'children')],
              [Input('submit1', 'n_clicks'), Input('submit2', 'n_clicks')],
[State('my-input', 'value')])
def callback(request):
my_input = request.state['my-input.value']
if my_input.has_changed:
result = f"Input {request.trigger.id} triggered callback; {my_input.id} changed value from {my_input.prev_value} to {my_input.value}"
else:
result = f"Input {request.trigger.id} triggered callback; {my_input.id} did not change value."
return result
```
This also solves the problem of managing unwieldy lists of Input/State that you need to align with the callback function arguments, as I describe as being an issue in #159.
However this would likely mean either a non-backwards compatible change to callback function signatures, or we have two callback functions: the previous simple list of argument values alongside the new request-based one.
username_0: One option at our disposal is checking the number or even the type of arguments in the `def my_callback` function from our decorator and then passing a new set of arguments through. We could also check the types and number of arguments passed into our `app.callback` function.
That is, roughly:
```python
def callback(output, inputs, states):
    def scoped_wrapper(func):
        def wrapper(*args, **kwargs):
            if len(args) == 1 and (len(inputs) + len(states) > 1):
                # e.g. callback signature type 1: pass a single request object
                request = {
                    'inputs': inputs,
                    'states': states
                }
                return func(request)
            else:
                # e.g. existing callback signature
                return func(*(inputs + states))
        return wrapper
    return scoped_wrapper
```
In your example, we'd need some way to differentiate between a callback with a single input and a callback with a single input that uses the `request` object
username_2: Why not make 2 options available for the user to choose from? With `@app.callback(output, inputs, states, as_request=True)` assume signature `request` where `as_request=False` is default. This way if there are more arguments with `as_request=True` exception could be raised to inform the user that he mistakenly used wrong signature.
It may be also good idea to allow configuring `Dash` object with default `Dash(..., callbacks_with_request=False)` but allowing to set `as_request=app.callbacks_with_request` allowing user to define his choice upfront.
username_1: Ah, good point @username_0. Polymorphism through decorators! If we went down the path of a request-like context object, this could well be a good approach to supporting it alongside the original callback signature.
username_0: Note that in general, I'm looking for solutions that are unified and ideally backwards compatible. As in the zen of python, there should be one way to do things.
username_3: You could add another optional parameter I'm the callback: `PreviousState()` which would feed the previous state to the function, not sure what that would look like, but once you have previous state then you can work out which one or many things have changed
username_4: Got a prototype working...
```python
import dash
import dash_html_components as html
from dash.dependencies import Output, Input
from dash.exceptions import PreventUpdate
app = dash.Dash(__name__)
BUTTONS = ['btn-{}'.format(x) for x in range(1, 6)]
app.layout = html.Div([
html.Div([
html.Button(x, id=x) for x in BUTTONS
]),
html.Div(id='output'),
])
@app.callback(Output('output', 'children'),
[Input(x, 'n_clicks') for x in BUTTONS])
def on_click(*args):
if not dash.callback.triggered:
raise PreventUpdate
trigger = dash.callback.triggered[0]
input_value = dash.callback.inputs.get(trigger)
return 'Just clicked {} for the {} time!'.format(trigger, input_value)
if __name__ == '__main__':
app.run_server(debug=True, port=9091)
```
username_5: Hi @username_4,
Looks like a very good example. I have tried it but I got this bug: AttributeError: module 'dash' has no attribute 'callback'. Maybe I am using a different version of Dash? Do you know how to fix it?
username_6: @username_5 this solution has not been published to PyPI yet, it's a WIP at the two pull requests linked just above.
username_5: @username_6 my bad. Can't wait to see how it works. :D
Status: Issue closed
username_5: Should the above example by @username_4 be replaced by this https://github.com/plotly/dash-docs/blob/87c7afd2267bc4b195a1c61ed2c422b043485502/tutorial/examples/faqs/last_clicked_button.py?
username_6: You mean the change from `dash.callback` to `dash.callback_context`? Yes, the example here is out of date. But GitHub issues and PR comments are not documentation, they’re a working conversation, so I wouldn’t want to be going back and sanitizing them after the fact.
username_7: This was helpful! Looks like it's been added to the "FAQs" here: https://dash.plot.ly/faqs
username_8: It seems to be missing from the FAQs now. Could someone point to the new location?
username_0: https://dash.plotly.com/advanced-callbacks |
kirbydesign/designsystem | 910296361 | Title: [Enhancement] Replace HighCharts Bar Chart with ChartJS
Question:
username_0: <!--**Mandatory steps to ensure alignment between stakeholders and the progression of Kirby**-->
<!--In order to ensure steady progress and quality of Kirby, please follow our outlined process. By default four labels are added to new component issues and enhancements. To help Kirby please follow these steps, and remove the labels from the issue when done.-->
<!--*New*-->
<!--Indicates that this is a new issue that has not yet been addressed by the Kirby team. The `New` label will be removed by the Kirby team. -->
<!--*NOT Prioritized*-->
<!--Describe any deadlines for the issue - eg. X needs this done by Y date, to be used in Z sprint. Suggest a milestone for the issue. The `Not Prioritized` label will be removed by the Kirby team. -->
<!--*NOT UX Refined*-->
<!--Make sure the new Component, has a name, can be found in Zeplin, and is used in minimum one reviewed screen. Remove the `NOT UX Refined` label and add links to Zeplin.-->
<!--*NOT Tech Refined*-->
<!--Sketch a solution in technical terms, that is how will the component be enhanced - eg. build it from scratch or build using X Ionic component. Call for a brief meeting or spend enough time with someone from @kirbydesign/kirby-guild to get a "go ahead". Remove the `NOT Tech Refined` label.-->
**Please add a short description of your enhancement request**
This is part of an epic - please have a look at: #1413
The current Bar Chart, which relies on Highcharts, should be replaced with ChartJS.
**Describe the solution you'd like**
The horizontal bar chart type should be used to replace it: https://www.chartjs.org/docs/latest/charts/bar.html#horizontal-bar-chart
## Tasks
### Kick Off:
- [ ] Ensure the enhancement is `UX refined` and aligned with UX
_The component and/or enhancement should be published and available in the [Kirby Styleguide on Zeplin](https://zpl.io/258pXGj)_
- [ ] Ensure the enhancement has been `Tech refined` with @kirbydesign/kirby-guild and this issue is updated with a clear implementation description
_This issue should be in the [Ready to do](https://github.com/kirbydesign/designsystem/projects/1#column-4590936) column of the [Kirby kan-ban board](https://github.com/kirbydesign/designsystem/projects/1) before starting implementation)_
- [ ] Assign yourself to this issue and move it to the [In progress](https://github.com/kirbydesign/designsystem/projects/1#column-4590937) column of the [Kirby kan-ban board](https://github.com/kirbydesign/designsystem/projects/1)
### Code:
- [ ] Create Feature Branch from [master branch](https://github.com/kirbydesign/designsystem/tree/master)
- [ ] Create a draft implementation and push to Github
- [ ] Ask a member of @kirbydesign/kirby-guild for a WIP review by creating a draft Pull Request
- [ ] Implement unit tests
- [ ] Update Cookbook Examples and Showcase, i.e. see [Radio](https://cookbook.kirby.design/home/showcase/radio)
_Also remember to add any relevant new API documentation_
### Review:
- UX review:
- [ ] Ensure implementation is correct in relation to the UX design and the [Kirby Styleguide on Zeplin](https://zpl.io/258pXGj)
- [ ] With UX agree on the version of the implementation
- Code review:
- [ ] Open a pull request (or mark the existing draft PR as `Ready for review`) and ask @kirbydesign/kirby-guild for a review
_Remember to add `closes #issueno` to the description of the PR._
- [ ] Once approved, merge feature branch/PR to master
- [ ] Ask a member of @kirbydesign/kirby-guild to add a link to component showcase from Kirby Component Status and update the version number
:tada: Celebrate<issue_closed>
Status: Issue closed |
AdguardTeam/AdguardForAndroid | 342023693 | Title: IllegalArgumentException: No view found for id 0x7f0f00f7 (com.adguard.android:id/fragment_container) for fragment SslWhitelistFragment
Question:
username_0: Crash report:
```
java.lang.IllegalArgumentException: No view found for id 0x7f0f00f7 (com.adguard.android:id/fragment_container) for fragment SslWhitelistFragment{5e91e1a #14 id=0x7f0f00f7}
at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1422)
at android.support.v4.app.FragmentManagerImpl.moveFragmentToExpectedState(FragmentManager.java:1759)
at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1827)
at android.support.v4.app.BackStackRecord.executeOps(BackStackRecord.java:797)
at android.support.v4.app.FragmentManagerImpl.executeOps(FragmentManager.java:2596)
at android.support.v4.app.FragmentManagerImpl.executeOpsTogether(FragmentManager.java:2383)
at android.support.v4.app.FragmentManagerImpl.removeRedundantOperationsAndExecute(FragmentManager.java:2338)
at android.support.v4.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:2245)
at android.support.v4.app.FragmentManagerImpl$1.run(FragmentManager.java:703)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:207)
at android.app.ActivityThread.main(ActivityThread.java:5728)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:888)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:749)
```
Answers:
username_1: Resolved.
Testing instructions:
- Reinstall AG
- Try to open SslWhite/Black List on tablet
Status: Issue closed
|
limacaiquelg/video-pymaker | 561401461 | Title: Version 1.0: Definition of scope and activities
Question:
username_0: ### Scope of Version 1.0
- [ ] **Initial Activities**
- [ ] Create the repository on GitHub
- [ ] Create the project in PyCharm
- [ ] Define the project scope
___
- [ ] **Orchestrator** _(see the sketch after this list)_
- [ ] Ask for the search term
- [ ] Ask for the search prefix
- [ ] Start the state robot
- [ ] Start the text robot
- [ ] Start the image robot
- [ ] Start the video robot
- [ ] Start the YouTube robot
___
- [ ] **State Robot**
- [ ] Define the data structure
- [ ] Define the function that writes the content (data structure) to a file
- [ ] Define the function that reads the content (data structure) from a file
___
- [ ] **Text Robot**
- [ ] Load the data structure
- [ ] Fetch the content from Wikipedia
- [ ] Sanitize the content
- [ ] Split the content into sentences
- [ ] Interpret the sentences using IBM Watson
- [ ] Add tags
- [ ] Save the data structure
___
- [ ] **Image Robot**
- [ ] Load the data structure
- [ ] Search for the images on Google Images
- [ ] Download the images
- [ ] Save the data structure
___
- [ ] **Video Robot**
- [ ] Load the data structure
- [ ] Prepare the downloaded images
- [ ] Create the sentences with images
- [ ] Create the thumbnail for the video
- [ ] Create the template for the video
- [ ] Render the video
- [ ] Save the data structure
___
- [ ] **YouTube Robot**
- [ ] Authenticate with OAuth
- [ ] Upload the video
- [ ] Upload the thumbnail
_The activities listed above may be modified as needed, always aiming to fulfill the project scope._
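A minimal Python sketch of the orchestrator flow above (the robot functions here are illustrative stand-ins, not the project's actual API):
```python
# Illustrative stand-ins for the project's robots; real names/APIs may differ.
def state_robot(content):
    print("state robot: saving", content)  # would persist the data structure

def text_robot():
    print("text robot running")            # would fetch, sanitize and tag the text

def image_robot():
    print("image robot running")           # would search and download the images

def video_robot():
    print("video robot running")           # would prepare images and render the video

def youtube_robot():
    print("youtube robot running")         # would authenticate and upload video + thumbnail

def main():
    content = {
        "search_term": input("Search term: "),
        "search_prefix": input("Search prefix: "),
    }
    state_robot(content)
    for run in (text_robot, image_robot, video_robot, youtube_robot):
        run()

if __name__ == "__main__":
    main()
```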
Answers:
username_0: - [ ] **Final Adjustments for Version 1.0**
- [ ] Review and fix the bugs identified in the robots
- [ ] Create the project README
- [ ] Commit the generated directories with their respective READMEs
- [ ] Commit the release
Status: Issue closed
username_0: **Version 1.0 finished!**
scieloorg/search-journals | 75325426 | Title: Change the import format of the processing
Question:
username_0: With the latest refactoring we have a pipeline that converts JSON into XML; however, as of version 3.1 Solr accepts JSON as an input format, see: https://wiki.apache.org/solr/UpdateJSONrelease.
Given that JSON is a less verbose, more performant format that is currently more aligned with new technologies, it is important to have both output formats in the search system's processing.
It would be interesting to have a ``-e`` option whose values would be ``XML`` or ``JSON``.
Answers:
username_1: @username_0
Do you really think it is worthwhile to have the 2 formats? XML meets the needs. This XML is used only internally and only for loading records into Lucene.
Having 2 formats means maintaining 2 formats.
username_0: Yes, maintaining two formats is bad :-), but I find JSON more interesting; keeping only JSON would avoid quite a lot in the processing, such as the cost of ``json.loads``, and we would be more aligned with other indexers.
username_1: @username_0
I don't see json.loads as a big problem, especially because you would still have to build the Xylose object to produce a suitable JSON without passing the articlemeta's original JSON straight to Lucene.
To me this is just rework.
Status: Issue closed
username_0: @username_1
I agree that we cannot afford the luxury of rework at this moment.
neutralinojs/neutralinojs | 901802501 | Title: Implement API extesions
Question:
username_0: As explained in: https://github.com/neutralinojs/proposals
Status: Issue closed
Answers:
username_1: Do extensions work on macos? I can't seem to get them to work no matter what I try, even if I use the example extension you created. https://github.com/neutralinojs/neutralinojs/issues/790 |
ccsa-ufrn/seminario-ccsa-old | 228808046 | Title: Participants who did not pay
Question:
username_0: The change I suggested is in Certificates and Proceedings - add participant. When we add the name of a participant who has not paid, that name should not appear. As it stands, the procedure becomes longer: before adding participants we first have to check whether the payment was made. Our reasoning is: if someone did not pay, their name should not be there to be validated for certification purposes. I hope I have been clearer.<issue_closed>
Status: Issue closed |
intellij-rust/intellij-rust | 820865290 | Title: Debug process hangs with MSVC LLDB
Question:
username_0: <!--
Hello and thank you for the issue!
If you would like to report a bug, we have added some points below that you can fill out.
Feel free to remove all the irrelevant text to request a new feature.
-->
## Environment
* **IntelliJ Rust plugin version:** 0.3.142.3705-203
* **Rust toolchain version:** 1.48.0 (7eac88abb 2020-11-16)/1.50.0 (cb75ad5db 2021-02-10) x86_64-pc-windows-msvc
* **IDE name and version:** CLion 2020.3.2 (CL-203.7148.70)
* **Operating system:** Windows 10 10.0
* **Macro expansion engine:** new
* **Name resolution engine:** old
## Problem description
Debugging process sometimes hangs on reading input from stdin. In the particular case, it only happens if you stop on `println!("Hello!")` line. If there isn't a breakpoint on this line, everything works fine. Also, if you drop this line, everything works as expected as well.
Reproducible for me only with MSVC LLDB, with MinGW and GDB it works as expected in all cases.
## Steps to reproduce
* Create project with the following code of the executable
```rust
use std::io::stdin;
fn main() {
println!("Hello!"); // break
let mut buf = String::new();
stdin().read_line(&mut buf);
println!("{}", buf);
}
```
* Set breakpoint on the first line of `main` function
* Start debugger process
* Press `F8` (Step over) several times
* Type any text in console and press enter (to provide input for `read_line` call)
<!--
Please include as much of your codebase as needed to reproduce the error.
If the relevant files are large, please provide a link to a public repository or a [Gist](https://gist.github.com/).
--> |
golemfactory/concent-deployment | 291157984 | Title: JSON error messages from nginx
Question:
username_0: Configure our nginx instances to always return errors in JSON, not HTML.
We have that working for HTTP 403 but 404, 413, 5xx and probably others are not covered. Check which errors are in HTML and make them all JSON.<issue_closed>
Status: Issue closed |
cljsjs/packages | 125965265 | Title: Improve documentation related to deployment
Question:
username_0: Can someone remind me what command-line I should use to push a package?
Answers:
username_1: Some is covered here: https://github.com/cljsjs/packages/blob/master/CONTRIBUTING.md
For pushing to Clojars or similar use Boot's `push` task.
username_0: Ah nice! I was confused because this [page](https://github.com/cljsjs/packages/wiki/Creating-Packages) does not mention the last part regarding deployment (btw the circleci link is dead).
So I understand simply committing the new version is sufficient and the continuous testing will perform the deployment to clojars?
username_2: Yes.
username_0: Great! Thanks.
Status: Issue closed
username_2: Looks like some packages currently fail to build, but the script doesn't correctly set the exit status.
almost/hubbub | 414667986 | Title: Can't find post file
Question:
username_0: Hello!
Whenever I try to submit the comment the popup with "Failed to send comment" emerges and server log shows "Failed to save comment: Can't find post file".
I re-deployed the Heroku node multiple times to make sure that I had entered the correct credentials and repo details.
Webpage: https://username_0.me/jekyll/update/2019/02/05/comments-work-here/
Repo: https://github.com/username_0/username_0.github.io
Heroku log:
```
2019-02-26T15:17:41.358962+00:00 heroku[router]: at=info method=GET path="/hubbub.js" host=hubbub-bot.herokuapp.com request_id=bacb4143-ba23-48d6-a9a6-b9f43ab2977c fwd="172.16.58.3" dyno=web.1 connect=1ms service=3ms status=200 bytes=9427 protocol=https
2019-02-26T15:17:53.225282+00:00 heroku[router]: at=info method=OPTIONS path="/api/default/comments" host=hubbub-bot.herokuapp.com request_id=92d0801d-6385-4339-abec-33a326aa0d24 fwd="172.16.58.3" dyno=web.1 connect=0ms service=1ms status=204 bytes=301 protocol=https
2019-02-26T15:17:53.796484+00:00 heroku[router]: at=info method=POST path="/api/default/comments" host=hubbub-bot.herokuapp.com request_id=aa3e7bfb-f6a7-4ebd-b754-cd3c2bcca3e2 fwd="172.16.58.3" dyno=web.1 connect=0ms service=334ms status=500 bytes=362 protocol=https
2019-02-26T15:17:53.793122+00:00 app[web.1]: Failed to save comment: Can't find post file: _posts/2019-02-05-comments-work-here.markdown
```
Answers:
username_0: hubbub commenter user is @hubbub-bot
Status: Issue closed
|
jlippold/tweakCompatible | 339114322 | Title: `GoodbyeCoverArt` not working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "com.oskarw.goodbyecoverart",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.oskarw.goodbyecoverart",
"deviceId": "iPad7,3",
"url": "http://cydia.saurik.com/package/com.oskarw.goodbyecoverart/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": false,
"packageName": "GoodbyeCoverArt",
"category": "Tweaks",
"repository": "Oskar's Repo",
"name": "GoodbyeCoverArt",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.oskarw.goodbyecoverart",
"commercial": false,
"packageInstalled": false,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Removes the small cover art image on lockscreen",
"latest": "1.0.1",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": "Causes safe mode"
}
``` |
Satbek/CMC-master-dissertation | 423115715 | Title: Compare how the method performs for different super-Gaussian cuts
Question:
username_0: - The wavefront has physical coordinates on the square **[-1,1] x [-1,1]**
- Super-Gaussian order **2**
- Radii **{0.25, 0.5, 0.75}**
The comparison will be shown in a Jupyter notebook
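For reference, a NumPy sketch of the kind of super-Gaussian window being compared (the exact functional form used in the thesis code is an assumption here):
```python
import numpy as np

def supergauss_window(x, y, radius, n=2):
    # Assumed form: exp(-((x^2 + y^2) / radius^2)^n); order n = 2 as above
    r2 = x ** 2 + y ** 2
    return np.exp(-((r2 / radius ** 2) ** n))

# Wavefront grid with physical coordinates on the square [-1, 1] x [-1, 1]
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
windows = {r: supergauss_window(x, y, r) for r in (0.25, 0.5, 0.75)}
```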
Answers:
username_0: Clarifications:
1) the input is two arrays of wavefront (WF) slopes on the square (without the super-Gaussian)
2) the algorithm uses the super-Gaussian in the computation.
3) the WF reconstruction result inside the super-Gaussian circle is compared with the original WF (without the super-Gaussian).
4) depending on the super-Gaussian radius, an error plot over the inner circle is built and the algorithm's sensitivity to the choice of super-Gaussian radius is studied.
username_0: Clarifications:
new metrics are needed
- metric C = max|u(x,y) - v(x,y)| / max|u(x,y)|
- metric L2 = ||u - v|| / ||u||
where || || denotes the L2 norm; everything is computed over the nonzero region
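A NumPy sketch of these metrics (assuming u and v are 2-D arrays and mask selects the nonzero region):
```python
import numpy as np

def metric_c(u, v, mask):
    # C = max|u(x,y) - v(x,y)| / max|u(x,y)| over the nonzero region
    return np.max(np.abs(u[mask] - v[mask])) / np.max(np.abs(u[mask]))

def metric_l2(u, v, mask):
    # L2 = ||u - v|| / ||u||, Euclidean norm over the nonzero region
    return np.linalg.norm(u[mask] - v[mask]) / np.linalg.norm(u[mask])
```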
username_0: There is also a dependence on the steepness of the super-Gaussian (parameter N)
username_0: Check the reconstruction for the case of a cutout at the center of the wavefront
Status: Issue closed
|
bear-metal/tunemygc | 178930842 | Title: Cannot build native extensions under macOS 10.12/Ruby 2.3.1.
Question:
username_0: Hey there! I'm running into an issue with trying to integrate TuneMyGC into my Rails app. When I try to install the gem, Bundler throws an error building the native extensions:
```
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
current directory: /usr/local/rvm/gems/ruby-2.3.1@volmar/gems/tunemygc-1.0.68/ext/tunemygc
/usr/local/rvm/rubies/ruby-2.3.1/bin/ruby -r ./siteconf20160923-17327-1eybjzv.rb extconf.rb
checking for RUBY_INTERNAL_EVENT_GC_END_SWEEP... yes
creating Makefile
To see why this extension failed to compile, please check the mkmf.log which can be found here:
/usr/local/rvm/gems/ruby-2.3.1@volmar/extensions/x86_64-darwin-16/2.3.0/tunemygc-1.0.68/mkmf.log
current directory: /usr/local/rvm/gems/ruby-2.3.1@volmar/gems/tunemygc-1.0.68/ext/tunemygc
make "DESTDIR=" clean
current directory: /usr/local/rvm/gems/ruby-2.3.1@volmar/gems/tunemygc-1.0.68/ext/tunemygc
make "DESTDIR="
compiling getRSS.c
compiling tunemygc_ext.c
tunemygc_ext.c:21:7: warning: implicit declaration of function 'clock_gettime' is invalid in C99
[-Wimplicit-function-declaration]
if (clock_gettime(CLOCK_REALTIME, &ts) == -1) {
^
tunemygc_ext.c:21:21: error: use of undeclared identifier 'CLOCK_REALTIME'
if (clock_gettime(CLOCK_REALTIME, &ts) == -1) {
^
1 warning and 1 error generated.
make: *** [tunemygc_ext.o] Error 1
make failed, exit code 2
Gem files will remain installed in /usr/local/rvm/gems/ruby-2.3.1@volmar/gems/tunemygc-1.0.68
for inspection.
Results logged to
/usr/local/rvm/gems/ruby-2.3.1@volmar/extensions/x86_64-darwin-16/2.3.0/tunemygc-1.0.68/gem_make.out
```
I'm not altogether sure what to make of this myself, but I'd be happy to provide any further debugging information, if it'd be useful, and super-grateful for any advice you have. Thanks in advance for your help!
Answers:
username_1: @username_0 I'll bump my env. to OS X Sierra this evening and try to repro - thx for flagging :-)
username_2: Hey @username_0 are you using the latest (8.0.0) Xcode or could you upgrade to it to confirm this issue still exists? (Sierra + Xcode 8.0.0 seems to work for me)
username_0: I was definitely on a beta of Sierra at the time and it's possible that I was using an old Xcode! At any rate, you're right, it's working fine now. Thanks for the tip, and hooray!

Status: Issue closed
|
xavierpuigf/virtualhome_unity | 1063511613 | Title: Failed to load NavMesh.asset because it was serialized with a newer version of Unity.
Question:
username_0: Hello
I opened the latest version of master in unity 2018.4.4 and whenever I open a test scene I encounter errors similar to this one:
**Failed to load 'D:/Code/virtualhome_unity/Assets/Story Generator/TestScene/TestScene_6/NavMesh.asset' because it was serialized with a newer version of Unity. (Has a higher SerializedFile version)**
One of the authors mentioned that the latest build was made using 2019.X.
Can you please specify the exact Unity version that was used?
Also, you should update the documentation and specify that the project should be opened in Unity 2019.X rather than 2018.4.
Thank you
Answers:
username_1: Hi @username_0! The current codebase uses Unity 2019.4.29 LTS. I will be updating the documents this week, thank you for letting me know!
Status: Issue closed
|
SamYStudiO/es-theme-next-pixel | 661503921 | Title: For RecalBox 7.0+, some mismatches in readme files ;-)
Question:
username_0: Hi username_1,
I saw that you are preparing for the arrival of RecalBox 7.0+... just for your info, sometimes the details refer to v1.3 (lisezmoi.txt) and other times to v1.4 (readme.txt)... is that normal?
Good job in all cases ;-)
Regards,
Valéry
Answers:
username_1: Hello username_0,
Thx, this should be ok by now, let me know if you see more.
++
Status: Issue closed
|
covertsan/Test-Py | 178829109 | Title: from py 1474623168.3
Question:
username_0: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse interdum diam sit amet arcu imperdiet mollis. Proin at aliquet augue. Praesent ut pharetra lectus. Nunc hendrerit nibh augue, in vehicula est scelerisque facilisis. Phasellus tincidunt turpis convallis sagittis congue. Vivamus quis mauris eu est posuere scelerisque vitae quis lorem. Vivamus sit amet commodo orci. Mauris viverra dignissim nibh, id interdum massa ultricies sed. Vivamus sit amet consectetur ante. In non nibh vitae orci eleifend congue non ullamcorper risus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nunc a convallis tellus, in accumsan nunc.
Duis nunc quam, aliquam nec faucibus non, efficitur nec nunc. Donec dapibus ligula vitae nunc hendrerit, in rhoncus velit pellentesque. Nulla turpis nibh, luctus in libero vitae, condimentum porta lacus. Ut vitae mauris ullamcorper, iaculis ex mattis, feugiat elit. Ut aliquam volutpat gravida. Morbi sed accumsan neque, sed sagittis ipsum. Praesent vel rutrum purus. Nulla facilisi. Sed quis nisl mauris.
Nullam purus magna, sollicitudin in nisi vitae, pretium euismod purus. Proin eleifend in tellus quis aliquam. Donec tristique tortor id ipsum tristique rhoncus. In id leo sed mi vehicula suscipit et in velit. In hac habitasse platea dictumst. Suspendisse scelerisque dui eros, eu lobortis mi finibus eget. In est nisi, maximus nec tincidunt eu, consectetur vel lorem. Etiam tempor rutrum semper. Nunc semper ac est at malesuada. Integer euismod convallis aliquam. Ut sed dui a ex imperdiet molestie. Praesent mauris est, pharetra sit amet velit vel, interdum scelerisque enim. Ut in nisl et nibh varius semper dignissim a arcu. Nunc scelerisque, mi sed rhoncus congue, ipsum tortor rhoncus neque, nec blandit risus lorem vel sapien.
Suspendisse ultricies aliquet ipsum eget pretium. Mauris semper pulvinar lectus eget suscipit. Donec placerat libero id tortor euismod, id posuere ipsum laoreet. Proin tincidunt, velit non scelerisque pulvinar, mauris lacus feugiat dolor, et convallis arcu risus vitae erat. Quisque at ex in nisl posuere rhoncus. Donec sed facilisis eros. Vestibulum et lectus quis eros elementum ullamcorper. Pellentesque efficitur vulputate sapien sed efficitur. Maecenas commodo tempus blandit. Vivamus vel metus ac ex fermentum finibus a sit amet elit. Etiam in efficitur metus. Aliquam erat volutpat. Aliquam lacinia dapibus risus eu gravida. Duis tristique tincidunt ante, ut facilisis justo convallis eget. In rutrum interdum urna, nec ullamcorper dolor. Nam lacinia mollis mi, vel venenatis diam.
Sed eget libero sed tortor tempor dictum vitae vitae turpis. Etiam quam felis, euismod sit amet lacinia vel, dignissim a leo. In accumsan venenatis vehicula. Vivamus diam massa, consectetur eleifend erat quis, consectetur imperdiet purus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas ut est blandit, feugiat sem blandit, tristique ex. Nulla aliquam nulla odio, non tempus ligula vehicula nec. Donec malesuada mollis metus. Mauris ornare, leo in euismod malesuada, purus libero egestas lorem, vel egestas tellus libero in lorem. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Donec dignissim ex sed diam malesuada, nec tempor nunc cursus. Praesent eget posuere turpis, in elementum eros.
Vestibulum eget faucibus mauris. Curabitur ut nisl ante. Sed finibus venenatis dui, non pharetra quam feugiat non. Praesent molestie odio turpis, at vehicula lacus porta nec. Aliquam eu arcu leo. Morbi lacus nisi, fringilla nec vulputate eu, egestas dapibus nisl. Donec dignissim, sem ac maximus porttitor, arcu odio laoreet dolor, tristique suscipit neque est eget magna. Mauris ac lobortis arcu. In hac habitasse platea dictumst. Maecenas tempus euismod nunc in accumsan. Nunc quis efficitur est, a porttitor nisl. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Nam volutpat libero vitae dolor luctus semper. Integer varius, lacus in dictum tristique, arcu neque venenatis diam, non luctus ex nulla sed.<issue_closed>
Status: Issue closed |
allenai/allennlp | 401581329 | Title: MultiLabelField's `empty` method always fails
Question:
username_0: **Describe the bug**
MultiLabelField's `empty` method always fails, preventing it from being used with padding.
**To Reproduce**
Steps to reproduce the behavior
```python
from allennlp.data.fields import MultiLabelField
f = MultiLabelField([])
f.empty_field()
```
Prints error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/julian/.dotfiles/local/miniconda3/lib/python3.6/site-packages/allennlp/data/fields/multilabel_field.py", line 120, in empty_field
return MultiLabelField([], self._label_namespace, skip_indexing=True)
File "/Users/julian/.dotfiles/local/miniconda3/lib/python3.6/site-packages/allennlp/data/fields/multilabel_field.py", line 68, in __init__
raise ConfigurationError("In order to skip indexing, num_labels can't be None.")
allennlp.common.checks.ConfigurationError: "In order to skip indexing, num_labels can't be None."
```
Also,
```python
f = MultiLabelField([], skip_indexing = True, num_labels = 4)
f.empty_field()
```
throws the same error.
**Expected behavior**
Basically I think the implementation of `empty_field(self)` should be something like:
```python
return MultiLabelField([], self._label_namespace, skip_indexing=self._num_labels is not None, num_labels=self._num_labels)
```
This way it would bypass the indexing step when possible (i.e., `num_labels` is already known, for example if we're producing an empty field from an already-indexed one), but always do it when necessary (i.e., when `num_labels` is not already known).
**System (please complete the following information):**
- OS: OSX
- Python version: 3.6.5
- AllenNLP version: v0.8.0
**Additional context**
This makes it impossible to use MultiLabelField inside ListField. Surprising that nobody has come across this before but I guess MultiLabelField is not often used...
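For instance, a minimal sketch of the failing padding scenario (against the AllenNLP 0.8 API):
```python
from allennlp.data.fields import ListField, MultiLabelField

# Padding a ListField calls empty_field() on its first element,
# which currently raises the ConfigurationError shown above.
list_field = ListField([MultiLabelField(["cat"], label_namespace="labels")])
empty = list_field.empty_field()  # fails until MultiLabelField.empty_field is fixed
```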
Answers:
username_1: Thanks for finding this; PR welcome!
Status: Issue closed
|
WarEmu/WarBugs | 833054822 | Title: magus skill bug.
Question:
username_0: <!--
Issues should be unique. Check if someone else reported
the issue first, and please don't report duplicates.
Only ONE issue in a report. Don't forget screens or a video.
-->
**Expected behavior and actual behavior:**
**Steps to reproduce the problem:**
**Testing Screenshots/Videos/Evidences (always needed):**
<!-- Drag and drop an image file here to include it directly in the bug report,
no need to upload it to another site -->
The Magus summons a demon and takes a buff, and the demon attacks; sometimes the demon doesn't get summoned.
<!--
Note that game critical and game breaking bugs may award a manticore/griffon (realm specific) at the leads discretion however, asking for one instantly disqualifies you from this reward.
-->
The summoned demon doesn't attack, and Magus abilities can't be used.
Answers:
username_0: https://youtu.be/dmme1i7QPiY
https://youtu.be/fcj1lipiuxQ
Resummoning with the Magus core ability works afterwards, but the next newly summoned demon does not work.
username_1: Hey,
I've tried to reproduce your issues, but it works fine for me. I'll close this ticket now.
If you are the opinion the issue still persists, please open a new ticket and we will take a look into your issue again.
Status: Issue closed
|
KSP-CKAN/NetKAN | 639951023 | Title: [Mod] "Real Fuels" and "ScrapYard" do not have new versions in CKAN for more than a day
Question:
username_0: 
Answers:
username_1: The RealFuels release is a pre-release. CKAN doesn't index pre-releases.

The latest version of ScrapYard on SpaceDock is 2.1.0.0.
https://spacedock.info/mod/1746/ScrapYard
Status: Issue closed
|
swagger-api/swagger-ui | 188992426 | Title: Selected OAuth2 scopes are not respected on authentication
Question:
username_0: No matter which OAuth2 scopes are selected using checkboxes, the crafted request includes all of the available ones.
Steps to reproduce:
1. Open Petstore demo: http://petstore.swagger.io/
2. Click "Authorize" in the top right corner of the page
3. Select only one of the OAuth2 scopes, e.g. `write:pets`
4. Click "Authorize"
The result:
A new page is opened with the following URL: `http://petstore.swagger.io/oauth/dialog?response_type=token&redirect_uri=http%3A%2F%2Fpetstore.swagger.io%2Fo2c.html&realm=your-realms&client_id=your-client-id&scope=write%3Apets%2Cread%3Apets&state=petstore_auth`. The URL includes `write:pets` AND `read:pets`.
The expected result:
A new page is opened with the following URL: `http://petstore.swagger.io/oauth/dialog?response_type=token&redirect_uri=http%3A%2F%2Fpetstore.swagger.io%2Fo2c.html&realm=your-realms&client_id=your-client-id&scope=write%3Apets&state=petstore_auth`. The URL includes ONLY `write:pets`.
Status: Issue closed |
jlippold/tweakCompatible | 484919277 | Title: `PictureInPicture` not working on iOS 12.4
Question:
username_0: ```
{
"packageId": "com.rpetrich.pictureinpicture",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.rpetrich.pictureinpicture",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/com.rpetrich.pictureinpicture/",
"iOSVersion": "12.4",
"packageVersionIndexed": true,
"packageName": "PictureInPicture",
"category": "Tweaks",
"repository": "rpetrich repo",
"name": "PictureInPicture",
"installed": "",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "com.rpetrich.pictureinpicture",
"commercial": true,
"packageInstalled": false,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Picture-in-picture for video; multitask while playing movies",
"latest": "0.9",
"author": "<NAME>",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
vjrantal/bot-sample | 223833072 | Title: azure function context.done
Question:
username_0: Hi,
thanks for these examples, they are pretty clear.
But while I was actually testing the Azure Function, I remembered the context object and its "done" function, which should be called as soon as the computation has ended. Is there any clear guidance on how and when it should be invoked?
Thank you again.
Status: Issue closed
Answers:
username_1: @username_0 The logic that tells the Functions runtime that the computation is ended is included in the Microsoft Bot Framework SDK code so you are not responsible for calling a done function. If you see some unexpected behavior when you try the code in this repository, please open an issue with the error message and reproduce steps. Thanks for your comment! |
taichi-dev/taichi | 962743410 | Title: PLEASE STOP UPGRADING TO v0.7.28
Question:
username_0: Recently Taichi's build process has been streamlined. Unfortunately, this has resulted in the wheel not being built correctly. Please keep your Taichi version at `v0.7.26` for the moment. Thanks for your patience!
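In the meantime you can stay on the last good release by pinning it explicitly, e.g. `pip install taichi==0.7.26`.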
Answers:
username_1: I encounter the same problem when creating a new acc in Linux and trying to run some taichi programs XD
Status: Issue closed
username_0: Yanked `v0.7.28` |
MicrosoftDocs/azure-docs | 635263561 | Title: Can I control the Spoke VNet routing from vWAN HUB towards OnPrem Devices? Is there multi-tenancy supported from Azure vWAN?
Question:
username_0: Can I control the Spoke VNet routing from vWAN HUB towards OnPrem Devices? Is there multi-tenancy supported from Azure vWAN?
e.g. If I have Spoke VNets (VNet-1, VNet-2, VNet-3 & VNet-4) all are connected to vWAN HUB using VNet connections option.
Now I have On Premises Device-1 and On Premises Device-2 connected to vWAN HUB VPN gateway using IPSec tunnel + BGP.
Both devices (Device-1 & Device-2) will learn all 4 Spoke VNets (VNet-1, VNet-2, VNet-3 & VNet-4) via BGP.
Is it possible if I can control the routing at vWAN HUB level so that only 2 Spoke VNets (VNet-1 & VNet-2) are learnt by Device-1.
Similarly, only 2 Spoke VNets ((VNet-3 & VNet-4) are learnt by Device-2.
If it is possible, please add the example for this use case as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ff59f117-79c3-f134-f6ff-9c3745a7cd52
* Version Independent ID: 6e1bf338-5501-b1d7-6b2d-3dc98dc25a60
* Content: [Virtual WAN: Create virtual hub route table to NVA: Azure portal](https://docs.microsoft.com/en-us/azure/virtual-wan/virtual-wan-route-table-portal)
* Content Source: [articles/virtual-wan/virtual-wan-route-table-portal.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-wan/virtual-wan-route-table-portal.md)
* Service: **virtual-wan**
* GitHub Login: @cherylmc
* Microsoft Alias: **cherylmc** |
su27/qcloud_cos_py3 | 284423184 | Title: Get code -48, message ERROR_UNKNOWN when upload file.
Question:
username_0: Get response {'code': -48, 'message': 'ERROR_UNKOWN', 'request_id': '*********************'} when using upload_file. Any idea?
Answers:
username_1: The message is too brief, I can't idenfity where the problem is. Maybe you can check if everything(environment, apikey/secret, path, local file) is ok and give more tries.
username_0: You are sure this SDK currently works fine, right?
Maybe it's my local problem.
username_1: Yes, you can run the testcases (I just checked), which have covered all the functions in the SDK and have made sure they work fine.
https://cloud.tencent.com/document/product/436/8432 Here's the error code table, but -48 is undocumented, unfortunately.
username_0: OK, thanks.
username_0: Is this right?
```
with open(local_file_name, "rb") as f:
    file_object = f.read()
r = bucket.upload_file(file_object, "test.jpg",
                       "testdir", mime='image/jpeg')
f.close()
```
username_0: I'm pretty sure there is something wrong with upload_file: the "cos_path" parameter in the "sign_more" function is empty. This may be the cause of my problem.
username_1: Are you sure you're using my latest code? Because if you do this:
```
r = bucket.upload_file(file_object, "test.jpg", "testdir", mime='image/jpeg')
```
You will get a fatal error because only 2 positional arguments are allowed.
And the `cos_path` must be empty.
Status: Issue closed
|
peerplays-network/peerplays | 497681493 | Title: Attacker can DDOS a regular node with invalid item hashes
Question:
username_0: **Bug Description**
An attacker or a buggy node can send a lot of incorrect item hashes before it is disconnected, and the attacked node will then request these items from its other peers, even though the first peer was recognized as an unsafe source of information and disconnected.
**Porting from Bitshares or other Graphene forks**
Corresponding PR:
- https://github.com/bitshares/bitshares-core/pull/1007
**Impacts**
Describe which portion(s) of Peerplays may be impacted by this bug. Please tick at least one box.
- [ ] API (the application programming interface)
- [ ] Build (the build process or something prior to compiled code)
- [ ] CLI (the command line wallet)
- [ ] Deployment (the deployment process after building such as Docker, Gitlab, etc.)
- [*] P2P (the peer-to-peer network for transaction/block propagation)
- [*] Performance (system or user efficiency, etc.)
- [ ] Protocol (the blockchain logic, consensus, validation, etc.)
- [*] Security (the security of system or user data, etc.)
- [ ] UX (the User Experience)
- [ ] Other (please add below)
**Steps To Reproduce**
To reproduce this bug you need to create and start a node that emulates the attacker logic. This attacker node should send a lot of blocks with invalid item hashes; the attacked node should disconnect it but will still request these invalid items from other peers.
**Expected Behavior**
The attacked node should not request from its other peers invalid items received from a disconnected attacker.
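Illustrative Python pseudocode of the flawed bookkeeping (not the actual Graphene C++; all names here are invented for illustration):
```python
pending_fetch = set()   # item hashes queued for fetching, shared across peers

def hashes_look_valid(item_hashes):     # stand-in for the real validation
    return all(h.startswith("00") for h in item_hashes)

def disconnect(peer):                   # stand-in for dropping the connection
    print("disconnected", peer)

def on_item_inventory(peer, item_hashes):
    pending_fetch.update(item_hashes)   # queued *before* the source is vetted
    if not hashes_look_valid(item_hashes):
        disconnect(peer)                # the attacker is dropped...
    # ...but its bogus hashes stay in pending_fetch and will still be
    # requested from the remaining, honest peers.

on_item_inventory("attacker", ["ff01", "ff02"])
print(pending_fetch)                    # the invalid hashes remain queued
```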
pyeve/cerberus | 306849548 | Title: UserWarning while running sample code for class-based Validator
Question:
username_0: I have copied example code for a class-based Validator from here: http://cerberus-sanhe.readthedocs.io/customize.html#class-validator
```
from cerberus import Validator
class MyValidator(Validator):
def _validate_isodd(self, isodd, field, value):
if isodd and not bool(value & 1):
self._error(field, "Must be an odd number")
```
Unfortunately the code emits user warning: (python3.5, cerberus 1.1)
```
/usr/local/lib/python3.5/dist-packages/cerberus/validator.py:1338: UserWarning: No validation schema is defined for the arguments of rule 'isodd'
"'%s'" % method_name.split('_', 2)[-1])
```
Answers:
username_1: where is that resource originating from? please consult the docs provided by @nicolaiarocci.
the UserWarning fortunately works as expected.
Status: Issue closed
username_0: @username_1 turns out the docstring is important. The following code works:
```
class MyValidator(Validator):
def _validate_isodd(self, isodd, field, value):
""" Test the oddity of a value.
        The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
if isodd and not bool(value & 1):
self._error(field, "Must be an odd number")
```
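For reference, a minimal usage sketch of this validator (error formatting as in Cerberus 1.x; uses the MyValidator class above):
```python
v = MyValidator({'amount': {'type': 'integer', 'isodd': True}})

assert not v.validate({'amount': 2})
print(v.errors)   # {'amount': ['Must be an odd number']}

assert v.validate({'amount': 3})
```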
username_1: i know. but do you know what resource that is you quoted and how did you come across it?
username_0: Looks like an old fork of cerberus. My bad for not looking for the exact URL.
username_2: Note that you need to include this text literally
```
The rule's arguments are validated against this schema:
``` |