repo_name | issue_id | text
---|---|---
Keeper-Security/Commander | 1018317221 | Title: Persistent-login only works once
Question:
username_0: I have persistent login enabled on my Commander. If I run the `this-device` command, it gives the following info:
```
Device Name: Commander CLI on win32
Client Version: Commander 16.2.3
Data Key Present: YES
IP Auto Approve: ON
Persistent Login: ON
Logout Timeout: 30 days
```
The next time I open Keeper, it works fine and logs in automatically.
But if I close that one and open Keeper again, it asks for the password again, even though persistent login is still on.
Answers:
username_1: How do you close the Commander? `quit` or `logout`?
Do you see any message about registering a new device in the Commander output?
username_1: Is it possible that it is an enterprise account and the enterprise policy blocks the "Persistent Login" feature? |
huettenhain/bint | 230661296 | Title: Add a bprintf routine like GMP does
Question:
username_0: GMP has a version of `printf` whose format strings can contain expressions for its big integer type. This is very useful. Also: if two bigints `a` and `b` share the same context, then
```c
printf("%s = %s\n", bint_to_str(a,10), bint_to_str(b,10));
```
will not have the desired effect, because `bint_to_str` uses the context buffer to store the string representation. With our own `printf`-like function, this could work without allocating additional memory, though.
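As a rough sketch of the idea (the `%B` specifier, the `bint` forward declaration, and `bint_write` are assumptions for illustration, not the actual bint API):
```c
#include <stdarg.h>
#include <stdio.h>

typedef struct bint bint;                      /* assumed opaque big-integer type */
int bint_write(FILE *out, bint *n, int base);  /* assumed: streams digits directly */

int bint_fprintf(FILE *out, const char *fmt, ...) {
    va_list args;
    int written = 0;
    va_start(args, fmt);
    for (const char *p = fmt; *p; p++) {
        if (p[0] == '%' && p[1] == 'B') {      /* invented big-integer specifier */
            written += bint_write(out, va_arg(args, bint *), 10);
            p++;                               /* skip the 'B' */
        } else {
            fputc(*p, out);
            written++;
        }
    }
    va_end(args);
    return written;
}
```
Usage would then be `bint_fprintf(stdout, "%B = %B\n", a, b);` - each big integer is streamed straight to the output when its specifier is hit, so nothing is staged in the shared context buffer. |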
dib-lab/genome-grist | 1073894187 | Title: can't download gather genomes starting from gather csv file
Question:
username_0: I'm trying to use a gather csv output generated by another workflow as input to genome-grist, with the goal of downloading those gather matches.
Genome-grist version: 0.7.3
My conf file looks like this:
```
sample:
- gather_gtdb-rs202-genomic
outdir: outputs
metagenome_trim_memory: 1e9
```
I have the gather csv here:
```
outputs/genbank/gather_gtdb-rs202-genomic.x.genbank.gather.csv
```
When I try and run the `download_matching_genomes` target directly, I get this error:
```
$ genome-grist run conf/genome-grist-conf.yml download_matching_genomes
```
```
sample: ['gather_gtdb-rs202-genomic']
outdir: outputs
Building DAG of jobs...
InputFunctionException in line 189 of /home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile:
Error:
WorkflowError:
Missing wildcard values for sample
Wildcards:
Traceback:
File "/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile", line 160, in __call__
Error in snakemake invocation: Command '['snakemake', '-s', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile', '-j', '1', '--use-conda', 'download_matching_genomes', '--configfile', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/defaults.conf', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/system.conf', 'conf/genome-grist-conf.yml']' returned non-zero exit status 1.
```
My workaround used to be to use the `make_sgc_conf` target, but now I get this error when I do that:
```
$ genome-grist run conf/genome-grist-conf.yml make_sgc_conf
sample: ['gather_gtdb-rs202-genomic']
outdir: outputs
Building DAG of jobs...
MissingInputException in line 214 of /home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile:
Missing input files for rule make_sgc_conf:
outputs/sgc/gather_gtdb-rs202-genomic.conf
Error in snakemake invocation: Command '['snakemake', '-s', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile', '-j', '1', '--use-conda', 'make_sgc_conf', '--configfile', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/defaults.conf', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/system.conf', 'conf/genome-grist-conf.yml']' returned non-zero exit status 1.
```
When I try and run the rules directly, I get this error:
```
$ genome-grist run conf/genome-grist-conf.yml --until download_matching_genome_wc -j 1
sample: ['gather_gtdb-rs202-genomic']
outdir: outputs
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Traceback (most recent call last):
File "/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/snakemake/__init__.py", line 699, in snakemake
success = workflow.execute(
File "/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/snakemake/workflow.py", line 1039, in execute
logger.run_info("\n".join(dag.stats()))
File "/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/snakemake/dag.py", line 2153, in stats
"min threads": min(min_threads.values()),
ValueError: min() arg is an empty sequence
Error in snakemake invocation: Command '['snakemake', '-s', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/Snakefile', '-j', '1', '--use-conda', '--until', 'download_matching_genome_wc', '-j', '1', '--configfile', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/defaults.conf', '/home/tereiter/github/2021-metapangenome-example/.snakemake/conda/86762d5f0fa5b4ff96cae083277aef7f/lib/python3.10/site-packages/genome_grist/conf/system.conf', 'conf/genome-grist-conf.yml']' returned non-zero exit status 1.
```
I'm thoroughly confused by these goings-on, and not really sure where to start with the debugging.
Answers:
username_1: Looking into this - and I think I understand it, but am not sure how cleanly I can explain it :). Fixes incoming!
The basic problem is this: the `download_matching_genomes` target does not provide a sample name.
---
More specifically,
```
rule download_matching_genomes:
input:
Checkpoint_GatherResults("genbank_genomes/{acc}_genomic.fna.gz"),
```
doesn't specify a sample wildcard with `{sample}`, and that's why you get the error message `Missing wildcard values for sample`.
The reason why `make_sgc_conf` works is that the rule provides `sample`, like so:
```
rule make_sgc_conf:
input:
expand(f"{outdir}/sgc/{{sample}}.conf", sample=SAMPLES)
```
which then passes through the `create_sgc_conf_wc` rule et voila.
---
Digging into this more, it looks like the real root of the problem is that I am using the wildcard-handling class `Checkpoint_GatherResults` inconsistently. In some places, I do:
```
Checkpoint_GatherResults(outdir + f"/minimap/{{sample}}.x.{{acc}}.consensus.fa.gz")
```
where I request that snakemake fill in both `sample` and `acc`.
In other places, I do:
```
Checkpoint_GatherResults('genbank_genomes/{acc}.info.csv')
```
where I only request `acc`.
One solution is to split the `Checkpoint_GatherResults` class in two - one that does it for all samples, the other that does it for whichever sample is requested by the passed-in pattern.
A different solution is to adjust the `Checkpoint_GatherResults` class to automatically do the thing for all samples, if no sample is specified.
Either way, a better error message is called for :)
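Roughly, the split could look like this (very much a sketch - `get_gather_accs` stands in for the real checkpoint-reading logic, and `SAMPLES` is the configured sample list):
```python
from snakemake.io import expand

class Checkpoint_GatherResults:
    """Fill in {acc} for the one sample carried by the rule's wildcards."""
    def __init__(self, pattern):
        self.pattern = pattern

    def __call__(self, wildcards):
        accs = get_gather_accs(wildcards.sample)   # assumed helper
        return expand(self.pattern, sample=wildcards.sample, acc=accs)

class Checkpoint_GatherResults_AllSamples(Checkpoint_GatherResults):
    """Variant for rules (like download_matching_genomes) that carry no
    {sample} wildcard: expand over every configured sample instead."""
    def __call__(self, wildcards):
        return [f for sample in SAMPLES
                for f in expand(self.pattern, sample=sample,
                                acc=get_gather_accs(sample))]
```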
---
Also, while the error message from trying to run `download_matching_genome_wc` is confusing, all of the _wc rules are wildcard rules and can't be run directly - you need to ask for a specific file output, or run a concrete rule.
Status: Issue closed
|
riscv/riscv-fast-interrupt | 598215775 | Title: Clarify interrupt selection
Question:
username_0: Please can you explicitly state what _enabled_ means in this context? I assume it is `clicintip[i]` for the interrupt _and_ the relevant `mstatus.xie` bit, but this is not stated.
Answers:
username_1: Yes, both clicintie[i] and mstatus.xie should be enabled. clicintie[i] is the individual control bit while mstatus.xie is a global control bit for that particular mode (only valid in the corresponding mode).
This will be clarified in the spec later.
Status: Issue closed
|
dnnsoftware/Dnn.Platform | 340523442 | Title: Search for Umlauts, accents and other special characters
Question:
username_0: ## Description
Search does not work for words that contain special characters like ä, ö, ß etc.
## Steps to reproduce
In an HTML module, put a phrase like "Die Stadt, in der ich wohne, betreut viele Kindergärten". Make sure that the index is up to date, and then search for "Kinderg" - you will see a result link, but as soon as you add "ä" to it, the result is gone.
## Current result
No search result, when the search word contains special characters.
## Expected result
Search result should be available.
## Affected version
<!-- Check all that apply and add more if necessary -->
* [x] 9.2
* [x] 9.1.1
* [x] 9.1
* [x] 9.0
## Affected browser
<!--
Check all that apply and add more if necessary.
If possible, please also specify exact versions and mention the operating system
-->
* [X] Chrome
* [X] Firefox
* [X] Safari
* [X] Internet Explorer
* [X] Edge
This is NOT a browser issue.
Answers:
username_1: this might be an issue of the Lucene search index not being language-aware.
username_0: Sorry, Sebastian, this answer is not helpful :-)
I think the problem is that the text is stored HTML-encoded, and that could be "Kinderg&auml;rten" or "Kinderg&#228;rten" depending on the method used in the module and/or HTML provider - so the indexed content should be HTML-decoded to get results.
username_1: Michael,
of course the content of the HTML table needs to be HTML-decoded by the API before passing it to the indexer, and I just tested on a 7.4.2 site that this worked as expected before (using the HTML or AF module). Did you check whether umlauts in other modules are also not indexed?
username_0: I know it has been working before. I just tried that using an HTML module now with 7.4.2 and 8.0.4 - it did not work there either. I created a ticket about that in Jira when DNN 9.0.0 was released, but this ticket did not make it to here, therefore I created this one here as well.
username_1: hm, it works on my site with DNN 7.4.2, but not on yours with the same version - strange? Did you make sure the indexer is executed properly?
username_1: @username_0 would you mind checking again with DNN 9.2.2? AFAIK there have been fixes applied by ESW.
Status: Issue closed
username_0: @username_1 It works now! |
ezsystems/launchpad | 383275509 | Title: Check if docker is alive before starting projectwizard in "init"
Question:
username_0: | Q | A
| ---------------- | -----
| Bug report? | yes, _or rather a DX improvement_
| Feature request? | no
| BC Break report? | no
| RFC? | no
| Version | x.y.z
| Environment | Mac/Windows
To avoid having to redo all the steps if you forget to start the Docker server.
Probably most relevant for Windows and Mac, as this can happen if Docker server/app is stopped/paused.
Unsure which command is best to check against, but here is one possibility:
```
$ docker system info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```
Status: Issue closed |
AusDTO/gov-au-beta | 169269044 | Title: Remove text from short summary sections from two pages
Question:
username_0: Issue #338 inadvertently reintroduced text that had been removed from the short summary sections in a previous build of the platform.
Please remove the text, "The Australian Government's ministers include Cabinet ministers, outer ministers, assistant ministers, ministers assisting and parliamentary secretaries," from `/ministers`
Please remove the text, "The Australian Government has 18 departments and 189 agencies," from `/departments`
However, please retain the functionality of the short summaries for now. Please just remove the text.
Answers:
username_1: Doesn't #338 explicitly ask for the content to be added?
username_2: Tracked in SITES-607
Status: Issue closed
username_2: Released to Production in https://github.com/AusDTO/gov-au-beta/releases/tag/v1.1.0 |
SonarSonic/Calculator | 329036867 | Title: [5.0.6/1.12] Conductor Mast crash when breaking top block
Question:
username_0: Attempting to break the middle/top block of the Conductor Mast crashes the client (breaking the bottom block does not crash). Crash report provided below.
* Forge Version 14.23.3.2655
* Calculator 5.0.6
* SonarCore 5.0.10
* Singleplayer
* Crash Report Link: https://gist.github.com/username_0/0edb9288db0f0a5b57f018522b5c67ad |
youpin-city/youpin-api | 222958338 | Title: Executive-level admins do not need to get notifications
Question:
username_0: # Problem
Executives of the projects need to be able to manage all issues and see all insights, at the same level as an organization admin. But executives do not need notifications.
The first solution is to let each user configure a custom notification preference.
The second is to create a new role with the same level of privileges that does not get any notifications.
Answers:
username_1: Let's go for the easiest way which is to create a new `executive_admin`.
username_0: @username_1 Ok! Let's go
Status: Issue closed
|
aspnetboilerplate/aspnetboilerplate | 589568093 | Title: AbpuserManager.ReplaceclaimAsync does not work
Question:
username_0: abp .net core 5.4 version
```
var cs = await userManager.GetClaimsAsync(loginResult.User); // this works
var sessionKey = cs.Single(c => c.Type == "session_key"); // ClaimType = "session_key", Value = "2222"
// ct.WeChatMiniProgramUser.session_key = "777", but the following call does not work
var claimRT = await userManager.ReplaceClaimAsync(loginResult.User, sessionKey, new Claim("session_key", ct.WeChatMiniProgramUser.session_key));
```
The value in the database does not change

I tried deleting and then adding again. The addition succeeds, but the deletion has no effect, so there end up being two records in the database.
Also, could a GetClaimAsync method be provided to get a single claim of a specified type?
Status: Issue closed |
FLAMESpl/io | 184909393 | Title: Administrator-worker communication
Question:
username_0: Since the administrator assigns work to his employees, they have to identify themselves somehow. The way I see it, everyone has an account with which they authorize their work in the application - that way the administrator can send a package of images to januszowixxxpl regardless of which machine he is working on: the files land on the server, and the worker can download them as soon as he starts working, and the same in the other direction.
Answers:
username_1: Well, generally I don't see any other option; alternatively the admin throws a package of images with a protocol onto the server as a "job up for grabs", and janusz can download it (which is logged right away) and start working, and later send an XML with the results.
username_2: Isn't that the same thing Łukasz just said?
The admin uploads files for a specific worker to the server -> the worker downloads -> the worker does the job -> the worker sends the XML back to the server -> the admin downloads and approves
username_3: I understood what Kamil wrote as open assignment, but we could also make it so the admin can assign something to a specific person
username_2: Well, pretty much. But a job for a specific person seems more logical to me.
The question is whether a worker is assigned to one admin, or can every admin assign/collect the work of every worker? Assuming, of course, more than one admin.
username_0: Well then, a worker can only see the jobs of his own administrator
username_3: "@username_0 has invited you to collaborate on the username_0/io repository"
The rascal is tempting me into some kind of collaboration........
username_3: hello?
username_3: I would go with a worker having one admin
username_3: and as for handing out tasks, I don't care whether it's free choice or assigned top-down
username_0: The admin should communicate with users through our service: the user has a section of the window where he can see whether a package has arrived for him.
If a worker belongs to one admin, then what is the procedure when another admin wants to claim him? Can he just do it, or does he wait until the first one releases him? Is there some super-admin?
username_3: maybe, I don't know, "submit a request" for a user transfer, and when the other admin agrees, the change can be made
username_0: A system in which the user can pick general tasks could be given a name and defined in the glossary.
username_2: A super-admin complicates things a bit. What would his duties be? Only assigning users to admins? A bit weak.
I agree with Mateusz's concept: a user has one admin, and the admin can hand a user over to another admin, or the other one can ask for him.
username_1: Generally, if this is supposed to be a system for a company, there should be one boss; let's assume it's a small company and they have one admin. Alternatively, if there are two admins, they have equal rights and can assign users to each other.
username_0: okay then
https://github.com/username_0/io/blob/72ba82132bb3b0641cb78c37c25a12eeb30bb747/Specyfikacja.md
btw, comments can also be posted on the pull request
username_3: @username_1 that works too
username_2: Łukasz, did you approve the merge of your changes into master? Because I don't see it on master... (unless something is off on my end)
username_1: Nope, it's in lszafirski-specs
username_0: I showed it so that you would comment on it first |
conan-io/conan-center-index | 657073885 | Title: [package] libiconv: compile errors with msvc
Question:
username_0: follow-up issue to #2211
the configure script detects the paths to system headers with `\\`, like
```
conanfile.py (libiconv/1.16): checking absolute name of <sys/stat.h>... "C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.18362.0\\ucrt\\sys/stat.h"
```
then the compilation fails with:
```
libiconv/1.15: rm -f alloca.h-t alloca.h && \
libiconv/1.15: { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
libiconv/1.15: cat ./alloca.in.h; \
libiconv/1.15: } > alloca.h-t && \
libiconv/1.15: mv -f alloca.h-t alloca.h
libiconv/1.15: rm -f errno.h-t errno.h && \
libiconv/1.15: { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
libiconv/1.15: sed -e 's|@''GUARD_PREFIX''@|GL|g' \
libiconv/1.15: -e 's|@''INCLUDE_NEXT''@|include|g' \
libiconv/1.15: -e 's|@''PRAGMA_SYSTEM_HEADER''@||g' \
libiconv/1.15: -e 's|@''PRAGMA_COLUMNS''@||g' \
libiconv/1.15: -e 's|@''NEXT_ERRNO_H''@|"C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.18362.0\\ucrt\\errno.h"|g' \
libiconv/1.15: -e 's|@''EMULTIHOP_HIDDEN''@|0|g' \
libiconv/1.15: -e 's|@''EMULTIHOP_VALUE''@||g' \
libiconv/1.15: -e 's|@''ENOLINK_HIDDEN''@|0|g' \
libiconv/1.15: -e 's|@''ENOLINK_VALUE''@||g' \
libiconv/1.15: -e 's|@''EOVERFLOW_HIDDEN''@|0|g' \
libiconv/1.15: -e 's|@''EOVERFLOW_VALUE''@||g' \
libiconv/1.15: < ./errno.in.h; \
libiconv/1.15: } > errno.h-t && \
libiconv/1.15: mv errno.h-t errno.h
libiconv/1.15: sed: -e expression #5, char 93: invalid reference \1 on `s' command's RHS
libiconv/1.15: make[1]: *** [Makefile:1367: errno.h] Error 1
libiconv/1.15: make[1]: Leaving directory
```
From my understanding, `sed` doesn't end up treating `\\10` as an escaped backslash followed by `10`; by the time the expression reaches sed, the first backslash no longer escapes the second, and sed interprets the remaining `\1` as a back-reference.
I must admit, I'm quite clueless how to continue with this issue.
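For what it's worth, the back-reference part can be reproduced in isolation: with GNU sed, a `\1` in the replacement with no capture group in the pattern fails with the same "invalid reference \1 on `s' command's RHS" error:
```
$ echo x | sed -e 's|x|\1|'
```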
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **libiconv/1.16**
* Operating System+version: **Windows10**
* Compiler+version: **msvc**
* Conan version: **conan 1.26.1**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
[settings]
arch=x86_64
arch_build=x86_64
build_type=RelWithDebInfo
compiler=Visual Studio
compiler.runtime=MD
compiler.version=16
os=Windows
os_build=Windows
[options]
[build_requires]
[env]
```
### Steps to reproduce (Include if Applicable)
building libiconv. |
NaturalHistoryMuseum/pyzbar | 350065025 | Title: Max resolution limit ?
Question:
username_0: Hi,
In my tests, the max input image resolution seems limited to 2000x2000. Where might this limit come from? I was hoping to feed 10K*10K images for QR code detection.
Thanks,
Answers:
username_1: I just tried an empty PNG of 4000 x 4000 pixels and did not get an error.
Could you post the output you see from Python, and share an example image? |
kubeflow/pipelines | 505009282 | Title: Store the information about the user who created each run
Question:
username_0: This will open a path to improving the experience for teams using the same cluster, by showing only the user's own runs in the UX by default.
Answers:
username_1: This sounds sweet. Is this a feature related to the multi-tenant scenario? @gaoning777
username_2: I think this is great. Something I couldn't figure out -- what can we use as a user identity?
username_3: Closing this one, as we have another one here:
https://github.com/kubeflow/pipelines/issues/2397
Status: Issue closed
username_0: That issue seems to be newer and has a bigger scope.
ant-design/pro-components | 1044398524 | Title: 🧐[Question] In ProTable, when select is used with the dependencies and request props, changing the depended-on option does not clear the selected values down the dependency chain
Question:
username_0: Before asking, please first read:
https://github.com/ryanhanwu/How-To-Ask-Questions-The-Smart-Way/blob/main/README-zh_CN.md
### 🧐 Problem description
<!--
In ProTable, when select is used with the dependencies prop, changing the depended-on option does not clear the selected values down the dependency chain:
Three levels: province, city, district. Selecting a province filters the cities via request; selecting a city filters the districts.
With province, city and district all selected, when the province is changed, how can the two downstream selections be cleared?
-->


### 💻 Sample code
<!--
If you have a solution, explain it clearly here
-->
### 🚑 Other information
<!--
Screenshots and other information can go here
-->
Answers:
username_1: Not supported at the moment; you'd have to customize it in renderForm
Status: Issue closed
|
architecture-building-systems/CityEnergyAnalyst | 322772203 | Title: BUG: Decentralized Supply system failed to execute
Question:
username_0: ```Executing: DecentralizedBuildings C:\Users\Gabriel\Documents\GitHub\cea-reference-case\reference-case-WTP\03_MIX_medium_density\baseline true false false false false false
Start Time: Mon May 14 15:50:27 2018
Running script DecentralizedBuildings...
Executing: C:\Users\Gabriel\Anaconda2\envs\cea\python.exe -u -m cea.interfaces.cli.cli decentralized --scenario C:\Users\Gabriel\Documents\GitHub\cea-reference-case\reference-case-WTP\03_MIX_medium_density\baseline --scuflag false --ahuscuflag false --aruscuflag false --aruflag false --ahuflag true --ahuaruflag false
B001
1.77913269892 seconds process time for the Substation Routine.
B001
3.05989384378 seconds process time for the Substation Routine.
B001
1.9117211275 seconds process time for the Substation Routine.
B001
3.3478572744 seconds process time for the Substation Routine.
B001
2.06216648403 seconds process time for the Substation Routine.
B001
3.14566595616 seconds process time for the Substation Routine.
B001
3.15573639269 seconds process time for the Substation Routine.
B001
B002
1.7859141524 seconds process time for the Substation Routine.
B002
3.19350481459 seconds process time for the Substation Routine.
B002
2.16128051171 seconds process time for the Substation Routine.
B002
3.27151742133 seconds process time for the Substation Routine.
B002
2.02946518926 seconds process time for the Substation Routine.
B002
3.21545409956 seconds process time for the Substation Routine.
B002
3.2507536873 seconds process time for the Substation Routine.
B002
B003
1.80699661037 seconds process time for the Substation Routine.
B003
3.13612976513 seconds process time for the Substation Routine.
B003
1.99167269896 seconds process time for the Substation Routine.
B003
3.16125024387 seconds process time for the Substation Routine.
B003
1.98932748885 seconds process time for the Substation Routine.
B003
3.15461356546 seconds process time for the Substation Routine.
B003
3.2793713772 seconds process time for the Substation Routine.
B003
B004
1.79421191495 seconds process time for the Substation Routine.
B004
3.08744997146 seconds process time for the Substation Routine.
B004
1.97817288039 seconds process time for the Substation Routine.
B004
3.0970288292 seconds process time for the Substation Routine.
B004
1.98834360191 seconds process time for the Substation Routine.
B004
[Truncated]
File "c:\users\gabriel\documents\github\ceaforarcgis\cea\technologies\boiler.py", line 166, in calc_Cop_boiler
boiler_eff = (eff_score * eff_of_T_return(T_return_C)) / 100.0
File "C:\Users\Gabriel\Anaconda2\envs\cea\lib\site-packages\scipy\interpolate\polyint.py", line 79, in __call__
y = self._evaluate(x)
File "C:\Users\Gabriel\Anaconda2\envs\cea\lib\site-packages\scipy\interpolate\interpolate.py", line 610, in _evaluate
below_bounds, above_bounds = self._check_bounds(x_new)
File "C:\Users\Gabriel\Anaconda2\envs\cea\lib\site-packages\scipy\interpolate\interpolate.py", line 639, in _check_bounds
raise ValueError("A value in x_new is below the interpolation "
ValueError: A value in x_new is below the interpolation range.
Traceback (most recent call last):
File "C:\Users\Gabriel\AppData\Roaming\ESRI\Desktop10.5\ArcToolbox\My Toolboxes\cea\interfaces\arcgis\arcgishelper.py", line 68, in execute
run_cli(self.cea_tool, **kwargs)
File "C:\Users\Gabriel\AppData\Roaming\ESRI\Desktop10.5\ArcToolbox\My Toolboxes\cea\interfaces\arcgis\arcgishelper.py", line 155, in run_cli
raise Exception('Tool did not run successfully')
Exception: Tool did not run successfully
Failed to execute (DecentralizedBuildings).
Failed at Mon May 14 18:57:04 2018 (Elapsed Time: 3 hours 6 minutes 36 seconds)
```
Answers:
username_0: Assigning this to you @username_1 .
username_0: duplicate of #1300
username_1: fixed this in PR of cost-analysis-plots #1358
Status: Issue closed
|
postcss/autoprefixer | 655179424 | Title: env option does not work
Question:
username_0: I am working in a monorepo with the following root configuration:
`.browserslistrc`
```
[legacy]
last 2 versions
not dead
[modern]
last 2 Chrome versions
last 2 ChromeAndroid versions
last 2 Firefox versions
last 2 FirefoxAndroid versions
last 2 Safari versions
last 2 iOS versions
last 2 Edge versions
[node]
node 12
```
`babel.config.js`
```
module.exports = {
presets: [
['@babel/preset-env', {
browserslistEnv: 'node',
corejs: 3,
useBuiltIns: 'usage',
}],
'@babel/preset-react',
'@babel/preset-typescript',
],
};
```
Then in `packages/app` I have a `webpack.config.js` file that uses `autoprefixer` as a plugin for the `postcss-loader` and also uses the `babel-loader`. This is what it looks like:
```
module.exports = {
// ...
module: {
rules: [
{
loader: 'babel-loader',
options: {
cacheDirectory: true,
overrides: [{
presets: [
['@babel/preset-env', {
browserslistEnv: 'legacy',
corejs: 3,
useBuiltIns: 'usage',
}],
],
}],
rootMode: 'upward',
},
test: /\.(jsx?|tsx?)$/,
type: 'javascript/auto',
[Truncated]
options: {
plugins: [
autoprefixer({ env: 'legacy' }),
],
},
{
loader: 'sass-loader',
},
],
test: /\.scss$/,
},
],
},
// ...
};
```
When I run this config, `babel-loader` transpiles the code as expected: it detects all the targets specified in the `legacy` environment in the `.browserslistrc` at the root of the monorepo. However, `autoprefixer` does not seem to be able to detect any browser configuration, because I tried `console.log(autoprefixer({ env: 'legacy' }).info())` and the output was `No browsers selected`. I also tried moving the `.browserslistrc` to `packages/app`, but the result was the same.
Is this a bug? Or am I doing something wrong?
Answers:
username_1: `npx autoprefixer --info` could be helpful in debugging this issue, but it is missing an `--env` option.
Can I ask you to send a PR adding an `--env` option to [`bin/autoprefixer`](https://github.com/postcss/autoprefixer/blob/master/bin/autoprefixer)?
username_1: Can you try to run
```
npx browserslist
npx browserslist --env="legacy"
npm ls | grep autoprefixer
npm ls | grep browserslist
```
username_0: * `npx browserslist` outputs nothing.
* `npx browserslist --env="legacy"` outputs this:
```
and_chr 81
and_ff 68
and_qq 10.4
and_uc 12.12
android 81
baidu 7.12
chrome 83
chrome 81
edge 83
edge 81
firefox 77
firefox 76
ie 11
ios_saf 13.4-13.5
ios_saf 13.3
kaios 2.5
op_mini all
op_mob 46
opera 68
opera 67
safari 13.1
safari 13
samsung 11.1-11.2
samsung 10.1
```
* `npm ls | grep autoprefixer`: [email protected]
* `npm ls | grep browserslist`: [email protected] deduped
username_1: Can you show the full output of `npm ls | grep autoprefixer`? I assume that you can have multiple versions of Autoprefixer.
username_1: OK. Can you find this file in your `node_modules` https://github.com/postcss/autoprefixer/blob/21becd444a2fe65648ec994c204884ace4c0dc82/lib/browsers.js#L55
and add this debug code:
```diff
+ console.log(opts)
+ console.log((new Error('stack')).stack)
return browserslist(requirements, opts)
```
username_0: The first `console.log` looks like this:
```
{
ignoreUnknownVersions: undefined,
stats: undefined,
path: '/Users/me/Repositories/project/packages/app',
env: undefined
}
Error: stack
at Browsers.parse (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:65:18)
at new Browsers (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:46:26)
at loadPrefixes (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:97:20)
at Function.info (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:129:17)
at postCssLoader (/Users/me/Repositories/project/tool/webpack/src/helpers/loaderHelper.js:138:38)
at createWebClientConfig (/Users/me/Repositories/project/tool/webpack/src/helpers/configHelper.js:168:21)
at Object.<anonymous> (/Users/me/Repositories/project/packages/app/webpack.config.es6.js:7:22)
at Module._compile (/Users/me/Repositories/project/node_modules/v8-compile-cache/v8-compile-cache.js:194:30)
at Module._compile (/Users/me/Repositories/project/node_modules/pirates/lib/index.js:99:24)
at Module._extensions..js (internal/modules/cjs/loader.js:995:10)
at Object.newLoader [as .js] (/Users/me/Repositories/project/node_modules/pirates/lib/index.js:104:7)
at Module.load (internal/modules/cjs/loader.js:815:32)
at Function.Module._load (internal/modules/cjs/loader.js:727:14)
at Module.require (internal/modules/cjs/loader.js:852:19)
at require (/Users/me/Repositories/project/node_modules/v8-compile-cache/v8-compile-cache.js:161:20)
at Object.<anonymous> (/Users/me/Repositories/project/packages/app/webpack.config.js:6:18)
at Module._compile (/Users/me/Repositories/project/node_modules/v8-compile-cache/v8-compile-cache.js:194:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:995:10)
at Module.load (internal/modules/cjs/loader.js:815:32)
at Function.Module._load (internal/modules/cjs/loader.js:727:14)
at Module.require (internal/modules/cjs/loader.js:852:19)
at require (/Users/me/Repositories/project/node_modules/v8-compile-cache/v8-compile-cache.js:161:20)
at WEBPACK_OPTIONS (/Users/me/Repositories/project/node_modules/webpack-cli/bin/utils/convert-argv.js:114:13)
at requireConfig (/Users/me/Repositories/project/node_modules/webpack-cli/bin/utils/convert-argv.js:116:6)
at /Users/me/Repositories/project/node_modules/webpack-cli/bin/utils/convert-argv.js:123:17
at Array.forEach (<anonymous>)
at module.exports (/Users/me/Repositories/project/node_modules/webpack-cli/bin/utils/convert-argv.js:121:15)
at /Users/me/Repositories/project/node_modules/webpack-cli/bin/cli.js:71:45
at Object.parse (/Users/me/Repositories/project/node_modules/yargs/yargs.js:576:18)
at /Users/me/Repositories/project/node_modules/webpack-cli/bin/cli.js:49:8
```
From this point onwards all `console.log` seem to look like this one:
```
{
ignoreUnknownVersions: undefined,
stats: undefined,
path: '/Users/me/Repositories/project/packages/media/src/components/Image/Image.scss',
env: 'legacy'
}
Error: stack
at Browsers.parse (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:65:18)
at new Browsers (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:46:26)
at loadPrefixes (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:97:20)
at plugin (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:108:20)
at LazyResult.run (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:352:14)
at LazyResult.asyncTick (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:280:26)
at /Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:317:12
at new Promise (<anonymous>)
at LazyResult.async (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:314:23)
at LazyResult.then (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:201:17)
at /Users/me/Repositories/project/node_modules/postcss-loader/src/index.js:142:8
```
The complete list is quite long. Do you want it all or is this enough?
username_1: Why do you get different `console.log` outputs? Maybe you have two `postcss-loader` entries in your webpack config?
Check out the stacktrace of the first `console.log`.
username_1: I am closing this issue, since `opts.env` works in Autoprefixer. The problem is how and where `autoprefixer()` is called.
Status: Issue closed
username_0: Do you mean this is a problem with `postcss-loader`?
username_1: Show the full webpack config.
username_0: I am not allowed to share the full webpack config. But I can share the contents of `module.rules`:
```js
{
"module": {
"rules": [
{
"loader": "babel-loader",
"options": {
"cacheDirectory": true,
"overrides": [
{
"presets": [
[
"@babel/preset-env",
{
"corejs": 3,
"browserslistEnv": "legacy",
"useBuiltIns": "usage"
}
]
]
}
],
"rootMode": "upward"
},
"test": /\.ext/,
"type": "javascript/auto"
},
{
"loader": "file-loader",
"options": {
"emitFile": true,
"name": "some/path"
},
"test": /\.ext/,
},
{
"test": /\.ext/,
"use": [
{
"loader": "file-loader",
"options": {
"emitFile": true,
"name": "some/path"
}
},
{
"loader": "img-loader",
"options": {
"plugins": [
svgo({
plugins: [
{ removeDesc: true },
],
}),
]
}
}
]
[Truncated]
ignoreUnknownVersions: undefined,
stats: undefined,
path: '/Users/me/Repositories/project/packages/media/src/components/Image/Image.scss',
env: 'legacy'
}
Error: stack
at Browsers.parse (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:65:18)
at new Browsers (/Users/me/Repositories/project/node_modules/autoprefixer/lib/browsers.js:46:26)
at loadPrefixes (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:97:20)
at plugin (/Users/me/Repositories/project/node_modules/autoprefixer/lib/autoprefixer.js:108:20)
at LazyResult.run (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:352:14)
at LazyResult.asyncTick (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:280:26)
at /Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:317:12
at new Promise (<anonymous>)
at LazyResult.async (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:314:23)
at LazyResult.then (/Users/me/Repositories/project/node_modules/postcss/lib/lazy-result.es6:201:17)
at /Users/me/Repositories/project/node_modules/postcss-loader/src/index.js:142:8
```
Does this help?
username_1: Can you try to make these 2 changes:
`node_modules/autoprefixer/lib/autoprefixer.js`:
```diff
- let brwlstOpts = {
- ignoreUnknownVersions: options.ignoreUnknownVersions,
- stats: options.stats
- }
+ let brwlstOpts = {
+ ignoreUnknownVersions: options.ignoreUnknownVersions,
+ stats: options.stats,
+ env: options.env
+ }
```
`node_modules/autoprefixer/lib/browsers.js`:
```diff
opts.path = this.options.from;
- opts.env = this.options.env;
return browserslist(requirements, opts);
```
username_0: Yes, that works! Then autoprefixer detects my configuration:
```
Browsers:
Chrome for Android: 81
Firefox for Android: 68
And_qq: 10.4
UC for Android: 12.12
Android: 81
Baidu: 7.12
Chrome: 83, 81
Edge: 83, 81
Firefox: 77, 76
IE: 11
iOS: 13.4-13.5, 13.3
KaiOS: 2.5
Opera Mini: all
Opera Mobile: 46
Opera: 68, 67
Safari: 13.1, 13
Samsung: 11.1-11.2, 10.1
```
username_1: The fix was released in 9.8.6
You can help us by retweeting about our Open Collective:
https://twitter.com/PostCSS/status/1288937473276444672
Status: Issue closed
username_0: Awesome! Thank you for looking into it so fast! |
Chlumsky/msdfgen | 775299935 | Title: SVG file
Question:
username_0: It does not seem to work if one curve is created inside of the other as a hole. A shape read from a font that looks like an 'O' is perfectly fine, but if a similar shape is read as a path from an SVG file, it looks like a filled circle. Is that a problem with winding?
Thanks
Answers:
username_1: Can you post the file?
username_0: Sure (hope this works)
[svgtest4.zip](https://github.com/username_1/msdfgen/files/5747533/svgtest4.zip)
username_1: Yes, it seems like a winding issue. Also, there is some weird stuff in the top left corner. In any case, Skia interprets the shape in this format as filled, so I would lean towards not a bug.
username_0: I made this shape as a test in Affinity Designer (using two rounded rectangles and a boolean operation) and exported it as SVG. In Designer and in several different browsers, the inner curve is interpreted as a hole inside the outer curve. Maybe they don't care about winding.
username_0: I think I got it. This svg file is saved using fill-rule=evenodd, and every other contour has to be reversed to get the hole inside the outer contour.
Status: Issue closed
|
franzenzenhofer/lsd | 103857912 | Title: Ball Slows down
Question:
username_0: I couldn't nail down when/why exactly it happens. But sometimes the ball starts to become really slow and makes curved movements even when it doesn't touch any line.
Answers:
username_1: android chrome browser?
username_1: about the becoming slow ... hmmm .... mostly I think / hope this is a drawing issue. Currently I draw everything on an invisible virtual canvas, then copy that one to the visible canvas (here https://github.com/username_1/lsd/blob/gh-pages/main.coffee#L65-L67 )
but well, these are still a lot of full-window-size draws.
To optimize drawing performance I would need a layered drawing approach, probably something like this (sketch below):
- virtual canvas for the lines
- visible canvas for the lines
- line virtual canvas only draws to line visible canvas on line update (user interaction)
- visible dot canvas (transparent background)
- visible dot canvas only does partial redraws of the canvas (where the ball was, where it now is)
- visible dot canvas is on top of visible line canvas
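A rough sketch of that layering in plain JS (element ids and draw helpers are assumed):
```javascript
const lineCanvas = document.getElementById('lines');   // visible line layer (bottom)
const dotCanvas  = document.getElementById('dot');     // visible dot layer (top, transparent)
const lineBuffer = document.createElement('canvas');   // virtual canvas for the lines
lineBuffer.width  = lineCanvas.width;
lineBuffer.height = lineCanvas.height;

function onLineUpdate() {                              // user interaction only
  drawLines(lineBuffer.getContext('2d'));              // assumed helper
  lineCanvas.getContext('2d').drawImage(lineBuffer, 0, 0);
}

function tick(dot) {                                   // every frame: partial redraw only
  const ctx = dotCanvas.getContext('2d');
  ctx.clearRect(dot.prevX - 8, dot.prevY - 8, 16, 16); // where the ball was
  drawDot(ctx, dot.x, dot.y);                          // where it now is (assumed helper)
}
```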
username_2: I can migrate it to a 2d canvas library but I don't want to ruin your pure coffee code. :D
username_1: @username_2 well, not all forks must get merged
I would love to keep my clean code as I actually know what happens, but I would love to see if the other libs can make this faster; then I would try to find out what they did.
I now have three canvases:
- visible layer 0 - full canvas with lines and text - only gets updated when lines are drawn
- visible layer 1, on top of layer 0 - the dot canvas where only a 16x16 px area gets cleared and redrawn with every tick
- virtual canvas - all changes to layer 0 get drawn there first, then copied over (supposed to be faster)
sadly drawing performance on my Samsung 3 Chrome Android still sucks
matrix-org/matrix-appservice-irc | 1066099325 | Title: Matrix -> IRC message repeating
Question:
username_0: **Describe the bug**
On hackint we are observing Matrix -> IRC messages being repeated. The cause appears to be that the AS receives the same event multiple times, which in turn seems to stem from intermittent network connectivity issues within a couple of HSes. Still, messages shouldn't arrive multiple times on IRC when the origin HS is unable to reach the HS of the bridge.
Messages are basically "carried" through other participants sending messages to the same channel. This is apparently causing a fetch from another HS.
**To Reproduce**
Steps to reproduce the behavior:
1. Create three HSs & setup matrix-appservice-irc on one of them
2. Create two uses on the two HSs that aren't running the appservice. Join them into the same bridged channel.
3. Interrupt federation (iptables, nginx firewall, simulate a network segmentation ...) between one of the user HSs with the appservice hosting HS.
4. Send message in the common channel. Message from the HS that has limited federation due to network issues should now *only* arrive on IRC after the other (properly federated) user is sending messages.
5. After some time (minutes?) restore the federation and observe messages arriving on the IRC side again.
**Expected behavior**
Matrix events should only be relayed to IRC once.
**Logs**
At first we check if there are duplicate events in the log of the appservice:
```
# journalctl -u matrix-appservice-irc | grep onMessage | grep id= | cut -f15 -d' ' | sort | uniq -c
1 id=$1AGi-D-qxV-OTzAB1kH8FrqiW7x6rqZyuyI_HR2k0MY
1 id=$2qN99Uc7aPQifRQWT70nM10zwr4EGzIB9Eui2Bj3gDc
1 id=$2yaPjcUDs4myQB6tukpLsCFxvlL7m2Tkgrz_-KJfsoQ
1 id=$6yuMuc3tWWkcxlMa2hGsvNPo1OlvumIJkrOJygEz1go
1 id=$8gpvxIHxkfavKGterQuwwnC0JH-YZwHEUnRfLy1R0MM
1 id=$aJBolPqI6fokf-ESAgDc0fvgOl_KTMgOuDphlGitMJ0
2 id=$AxdBy5WFl_uV3WIoIr1jlnJoBu9w3pBDtUPRhZnn2EQ
1 id=$CFcGAtzc8qfv9G6PGZ9oU9PgwqcHp8sTeqiv_W1S7ew
1 id=$DAfnaEQCRPAxRH3Z9WoPNxkRHV_88Xpv5qSul5l9jI0
1 id=$DErv6S4QcmQOTnNNge3mQD6c0voRcDPCw5Jhk1hDpbQ
1 id=$EXESNfbp5ilmj-IzBpZdQXOPIl-55iAVVToXvmhTP1Y
1 id=$Faa2DFWByNiYIqaIUVLz9V8pk_tys9a07NyVjy0WD7Q
1 id=$I9V63DX7fBOt-z75sqdlYjSHp-UwERiisdEMuG-trVU
1 id=$K0sfiWHQktAKPsEYcTfxmtHNkqgQ9jc_luj4jDFpiBE
1 id=$-kOgihic4DkuDcsj3PtGY7odnArwE-t8FJkfbyS41GU
1 id=$kVQZ9RJn7Gm9gTNRdqB7Xqd-JnGczGWmOcRLPhtDRzU
1 id=$Kzd96yAJv9WxJCblm1dwFGjmytShEkL0h9tvF0xKruI
1 id=$mQIt86TXn1CmNKXwnnoRsTqWZcqPRWQbETGO1cGnI3s
1 id=$MUa_v_xhIM7xtg6maY8uaSl8yZv44WD7jr80RBDj5GA
1 id=$NliIhYnUUkYXbGCsLZUYFPKCIGln09kiG0P77HvrzUs
1 id=$Opi2YbMITksfMEZMN7tS9KCoz7c14SFyY_wJdXIcZzQ
1 id=$p0RkyIhEKe_cZMmzPo9y56v00941qAUSYZDWBptOb-8
1 id=$P4pichLthO9Cn5o1j4EAPdTbFHMp1WxCGMPTgwF5txM
1 id=$_QBMWgZXqEnj765X2azDW4i3ufDoX_bzADEXQ2St-sQ
2 id=$QMwOU0SjOd50xyQORZdi-PKvlaWJZ3VZ46urYxZbPHI
1 id=$ShUDKkMK8kAfst0wgQX6kjm_gHdzhVRGtjy4LMGHaGk
2 id=$STkFEq3U7ydX-58NcAQZLfhGOY6yJUoj3sd2vL1TDBI
1 id=$tGFlAOqs_GSTPiHr1Xy7m-okHzwl2e2Z0IgKSuZr6yI
1 id=$U8dV35zERU8UJL0dsXUc8fgYrOLRgJtWZUm1FiJd-98
1 id=$Uwmt3UQ7fZTdiJSKhIL9o6pNt2bfxF2L98jCuXA71uo
1 id=$uxFJAr0awkpZ8tgPLkWRgTUdvMjTAEdvifI95LWBpJA
1 id=$V-EuLcLaNmpyoMTFKN9C_I7zUdJfKSfwiDn0dBUKE8M
1 id=$WV9gdbxl3xTaXIXrEMAISsGntg9RMIyroL5nar5TBZc
1 id=$X91K4TdHvJS6ProS3JyAeI-blc3aJJxwIx1GnFtTACA
1 id=$xehJ0vXTCypQnpXjGqXn1CNGaC9yRqYJ1xqUID_M_wM
1 id=$zpiNt4TuJrxnNk97rabXRKj9w2hqL8eTgVADs1lX6qE
2 id=$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
```
[Truncated]
Nov 29 14:38:24 matrix matrix-appservice-irc[244522]: 2021-11-29 14:38:24 INFO:req [bxyukex6v1400] [[M->I]] onMessage: m.room.message usr=@XXXX:XXXX.XXX rm=!ejeHaJxvRKPeZFKrdQ:hackint.org id=$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
Nov 29 14:56:29 matrix matrix-appservice-irc[244522]: 2021-11-29 14:56:29 INFO:req [1y4xzqvlkr5s0] [[M->I]] onMessage: m.room.message usr=@XXXX:XXXX.XXX rm=!ejeHaJxvRKPeZFKrdQ:hackint.org id=$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
```
Aha, it is indeed the same message being retrieved twice.
Looking at the synapse log for the same message id gives us this:
```
Nov 29 14:38:24 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-436-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI] Acquiring room lock to fetch 1 missing prev_events: ['$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA']
Nov 29 14:38:24 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-436-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI-$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA] Processing pulled event <FrozenEventV3 event_id=$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA, type=m.room.message, state_key=None, outlier=False>
Nov 29 14:56:29 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-438-$Yyt0bDIcRgCNhrr2LogbmkJK0nC4TUlsGuN_TvPqMzc-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI] Event $TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI is missing prev_events ['$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA']: calculating state for a backwards extremity
Nov 29 14:56:29 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-438-$Yyt0bDIcRgCNhrr2LogbmkJK0nC4TUlsGuN_TvPqMzc-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI] Requesting state after missing prev_event $ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
Nov 29 14:56:29 matrix synapse[232133]: synapse.http.matrixfederationclient: [_process_incoming_pdus_in_room_inner-438-$Yyt0bDIcRgCNhrr2LogbmkJK0nC4TUlsGuN_TvPqMzc-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI-$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA] {GET-O-10903} [xxxx.io] Completed request: 200 OK in 0.04 secs, got 32556 bytes - GET matrix://xxxx.io/_matrix/federation/v1/state_ids/%21ejeHaJxvRKPeZFKrdQ%3Ahackint.org?event_id=%24ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
Nov 29 14:56:29 matrix synapse[232133]: synapse.http.matrixfederationclient: [_process_incoming_pdus_in_room_inner-438-$Y<KEY>-$TXNfz4jHkIwel0KBfyawn9DSERdl5TYtG65E7Y35mjI-$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA-$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA] {GET-O-10904} [xxxx.io] Completed request: 200 OK in 0.02 secs, got 899 bytes - GET matrix://xxxx.io/_matrix/federation/v1/event/%24ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA
Nov 29 14:56:29 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-438-$<KEY>kVBA] Fetched 1 events of 1 requested
Nov 29 14:56:29 matrix synapse[232133]: synapse.handlers.federation_event: [_process_incoming_pdus_in_room_inner-438-$<KEY>-$<KEY>BA] Persisting 1 of 1 remaining outliers: ['$ZYcQIWlhmF66arc82TM8TfDe17LqxRPE2tVzMZYkVBA']
```
**Additional context**
I do have more logs that I could share privately. Contact me on matrix `@andi:kack.it` if you need them.
Answers:
username_1: This sounds like a Synapse bug to me. The appservice should **never** see the same event twice over the appservice transaction API, regardless of whether the event was federated or not. If you think you've got logs of this happening, I would suggest opening a bug against https://github.com/matrix-org/synapse/.
While the bridge *could* hold previously handled events in memory, it seems superfluous as the homeserver should not be sending us duplicates in any case. |
pomelovico/keep | 376402889 | Title: Canvas study notes (2) - animation
Question:
username_0: ## 动画循环
动画是一种持续的循环,由于浏览器是单线程的,所以不能通过一个`while(true){}`的形式来达到持续执行动画代码的目的,否则将会使页面失去响应,取而代之的是让浏览器有一个休息的时间,每个一小段时间来执行一次动画
实现循环的方式大致有两类:
1. **setTimeout && setInterval**: 这两种浏览器提供的定时器函数都不是特别精准,这依赖于当前的事件队列,此外,重要的是,使用定时器函数来让浏览器定时执行动画,是一种**命令式的操作,它要求浏览器这么做**,所以绘制动画的开发人员必须知道绘制下一帧动画精准时机,也就是间隔时间,而这个间隔时间往往会因浏览器的不同而略有差别。
2. **`window.requesAnimationFrame()`** 方法是W3C标准中的全局方法,通过该方法可以向浏览器注册动画绘制函数,浏览器在绘制下一帧(浏览器在实时的绘制页面,就算你对页面没有任何操作,他也在不停的刷新和绘制当前页面)的时候会自动调用注册的函数,从而达到绘制动画的效果,该方法的优点在于,是**浏览器主动通知用户绘制**,绘制的时机交给了更有经验的浏览器,用户更加专注于绘制函数本身
### requestAnimationFrame()
所以在现代浏览器的动画绘制中,都应当使用`requestAnimationFrame()`这个API,与之对应的是`cancleRequestAnimationFrame(handle)`方法,其中handle参数是一个句柄,由`requestAnimationFrame`返回,可用于取消回调函数的执行.
动画函数调用规则:
- 最大频率为每秒钟60次。
- 当动画所在分页不可见时,浏览器将不再调用动画函数(这点很重要,页面不可见时是不会执行动画的,节省性能),直到再次可见时再继续执行。
- 调用回调函数的速度不会高于渲染页面的速度。
#### 参数
`requestAnimationFrame(callback,optional Element)`中,callback回调函数的参数是一个time,表示此次动画执行的时间,由于有些浏览器的bug,time可能为undefined,所以需要手动更正。第二个参数是一个可选的HTML元素引用,表示,当该HTML元素不可见时,该方法也就不会再调用传入的动画回调函数了。
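A minimal loop sketch (`update` and `draw` are placeholders), including the undefined-time workaround mentioned above:
```javascript
let last = 0;
function frame(time) {
  if (time === undefined) time = +new Date(); // work around buggy browsers
  const delta = time - last;                  // ms since the previous frame
  last = time;
  update(delta);                              // assumed: advance animation state
  draw();                                     // assumed: repaint the canvas
  window.requestAnimationFrame(frame);
}
window.requestAnimationFrame(frame);
```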
Answers:
username_0: ## Calculating the frame rate (fps)
The frame rate of an animation is **the number of frames played per second**. Inside the callback registered with `window.requestAnimationFrame()`, subtract the time of the previous frame from the current time to get the delta between two frames; its reciprocal is the frame rate.
```javascript
let lastTime = 0;
function fps() {
    const now = +new Date(),
          f = 1000 / (now - lastTime); // ms between frames, inverted → frames per second
    lastTime = now;
    return f;
}
```
The reason frame rate matters is that in a complex animation, some parts may not need to run at such a high frequency, so different tasks can be scheduled at different frame rates. Roughly: use separate variables to record when each task last ran, take the difference against the current time, and draw each task only when its own interval has elapsed - this avoids drawing too much in a single pass and hurting performance.
A personal **question** here: if different drawing tasks run at different frame rates as described above, the slow-rate content won't be drawn on every `requestAnimationFrame()` callback; yet given how canvas works and what animation requires, each pass clears the previous contents of the canvas, so wouldn't anything not redrawn in this pass simply be erased and left blank? My understanding is that slow-rate drawing must still stay above the frame rate the human eye accepts as animation - roughly 24 fps - to preserve the feeling of continuity (persistence of vision).
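One way to reconcile this - a sketch, with `ctx`, `canvas`, `updateExpensiveState`, and `drawEverything` assumed to be in scope - is to *update* the slow task at its own rate but still *draw* its latest result every frame, so nothing disappears when the canvas is cleared:
```javascript
let lastSlowUpdate = 0;
function frame(time) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (time - lastSlowUpdate > 100) { // recompute ≈10 times per second
    updateExpensiveState();          // heavy work, off the per-frame path
    lastSlowUpdate = time;
  }
  drawEverything(ctx);               // cheap redraw of all content, every frame
  window.requestAnimationFrame(frame);
}
window.requestAnimationFrame(frame);
```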
username_0: ## Restoring the animation background
When drawing the next frame of an animation, canvas has no concept of layers - whatever is drawn later simply covers what was drawn earlier - so you need to decide how to handle the background (the unchanging content). There are several approaches:
- Erase everything and redraw it all (favorable when both the background and the animated object are simple)
- Redraw only the region whose content changed
- Copy the changed portion of the background image onto the screen from an **offscreen buffer**
With a complex background, redrawing the whole background every time takes too long.
### Handling the background with a clipping region (`context.clip`)
Using a clipping region, the background area that needs repainting can be reduced to the region occupied by the animated element in the previous frame - that is, **restoring the background only where the object was in the last frame**. This approach still calls the method that draws the entire background, but under the constraint of `context.clip()` the drawing only affects the clipped path region. The steps:
1. Call `context.save()` to save the drawing state (this is required: clip is not reversible, the region can only shrink)
2. Start a new path with `context.beginPath()`
3. Build the new path from the position of the animated object in the previous frame
4. **`context.clip()`** - reduce the drawing region to the path containing the previous frame's object (it doesn't have to be exactly the object's path; any circle or rectangle that contains the object's previous region works)
5. Erase the contents of the clipping region
6. Call the background drawing method (which now only affects the clipping region), effectively covering the previous frame's object with background
7. Restore the drawing state with `context.restore()`
This approach feels a bit forced to me: although the background painting is confined to the clipping region and the painted area looks smaller, **the method that draws the background is still the one that draws the whole background**, which is hardly different from erase-and-redraw - unless the background drawing function is optimized to draw only what the current path needs.
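A minimal sketch of the seven steps above (a `drawBackground(ctx)` helper and the ball's last-frame position are assumed):
```javascript
function restoreBackground(ctx, ball) {
  const r = ball.radius + 1;
  ctx.save();                                          // 1. save state (clip is not reversible)
  ctx.beginPath();                                     // 2. start a new path
  ctx.arc(ball.lastX, ball.lastY, r, 0, Math.PI * 2);  // 3. path around last frame's ball
  ctx.clip();                                          // 4. restrict drawing to that region
  ctx.clearRect(ball.lastX - r, ball.lastY - r,        // 5. erase inside the clip
                2 * r, 2 * r);
  drawBackground(ctx);                                 // 6. full-background call, clipped
  ctx.restore();                                       // 7. restore the drawing state
}
```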
### Copying offscreen content to restore the background (drawImage)
Set up an offscreen canvas and draw the entire background onto it. Then apply the same clipping procedure as above, except that in step 6 nothing is redrawn: the background for that region is taken from the offscreen canvas and "patched" directly onto the current canvas - an image-block copy drawn with `drawImage()` and the like.
Generally, image-block copying is faster than the clipping-region approach, at the cost of an extra offscreen canvas and more memory - though personally I feel an offscreen canvas makes the whole drawing process much simpler.
username_0: ## Double-buffered canvas drawing
During animation, erasing can cause a momentary blank that makes the animation look slightly flickery. Double buffering avoids this: draw all content on an offscreen canvas, then copy the entire offscreen canvas to the visible canvas in one go.
However, modern browsers use double buffering by default, so there is no need to implement it manually.
username_0: ## Time-based motion
Devices differ in performance and refresh rate, so to guarantee a smooth animation experience on every device, a uniform rate calculation is needed. To make the animation run at a stable speed unaffected by the frame rate, compute from the object's **velocity** the number of pixels it moves between two frames:
**pixels moved per frame = (pixels/second) × (seconds/frame) = object speed / fps**
where fps (the number of frames played per second, computed as shown earlier) is the reciprocal of the time difference between two frames.
Knowing how many pixels the object moves per frame, the object's exact coordinates in the next frame follow naturally.
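A minimal sketch (the `ball` fields are assumed; velocity is in px/s):
```javascript
// pixels this frame = speed (px/s) × elapsed seconds since the last frame
function advance(ball, now) {
  const dtSeconds = (now - ball.lastTime) / 1000;
  ball.lastTime = now;
  ball.x += ball.vx * dtSeconds; // vx, vy in pixels per second
  ball.y += ball.vy * dtSeconds; // frame rate no longer affects speed
}
```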
## Timed animation
## Background scrolling
## Parallax animation
To create a three-dimensional depth effect, objects at different distances should move at different speeds, which produces the parallax effect |
hs-web/hsweb-framework | 330577025 | Title: After enabling access logging with @EnableAccessLogger on the startup class, org.springframework.boot.autoconfigure.web.BasicErrorController is intercepted as well
Question:
username_0: 1. 问题描述:
启动类开启访问日志 @EnableAccessLogger后,每次各请求会有有个
{"method":"getErrorPath()","target":"org.springframework.boot.autoconfigure.web.BasicErrorController"
被拦截
2. 复现步骤:
略
3. 日志内容:
2018-06-08 16:55:48.606[http-nio-1882-exec-4]INFO access-logger -{"method":"getErrorPath()","target":"org.springframework.boot.autoconfigure.web.BasicErrorController","parameters":{},"httpHeaders":{"content-type":"application/json","cache-control":"no-cache","postman-token":"<KEY>","user-agent":"PostmanRuntime/7.1.1","accept":"*/*","host":"localhost:1882","accept-encoding":"gzip, deflate","content-length":"416","connection":"keep-alive"},"httpMethod":"POST","ip":"0:0:0:0:0:0:0:1","url":"http://localhost:1882/addressBook/create","response":"/error","requestTime":1528448147883,"responseTime":1528448147884,"id":"261b59891e589075c5996bd980cbf9f4","useTime":1}
Answers:
username_1: Fixed; please use version `3.0.0-RC-SNAPSHOT`
Status: Issue closed
|
unitb/literate-unitb-complete | 238358815 | Title: Benchmark the Haskell z3 bindings
Question:
username_0: From literate-unitb created by [username_0](https://github.com/username_0) : unitb/literate-unitb#18
Originally reported by: **<NAME> (Bitbucket: [cipher2048](https://bitbucket.org/cipher2048), GitHub: Unknown)**
---
... to avoid serializing proof obligations
---
- Bitbucket: https://bitbucket.org/literateunitb/literate-unitb/issue/18 |
OGGM/oggm | 371527932 | Title: RGI60-17.04865 raises non-informative error
Question:
username_0: As reported on slack:
```
Traceback (most recent call last):
File "/afast/dparkes/oggm/oggm/workflow.py", line 86, in __call__
return self.call_func(gdir, **self.out_kwargs)
File "/afast/dparkes/oggm/oggm/utils.py", line 2586, in _entity_task
out = task_func(gdir, **kwargs)
File "/afast/dparkes/oggm/oggm/core/climate.py", line 961, in apparent_mb
centerlines.catchment_width_geom(gdir, reset=True)
File "/afast/dparkes/oggm/oggm/utils.py", line 2586, in _entity_task
out = task_func(gdir, **kwargs)
File "/afast/dparkes/oggm/oggm/core/centerlines.py", line 1712, in catchment_width_geom
assert len(fil_widths) == n
TypeError: object of type 'numpy.float64' has no len()
```
We should have a look at what's going on with this glacier, and at the very least send a more informative error message |
bloom-housing/bloom | 696379474 | Title: Phone Number Validation Bug
Question:
username_0: On /applications/contact/address and /applications/contact/alternate-contact-contact, after you've entered a phone number, filled out the rest of the page, and then clicked Next at the bottom, the last 4 digits of the phone number get stripped away and there's an error.
Answers:
username_1: Fixed in #645
Status: Issue closed
|
FasterXML/jackson-databind | 65321270 | Title: JsonCreator not working on Classes with JsonTypeInfo when input string is missing the type field
Question:
username_0: ```java
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", include = JsonTypeInfo.As.PROPERTY)
@JsonSubTypes({
@Type(value = IntegerLiteral.class, name = "integer"),
@Type(value = StringLiteral.class, name = "string"),
})
public abstract class Literal {
public abstract Object getValue();
@JsonCreator
public static IntegerLiteral createInt(@JsonProperty("value") long lVal) {
return new IntegerLiteral(lVal);
}
}
public class IntegerLiteral extends Literal {
protected long value;
@Override
public Long getValue() {
return value;
}
public IntegerLiteral(long value) {
this.value = value;
}
}
public class Field {
private Literal value;
public Literal getValue() { return value; }
public void setValue(Literal val) { value = val; }
}
```
My goal is to support both of the following JSON forms to represent an instance of the Field class:
{ "value": 4 } and
{ "value": { "type": "integer", "value": 4 } }
The second one works, but the first one fails with the exception below.
I CANNOT remove the JsonSubTypes or JsonTypeInfo annotations on the Literal class - that's my constraint. I googled a lot to see how I can make Jackson give my JsonCreator precedence over JsonTypeInfo - but to no avail. I tried with and without the JsonProperty annotation on the JsonCreator method.
```text
com.fasterxml.jackson.databind.JsonMappingException: Unexpected token (VALUE_NUMBER_INT), expected FIELD_NAME: missing property 'type' that is to contain type id (for class Literal)
at [Source: java.io.StringReader@45002203; line: 1, column: 2] (through reference chain: Field["value"])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:164) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.DeserializationContext.wrongTokenException(DeserializationContext.java:692) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedUsingDefaultImpl(AsPropertyTypeDeserializer.java:140) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:73) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeWithType(BeanDeserializerBase.java:850) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:373) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:98) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:308) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:121) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2797) ~[jackson-databind-2.1.2.jar:2.1.2]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:1943) ~[jackson-databind-2.1.2.jar:2.1.2]
```
Answers:
username_1: Right, unfortunately there is no way to do that. Object id and type id handling must come first, and only once those are resolved does the creator get involved.
This is by design. Error messaging may be improved and I am open to suggestions there, but I don't think it is possible to bend the existing object/type id handling functionality to work the way you want.
Instead you will probably need to implement custom deserializers to support such a structure.
username_0: Thank you for looking into this, I appreciate it. Is there any other (dynamic) means to supply type info? Short of a custom deserializer, is there any other extensibility point we can make use of to achieve this?
If none, we'll likely have to consider a custom deserializer - but in that case, do we have a way to register custom deserializers for specific Java types, as opposed to taking full ownership of the entire deserialization process?
username_1: Yes, you can register per-type (de)serializers. In fact, via `Module`, you define `Serializers` / `Deserializers`, which get called for a given type, so you can use whatever criteria you want.
Or, alternatively, what sometimes makes more sense is to register a `BeanSerializerModifier` / `BeanDeserializerModifier`, which gives access to modify POJO (de)serializers while they are being constructed. So you can, for example, take the default (de)serializer, create your own wrapper, and either handle the process (for that instance) or delegate, on a case-by-case basis.
Also, default (de)serializers use delegation, so there are mechanisms for your (de)serializer to handle a particular type but then delegate for contained/referenced types (properties of a POJO).
So you do not need to take ownership of the full chain. Of course, registrations and handling are not trivial, so ideally custom handlers should only be used if other options are not applicable.
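A minimal sketch of the `Module` route for this exact case (the deserializer body is illustrative and untested; overriding `deserializeWithType` is what keeps the `@JsonTypeInfo` machinery from running first):
```java
import java.io.IOException;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.*;
import com.fasterxml.jackson.databind.jsontype.TypeDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;

class LiteralDeserializer extends JsonDeserializer<Literal> {
    @Override
    public Literal deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        JsonNode node = p.getCodec().readTree(p);
        if (node.isNumber()) {                        // short form: { "value": 4 }
            return Literal.createInt(node.asLong());
        }
        // long form: { "type": "integer", "value": 4 } -- dispatch on "type" manually
        if ("integer".equals(node.path("type").asText())) {
            return Literal.createInt(node.path("value").asLong());
        }
        throw ctxt.instantiationException(Literal.class, "unsupported literal: " + node);
    }

    @Override
    public Literal deserializeWithType(JsonParser p, DeserializationContext ctxt,
                                       TypeDeserializer typeDeserializer) throws IOException {
        return deserialize(p, ctxt);                  // bypass the type-id handling entirely
    }
}

// Registration:
ObjectMapper mapper = new ObjectMapper()
        .registerModule(new SimpleModule().addDeserializer(Literal.class, new LiteralDeserializer()));
```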
username_1: As to type info: yes, it is possible to also register Type Id resolvers (`TypeIdResolver`), a builder for resolvers (`TypeResolverBuilder`) and so on. One (the latter) for extracting the type id to use, another for resolving a String type id into an actual `Class`. I am not sure if it would be possible to combine these to support your use case, but it may be worth a try. It's not that the structure must match so much as that delegation works in a certain way: type handlers "wrap" regular (de)serialization.
You can see the delegation model via `JsonSerializer` and `JsonDeserializer`; it is somewhat different for the two directions (serialization is simpler, as it just writes a bit more info; deserialization needs to drive type discovery more).
username_2: I would also like to be able to do this.
Would it not make sense to fall back to the deserializer for the base type if the defaultImpl is not specified in the annotation? It seems like this could be done [here](https://github.com/FasterXML/jackson-databind/blob/fbc1fa23c92a69da474f3a7039e5494de7dc7ae2/src/main/java/com/fasterxml/jackson/databind/jsontype/impl/AsPropertyTypeDeserializer.java#L134) with this snippet
```java
@SuppressWarnings("resource")
protected Object _deserializeTypedUsingDefaultImpl(JsonParser p, DeserializationContext ctxt,
TokenBuffer tb) throws IOException
{
// As per [JACKSON-614], may have default implementation to use
JsonDeserializer<Object> deser = _findDefaultImplDeserializer(ctxt);
if(deser == null) {
//fallback to getting deserializer for basetype
deser = _findDeserializer(ctxt, super.baseTypeName());
}
if (deser != null) {
if (tb != null) {
tb.writeEndObject();
p = tb.asParser(p);
// must move to point to the first token:
p.nextToken();
}
return deser.deserialize(p, ctxt);
}
// or, perhaps we just bumped into a "natural" value (boolean/int/double/String)?
Object result = TypeDeserializer.deserializeIfNatural(p, ctxt, _baseType);
if (result != null) {
return result;
}
// or, something for which "as-property" won't work, changed into "wrapper-array" type:
if (p.getCurrentToken() == JsonToken.START_ARRAY) {
return super.deserializeTypedFromAny(p, ctxt);
}
ctxt.reportWrongTokenException(p, JsonToken.FIELD_NAME,
"missing property '"+_typePropertyName+"' that is to contain type id (for class "+baseTypeName()+")");
return null;
}
```
username_1: @username_0 actually perhaps specifying `defaultImpl = IntegerLiteral.class` should work here.
@username_2 I don't know. I am a bit hesitant to add new kinds of fallbacks for a case where missing information may well signal a problem elsewhere -- in general the type id is expected to be used; and if not, `defaultImpl` indicates that this is acceptable. So wouldn't it make sense to specify that `defaultImpl` explicitly?
username_2: The problem is if you have a class that extends a class that has that annotation, the default impl will not be valid for that extended class and will fail to deserialise. You could just put the annotation on to the extended class as well, but in my use case lots of classes will extend it and it is cleaner not to need a new annotation with a different default impl on all of them.
Status: Issue closed
username_1: @username_2 Such usage is incorrect: all subtypes must conceptually share the same base class. Their default implementations can not vary either. Leaving out `defaultImpl` would not necessarily help either, since the "default default" would then be the contextual base type.
At this point I will close this issue since the original usage is incorrect and assumes behavior that is not what Jackson provides. The type id must be included, except if `defaultImpl` is specified. If it is specified, deserialization should work and there is nothing specific about `@JsonCreator`: it is applied only after polymorphic type resolution has completed. It does not compete with `@JsonTypeInfo` in any way and specifically can not "override" it. Processing starts with handling of the expected Object Id (if any); followed by the Type Id (if any); finally delegating to the deserializer for the resolved type.
@username_2 If you would like to request additional defaulting above and beyond `defaultImpl`, please file a separate issue -- it is usually better to have clear lineage, and your request is somewhat different from problem as reported originally here (it is related, yes, but not same thing). I am open to changes but just want to be clear on what exactly is being suggested. |
witsa/synapps | 362963472 | Title: Command in a composite sent when a scene is displayed
Question:
username_0: A command can be sent when a scene initializes a composite with a writable data source. The problem was observed on the switch button actor, but other interaction actors are potentially affected.
The problem is related to a scene being displayed before all the data from its sources has been loaded.
Answers:
username_1: It appears that the switch button actor behaves unexpectedly at initialization. Until this is fixed, it is recommended to use the switch image instead.
Status: Issue closed
username_0: Fix deployed in **1.3.9** pending resolution of #146 |
PavlosMelissinos/henry | 642530923 | Title: CLI startup is too slow
Question:
username_0: ```
$ time clj -A:gantt -f svg test_resources/ml-data.edn
20-06-21 09:42:21 tim0.localdomain INFO [henry.cli:60] - Validating cli arguments: ("--mode" "gantt" "-f" "svg" "test_resources/ml-data.edn")
20-06-21 09:42:21 tim0.localdomain INFO [henry.gantt:20] - Building vega lite spec...
20-06-21 09:42:21 tim0.localdomain INFO [henry.gantt:42] - test_resources/ml-data.gantt.svg
20-06-21 09:42:24 tim0.localdomain INFO [henry.cli:98] - Done! Check out test_resources/ml-data.gantt.svg
nil
clj -A:gantt -f svg test_resources/ml-data.edn  118,93s user 2,32s system 268% cpu 45,121 total
```
Solution: [GraalVM](https://www.graalvm.org/)
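A build sketch for reference (the uberjar alias and jar path here are assumptions, not this project's actual setup; reflection-heavy dependencies may need extra native-image flags):
```bash
# Build an uberjar, then compile it ahead-of-time with GraalVM's native-image
clj -X:uberjar                 # hypothetical alias producing target/henry.jar
native-image -jar target/henry.jar \
  --no-fallback \
  -H:Name=henry
```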
Answers:
username_0: With GraalVM it's still slow because it still depends on the JDK (some of the dependencies do not translate well to GraalVM), but it has improved considerably
```bash
$ time ./henry -m gantt -f png test_resources/ml-data.edn
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
20-06-21 16:44:41 tim0.localdomain INFO [henry.cli:61] - Validating cli arguments: ("-m" "gantt" "-f" "png" "test_resources/ml-data.edn")
20-06-21 16:44:41 tim0.localdomain INFO [henry.gantt:20] - Building vega lite spec...
20-06-21 16:44:41 tim0.localdomain INFO [henry.gantt:42] - test_resources/ml-data.gantt.png
20-06-21 16:44:45 tim0.localdomain INFO [henry.cli:95] - Done! Check out test_resources/ml-data.gantt.png
nil
./henry -m gantt -f png test_resources/ml-data.edn 45,21s user 1,40s system 277% cpu 16,783 total
``` |
js-data/js-data | 70594164 | Title: Memory leak in DS.changes
Question:
username_0: I have encountered a leak in the `DS.changes` method:
```js
var ignoredChanges = options.ignoredChanges || [];
DSUtils.forEach(definition.relationFields, function (field) {
  return ignoredChanges.push(field);
});
```
If no options were passed to `DS.changes`, this code will keep pushing new values onto the shared `defaultsPrototype.ignoredChanges` array.
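A sketch of a possible fix is to copy the array before appending, so the shared defaults are never mutated:
```js
// Work on a call-local copy; relation fields no longer accumulate on the prototype
var ignoredChanges = (options.ignoredChanges || []).slice();
DSUtils.forEach(definition.relationFields, function (field) {
  return ignoredChanges.push(field);
});
```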
Answers:
username_1: Thanks, I will check it out.
Status: Issue closed
|
yashinomi/sysdev2020_advanced | 727949136 | Title: Class 3. Exercise 4. log
Question:
username_0: Check the state of a specific commit.
Answers:
username_0: First, print the commit tree.
```bash
git log --graph --all
* commit cbae756b782609e58ec380f97ed08b2c3c20b000 (HEAD -> master)
| Author:
| Date: Fri Oct 23 15:13:51 2020 +0900
|
| all hunk
|
* commit <PASSWORD>
| Author:
| Date: Fri Oct 23 15:11:05 2020 +0900
|
| partial hunk
|
* commit ff9444fa9302a63<PASSWORD>84e6<PASSWORD>
| Author:
| Date: Fri Oct 23 15:06:46 2020 +0900
|
| init
|
* commit d4bc849da5da155de027a4ed6d5ae4e1c05e658d
| Author:
| Date: Fri Oct 23 14:37:15 2020 +0900
|
| init
|
* commit 7d5589d75575792483da4937cea26b81a057ac40
Author:
Date: Fri Oct 23 14:36:44 2020 +0900
test
```
username_0: Roll back to the commit where lorem.txt was first created.
```bash
$ ls
hello_world.txt lorem.txt readme.md
$ git checkout ff9444fa9302a63602ab6e84e6c67d2fe0e27844
Note: switching to 'ff9444fa9302a63602ab6e84e6c67d2fe0e27844'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at ff9444f init
```
username_0: lorem.txt was back in its initial state.
```bash
$ ls
hello_world.txt lorem.txt readme.md
$ cat lorem.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
```
username_0: Switched back to the original state.
```bash
$ git switch -
```
Status: Issue closed
|
Scille/umongo | 797405637 | Title: implementations as attribute into the instance
Question:
username_0: First of all, congratulations on this work!
As mentioned [here](https://umongo.readthedocs.io/en/latest/_modules/umongo/instance.html?highlight=auto%20register#) "implementations are registered as attribute into the instance".
But it doesn't seem to work... It seems that the `retrieve_document` function is made for this, but I would have preferred an `instance.MyImplementedDocument` approach.
Can you shed some light on this?
Note that I don't want to use register as a decorator, to decouple the documents from the backend choice.
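For context, here's a sketch of what I mean (the backend instance class is just an example; `register` and `retrieve_document` are the existing API):
```python
from umongo import Document
from umongo.frameworks import PyMongoInstance

instance = PyMongoInstance(db)  # backend chosen in exactly one place

class MyDocument(Document):
    pass

MyDocumentImpl = instance.register(MyDocument)       # works today, no decorator needed
SameImpl = instance.retrieve_document('MyDocument')  # works today
# instance.MyDocument                                # attribute access the docstring suggests
```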
Thanks for your help!
Answers:
username_1: Indeed, looks like the docstring is outdated. The note below doesn't mention Mixin.
I think I removed this because
- I didn't see a use case
- There could be name clashes if a Doc and an Embedded doc have the same name that could catch the user by surprise.
Do you really have a use case for this?
If you use `instance.Doc` somewhere, why not just import `Doc`? |
jlippold/tweakCompatible | 343739496 | Title: `NoSubstitute (Electra)` working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "co.vexation.nosubstitute",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "co.vexation.nosubstitute",
"deviceId": "iPhone8,2",
"url": "http://cydia.saurik.com/package/co.vexation.nosubstitute/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": true,
"packageName": "NoSubstitute (Electra)",
"category": "Tweaks",
"repository": "BigBoss",
"name": "NoSubstitute (Electra)",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 96% with 30 working reports.",
"id": "co.vexation.nosubstitute",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Disables substitute in apps.",
"latest": "1.0-1",
"author": "Dagzer",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": "Work"
}
```<issue_closed>
Status: Issue closed |
ant-design/ant-design-pro | 338499878 | Title: After code splitting, the login page still feels too large
Question:
username_0: 
This is the code after splitting; index.js is still over 1 MB, down from 2 MB before.
Loading this JS file when entering the login page still takes 4.95 seconds.
Also, your documentation is too brief: enabling splitting requires setting disableDynamicImport: false in the .webpackrc.js file, which isn't mentioned at all; I only found it after digging through the issues for a long time
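For reference, here is the change being described (file and option names are taken from this report):
```js
// .webpackrc.js
export default {
  disableDynamicImport: false, // enable route-level code splitting
};
```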
Answers:
username_1: Please read the documentation carefully!
Status: Issue closed
|
LLNL/RAJA | 860066817 | Title: Fix "exhaustive" atomic tests
Question:
username_0: The tests:
test-forall-atomic-basic-OpenMP.exe
test-forall-AtomicRefMinMax-OpenMP.exe
fail when we build and run exhaustive tests. They work with the default test builds.
@username_1 assigning this to you to look at when you have time. You won't see these test failures on the develop branch so they are likely due to me adding in new OpenMP execution policies to test. A PR for these changes is coming soon.
Answers:
username_0: @username_1 is this still an issue? I tried a couple of compilers (gnu, clang) and everything passed.
username_1: I'll try this with XL because that's usually the most problematic compiler.
username_1: Strange, I'm running into 2 different compilation issues with test-forall-atomic-basic-OpenMP.exe. One is the excessive recursion in our templating (saw this in https://github.com/LLNL/RAJA/pull/1165), the other is the camp::resources::Omp not found issue. I'll need to dig further into why these are happening.
nvcc/10.1.243 + xl/2021.12.22 -
```
cd /usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-nvcc10.1.243-sm_70-xl2021.12.22/test/functional/forall/atomic-basic && /usr/tce/packages/cuda/cuda-10.1.243/bin/nvcc -ccbin=/usr/tce/packages/xl/xl-2021.12.22/bin/xlc++_r -DGTEST_HAS_DEATH_TEST=1 -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/test/include -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/test/functional/forall/atomic-basic/tests -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/blt/thirdparty_builtin/googletest-master-2020-01-07/googletest -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-nvcc10.1.243-sm_70-xl2021.12.22/include -I/usr/tce/packages/cuda/cuda-10.1.243/include -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/tpl/camp/include -I/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-nvcc10.1.243-sm_70-xl2021.12.22/tpl/camp/include -isystem=/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/blt/thirdparty_builtin/googletest-master-2020-01-07/googletest/include -isystem=/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/tpl/cub -restrict -arch sm_70 --expt-extended-lambda --expt-relaxed-constexpr -Xcudafe "--display_error_number" -O3 -Xcompiler -O3 -Xcompiler -qxlcompatmacros -Xcompiler -qalias=noansi -Xcompiler -qsmp=omp -Xcompiler -qhot -Xcompiler -qnoeh -Xcompiler -qsuppress=1500-029 -Xcompiler -qsuppress=1500-036 -Xcompiler=-fPIE -Xcompiler=-qsmp=omp -std=c++14 -x cu -c /usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-nvcc10.1.243-sm_70-xl2021.12.22/test/functional/forall/atomic-basic/test-forall-atomic-basic-OpenMP.cpp -o CMakeFiles/test-forall-atomic-basic-OpenMP.exe.dir/test-forall-atomic-basic-OpenMP.cpp.o
/usr/workspace/wsrzc/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/blt/thirdparty_builtin/googletest-master-2020-01-07/googletest/include/gtest/internal/gtest-internal.h(731): error #456: excessive recursion at instantiation of function "testing::internal::TypeParameterizedTest<Fixture, TestSel, Types>::Register [with Fixture=ForallAtomicBasicTest, TestSel=testing::internal::TemplateSel<gtest_suite_ForallAtomicBasicTest_::AtomicBasicForall>, Types=testing::internal::Types<camp::list<RAJA::policy::omp::omp_parallel_for_guided_exec<-1>, RAJA::policy::cuda::cuda_atomic_explicit<RAJA::builtin_atomic>, camp::resources::v1::Host, RAJA::Index_type, unsigned long long>, camp::list<RAJA::policy::omp::omp_parallel_for_guided_exec<-1>, RAJA::policy::cuda::cuda_atomic_explicit<RAJA::builtin_atomic>, camp::resources::v1::Host, RAJA::Index_type, float>, camp::list<RAJA::policy::omp::omp_parallel_for_guided_exec<-1>
```
xl/2021.12.22_omptarget -
```
[ 66%] Building CXX object test/functional/forall/atomic-basic/CMakeFiles/test-forall-atomic-basic-OpenMP.exe.dir/test-forall-atomic-basic-OpenMP.cpp.o
cd /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-xl_omptarget-2021.12.22/test/functional/forall/atomic-basic && /usr/tce/packages/xl/xl-2021.12.22/bin/xlc++_r -+ -DGTEST_HAS_DEATH_TEST=1 -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/test/include -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/test/functional/forall/atomic-basic/tests -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/blt/thirdparty_builtin/googletest-master-2020-01-07/googletest -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-xl_omptarget-2021.12.22/include -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/tpl/camp/include -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-xl_omptarget-2021.12.22/tpl/camp/include -I/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/blt/thirdparty_builtin/googletest-master-2020-01-07/googletest/include -qthreaded -std=c++14 -O3 -qxlcompatmacros -qlanglvl=extended0x -qalias=noansi -qsmp=omp -qhot -qpic -qsuppress=1500-029 -qsuppress=1500-036 -qpic -qoffload -qsmp=omp -qalias=noansi -std=c++1y -o CMakeFiles/test-forall-atomic-basic-OpenMP.exe.dir/test-forall-atomic-basic-OpenMP.cpp.o -c /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-xl_omptarget-2021.12.22/test/functional/forall/atomic-basic/test-forall-atomic-basic-OpenMP.cpp
In file included from /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/build_lc_blueos-xl_omptarget-2021.12.22/test/functional/forall/atomic-basic/test-forall-atomic-basic-OpenMP.cpp:11:
In file included from /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/test/include/RAJA_test-base.hpp:15:
In file included from /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include/RAJA/RAJA.hpp:44:
In file included from /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include/RAJA/pattern/forall.hpp:64:
In file included from /usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include/RAJA/policy/MultiPolicy.hpp:31:
/usr/WS1/chen59/allraja/rajaatomicexhaustive/raja_git_atomicexhaustive/include/RAJA/util/resource.hpp:119:35: error: no type named 'Omp' in namespace 'camp::resources'
using type = camp::resources::Omp;
~~~~~~~~~~~~~~~~~^
```
username_0: FWIW, I tried compiling earlier with multiple different versions of clang-ibm and OpenMP target enabled and saw a bunch of similar errors about no 'Omp' in 'camp::resources' namespace. I recall that there was an issue where ENABLE_OPENMP was not getting propagated to camp and I thought it was fixed a while back.
username_1: Adding `-DCAMP_ENABLE_TARGET_OPENMP` looks like it's helping; I'm just waiting for XL to build it...
Once that's confirmed to work, I'll put a PR up. Still not sure what we can do about the excessive recursion in the compiler though.
username_0: should we add something to our CMake to set that flag when OpenMP target is enabled in RAJA?
username_1: Indeed, I'm looking in to that option!
username_2: You can probably copy and tweak this snippet from Umpire:
https://github.com/LLNL/Umpire/blob/develop/src/umpire/tpl/CMakeLists.txt#L92-L97
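For RAJA that might look something like this (a sketch adapted from the Umpire snippet; the RAJA-side option name is an assumption):
```cmake
# Propagate the OpenMP target setting to camp before add_subdirectory(camp)
if (RAJA_ENABLE_TARGET_OPENMP)
  set(CAMP_ENABLE_TARGET_OPENMP On CACHE BOOL "" FORCE)
endif ()
```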
username_1: Thanks @username_2, that worked for the clang-omptarget tests. Still waiting on XL to see if the test works, but made a PR for this anyway because we should have it https://github.com/LLNL/RAJA/pull/1207. |
OPSkins/trade-opskins-api | 337686531 | Title: How to get the 2fa secret key
Question:
username_0: Hi, I'm currently reading through the API documentation, and I see that for some requests we need a 2FA secret code. I was wondering how to get it. Is it like on Steam, with the shared_secret and the identity_secret, or is it totally different?
If someone can help me, that would be nice.
Have a nice day
Answers:
username_1: Hello,
Please see: https://github.com/OPSkins/trade-opskins-api/issues/16
username_1: Lets keep this issue open until I update docs with a guide for this, thanks.
username_0: Ok, nice, thanks for the answer. Sorry, I didn't see the issue had already been asked.
Have a nice day.
Status: Issue closed
|
firebase/firebase-admin-go | 481389304 | Title: Why is the client context not used in verifyToken?
Question:
username_0: For some reason the `ctx` that is passed in is not used, instead a new blank context is created for no purpose.
https://github.com/firebase/firebase-admin-go/blob/79d5dcdeff7f536eb11a549dd25579d88831d19e/snippets/auth.go#L80-L82
Answers:
username_1: No particular reason. One could argue that since only the code between the `[START]` and `[END]` tags is actually rendered in the website, using `context.Background()` results in a less ambiguous example. But I see the `ctx` variable being used almost everywhere in these snippets, so it's probably safer (and better) to do the same here.
username_0: Using the provided ctx would allow the requester to cancel if needed. `context.Background()` will never cancel and severs the link to the caller.
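A sketch of the suggested change (the helper name is ours; `app.Auth` and `VerifyIDToken` accept a context in current SDK versions):
```go
// Thread the caller's ctx through instead of creating context.Background(),
// so cancellation and deadlines propagate to the verification call.
func verifyIDToken(ctx context.Context, app *firebase.App, idToken string) (*auth.Token, error) {
	client, err := app.Auth(ctx)
	if err != nil {
		return nil, err
	}
	return client.VerifyIDToken(ctx, idToken)
}
```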
username_1: You're welcome to provide a PR to update the snippet. But I don't think it's too critical, since we don't render the `verifyToken()` signature in the HTML output: https://firebase.google.com/docs/auth/admin/verify-id-tokens#verify_id_tokens_using_the_firebase_admin_sdk
username_1: @lahirumaramba this is a trivial change. Might be a good place for you to start.
Status: Issue closed
|
facebook/react-native | 243920288 | Title: iOS Native UI Component NOT rendering
Question:
username_0: Hi all,
I followed the guide in [Native UI Component](https://facebook.github.io/react-native/docs/native-components-ios.html) to create a custom native UI that I want to render in JS. I generated a brand new project with ```react-native init CustomUI``` and then followed the instructions in the doc.
The example in the doc uses ```MKMapView``` and I've wrapped this view as a sub view to ```MyCustomUI``` class as explained in this [issue](https://github.com/facebook/react-native/issues/2948). Then I return ```MyCustomUI``` view in the ```MyCustomUIManager``` class.
Then I try to grab my view in ```index.ios.js``` file with the ```requireNativeComponent("MyCustomUI", MyCustomUIView)``` method.
However, when I run the app. Nothing is being rendered, I just get a blank background. I also tried not wrapping the ```MKMapView``` and return it directly from ```MyCustomUIManager``` and that didn't work either.
I've a repo setup [here](https://github.com/username_0/react-native-ios-native-UI) if you guys want to try and run it
### Environment
1. `react-native -v`: 0.46.4
2. `node -v`: 6.10.2
3. `npm -v`: 3.10.10
- Target Platform: iOS 10+
- Development Operating System: Mac OSX 10.12.4
### Code
```obj-c
// MyCustomUI.h
#import <UIKit/UIKit.h>
@interface MyCustomUI : UIView
@end
```
```obj-c
// MyCustomUI.m
#import "MyCustomUI.h"
#import <MapKit/MapKit.h>
@implementation MyCustomUI
-(instancetype)init {
self = [super init];
if (self) {
[self setUp];
}
return self;
}
-(void) setUp {
NSLog(@"Map Setup");
MKMapView * map = [[MKMapView alloc] init];
[self addSubview:map];
}
@end
```
```obj-c
[Truncated]
alignItems: 'center',
backgroundColor: '#F5FCFF',
},
welcome: {
fontSize: 20,
textAlign: 'center',
margin: 10,
},
instructions: {
textAlign: 'center',
color: '#333333',
marginBottom: 5,
},
custom: {
flex: 1,
}
});
AppRegistry.registerComponent('CustomUI', () => CustomUI);
```
Answers:
username_1: I had this issue and I fixed it by overriding the layoutSubviews function in my custom view. Apparently the frame will be determined by react-native after initialisation, and so if you don't implement layoutSubviews the subviews inside your custom UIView will not have their frame updated.
This is already done by default in the MKMapView, and so there is no need to use a MyCustomUI wrapper UIView. If you really want to use a wrapper UIView, then make sure to implement the layoutSubviews function, in which you update the frame of all subviews you wish to see. I hope this helps!
username_0: @username_1 thanks for your response. I appreciate it. I tried returning the ```MKMapView``` directly in the ```MyCustomUIManager``` class as well and that did not work either.
```obj-c
// MyCustomUIManager.m
- (UIView *)view
{
return [[MKMapView alloc] init];
}
```
I tried overriding the layoutSubviews method in MyCustomUI but I'm not sure what exactly I should be doing inside this method other than calling the super to layout subviews. With this dummy overridden layoutSubview, I still don't see anything being rendered. Any thoughts?
```obj-c
// MyCustomUI.m
- (void) layoutSubviews {
[super layoutSubviews];
// What else should I do?
}
```
username_1: @username_0 It might be that you are just using incorrect styling. I'm no react-native expert but I was messing around with this yesterday and managed to find a way to test it.
Try replacing the `'MyCustomUI'` in `requireNativeComponent('MyCustomUI', MyCustomUIView)` with something incorrect like `'blahblah'`. Now the linking should be broken and react-native will render your component as a box with a red border. You can now use this box with red border to see what the true location and size of your component is. For example I once saw it was not using the full width of the screen and was just a single pixel along the left side of the screen. When I added `alignSelf: 'stretch'` to the style I could then see the component.
username_0: @username_1 thanks for the help. I was able to get it working. I had to set the frame of the subviews in ```layoutSubviews```. Another way I found that also works was to initialize the ```MKMapView``` with the ```initWithFrame``` method; then I don't need to override the ```layoutSubviews``` method.
### Method 1
```obj-c
// MyCustomUI.m
// No longer need to override layoutSubviews method
- (void) setMap {
MKMapView * map = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, 200, 300)];
[self addSubview:map];
}
```
### Method 2
```obj-c
// MyCustomUI.m
// Create MKMapView with just init method works with this
- (void) layoutSubviews {
[super layoutSubviews];
for(UIView* view in self.subviews) {
[view setFrame:self.bounds];
}
}
```
Status: Issue closed
username_2: Got the same issue too, don't know why React Native inits my view with frame (0, 0, w, 0).
username_3: @username_2 did you ever figure it out? I am having this problem with a view as well |
USEPA/SWMM-EPANET_User_Interface | 292028274 | Title: EPANET_MTP4r2_ Map-Toolbar_Viewing-network_High
Question:
username_0: Viewing the network in the Project Explorer, Map Tab and Layers Tab can easily result in the UI program shutting down. There is something buggy about the Map and Layers.
Answers:
username_1: @username_0 User testing is one of the most important ways of exposing problems in software. Usually, a certain combination of operations in a certain sequence leads to a crash, so it is important for users to remember the sequence of actions performed before the crash; that information can help us reproduce the crash and fix it. So, more specific details are needed when you encounter a failure, to be included in the bug/issue report.
Status: Issue closed
|
zold-io/wts.zold.io | 669025099 | Title: sibit 0.19.4
Question:
username_0: @username_1 release, tag is `0.39.10`
Answers:
username_1: @username_0 OK, I will release it now. Please check the progress [here](https://www.username_1.com/t/22781-666560985)
username_1: @username_0 Done! FYI, the full log is [here](https://www.username_1.com/t/22781-666560985) (took me 8min)
Status: Issue closed
username_2: Job `gh:zold-io/wts.zold.io#364` is not assigned, can't get performer
<!-- https://www.username_2.com/footprint/CB28FH2NR/cab5a3ac-a113-4bc2-b6d5-56b117feea65, version: 0.54.5, hash: ${buildNumber} --> |
MicrosoftDocs/office-docs-powershell | 612088311 | Title: Are the SET commands unchanged?
Question:
username_0: This more of a question than an issue:
I see the EXO V2 cmdlets are all GET verbs. Are the cmdlets for updating mailboxes still the same? Will they be updated in the future as part of this project?
Thank you.
Answers:
username_1: @officedocsbot assign @username_1
username_2: Hi @username_0, thank you for your feedback.
The Exchange Online PowerShell V2 module is currently in Preview. If you have any feedback, concerns, or are facing any issues with the EXO V2 module, contact us at exocmdletpreview[at]service[dot]microsoft[dot]com (email address intentionally obscured to help prevent spam).
username_1: @username_2 Thank you very much for the contribution and for sharing this explanation. @username_0
Hope this comment is helpful for you. If you see that a documentation update is required, please feel free to open an issue for it. We will proceed to close this one. Thanks for taking the time to open the issue; we appreciate it and encourage you to do the same in the future.
Status: Issue closed
|
whatwg/html | 314289379 | Title: whatwg documents under cc by sa 4.0, but no attribution
Question:
username_0: https://html.spec.whatwg.org/multipage/introduction.html#scope
This and possibly other documents lack attribution that the CC By SA 4.0 International license requires.
I assume at the very least the "Editors" or "Authors" can be generated from XML into a meta tag and a list in the document. I looked into the document for a meta author tag, but found none, after seeing there was no attribution in the document content.
I am not in whatwg.org, but I can tell you that this could kill the web, since whatwg.org doesn't do attribution and w3.org does have a license, but only allows viewing of patented (does that mean expensive? whew! how is anybody going to write a document? that can eliminate the web) HTML 5.1-5.2 (the latest version).
big problem.
By the way, "All Rights Reserved" is old copyright boilerplate, no longer used. Copyrights of documents or whole web sites must be submitted via copyright.gov to the LOC for approximately $35 (last time I looked), and changes submitted yearly, or it is not legally defensible. But you can, I think (check the copyright.gov FAQ to verify), have a default author's copyright statement, and it doesn't need to use a copyright symbol.
Example: Copyright startYear SomeName
I hope this helps,
Answers:
username_1: I find this issue confusing, and am not sure what exactly we're supposed to be attributing. But, it sounds like you found what you're looking for, so I'll close this. Happy to continue discussing in the closed thread, and reopen if there's something actionable here.
Status: Issue closed
|
TwilioDevEd/call-tracking-laravel | 104907431 | Title: UI too messy/confusing in current state
Question:
username_0: We're not super concerned with visual polish on these, but in its current state the UI is not presentable. Normally I wouldn't mind if the paths are different or the UX diverges slightly, but the deviations from the Django app's UX were not in a positive direction.
To eliminate any confusion about what is desired, let's precisely follow UX for the [Django implementation](https://github.com/TwilioDevEd/call-tracking-django):
1. The "dashboard" should have the list of editable lead sources in a left column, and charts for lead distribution on the right. There should be a form to search for a new number to add as a lead source on the dashboard as well
2. Searching for a number should initiate a workflow where you select from available numbers to purchase
3. After selecting a number, it should be purchased with the Twilio API, and then associated with a lead source, that you edit with a name and forwarding number.
I would check out and run the Django app locally, and very closely mirror it's layout and UX.
Answers:
username_1: @username_0
Does it look okay now?

username_0: @username_1 looks much better now, but the grid on the home page is still off - you should be able to use Bootstrap's grid system to prevent the content from overflowing the header. Once that's addressed it looks good.
username_1: Yeah, I had some misplaced tags in the master template. Should be fixed now.
Status: Issue closed
|
networkservicemesh/sdk | 815801497 | Title: Refactor chains/endpoint to use Option pattern
Question:
username_0: ## Motivation
We can simplify the use of our chains with the
[Option pattern](https://github.com/tmrts/go-patterns/blob/master/idiom/functional-options.md)
See an example: https://github.com/networkservicemesh/sdk/commit/9007c4530373d301993ecd8985a1c9abebe359b2
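A minimal sketch of the idiom (names are illustrative, not the actual sdk API):
```go
package endpoint

// serverOptions collects everything configurable about the chain.
type serverOptions struct {
	name string
}

// Option mutates serverOptions; callers pass any subset, in any order.
type Option func(*serverOptions)

func WithName(name string) Option {
	return func(o *serverOptions) { o.name = name }
}

type Server struct {
	name string
}

// NewServer applies defaults first, then each caller-supplied option.
func NewServer(opts ...Option) *Server {
	o := &serverOptions{name: "default"}
	for _, opt := range opts {
		opt(o)
	}
	return &Server{name: o.name}
}
```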
Moreover, this can greatly help us minimize manual changes to upstream repositories.
Entry point: https://github.com/networkservicemesh/sdk/blob/master/pkg/networkservice/chains/endpoint/server.go<issue_closed>
Status: Issue closed |
nova-video-player/aos-AVP | 1108967695 | Title: [FEAT]: play media from NFS shares
Question:
username_0: ### Description
Moja doesn't support NFS paths. It seems me great to be able to play videos from Jellyfin TV client as an external player using NFS paths.
### Additional information
_No response_
Answers:
username_1: If we wanted to do that (which is quite unlikely though), I guess we could use [libnfs](https://github.com/sahlberg/libnfs) for that
username_0: Oh. Why not? NFS works so much better when streaming in a local network. CPU has much less work to do. Puts less strain on the machine.
username_1: Better than what?
We can't implement NFS in kernel, so we'll necessarily have a performance hit.
I assume you mean you have a video which is stuttering on some device if you're saying this.
Please say which video, and which device is stuttering, so we can take a look.
username_0: Better than streaming over a http link. No, all videos work fine, no issue there. I just want to use another protocol when playing media as it works so much better from the Linux based NAS like Synology. |
cloudfoundry/java-buildpack | 213616045 | Title: Java Process Memory Configuration
Question:
username_0: We are trying to configure the Memory Allocation for our Java/SpringBoot Apps based on the instructions detailed here: https://www.predix.io/support/article/KB0011507
JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {memory_heuristics: {heap: 1284, metaspace: 341, native: 574, stack: 1}}]'
option works just fine for a 512MB MEMORY_LIMIT. We are however, trying to use these instructions to scale this down further for a lower MEMORY_LIMIT. We tried 175MB with the following configuration:
JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {memory_heuristics: {heap: 1024, stack: 1}}]'
which results in: Memory Settings: -Xss228K -Xms44610K -XX:MetaspaceSize=64M -Xmx44610K -XX:MaxMetaspaceSize=64M
We tried different configurations but we noticed it is not possible to scale down the "MetaSpace" size to less than 64M.
As far as I understand, a metaspace that size is not needed for our Apps. On the local machine we can successfully run our Apps with a maximum metaspace of 10MB.
We would really appreciate it if you could throw some more light on the following:
* Is there a way to lower down the metaspace allocation to less than 64MB in Cloud Foundry?
Thanks
<NAME>
GE, Aviation
Answers:
username_1: The 64M metspace limit you're seeing is [codified as part of the configuration](https://github.com/cloudfoundry/java-buildpack/blob/36011451f85a53abbdcead5d46438800a9a307cb/config/open_jdk_jre.yml#L28). You can change this configuration parameter (even removing it) which will allow that number to shrink.
But in truth, JVMs actually require much larger memory spaces than that. While you might be able to get the heap below 64M, metaspace (default unbounded) for storing classes that need to be loaded, thread stacks (defaulting to 1M x 200 threads by default in Tomcat = 200M), Compresed Class space (default 1G), Code Cache (default 240M), and direct memory (default unbounded) mean that the [_actual_ amount of memory consumed by the JVM](https://github.com/cloudfoundry/java-buildpack/issues/319) is significantlly higher than the 64M you're trying to aim for.
We've been working on a new version of the memory calculator that accounts for all of these memory spaces and in our testing, it seems reasonable that the smallest container size (using JVM defaults) with any reasonable amount of heap is probably 768M, but that the Cloud Foundry default of 1G is probably the correct choice for most applications. You can shrink those memory pools if you think your application can get away with it, but we've seen **significant** performance degradation in our experiments when we have done so with typical applications.
username_0: Will this require a buildpack customization or can we do this via Manifest Configuration/Environment variable approach?
Just to add some context to the conversation,
I agree with you that high traffic Production Containers with a small Java heap (less than 768M) is not recommended. Correct me if I am wrong, but the same rule would probably apply to Production full blown apps in other platforms as well (Node.js, Golang, etc).
However, the use case for small Java containers is along the lines of low traffic, Development space containers that can even encompass HelloWorld like Apps. Allocating so much memory for such processes would most likely be on the excessive side.
Ideally, we would like to allocate a small Java "heap" with a smaller "metaspace" for such processes.
For instance, I was trying to optimize the Memory Configuration for a 175MB Memory Limit, and with a bound "metaspace" of 64MB, I could only get a "heap" allocation of "50MB". If we could shrink the "metaspace", we could increase the "heap" allocation more, and may be even be able to shrink the overall Memory Limit.
I would like to know what your thoughts are for our use case and if I am making some wrong assumptions about allocating memory for the low footprint Java processes?
Thanks
Sohil
username_1: All configuration in the buildpack can be accomplished by either a fork or environment variables as [described in the documentation](https://github.com/cloudfoundry/java-buildpack/blob/master/README.md#configuration-and-extension).
However you've got a fundamental misunderstanding about how the JVM allocates memory. You've only accounted for heap and metaspace in your calculations. You also need to allocated 1M (default) for every thread your application could _possibly_ start (200 default for Tomcat) as well as Code Cache (240M default), Compressed Class Space (1G default), direct memory (unbounded default) and of course the native memory that the JVM requires to run. So saying that you only need 175M for heap and metaspace doesn't account for **most** of the memory used by the JVM. You can tune any of these numbers down (with massive performance penalties) but you absolutely must include them in your container size. Once you've done that, you'll find that small containers with JVMs aren't feasible, regardless of heap and metaspace calculations.
username_0: Thanks for the clarification.
I agree that small JVM containers in Production is not recommended.
However, it may make sense to create small memory footprint JVM containers for minimal traffic Demo/Development spaces.
It looks like the best option for that use case is to provide memory calculator settings via the JBP_CONFIG_OPEN_JDK_JRE environment variable.
I am trying to accomplish your suggestion of
* The 64M metspace limit you're seeing is codified as part of the configuration. You can change this configuration parameter (even removing it) which will allow that number to shrink.
I cannot seem to find a way to "resize it" through configuration via the JBP_CONFIG_OPEN_JDK_JRE Environment variable. Would you happen to have an example of what that configuration should look like?
Here is what I have tried:
[memory_heuristics: memory_sizes: {metaspace: 20m..}]
It still uses 64MB
I am using the following buildpack: https://github.com/cloudfoundry/java-buildpack.git#v3.12
Thanks
Sohil
username_1: `cf set-env <APP> JBP_CONFIG_OPEN_JDK_JRE '{ memory_calculator: { memory_sizes: { metaspace: 16m } } }'`
Note that the value is a YAML document that matches the config file itself. For ease of reading, we use the YAML "flow" style.
username_0: Thanks Ben. That worked
Status: Issue closed
username_1: Great to hear. |
trapexit/mergerfs | 404638461 | Title: mergerfs is copying files when i rename / move them.
Question:
username_0: System information
==================
Please provide as much of the following information as possible:
mergerfs version: 2.21.0
FUSE library version: 2.9.7
fusermount version: 2.9.7
using FUSE kernel interface version 7.19
Linux server 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
fstab
=====
UUID=82042aab-1a2c-11e9-bf40-902b34dd724a none swap sw 0 0
UUID=82042aac-1a2c-11e9-bf40-902b34dd724a / ext4 defaults 0 0
UUID=83ede676-1a2c-11e9-bf40-902b34dd724a /home ext4 defaults 0 0
UUID=3AF5-CBC3 /boot/efi vfat defaults 0 0
# STORAGE DRIVES
UUID="accf4128-ef55-44bc-b105-5b82ce3f342f" /mnt/disk-8tb1 ext4 defaults 0 0
UUID="057aee3a-63ac-4014-9ec2-25470d81cdf1" /mnt/disk-8tb2 ext4 defaults 0 0
UUID="c2c8f098-2158-495b-bb2c-a150fd049496" /mnt/disk-8tb3 ext4 defaults 0 0
# PARITY DRIVE IS 8TB4
UUID="b8e93a30-3f17-4737-9a45-3047c43f80a8" /mnt/parity ext4 defaults 0 0
UUID="3eb93b46-7fe0-452a-8335-72aabe8dc0b5" /mnt/disk-6tb1 ext4 defaults 0 0
UUID="f413c50f-d0c8-4db9-9093-1749f51c3c60" /mnt/disk-4tb1 ext4 defaults 0 0
UUID="81ca18e1-7807-485c-ad8a-1ad826de70fe" /mnt/disk-3tb1 ext4 defaults 0 0
UUID="fb17a341-9176-4490-8df1-0c7fcea02c30" /mnt/disk-3tb2 ext4 defaults 0 0
UUID="fc6d8b67-3170-4cad-b158-7478284c07bc" /mnt/disk-3tb3 ext4 defaults 0 0
UUID="c8b7c67a-c02c-4f40-89dc-5c54c69315f3" /mnt/disk-3tb4 ext4 defaults 0 0
# STORAGE POOL
/mnt/disk* /mnt/storage fuse.mergerfs direct_io,defaults,allow_other,category.create=eplfs,use_ino,minfreespace=30G,fsname=mergerfs 0 0
Answers:
username_0: Apologies, some of the information requested may be missing as I am still just a noob with Linux.
username_0:
```
Package: gvfs-fuse
==========
Versions:
1.36.1-0ubuntu1.2 (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic-updates_main_binary-amd64_Packages)
Description Language:
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages
MD5: e8ae435dfe556826602d3a021208211e
Description Language: en
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_main_i18n_Translation-en
MD5: e8ae435dfe556826602d3a021208211e
Description Language:
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic-updates_main_binary-amd64_Packages
MD5: e8ae435dfe556826602d3a021208211e
1.36.1-0ubuntu1 (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages)
Description Language:
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_main_binary-amd64_Packages
MD5: e8ae435dfe556826602d3a021208211e
Description Language: en
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_main_i18n_Translation-en
MD5: e8ae435dfe556826602d3a021208211e
Description Language:
File: /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic-updates_main_binary-amd64_Packages
MD5: e8ae435dfe556826602d3a021208211e
Reverse Depends:
gnome-core,gvfs-fuse 1.30
lubuntu-qt-desktop,gvfs-fuse
lubuntu-gtk-desktop,gvfs-fuse
lubuntu-desktop,gvfs-fuse
xubuntu-desktop,gvfs-fuse
vanilla-gnome-desktop,gvfs-fuse
ubuntustudio-desktop,gvfs-fuse
ubuntukylin-desktop,gvfs-fuse
ubuntu-unity-desktop,gvfs-fuse
ubuntu-mate-desktop,gvfs-fuse
ubuntu-mate-core,gvfs-fuse
ubuntu-budgie-desktop,gvfs-fuse
pcmanfm-qt,gvfs-fuse
pcmanfm,gvfs-fuse
nemo,gvfs-fuse
lubuntu-qt-desktop,gvfs-fuse
lubuntu-gtk-desktop,gvfs-fuse
lubuntu-desktop,gvfs-fuse
ubuntu-desktop,gvfs-fuse
cinnamon-core,gvfs-fuse
Dependencies:
1.36.1-0ubuntu1.2 - libc6 (2 2.4) libfuse2 (2 2.8) libglib2.0-0 (2 2.49.3) gvfs (5 1.36.1-0ubuntu1.2) fuse (2 2.8.4)
1.36.1-0ubuntu1 - libc6 (2 2.4) libfuse2 (2 2.8) libglib2.0-0 (2 2.49.3) gvfs (5 1.36.1-0ubuntu1) fuse (2 2.8.4)
Provides:
1.36.1-0ubuntu1.2 -
1.36.1-0ubuntu1 -
Reverse Provides:
```
username_1: It's not mergerfs. It doesn't copy/unlink on rename.
You are using a path-preserving policy, so I suspect you're simply hitting an EXDEV error because the path doesn't exist on the destination branch, nor would it be created on it (read the docs for more detail). As a result radarr is moving the file.
You can use a different, non-path-preserving policy or enable ignorepponrename. Or create the relevant path on the target drive.
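For example (a sketch based on your fstab above; `lfs` is a non-path-preserving counterpart of `eplfs`):
```
/mnt/disk* /mnt/storage fuse.mergerfs direct_io,defaults,allow_other,category.create=lfs,ignorepponrename=true,use_ino,minfreespace=30G,fsname=mergerfs 0 0
```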
In the future please consider updating to the most recent version of mergerfs before submitting tickets. Makes it easier to debug.
Status: Issue closed
username_1: Upgrade. You're using an old release.
username_0: sorry. thank you. |
ChluNetwork/chlu-ipfs-support | 298587567 | Title: Add listen-able lifecycle events and waitUntilReady function
Question:
username_0: Users of the library should be able to listen to
- all internal events
- starting/stopping, started/stopped
Also there should be a `waitUntilReady` function that returns a promise that resolves when the ChluIPFS instance is ready.
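A sketch of how `waitUntilReady` could sit on top of the lifecycle events (EventEmitter-based; the 'started' event name is assumed):
```js
const EventEmitter = require('events')

class ChluIPFS extends EventEmitter {
  constructor() {
    super()
    this.ready = false
    this.once('started', () => { this.ready = true })
  }

  // Resolves immediately if already started, otherwise on the next 'started' event
  waitUntilReady() {
    if (this.ready) return Promise.resolve()
    return new Promise(resolve => this.once('started', resolve))
  }
}
```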
This is needed to better integrate UIs and to make sure no calls are performed while the node is not ready yet<issue_closed>
Status: Issue closed |
kakao/cuesheet | 215642295 | Title: Support dependency version conflict resolution
Question:
username_0: Currently DependencyAnalyzer does not resolve dependency version conflicts while building the assembly.
A simple first step is to make it include the latest version, and possibly to provide an option for specifying a resolution strategy.
axios/axios | 375283197 | Title: CDN import: TypeError: Cannot set property 'axios' of undefined
Question:
username_0: I'm trying to use axios CDN with dynamic import ES6 ,
#### My code
```js
(async () => {
  await import('https://unpkg.com/axios/dist/axios.min.js')
})()
```
#### Error

It works fine with [System.import](https://github.com/systemjs/systemjs#loading-a-systemregister-module), but not with native import.
#### Context
- axios version: *v0.18.0*
- Environment: *chrome 70, windows 10*
Answers:
username_1: It works for me. What is the output of the following log statement?
```
(async () => {
const axios = await import('https://unpkg.com/axios/dist/axios.min.js')
console.log(typeof axios)
})();
```
Can you share all of the relevant code?
username_0: I just test it independently, this is the execution result on the console

username_2: It is because the global `this` is undefined in the imported module context. Hope someone can investigate whether this is expected or not.
```
(function webpackUniversalModuleDefinition(root, factory) {
if(typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if(typeof define === 'function' && define.amd)
define([], factory);
else if(typeof exports === 'object')
exports["axios"] = factory();
else
root["axios"] = factory();
})(this, function() {
```
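Since the UMD wrapper above receives `undefined` as `root` under a native module import, one workaround is to load the bundle as a classic script so the wrapper sees the global object (sketch; the helper name is ours):
```js
// Inject a classic <script>; the UMD wrapper then attaches axios to window
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const s = document.createElement('script')
    s.src = src
    s.onload = () => resolve(window.axios)
    s.onerror = reject
    document.head.appendChild(s)
  })
}

loadScript('https://unpkg.com/axios/dist/axios.min.js')
  .then(axios => console.log(typeof axios)) // 'function'
```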
username_3: I am getting the same error, it seems the problem is with webpack configuration: https://stackoverflow.com/a/58990831/5922757
username_4: Same here. I wanted to use axios in Tampermonkey, but the require statement isn't working; it fails with the error message `Cannot set property 'axios' of undefined`
Status: Issue closed
username_6: Why is the issue closed when the bug is clearly reproducible and there is no proper fix even in the comments? |
14nrv/vue-form-json | 928461373 | Title: Compatibility with json schema
Question:
username_0: Hi,
Thank you for releasing vue-form-json. It looks really cool and I was considering rolling something like this myself so you've saved me a lot of time.
For my use case, I have input data in json-schema format. It's similar enough that I can probably just make it work, but I think improved compatibility with json schema would be very cool. One area that currently isn't compatible is the rules validation.
I might do some work on compatibility and first wanted to ask if you have any thoughts on this. Would you be open to accepting changes back into the project? Are there any pitfalls I should be aware of?
Answers:
username_1: Hey,
Thanks for your interest in this project.
Json-schema is already something that I have in mind as you can see [here](https://github.com/username_1/vue-form-json/projects/1#card-47296239). But, I would prefer, if possible, to have only one schema in the end, and not one for the form and another one for the UI. For the rules validation, this project uses [vee-validate](https://logaretm.github.io/vee-validate/guide/rules.html#rules).
Supporting json-schema as well could perhaps be a good first step, if there are no breaking changes and depending also on the size of the "mapping".
ps: you could be interested in a project like [vue-form-json-schema](https://github.com/jarvelov/vue-form-json-schema)
username_1: closed due to no answer, feel free to open a new issue if u want
Status: Issue closed
|
cake-contrib/Cake.Unity | 389704906 | Title: Update Cake.Core to 0.28
Question:
username_0: Current `Cake.Core` version being used is `0.17`. Use `0.28` for maximum compatibility with latest releases.
Answers:
username_0: Fixed in https://github.com/cake-contrib/Cake.Unity/commit/6dcf98677852a75728752b326e65e00dbac28b67
Status: Issue closed
|
hungpham2511/toppra | 1094243895 | Title: Error during building of the project, which uses toppra
Question:
username_0: During the build of a catkin ROS project, the following error occurs.
```
Errors << toppra_trajectory:make /home/gryogor/catkin_robobar_ws/logs/toppra_trajectory/build.make.013.log
In file included from /usr/include/x86_64-linux-gnu/c++/9/bits/c++allocator.h:33,
from /usr/include/c++/9/bits/allocator.h:46,
from /usr/include/c++/9/string:41,
from /usr/include/c++/9/bits/locale_classes.h:40,
from /usr/include/c++/9/bits/ios_base.h:41,
from /usr/include/c++/9/ios:42,
from /usr/include/c++/9/ostream:38,
from /usr/include/c++/9/iostream:39,
from /home/gryogor/catkin_robobar_ws/src/robobar/toppra_trajectory/src/toppra_trajectory.cpp:1:
/usr/include/c++/9/ext/new_allocator.h: In instantiation of ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = toppra::PiecewisePolyPath; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; _Tp = toppra::PiecewisePolyPath]’:
/usr/include/c++/9/bits/alloc_traits.h:482:2: required from ‘static void std::allocator_traits<std::allocator<_CharT> >::construct(std::allocator_traits<std::allocator<_CharT> >::allocator_type&, _Up*, _Args&& ...) [with _Up = toppra::PiecewisePolyPath; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; _Tp = toppra::PiecewisePolyPath; std::allocator_traits<std::allocator<_CharT> >::allocator_type = std::allocator<toppra::PiecewisePolyPath>]’
/usr/include/c++/9/bits/shared_ptr_base.h:548:39: required from ‘std::_Sp_counted_ptr_inplace<_Tp, _Alloc, _Lp>::_Sp_counted_ptr_inplace(_Alloc, _Args&& ...) [with _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; _Tp = toppra::PiecewisePolyPath; _Alloc = std::allocator<toppra::PiecewisePolyPath>; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:679:16: required from ‘std::__shared_count<_Lp>::__shared_count(_Tp*&, std::_Sp_alloc_shared_tag<_Alloc>, _Args&& ...) [with _Tp = toppra::PiecewisePolyPath; _Alloc = std::allocator<toppra::PiecewisePolyPath>; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr_base.h:1344:71: required from ‘std::__shared_ptr<_Tp, _Lp>::__shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<toppra::PiecewisePolyPath>; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; _Tp = toppra::PiecewisePolyPath; __gnu_cxx::_Lock_policy _Lp = __gnu_cxx::_S_atomic]’
/usr/include/c++/9/bits/shared_ptr.h:359:59: required from ‘std::shared_ptr<_Tp>::shared_ptr(std::_Sp_alloc_shared_tag<_Tp>, _Args&& ...) [with _Alloc = std::allocator<toppra::PiecewisePolyPath>; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}; _Tp = toppra::PiecewisePolyPath]’
/usr/include/c++/9/bits/shared_ptr.h:701:14: required from ‘std::shared_ptr<_Tp> std::allocate_shared(const _Alloc&, _Args&& ...) [with _Tp = toppra::PiecewisePolyPath; _Alloc = std::allocator<toppra::PiecewisePolyPath>; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}]’
/usr/include/c++/9/bits/shared_ptr.h:717:39: required from ‘std::shared_ptr<_Tp> std::make_shared(_Args&& ...) [with _Tp = toppra::PiecewisePolyPath; _Args = {std::vector<Eigen::Matrix<double, -1, 1, 0, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1, 0, -1, 1> > >&, std::vector<int, std::allocator<int> >&}]’
/home/gryogor/catkin_robobar_ws/src/robobar/toppra_trajectory/src/toppra_trajectory.cpp:134:91: required from here
/usr/include/c++/9/ext/new_allocator.h:145:20: error: no matching function for call to ‘toppra::PiecewisePolyPath::PiecewisePolyPath(std::vector<Eigen::Matrix<double, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1> > >&, std::vector<int>&)’
145 | noexcept(noexcept(::new((void *)__p)
| ^~~~~~~~~~~~~~~~~~
146 | _Up(std::forward<_Args>(__args)...)))
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
Here is the CMakeLists.txt file:
```
cmake_minimum_required(VERSION 3.0.2)
project(toppra_trajectory)
add_compile_options(-std=c++11)
find_package(catkin REQUIRED COMPONENTS
roscpp
moveit_core
moveit_ros_planning_interface
geometry_msgs
)
find_package(Boost REQUIRED COMPONENTS system)
find_package(Eigen3)
find_package(toppra)
include_directories(${EIGEN_INCLUDE_DIRS})
catkin_package(
)
include_directories(
${catkin_INCLUDE_DIRS}
)
[Truncated]
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
//toppra plan instead
}
}
return 0;
}
void targetCb(geometry_msgs::Pose msg)
{
target_pose = msg;
_recieved = true;
}
```
**Version**
Develop branch
Answers:
username_1: The error message is rather clear:
```
/usr/include/c++/9/ext/new_allocator.h:145:20: error: no matching function for call to ‘toppra::PiecewisePolyPath::PiecewisePolyPath(std::vector<Eigen::Matrix<double, -1, 1>, Eigen::aligned_allocator<Eigen::Matrix<double, -1, 1> > >&, std::vector<int>&)’
```
There is no constructor that accepts the arguments you provide. The bug is on your side.
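For readers hitting the same error, a minimal sketch of the kind of fix is below. The include path and the `(Matrices, std::vector<value_type>)` constructor are assumptions to verify against your toppra checkout; the point is only that the breakpoints must be floating-point values, not `std::vector<int>`, and the first argument must have one of the declared types.
```cpp
#include <memory>
#include <vector>
#include <toppra/geometric_path/piecewise_poly_path.hpp>  // assumed header path

// Hypothetical helper, not the reporter's code: convert the std::vector<int>
// that triggered "no matching function" into the floating-point breakpoints
// that the declared constructor expects.
std::shared_ptr<toppra::PiecewisePolyPath> makePath(
    const toppra::Matrices &coefficients, const std::vector<int> &intBreaks) {
  std::vector<toppra::value_type> breakpoints(intBreaks.begin(), intBreaks.end());
  return std::make_shared<toppra::PiecewisePolyPath>(coefficients, breakpoints);
}
```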
Status: Issue closed
|
quarkusio/quarkus | 642264886 | Title: Dockerfile.multistage not working (looks like an image issue)
Question:
username_0: **Describe the bug**
Can't build the Docker native image with the application; I can't even copy pom.xml to the build image (quay.io/quarkus/centos-quarkus-maven:19.3.1-java11). It looks like an issue with the image, because I can copy files into my own Docker images, even ones based on CentOS.
**Expected behavior**
Build application into docker container image
**Actual behavior**
Docker build fails in first stage
**To Reproduce**
Steps to reproduce the behavior:
1. Use a working Quarkus application on a docker enabled host
2. Move to the project directory (where pom.xml is located)
3. copy src/main/docker/Dockerfile.multistage .
4. docker build -t <imagename> .
5. Copy error message appears
```
$ docker build -t quay.io/username_0/rokostacio2 .
Sending build context to Docker daemon 3.584kB
Step 1/20 : FROM quay.io/quarkus/centos-quarkus-maven:19.3.1-java11 AS build
19.3.1-java11: Pulling from quarkus/centos-quarkus-maven
524b0c1e57f8: Pull complete
1beb771cdd17: Pull complete
Digest: sha256:f83fc08a9b5594df9681a60c156d41e0b225eece3889f4eb299b01684ab5696c
Status: Downloaded newer image for quay.io/quarkus/centos-quarkus-maven:19.3.1-java11
---> 0b3da81f4a33
Step 2/20 : COPY pom.xml /project/pom.xml
COPY failed: stat /var/lib/docker/tmp/docker-builder136156981/pom.xml: no such file or directory
$
```
**Configuration**
```Dockerfile
## Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/centos-quarkus-maven:19.3.1-java11 AS build
#FROM quay.io/quarkus/centos-quarkus-maven:latest AS build
COPY pom.xml /project/pom.xml
ADD pom.xml /project/pom.xml
RUN echo "hola" > /project/test
RUN ls /project
RUN cat /project/test
RUN mvn -f /project/pom.xml -B de.qaware.maven:go-offline-maven-plugin:1.2.5:resolve-dependencies
COPY src /project/src
USER root
RUN chown -R quarkus /project
RUN chown -R quarkus /project/*
USER quarkus
RUN mvn -f /project/pom.xml -Pnative clean package
## Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY --from=build /project/target/*-runner /work/application
# set up permissions for user `1001`
RUN chmod 775 /work /work/application \
[Truncated]
API version: 1.39
Go version: go1.11.6
Git commit: 4c52b90
Built: Tue, 03 Sep 2019 19:59:35 +0200
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.11.6
Git commit: 4c52b90
Built: Tue Sep 3 17:59:35 2019
OS/Arch: linux/amd64
Experimental: false
```
Please help
Answers:
username_1: Hi @username_0,
You should have a look at the **.dockerignore** file. The file has the following content by default.
```
*
!target/*-runner
!target/*-runner.jar
!target/lib/*
```
As you can see, all files are ignored (e.g. the pom file) except the build artifacts. Hence, you have to adjust this file according to your needs.
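For reference, one way to adjust it for the multistage build is to also whitelist the build inputs; this is a sketch, so adapt the patterns to your project layout:
```
*
!pom.xml
!src/**
!target/*-runner
!target/*-runner.jar
!target/lib/*
```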
username_0: Oh got it, now it's working, thanks a lot!
Status: Issue closed
|
Connor-Marble/Greatest-Senior-Project-Of-All-Time | 212915487 | Title: JSON Decode error processing HL2 Reviews
Question:
username_0: The web scraper encounters exceptions loading https://steamcommunity.com/app/70/homecontent/?userreviewsoffset=4450&p=446&browsefilter=mostrecent&appHubSubSection=10 in the scraper.
---
UnicodeDecodeError: 'utf8' codec can't decode byte 0x8d in position 0: invalid start byte
---
Probably due to a non-English character
Answers:
username_0: 56258f28f1b86dcdcdb1a8ce5d1b88ca32d9773d - patcheroo to fix the bugski
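For readers, a minimal sketch of the general idea behind such a fix (the variable name is hypothetical; this is not the actual commit): decode the fetched bytes tolerantly instead of assuming clean UTF-8.
```python
# raw_bytes: the response body fetched from the review page (assumed variable).
# errors="replace" substitutes U+FFFD for undecodable bytes such as 0x8d,
# so a single bad character no longer aborts processing of the page.
text = raw_bytes.decode("utf-8", errors="replace")
```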
Status: Issue closed
|
bkaradzic/bgfx | 141801578 | Title: problem with dx10 video card
Question:
username_0: Hi
my old laptop has a DX10-capable video card.
If I run the bgfx examples, they sometimes run fine but often fail with a frozen black window; this problem occurs only with the d3d11 renderer.
btw, when it runs fine, pressing F1 says (direct3d11 feature level 10.0), so maybe the problem happens in the creation of the device, possibly with a wrong feature level!
Answers:
username_1: Can you try with latest?
username_0: yes seems fixed , many thanks
username_0: oops, with latest, bgfx now always uses d3d9, even if I explicitly initialized with `bgfx::init(bgfx::RendererType::Direct3D11)`
Status: Issue closed
username_1: Yup, that's the fix. Your GPU would pick D3D11 feature level 9.2 or lower, not even 10.0.
username_0: I have another PC with a GeForce GT-610, which supports D3D11 feature level 11.0, but bgfx uses d3d9 instead

username_1: It doesn't even support feature level 10.0; otherwise it would initialize as D3D11. Try debugging it: remove the `if` here:
https://github.com/username_1/bgfx/blob/master/src/renderer_d3d11.cpp#L925
Then run example-00-helloworld and see what feature level your driver reports. In bgfx, feature levels FL 9_3 and 9_2 are unsupported on desktop, because D3D9 exists there.
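For anyone wanting to check this without touching bgfx, a small standalone probe can ask the D3D11 runtime which feature level the driver actually grants (plain Win32/D3D11 code, link against d3d11.lib; not bgfx code):
```cpp
#include <d3d11.h>
#include <cstdio>

int main() {
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
    };
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL achieved = D3D_FEATURE_LEVEL_9_1;
    // The runtime picks the first (highest) level the driver supports.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]), D3D11_SDK_VERSION,
        &device, &achieved, &context);
    if (SUCCEEDED(hr)) {
        std::printf("achieved feature level: 0x%04x\n", achieved);
        context->Release();
        device->Release();
    }
    return 0;
}
```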
username_0: you are right, but then why do they say it supports DirectX 11?!
http://www.geforce.com/hardware/desktop-gpus/geforce-gt-610/features
username_1: Well, technically it does support it, but those feature levels are deceptive. Anyhow, you should not care about it, because the feature set you can actually use is supported by D3D9 anyway.
Atlantiss/NetherwingBugtracker | 362888363 | Title: Blood elf treants respawn
Question:
username_0: [//]: # (Enclose links to things related to the bug using http://wowhead.com or any other TBC database.)
[//]: # (You can use screenshot ingame to visual the issue.)
[//]: # (Write your tickets according to the format:)
[//]: # ([Quest][Azuremyst Isle] Red Snapper - Very Tasty!)
[//]: # ([NPC] Magistrix Erona)
[//]: # ([Spell][Mage] Fireball)
[//]: # ([Npc][Drop] Ghostclaw Lynx)
[//]: # ([Web] Armory doesnt work)
**Description**:
Same thing as grell's in the night elf starting area
**Current behaviour**:
**Expected behaviour**:
**Server Revision**:
Status: Issue closed
Answers:
username_1: Thank you for your report.
This will be corrected ASAP. |
ministryofjustice/cloud-platform | 441239337 | Title: test support ticket automation
Question:
username_0: ## Service name
<Delivery Manager>
## Service environment
- [ ] Dev / Development
- [ ] Staging
- [ ] Prod / Production
- [ ] Other
## Impact on the service
Provide real impact description on the service mentioned. It can include any potential blockers for the product team.
<!--- Impact description -->
## Problem description
<Where will this end up?>
## Contact person
<!-- Name, slack handle, email address of the person we can contact for further information. --><issue_closed>
Status: Issue closed |
Fody/PropertyChanged | 804587998 | Title: On-PropName-Changed not called for dependent properties
Question:
username_0: I've avoided upgrading since 3.1.3 due to a behaviour change in 3.2.0 which introduces bugs in my code. It would be good to know if the change was intentional.
With 3.1.3 the code below prints:
_**Length changed to 3
Name changed to One**_
If I upgrade to 3.2.0:
**_Name changed to One_**
This seems to be because >= 3.2.0 has stopped calling On-PropName-Changed for dependent properties. It looks like a property-changed event is raised for the dependent property, but the associated On-PropertyName-Changed method is not called:
```
static void Main(string[] args)
{
Thing thing = new Thing();
thing.Name = "One";
}
[AddINotifyPropertyChangedInterface]
public class Thing
{
public string Name { get; set; }
public int Length => Name?.Length ?? 0;
private void OnNameChanged() => Console.WriteLine($"Name changed to {Name}");
private void OnLengthChanged() => Console.WriteLine($"Length changed to {Length}");
}
```
Answers:
username_1: And here's the setter of `Name`:
```C#
[CompilerGenerated]
set
{
if (string.Equals(this.<Name>k__BackingField, value, StringComparison.Ordinal))
{
return;
}
this.<Name>k__BackingField = value;
this.<>OnPropertyChanged(<>PropertyChangedEventArgs.Length);
this.OnNameChanged();
this.<>OnPropertyChanged(<>PropertyChangedEventArgs.Name);
}
```
username_0: So it sees there is no setter for Length, therefore no possible call to OnLengthChanged(), and incorrectly concludes that a call to OnLengthChanged() shouldn't be generated elsewhere, ie in the Name setter.
I see if I change the Length property to the following, it will work:
```C#
public int Length
{
    get => Name?.Length ?? 0;
    set { }
}
```
username_0: Looks like it was the following revision which introduced the problem. I'm going to have a look and see if there is anything I can do, but I've not looked at the code before so there's no telling how it will go.
Revision: 65ea5fe1eb26bcc4f679f08196e86fe4dba3c52f
Date: 08/12/2019 22:23:19
Status: Issue closed
|
SimonAlling/userscripter | 832247807 | Title: Conventional Commits should be enforced
Question:
username_0: #38 made me decide to start using Conventional Commits, but I know how it's gonna end up if it's not enforced in CI …
Answers:
username_1: I think the best way to "enforce" those commits is 2 steps checks:
1. Locally enforce and configure commit convention, allowing contributors to fail quickly (and avoid the need to rewrite a remote PR history). That can be done pretty easily with [husky](https://github.com/typicode/husky) and [commitlint](https://github.com/conventional-changelog/commitlint)
2. Add benefits to the fact of following this convention. The primary advantage of this convention, besides uniformization, is the ability to automate release versions. Using [semantic-release](https://github.com/semantic-release/semantic-release), we could add ContinousDeployment on top of the CI. Automatically releasing a new version of the package if needed when new commits are pushed into `master`.
This workflow both enforce (bad release version or not release at all if not followed) and draw maximum benefits from conventional commits convention.
I have set up this workflow on this repo [userscripter-boilerplate](https://github.com/username_1/userscripter-boilerplate) and could help set up the same one here if needed.
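For illustration, a minimal version of step 1 could look like this (the package names are the standard commitlint ones; the exact husky wiring depends on the husky version):
```js
// commitlint.config.js: pick up the Conventional Commits ruleset
module.exports = { extends: ['@commitlint/config-conventional'] };
```
The husky `commit-msg` hook then just runs `npx --no-install commitlint --edit "$1"`.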
username_0: Yes, especially Husky hooks are a given, and also a GitHub Actions check. Continuous deployment _may_ happen at some point in the future, but I think we should move in that direction with small steps.
Status: Issue closed
|
wovalle/fireorm | 741964337 | Title: Avoid storing reserved fields in document
Question:
username_0: Creating or updating documents adds the `id` field to the actual document.
There are certain reserved fields in Firestore that should not exist within the document.
The package `@nandorojo/swr-firestore` will warn you when it finds documents that contain one of the following reserved fields: `exists, id, hasPendingWrites`.
```
[use-collection] warning: Your document, <id> is using one of the following reserved fields: [exists, id, hasPendingWrites].
These fields are reserved.
Please remove them from your documents.
```
It still works, but it is annoying to see this warning when using this package.
Could we get an option to prevent storing `id` field within the document?
When creating documents normally, through `firestore.collection('name').add(...)`, the id field is not added to the document.
I think the same should apply for this package.
Answers:
username_1: 🤔 Do you know if there's any documentation that says that id should not be used in firestore documents?
username_0: No, I am only basing this on `@nandorojo/swr-firestore`.
I have forked the project and added the following lines to `serializeEntity`:
```
['id', 'exists', 'hasPendingWrites'].forEach(field => {
delete serializableObj[field];
});
```
Works fine for me!
I could make a merge request, if you wish.
username_0: It does make sense though, look at the fields that every document contains:

Wouldn't this mean that `ref, exists, metadata, id` are reserved?
username_1: Yeah, maintaining a list of reserved fields can be done.
I'm kind of interested in the `id` field usage; internally we depend on it to map collections. I'll try to investigate whether it should be used or not.
username_2: +1 We need a way to exclude id field to stay consistent with documents that were not created with fireorm. Currently we are monkey-patching toSerializableObject to exclude id which does not sound like a good idea.
username_1: As of now I don't have time to finish this issue @username_2 :(
From a design perspective, `id` is used to map collections, but maybe we don't need to store the `id` field. If that's true, I can review any PRs that remove this limitation
tulip-control/polytope | 116970439 | Title: setuptools 18.5 says invalid version
Question:
username_0: ```
~/.virtualenvs/dev/lib/python2.7/site-packages/setuptools/dist.py:294: UserWarning: The version specified ('0.1.2-dev-93d397e7686c65df543f1cd40556ad438a6f8e3b') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
"details." % self.metadata.version
```
Answers:
username_0: In [`polytope.version`](https://github.com/tulip-control/polytope/blob/4db168890c950890b2e0bf66af3fccc3fe1585f7/polytope/version.py#L62) the changes in [`github.com/tulip-control/tulip-control/a0d1c0948ff8021ba48998eb1c52dfb578e3689c`](https://github.com/tulip-control/tulip-control/commit/a0d1c0948ff8021ba48998eb1c52dfb578e3689c) are needed.
username_1: Is there any reason not to apply the patch directly?
We could also consider including a copy of the unit tests for version string generation. To avoid excessive duplication, I recommend that we manually track changes to version.py in the TuLiP repository (which should be rare). Version checking can also occur as part of integration tests.
Status: Issue closed
username_0: The only reason was to confirm that this was just not applied yet. Fixed in 4b3d1c3d28c344285557b4bbdff65aedb88a841a.
Admittedly, this duplication is characteristic of `setup.py`, due to its bootstrapping nature. Otherwise this baggage could have been moved to a common mini dependency, with its own tests at one place. It may be better to maintain a copy per project, to keep things local.
In the future, I plan to propose a top-down version definition (`setup.py` generating `version.py` automatically), as implemented in `dd`. That is somewhat simpler.
The remaining complex and duplicated part will be generation of a local version identifier that contains a `git` hash. In principle, this could be shifted to a common dependency, because it is mostly a development-time utility, when repeated installations should occur, in order for the local version identifier to be updated (unless the identifier is fetched dynamically upon package import, when installed as `dev`, though that is less desirable, for reasons to be discussed elsewhere). |
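For reference, a PEP 440-compliant shape for such an identifier replaces the `-` separators that triggered the warning with a development segment plus a `+` local version label (a sketch using the hash from the warning above, not the actual implementation):
```python
base = "0.1.2"
sha = "93d397e7686c65df543f1cd40556ad438a6f8e3b"
# Valid per PEP 440: release + dev segment + local label, e.g. 0.1.2.dev0+93d397e7686c
version = "{0}.dev0+{1}".format(base, sha[:12])
```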
BizjapanOrg/bizjapan-website | 583020308 | Title: Update the Our Values section on the home page to the 2020 edition
Question:
username_0: **Describe the bug**
The Our Values section on the home page is still the 2019 edition, so we want to replace it.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://bizjapan.org/ja/'
2. See Issue
**Expected behavior**
"我々にとっての”Global”とは..."のテキストを今年のテキストに置き換える
**Additional context**
In the first place, we need to confirm whether the current two values, "Global" and "Entrepreneurship", are still the right set
mimblewimble/grin-pm | 411063173 | Title: Agenda: Governance, Feb 26 2019
Question:
username_0: Solicit suggestions for agenda items for the Governance meeting to be held on Tuesday Feb 26 @ 15:00 [UTC](http://www.timebie.com/std/utc.php) in [Gitter main lobby](
https://gitter.im/grin_community/Lobby). Please comment to provide topics or suggestions.
# Proposed agenda
1. Agenda review
1. Action point follow ups from previous meeting
1. @username_0 Fund spending transparency report created?
1. [Security reviews / audits](https://github.com/mimblewimble/grin/issues/1609)
1. Promotion of other projects
1. [Risk mgmt brainstorm](https://github.com/mimblewimble/docs/wiki/Risk-Brainstorming)
1. Marketing/Media/PR relations
1. Other questions
Answers:
username_1: Additional agenda item: Discuss @username_2 proposal for recurring full-time dev funding.
username_2: Here is the proposal 😄
Grin has a long list of todos, and that's very good! Some exciting new features are coming from this list. And I'm sure it's fine to achieve all of these with the current development model: one full-time developer plus all the other voluntary open-source contributors (including 8 other core devs at this moment).
But if we have more donations to sponsor more full-time developers, why not? :-)
It will definitely **speed up** Grin development and make Grin more competitive compared to other cryptocurrency projects that are run by a company / foundation with a lot of full-time employees.
Unfortunately, the current Grin funding pool is still at a low level and can't afford many full-time positions, but it's a good time to start moving in this direction! (40 days already since the mainnet launch)
Thanks to all Grin community donors, and thanks to those who promised recurring donations, such as Sparkpool, Grinmint pool ([announcement](https://www.grin-forum.org/t/niagara-falls-into-grinmint-blockcypher-blog/3853)), Poloniex exchange ([announcement](https://medium.com/circle-blog/poloniex-welcomes-grin-commits-to-share-transaction-fees-for-1-year-d07bc92cc0f8)), and so on (sorry I don't write the full list of you here!). The current Grin funding pool should be able to sponsor **2 more full-time developers** at this time (but only with a limited "average" salary level, the same as the current full-time developer @yeastplume, i.e. 10k US$/month).
So, for this time, I propose to offer the great founder @ignopeverell and his older brother @username_1 a full-time job to continue developing Grin, so that no day job is needed for them to earn a living! :-)
I'm sure this "average" salary can't match the income of their current day jobs, given their great programming skill, knowledge and contributions, already well known in Grin development. But it is our sincere offer, inviting their full-time contributions to Grin, and I believe they will seriously consider it because of their true love for the Grin project :-)
And it's definitely true that we have a lot of talent in the Grin community, and many people deserve such an offer, including but not limited to all 6 remaining core devs (@tromp @hashmap @jaspervdm @quentinlesceller @username_0 and myself), some very nice new contributors we already see, and some interesting community OSS project developers. We can continue this kind of offering proposal (or any other possible sponsoring method) in the future, but obviously the limitation is the level of the Grin funding pool; we have to make sure the pool can afford it, because it would be very bad if the funding pool could not keep paying the full-time devs while they depend on this salary for their daily living.
That's all for my proposal; please feel free to correct me if anything in it is wrong.
(btw, to avoid a boring monthly payout from the Grin general funding pool, it could be better to define a 3-month interval for launching such payouts, if the funding pool has enough balance).
Status: Issue closed
|
Esri/developer-support | 186734967 | Title: how to use extra lib in typescript
Question:
username_0: there is a point clustering demo, https://developers.arcgis.com/javascript/3/jssamples/layers_point_clustering.html
and it needs an extra clusterlayer lib, which is loaded by
```js
var dojoConfig = {
  paths: {
    extras: location.pathname.replace(/\/[^/]+$/, "") + "/extras"
  }
};
```
but when I need to use it in Angular 2, how can I import this extra lib?
Answers:
username_1: any ideas @username_2 @jwasilgeo ?
username_2: @username_0 this can be closed since we answered it over in https://github.com/Esri/angular-esri-map/issues/311, right?
username_1: indeed it can. thx @username_2!
Status: Issue closed
username_0: yes, thanks @username_2 . the issue has been several times ago. thanks again. |
OpenSID/tema-natra | 1127147582 | Title: Bug/error: padding/margin on the public demo vs the premium demo
Question:
username_0: ### Describe the error
The padding / margin in the Natra theme differs?
https://berputar.opensid.or.id/

https://demo.opensid.or.id/

### Steps to reproduce the error
Access both demos, premium & public:
https://berputar.opensid.or.id/
https://demo.opensid.or.id/
### Expected result
If both use the same template, the result should also be the same
### Screenshots and error log
see above
### OpenSID release version
Public release
### OpenSID version
*
### Theme in use
Natra 4.5
### Additional information
_No response_
Answers:
username_1: The problem is in style.css ..
in the Premium branch, the padding is set to 0:
https://github.com/OpenSID/tema-natra/blob/695e6d5b005fbf873d7ff7f24ad0baf30000b64b/partials/bottom_content_left.php#L3
https://github.com/OpenSID/tema-natra/blob/85e3f2509e97226bda6d2c3bc76914cc67e0117c/partials/bottom_content_left.php#L2
https://github.com/OpenSID/tema-natra/blob/85e3f2509e97226bda6d2c3bc76914cc67e0117c/assets/css/style.css#L96
https://github.com/OpenSID/tema-natra/blob/695e6d5b005fbf873d7ff7f24ad0baf30000b64b/assets/css/style.css#L553-L560
username_2: Let's just close this issue; the theme changes in the public release follow the merged releases from premium.
username_2: It can be used by those who want to add their own changes, but the default from public OpenSID follows the merged premium release.
Status: Issue closed
|
Rocologo/MobHunting | 255953226 | Title: console ERROR 5.0.8 on spigot 1.12
Question:
username_0: 7.09 11:14:52 [Server] ERROR Could not pass event EntityDeathEvent to MobHunting v5.0.8
07.09 11:14:52 [Server] INFO org.bukkit.event.EventException: null
07.09 11:14:52 [Server] INFO at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:306) ~[spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62) ~[spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:499) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:484) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at org.bukkit.craftbukkit.v1_12_R1.event.CraftEventFactory.callEntityDeathEvent(CraftEventFactory.java:394) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.die(EntityLiving.java:1109) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityZombie.die(EntityZombie.java:429) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.damageEntity(EntityLiving.java:951) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityMonster.damageEntity(EntityMonster.java:44) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityZombie.damageEntity(EntityZombie.java:163) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityPigZombie.damageEntity(SourceFile:148) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.e(EntityLiving.java:1204) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.Block.fallOn(Block.java:533) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.Entity.a(Entity.java:1026) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.a(EntityLiving.java:190) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.Entity.move(Entity.java:808) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.a(EntityLiving.java:1806) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.n(EntityLiving.java:2117) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityInsentient.n(EntityInsentient.java:507) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityMonster.n(EntityMonster.java:24) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityZombie.n(EntityZombie.java:155) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityLiving.B_(EntityLiving.java:1939) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityInsentient.B_(EntityInsentient.java:246) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.EntityMonster.B_(EntityMonster.java:28) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.World.entityJoinedWorld(World.java:1640) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.World.h(World.java:1610) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:14:52 [Server] INFO at net.minecraft.server.v1_12_R1.World.tickEntities(World.java:1436) [spigot.jar:git-Spigot-596221b-86aa17f]
07.09 11:15:00 [Multicraft] Skipped 16 lines due to rate limit (30/s)
Answers:
username_1: I need the full error text to solve this issue: `07.09 11:15:00 [Multicraft] Skipped 16 lines due to rate limit (30/s)`
I need the last 16 lines from the server log.
Status: Issue closed
|
pcm-dpc/COVID-19 | 860689294 | Title: Invalid URLs in the Aree files
Question:
username_0: Issue type:
- [x] Missing or incorrect data:
- https://github.com/pcm-dpc/COVID-19/blob/master/aree/geojson/dpc-covid-19-aree-nuove-g-json.zip
- https://github.com/pcm-dpc/COVID-19/blob/master/aree/shp/dpc-covid-19-ita-aree-nuove.dbf
- Region:
## Summary
<!-- Describe the error here, with a reference to a specific dataset or document. -->
Some links in the documents above are no longer reachable.
In particular, those in the legLink field of the form: https://bit.ly/3kVMXNB
--
**Expected:**
<!-- What data or documentation is expected? -->
**Actual:** <!-- What data or documentation is actually present (or missing)? -->
Answers:
username_1: Thanks for the observation; it will be the occasion to carry out a general review of the links, making them as uniform as possible (e.g. replacing references to PDFs, or citations that point to a specific article rather than to the regulation itself, or shortened URLs). I'll skip over the individual reasons that made it necessary to use these different URL forms in the different "versions". Lately, the publication of the Ordinances by the Ministry of Health has been much more stable and timely, which has made homogeneous data entry easier.
In the meantime, the permalinks of the Gazzetta Ufficiale website can replace the references that don't respond.
The outcome of this review will also be made available in the next publication.
Status: Issue closed
|
dotnet/docs | 797482473 | Title: Managed code
Question:
username_0: I have a question about this line `C++ is the one exception to this rule, as it can also produce native, unmanaged binaries that run on Windows.`
Why `also`? Isn't C++ code always unmanaged?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6c021566-b09c-3a48-f14b-209526a428a1
* Version Independent ID: 0561b29e-2239-43bf-2a18-c5c0a2891fcd
* Content: [What is managed code?](https://docs.microsoft.com/en-us/dotnet/standard/managed-code)
* Content Source: [docs/standard/managed-code.md](https://github.com/dotnet/docs/blob/master/docs/standard/managed-code.md)
* Product: **dotnet-fundamentals**
* GitHub Login: @username_2
* Microsoft Alias: **username_2**
Answers:
username_1: Nope, C++ isn't always unmanaged. See <https://en.wikipedia.org/wiki/C%2B%2B/CLI>
Status: Issue closed
username_2: Here's another reference: https://docs.microsoft.com/en-us/cpp/dotnet/mixed-native-and-managed-assemblies?view=msvc-160. I'm going to close this issue as answered. |
oemof/tespy | 639896589 | Title: off-design calculation doesn't provide the desired output
Question:
username_0: Hey Francesco,
I have modeled a gas turbine as per the code below.
My aim is to get the maximum power for an off-design run at given ambient conditions (temperature and humidity); however, the power that I'm getting isn't the maximum (baseload) power at those ambient conditions. I've even tried to set the exhaust temperature and mass flow to baseload values, as per the data that I have
```python
from tespy.networks import network
from tespy.components import (sink, source, compressor, turbine, condenser,
combustion_chamber, pump, heat_exchanger, drum,
cycle_closer,splitter,merge,node)
from tespy.connections import connection, bus, ref
from CoolProp.CoolProp import HAPropsSI, PropsSI as CP
import numpy as np
import pandas as pd
import os
f_path = os.path.dirname(os.path.abspath(__file__))
# network
fluid_list = ['Ar', 'N2', 'O2', 'CO2', 'CH4', 'H2O','ethane','propane','butane']
nw = network(fluids=fluid_list, p_unit='bar', T_unit='C', h_unit='kJ / kg',m_unit='kg / s',
p_range=[1, 30], T_range=[10, 1500], h_range=[100, 46000])
gasFuel={'CO2': 0.00781, 'Ar': 0, 'N2': 0.03286,'O2': 0, 'H2O': 0, 'CH4': 0.8790,'ethane':0.0565,'propane':0.0159, 'butane':0.00793}
# calculate water mass fraction of humid air
def air_comp(t,rh,p):
dry_air_m=CP('M','air')
# CO2 H2O N2 Ar O2 Total(air)
dry_air_c = [0.033, 0, 78.113, 0.916, 20.938, 28.962540913]
sat_vap_pr= CP('P','T',273.15+t,'Q',1,'IF97::Water')/100000 # bar
h2o_pp=rh*sat_vap_pr /100# water partial pressure
Hratio = h2o_pp / (p - h2o_pp) * (18.0153 / dry_air_m) # 'Humidity Ratio Kgw/Kgdry air
FDair = (p - h2o_pp) / p #Fraction of dry air
h2o_molperc = (1 - FDair) * 100 # 'Water vapour mol. %'mol, w%
DAmass = 100 * (1 / (1 + Hratio)) # 'Dry air mass % 'm, dry air%
WVmass = 100 * (Hratio / (1 + Hratio)) # 'Water vapour mass %'m, w%
# CO2 H2O N2 Ar O2
#mol%/K-mol dry air
mol_f = [FDair*dry_air_c[0], h2o_molperc, FDair*dry_air_c[2],FDair*dry_air_c[3], FDair*dry_air_c[4]]
#Kg/K-mol dry air
mass_f = [mol_f[0]*44.0098, mol_f[1]*18.0153 ,mol_f[2]*28.0135 ,mol_f[3]*39.948 ,mol_f[4]*31.9988 ] #/ 100
air_tm=sum(mass_f) #total mass of air
return {'CO2': mass_f[0]/air_tm,'H2O': mass_f[1]/air_tm,'N2': mass_f[2]/air_tm,
'Ar': mass_f[3]/air_tm,'O2': mass_f[4]/air_tm ,'CH4': 0,'ethane':0, 'propane':0, 'butane':0}
def evap_cooler(t,rh,p):
wb_in=HAPropsSI('Twb','T',273.15+t,'P',p*1e5,'R',rh/100)
h1=HAPropsSI('H','T',273.15+t,'P',p*1e5,'RH',rh/100)
Tdb= t- (0.85* (t-(wb_in-273.15)))
rh_out=HAPropsSI('RH','T',273.15+Tdb,'P',p*1e5,'H',h1)
return Tdb,rh_out, air_comp(Tdb,rh_out,p)
# ===================================== components =================================================
[Truncated]
print('================== off-design ==================================')
print('================================================================')
print('')
t,h,air_2=evap_cooler(50,30,1)#air_comp(t,h,1)
gt_out.set_attr(m=449.4,p=None,T=557.1)#,T=None
c_in.set_attr(T=50, fluid=air_comp(50,30,1))
nw.solve(mode='offdesign',design_path=f_path + '\\design_point')
cols=['T','H','P','power', 'fuel', 'air','TIT','GT Exhuast']
results=pd.DataFrame(columns=cols)
L1=[
round(t,2),round(h,2),round(c_in.p.val,2),round(power.P.val/1000000,2),round(fuel.m.val,2),
round(c_in.m.val,2),round(gt_in.T.val,2),round(gt_out.T.val,2)
]
r1 = pd.DataFrame([L1], columns=cols)
results = results.append(r1,ignore_index = True)
```
Answers:
username_1: Hi @username_0,
what is the result you would expect exactly? I think I do not fully understand what you are trying to do here.
Here are some thoughts:
- The characteristics used are the default ones, which all have maximum efficiency at nominal mass flow (see https://tespy.readthedocs.io/en/master/api/tespy.data.html). Therefore the efficiency of the turbine and the compressor are lower in offdesign. If you have more measured data, you could reverse-engineer these into your own efficiency characteristic.
- In offdesign conditions, you set the flue gas mass flow, which is much lower than the fresh air mass flow in design conditions.
- The flue gas temperature after the gas turbine in higher in offdesign conditions, therefore you have higher flue gas losses.
Best regards
Francesco
username_0: Thank you, Francesco,
you are right, I need to use a customized eta_s_char curve to reach the desired output.
However, I am expecting a higher fuel mass flow. The LHV (= c_c.ti.val/fuel.m.val) is higher than the one I expect from the given fuel composition: 47.7 MJ/kg vs the expected 46 MJ/kg, which leads to a lower mass flow.
username_1: What fluid property data are you working with? TESPy has got these data (you can get them by running `print(c_c.fuels)` after the first simulation run):
```
{
'CH4': {'C': 1, 'H': 4, 'O': 0, 'LHV': 50006856.65843868},
'ethane': {'C': 2, 'H': 6, 'O': 0, 'LHV': 47481396.14700037},
'propane': {'C': 3, 'H': 8, 'O': 0, 'LHV': 46351542.39808851},
'butane': {'C': 4, 'H': 10, 'O': 0, 'LHV': 45739665.73873666}
}
```
The data are calculated using the enthalpy of formation: https://tespy.readthedocs.io/en/master/api/tespy.components.html#tespy.components.combustion.combustion_chamber.calc_lhv. Maybe there is an error in the input data; the values are hard-wired here: https://tespy.readthedocs.io/en/master/_modules/tespy/components/combustion.html#combustion_chamber.calc_lhv.
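As a quick plausibility check (plain Python, not TESPy API): a mass-weighted sum over the combustible fractions from the question, using the per-component LHV values printed above, reproduces roughly the observed 47.7 MJ/kg, which suggests the discrepancy comes from the per-component LHV data rather than from the mixing itself.
```python
# Per-component LHV in J/kg, as printed by c_c.fuels above.
lhv = {'CH4': 50006856.66, 'ethane': 47481396.15,
       'propane': 46351542.40, 'butane': 45739665.74}
# Combustible mass fractions of the gas from the question
# (the inert CO2 and N2 fractions contribute no heating value).
x = {'CH4': 0.8790, 'ethane': 0.0565, 'propane': 0.0159, 'butane': 0.00793}
mixture_lhv = sum(x[k] * lhv[k] for k in x)  # J per kg of total fuel gas
print(mixture_lhv / 1e6)  # ~47.7 MJ/kg
```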
username_1: Hi @username_0,
has this issue been resolved?
Best
Status: Issue closed
|
raiden-network/raiden | 534863514 | Title: [META] - Assertion Error in a scenario run
Question:
username_0: Currently, we assume that failures in assert tasks might be related:
- #5460 (in PFS 1 on 7th, 8th, and 9th)
- #5412 (in BF6 on 30th of November), might be already fixed by https://github.com/raiden-network/scenario-player/pull/441
We try to collect all the logs here first.
Answers:
username_0: there were no assertion errors other than the pfs1 one over the weekend. #5412 might be fixed by raiden-network/scenario-player#441
So I'm closing this for now; we'll reopen it if we see it again.
Status: Issue closed
username_0: Currently, we assume that failures in assert tasks might be related:
- ~~#5460 (in PFS 1 on 7th, 8th, and 9th)~~
- #5412 (in BF6 on 30th of November and many before that), might be already fixed by https://github.com/raiden-network/scenario-player/pull/441
We try to collect all the logs here first.
username_0: There we go again with a bf1 run today:
Failed [in line 150 ](https://github.com/raiden-network/raiden/blob/a351135932aa472d93ed5b1d486a3b45ab87c3a1/raiden/tests/scenarios/ci/sp1/bf1_basic_functionality.yaml#L150 )with
``Expected sum value \"2020000000000000000\" for channel fields \"balance\". Actual value: \"2019000000000000000\``
- [SP logs](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/scenario-player-run_bf1_basic_functionality_2019-12-10T12%3A38%3A02.log)
- [Node0](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/node_9_000/)
- [Node1](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/node_9_001/)
- [Node2](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/node_9_002/)
- [Node3](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/node_9_003/)
- [Node4](http://172.16.17.32:8000/scenario-player/scenarios/bf1_basic_functionality/node_9_004/)
@palango
Status: Issue closed
|
dart-lang/sdk | 84536750 | Title: dart2js: Almost all throws are expressions
Question:
username_0: We know that some JS engines have a hard time optimizing the output from dart2js if we don't explicitly throw. However, it looks like after throw-as-an-expression was implemented, we always generate an expression without `throw`.
That is, we generate:
```js
if ($length !== this.get$length(this))
  $.throwExpression($.ConcurrentModificationError$(this));
```
But we should have generated:
```js
if ($length !== this.get$length(this))
  throw $.throwExpression($.ConcurrentModificationError$(this));
```
Answers:
username_1: A large app has 1395 calls to `H.throwExpression(e)` and 1571 occurrences of `throw H.wrapExpression(e)`.
Often the `throwExpression` variant comes from inlining.
I actually think we should use `throwExpression` more often - it is smaller (minified). Most code is not performance critical, and the explicit `throw` is probably only useful for code that is executed enough to be JIT-ed by JavaScript - <10% of real apps but 100% of benchmarks. |
vuetifyjs/vuetify | 821380272 | Title: [Feature Request] Clickable time label for Day view
Question:
username_0: ### Problem to solve
In Day view, can you make the time label clickable, like the event `@click:time`, so that we can select the time between the intervals correctly from the time label?
### Proposed solution
Make the time label in Day view clickable, like the event `@click:time`
<!-- generated by vuetify-issue-helper. DO NOT REMOVE --> |
GoogleChrome/lighthouse | 242192181 | Title: compile-devtools currently failing
Question:
username_0: This test is failing on Travis and locally.
I've tracked down the root cause and filed an issue with the Chromium git infra team here: https://bugs.chromium.org/p/chromium/issues/detail?id=741105
Answers:
username_1: I believe this has been resolved (though the crbug hasn't been)
Status: Issue closed
username_0: True. I will leave it open as I imagine it's very possible their mirroring infra can lag behind again sometime. |
mdo/github-buttons | 687182639 | Title: Star count no longer showing on one of my GitHub buttons
Question:
username_0: Thank you for creating and open sourcing github-buttons! We've used them on our website for 2+ years now.
We recently noticed that one of our buttons no longer displays the star count; however, the second button is working just fine.
Here are links to the buttons:
working: https://ghbtns.com/github-btn.html?user=async-labs&repo=saas&type=star&count=true&size=large
not working: https://ghbtns.com/github-btn.html?user=builderbook&repo=builderbook&type=star&count=true&size=large
Do you know what the problem could be?
Answers:
username_1: Both seem to work fine.
My guess is you hit the GitHub API rate limit, so for ~1 hour you won't see any buttons. We should probably document this somewhere.
username_1: Closing for the aforementioned reason. If you want, you can submit a PR to document this behavior. IIRC the unauthenticated rate limit is 60 requests per hour.
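For anyone debugging this, the remaining quota can be checked with a standard GitHub REST endpoint (this call itself does not count against the limit); the response shows the remaining requests and the reset time:
```
curl https://api.github.com/rate_limit
```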
Status: Issue closed
username_0: @username_1 thanks for the reply and information.
Are you able to see the star count for this button? https://ghbtns.com/github-btn.html?user=builderbook&repo=builderbook&type=star&count=true&size=large
I still don't see anything, and it has not been showing for a few weeks now. I'm not sure that we've just hit a limit and need to wait an hour. To me, it seems there is a different problem.
username_1: I was seeing and still seeing the star count here:

Not sure why you are having this issue apart from rate limiting which you shouldn't really hit. How about trying in an incognito window and/or on a different network?
username_0: @username_1 Thanks for sharing the screenshot and suggestions. I'm still not seeing the number on my computer/phone in incognito or on a different network. I'll have to keep investigating. |
CleanTalk/phpbb3.1-3.2-antispam | 284269502 | Title: Error when adding the access key in the extension settings.
Question:
username_0: Hi!
When adding the access key in the extension settings, an exception is thrown:
Uncaught Error: Class 'cleantalk\antispam\acp\CleantalkHelper' not found in /var/www/forum/ext/cleantalk/antispam/acp/main_module.php:70
Answers:
username_1: Good day.
Fixed in the current version.
Status: Issue closed
|
ocaml/opam-repository | 252964441 | Title: conf-* packages shouldn't be upgraded
Question:
username_0: The default policy in OPAM is to maximize the versions of installed packages. That makes sense for the packages distributed with OPAM, but it is wrong when we are dealing with the conf packages that represent (a) the system configuration and (b) the set of available system packages. When a user installs the `conf-llvm.XXX` version, which is not the latest, this is the user's way of telling us: "Guys, I want this XXX version, not the latest, maybe because I don't have the latest on my system".
As an example, consider the following interaction, we will use bap in the example, that supports llvm 3.4,3.8, and 4.0. However, we are installing on a system that has only 3.8.
```
opam install conf-llvm.3.8 # OK
opam install bap # fails since OPAM is upgrading conf-llvm.3.8 to conf-llvm.4.0
```
OK, let's constraint it
```
opam install conf-llvm.3.8
opam install conf-llvm.3.8 bap
```
This works, but suppose a user is trying to install another package, totally independent of bap, e.g., `merlin`:
```
opam install merlin
The following actions will be performed:
↗ upgrade conf-llvm 3.8 to 4.0.0
∗ install conf-which 1 [required by biniou]
∗ install easy-format 1.3.0 [required by yojson]
↻ recompile bap-llvm master [uses conf-llvm]
∗ install biniou 1.2.0 [required by yojson]
↻ recompile bap-x86 master [uses bap-llvm]
↻ recompile bap-arm master [uses bap-llvm]
∗ install yojson 1.4.0 [required by merlin]
↻ recompile bap-primus-x86 master [uses bap-x86]
∗ install merlin 3.0.2
↻ recompile bap master [uses bap-llvm]
```
This will break bap since it again will try to upgrade conf-llvm and recompile the whole bap.
So, what I'm thinking of, is it possible:
1) treat conf-packages differently, by applying some non-default criteria to them
2) maybe add an option for packages to specify criteria in the opam files.
P.S. of course I'm well aware, that I can specify my own criteria using the command line and environment variables, but my concern are regular users (students), who will try to install bap in their system and to whom both OCaml and OPAM are totally new ideas, and such behavior will piss them off.
P.P.S. (this is the reason why we had conf-bap-llvm)
Answers:
username_0: But it looks like this is no longer true: just using opam install without criteria tries to upgrade the whole world, while if I specify the criteria as given in the documentation, I get sane behavior:
```
$ opam install merlin --criteria='-removed,-changed,-notuptodate'
The following actions will be performed:
∗ install easy-format 1.3.0 [required by yojson]
∗ install biniou 1.0.5 [required by yojson]
∗ install yojson 1.1.3 [required by merlin]
∗ install merlin 3.0.2
```
So it looks like this documentation page [1] is no longer accurate. The actual install criteria appear to be:
```
-count(removed),-notuptodate(request),-sum(request,version-lag),-count(down),-notuptodate(changed),-count(changed),-notuptodate(solution),-sum(solution,version-lag)
```
where the upgrade is
```
-count(down),-count(removed),-notuptodate(solution),-sum(solution,version-lag),-count(new)
```
[1]: http://opam.ocaml.org/doc/Specifying_Solver_Preferences.html
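For completeness, the same preferences can also be applied globally by exporting them, so regular users don't have to pass `--criteria` on every call (a sketch; `OPAMCRITERIA` is read by opam when a compatible external solver is in use):
```
export OPAMCRITERIA='-removed,-changed,-notuptodate'
opam install bap
```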
username_1: Note that [opam v2](https://opam.ocaml.org/doc/2.0/Manual.html) has the ability to specify that a package is a `conf` package.
username_0: Yep, so the next step is to apply a special criterion to them, something like "do not touch".
username_1: Maybe this should be reported there.
username_0: You mean in opam itself? Yeah, this is a good point.
username_0: So, if we do not confuse the system package versions with the opam conf package versions, as described [here][1], then there is no need for special treatment of the conf-* packages. If everyone agrees with the `conf-<system-package>-<version_number>` convention, we can move forward and close this issue.
[1]: https://github.com/ocaml/opam-repository/issues/9853#issuecomment-325007734
username_2: Is "pinning" not suitable for you ? ```opam pin conf-llvm $version```
username_1: Doesn't that become a bit unwieldy ?
username_0: I'm trying to make the installation of my package as smooth as possible: it should be one command. If installing a package requires the user to perform extra commands, it basically means we are missing some capabilities in the package manager. It is the package manager's responsibility to resolve all the dependencies.
P.S. Of course I know, how to force opam to behave as I like, by just setting the right criteria to the ASP solver. So I'm not looking for a workaround, I'm pointing that there is some problem with the idea of conf-* packages.
username_0: That's what I'm trying to figure out.
username_1: I really think this should be discussed on `opam`'s bug tracker; @AltGr doesn't watch this repo, and v2 is where these things could be solved correctly, along with the depext overhaul: https://github.com/ocaml/opam/issues/2919
username_0: @username_1, I agree, would you like me to reopen this issue on opam's bug tracker?
username_1: Yes, I think it's better to do so. This tracker is for issues about the OCaml opam repository itself. If you think your issue can be solved by changing how the OCaml opam repository is structured, then keep it open; otherwise, I'd suggest you close this and move the discussion to the `opam` issue tracker.
Status: Issue closed
username_0: moved to ocaml/opam#3035 |
GoogleCloudPlatform/k8s-config-connector | 628052592 | Title: Cleaning up objects no longer declared?
Question:
username_0: I have a set of manifests in a path, and I'm running `kubectl apply -f config-connector/` when I make changes to them.
When I rename one of the objects, or remove one, am I right in thinking the old objects are not deleted? And that I have to actively run `kubectl delete ...`?
Is there a way of clearing up objects no longer declared, or otherwise syncing the GCP objects to a set of manifests?
Thanks!
Answers:
username_1: Hi @username_0, you're correct in thinking that renaming one of the objects just creates a new object and the old one is not deleted. You will have to actively run `kubectl delete ...` to delete the old object.
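For example, after renaming a resource in the manifests, the old object has to be removed by hand; the kind and name below are hypothetical placeholders:
```
kubectl delete sqlinstance old-instance-name
```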
Config Connector does not have any clean-up functionality that checks for objects no longer declared, but it sounds like you might be interested in [Config Sync](https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/overview). With Config Sync, you can continue to manage your manifests in that one folder (but in a Git repository) and it will make sure it's in sync with the resources in that folder.
username_0: Thanks @username_1 .
Config Sync looks heavier duty and I think requires Anthos; I'll keep an eye on how it evolves. FWIW the new `config-connector -project` tool + diffs could probably fill most of this need. |
hyb1996-guest/AutoJsIssueReport | 231776416 | Title: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Question:
username_0: Description:
---
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at com.stardust.view.ViewBinder.invokeMethod(ViewBinder.java:126)
at com.stardust.view.ViewBinder.access$000(ViewBinder.java:16)
at com.stardust.view.ViewBinder$5.onClick(ViewBinder.java:117)
at android.view.View.performClick(View.java:5639)
at android.view.View$PerformClick.run(View.java:22446)
at android.os.Handler.handleCallback(Handler.java:751)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6165)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:895)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:785)
Caused by: java.lang.reflect.InvocationTargetException
at java.lang.reflect.Method.invoke(Native Method)
at com.stardust.view.ViewBinder.invokeMethod(ViewBinder.java:124)
... 11 more
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.stardust.scriptdroid.ui.main.script_list.MyScriptListFragment.newScriptFile()' on a null object reference
at com.stardust.scriptdroid.ui.main.MainActivity.createScriptFile(MainActivity.java:213)
... 13 more
Device info:
---
<table>
<tr><td>App version</td><td>2.0.11 Beta2</td></tr>
<tr><td>App version code</td><td>133</td></tr>
<tr><td>Android build version</td><td>7.5.25</td></tr>
<tr><td>Android release version</td><td>7.1.1</td></tr>
<tr><td>Android SDK version</td><td>25</td></tr>
<tr><td>Android build ID</td><td>NMF26X</td></tr>
<tr><td>Device brand</td><td>Xiaomi</td></tr>
<tr><td>Device manufacturer</td><td>Xiaomi</td></tr>
<tr><td>Device name</td><td>sagit</td></tr>
<tr><td>Device model</td><td>MI 6</td></tr>
<tr><td>Device product name</td><td>sagit</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
apollographql/apollo-client | 490821727 | Title: Introspective FragmentMatcher error in documentation.
Question:
username_0: **Intended outcome:**
Implement IntrospectionFragmentMatcher in Next.js during init. Expected everything to run as usual.
**Actual outcome:**
Next.js correctly renders information, but server log throws error:
`Error while running `getDataFromTree` TypeError: Cannot read property 'query' of null`
**How to reproduce the issue:**
Connect to a graphQL server utilizing fragments and attempt to implement fragment matcher, following instructions provided by documentation. https://www.apollographql.com/docs/react/advanced/fragments/#fragments-on-unions-and-interfaces
This points toward the SSR query failing, while the browser succeeds.
**Versions**
System:
OS: macOS High Sierra 10.13.6
Binaries:
Node: 12.3.1 - /usr/local/bin/node
Yarn: 1.16.0 - /usr/local/bin/yarn
npm: 6.9.0 - /usr/local/bin/npm
Browsers:
Chrome: 76.0.3809.132
Safari: 12.1.2
npmPackages:
apollo-boost: ^0.1.16 => 0.1.28
react-apollo: ^2.5.8 => 2.5.8
npmGlobalPackages:
apollo: 2.18.0
**Solution**
I was able to solve this error by adding the fragment matcher not only to my InMemoryCache but directly to the Apollo client as well. Ex.:
```js
function create(initialState) {
  return new ApolloClient({
    connectToDevTools: process.browser,
    ssrMode: !process.browser, // Disables forceFetch on the server (so queries are only run once)
    link: new HttpLink({
      uri: publicRuntimeConfig.GRAPH_API_URL, // Server URL
      // Additional fetch() options like `credentials` or `headers`
      credentials: "same-origin",
      // Use fetch() polyfill on the server
      fetch: !process.browser && fetch,
    }),
    cache: new InMemoryCache({ fragmentMatcher }).restore(initialState || {}),
    fragmentMatcher
  });
}
```
Answers:
username_0: I think this should be noted in the documentation to help others. I spent about an hour trying to figure out how to rid myself of this error.
username_0: This solution was incorrect. Still looking for a solution but I will create another more relevant post in the future.
Status: Issue closed
|
bahamas10/vsv | 925419963 | Title: Show status of per-user service
Question:
username_0: It could be neat if this could include all services inside the user's home:
https://docs.voidlinux.org/config/services/user-services.html
Answers:
username_1: This should be possible as-is. You can specify `vsv -u` to have `vsv` look in `~/runit/service`. You can also use `vsv -d ~/service` to manually specify the dir.
Also, `vsv` honors the `SVDIR` variable, so as long as that is set and exported `vsv` should do what you want. |
absinthe-graphql/absinthe | 252876193 | Title: import_field does not work with subscriptions
Question:
username_0: When using `import_fields` with subscriptions, I ended up with the following error:
```
** (exit) an exception was raised:
** (Protocol.UndefinedError) protocol Enumerable not implemented for nil. This protocol is implemented for: DBConnection.PrepareStream, DBConnection.Stream, Date.Range, Ecto.Adapters.SQL.Stream, File.Stream, Function, GenEvent.Stream, HashDict, HashSet, IO.Stream, List, Map, MapSet, Postgrex.Stream, Range, Stream, Timex.Interval
(elixir) /private/tmp/elixir-20170801-32483-1rf8an1/elixir-1.5.1/lib/elixir/lib/enum.ex:1: Enumerable.impl_for!/1
(elixir) /private/tmp/elixir-20170801-32483-1rf8an1/elixir-1.5.1/lib/elixir/lib/enum.ex:116: Enumerable.reduce/3
(elixir) lib/enum.ex:1832: Enum.reduce/3
(stdlib) lists.erl:1263: :lists.foldl/3
(absinthe) lib/absinthe/subscription.ex:79: Absinthe.Subscription.get_subscription_fields/1
(absinthe) lib/absinthe/subscription.ex:65: Absinthe.Subscription.publish/3
(absinthe) lib/absinthe/subscription.ex:138: Absinthe.Subscription.call/2
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:194: Absinthe.Phase.Document.Execution.Resolution.reduce_resolution/1
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:164: Absinthe.Phase.Document.Execution.Resolution.do_resolve_field/4
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:150: Absinthe.Phase.Document.Execution.Resolution.do_resolve_fields/6
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:87: Absinthe.Phase.Document.Execution.Resolution.walk_result/5
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:57: Absinthe.Phase.Document.Execution.Resolution.perform_resolution/3
(absinthe) lib/absinthe/phase/document/execution/resolution.ex:25: Absinthe.Phase.Document.Execution.Resolution.resolve_current/3
(absinthe) lib/absinthe/pipeline.ex:248: Absinthe.Pipeline.run_phase/3
(absinthe_plug) lib/absinthe/plug.ex:274: Absinthe.Plug.run_query/3
(absinthe_plug) lib/absinthe/plug.ex:202: Absinthe.Plug.execute/2
(absinthe_plug) lib/absinthe/plug.ex:155: Absinthe.Plug.call/2
(phoenix) lib/phoenix/router/route.ex:161: Phoenix.Router.Route.forward/4
(phoenix) lib/phoenix/router.ex:278: Phoenix.Router.__call__/1
```
This is closely related with #374
Answers:
username_1: Should be fixed in master too.
Status: Issue closed
|
mittagessen/kraken | 356209477 | Title: manifest.txt is not automatically created when having a large number of files [limitation of ls]
Question:
username_0: @username_2
When using `ketos linegen input.txt -f fontname`, by default a `manifest.txt` file is created, which contains a list of all the .png files.
The problem is that when your `input.txt` contains 70,000+ lines, the `manifest.txt` doesn't get created.
My guess is that it's a limitation of the `ls` or `cat` command used to list the files (most likely the shell's "argument list too long" limit, `ARG_MAX`, being hit when the glob expands to that many filenames).
The solution is to use the `find` command to create the `manifest.txt` list:
```
find . -type f -name '*.png' > manifest.txt
```
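If a deterministic ordering of the manifest matters, the list could optionally be piped through `sort` (just a refinement of the same idea):
```
find . -type f -name '*.png' | sort > manifest.txt
```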
Answers:
username_0: @username_2 please implement the `find` approach, and then this topic can be closed.
username_1: Isn't the `manifest.txt` only created when extracting from transcription HTML (`ketos extract`)?
username_2: The linegen command doesn't produce a manifest. We can change that, but with `ketos train` not using it, I don't really see the point. |
Kozea/WeasyPrint | 21377710 | Title: Table headers and footers are not displayed when nothing else is on the page
Question:
username_0: When I print the following webpage, the first table (with header) is not printed unless I remove the <thead> tags. I got the error when I was trying to print irregular tables (some of the tables were connected to look like one table).
http://langust.x10.mx/test_wrongly_printed_header.html
Answers:
username_1: What a funny bug!
- working: `<html>text<table style="border: 2px solid"><thead><tr><th>Header`
- not working `<html><table style="border: 2px solid"><thead><tr><th>Header`
This is caused by this line (and the same ones in the next blocks):
https://github.com/Kozea/WeasyPrint/blob/f0da0374bf0d54deb802393f6c636adf7541f9b9/weasyprint/layout/tables.py#L281
This test asserts that if there's nothing on the page, that's because nothing can be drawn, so the header/footer doesn't fit. But when there's nothing else to draw on the page, that's normal :).
username_1: It's fixed in branch `fix_129` (surprise!); we need tests before merging.
Status: Issue closed
|
singularityhub/singularityhub.github.io | 585614445 | Title: el7 builders possible?
Question:
username_0: ### Links
- container collection:
https://singularity-hub.org/collections/4175
- GitHub repository or recipe file:
https://github.com/username_0/cvmfs2go
### Version of Singularity
singularity version 3.5.3-1.1.el7
### Behavior when Building Locally
no problem, successful
### Error on Singularity Hub
Nothing happens: No containers found in the pending queue.
### What do you think is going on?
Well, I use a yum bootstrap, so I need a CentOS 7 host.
Is that possible?
Thank you!
Answers:
username_1: Please try a Docker bootstrap with a CentOS image; the build host is not CentOS.
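A minimal sketch of that change, for illustration only (the def file below is not your project's actual recipe):
```
# build from a Docker CentOS 7 base so the build host's distro no longer matters
cat > centos7.def <<'EOF'
Bootstrap: docker
From: centos:7
EOF
sudo singularity build cvmfs2go.sif centos7.def
```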
username_0: ok, thanks, this is a no-go for me.
Status: Issue closed
|
myDevicesIoT/Cayenne-API-Sample-App | 275659247 | Title: Issue in doc
Question:
username_0: The URL is incorrect and should be changed to `https://auth.mydevices.com/oauth/token`
`curl -X POST -H 'Content-Type: application/json' 'https://auth.mydevices.com/oauth/token' -d '{"grant_type": "password", "email": "YOUR EMAIL", "password": "<PASSWORD>"}'`<issue_closed>
Status: Issue closed |
sindresorhus/globby | 269951166 | Title: Expose `node-glob` event api
Question:
username_0: https://github.com/isaacs/node-glob#events
Use case: I want to check why my globs are running slowly.
Answers:
username_0: I found that I can examine the cache afterwards. A "progress" feature, which is what I am really looking for, is an issue for `node-glob`.
Status: Issue closed
|
Shadows-of-Fire/Apotheosis | 1021790379 | Title: Lighting issue with Enchantment Library
Question:
username_0: `Minecraft v1.16.5`
`Forge v36.2.4`
`Apotheosis v4.8.1`
When the enchantment library is facing east/west, it produces an odd shadow on its side, which is more easily visible if multiple are placed together. It doesn't have this problem when facing north/south.

Answers:
username_1: what in the world
username_0: ¯\_(ツ)_/¯
Forgot to mention this was on a server, but probably unrelated. |
freecodecampnorman/freecodecampnorman.github.io | 503049119 | Title: Add a spinning loading icon when loading leaderboard data
Question:
username_0: On the leaderboard page, it currently shows "Loading..." while waiting for leaderboard data to load. For a better user experience, we should have a loading spinner that clearly indicates data is loading.
A loading icon using CSS animations would be preferred. Feel free to use spinner icons from places like https://loading.io/css/ or make your own!
Answers:
username_1: I can also try this out :)
username_2: May I give it a shot, sir?
Status: Issue closed
|
2DegreesInvesting/r2dii.data | 647604994 | Title: NA - Namibia ISO code
Question:
username_0: NA is required when a value is missing; however, this is also the ISO code for the country Namibia.
Perhaps null or blank can be used instead of NA, or Namibia's ISO code can be changed and a note made of it?
Not sure of the exact issue this will cause but raising it preemptively here.
Thanks
---
Answers:
username_1: ``` r
library(tidyverse, warn.conflicts = FALSE)
library(r2dii.data)
# In R, NA and "NA" are two different things:
# * NA is a special representation of missing data
# * "NA" is a literal string that represents the letter "N" followd by "A"
identical(NA, "NA")
#> [1] FALSE
typeof(NA)
#> [1] "logical"
typeof("NA")
#> [1] "character"
# Upper case NA versus 'NA' may confuse some people and some functions; it
# could be interpreted as a string or as a missing value if the quotation
# marks are used incorrectly -- this is subtle and potentially problematic.
read_csv("x, y \n 'NA', NA")
#> # A tibble: 1 x 2
#> x y
#> <chr> <lgl>
#> 1 'NA' NA
# But lowercase na versus 'na' is okay; it's always interpreted as a string
read_csv("x, y \n' na', na")
#> # A tibble: 1 x 2
#> x y
#> <chr> <chr>
#> 1 ' na' na
# As far as I know, all datasets in the r2dii package-ecosystem use the literal
# lowercase 'na' -- they should be fine.
r2dii.data::iso_codes %>% filter(country == "namibia")
#> # A tibble: 1 x 2
#> country country_iso
#> <chr> <chr>
#> 1 namibia na
r2dii.data::region_isos %>% filter(isos == "na")
#> # A tibble: 6 x 3
#> region isos source
#> <chr> <chr> <chr>
#> 1 africa na weo_2019
#> 2 sub saharan africa na weo_2019
#> 3 global na weo_2019
#> 4 developing economies na weo_2019
#> 5 non oecd na weo_2019
#> 6 non opec na weo_2019
r2dii.data::region_isos_demo %>% filter(isos == "na")
#> # A tibble: 1 x 3
#> region isos source
#> <chr> <chr> <chr>
#> 1 global na demo_2020
```
<sup>Created on 2020-06-29 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup>
username_0: Thanks - makes sense to move to .data
username_2: Can this be closed? Is this an issue?
username_1: I argued there is no issue and heard no response, so I assume we can close.
Status: Issue closed
|
thag8keepr/ctf | 368741296 | Title: Brutal Oldskull write_up question
Question:
username_0: Could you explain how you managed to create a new jump location (loc) with IDA?
I didn't manage to.
Answers:
username_1: Hi,
Are you asking how to patch the binary with IDA? I'm using the freeware version so AFAIK you cannot easily (if at all) save the patched binary. So I'm doing it in two steps:
1) Open the hex view in IDA and then manually calculate and enter the correct opcodes (press F2 in hex view, enter data, press F2 to close edit mode), and then verify by looking in the text view.
2) To do the actual patching on the binary, I simply used a hex editor to save the changes.
I think you can use both radare and binaryninja to patch directly (even at assembly level), but since I'm just starting out learning assembly, I find it a great learning experience to write opcodes directly, since I constantly have to consult the x86/ARM reference documentation.
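For the hex-editor step, plain shell tools also work; a sketch (the offset and bytes below are purely illustrative, and note that the file offset is not the virtual address IDA shows):
```
# overwrite 2 bytes at file offset 0x191f with x86 NOPs (0x90)
printf '\x90\x90' | dd of=target.bin bs=1 seek=$((0x191f)) conv=notrunc
```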
username_0: Hm, no. I'm able (more or less) to patch opcodes, but in your write-up I saw that you added a jump location (loc_40191f) that wasn't there before, and I didn't manage to do the same...
username_1: I actually did not add the jump location myself. When editing in hex view and entering correct opcode, IDA automagically inserts a jump location according to the jmp instruction I created in the hex view. |
metatron-app/metatron-discovery | 501232267 | Title: Treemap chart selection filter error
Question:
username_0: **Describe the bug**
An error occurs when using the selection filter by clicking on the treemap chart.
**To Reproduce**
Steps to reproduce the behavior:
1. Create TreeMap Chart and Bar Chart
2. Click on TreeMap part and apply Selection Filter
3. See error

**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
Answers:
username_1: I think it's a bug related to the ECharts library, so solving it will need more time. |
k8ssandra/k8ssandra | 817661062 | Title: Missing MCAC and MAAC in 1.0 blog post
Question:
username_0: ## Bug Report
**Describe the bug**
MCAC and MAAC are missing in the 1.0 release blog post
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://k8ssandra.io/blog/2021/02/26/k8ssandra-1.0-stable-release-and-whats-next/
2. Look for Metrics Collector for Apache Cassandra
3. See error
**Expected behavior**
We should list MAAC and MCAC as components and include their version numbers<issue_closed>
Status: Issue closed |
weseek/growi | 589834237 | Title: On pages with many attachments, the table-of-contents links on the right side do not work
Question:
username_0: Environment
------------
### Host
| item | version |
| --- | --- |
|OS | (growi.cloud) |
|GROWI |3.6.10|
|node.js | 12.16.1 |
|npm |6.13.4|
|Using Docker|(growi.cloud)|
|Using [growi-docker-compose][growi-docker-compose]|(growi.cloud)|
[growi-docker-compose]: https://github.com/weseek/growi-docker-compose
*(Accessing https://{GROWI_HOST}/admin helps you to fill in above versions)*
### Client
| item | version |
| --- | --- |
|OS | Mac OS 10.14|
|browser |Chrome 80.0.3987.149 |
How to reproduce?
---------------------------
On pages with many attachments, when the table of contents extends over the `Attachments` div, the links above it can no longer be clicked.
<img width="1035" alt="Screen_Shot_2020-03-30_at_1_49_04" src="https://user-images.githubusercontent.com/1632478/77855047-f7ceb980-7228-11ea-9551-ee0345e0df74.png">
What is the expected result?
-------------------------------------------
Links should be clickable.
Answers:
username_1: Thank you for using GROWI.
We have confirmed that the issue above is reproducible.
We will fix it, so please wait a little while.
Status: Issue closed
username_2: @username_0 The fix has been merged.
Please wait for the next release.
username_0: Thank you for taking care of this! |
JelmerT/cc2538-bsl | 1065118591 | Title: Verify CRC32 results in error
Question:
username_0: I am using your script to flash a **2652RB-F1** chip on [slae.sh's CC2652RB development stick](https://slae.sh). At the end of programming, the CRC32 validation reports "verified match: (...)". But after doing a manual verify via `sudo python cc2538-bsl.py -v "hex-File"`, the result is "ERROR: NO CRC32 match (...)". See the attached screenshot.
**Am I doing something wrong, or did I discover a bug?**
I'm running the script on a Raspberry Pi 3 Model B+ with Raspberry Pi OS (Debian-Buster based).

Best regards, Christian
Answers:
username_1: Looks like something is going wrong indeed: the calculation of the CRC seems to be correct the first time, but the second time the chip is giving you a different value.
What happens when you do a `read` and compare the read hex file with the original?
Could it be that your code altered its own contents between the flash and the second verification?
username_0: I do not know; it's an image from @Koenkk, or rather from [slae.sh](https://slae.sh/projects/cc2652/#from-me).
Here are the results of the `read` followed by a CRC32 check, which produces different checksums :-( Maybe the image works as designed despite the CRC32 difference? I will try to check this out with @Koenkk.
**Read from CC2652RB-Stick followed by a CRC32-check:**
```
$ sudo cc2538-bsl/cc2538-bsl.py -r -l 360448 -p /dev/ttyUSB0 -b 115200 znp_CC2652RB_20200715-from-stick.bin
Opening port /dev/ttyUSB0, baud 115200
Connecting to target...
CC1350 PG2.0 (7x7mm): 352KB Flash, 20KB SRAM, CCFG.BL_CONFIG at 0x00057FD8
Primary IEEE Address: fdf8:f53e:61e4::18
Reading 360448 bytes starting at address 0x0
Read done
$ ls -l
total 1108
drwxr-xr-x 4 pi pi 4096 Nov 27 18:48 cc2538-bsl
-rw-r--r-- 1 pi pi 404980 Jul 8 19:52 CC2652RB_coordinator_20210708.hex
-rw-r--r-- 1 pi pi 360448 Jul 22 2020 znp_CC2652RB_20200715.bin
-rw-r--r-- 1 root root 360448 Nov 29 18:03 znp_CC2652RB_20200715-from-stick.bin
$ rhash --simple --crc32 *.bin
47570b28 znp_CC2652RB_20200715.bin
747f9c98 znp_CC2652RB_20200715-from-stick.bin
```
username_0: I finally succeeded in bringing the application on the stick to cooperate with zigbee2mqtt by @Koenkk :-) So the flashing must have been correct, otherwise it would not run, right?
The question of why the checksums are different is still unsolved, but as long as the software runs...
So, thanks a lot for your help. This issue can be closed from my point of view.
username_1: Seeing the same size but different hashes on the read operation, it does look like the code altered its own flash in some way. That would be my best guess, since the CRC does match on the write operation.
You could do a diff between both and check which sections are different, and look at what those sections mean.
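For example, one way to compare them with standard tools (assuming `xxd` or similar is available):
```
# byte-by-byte list of differing offsets and values
cmp -l znp_CC2652RB_20200715.bin znp_CC2652RB_20200715-from-stick.bin | head
# or a readable diff of the hexdumps
diff <(xxd znp_CC2652RB_20200715.bin) <(xxd znp_CC2652RB_20200715-from-stick.bin) | head
```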
I'll close this one for now since it looks like the script isn't malfunctioning.
Anyone feel free to re-open if you discover otherwise.
Thanks for the report @username_0
Status: Issue closed
|
typescript-eslint/typescript-eslint | 1091104495 | Title: [no-undef] `this` in `typeof this` is reported as undefined incorrectly
Question:
username_0: - [x] I have tried restarting my IDE and the issue persists.
- [x] I have updated to the latest version of the packages.
- [x] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed.
**Repro**
<!--
Include a ***minimal*** reproduction case.
The more irrelevant code/config you give, the harder it is for us to investigate.
Please consider creating an isolated reproduction repo to make it easy for the volunteer maintainers debug your issue.
-->
```JSON
{
"rules": {
"no-undef": []
}
}
```
```TS
// Add code below to valid cases of `packages/eslint-plugin/tests/eslint-rules/no-undef.test.ts`
// and run `yarn test no-undef`
const obj = {
foo: '',
bar() {
let foo: typeof this.foo;
},
};
```
**Expected Result**
No error should be reported.
**Actual Result**
`this` in `typeof this` is reported as undefined.
```
● no-undef › valid ›
const obj = {
foo: '',
bar() {
let foo: typeof this.foo;
},
};
assert.strictEqual(received, expected)
Expected value to strictly be equal to:
0
Received:
1
Message:
Should have no errors but had 1: [
[Truncated]
**Additional Info**
<!--
Did eslint throw an exception?
Please run your lint again with the --debug flag, and dump the output below.
i.e. eslint --ext ".ts,.js" src --debug
-->
**Versions**
Tested on latest commit, a9eb0b9e.
| package | version |
| ---------------------------------- | ------- |
| `@typescript-eslint/eslint-plugin` | `X.Y.Z` |
| `@typescript-eslint/parser` | `X.Y.Z` |
| `TypeScript` | `X.Y.Z` |
| `ESLint` | `X.Y.Z` |
| `node` | `X.Y.Z` |
Answers:
username_1: [repro](https://typescript-eslint.io/play/#ts=4.5.2&sourceType=module&code=PQKgBApgzgNglgOwC5gEQIPYFoCuCAmEAZqgFxoQBOlGlqYIwAsAFADGGCUKGARgFZgAvGADerMGCIYM5AORyANBLC8AhpQAUASjErJMCCmmywSAJ4AHCBiJmAFnCgA6EwG4VAX2UtPboA&rules=N4XyA&tsConfig=N4XyA) |
actions/starter-workflows | 1094969300 | Title: Mills
Question:
username_0: 
```
name: GitHub Actions Demo
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v2
      - run: echo "💡 The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "🍏 This job's status is ${{ job.status }}."
```
Answers:
username_1: It seems you want to add a template. Kindly raise a PR for the same |
prometheus/snmp_exporter | 217382306 | Title: [generator] failing to build generator - "too many errors"
Question:
username_0: Running on Debian Jessie (prerequisites installed as per the README) with Go v1.8 (also tried v1.7.4), `snmp_exporter` builds with no issues, but I can't get the `generator` to build.
```
$ go build
# github.com/prometheus/snmp_exporter/generator
./main.go:16: undefined: Node
./main.go:77: undefined: initSNMP
./main.go:80: undefined: getMIBTree
./main.go:89: undefined: Node
./tree.go:13: undefined: Node
./tree.go:21: undefined: Node
./tree.go:23: undefined: Node
./tree.go:24: undefined: Node
./tree.go:30: undefined: Node
./tree.go:124: undefined: Node
./tree.go:30: too many errors
```
Answers:
username_1: Those are all from net_snmp.go; are you missing that file?
username_0: seems to be there
```
osboxes@osboxes:~/work/src/github.com/prometheus/snmp_exporter/generator$ ls -l
total 48
-rw-r--r-- 1 osboxes osboxes 1629 Mar 27 22:18 config.go
-rw-r--r-- 1 osboxes osboxes 4961 Mar 27 22:18 generator.yml
-rw-r--r-- 1 osboxes osboxes 2470 Mar 27 22:18 main.go
-rw-r--r-- 1 osboxes osboxes 3364 Mar 27 22:18 net_snmp.go
-rw-r--r-- 1 osboxes osboxes 3119 Mar 27 22:18 README.md
-rw-r--r-- 1 osboxes osboxes 5048 Mar 27 22:18 tree.go
-rw-r--r-- 1 osboxes osboxes 13517 Mar 27 22:18 tree_test.go
```
username_1: That's also the file with the cgo stuff, so something likely is going wrong there. Try debugging with flags like `-x`.
Which arch are you on?
username_0: OK, so CGO pointed me in the right direction: I forgot I had `export CGO_ENABLED=0` in my `.bashrc`; I changed it back to 1 and all is good.
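In case anyone else hits this, the fix boils down to something like:
```
# cgo must be enabled to compile the generator's net_snmp.go bindings
export CGO_ENABLED=1
go build
```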
Thanks!
Status: Issue closed
|
kbuffington/Georgia | 892844878 | Title: Add ability to manually cycle through artwork and/or pause slideshow
Question:
username_0: I love the way this skin displays artwork, but there are times when I'd like to see a specific piece (e.g. the booklet page about the current track) and I have to wait for it to cycle around, and then it's only up for 30s. The cycling is great for albums with just a few bits of artwork (e.g. front, back, tray and cd) but some of my albums have 20+ pages of booklet scans and it kind of breaks down at that point.
Another possible solution would be some method of setting a preferred artwork for a track, e.g. the relevant booklet page. Then it could cycle between the main artwork and that page, rather than all pages? Or linger longer on the main artwork and that page than the rest?
Answers:
username_1: Not gonna come in v2.0.3 which should be releasing soon, but I have plans to allow you to manually cycle through artwork. |
weapp-socketio/weapp.socket.io | 603640247 | Title: socket.close() throws an error when closing; it's a problem with the library
Question:
username_0: 
thirdScriptError
setTimeout expects a function as first argument but got undefined.;at pages/chat/index onUnload function;at setTimeout callback function
TypeError: setTimeout expects a function as first argument but got undefined.
Answers:
username_0: It seems there was an earlier version that didn't throw this error
[weapp.socket.io的副本.docx](https://github.com/weapp-socketio/weapp.socket.io/files/4507403/weapp.socket.io.docx)
username_1: I ran into this problem too and was completely puzzled by it
username_0: The copy I uploaded in my second comment doesn't have this problem; it's only 120 KB, just 20 KB larger
username_1: Solved it, thanks!
username_0: I downloaded it, but it doesn't seem to open...
username_1: Just change the file extension to .js
username_2: I downloaded the copy above and replaced index.js, but it still throws an error. Why? Is there a solution? Thanks
username_2: Cannot read property 'close' of undefined
username_3: Thanks for the feedback; the issue has been fixed.
Status: Issue closed
username_0: I just tried it and there's still an error |
iterative/dvc | 407897094 | Title: Add option to `dvc run` retroactively
Question:
username_0: When working with dvc, I find this workflow to be very common:
1. I run some command I got from somewhere in the internet, whose outputs I don't know in advance. The simplest example would be a tarball with some structure I only find out about after I extract it, or more interestingly, a complex program which I will actually have to analyze to find out what it creates and where.
2. After running it, I would like to record that run as a dvc stage.
3. I get bummed about running `dvc run` since it forces me to actually re-run the command, instead of reusing the already existing outputs.
`--no-exec` doesn't work, since it records a blank file with no hashes, and I don't want to `dvc repro` if I can avoid it. I suppose I could manually add all the output files and dependencies and hashes, but that would suck.
I would suggest a `--retro` flag to `dvc run`, which will do everything `dvc run` does EXCEPT running the command itself.
If this is not a duplicate, I can get started on a PR.
Answers:
username_1: Hi @username_0 !
This seems like a case for the `dvc commit` command from https://github.com/iterative/dvc/issues/919 . So the workflow would be:
```
$ tar -xvf a.tar.gz
$ dvc run --no-exec -f untar.dvc -o out1 -o out2 -d a.tar.gz tar -xvf a.tar.gz
$ dvc commit untar.dvc # this will fill out all the checksums and will put outputs to cache
```
I'm working on `dvc commit` right now, it is 90% ready, I'll hopefully publish a change for it until the end of the week, maybe earlier.
Thanks,
Ruslan
username_0: Great, thanks! I would still suggest the retro run as a possible UX improvement, if there's demand for it.
username_1: @username_0 So `--retro` would do both `--no-exec` and `commit` in a single command, right? If so, I agree, it would be useful. How about naming it `--commit` instead?
username_1: Also, another useful feature in such scenario would be https://github.com/iterative/dvc/issues/931 :)
username_0: @username_1 Right. Regarding the naming conventions, you're the boss, I just suggested the name that reflected my workflow.
I think I would be way too much of a coward to ever use #931 🤣
username_1: @username_0 We are trying hard to listen to our users, so I'm interested in hearing your opinion :slightly_smiling_face:
I don't think `--auto-outs` would be that scary, if handled with care :slightly_smiling_face: But I agree, it is a bit too implicit.
username_0: Hi @username_1 , just wanted to ask if there's any update? Can I help somehow to release this soon?
I'll explain why this is urgent to me. I'm trying to use DVC for a large-ish NLP task, where each preprocessing stage takes me a whole night to run. Often, when I come in the morning, I find out that I made a slight mistake in the command and e.g. some unrelated files ended up in my cached output folder. I do NOT want to re-run `dvc run` and wait a whole night, and discover that I made another mistake. This is important, because if I was just a normal user, this would be the point where I give up on using DVC completely. I would like to just manually fix the mistake, modify the command in the dvc file, and run `dvc commit` to take a snapshot of the current (fixed) state of my file system, so I can push it and not have to recalculate it later. This may be problematic from a pure reproducibility perspective, but that's less immediately urgent than carrying on with my work.
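Concretely, the recovery flow I have in mind would be something like this (file and stage names are illustrative):
```
rm out/accidental_file   # fix the mistake by hand
dvc commit stage.dvc     # re-record checksums/cache without re-running the command
dvc push                 # upload the corrected outputs
```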
username_1: Hi @username_0 !
Sorry, forgot to link it https://github.com/iterative/dvc/pull/1601 . There is only one question left to solve https://github.com/iterative/dvc/pull/1601#discussion_r255646527 . If it is very urgent, you could try it out right now. Dev version should be handled with care, of course :)
Thanks,
Ruslan
username_1: Hi @username_0 !
#1601 is merged into the upstream. We will release new version with it soon.
Thanks for the feedback! :slightly_smiling_face:
Status: Issue closed
username_0: @username_1 Thanks buddy, I will try it out ASAP
username_1: @username_0 FYI: 0.28.1 is out :slightly_smiling_face: |