repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
BioinformaticsFMRP/TCGAbiolinks | 334135225 | Title: Error with GDCquery function when querying different transcriptomic datasets
Question:
username_0: Dear authors,
for the last two days I have been trying to access and download different transcriptomic TCGA datasets with the TCGAbiolinks R package. However, with queries like the following I get an error:
```
read.hg38 <- GDCquery(project = "TCGA-READ", data.category = "Transcriptome Profiling",
data.type = "Gene Expression Quantification",workflow.type = "HTSeq - Counts",
experimental.strategy = "RNA-Seq",sample.type = "Primary solid Tumor")
Error in value[[3L]](cond) :
GDC server down, try to use this package later
```
The same error consistently appears even if I try different projects or options. My current session info is included below:
```
sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
locale:
[1] LC_COLLATE=Greek_Greece.1253 LC_CTYPE=Greek_Greece.1253
[3] LC_MONETARY=Greek_Greece.1253 LC_NUMERIC=C
[5] LC_TIME=Greek_Greece.1253
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets
[8] methods base
other attached packages:
[1] edgeR_3.20.9 limma_3.34.9
[3] DESeq2_1.18.1 SummarizedExperiment_1.8.1
[5] DelayedArray_0.4.1 matrixStats_0.53.1
[7] Biobase_2.38.0 GenomicRanges_1.30.3
[9] GenomeInfoDb_1.14.0 IRanges_2.12.0
[11] S4Vectors_0.16.0 BiocGenerics_0.24.0
[13] TCGAbiolinks_2.9.0
loaded via a namespace (and not attached):
[1] backports_1.1.2 circlize_0.4.4
[3] Hmisc_4.1-1 aroma.light_3.8.0
[5] plyr_1.8.4 selectr_0.4-1
[7] ConsensusClusterPlus_1.42.0 lazyeval_0.2.1
[9] splines_3.4.4 BiocParallel_1.12.0
[11] ggplot2_2.2.1 sva_3.26.0
[13] digest_0.6.15 htmltools_0.3.6
[15] foreach_1.4.4 checkmate_1.8.5
[17] magrittr_1.5 memoise_1.1.0
[19] cluster_2.0.7-1 doParallel_1.0.11
[21] ComplexHeatmap_1.17.1 Biostrings_2.46.0
[23] readr_1.1.1 annotate_1.56.2
[25] R.utils_2.6.0 prettyunits_1.0.2
[27] colorspace_1.3-2 blob_1.1.1
[29] rvest_0.3.2 ggrepel_0.8.0
[31] dplyr_0.7.5 crayon_1.3.4
[33] RCurl_1.95-4.10 jsonlite_1.5
[35] genefilter_1.60.0 bindr_0.1.1
[Truncated]
[95] geneplotter_1.56.0 stringi_1.2.3
[97] GenomicFeatures_1.30.3 lattice_0.20-35
[99] Matrix_1.2-14 psych_1.8.4
[101] KMsurv_0.1-5 pillar_1.2.3
[103] GlobalOptions_0.1.0 data.table_1.11.4
[105] bitops_1.0-6 rtracklayer_1.38.3
[107] R6_2.2.2 latticeExtra_0.6-28
[109] hwriter_1.3.2 RMySQL_0.10.15
[111] ShortRead_1.36.1 gridExtra_2.3
[113] codetools_0.2-15 assertthat_0.2.0
[115] rjson_0.2.20 GenomicAlignments_1.14.2
[117] Rsamtools_1.30.0 mnormt_1.5-5
[119] GenomeInfoDbData_1.0.0 mgcv_1.8-23
[121] hms_0.4.2 grid_3.4.4
[123] rpart_4.1-13 tidyr_0.8.1
[125] ggpubr_0.1.6 base64enc_0.1-3
```
Thank you in advance,
Efstathios
Answers:
username_1: Could you update the package? We have version 2.9.2, which fixes the API links after the GDC update.
`devtools::install_github("BioinformaticsFMRP/TCGAbiolinks")`
username_0: @username_1 thank you for your comments and updated notes !!
username_0:
```
R version 3.4.4 (2018-03-15)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
locale:
[1] LC_COLLATE=Greek_Greece.1253 LC_CTYPE=Greek_Greece.1253
[3] LC_MONETARY=Greek_Greece.1253 LC_NUMERIC=C
[5] LC_TIME=Greek_Greece.1253
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets
[8] methods base
other attached packages:
[1] DESeq2_1.18.1 SummarizedExperiment_1.8.1
[3] DelayedArray_0.4.1 matrixStats_0.53.1
[5] Biobase_2.38.0 GenomicRanges_1.30.3
[7] GenomeInfoDb_1.14.0 IRanges_2.12.0
[9] S4Vectors_0.16.0 BiocGenerics_0.24.0
[11] TCGAbiolinks_2.9.2
loaded via a namespace (and not attached):
[1] backports_1.1.2 circlize_0.4.4
[3] Hmisc_4.1-1 aroma.light_3.8.0
[5] plyr_1.8.4 selectr_0.4-1
[7] ConsensusClusterPlus_1.42.0 lazyeval_0.2.1
[9] splines_3.4.4 BiocParallel_1.12.0
[11] ggplot2_2.2.1 sva_3.26.0
[13] digest_0.6.15 htmltools_0.3.6
[15] foreach_1.4.4 checkmate_1.8.5
[17] magrittr_1.5 memoise_1.1.0
[19] cluster_2.0.7-1 doParallel_1.0.11
[21] limma_3.34.9 ComplexHeatmap_1.17.1
[23] Biostrings_2.46.0 readr_1.1.1
[25] annotate_1.56.2 R.utils_2.6.0
[27] prettyunits_1.0.2 colorspace_1.3-2
[29] blob_1.1.1 rvest_0.3.2
[31] ggrepel_0.8.0 dplyr_0.7.5
[33] crayon_1.3.4 RCurl_1.95-4.10
[35] jsonlite_1.5 genefilter_1.60.0
[37] bindr_0.1.1 survival_2.42-3
[39] zoo_1.8-2 iterators_1.0.9
[41] glue_1.2.0 survminer_0.4.2
[43] gtable_0.2.0 zlibbioc_1.24.0
[45] XVector_0.18.0 GetoptLong_0.1.7
[47] shape_1.4.4 scales_0.5.0
[49] DESeq_1.30.0 DBI_1.0.0
[51] edgeR_3.20.9 ggthemes_3.5.0
[53] Rcpp_0.12.17 htmlTable_1.12
[55] xtable_1.8-2 progress_1.2.0
[57] cmprsk_2.2-7 foreign_0.8-70
[59] bit_1.1-14 matlab_1.0.2
[61] km.ci_0.5-2 Formula_1.2-3
[63] htmlwidgets_1.2 httr_1.3.1
[65] RColorBrewer_1.1-2 acepack_1.4.1
[67] pkgconfig_2.0.1 XML_3.98-1.11
[69] R.methodsS3_1.7.1 nnet_7.3-12
[71] locfit_1.5-9.1 tidyselect_0.2.4
[Truncated]
[93] compiler_3.4.4 rstudioapi_0.7
[95] curl_3.2 tibble_1.4.2
[97] geneplotter_1.56.0 stringi_1.2.3
[99] GenomicFeatures_1.30.3 lattice_0.20-35
[101] Matrix_1.2-14 psych_1.8.4
[103] KMsurv_0.1-5 pillar_1.2.3
[105] GlobalOptions_0.1.0 data.table_1.11.4
[107] bitops_1.0-6 rtracklayer_1.38.3
[109] R6_2.2.2 latticeExtra_0.6-28
[111] hwriter_1.3.2 RMySQL_0.10.15
[113] ShortRead_1.36.1 gridExtra_2.3
[115] codetools_0.2-15 assertthat_0.2.0
[117] rjson_0.2.20 GenomicAlignments_1.14.2
[119] Rsamtools_1.30.0 mnormt_1.5-5
[121] GenomeInfoDbData_1.0.0 mgcv_1.8-23
[123] hms_0.4.2 grid_3.4.4
[125] rpart_4.1-13 tidyr_0.8.1
[127] ggpubr_0.1.6 base64enc_0.1-3
```
username_1: Hello,
you forgot to create a vector for sample.type (`c()`)
```
read.hg38 <- GDCquery(project = "TCGA-READ",
data.category = "Transcriptome Profiling",
data.type = "Gene Expression Quantification",
workflow.type = "HTSeq - Counts",
experimental.strategy = "RNA-Seq",
sample.type = c("Primary solid Tumor","Solid Tissue Normal"))
```
username_0: @username_1 thank you for your response, and I would like to apologize for my false-alarm question and typo!!
username_2: Hello @username_1 , I am facing a problem while running GDCquery in RStudio using the 'TCGAbiolinks' package: it states that 'print.header could not function', while other GDC functions such as GDCdownload and GDCprepare run successfully. What could be the problem? I am using the https://bioconductor.org/packages/devel/bioc/vignettes/TCGAbiolinks/inst/doc/analysis.html pipeline.
I have installed the TCGAbiolinks, dplyr and DT packages. Are there any other dependencies for TCGAbiolinks that you would suggest?
my sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.1 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
locale:
[1] LC_CTYPE=en_IN.UTF-8 LC_NUMERIC=en_IN.UTF-8 LC_TIME=en_IN.UTF-8 LC_COLLATE=en_IN.UTF-8
[5] LC_MONETARY=en_IN.UTF-8 LC_MESSAGES=en_IN.UTF-8 LC_PAPER=en_IN.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_IN.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats4 parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] curl_3.2 httr_1.3.1 DT_0.4 dplyr_0.7.6
[5] TCGAbiolinks_2.8.4 Rsubread_1.30.6 BiocInstaller_1.30.0 systemPipeRdata_1.8.0
[9] systemPipeR_1.14.0 ShortRead_1.38.0 GenomicAlignments_1.16.0 SummarizedExperiment_1.10.1
[13] DelayedArray_0.6.6 matrixStats_0.54.0 Biobase_2.40.0 BiocParallel_1.14.2
[17] Rsamtools_1.32.3 Biostrings_2.48.0 XVector_0.20.0 GenomicRanges_1.32.6
[21] GenomeInfoDb_1.16.0 IRanges_2.14.11 S4Vectors_0.18.3 BiocGenerics_0.26.0
loaded via a namespace (and not attached):
[1] backports_1.1.2 GOstats_2.46.0 circlize_0.4.4 aroma.light_3.10.0
[5] plyr_1.8.4 selectr_0.4-1 ConsensusClusterPlus_1.44.0 lazyeval_0.2.1
[9] GSEABase_1.42.0 splines_3.5.1 BatchJobs_1.7 ggplot2_3.0.0
[13] sva_3.28.0 digest_0.6.17 htmltools_0.3.6 foreach_1.4.4
[17] GO.db_3.6.0 magrittr_1.5 checkmate_1.8.5 memoise_1.1.0
[21] BBmisc_1.11 cluster_2.0.7-1 doParallel_1.0.11 limma_3.36.3
[25] ComplexHeatmap_1.18.1 readr_1.1.1 annotate_1.58.0 R.utils_2.7.0
[29] prettyunits_1.0.2 colorspace_1.3-2 blob_1.1.1 rvest_0.3.2
[33] ggrepel_0.8.0 crayon_1.3.4 RCurl_1.95-4.11 jsonlite_1.5
[37] graph_1.58.0 genefilter_1.62.0 bindr_0.1.1 zoo_1.8-4
[41] brew_1.0-6 survival_2.42-6 sendmailR_1.2-1 iterators_1.0.10
[45] glue_1.3.0 survminer_0.4.3 gtable_0.2.0 zlibbioc_1.26.0
[49] GetoptLong_0.1.7 Rgraphviz_2.24.0 shape_1.4.4 scales_1.0.0
[53] DESeq_1.32.0 pheatmap_1.0.10 DBI_1.0.0 edgeR_3.22.3
[57] ggthemes_4.0.1 Rcpp_0.12.18 viridisLite_0.3.0 xtable_1.8-3
[61] progress_1.2.0 cmprsk_2.2-7 bit_1.1-14 matlab_1.0.2
[65] km.ci_0.5-2 AnnotationForge_1.22.2 htmlwidgets_1.2 RColorBrewer_1.1-2
[69] pkgconfig_2.0.2 XML_3.98-1.16 R.methodsS3_1.7.1 locfit_1.5-9.1
[73] tidyselect_0.2.4 labeling_0.3 rlang_0.2.2 AnnotationDbi_1.42.1
[77] munsell_0.5.0 tools_3.5.1 downloader_0.4 RSQLite_2.1.1
[81] broom_0.5.0 stringr_1.3.1 knitr_1.20 bit64_0.9-7
[85] survMisc_0.5.5 purrr_0.2.5 bindrcpp_0.2.2 EDASeq_2.14.1
[89] RBGL_1.56.0 nlme_3.1-137 R.oo_1.22.0 xml2_1.2.0
[93] biomaRt_2.36.1 compiler_3.5.1 rstudioapi_0.7 tibble_1.4.2
[97] geneplotter_1.58.0 stringi_1.2.4 GenomicFeatures_1.32.2 lattice_0.20-35
[101] Matrix_1.2-14 KMsurv_0.1-5 pillar_1.3.0 BiocManager_1.30.2
[105] GlobalOptions_0.1.0 data.table_1.11.6 bitops_1.0-6 rtracklayer_1.40.6
[109] R6_2.2.2 latticeExtra_0.6-28 hwriter_1.3.2 gridExtra_2.3
[113] codetools_0.2-15 assertthat_0.2.0 Category_2.46.0 rjson_0.2.20
[117] GenomeInfoDbData_1.1.0 mgcv_1.8-24 hms_0.4.2 grid_3.5.1
[121] tidyr_0.8.1 ggpubr_0.1.8 base64enc_0.1-3
username_3: R version 3.5.2 (2018-12-20)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
locale:
[1] zh_CN.UTF-8/zh_CN.UTF-8/zh_CN.UTF-8/C/zh_CN.UTF-8/zh_CN.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] TCGAbiolinks_2.11.5
loaded via a namespace (and not attached):
[1] backports_1.1.3 circlize_0.4.6
[3] AnnotationHub_2.14.5 aroma.light_3.12.0
[5] plyr_1.8.4 selectr_0.4-1
[7] ConsensusClusterPlus_1.46.0 lazyeval_0.2.2
[9] splines_3.5.2 BiocParallel_1.16.6
[11] usethis_1.4.0 GenomeInfoDb_1.18.2
[13] ggplot2_3.1.0 sva_3.30.1
[15] digest_0.6.18 foreach_1.4.4
[17] htmltools_0.3.6 magrittr_1.5
[19] memoise_1.1.0 cluster_2.0.7-1
[21] doParallel_1.0.14 limma_3.38.3
[23] remotes_2.0.2 ComplexHeatmap_1.20.0
[25] Biostrings_2.50.2 readr_1.3.1
[27] annotate_1.60.1 matrixStats_0.54.0
[29] sesameData_1.0.0 R.utils_2.8.0
[31] prettyunits_1.0.2 colorspace_1.4-1
[33] blob_1.1.1 rvest_0.3.2
[35] ggrepel_0.8.0 xfun_0.6
[37] dplyr_0.8.0.1 callr_3.2.0
[39] crayon_1.3.4 RCurl_1.95-4.12
[41] jsonlite_1.6 genefilter_1.64.0
[43] zoo_1.8-5 survival_2.44-1.1
[45] iterators_1.0.10 glue_1.3.1
[47] survminer_0.4.3 gtable_0.3.0
[49] sesame_1.0.0 zlibbioc_1.28.0
[51] XVector_0.22.0 GetoptLong_0.1.7
[53] DelayedArray_0.8.0 wheatmap_0.1.0
[55] pkgbuild_1.0.3 shape_1.4.4
[57] BiocGenerics_0.28.0 scales_1.0.0
[59] DESeq_1.34.1 DBI_1.0.0
[61] edgeR_3.24.3 ggthemes_4.1.0
[63] Rcpp_1.0.1 cmprsk_2.2-7
[65] xtable_1.8-3 progress_1.2.0
[67] bit_1.1-14 matlab_1.0.2
[69] km.ci_0.5-2 preprocessCore_1.44.0
[71] stats4_3.5.2 httr_1.4.0
[73] RColorBrewer_1.1-2 pkgconfig_2.0.2
[75] XML_3.98-1.19 R.methodsS3_1.7.1
[77] locfit_1.5-9.1 DNAcopy_1.56.0
[79] tidyselect_0.2.5 rlang_0.3.3
[81] later_0.8.0 AnnotationDbi_1.44.0
[83] munsell_0.5.0 tools_3.5.2
[Truncated]
[121] pillar_1.3.1 BiocManager_1.30.4
[123] GlobalOptions_0.1.0 data.table_1.12.0
[125] bitops_1.0-6 httpuv_1.5.0
[127] rtracklayer_1.42.2 GenomicRanges_1.34.0
[129] R6_2.4.0 latticeExtra_0.6-28
[131] hwriter_1.3.2 promises_1.0.1
[133] ShortRead_1.40.0 gridExtra_2.3
[135] IRanges_2.16.0 sessioninfo_1.1.1
[137] codetools_0.2-16 assertthat_0.2.1
[139] pkgload_1.0.2 SummarizedExperiment_1.12.0
[141] rprojroot_1.3-2 rjson_0.2.20
[143] withr_2.1.2 GenomicAlignments_1.18.1
[145] Rsamtools_1.34.1 S4Vectors_0.20.1
[147] GenomeInfoDbData_1.2.0 mgcv_1.8-28
[149] parallel_3.5.2 hms_0.4.2
[151] grid_3.5.2 tidyr_0.8.3
[153] ggpubr_0.2 Biobase_2.42.0
[155] shiny_1.2.0
Please help me to check.
THX |
CrunchyData/pg_featureserv | 794876227 | Title: pg_featureserv and QGIS
Question:
username_0: Enhancement proposal:
Can pg_featureserv be configured to work with QGIS WMS/OGC API Features connection?
Goal is for testing and demo purposes and to integrate pg_featureserv and QGIS in a microservice architecture.
First attempts by active QGIS devs seem to have been unsuccessful so far: lists.osgeo.org/pipermail/qgis-user/2020-November/047208.html .
Answers:
username_1: It would be great to have pg_featureserv work with QGIS. But as far as I know pg_featureserv is correctly supporting the OAPIF protocol. QGIS fails when trying to request the top-level page from the service. It's not clear to me why QGIS is unable to do this. It might be because it is not setting the `Accept` header to indicate that it wants a JSON response.
It might require working with a QGIS developer to sort this out on both ends.
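One low-tech way to test the `Accept` hypothesis is to request the landing page with the header set explicitly and compare against a request without it. A minimal sketch (the localhost URL is an assumption, matching the collection URLs quoted below):
```js
// Hedged sketch: probe the OAPIF landing page the way a JSON-aware
// client should, with an explicit Accept header. Runs in Node 18+
// (global fetch) or a browser console.
(async () => {
  const res = await fetch('http://localhost:9000/', {
    headers: { Accept: 'application/json' },
  });
  console.log(res.status, res.headers.get('content-type'));
  console.log(await res.json()); // should include the OAPIF resource links
})();
```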
username_1: Update on this:
* It turns out that the pg_featureserv landing page JSON was missing a couple of the mandatory OAPIF resource links (`service-desc` and `conformance`). Once these are added, QGIS is able to list the feature collections in the service. This fix will be committed soon.
* However, QGIS is unable to request the metadata for collections. I think this is because it is emitting the collection request as `http://localhost:9000/collections.json/ne.us_state`. This is clearly malformed - note that the format extension `.json` is not at the end of the URL, as it should be. This is a problem that QGIS should fix.
username_1: @username_0 are you in touch with QGIS devs to help get the format specifier issue sorted out?
username_1: Update: The malformed collection URL is because `pg_featureserv` is emitting the format specifier `.json` in the links in JSON documents. This probably needs to be changed so that links to JSON requests don't include the format specifier, since this can be assumed. Working on this.
When this is changed, QGIS is able to load layers from `pg_featureserv`!
username_1: This should be fixed now.
@username_0 are you able to try out the code in the repo?
username_0: Yes, I'll involve an assistant of mine (exam session here :-( ). Give me some time this week.
username_2: @username_1 can I grab a 'latest build'? Or do I have to compile it myself to test your fix?
username_2: @username_1 is there anything you think should be fixed/changed on the QGIS end to stay as much to 'standards' as possible?
username_1: I think that's only partly implemented in `pg_featureserv`. Does QGIS support this? If so I can try and see if I can get it working.
username_2: I'm not so familiar with the code intricacies, but...
I also was not able to use any kind of filtering; it even freezes my QGIS (although I am using a pretty heavy point DB: 9,000,000 addresses)...
Older WFS providers did use paging, but I cannot make this work here yet.
I will try the QGIS Server endpoint to see whether the client behaves differently, and I will ask on the mailing list.
Status: Issue closed
|
flow-typed/flow-typed | 421399849 | Title: [react-redux-5.x.x][flow_v0.89.x-] Missing dispatch prop for one connect() case
Question:
username_0: Someone forgot to add the `dispatch: D` prop for [this case](https://github.com/flow-typed/flow-typed/blob/52edb7ed59dbb6ac02867f70a3508184ddbde69b/definitions/npm/react-redux_v5.x.x/flow_v0.89.x-/react-redux_v5.x.x.js#L120-L126)
```js
declare export function connect<-P, -OP, -SP, -DP, S, D>(
// If you get error here try adding return type to your mapStateToProps function
mapStateToProps: MapStateToProps<S, OP, SP>,
mapDispatchToProps: MapDispatchToPropsFn<D, OP, DP>,
mergeProps?: null | void,
options?: ?Options<S, OP, SP, {| ...OP, ...SP, ...DP |}>,
): Connector<P, OP, {| ...OP, ...SP, ...DP |}>; // <-- HERE: forgot to add `dispatch: D`!!!
```
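A minimal sketch of what the corrected overload might look like, assuming the fix is simply to include `dispatch: D` in the resulting props (as the test below suggests); this is a sketch, not the actual committed fix:
```js
// Hedged sketch: same overload as above, but with `dispatch: D`
// added to the connected component's props.
declare export function connect<-P, -OP, -SP, -DP, S, D>(
  mapStateToProps: MapStateToProps<S, OP, SP>,
  mapDispatchToProps: MapDispatchToPropsFn<D, OP, DP>,
  mergeProps?: null | void,
  options?: ?Options<S, OP, SP, {| ...OP, ...SP, ...DP |}>,
): Connector<P, OP, {| ...OP, ...SP, ...DP, dispatch: D |}>;
```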
I tested this case
```
const mapStateToProps = state => ({
layout: [],
});
const mapDispatchToProps = {
refreshAll: () => ({type : 'test'}),
};
connect(
mapStateToProps,
mapDispatchToProps,
)(props => {
console.log(' --- props', props);
/*
{
layout:[]
refreshAll: () => {},
dispatch: () => {},
}
*/
})
```
Answers:
username_0: And these also don't have `dispatch: D`
https://github.com/flow-typed/flow-typed/blob/52edb7ed59dbb6ac02867f70a3508184ddbde69b/definitions/npm/react-redux_v5.x.x/flow_v0.89.x-/react-redux_v5.x.x.js#L83-L84
Status: Issue closed
|
thelounge/thelounge | 362459947 | Title: prefetch broken on latest Node.js
Question:
username_0: <!-- Have a question? Join #thelounge on freenode -->
* *Node version:* Node.js 8.6.0 and newer
* *Browser version:* firefox 62.0 x64
* *Device, operating system:* client : windows 10 x64/server : linux x64
* *The Lounge version:* 2.7.1
---
Image prefetch does not work with newer Node.js versions.

like this (not prefetched, and there is no prefetch button)

with Node.js v8.5.0
Node.js 8.5.0 and older versions worked well;
Node.js 8.6.0 and newer versions did not.
All Node.js binaries were downloaded from [https://nodejs.org/en/download/releases](https://nodejs.org/en/download/releases).
command
**lounge start
nodejs --max-old-space-size=1024 /usr/lib/node_modules/thelounge/index.js start
node-v8.6.0-linux-x64/bin/node --max-old-space-size=1024 /usr/lib/node_modules/thelounge/index.js start**
Same result with both.
It works with:
**node-v8.5.0-linux-x64/bin/node --max-old-space-size=1024 /usr/lib/node_modules/thelounge/index.js start**
The image size is 6 MB and the limit is 10240 KB.
I installed The Lounge with both npm and yarn, with the same result.
Answers:
username_1: Considering you're using 2.7.1 and there's been a rewrite of over half of the code in 3.1 (vue branch), I think it would be nice if you could test whether the issue is still present in the vue branch.
How to test 3.1 vue branch:
(1) cd to somewhere you want the folder
(2) git clone https://github.com/thelounge/thelounge.git
(3) cd thelounge
(4) git checkout vue
(5) yarn install
(6) NODE_ENV=production yarn build
To start it, stay in the /thelounge/ folder and type yarn start.
username_2: Can you try latest the lounge RC?
username_2: Closing this because I suspect this to be fixed by #2159.
@username_0 If you still have issues on 3.0.0-rc.1, feel free to re-open the issue.
Status: Issue closed
username_0: Not yet.
I don't want to run a service on an unstable version;
I will wait for the new stable version.
I would like you to test it with the latest Node.js version before release.
username_2: I already use latest node version, and it works fine.
username_0: thanks, it works with 3.0.0-rc.1 :) |
simukappu/activity_notification | 235060834 | Title: Is it supposed to work on Rails 5.1?
Question:
username_0: Hi there!
Has anyone tested the gem with Rails 5.1? I mean, probably I'm doing something wrong.
It's a bit confusing when after "bin/rails generate activity_notification:install" it says I should have:
```
include ActivityNotification::Notifiable
acts_as_notifiable :users ...
```
```
include ActivityNotification::Target
acts_as_target email: :email, email_allowed: :confirmed_at
```
But in readme it doesn't say to include 'includes'.
What's the right way?
Thanks,
Ben
Answers:
username_1: Hi,
Yes, *activity_notification* is tested with Rails 5.1 in our Travis CI.
When you use the *ActiveRecord* ORM (the default), *activity_notification* automatically loads the *Notifiable* and *Target* modules into your models here,
https://github.com/username_1/activity_notification/blob/master/lib/activity_notification/models.rb#L20
so you don't need to include *Notifiable* or *Target* manually.
When you use another ORM like *Mongoid*, you have to include *Notifiable* or *Target* in your models manually.
I have updated README shown after installation to make it simple.
Thanks
username_1: I will close this issue.
Status: Issue closed
|
sr320/course-fish310-2015 | 78649168 | Title: qPCR analysis
Question:
username_0: Place to discuss the best way to communicate to the students about analysis of qPCR data.
Open questions:
1) For each lab section, should we provide actin data?
2) How should we instruct students to run statistics on the data?
- we can't use t-tests on the calculated expression values; they are something like log-normal.
- We can ask students to log-transform - but then we basically get back the C(T) scale.
3) when normalizing with Actin - we divide the calculated expression value of the target by the expression value of Actin, rather than dividing the C(t) values, right?
4) should we pay attention to the efficiency value? Does an efficiency of 100 mean doubling each PCR cycle? Is this on an absolute scale or a scale relative to some other control?
Answers:
username_1: 1) Sure, you can post actin, but it would be good to show them how it was determined.
2) Tough one, as we can easily confuse students and they might get lost in the details ...
Your call; we could also say "we would need many more reps to do this correctly, so for this we will just use xxx", understanding it's more complicated.
via http://www.biomedcentral.com/1756-0500/5/502
Average Ct (fluorescence-based cycle threshold) values across replicates and average gene efficiencies were calculated with PCR Miner ( http://www.miner.ewindup.info/version2 [10]). Gene expression (R0) was calculated based on the equation R0 = 1/(1 + E)^Ct, where E is the average gene efficiency and Ct is the cycle threshold for fluorescence. Each primer pair amplified a single product, as demonstrated by a single melting curve (see Additional file 2). All expression values were normalized to expression of EF1α [GenBank: AB122066]. EF1α did not show differential expression between treatments as verified by a t-test done on expression values of the qPCR run in duplicate. All qPCRs were run in duplicate and significant differences in expression were determined using a linear model with α = 0.05. Box-Cox plots were used to assess skewness of gene expression data and determine if transformations needed to be made. The need to transform the data and the transformation used were determined from the lambda (λ) corresponding to the maximum log likelihood in the Box-Cox plot. If λ = 1 fell within the 95% confidence interval of the maximum log likelihood, then no transformation was used. All statistical analyses were done in R [30].
3) yes, GOI exp / actin exp
4) no, just ignore efficiency for simplicity's sake; a worked sketch of 2) and 3) follows below.
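To make the quoted R0 formula and the actin normalization concrete, here is a small illustrative sketch (the efficiency and Ct numbers below are invented):
```js
// Illustrative only: PCR Miner-style expression R0 = 1 / (1 + E)^Ct,
// then normalize the gene of interest (GOI) by the actin reference.
function expression(E, Ct) {
  return 1 / Math.pow(1 + E, Ct);
}

const goi = expression(0.95, 24.3);   // target gene (made-up values)
const actin = expression(0.98, 18.7); // reference gene (made-up values)
console.log('normalized expression:', goi / actin);
```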
username_2: The actin data is posted but I'm not sure which groups did it so I need to confirm the actin data before committing it.
username_0: How about we ask them to run t-tests on the C(t) values directly? We can ask them to plot the expression values to get a sense of the scale of differences.
Status: Issue closed
username_0: Place to discuss the best way to communicate to the students about analysis of qPCR data.
Open questions:
1) We should post actin data to be used for standardization
2) How should we instruct students to run statistics on the data?
- we can't use t-tests on the calculated expression values; they are very non-normal.
- We can ask students to log-transform - but then we basically get back the C(T) scale.
- Parametric (t-test on log-transformed data) vs non-parametric (Mann-Whitney)?
3) when normalizing with Actin - we divide the calculated expression value of the target by the expression value of Actin, rather than dividing the C(t) values, right? This would be something like subtracting the C(t) values.
4) should we pay attention to the efficiency value? Does an efficiency of 100 mean doubling each PCR cycle? Is this on an absolute scale or a scale relative to some other control?
username_1: Why not just have them run a t-test on the expression data, explaining that it's not completely appropriate?
username_0: Works for me!
If we have the log-transformed data in hand, I don't think it adds much if any complexity. The results of a t-test on untransformed expression data will be almost completely determined by the single sample with the highest expression value.
We'll try to get the students to compute target and normalized expression values and interpret them; any statistics done (on C(t) or expression) will be gravy. If stats are applied, the students should be expected to evaluate their appropriateness.
Status: Issue closed
username_0: Both of the groups doing qPCR analysis in the Monday section were very much on top of the expression analysis. I had put together an example workflow, but it wasn't necessary - they were already there.
The groups may or may not report a statistical analysis of the qPCR data, but they seem on top of graphically presenting and interpreting it. One group had done a t-test of the expression data; I hope they include it in their presentation. |
kubernetes-sigs/controller-runtime | 992312085 | Title: unable to find a version that was supported for platform darwin/arm64
Question:
username_0: Is controller-runtime tools unsupported on darwin/arm64?
```
make test
/Users/ct/git/memcached-operator/bin/controller-gen "crd:trivialVersions=true,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/ct/git/memcached-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
go get: added sigs.k8s.io/controller-runtime/tools/setup-envtest v0.0.0-20210906140630-386c2b5b29ba
unable to find a version that was supported for platform darwin/arm64
KUBEBUILDER_ASSETS="" go test ./... -coverprofile cover.out
? gitlab.bell.corp.bce.ca/AI/aiocp/example-operator [no test files]
? gitlab.bell.corp.bce.ca/AI/aiocp/example-operator/api/v1alpha1 [no test files]
Running Suite: Controller Suite
===============================
Random Seed: 1631198122
Will run 0 of 0 specs
STEP: bootstrapping test environment
2021-09-09T10:35:22.533-0400 DEBUG controller-runtime.test-env starting control plane
2021-09-09T10:35:22.536-0400 ERROR controller-runtime.test-env unable to start the controlplane {"tries": 0, "error": "exec: \"etcd\": executable file not found in $PATH"}
```
Answers:
username_1: It looks like there are no kubebuilder-tools binaries that support `darwin/arm64`. Considering Intel Macs will be replaced by M1 Macs, `darwin/arm64` files must be added in the near future.
https://storage.googleapis.com/kubebuilder-tools
However, there is a workaround for M1 Mac.
If you need to run kubebuilder-tools on an M1 Mac, you can just add the `--arch=amd64` option to the `setup-envtest` command to use Intel binaries. The downloaded Intel binaries will run via Rosetta 2.
Just modify Makefile's `test` target generated by kubebuilder:
```diff
test: manifests generate fmt vet envtest ## Run tests.
- KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test ./... -coverprofile cover.out
+ KUBEBUILDER_ASSETS="$(shell $(ENVTEST) --arch=amd64 use $(ENVTEST_K8S_VERSION) -p path)" go test ./... -coverprofile cover.out
```
username_2: Native binaries for darwin/arm64 would be much appreciated!
username_3: I pushed a PR to start mitigating this; you can give it a thumbs up here --> https://github.com/kubernetes-sigs/kubebuilder/pull/2516
username_4: when do you plan to release a new version which includes this fix? |
LeaVerou/awesomplete | 58540949 | Title: [OPGAJA22.COM] ᒘ분당오피㈒평촌오피^서울대오피ㅞꂣ
Question:
username_0: [OPGAJA22.COM] ᒘ분당오피㈒평촌오피^서울대오피ㅞꂣ |
huangblue/jiaju | 234101398 | Title: Do not shy away from difficult tasks, do not evade responsibility where duty calls; act according to your station and be at ease with whatever comes; do not fret over fame and rank, nor chase after pleasure.
Question:
username_0: http://mp.weixin.qq.com/s/nQvbW3VRIow94x752eFlvA
Answers:
username_0: Tang Yongtong: Panic in a Prosperous Age
As told by Tang Yijie; written by Chen Yuan
The family tradition must not be broken
During the War of Resistance against Japan, my father loved most to read "Lament for Jiangnan" from The Peach Blossom Fan, a song about the fall of the Southern Ming. In that time of grave national peril he recited it almost every day in his Hubei accent, to give voice to his sense of worry for the country. His mood affected us too: I was only thirteen, yet I not only learned to recite the song but came to feel a concern for the fate of my own country.
When I was not yet fifteen, around 1942, my father brought out Yu Xin's "Lament for the South" (Ai Jiangnan Fu), pointed to a passage in its preface, and told me: a family should have its own family tradition; if that tradition is broken, the family declines. Yu Xin wrote the piece after the Liang court sent him to the Northern Wei on a diplomatic mission; because of his outstanding talent he was detained there and made an official. In that situation Yu Xin wrote his longing for his homeland into the "Lament for the South," hoping the family tradition would not end with his generation. Father had me read the piece until I knew it thoroughly, and he also told me about my grandfather.
Father also showed me something from my grandfather's sixtieth birthday, when grandfather's students held a celebration for him at Wansheng Garden (today's zoo) and painted him a picture of the garden, on which grandfather wrote an inscription. It contained two sentences: "Do not shy away from difficult tasks, do not evade responsibility where duty calls; act according to your station and be at ease with whatever comes; do not fret over fame and rank, nor chase after pleasure." The first sentence means that in doing things one should not fear difficulty, and in matters of principle one should not fear responsibility; live simply, take things as they come, and you will have fewer troubles. The second means one should not forever pursue fame or amusement. Father told me: remember these two sentences and that is enough. This was his admonition to me on how to conduct myself.
The three letters he wrote me in his life
In 1943, when the whole family was in Kunming, I went alone to Nankai Middle School in Chongqing. In his whole life my father wrote me three letters, all during my year and a half at Nankai.
Life at Nankai then was very hard. At meals eight of us shared a table; the dishes were snatched clean in no time, and then there was only plain rice. Most of the other students had homes in Chongqing and could bring food from home, while my home was in Kunming and I had no way to bring anything, so I wrote home complaining that life was too hard.
Father soon wrote back. The gist was: this is the time of the War of Resistance; the soldiers at the front are shedding their blood and giving their lives; the little hardship you suffer is nothing beside theirs. You should study with a settled mind and not take the hardships of life so seriously; enduring them will do your character good. Privately, however, he had Mother prepare some lard with salt mixed in and brought it to me at school, so that when there were no dishes I could mix the lard into my rice.
I once had a younger sister, a year younger than I. When I left Kunming she had fallen ill with nephritis, and she died after I reached Chongqing, but they did not tell me. Later I learned of it from a cousin and wrote home reproaching them for not telling me of it. Father wrote me a second letter, saying they had kept it from me in the hope that I could study in peace, and that he too grieved over my sister's death. He said: we did not tell you because we love you; telling you would have helped nothing and only added to your sorrow.
After finishing the second year of junior middle school at the school attached to the Southwest Associated University, I went directly into senior middle school at Nankai. Having skipped the third junior year, I felt great strain in my studies and could hardly keep up. I wrote to Father about my difficulties and distress. He replied: doing scholarship is like climbing a mountain: the higher you climb, the farther you can see. You have many difficulties in your studies now, but do not slacken. The more knowledge you have, the more clearly you will be able to understand many truths.
Before and after the siege of the city
In December 1948 the People's Liberation Army surrounded the city of Beijing.
When Mr. Hu Shi left, he left a letter for my father and Mr. Zheng Tianting, saying in essence: Nanjing has sent several telegrams urging me to go; I have no time to take leave of you two; please look after the affairs of Peking University.
After Mr. Hu Shi reached Nanjing, the Nationalist government sent planes again to fetch that group of professors, my father among them. Father did not go, and indeed most of the professors did not go.
The reason none of them left was that they all felt the Kuomintang was utterly corrupt and there was no hope in following it; yet of the Communist Party they had no understanding at all.
Analyzed today, there were probably three reasons my father stayed:
First, the corruption of the Kuomintang, just mentioned;
Second, many of his students were members of the underground Party (Wang Zisong, for example, was already a teaching assistant); they worked on Father, urging him to stay without worry and assuring him that when the Communists came nothing would happen to people like him;
Third, there was Mr. Hu Shi's letter, for Hu Shi had asked Father to look after Peking University. After Hu Shi left, the university had no president for a time, so its professors formed a university council themselves and elected my father chairman of the council.
This made him feel he should stay and look after Peking University. He had spent so long there and had deep feelings for it. He also thought: if everyone leaves, who will run the university? On January 29, 1949, Beijing was liberated, and in May, Ye Jianying, director of the Beijing Military Control Commission, delivered to my father a letter of appointment as chairman of the Peking University council, a post to which the professors had already elected him. I think two things probably shook my father greatly in his view of the Communist Party.
One was that at the founding of New China, Mao Zedong proclaimed from Tiananmen: the Chinese people have stood up! This affected not only my father but a whole generation of intellectuals. They (and I too) had personally experienced China's hundred years of humiliation; people at Peking University perhaps felt it even more deeply. Now that the motherland had at last cast off that humiliation, the intellectuals of that generation were elated.
The other was that when the Communists first entered the city they were extremely frugal and plain in their ways. A supply system was then in force, under which intellectuals like my father received each month money worth 1,500 jin of millet, treatment far better than what the Party's own cadres received. This moved them: the Communists could bear hardship, unlike the Kuomintang.
These two things turned the thinking of a good many intellectuals around, so that they sincerely accepted the Communist Party's management of the country. The thought-reform campaign that came later, however, was something this group had never anticipated.
Serving as vice-president of Peking University
Father held the post of chairman of the university council until the summer of 1951.
That summer Ma Yinchu came to Beijing as president, and my father became vice-president. The university was then moving from the Shatan Red Building to Yanyuan, and Father was put mainly in charge of capital construction and the campus buildings. He understood nothing of construction, so the university assigned Zhang Longxiang, a professor in the chemistry department, as his assistant; I doubt Zhang understood it either, but they gritted their teeth and set to work.
In 1954 Father fell seriously ill, an illness I believe was connected with the campaign to criticize Hu Shi.
[Truncated]
The regrets of his later years
You said just now that Father's scholarly achievements were mainly made before 1949, and that after that he wrote no academic work of substance. I quite agree with that view. In fact, if you look carefully, after 1949 it was not my father alone: a whole group of senior scholars produced no work of real quality:
Mr. Feng Youlan's academic standing was established by the pre-liberation "Six Books of Zhenyuan"; his later writings, including the New History of Chinese Philosophy, never surpassed his earlier work;
Mr. Jin Yuelin's On the Dao and Theory of Knowledge were likewise completed before 1949; what came after even went down the wrong road. His claim in his Logic that logic has a class character is, I am afraid, by now a joke in scholarly circles, though the blame can hardly be laid on Mr. Jin.
Thought reform was an injury to the intellectuals: it made them stop speaking the truth, and so they could advance no further in scholarship.
In his later years Father simply sat at home and did not go out, and naturally had no contact with society. Occasionally he wrote small pieces of textual research, but most of his time went to reading and collecting materials. He read several hundred Buddhist works and wrote some 400,000 characters of reading notes in preparation for revising his History of Buddhism in the Sui and Tang Dynasties. But he never finished this work. Later I collated and published it from two of his manuscripts of the 1920s and 1930s; in his lifetime he would never have allowed such a thing, for he felt much material still had to be added.
As early as the Kunming years Father had thought of revising the History of Buddhism in the Sui and Tang Dynasties, but a great quantity of materials had been lost on the journey to Kunming, so the revision could not proceed; by then he had also begun his research on Wei-Jin metaphysics (xuanxue). His original aim in taking up that subject was to trace the thread of Indian culture after it entered China and the mutual influence of the two cultures as they merged, and to try to bring the important thinkers of Buddhism into our tradition of cultural thought. If you have read those works of his, you can see this effort of his on behalf of the native culture.
In his last years Father's inner world was deeply conflicted. His health was already very poor and he no longer had the energy for major research, yet he could not resign himself to it. The twenty volumes of reading notes he left show how much he wanted to revise the History of Buddhism in the Sui and Tang Dynasties, but it was beyond his strength.
Paimai Shiguang (拍卖时光)
WeChat ID: syds1978
Founded by the historian Chen Yuan; for submissions and cooperation, write to <EMAIL>. |
dandi/dandi-cli | 768113574 | Title: Interface new implementation (API server) in dandi-cli top-level interfaces
Question:
username_0: A decision was made in https://github.com/dandi/dandi-cli/pull/283 to aim to get `master` in a shape to support interaction with both current girder-based deployed dandiarchive **and** WiP https://github.com/dandi/dandi-api implementations within the same code base. I think it is doable. #283 will provide "internal" interfaces for an upload/download cycle and then we would need to use/expose them through top level interfaces. For that, I think we should
- to `known_instances` add a new entry `"dandi-api"` which would have no girder, web ui, or redirector, and only an API.
- we would need also at least a `dandi-api-local-docker-tests` to point to the fixture'd instance
- we should not occlude other existing fixtures/instances such as `local-docker-tests` since they reflect the instances of currently deployed dandiarchive setup (albeit now without any API which was provided by publish. for a bit)
- most likely we should add `metadata-version` "field" to those records, where for old ones it would be 0 (unversioned ad-hoc), and then `1` for the new version of metadata (ideally it should [be the API server](https://github.com/dandi/dandi-api/issues/36) which tells which version of metadata schema it expects, but that is -- later).
### upload
- `--dandi-instance` option of `dandi upload` (in DANDI_DEVEL mode) would automagically list those added instance(s)
- `upload` code should be RFed (the main logic is in [`process_path`](https://github.com/dandi/dandi-cli/blob/master/dandi/upload.py#L147)) so it could support both upload to the original girder-based instance and the new API-based one
- for new API-based one, metadata extraction should use new metadata (version 1) schema
### download
for `download`, which does not rely on explicit specification of which instance to talk to, but parses it from the URL/identifier, we need to come up with a "schema" for how to reference content from the new API server, which would be easy on humans and flexible enough to support multiple instances/servers. Specification/parsing of such URLs should be added to `known_urls` (https://github.com/dandi/dandi-cli/blob/master/dandi/dandiarchive.py#L109).
So what about just taking what we had/(have) on a recent iteration with `publish` API and tune it to correspond to our API (`/dandisets/{version__dandiset__pk}/versions/{version__version}/assets/paths/`) call:
```python
f"{server_grp}#.*/(?P<asset_type>dandiset)s/{dandiset_id_grp}"
"/(?P<version>versions/([.0-9]{5,}|draft))"
"(/assets/path(?location=(?P<location>.*)?)?)?"
"$": {"server_type": "dandi-api"}
```
NB initially I thought we might need to add some optional prefix `f"(dandi-api::)?"` for early "decision making", but because of the unique `/dandisets/` I think we do not need it
so sample url for download would look like
- `https://api.dandiarchive.org/dandisets/000001`, `https://api.dandiarchive.org/dandisets/000001/versions/draft`, `https://api.dandiarchive.org/dandisets/000001/versions/draft/path/` -- entire dataset, draft version
- `https://api.dandiarchive.org/dandisets/000001/0.0.1` -- entire dataset, `0.0.1` version
- `https://api.dandiarchive.org/dandisets/000001/draft/path/sub-XXX` -- the sub-XXX folder (or a file, if it points to an asset) of the draft version
instead of `api.dandiarchive.org`, urls could point to `localhost:port` in the test.
Later, whenever web UI starts using API, we would need to adjust other "schemas". redirector will start redirecting to new web ui, possibly different urls. But for now, IMHO it is sensible to just follow the API URLs. Those will not be really "visible" to users, but would allow us to test new functionality/interaction with API server.
Answers:
username_1: The dandi-api Docker Compose setup seems to currently depend on the Girder Docker Compose setup, so they can't be split apart, so creating separate `known_instance` values doesn't seem like a good idea.
username_0: I don't exactly see how `known_instances` is so tightly bound to the docker compose setup; it is OK to reuse the same compose setup for multiple instances with some code support
username_1: @username_0 The Girder upload code has a number of provisions for handling files that have already been uploaded. Should this be carried over to the new API? I'm not entirely sure how all the parts should be translated.
username_0: No. In our reimplementation, IIRC, we will rely on GC (not sure if it's there already) to delete no-longer-used assets and unfinished uploads.
username_1: @username_0 Adding a new `dandi-api` server type for the new download URL pattern requires adding a `dandi-api` entry to `_dandi_url_parser.map_to` and populating it. How exactly should that be done?
username_0: That is something I do not have an immediate answer for. I have suggested/asked on Slack whether we might want to establish `gui-beta.dandiarchive.org` and then provide such a mapping. But I am not sure if that is the approach we will be taking, so for now let's just breed a custom `api+<URL>` url schema to parse, so `https://gui.dandiarchive.org/#/dandiset/000007` would be the old girder-based one, and `api+https://gui.dandiarchive.org/#/dandiset/000007` would be the new one. That would allow changing it easily later.
username_1: @username_0 I'm not entirely clear on what you're proposing here. Could you elaborate?
username_0: I have pushed f9fa14bd9ed464d982864eefd34349b3f043e979 to #330 which adds that and trims lots of elderly no longer needed URL schemas etc. See commit message (and some TODOs in the diff) for more information. Now you should be able to download `api+https://gui.dandiarchive.org/#/dandiset/000007/draft` or `https://gui.dandiarchive.org/#/dandiset/000007/draft` (which would be entirely different things ;)).
Status: Issue closed
|
innoave/genevo | 238296074 | Title: Add default genotype for tree encoding
Question:
username_0: In order to complete the basic encoding types of genes - see issue #2 - a genotype implementation for tree encoding shall be added.
Answers:
username_0: As experienced with issues #2, #6 and #8, such traits for the encoding type might not serve any purpose.
So the traits defining the encoding types are currently not used and if there is no purpose in having them I will remove them at some point.
Status: Issue closed
|
kennyb5000/spy_fu | 129025132 | Title: new issue
Question:
username_0: Reported by: username_0
null
Answers:
username_0: this is a comment
username_0: this is a new comment
username_0: try again
username_0: Reported by: username_0
who are you
username_0: Reported by: username_0
what do you want
username_0: Reported by: username_0
<@U0K5FPYCU> update 16 title:new title body:new body
https://testeroo-team.slack.com/archives/test/p1453869483000023
username_0: Reported by: username_0
what do you want |
dotnet/roslyn | 589332698 | Title: Add a CI test phase that uses a (debug) bootstrap compiler
Question:
username_0: Build and test a bootstrap (debug) compiler as part of CI runs
to catch errors like
- https://github.com/dotnet/roslyn/issues/42340
- https://github.com/dotnet/roslyn/issues/42837
- https://github.com/dotnet/roslyn/issues/42838 |
vuetifyjs/vuetify | 295868805 | Title: [Bug Report] Webpack-ssr Dialog document is not defined
Question:
username_0: ### Versions and Environment
**Vuetify:** 1.0.0-beta.6
**Vue:** 2.5.13
**Browsers:** Chrome 63.0.3239.132
**OS:** Windows 10
### Steps to reproduce
When setting `value="true"`, an error occurs.
### Expected Behavior
Per the responses to issues #411 and #423, this should work, but I am still getting the error.
### Actual Behavior
It should be able to run in SSR.
### Reproduction Link
[https://codepen.io/JAELYS/pen/paeJKL](https://codepen.io/JAELYS/pen/paeJKL)
<!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
Answers:
username_1: I can't reproduce this, can you provide a basic git repo that will show the error?
Status: Issue closed
|
wso2/product-apim | 714001542 | Title: Add support for 3.0.x OpenAPI definitions in API publisher.
Question:
username_0: ### Description:
Currently, the 2.2.0 publisher only supports version 3.0.0 OpenAPI definitions. However, we should add improved support for all 3.0.x versions, because they can only contain minor fixes and not major changes to the spec.
### Steps to reproduce:
Import an OpenAPI 3.0.1 API definition. The UI will show an error saying the OpenAPI version is not supported.
### Affected Product Version:
APIM 2.2.0
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
Answers:
username_1: Already fixed via https://github.com/wso2/carbon-apimgt/pull/5837. Hence, can close the issue.
Status: Issue closed
|
romisfrag/little_mmx_encode-decode | 221303025 | Title: Factorize AST
Question:
username_0: In [ast_instructions.v](https://github.com/romisfrag/little_mmx_encode-decode/blob/master/src/ast_instructions.v#L4), an opcode must appear _at most once_. For example, `ADD` must appear once, with multiple qualifiers (signed, unsigned, floating point, on 4-words, 8-words, 16-words). |
strapi/strapi | 388951430 | Title: strapi-provider-upload-local public path
Question:
username_0: Within the `strapi-provider-upload-local` package it does not take into account if one were to change the public path in `./config/application.json`
**What is the current behavior?**
It uploads to the wrong public directory.
**Steps to reproduce the problem**
- Change `public.path` in `./config/application.json`
- Upload a file
- Try to access it; you will get a 404, as the file is still placed in `./public/uploads`
**What is the expected behavior?**
That it takes into account the path and puts the files in the correct place to be served.
**Suggested solutions**
Make use of `strapi.config.public.path` in place of `public` in: https://github.com/strapi/strapi/blob/master/packages/strapi-provider-upload-local/lib/index.js#L19
And add any fluff like checking that the directory exists, etc.; a sketch follows below.
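A minimal sketch of what that might look like inside the provider's upload logic (names such as `strapi.config.appPath` and the `file` object's fields are assumptions for illustration, not verified against the actual source):
```js
// Hedged sketch: resolve the uploads directory from the configured
// public path instead of the hard-coded "public" folder.
const path = require('path');
const fs = require('fs');

// `strapi` and `file` come from the provider's surrounding context.
const publicPath = strapi.config.public.path || './public';
const uploadDir = path.join(strapi.config.appPath, publicPath, 'uploads');

fs.mkdirSync(uploadDir, { recursive: true }); // the "checking dir exists" fluff
fs.writeFileSync(path.join(uploadDir, `${file.hash}${file.ext}`), file.buffer);
```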
Answers:
username_1: @username_0 in the meantime you can use symlinks (which I think is a bit cleaner)
username_2: Hello @username_0 , thank you for reporting this issue. We are currently working on other features and bug fixes. Since time is lacking on our side, feel free to investigate and submit a PR, we'll appreciate your contribution on this issue!
Check out the contributing guide to get started: https://github.com/strapi/strapi/blob/master/CONTRIBUTING.md
Status: Issue closed
username_2: Fixed in this PR https://github.com/strapi/strapi/pull/2966 |
indico/indico | 99714422 | Title: Dependency errors on installation
Question:
username_0: Dear all, I need to know which Linux distribution is used to install Indico.
I used Ubuntu, but many dependencies are missing or not found.
Is it possible to use the Scientific Linux distribution?
Please tell me which distribution you recommend.
Thanks you very much!
Answers:
username_1: Hi,
There is no "recommended distribution" as such. Any linux distro with Python 2.7 will work.
As for the dependency errors, if you paste some info on the errors you're getting, then we'll be able to help.
username_0: OK, if I follow the step-by-step installation in the manual, it doesn't work.
I am pasting the error logs so we can find a solution.
Thank you very much!!
[Mon Jun 29 06:54:33.859963 2015] [mpm_event:notice] [pid 22721:tid 140091192878976] AH00489: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f mod_wsgi/3.4 Python/2.7.6 configured -- resuming normal operations
[Mon Jun 29 06:54:33.860016 2015] [core:notice] [pid 22721:tid 140091192878976] AH00094: Command line: '/usr/sbin/apache2'
access.log
192.168.1.15 - - [29/Jun/2015:11:48:34 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:34 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:34 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:34 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:35 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:35 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:35 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:37 -0300] "OPTIONS / HTTP/1.1" 200 195 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:40 -0300] "GET /robots.txt HTTP/1.1" 404 463 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"
192.168.1.15 - - [29/Jun/2015:11:48:48 -0300] "GET /favicon.ico HTTP/1.1" 404 464 "-" "
username_1: Those logs are not very helpful, considering that you are having (apparently) dependency issues. That would be the important thing to know: which dependencies are missing and error messages you are getting.
username_0: I was looking for the Indico logs, but I don't know where to find them.
What is the usual path for those logs?
username_1: Sorry, maybe I misunderstood, but you told me that "many dependencies are missed or not found." Which dependencies are those?
Status: Issue closed
|
SharePoint/sp-dev-docs | 473247073 | Title: This method has been updated for SharePoint Server 2016-2019
Question:
username_0: [Enter feedback here]
Based on the source code, the ForceDeleteSite() method can take 5 parameters in SharePoint Server 2016-2019. The 2 additional parameters are updateSiteMap and deleteIsForMigration.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d46d7feb-0734-97e4-bb0f-cea450f8f260
* Version Independent ID: 7c712dff-e1cd-5fee-f384-df6fee7bf342
* Content: [SPContentDatabase.ForceDeleteSite(Guid, Boolean, Boolean) Method (Microsoft.SharePoint.Administration)](https://docs.microsoft.com/en-us/dotnet/api/microsoft.sharepoint.administration.spcontentdatabase.forcedeletesite?view=sharepoint-server)
* Content Source: [sharepoint-server/xml/Microsoft.SharePoint.Administration/SPContentDatabase.xml](https://github.com/SharePoint/sp-dev-docs-2013-server-ref-dotnet/blob/live/sharepoint-server/xml/Microsoft.SharePoint.Administration/SPContentDatabase.xml)
* Product: **sharepoint**
* Technology: **sdk**
* GitHub Login: @dotnet-bot
* Microsoft Alias: **o365devx**
Answers:
username_1: Sorry... I'm not clear on what you are saying in your post... can you elaborate? |
LindsayXX/Music-generation | 441648337 | Title: Week 2
Question:
username_0: Read VAE papers. Specifically https://arxiv.org/pdf/1901.08810v1.pdf; we still don't really understand what they're doing (and, by extension, what we're trying to do, ehehe)
Answers:
username_0: Code some encoder/decoder to plug into the WaveNet.
username_0: Fix the wavenet. Why does it just generate noise? How do we make it lighter? What input should the tacked-on encoder give it? What does the WN give to the decoder? Change WN to https://github.com/hrbigelow/ae-wavenet?
username_0: - [ ] Write abstract.
username_0: - [ ] Write some background |
benetech/MathShare | 491250974 | Title: Missing Accessible Indication of Toggle Control State
Question:
username_0: The “Help Center” button acts as a disclosure control. That is, it can be used to toggle the display of the Help Center content. On initial page load, the Help Center content is hidden, but the required aria-expanded attribute is not present on this button to indicate this fact (or that the control will act as a toggle at all).
An aria-expanded attribute is added to the control once it has been activated for the first time, and from this point onward the value of the attribute is correctly updated in line with the control’s state. However, it must be in place from the very first time that the control is rendered.
Suggested Solution
Ensure that this (and all other toggle/disclosure controls) have an aria-expanded attribute applied as soon as they are initially rendered. Here, the initial value should be set to “false”.
Proposed Code Sample
File: src/components/Home/components/Header/index.js
`<button className={`nav-link dropdown-toggle btn ${header.dropDownMenu}`} id={questionBtnId} … aria-expanded="false" …> `
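More generally, a minimal sketch of keeping the attribute in sync from the very first render (component, hook, and handler names here are hypothetical, not taken from the MathShare source):
```js
import React, { useState } from 'react';

// Hypothetical sketch: the disclosure button carries aria-expanded from
// its initial render ("false") and tracks the toggle state thereafter.
function HelpCenterToggle() {
  const [isOpen, setIsOpen] = useState(false); // content hidden initially
  return (
    <button
      className="nav-link dropdown-toggle btn"
      aria-expanded={isOpen}
      onClick={() => setIsOpen(open => !open)}
    >
      Help Center
    </button>
  );
}
```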
Answers:
username_1: Fixed in https://github.com/benetech/MathShare/commit/a658015b7f1ce8c6fdbe279aee56a2a6bcd71b6f.
Status: Issue closed
|
aws-amplify/amplify-cli | 630274873 | Title: Unable to list bucket contents using Storage.list(), getting 403 Access Denied
Question:
username_0: **Describe the bug**
Unable to list the contents of a folder in an S3 bucket using Storage.list()
**To Reproduce**
`
Storage.list('images/').then(result => log.debug(result)).catch(err => log.error(err));
`
The error i get is:
xhr.js:83 GET https://my-bucket.s3.ap-southeast-2.amazonaws.com/?prefix=public%images%2F 403 (Forbidden)
AWSS3Provider - list error AccessDenied: Access Denied
**Expected behavior**
My Cognito auth is admin level, so I can Put, Get and Remove objects from the bucket; I would therefore expect to be able to List the contents.
**Screenshots**

<details>
<summary><strong>Environment</strong></summary>
<!-- Please run the following command inside your project and copy/paste the output into the codeblock: -->
```
OS: Windows Server 2012 6.3.9600
Node: 12.14.0
npm: 6.13.4
aws-amplify: ^2.2.1 => 2.2.1
```
</details>
**Additional context**
I have taken some steps to see if it would fix the issue:
- Redeployed the bucket e.g. amplify add storage
- Confirmed that the permissions in the backend file show the correct permissions
- Checked all IAM roles to confirm that they have all the correct permissions
- The bucket does have content
Answers:
username_1: Moving to Amplify CLI in order to understand the backend configuration here.
Status: Issue closed
username_2: Please ensure that if a user is part of a user pool group, run ```amplify storage update``` to enable IAM group policies for CRUD operations. We are working on adding relevant message in the CLI and in the docs related to this. Closing this as a duplicate of https://github.com/aws-amplify/amplify-js/issues/5729, please feel free to go through that thread for more information or comment if you think this is a separate issue. |
react-hook-form/react-hook-form | 513849446 | Title: mode onBlur not work in React Native
Question:
username_0: Maybe is related to this issue #23 but in React Native version **"0.61.2"** when i change the mode to **"onBlur"** the form don't trigger the validation when i change the input focus.
I tried with the example in the documentation:
```js
import React from 'react';
import {Alert, Text, SafeAreaView, View, TextInput, Button} from 'react-native';
import useForm from 'react-hook-form';
export default function App() {
const {register, setValue, handleSubmit, errors} = useForm({
mode: 'onBlur',
});
const onSubmit = data => Alert.alert('Form Data', data);
return (
<SafeAreaView style={{flex: 1}}>
<Text>First name</Text>
<TextInput
ref={register({name: 'firstName'}, {required: true})}
onChangeText={text => setValue('firstName', text, true)}
/>
{errors.firstName && <Text>This is required.</Text>}
<Text>Last name</Text>
<TextInput
ref={register({name: 'lastName'})}
onChangeText={text => setValue('lastName', text)}
/>
<View>
<Button title="SUBMIT" onPress={handleSubmit(onSubmit)} />
</View>
</SafeAreaView>
);
}
```
Answers:
username_1: <img width="922" alt="Screen Shot 2019-10-29 at 10 21 35 pm" src="https://user-images.githubusercontent.com/10513364/67762868-808ad300-fa9a-11e9-8c4c-df9198b3247d.png">
hey @username_0, for React Native you have to attach onBlur manually.
username_1: https://react-hook-form.com/api#setValue you can trigger validation while setting a value as well
username_0: Sorry, I didn't see the tooltip. 😅
I added onBlur prop in my TextInput and now it works:
```js
<Text><NAME></Text>
<TextInput
ref={register({name: 'firstName'}, {required: true})}
onChangeText={text => setValue('firstName', text, true)}
onBlur={async () => await triggerValidation()}
/>
{errors.firstName && <Text>This is required.</Text>}
```
Status: Issue closed
username_2: Using React Native, I am finding that `formState.isValid` only updates on blur when `mode: 'onBlur'` is set, even though this seems contrary to what the docs suggest. Adding an `onBlur` prop to the input can validate the input, but doesn't update the `formState`. Is that the intended functionality?
username_1: Hey @username_2 , `mode` is only available to the DOM API (I will improve the doc website in the near future).
<img width="977" alt="Screen Shot 2019-11-03 at 8 38 13 am" src="https://user-images.githubusercontent.com/10513364/68077310-9b03da00-fe15-11e9-9e5b-aa829d9938dc.png">
If you want to update `formState` during onChange, you can do `onChange={e => setValue('name', e.target.value, true)}`
username_3: Hey @username_1, if I use `onChange={e => setValue('name', e.target.value, true)}`, the form will validate on every `onChange`, but I want the validation to occur only `onSubmit`, just like the default DOM API mode. How can we achieve that on React Native?
username_1: @username_3 I believe if you are using `Controller`, it will respect the `mode`; I should update the doc as well. I will leave a note to myself. (A rough sketch follows below.)
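A rough sketch of the `Controller` approach (written with the later v7-style render API for illustration; the Controller props available at the time of this thread differed):
```js
import React from 'react';
import { TextInput } from 'react-native';
import { useForm, Controller } from 'react-hook-form';

// Sketch only: a Controller-wrapped input participates in the form's
// configured validation mode (here, validate on blur).
function NameForm() {
  const { control } = useForm({ mode: 'onBlur' });
  return (
    <Controller
      control={control}
      name="firstName"
      rules={{ required: true }}
      render={({ field: { onChange, onBlur, value } }) => (
        <TextInput onChangeText={onChange} onBlur={onBlur} value={value} />
      )}
    />
  );
}
```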
username_3: @username_1 You are right! I was using the `register` function with regular `TextInput` components instead of using `Controller`. Thank you for the super quick reply! |
google/argh | 646914723 | Title: Auto add `--version` to print program version
Question:
username_0: Please automatically add `--version` to print program version: `env!("CARGO_PKG_VERSION")`.
Also `--help` should print the version as well.
Answers:
username_1: This library is currently used most widely through build systems that are not Cargo based. I don't think that cargo package version is always guaranteed to match the version surfaced to users either. It's pretty trivial to add this flag yourself, so for now I'd like to hold off on this feature request. |
dankamongmen/notcurses | 541272322 | Title: O(1) damage map
Question:
username_0: Our per-line damage map is pretty decent. Not bad. We can do better, and at the same time implement `notcurses_at_yx()` (very useful for testing). Maintain a copy of the rendered scene as `cell`s rather than a `memstream`. This is described in the comments on #150 .
With this, we get damage detection at the highest possible resolution, with no need for active engagement. Ought be a big win, certainly in terms of robustness/complexity.
I don't think we should release 1.0.0 without this.
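To illustrate the idea, a language-agnostic sketch (written here in JavaScript purely for illustration; notcurses itself is C, and the cell fields below are invented):
```js
// Retain the last rendered frame as a 2-D grid of cells. On each render,
// emit only the cells that differ from the retained copy; the retained
// copy also gives an O(1) at_yx lookup for free. Assumes both frames
// share the same dimensions.
function renderFrame(retained, next, emit) {
  for (let y = 0; y < next.length; y++) {
    for (let x = 0; x < next[y].length; x++) {
      const oldCell = retained[y][x];
      const newCell = next[y][x];
      if (oldCell.glyph !== newCell.glyph || oldCell.attrs !== newCell.attrs) {
        emit(y, x, newCell);              // damaged: emit to the terminal
        retained[y][x] = { ...newCell };  // update the retained render
      }                                   // undamaged: elided entirely
    }
  }
}

// O(1) "what is rendered at (y, x)?" against the retained frame:
const atYX = (retained, y, x) => retained[y][x];
```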
Answers:
username_0: When this is done, O(1) `notcurses_at_yx()` is trivially implemented. `notcurses_refresh()` becomes slightly more complex, but it's just a marshaling of the stored render (as opposed to the current dump of the retained memstream).
username_0: Note that this still has to do all of the actual output computation, and only saves work by eliding the output (mostly saving terminal emulator work). We still ought pursue a method allowing us to elide the former, likely in conjunction with this approach.
username_0: got this working except for some misses on wide characters. should be done shortly.
username_0: got it!
username_0: 20% reduction in emitted data, sweeeeeet
Status: Issue closed
|
dotnet/AspNetCore.Docs | 806769464 | Title: remove the redundant entries here and link to the API docs instead.
Question:
username_0: See [this issue](https://github.com/dotnet/aspnetcore/issues/28297#issuecomment-774133945)
cc @nmnick18
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5e347c54-6a6a-d685-4af7-39b023bf7f15
* Version Independent ID: 41aa77f5-9466-6580-5ec6-b1bf97bd0f01
* Content: [HTTP.sys web server implementation in ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/httpsys?view=aspnetcore-5.0#configure-the-aspnet-core-app-to-use-httpsys)
* Content Source: [aspnetcore/fundamentals/servers/httpsys.md](https://github.com/dotnet/AspNetCore.Docs/blob/master/aspnetcore/fundamentals/servers/httpsys.md)
* Product: **aspnet-core**
* Technology: **aspnetcore-fundamentals**
* GitHub Login: @username_0
* Microsoft Alias: **riande**
Answers:
username_0: @nmnick18 have you had a chance to work on this?
username_1: Hello!
A newbie to aspnetcore docs here.
Is this available to work on?
username_0: @username_1 yes, let me know if you need any help.
username_2: Hello, I want to contribute to this if it's available to work on.
From what I understand, we want to update the comments here:
https://github.com/dotnet/aspnetcore/blob/58a7eb63ad5c6b037c43d2bd178c79f4248bb71f/src/Servers/HttpSys/src/HttpSysOptions.cs using the descriptions from https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/httpsys?view=aspnetcore-5.0#configure-the-aspnet-core-app-to-use-httpsys.
The final doc will be rendered at https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.httpsys.httpsysoptions?view=aspnetcore-5.0 with detailed descriptions from the concept docs for HttpSys.
dotnet/project-system | 301245619 | Title: Native debugging isn't working (.NET Framework only?)
Question:
username_0: We have a profile in project system with the following:
```
"ProjectSystemSetup (with native debugging)": {
"commandName": "Executable",
"executablePath": "$(DevEnvDir)devenv.exe",
"commandLineArgs": "/rootsuffix $(VSSDKTargetPlatformRegRootSuffix) /log",
"environmentVariables": {
"VisualBasicDesignTimeTargetsPath": "$(VisualStudioXamlRulesDir)Microsoft.VisualBasic.DesignTime.targets",
"FSharpDesignTimeTargetsPath": "$(VisualStudioXamlRulesDir)Microsoft.FSharp.DesignTime.targets",
"CSharpDesignTimeTargetsPath": "$(VisualStudioXamlRulesDir)Microsoft.CSharp.DesignTime.targets"
},
"nativeDebugging": true
```
This doesn't start native debugging when we F5 from ProjectSystemSetup.
Answers:
username_1: What build are you on? There was a recent bug where picking a different launch profile didn't work. could it be that instead?
cc @username_2
username_0: I'm on latest build. Yes, the issue appears to be that you can't switch profiles. Do we have a bug on that?
username_2: Changing it on the property page doesn’t change the default. That is done on the debug dropdown. I opened a bug\suggestion a long time ago to look at changing this
See https://devdiv.visualstudio.com/web/wi.aspx?pcguid=011b8bdf-6d56-4f87-be0d-0092136884d9&id=410821
username_0: Yeah confirmed that was the issue, used to change properties in Debug page to affect my next debugging session.
username_3: @username_0 Can we close this as not a bug? |
Azure/azure-cli | 989900329 | Title: python -m pip install --upgrade azure-cli installs and older version
Question:
username_0: I am trying to install the latest az cli, but instead I am getting an older version due to a jsmin setup error:
```
python -m pip install --upgrade azure-cli
2021-09-07T11:26:04.9965939Z Downloading https://pkgs.dev.azure.com/username_0/49255723-5232-4e9f-9501-068bf5e381a9/_packaging/e1661b2f-7e5d-484a-aab6-54c2a1cb5b0e/pypi/download/jsmin/2.2.2/jsmin-2.2.2.tar.gz (12 kB)
2021-09-07T11:26:05.2418035Z ERROR: Command errored out with exit status 1:
2021-09-07T11:26:05.2421084Z command: /usr/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-a863hy6i/jsmin_710895a4c3e44d21a48f56af2b8b8174/setup.py'"'"'; __file__='"'"'/tmp/pip-install-a863hy6i/jsmin_710895a4c3e44d21a48f56af2b8b8174/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-etdryiat
2021-09-07T11:26:05.2423588Z cwd: /tmp/pip-install-a863hy6i/jsmin_710895a4c3e44d21a48f56af2b8b8174/
2021-09-07T11:26:05.2424522Z Complete output (1 lines):
2021-09-07T11:26:05.2424978Z error in jsmin setup command: use_2to3 is invalid.
2021-09-07T11:26:05.2425795Z ----------------------------------------
2021-09-07T11:26:05.2429984Z WARNING: Discarding https://pkgs.dev.azure.com/username_0/49255723-5232-4e9f-9501-068bf5e381a9/_packaging/e1661b2f-7e5d-484a-aab6-54c2a1cb5b0e/pypi/download/jsmin/2.2.2/jsmin-2.2.2.tar.gz#sha256=b6df99b2cd1c75d9d342e4335b535789b8da9107ec748212706ef7bbe5c2553b (from https://pkgs.dev.azure.com/username_0/IngOne/_packaging/P10641-incoming/pypi/simple/jsmin/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
2021-09-07T11:26:05.2515374Z Collecting azure-cli
```
`Successfully installed PyJWT-1.7.1 adal-1.2.7 antlr4-python3-runtime-4.9.2 applicationinsights-0.11.10 argcomplete-1.12.3 azure-batch-8.0.0 azure-cli-2.0.73 azure-cli-command-modules-nspkg-2.0.3 azure-cli-core-2.0.73 azure-cli-nspkg-3.0.4 azure-cli-telemetry-1.0.6 azure-common-1.1.27 azure-core-1.18.0 azure-cosmos-3.2.0 azure-datalake-store-0.0.52 azure-devops-6.0.0b4 azure-functions-devops-build-0.0.22 azure-graphrbac-0.60.0 azure-identity-1.6.1 azure-keyvault-1.1.0 azure-keyvault-secrets-4.3.0 azure-mgmt-advisor-2.0.1 azure-mgmt-apimanagement-2.1.0 azure-mgmt-appconfiguration-2.0.0 azure-mgmt-applicationinsights-0.1.1 azure-mgmt-authorization-0.52.0 azure-mgmt-batch-7.0.0 azure-mgmt-batchai-2.0.0 azure-mgmt-billing-0.2.0 azure-mgmt-botservice-0.2.0 azure-mgmt-cdn-3.1.0 azure-mgmt-cognitiveservices-5.0.0 azure-mgmt-compute-6.0.0 azure-mgmt-consumption-2.0.0 azure-mgmt-containerinstance-1.5.0 azure-mgmt-containerregistry-3.0.0rc17 azure-mgmt-containerservice-5.3.0 azure-mgmt-core-1.3.0 azure-mgmt-cosmosdb-0.8.0 azure-mgmt-datalake-analytics-0.2.1 azure-mgmt-datalake-nspkg-3.0.1 azure-mgmt-datalake-store-0.5.0 azure-mgmt-datamigration-0.1.0 azure-mgmt-deploymentmanager-0.1.0 azure-mgmt-devtestlabs-2.2.0 azure-mgmt-dns-2.1.0 azure-mgmt-eventgrid-2.2.0 azure-mgmt-eventhub-2.6.0 azure-mgmt-hdinsight-1.1.0 azure-mgmt-imagebuilder-0.2.1 azure-mgmt-iotcentral-1.0.0 azure-mgmt-iothub-0.8.2 azure-mgmt-iothubprovisioningservices-0.2.0 azure-mgmt-keyvault-1.1.0 azure-mgmt-kusto-0.3.0 azure-mgmt-loganalytics-0.7.0 azure-mgmt-managedservices-1.0.0 azure-mgmt-managementgroups-0.2.0 azure-mgmt-maps-0.1.0 azure-mgmt-marketplaceordering-0.2.1 azure-mgmt-media-1.1.1 azure-mgmt-monitor-0.5.2 azure-mgmt-msi-0.2.0 azure-mgmt-netapp-0.5.0 azure-mgmt-network-4.0.0 azure-mgmt-nspkg-3.0.2 azure-mgmt-policyinsights-0.3.1 azure-mgmt-privatedns-0.1.0 azure-mgmt-rdbms-1.9.0 azure-mgmt-recoveryservices-0.4.0 azure-mgmt-recoveryservicesbackup-0.4.0 azure-mgmt-redis-6.0.0 azure-mgmt-relay-0.1.0 azure-mgmt-reservations-0.3.1 azure-mgmt-resource-3.1.0 azure-mgmt-search-2.1.0 azure-mgmt-security-0.1.0 azure-mgmt-servicebus-0.6.0 azure-mgmt-servicefabric-0.2.0 azure-mgmt-signalr-0.3.0 azure-mgmt-sql-0.12.0 azure-mgmt-sqlvirtualmachine-0.4.0 azure-mgmt-storage-4.2.0 azure-mgmt-trafficmanager-0.51.0 azure-mgmt-web-0.42.0 azure-multiapi-storage-0.2.4 azure-nspkg-3.0.2 azure-storage-blob-1.5.0 azure-storage-common-1.4.2 cryptography-2.9.2 fabric-2.6.0 humanfriendly-4.18 invoke-1.6.0 isodate-0.6.0 javaproperties-0.5.1 jsondiff-1.2.0 knack-0.6.3 mock-2.0.0 msal-1.14.0 msal-extensions-0.3.0 msrest-0.6.21 msrestazure-0.6.4 oauthlib-3.1.1 pathlib2-2.3.6 pbr-5.6.0 portalocker-1.7.1 psutil-5.8.0 pyOpenSSL-19.1.0 pydocumentdb-2.3.5 python-dateutil-2.8.2 pytz-2019.1 requests-oauthlib-1.3.0 scp-0.13.6 sshtunnel-0.1.5 tabulate-0.8.9 vsts-0.1.25 vsts-cd-manager-1.0.2 websocket-client-0.56.0 wheel-0.30.0 xmltodict-0.12.0`
Answers:
username_0: Might be related to this https://github.com/pypa/setuptools/issues/2775
username_1: @username_3 for awareness
username_2: It's because you depend on `jsmin`, which is no longer maintained: https://github.com/tikitu/jsmin/issues/33
username_3: Azure CLI requires `jsmin`
https://github.com/Azure/azure-cli/blob/ea46ad4a424a63406f40f732a9d76258623fd12a/src/azure-cli/setup.py#L140
`jsmin` requires `use_2to3`, so it doesn't work with the latest `setuptools` (https://github.com/tikitu/jsmin/issues/33), but `jsmin` is not maintained anymore.
The failure causes `pip` to install Azure CLI 2.0.73 which is the last version that doesn't depend on `jsmin` (#10389).
## Workaround
For now, please pin the version of `setuptools` until https://github.com/tikitu/jsmin/pull/34 is merged and Azure CLI starts to use the updated version, or Azure CLI drops `jsmin`:
```
pip install setuptools==57.5.0
```
Status: Issue closed
username_3: We have released a hotfix 2.28.1 specifically on PyPI: https://pypi.org/project/azure-cli/2.28.1/
Now Azure CLI installs correctly:
```
$ docker run -it --rm python /bin/bash
```
```
# pip install --upgrade setuptools
...
Successfully installed setuptools-58.0.4
# pip install azure-cli
...
Successfully installed ... azure-cli-2.28.1 azure-cli-core-2.28.1 ...
# az --version
azure-cli 2.28.1
core 2.28.1
telemetry 1.0.6
Python location '/usr/local/bin/python'
Extensions directory '/root/.azure/cliextensions'
Python (Linux) 3.9.7 (default, Sep 3 2021, 20:10:26)
[GCC 10.2.1 20210110]
username_4: Is there a permanent fix for this issue? I was surprised to still be encountering it on a new install. It looks like the JSMin change was merged.
username_3: This issue has been fixed and released long ago. Please try to install the latest Azure CLI (2.30.0). |
jdkandersson/OpenAlchemy | 543049618 | Title: Examine object reference nullability
Question:
username_0: Investigate how nullability currently works for objects. Consider:
1. Whether nullability is considered if it is included in an object targeted by an object reference.
2. Whether it could be included in an _allOf_ around an object reference
3. Whether it makes sense for an array to be nullable
The consequences of the above need to be considered for:
a. The _array_ref_ and _object_ref_ section in the _column_factry_ section
b. The schema generated for a model
c. The models file type annotation
Answers:
username_0: Also consider that nullable is False by default.
username_0: Also investigate what it means for model function arguments.
username_0: For a many to one relationship (where the object reference constructs the foreign key on the model referencing the other model), whether the referenced model is nullable depends on whether the foreign key is nullable. Currently the foreign key nullability is implied by whether the property is required.
username_0: For a one to one relationship (similar to many to one except that the uselist parameter is used), the same is true for constructing the parent.
On the reverse side, None is an accepted value.
username_0: For the case of a one to many relationship, where the foreign key is constructed on the referenced model, None is not an acceptable input. An empty list is acceptable, even when the constructed foreign key is not nullable.
On the reverse side, whether None is accepted depends on the nullability of the foreign key that is constructed.
username_0: For many to many relationships, None is not an acceptable value on either side.
username_0: The nullable property should be respected and treated similar to things like _x-backref_. This means they are allowed on the referenced model and as a part of _allOf_.
- For a many to one and one to one relationship, it implies that the foreign key column is not nullable.
- For a one to many relationship, it does nothing because the referenced model must still be constructible without the referee.
- For a many to many relationship, it does nothing.
Similarly, required should impact many to one and one to one relationships but not one to many and many to many relationships.
username_0: Gather nullable as a part of gathering artefacts for an object reference. Add it as a parameter to the foreign key construction. This parameter is always None from an array reference. For an array reference, check that nullable is None.
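A minimal sketch of what threading `nullable` into the foreign key construction could look like (function and parameter names are hypothetical, not the actual OpenAlchemy internals):
```python
from typing import Optional

import sqlalchemy

def construct_foreign_key(
    name: str, target: str, nullable: Optional[bool], required: Optional[bool]
) -> sqlalchemy.Column:
    # nullable is always None for array references; object references may
    # set it from the referenced schema or a surrounding allOf.
    if nullable is not None:
        column_nullable = nullable
    elif required is not None:
        column_nullable = not required
    else:
        column_nullable = True  # default when neither is given
    return sqlalchemy.Column(
        name,
        sqlalchemy.Integer,
        sqlalchemy.ForeignKey(target),
        nullable=column_nullable,
    )
```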
Status: Issue closed
|
linuxdeepin/developer-center | 376333925 | Title: Resizing the file manager's window cannot be stopped
Question:
username_0: https://youtu.be/-aNKKi68N44
Answers:
username_1: I also get this, not very often but I do get it, does it with the terminal sometimes, I will post a log if I get one.
username_2: @username_0 anyway to reproduce this ?
username_0: As in the video?
username_0: If I touch the screen, it doesn't obey.
username_3: @username_0 Which window manager are you using?
username_0: ```
$ wmctrl -m
Name: Mutter(DeepinGala)
Class: N/A
PID: N/A
Window manager's "showing the desktop" mode: N/A
```
Deepin Mutter 3.20.34
username_4: I also have it on both of my test machines, it happens occasionally with touchpad / mouse doesn't really matter.
username_4: @username_2 @username_3 @username_5
I know it's not easy, but i've found reliable way to reproduce this bug in Manjaro Deepin:
1. Open Deepin file manager in **/** , stick it to right half of the screen
2. Right mouse button on any empty place in folder
3. Hover mouse over **Open in new window as admin**
4. Press Ctrl + Alt + A to take screenshot
5. Choose whole screen area
6. Choose text-tool and place it somwhere on right side of the screen, write anything
7. Press ESC, than ESC again to get out of screenshot tool
8. now try to move mouse left - Deepin file manager will stick
It's reproduceable on both of my real test-machines and VM on Manjaro Deepin.
Please fix it, because no matter how hard it is to get a reproducible effect, in reality it usually just randomly happens with some windows.
username_4: Here's video just in case, https://i.imgur.com/4SY1Qgh.mp4
username_1: I can confirm this does happen sometimes, I have to right click for it to release the grabbing of the window, very random.
username_5: Not sure, but does this still happen with DDE and qt5dxcb-plugin 1.2.0?
username_6: Just tested with qt5dxcb 1.2.0 but the issue does not change
username_5: Yep I know (#1086), but at least it should work fine under DDE. Since @username_0 reported this issue within DDE, I would like to confirm whether it works now :)
username_0: Do you need any further info?
username_5: Does your issue still happen in the most recent version of Deepin? Thanks.
username_4: @username_5
I can confirm it happens in all recent versions of Deepin and always did.
With Linux Deepin 15.10 Stable on Kwin maybe little less - yet still happens.
But it's really easy to see if you try Arch Deepin + deepin-kwin, you don't even need to do anything just try to move or resize window - it will stuck.
With deepin-mutter it is still reproducible by https://github.com/linuxdeepin/developer-center/issues/641#issuecomment-463628381, but without this trickery it just happens randomly.
username_5: Thanks for the info! It's still very weird, though.
username_4: Now **that** is really weird, since i can reproduce it on 3 real machines and all of my VMs...
There must be something else different
username_5: Not sure, but probably. As I said, it uses Qt's private headers, so you should also consider the Qt version differences. This issue got reported before the dde-kwin project started, and my LXQT installation uses xfwm as the window manager, so it is definitely not a KWin issue.
username_4: Current Manjaro Deepin qt5 packages versions, if it helps:

username_7: Sorry, this issue will be closed soon. If it is necessary to discuss it again, please create a new issue.
Status: Issue closed
|
steventhan/portfolio | 160810831 | Title: Navbar hover will change color of entire container box
Question:
username_0: Mouse hover over navbar buttons will not highlight the container box which the link is wrapped in. This issue may have something to do with `display: inline` vs `display: block` for each of the navbar buttons.
quasarframework/quasar | 1050878396 | Title: prevent scroll issues an error in spa in jest test
Question:
username_0: **Describe the bug**
TypeError: Cannot read property 'ios' of undefined
at apply (node_modules/quasar/dist/quasar.cjs.prod.js:6:155837)
at preventScroll (node_modules/quasar/dist/quasar.cjs.prod.js:6:157237)
at preventBodyScroll (node_modules/quasar/dist/quasar.cjs.prod.js:6:157340)
at node_modules/quasar/dist/quasar.cjs.prod.js:6:161781
at callWithErrorHandling (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:6599:22)
at callWithAsyncErrorHandling (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:6608:21)
at Array.job (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:7007:17)
at flushPreFlushCbs (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:6767:45)
at flushJobs (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:6807:5)
at flushJobs (node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:6846:13)
**To Reproduce**
Steps to reproduce the behavior:
1. Mount a component that uses a dialog and is not set to seamless
2. Try to run the test and take a screenshot
**Expected behavior**
Should not throw an error
**Platform (please complete the following information):**
Quasar Version:
@quasar/app Version: 3.1.3
Quasar mode:
- [X] SPA
- [ ] SSR
- [ ] PWA
- [ ] Electron
- [ ] Cordova
- [ ] Capacitor
- [ ] BEX
Tested on:
- [X] SPA
- [ ] SSR
- [ ] PWA
- [ ] Electron
- [ ] Cordova
- [ ] Capacitor
- [ ] BEX
OS: Windows 10 10.0.19042
Node: 14.17.3
NPM: 7.20.6
**Additional context**
The error disappears when seamless is set, which leads me to the conclusion that the trigger for this issue is here https://github.com/quasarframework/quasar/blob/82ba9059401c4324fe390d527a8faaa86a76d53f/ui/src/components/dialog/QDialog.js#L167 because https://github.com/quasarframework/quasar/blob/82ba9059401c4324fe390d527a8faaa86a76d53f/ui/src/components/dialog/QDialog.js#L143 will be false.
But the source of the issue seems to be here https://github.com/quasarframework/quasar/blob/82ba9059401c4324fe390d527a8faaa86a76d53f/ui/src/utils/prevent-scroll.js#L111
Since during the test `client` doesn't seem to have `.is`
Answers:
username_1: Closing in favor of https://github.com/quasarframework/quasar-testing/issues/190
Status: Issue closed
username_1: Another related, probably the same issue: https://github.com/quasarframework/quasar-testing/issues/186
username_0: @username_1 quasarframework/quasar-testing#190 seems to be something different to me: no teleport, no dialog, no backdrop?
username_1: Yeah, issue 186 looks more like it
The underlying issue appears related to me anyway
username_0: ok, at least you have some line numbers that might help and a reproducible scenario |
dotnet/docs | 770435840 | Title: Conflicting advice on Materialized View pattern
Question:
username_0: This is contradictory and confusing, which doesn't help increase our architectural knowledge.
It would be really good to understand the most common and most well-regarded approaches to the problem of "Challenge #2: How to create queries that retrieve data from several microservices". For instance, other documentation on this challenge mentions the use of events to capture data and create a local cache of the required data from other services. Is this a good pattern? What are the pros and cons of these patterns, in what scenarios would you choose one over the other?
Thanks very much!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 19862882-c106-afa7-660e-ead3b5dd880a
* Version Independent ID: fea4a2d6-8aa5-a400-4261-6098e76d362b
* Content: [Challenges and solutions for distributed data management](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/distributed-data-management)
* Content Source: [docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md](https://github.com/dotnet/docs/blob/master/docs/architecture/microservices/architect-microservice-container-applications/distributed-data-management.md)
* Product: **dotnet-architecture**
* Technology: **microservices**
* GitHub Login: @nishanil
* Microsoft Alias: **nanil** |
USGS-R/gsplot | 96142445 | Title: Axis workflow
Question:
username_0: ```r
points(1,2, side=c(1,2,3,4) %>%
axis(c(3,4), labels=FALSE)
```
Answers:
username_0: Support many axes.
username_0: So, still need to:
- [ ] Support many axes
- [ ] Support multiple axis on the same side
- [ ] Support single axis calls
So, say I want major and minor axis tickmarks on the x axis:
```r
gs <- gsplot() %>%
points(y=c(3,1,2), x=1:3, xlim=c(0,NA),ylim=c(0,NA),
col="blue", pch=18, legend.name="Points", xlab="Index") %>%
axis(side=c(3,4), labels=FALSE) %>%
axis(side=1, at=seq(0,3,0.1), tcl=0.1)
```
Status: Issue closed
username_0: Closing this and breaking it into individual issues. |
flutter/flutter | 151513982 | Title: List item trailing elements don't line up with rightmost Toolbar action
Question:
username_0: There are two examples of this problem in the gallery.

<issue_closed>
Status: Issue closed |
crollalowis/.github | 494580849 | Title: add templates for bug
Question:
username_0: ## Expected behaviour

## Current behaviour

## Steps to Reproduce
1.
2.
3.

## Specifications
- Version used
- Browser Name and version
- Operating System and version (desktop or mobile)

http://browser.crolla-lowis.de/
<issue_closed>
Status: Issue closed |
dotnet/EntityFramework.Docs | 877252551 | Title: InitializeComponent and Grid_Loaded
Question:
username_0: Hi,
I took your code but it gives some errors:
`InitializeComponent();` doesn't load the window.
`productsDataGrid` and `categoryDataGrid` aren't linked from the XAML; it says that they don't exist in the current context.
(The first two may be connected.)
Chapter: "Add Code that Handles Data Interaction"
In your code, you don't show the method:
private void Grid_Loaded(object sender, RoutedEventArgs e)
{ }
It appears when I double-click on "Loaded" on the grid.
Cheers
Vittorio
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a691101f-b7ea-4539-3745-6d9b0f5e317f
* Version Independent ID: 8f488a69-cedf-9c45-f977-23b3d8abd3df
* Content: [Get Started with WPF - EF Core](https://docs.microsoft.com/en-us/ef/core/get-started/wpf)
* Content Source: [entity-framework/core/get-started/wpf.md](https://github.com/dotnet/EntityFramework.Docs/blob/main/entity-framework/core/get-started/wpf.md)
* Product: **entity-framework**
* Technology: **entity-framework-core**
* GitHub Login: @username_2
* Microsoft Alias: **jeliknes**
Answers:
username_1: /cc @username_2
username_2: Hi @username_0 can you provide more details about your setup? I just re-walked the tutorial and followed it exactly, and it worked fine.
username_1: **EF Team Triage:** Closing this issue as the requested additional details have not been provided and we have been unable to reproduce it.
*BTW this is a canned response and may have info or details that do not directly apply to this particular issue. While we'd like to spend the time to uniquely address every incoming issue, we get a lot traffic on the EF projects and that is not practical. To ensure we maximize the time we have to work on fixing bugs, implementing new features, etc. we use canned responses for common triage decisions.*
Status: Issue closed
|
realm/realm-js | 876888896 | Title: BSON: For React Native please polyfill crypto.getRandomValues
Question:
username_0: <!---
Questions: If you have questions about HOW TO use Realm, please ask on
StackOverflow: http://stackoverflow.com/questions/ask?tags=realm
We monitor the `realm` tag.
Feature Request: Just fill in the first two sections below.
Bugs: To help you as fast as possible with an issue please describe your issue
and the steps you have taken to reproduce it in as much detail as possible.
-->
## Goals
Regular use.
<!--- What are you trying to achieve? -->
## Expected Results
Regular use.
<!--- What did you expect to happen? -->
## Actual Results
A warning started appearing after upgrading to Realm 10.4.0.
```
JSWarning
BSON: For React Native please polyfill crypto.getRandomValues, e.g. using: https://www.npmjs.com/package/react-native-get-random-values.
src/conf.js:594:34
node_modules/bson/dist/bson.browser.umd.js:2617:18 insecureRandomBytes
node_modules/bson/dist/bson.browser.umd.js:5182:40
node_modules/bson/dist/bson.browser.umd.js:14:9 createCommonjsModule
node_modules/bson/dist/bson.browser.umd.js:5175:37
node_modules/bson/dist/bson.browser.umd.js:2:72
node_modules/bson/dist/bson.browser.umd.js:5:2
node_modules/metro/src/lib/polyfills/require.js:322:6 loadModuleImplementation
node_modules/realm/lib/extensions.js:54:35
node_modules/realm/lib/index.js:64:24
node_modules/metro/src/lib/polyfills/require.js:322:6 loadModuleImplementation
src/db/index.js:8
```
## Steps to Reproduce
<!--- What are steps we can follow to reproduce this issue? -->
Just use Realm (offline)
## Code Sample
<!---
Please provide a code sample or test case that highlights the issue.
If relevant, include your model definitions.
For larger code samples, links to external gists/repositories are preferred.
Full projects that we can compile and run ourselves are ideal!
-->
## Version of Realm and Tooling
- Realm JS SDK Version: 10.4.0
- Node or React Native: React Native
- Client OS & Version: All
- Which debugger for React Native: None
Answers:
username_0: Looks like this started to happen after BSON upgraded from "4.2.3" to "4.3.0" (dependency of Realm) which is odd since Realm locks the version to "^4.2.0"
username_1: By default I had exported `new Realm(databaseOptions)`, but instead of importing this I just started using methods which I wrote in the same file, which is why I was getting this error. Once I imported the default-exported Realm instance together with the method I had written to get all data in the Home screen, that fixed the issue.
I just imported the default export of realm instance with the method i had written to get all data in Home screen and that fixed the issue.
username_2: @username_0 very sorry for the confusion - it's an oversight on my part in the latest changes to the BSON package.
This warning was only supposed to trigger if/when generating a new `ObjectId` (or other random ids) without a proper polyfill.
A PR has been dispatched and will hopefully be out soon: https://github.com/mongodb/js-bson/pull/435
username_0: @username_2 thank you! Looking forward to the update.
Do you happen to know if we should still install the recommended dependency when using Realm? It would be great for warnings to only happen in DEV also.
username_2: @username_0 If you're generating `ObjectId` (or the upcoming `UUID`, part of https://github.com/realm/realm-js/pull/3605) on the device, then yes, this would be the recommendation (this will also be mentioned in the docs soon).
The reason being that a part of `ObjectId`, and most of `UUID` is random, and React Native does not (yet) expose any cryptographically strong random value generation. (as Node and newer browsers currently do).
username_2: @username_0 in regards to your comment here: https://github.com/mongodb/js-bson/pull/435#issuecomment-833820693 the warning should disappear if you install https://www.npmjs.com/package/react-native-get-random-values (and follow the instructions - it has to be loaded).
This is not a solution, by any means, but could perhaps be better than having false error reporting, for the time being?
Just a suggestion.
username_0: @username_2 thanks for the suggestion! I would rather wait than adding a dependency that will be pretty much unused.
Would it break Realm if I downgrade bson to 4.2.x directly in the lock file?
Status: Issue closed
username_3: @username_2, I'm using RealmJS 10.10.1 and I still get this warning message when I use `new ObjectID()` from bson. I've tried to install the suggested `react-native-get-random-values` and load it in my `index.js` but the warning was still occurring so I've now uninstalled it. What's the current situation on this issue?
username_4: @username_3
Which version of the `bson` package do you use?
Can you provide a small code snippet which can reproduce the issue?
username_3: @username_4
It looks like I don't have the `bson` package installed in `npm list`. Should I have installed it separately from `realm`?
username_2: @username_3 did you rebuild the app, after adding `react-native-get-random-values`? That could result in the behaviour you're describing (I think).
It looks like `[email protected]` is bundled with `[email protected]`, so there shouldn't be a need to install it separately.
I'm not working at Realm anymore, so I'd probably refer you to @username_4, if further investigation is needed :)
username_4: @username_3
My simple app (with the code from https://github.com/realm/realm-js/issues/3714#issuecomment-979856288) doesn't give me a warning. Please run `npm list --depth 1 realm`.
username_3: @username_4, thanks for your reply.
So, I've run `npm list --depth 1 bson` and I get:
[email protected]
└── [email protected]
I've created a simple app and installed BSON (with `npm install [email protected]`) then added this line in App.js:
`const a = new ObjectId()` (also had to add `import { ObjectId } from "bson"`) at the top of App.js.
I get the same warning `BSON: For React Native please polyfill crypto.getRandomValues, e.g. using: https://www.npmjs.com/package/react-native-get-random-values.`
username_5: I am getting the same and realm doesnt write after that warning. I have tried everything in vain. Did u get a work around?
username_3: @username_5, nope, still haven't found a way around it. In my case, it doesn't prevent Realm writes though.
@username_4, any idea why those warnings are showing?
username_6: I put `import 'react-native-get-random-values'` before `import Realm` and the warning stopped.
```
import 'react-native-get-random-values'
import Realm from 'realm'
```
username_7: Thx @username_6! This fixed my issue. |
Chia-Network/chia-blockchain | 894194637 | Title: Not staying in Sync
Question:
username_0: **Describe the bug**
My ISP blocks all incoming ports, so I cannot do any port forwarding. I have added the western-us introducer but I never get more than 2-5 full nodes. I have tested adding more nodes manually which works for a while but then they eventually drop off too.
**Desktop (please complete the following information):**
- OS: Windows
- OS Version/Flavor: 10 20H1
- CPU: Intel i7-8800k
**Additional context**
Add any other context about the problem here.
Answers:
username_1: 2021-05-18T12:54:44.930 full_node full_node_server : WARNING Banning 172.16.31.10 for 600 seconds
2021-05-18T12:54:44.931 full_node chia.full_node.full_node: ERROR Error with syncing: <class 'RuntimeError'>Traceback (most recent call last):
File "chia\full_node\full_node.py", line 628, in _sync
RuntimeError: Weight proof did not arrive in time from peer: 172.16.31.10
I am getting these errors too.
username_2: If you have 2 farming devices, try to connect them correctly using wallet key sync, and allow network use during the first run of the GUI.
username_0: I would like to see if we can address my issues and not another issue in the bug post.
username_3: can you post your debug.log |
JuliaDynamics/ChaosTools.jl | 289448365 | Title: Cao’s method for estimating dimension of delay coordinates embedding
Question:
username_0: _From @username_0 on December 22, 2017 23:40_
Prof. <NAME> knows this.
_Copied from original issue: JuliaDynamics/DynamicalSystemsBase.jl#2_
Answers:
username_0: http://linkinghub.elsevier.com/retrieve/pii/S0167278997001188
[1] <NAME>, “Practical method for determining the minimum embedding dimension of a scalar time series,” Phys. D Nonlinear Phenom., vol. 110, no. 1–2, pp. 43–50, 1997.
Here is the paper.
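For reference, the key quantities from the paper, sketched here in the standard formulation (double-check against the paper; notation may differ slightly):
```latex
a(i,d) = \frac{\lVert y_i(d+1) - y_{n(i,d)}(d+1) \rVert}
              {\lVert y_i(d)   - y_{n(i,d)}(d)   \rVert}, \qquad
E(d) = \frac{1}{N - d\tau} \sum_{i=1}^{N - d\tau} a(i,d), \qquad
E_1(d) = \frac{E(d+1)}{E(d)}
```
Here `y_i(d)` is the i-th delay vector in embedding dimension `d`, `n(i,d)` is the index of its nearest neighbour, and the minimum embedding dimension is the `d` beyond which `E_1(d)` stops changing (saturates near 1).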
Status: Issue closed
|
JokerQyou/bot | 95925748 | Title: Add /listadmin /deladmin /addadmin commands
Question:
username_0: for:
* list current admin usernames
* delete a specific admin (cannot delete oneself)
* add a user as admin
The admin list is stored as `config.__name__:admins` list in Redis.
An admin can:
* add other user as admin
* delete other user as admin
* add a vim tip
Answers:
username_0: Assumptions:
* an admin **must** have a username
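A minimal sketch of the Redis-backed admin list (assuming redis-py 3+; the key and helper names are illustrative):
```python
import redis

r = redis.Redis()
ADMIN_KEY = "config:admins"  # i.e. config.__name__ + ":admins"

def list_admins():
    return [name.decode() for name in r.lrange(ADMIN_KEY, 0, -1)]

def add_admin(username):
    if username not in list_admins():
        r.rpush(ADMIN_KEY, username)

def del_admin(requester, username):
    if requester == username:
        raise ValueError("an admin cannot delete oneself")
    r.lrem(ADMIN_KEY, 0, username)  # remove all occurrences
```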
Status: Issue closed
|
scala/bug | 913881238 | Title: Ordering change in List.groupBy between 2.12.12 and 2.12.13
Question:
username_0: ## reproduction steps
using Scala 2.12.13 or 2.12.14:
```scala
scala> List(1, 2, 3).groupBy(identity).foreach(println)
(1,List(1))
(2,List(2))
(3,List(3))
```
## problem
In 2.12.12 and below:
```scala
scala> List(1, 2, 3).groupBy(identity).foreach(println)
(2,List(2))
(1,List(1))
(3,List(3))
```
While both results seem correct, the change in ordering is visible when you convert back to an ordering-sensitive data structure, such as a list (or if you do `.foreach`). This may break code that relies on the old ordering.
Answers:
username_1: a next step here might be to bisect the Scala version to see which scala/scala PR is responsible
but also, @scala/collections does this ring a bell for anybody?
username_2: It rang a bell but this wasn't das Ding https://github.com/scala/bug/issues/11276 get it, ding.
username_3: https://github.com/scala/scala/pull/8948? (https://github.com/scala/scala/pull/9376 is also possible, but tiny)
username_2: This is a good one https://github.com/scala/bug/issues/4558 where the problem is
```
scala> Seq(1,2,3).view.groupBy(identity)
java.lang.StackOverflowError
```
and the expectation is
```
scala> Seq(1,2,3).groupBy(identity)
res3: scala.collection.immutable.Map[Int,Seq[Int]] = Map(3 -> List(3), 1 -> List(1), 2 -> List(2))
```
username_4: This did come up for discussion. We considered adding a backwards compatibility mode with a system property.
But since the 2.12.12 ordering was not the same as the input collection's (i.e., we didn't use a `LinkedHashMap` internally), it seemed like we were within our rights to change the undefined behaviour here to achieve better performance.
username_4: I have added a compatibility note to the release notes.
```
The internal implementation of `groupBy` has been [optimized](https://github.com/scala/scala/pull/8948) to reduce allocations. This can result in a different ordering of elements when you iterate the resulting `Map`. The ordering of the returned map is not specified behaviour and should not be relied upon; for ordering-sensitive use cases consider building a `LinkedHashMap` or `TreeMap` instead.
```
Status: Issue closed
username_3: Obligatory https://xkcd.com/1172/ |
ant-design/ant-design | 584791131 | Title: Table内嵌子表格时,middle/small模式下,内嵌Table与父级Row存在错位的情况
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://github.com/ant-design/pro-table/issues/237](https://github.com/ant-design/pro-table/issues/237)
### Steps to reproduce
When nesting a table, in middle/small size the nested Table is misaligned with the parent Row
### What is expected?
No misalignment
### What is actually happening?
Misalignment
| Environment | Info |
|---|---|
| antd | 4.0.3 |
| React | 16.13.0 |
| System | Windows64 |
| Browser | Chrome 80 |
---
原样式如下:
``` css
.ant-table tbody > tr .ant-table {
margin: -16px -16px -16px 33px;
}
```
Resolved after overriding globally:
``` css
.ant-table.ant-table-middle tbody > tr .ant-table {
margin-top: -12px;
margin-bottom: -12px;
}
.ant-table.ant-table-small tbody > tr .ant-table {
margin-top: -8px;
margin-bottom: -8px;
}
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
<issue_closed>
Status: Issue closed |
seanmonstar/warp | 841966017 | Title: Tests with spaces in query string
Question:
username_0: Using the test framework to simulate requests like:
```warp::test::request().path("/api?somekey=someparam")```
Works fine.
But if you insert a space like:
```warp::test::request().path("/api?somekey=some param")```
It crashes with "test request path invalid" at https://github.com/seanmonstar/warp/blob/b6d1fc0719604ef1010aec00544408e6af1289a5/src/test.rs#L200
Because it attempts to directly parse the given path as a URI, and spaces in a URI should be percent-encoded, the path is rejected.
And, if you do percent encode it:
```warp::test::request().path("/api?somekey=some%20param")```
It parses fine, but instead of sending a space to the request handler, it actually sends a ```%20```
So the handler performs incorrectly.
However, if you make a web request with that same path, ```https://someurl.com/api?somekey=some param```, it parses fine and sends the space to the handler.
So tests cannot be written to match web requests (with a space in query parameters).
Answers:
username_1: If you use the `filter` function defined for `request`, like here https://github.com/seanmonstar/warp/blob/c52638be787fc01374c5263d75ea74e63283238a/tests/query.rs#L7-L16, it'll parse the `%20` and pass a space to the handler.
Status: Issue closed
|
mash-up-kr/Thing-BackEnd | 467234695 | Title: 중복 닉네임 방지
Question:
username_0: ---
<br>
## Check List
- [ ] issue 제목은 유의미한가?
- [ ] issue 내용은 issue 내용만 확인하고도 모르는 사람도 파악할 수 있을 정도로 기술되었는가? (무엇을, 언제, 어디서...)
- [ ] reference가 있다면 추가했는가?
- [ ] 관련 issue가 있다면 추가했는가?
- [ ] 유의미한 Label을 추가했는가?
- [ ] Assginees를 추가했는가?
- [ ] Estimate를 추가했는가?
- [ ] 관련 Milestone이 있다면 추가했는가?
- [ ] 관련 Epics가 있다면 추가했는가?
---<issue_closed>
Status: Issue closed |
magnusmanske/cersei | 957591111 | Title: Bug?
Question:
username_0: Shouldn't this
https://github.com/username_1/cersei/blob/b5350497f05a227f9bfc158fb7c5e3aa1511679f/src/values.py#L62
be
`if letter not in self.LETTER_TO_TYPE:`
Answers:
username_0: Anyway, if you use Enums it looks something like this:
```
from enum import Enum

class Letters(Enum):
    PROPERTY = "P"
    ITEM = "Q"
    LEXEME = "L"

class ItemValue:
    def __init__(self, q):
        q = q.strip().upper()
        if len(q) < 2:
            raise Exception("ItemValue: " + q + " is too short")
        # Letters(...) throws a meaningful ValueError if it cannot be converted
        self.item_type = Letters(q[0])
        number_string = q[1:]
        if not number_string.isnumeric():
            raise Exception("ItemValue: " + number_string + " is not numeric")
        self.item_id = int(number_string)
```
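Usage of the sketch above would then be:
```python
value = ItemValue("Q42")
print(value.item_type)  # Letters.ITEM
print(value.item_id)    # 42
```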
username_0: Also consider renaming the "item_id" to entity_number or something which is more generic and less confusing. :)
username_0: See my Wikidata classes here, inspired by yours :)
https://github.com/username_0/LexODS/blob/master/models/wikidata.py (untested code)
username_1: Went with enums as per #1
Status: Issue closed
|
regolith-linux/regolith-desktop | 493134412 | Title: i3 does not show error dialog if config file fails to parse
Question:
username_0: Stock i3 and previous versions of Regolith would display an error dialog if the i3 config file has a parsing issue. In testing this is no longer the case and there is no visual indication of a parse error on config file reload.
Status: Issue closed
Answers:
username_0: I made no explicit fix but just verified it works as expected on a fresh install of Regolith R1.2 on Ubuntu 19.10.
interbit/interbit | 340406113 | Title: Update platform to use interbit-core 0.14.0
Question:
username_0: [ ] Update `interbit-core` and `interbit-covenant-utils` refs to 0.14.0
[ ] Update API test cases in `interbit-test`
[ ] Update middleware in `interbit-ui-tools` to handle changes to `subscribe()` behaviour
[ ] Update blockchain viewer to handle changes to use `blockSubscribe()`
[ ] Use `local-forage` for local storage instead of `cli.kvPut()` and `cli.kvGet()`
[ ] Publish new `interbit` version
[ ] Update covenant packages to use new published `interbit` version
[ ] Test deploy<issue_closed>
Status: Issue closed |
darklang/dark | 1125565331 | Title: consider switch to NodaTime
Question:
username_0: See https://www.roji.org/postgresql-dotnet-timestamp-mapping for rationale
Answers:
username_0: @username_1 Can you give me your opinion on whether you think we should handle this now (as part of the rewrite) or later (as a tech-debt improvement thing)?
username_1: I'd recommend tackling it sooner than later.
The `DateTime`/`DateTimeOffset` pains in .NET are real; if we could avoid them from the start, that'd be ideal.
username_0: OK, I'll do this next then. |
lordmilko/PrtgAPI | 538918910 | Title: Get-SensorHistory sometimes returns a decimal value
Question:
username_0: **Describe the bug**
When attempting to run the below command:
```powershell
Get-SensorHistory -Id 18955 -StartDate (Get-Date) -EndDate (Get-Date).AddDays(-30) -Average 86400
```
The free bytes return value can sometimes produce a decimal value, which can be seen on the data return for the 17/12 and 12/12:
```
DateTime SensorId FreeSpace(%) FreeBytes(MByte) Total(MByte) Downtime(%) Coverage(%)
-------- -------- ------------ ---------------- ------------ ----------- -----------
17/12/2019 00:00:00 18955 70 42.778 60911 0 100
16/12/2019 00:00:00 18955 70 42787 60911 0 100
15/12/2019 00:00:00 18955 70 42856 60911 0 100
14/12/2019 00:00:00 18955 70 42881 60911 0 100
13/12/2019 00:00:00 18955 70 42887 60911 0 100
12/12/2019 00:00:00 18955 70 42.889 60911 0 100
11/12/2019 00:00:00 18955 70 42883 60911 0 100
10/12/2019 00:00:00 18955 70 42843 60911 0 100
09/12/2019 00:00:00 18955 70 42825 60911 0 100
08/12/2019 00:00:00 18955 70 42810 60911 0 100
07/12/2019 00:00:00 18955 70 42822 60911 0 100
06/12/2019 00:00:00 18955 70 42823 60911 0 100
05/12/2019 00:00:00 18955 70 42837 60911 0 100
04/12/2019 00:00:00 18955 70 42844 60911 0 100
03/12/2019 00:00:00 18955 70 42864 60911 0 100
02/12/2019 00:00:00 18955 70 42857 60911 0 100
01/12/2019 00:00:00 18955 70 42872 60911 0 100
30/11/2019 00:00:00 18955 70 42879 60911 0 100
29/11/2019 00:00:00 18955 70 42871 60911 0 100
28/11/2019 00:00:00 18955 70 42882 60911 0 100
27/11/2019 00:00:00 18955 70 42899 60911 0 100
26/11/2019 00:00:00 18955 70 42819 60911 0 100
25/11/2019 00:00:00 18955 70 42791 60911 0 100
24/11/2019 00:00:00 18955 70 42781 60911 0 100
23/11/2019 00:00:00 18955 70 42795 60911 0 100
22/11/2019 00:00:00 18955 70 42798 60911 0 100
21/11/2019 00:00:00 18955 70 42809 60911 0 100
20/11/2019 00:00:00 18955 70 42825 60911 0 100
19/11/2019 00:00:00 18955 70 42825 60911 0 100
18/11/2019 00:00:00 18955 70 42846 60911 0 100
```
**Steps to reproduce**
Steps to reproduce are above. I have checked the PRTG Server using the web client, and the history data appears to be there in full so I cannot see any reason why it is showing some of the values as a decimal.
**What is the output of `Get-PrtgClient -Diagnostic`?**
```powershell
Get-PrtgClient -Diagnostic
PSVersion : 5.1.18362.145
PSEdition : Desktop
OS : Microsoft Windows 10 Enterprise
PrtgAPIVersion : 0.9.12
Culture : en-GB
CLRVersion : .NET Framework 4.8 (528040)
PrtgVersion : 19.4.54.1506
PrtgLanguage : english.lng
```
**Additional context**
This may be a result of attempting to fix the other issues in the new 0.9.12 release
Answers:
username_1: Hi @username_0,
Can I get you to please run `Set-PrtgClient -LogLevel Response`, run this again and provide the XML of the records that had a decimal value? (the December 12th and 17th) and then for comparison also provide the XML for the 16th
username_0: Hi @username_1,
I have run that, but for some reason, it isn't outputting any data for the 17th. Below is what I managed to get:
```
VERBOSE: Get-SensorHistory: <?xml version="1.0" encoding="UTF-8"?> <histdata totalcount="30" listend="0"> <prtg-version>19.4.54.1506</prtg-version> </histdata> VERBOSE: Get-SensorHistory: <?xml version="1.0" encoding="UTF-8"?>
<histdata totalcount="30" listend="1">
<prtg-version>19.4.54.1506</prtg-version>
<item>
<datetime>16/12/2019</datetime>
<datetime_raw>43816.0000000000</datetime_raw>
<value channel="Free Space" channelid="0">70 %</value>
<value_raw channel="Free Space" channelid="0">70.2311</value_raw>
<value channel="Free Bytes" channelid="1">42,778 MByte</value>
<value_raw channel="Free Bytes" channelid="1">44856456689.7778</value_raw>
<value channel="Total" channelid="2">60,911 MByte</value>
<value_raw channel="Total" channelid="2">63869808640.0000</value_raw>
<value channel="Downtime" channelid="-4">0 %</value>
<value_raw channel="Downtime" channelid="-4">0.0000</value_raw>
<coverage>100 %</coverage>
<coverage_raw>0000010000</coverage_raw>
</item>
<item>
<datetime>15/12/2019</datetime>
<datetime_raw>43815.0000000000</datetime_raw>
<value channel="Free Space" channelid="0">70 %</value>
<value_raw channel="Free Space" channelid="0">70.2453</value_raw>
<value channel="Free Bytes" channelid="1">42,787 MByte</value>
<value_raw channel="Free Bytes" channelid="1">44865509504.0000</value_raw>
<value channel="Total" channelid="2">60,911 MByte</value>
<value_raw channel="Total" channelid="2">63869808640.0000</value_raw>
<value channel="Downtime" channelid="-4">0 %</value>
<value_raw channel="Downtime" channelid="-4">0.0000</value_raw>
<coverage>100 %</coverage>
<coverage_raw>0000010000</coverage_raw>
</item>
<item>
<datetime>14/12/2019</datetime>
<datetime_raw>43814.0000000000</datetime_raw>
<value channel="Free Space" channelid="0">70 %</value>
<value_raw channel="Free Space" channelid="0">70.3579</value_raw>
<value channel="Free Bytes" channelid="1">42,856 MByte</value>
<value_raw channel="Free Bytes" channelid="1">44937460424.0287</value_raw>
<value channel="Total" channelid="2">60,911 MByte</value>
<value_raw channel="Total" channelid="2">63869808640.0000</value_raw>
<value channel="Downtime" channelid="-4">0 %</value>
<value_raw channel="Downtime" channelid="-4">0.0000</value_raw>
<coverage>100 %</coverage>
<coverage_raw>0000010000</coverage_raw>
</item>
<item>
<datetime>13/12/2019</datetime>
<datetime_raw>43813.0000000000</datetime_raw>
<value channel="Free Space" channelid="0">70 %</value>
<value_raw channel="Free Space" channelid="0">70.4000</value_raw>
<value channel="Free Bytes" channelid="1">42,881 MByte</value>
<value_raw channel="Free Bytes" channelid="1">44964333795.5556</value_raw>
<value channel="Total" channelid="2">60,911 MByte</value>
<value_raw channel="Total" channelid="2">63869808640.0000</value_raw>
<value channel="Downtime" channelid="-4">0 %</value>
<value_raw channel="Downtime" channelid="-4">0.0000</value_raw>
<coverage>100 %</coverage>
<coverage_raw>0000010000</coverage_raw>
</item>
<item>
<datetime>12/12/2019</datetime>
<datetime_raw>43812.0000000000</datetime_raw>
<value channel="Free Space" channelid="0">70 %</value>
<value_raw channel="Free Space" channelid="0">70.4094</value_raw>
<value channel="Free Bytes" channelid="1">42,887 MByte</value>
<value_raw channel="Free Bytes" channelid="1">44970348188.4444</value_raw>
<value channel="Total" channelid="2">60,911 MByte</value>
<value_raw channel="Total" channelid="2">63869808640.0000</value_raw>
<value channel="Downtime" channelid="-4">0 %</value>
<value_raw channel="Downtime" channelid="-4">0.0000</value_raw>
<coverage>100 %</coverage>
<coverage_raw>0000010000</coverage_raw>
</item>
```
username_1: Did that result in the value 42.887 for 12/12/19 when you ran that?
username_1: I think the `datetime` and `datetime_raw` are slightly different values; can you also provide the data for 11/12/19?
Status: Issue closed
username_1: Hi @username_0,
I have published a new [release candidate](https://ci.appveyor.com/api/projects/username_1/prtgapi/artifacts/PrtgAPI.zip) that should have resolved this issue. Can you please give it a try and let me know how you go
Thanks for raising this issue
Regards,
username_1
username_0: Hi @username_1,
I have just manually downloaded the latest version, and that looks to have resolved the issue.
Many thanks again for resolving this so quickly.
Regards,
username_0
username_1: Hi @username_0,
Please be advised PrtgAPI 0.9.12 has now been released, which includes the fix for this issue as well as other enhancements
If you wish to update your system PrtgAPI installation, simply run
```powershell
Update-Module PrtgAPI
```
and reopen PowerShell
Regards,
username_1 |
vimeo/psalm | 593226022 | Title: InvalidPropertyAssignmentValue false positive
Question:
username_0: The code
```
class HttpError {
/** @var array */
private const ERRS = [
'403' => 'Forbidden',
'404' => 'NotFound',
'500' => 'InternalServerError'
];
/** @var null|string */
protected $code;
public function init(string $code) : void {
if(array_key_exists($code, self::ERRS))
$this->code = $code; // InvalidPropertyAssignmentValue
else
$this->code = '500';
}
}
```
triggers
`ERROR: InvalidPropertyAssignmentValue - invalidPropertyAssignmentValue.php:17:27 - $this->code with declared type 'null|string' cannot be assigned type 'int(403)|int(404)|int(500)' (see https://psalm.dev/145)`
It seems the assigned type has been inferred from ERRS by type-juggling, whereas the function declaration indicates otherwise.
Answers:
username_1: Basically `array_key_exists()` handling needs to be more intelligent and should not assume first argument (`$code`) has the same type as keys of the second one, as it may have returned `true` even though the actual types differ.
username_2: Yeah, this falls under PHP's ickiness around array keys.
You can work around this by adding an explicit cast: https://psalm.dev/r/886a972ce8
Status: Issue closed
username_0: Cool, thanks! |
BragaAndrei/testing | 1111808205 | Title: Descending function is working wrong
Question:
username_0: Steps to reproduce:
1. Open the "ListBoxer" app
2. Insert "9" in the input field and click "Add to list"
3. Insert "8" in the input field and click "Add to list"
4. Insert "7" in the input field and click "Add to list"
5. Select "Sort Order": "Descending"

**Expected result:** 9, 8, 7 in the result list.
**Actual result:** 9, 7, 8 in the result list.
**Severity:** 2
**Priority:** High

System: Windows 10
<issue_closed>
Status: Issue closed |
open-telemetry/opentelemetry-python-contrib | 844165105 | Title: opentelemetry-propagator-ot-trace not released as a part of the release process
Question:
username_0: **Describe your environment**
N/A
**Steps to reproduce**
N/A
**What is the expected behavior?**
`opentelemetry-propagator-ot-trace` package should be released with the release process and present in PyPI.
**What is the actual behavior?**
`opentelemetry-propagator-ot-trace` is not present in PyPI.
**Additional context**
It seems that with the addition of `opentelemetry-propagator-ot-trace` package a new top category/folder was introduced into the codebase, but the build script was not updated accordingly: https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/scripts/build.sh#L19<issue_closed>
Status: Issue closed |
borgbackup/borg | 148898150 | Title: repository wide verification
Question:
username_0: If I want to do repository wide verification, there is currently no efficient way to do it.
I can run `borg extract -n` on individual archives. I could loop this over all archives in a repository, but this is very inefficient, and runs contrary to the deduplicating principle. All chunks should be verified only once.
So some repository wide verification which checks all the chunks would be useful.
Answers:
username_1: There is also `borg check`.
username_2: Yeah but that only does CRC32 verification. At the chunk length probably still okay for bit rot detection, but not good enough for tamper detection.
Btw. check "kind of" does this if there is no manifest in the repo. It'll iterate over all chunks and decrypt them, which then also verifies their integrity.
I'd propose something like `borg check --full/thorough/paranoid` for this.
username_1: We can do that, but it won't be cheap. Lots of CPU for decrypt/decompress/hash and for remote repos also lots of traffic.
username_0: For remote repositories, would it be possible to run such verifications directly on the remote site?
username_1: borg check (repo only, simple crc32 check): yes
for encrypted repos, real verification needs verifying the hmac and thus needs the hmac key - only the client has that key, thus: no
username_0: But if you are throwing the information away (all you need to know is that the verification succeeded), why does there need to be lots of traffic coming back to the local side? If that is what you meant (and I assume you did), then all you need to know is that the verification succeeded. And if it didn't, you need to get the error. But that shouldn't take much traffic either.
username_2: If you trust the remote anyway, just run the check command on the server?
username_3: Why is the hash calculated after compression and encryption?
username_2: There are two different HMACs with two different keys used for two different purposes:
- One is for the chunk ID and is over the uncompressed, unencrypted data as an identity function of the data. This is an identity MAC and can be used to detect exchanged data.
- One is the authentication HMAC. This is an encrypt-then-MAC scheme, which is often recommended over other schemes because (synopsis) it's hard to fuck up.
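As a toy illustration of checking both MACs per chunk (hypothetical data layout, not borg's actual code; real borg also decrypts and decompresses between the two steps):
```python
import hashlib
import hmac

def full_verify(chunks, auth_key, id_key):
    # chunks: iterable of (chunk_id, stored_mac, payload) tuples
    errors = []
    for chunk_id, stored_mac, payload in chunks:
        # 1. authentication HMAC over the stored (encrypted) payload
        computed = hmac.new(auth_key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(computed, stored_mac):
            errors.append((chunk_id, "authentication MAC mismatch"))
            continue
        # 2. identity MAC: recompute the chunk id from the plaintext
        plaintext = payload  # placeholder: decrypt + decompress here
        if hmac.new(id_key, plaintext, hashlib.sha256).digest() != chunk_id:
            errors.append((chunk_id, "chunk id mismatch"))
    return errors
```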
username_2: No, this would make fingerprinting attacks very easy.
Status: Issue closed
username_2: Scheduled for 1.1
username_1: https://github.com/borgbackup/borg/commit/0bceaf0736d503c7b5718275e340d2f0ab36185f |
fabric/fabric | 68121597 | Title: "fab four" should play a Beatles tune
Question:
username_0: This is patently obvious.
Status: Issue closed
Answers:
username_1: If we did this, though, then `fab five` would have to display an image of some basketball players, and then who knows what comes after that! It's madness, I tell you!
*it was cool meeting username_0 at PyCon :heart:* |
Azure/azure-xplat-cli | 240618850 | Title: azure VM image list-publishers errors with "Cannot find module 'is-stream'"
Question:
username_0: CLI Version: **0.10.14**
OS Type: Win
Installation via: win installer
Mode: **ARM**
Environment: **AzureCloud**
Description:
`azure vm image list-publishers -l "West US"` command outputs 'Cannot find module 'is-stream''.
Steps to reproduce:
1) Run `azure vm image list-publishers -l "West US.`
Error stack trace:
**2017-07-05T13:06:20.653Z:
{ Error: Cannot find module 'is-stream'
<<< async stack >>>
<<< raw stack >>>
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\node_modules\azure-arm-compute\node_modules\ms-rest\lib\serialization.js:7:18)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
stack: [Getter/Setter],
code: 'MODULE_NOT_FOUND',
__frame: undefined,
rawStack: [Getter] }
Error: Cannot find module 'is-stream'
<<< async stack >>>
<<< raw stack >>>
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\node_modules\azure-arm-compute\node_modules\ms-rest\lib\serialization.js:7:18)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
**
Answers:
username_1: Please see: https://github.com/Azure/azure-xplat-cli/issues/3624#issuecomment-311789820
username_0: I hope it will be fixed with that release; we had another issue with is-stream, which was solved by using an updated package.json from https://github.com/username_2/azure-xplat-cli/blob/0cf10abf042c6725ba0bda9d39d7d964ec74dc78/package.json
However, I did test with dev and batch-beta pulls, and experienced the same issue on this specific VM list part.
username_2: @username_0 --
1. are you saying that you continued to experience the same error after taking that package.json and running npm install on your branch?
2. can you try installing the last release `0.10.14` from npm and see if you still run into that error?
Knowing these would help us a lot. I tried to repro this, but found only one scenario as mentioned in #3624 and I believe I've fixed that.
username_2: Okay I can confirm this much -
With my changes linked to #3624 (one PR to touch package.json and another PR to improve the windows installer), I built a windows installer for this app locally and installed it. With that version installed, I see that this command works as expected.
@username_0 -- If you have my package.json change and if you've npm installed it and are running from code, I expect your scenario to work without issues. If it doesn't, please let me know.
username_2: potential dupe of #3624
username_2: Currently testing the 0.10.15 installer and this command now works as expected. :shipit:
Status: Issue closed
username_2: 0.10.15 is now publicly available and this bug is now fixed. |
yous/YousList | 207441341 | Title: Problem with making rules using python 2.6 or below
Question:
username_0: There are compatibility issues when generating `Rules.1blockpkg` and `Rules.1blockpkg.json`. <br/> A `TypeError: __init__() got an unexpected keyword argument 'object_pairs_hook'` error is raised when the following scripts are executed:
* `./bin/generate_rules.py username_1list.txt > Rules.1blockpkg.json`
* `./bin/minify_pkg.py`
* `./bin/prettify_pkg.py` <br />
See this: 
This error comes from the incompatibility between *python 2.7* and *2.6 (or below)*, since the `json` package in *python 2.6* doesn't support the `object_pairs_hook` keyword ([Can I get JSON to load into an OrderedDict in Python?](http://stackoverflow.com/a/6921842)). To fix this, every contributor who tries to add rules with *python 2.6* has to install the `simplejson` package in advance and insert this code:
``` python
#import json
import simplejson as json
```
I think there are two solutions for this problem:
1. write another script for *python 2.4 and 2.6*
2. check what version of the Python Interpreter is interpreting the script, and import module conditionally (looks a bit messy, though)
``` python
import sys

if sys.version_info >= (2, 7):
    import json
else:
    import simplejson as json
```
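For reference, the incompatible call has roughly this shape (a sketch only; the real call sites live in the scripts listed above):
``` python
import json
from collections import OrderedDict

# object_pairs_hook keeps the rule order stable, but the keyword only
# exists in the json module of Python >= 2.7
with open('Rules.1blockpkg.json') as f:
    rules = json.load(f, object_pairs_hook=OrderedDict)
```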

Answers:
username_1: Thank you for the report! I added `requirements.txt` and `requirements26.txt`. For python >= 2.7, running `pip install -r requirements.txt` is enough, whereas python < 2.7 needs to run `pip install -r requirements26.txt` additionally.
Status: Issue closed
|
raiden-network/raiden | 515388153 | Title: Add `amount_for_fees` to `EventPaymentSentSuccess`
Question:
username_0: ## Problem Definition
As also seen in this [issue](https://github.com/raiden-network/raiden/issues/5078#issuecomment-547977286), when a payment is successfully sent and fees were paid, the `amount` in the event is only the amount transferred, without the fees.
That is really confusing. When the user sees this he thinks: okay, I spent 1000 tokens. But then his balance got reduced by 1012 tokens and he thinks a bug made him lose money!!!
## Task
Add the `amount_for_fees` as a field of the `EventPaymentSentSuccess`. This way users who consume the event from the API will have all the information they need about the payment that happened and will not be surprised.
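A rough sketch of the shape this could take (the field name comes from this issue; the surrounding class is abbreviated here and its real definition lives in raiden's transfer events):
```python
from dataclasses import dataclass

@dataclass
class EventPaymentSentSuccess:
    # ...existing fields (identifier, target, ...) elided...
    amount: int
    amount_for_fees: int  # proposed: the fees paid on top of `amount`
```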
Answers:
username_1: @ezdac Is this still an open issue? |
Varying-Vagrant-Vagrants/VVV | 34642651 | Title: All .dev sites unreachable on Mac OSX Mavericks
Question:
username_0: Just installed the latest version of VVV and all 5 .dev sites listed in the readme we unreachable. To get them I had to do the following.
- SSH into the instance with `vagrant ssh`
- Run `ifconfig` and grab the IP Address associated with eth1
- Exit the vagrant box with `exit`
- Add each domain to your Mac's /etc/hosts with `sudo sh -c "echo '192.168.xx.x local.wordpress.dev' >> /etc/hosts"` (Repeat for all five domains listed in the readme)
After you do this you can view all five .dev pages in the browser of your choice.
I didn't see a ticket open for this so not sure if it is a feature/flaw/personal problem. |
tomalrussell/snkit | 1178432780 | Title: Consider building wheels
Question:
username_0: `snkit` is currently a pure python package
So `cibuildwheel` will fail with something like this:
```
+ python -m pip wheel D:\a\snkit\snkit --wheel-dir=C:\Users\RUNNER~1\AppData\Local\Temp\cibuildwheelc_4qg399\built_wheel --no-deps
Processing d:\a\snkit\snkit
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Building wheels for collected packages: snkit
Building wheel for snkit (setup.py): started
Building wheel for snkit (setup.py): finished with status 'done'
Created wheel for snkit: filename=snkit-1.7.0-py3-none-any.whl size=11231 sha256=1421895037e21a418ddf9cc7a5db1f973176b49e199d5ce90fff341f436e41be
Stored in directory: C:\Users\RUNNER~1\AppData\Local\Temp\pip-ephem-wheel-cache-hakryx70\wheels\e2\d2\f4\dcd923214b0cdd9db36e6aa46f747eeb86480df2be56ce7db9
Successfully built snkit
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.9.10\x64\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\hostedtoolcache\windows\Python\3.9.10\x64\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Program Files (x86)\pipx\.cache\709265f20f5c9cd\Scripts\cibuildwheel.exe\__main__.py", line 7, in <module>
File "C:\Program Files (x86)\pipx\.cache\709265f20f5c9cd\lib\site-packages\cibuildwheel\__main__.py", line 195, in main
cibuildwheel.windows.build(options)
File "C:\Program Files (x86)\pipx\.cache\709265f20f5c9cd\lib\site-packages\cibuildwheel\windows.py", line 364, in build
raise NonPlatformWheelError()
cibuildwheel.util.NonPlatformWheelError:
cibuildwheel: Build failed because a pure Python wheel was generated.
If you intend to build a pure-Python wheel, you don't need cibuildwheel - use
`pip wheel -w DEST_DIR .` instead.
If you expected a platform wheel, check your project configuration, or run
cibuildwheel with CIBW_BUILD_VERBOSITY=1 to view build logs.
```
But the suggested `pip wheel -w dist .` fails due to missing GDAL_VERSION:
```
Processing ~\projects\snkit
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting geopandas>=0.10
Downloading geopandas-0.10.2-py2.py3-none-any.whl (1.0 MB)
---------------------------------------- 1.0/1.0 MB 13.1 MB/s eta 0:00:00
Collecting pygeos>=0.12
Downloading pygeos-0.12.0-cp39-cp39-win_amd64.whl (1.4 MB)
---------------------------------------- 1.4/1.4 MB 18.4 MB/s eta 0:00:00
Collecting pandas>=0.25.0
Downloading pandas-1.4.1-cp39-cp39-win_amd64.whl (10.5 MB)
--------------------------------------- 10.5/10.5 MB 25.2 MB/s eta 0:00:00
Collecting pyproj>=2.2.0
Downloading pyproj-3.3.0-cp39-cp39-win_amd64.whl (6.3 MB)
---------------------------------------- 6.3/6.3 MB 29.0 MB/s eta 0:00:00
Collecting fiona>=1.8
Downloading Fiona-1.8.21.tar.gz (1.0 MB)
---------------------------------------- 1.0/1.0 MB 33.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
[Truncated]
running build_ext
building 'fiona.schema' extension
creating build\temp.win-amd64-3.9
creating build\temp.win-amd64-3.9\Release
creating build\temp.win-amd64-3.9\Release\fiona
"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I~\AppData\Local\Continuum\miniconda3\envs\snkit\include -I~\AppData\Local\Continuum\miniconda3\envs\snkit\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt" /Tcfiona/schema.c /Fobuild\temp.win-amd64-3.9\Release\fiona/schema.obj
schema.c
fiona/schema.c(642): fatal error C1083: Cannot open include file: 'gdal.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fiona
Running setup.py clean for fiona
Successfully built snkit
Failed to build fiona
ERROR: Failed to build one or more wheels
```
Stopping this investigation here for now - may be something to test on Ubuntu/MacOS where the GDAL dependency is easier to manage? Otherwise leave it at source distribution. |
Z3Prover/z3 | 421855086 | Title: [z3py] Model decls does not contain all declared constants
Question:
username_0: **Spec1_ex.z3**
```
(declare-datatypes () (( permission deny permit )))
(declare-datatypes () (( node HopThroughLaptop cr-rtr lnx-rtr-1 lnx-rtr-2 lnx-rtr-3 nullhost tp-rtr )))
(declare-datatypes () (( acl_name cs-acl-01 null_acl tp-acl-01 )))
(declare-fun acl_action ( Int node acl_name (_ BitVec 32) Int Int (_ BitVec 32) Int Int Int ) permission)
(assert (! ( = (acl_action 1 tp-rtr tp-acl-01 #b00000000000000000000000000000000 0 0 #b00000101000001010000010100000001 0 22 0) deny) :named dummy_name73))
```
**Spec2_ex.z3**
```
(assert (! ( = (acl_action 2 tp-rtr tp-acl-01 #b00000000000000000000000000000000 0 0 #b00000110000001100000011000000001 0 22 0) permit) :named dummy_name77))
```
**Code**
```python
import z3
ctx = z3.Context()
s = z3.Solver(ctx=ctx)
abc = z3.parse_smt2_file('spec1_ex.z3',ctx=ctx)
s.add(abc)
s.check()
dce = z3.parse_smt2_file('spec2_ex.z3',decls={x.name():x for x in s.model().decls()}, ctx=ctx)
```
**Results**
```
Z3Exception: (error "line 1 column 30: unknown constant tp-rtr")
```
The model only contains [acl_action = [else -> deny]]. I'm guessing the model is simplified from the original smt2 file, as the solver still contains the full model with constants when calling to_smt2.
Answers:
username_1: tp-rtr is an enumeration sort. It doesn't belong to the model and isn't included.
I updated the parser to traverse sorts in function declarations that are passed.
Even that isn't enough for your script as is.
The following change does work with the update:
```python
def get_sorts(sorts, ts):
    for t in ts:
        sorts[t.sort().name()] = t.sort()
        for i in range(t.decl().arity()):
            s = t.decl().domain(i)
            sorts[s.name()] = s
        get_sorts(sorts, t.children())

sorts = {}
get_sorts(sorts, abc)
s.add(abc)
s.check()
dce = z3.parse_smt2_file('spec2_ex.smt2', decls={x.name(): x for x in s.model().decls()}, sorts=sorts, ctx=ctx)
```
Status: Issue closed
username_0: Thanks for the help, it worked like a charm; beats my string-manipulation-based workaround any day of the week.
username_1: Just one note. The traversal code sample I gave doesn't scale to terms with shared subexpressions.
There are some examples online of how to deal with this. They are of the form:
```python
from z3 import *

def subterms_rec(seen, t):
    if t in seen:
        return
    yield t
    seen |= { t }
    for ch in t.children():
        for s in subterms_rec(seen, ch):
            yield s

def subterms(t):
    seen = set([])
    return subterms_rec(seen, t)
```
You can then retrieve the sorts from the subterms.
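For example, collecting the sorts that occur anywhere in the parsed assertions could look like this (a sketch; `abc` is the vector returned by `parse_smt2_file` above):
```python
sorts = {}
for a in abc:
    for sub in subterms(a):
        sorts[sub.sort().name()] = sub.sort()
```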
landlab/landlab | 322377583 | Title: component tutorial does not work with current master branch
Question:
username_0: Code block 32 fails as follows
``` python
out_interval = 20.
last_trunc = total_t  # we use this to trigger taking an output plot
for (interval_duration, rainfall_rate) in precip.yield_storm_interstorm_duration_intensity():
    if rainfall_rate > 0.:
        # note diffusion also only happens when it's raining...
        fr.route_flow()
        sp.run_one_step(interval_duration)
        lin_diffuse.run_one_step(interval_duration)
    z[mg.core_nodes] += uplift_rate * interval_duration
    this_trunc = precip.elapsed_time // out_interval
    if this_trunc != last_trunc:  # time to plot a new profile!
        print('made it to time %d' % (out_interval * this_trunc))
        last_trunc = this_trunc
        figure("long_profiles")
        profile_IDs = prf.channel_nodes(mg, mg.at_node['topographic__steepest_slope'],
                                        mg.at_node['drainage_area'],
                                        mg.at_node['flow__receiver_node'])
        dists_upstr = prf.get_distances_upstream(
            mg, len(mg.at_node['topographic__steepest_slope']),
            profile_IDs, mg.at_node['flow__link_to_receiver_node'])
        plot(dists_upstr[0], z[profile_IDs[0]])

# no need to track elapsed time, as the generator will stop automatically
# make the figure look nicer:
figure("long_profiles")
xlabel('Distance upstream (km)')
ylabel('Elevation (km)')
title('Long profiles evolving through time')
made it to time 0
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-24-772b02ad7928> in <module>()
15 profile_IDs = prf.channel_nodes(mg, mg.at_node['topographic__steepest_slope'],
16 mg.at_node['drainage_area'],
---> 17 mg.at_node['flow__receiver_node'])
18 dists_upstr = prf.get_distances_upstream(
19 mg, len(mg.at_node['topographic__steepest_slope']),
~/projects/landlab/landlab/landlab/plot/channel_profile.py in channel_nodes(grid, starting_nodes, drainage_area, flow_receiver, number_of_channels, threshold, main_channel_only)
305 channel_network = []
306 for i in starting_nodes:
--> 307 (channel_segment, nodes_to_process) = _get_channel_segment(i, flow_receiver, drainage_area, threshold, main_channel_only)
308 channel_network.append(numpy.array(channel_segment))
309 profile_structure.append(channel_network)
~/projects/landlab/landlab/landlab/plot/channel_profile.py in _get_channel_segment(i, flow_receiver, drainage_area, threshold, main_channel_only)
197
198 # add the reciever of j to the channel segment if it is not j.
--> 199 recieving_node = flow_receiver[j]
200 if recieving_node != j:
201 channel_segment.append(recieving_node)
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
<matplotlib.figure.Figure at 0x1c1bd4fb38>
```
Answers:
username_0: related to issue #690
username_0: and relevant to the discussion we had about the tutorials breaking on hydroshare. @nicgaspar @mcflugen @margauxmouchene @nathanlyons @SiccarPoint.
Status: Issue closed
|
endojs/endo | 1176051475 | Title: When NPM?
Question:
username_0: I want to use `lockdown` to increase security of my node programs (eliminate prototype pollution types of issues). In addition, harden is just a useful utility that could be beneficial to anyone.
Any plans to release harden / lockdown as their own separate packages?
Status: Issue closed
Answers:
username_0: Nevermind, the repo / package name mismatch caused brief confusion: https://www.npmjs.com/package/ses
username_1: Thanks @username_0. We did originally publish a separate package, but we were convinced to consolidate `lockdown()` and `harden()` because calling `harden` on just about any object obviates the possibility of later successfully calling `lockdown()`, since the latter step requires the intrinsics to be repairable. We’ve since changed `lockdown` such that it reveals `harden` only when it’s safe to use. |
naser44/1 | 105957253 | Title: All fear is fear! Except the fear of God .. that is tranquility
Question:
username_0: <a href="http://ift.tt/1K1h5Ai">All fear is fear! Except the fear of God .. that is tranquility</a>
nodejs/next-10 | 820380909 | Title: Draft of blog post to go along with Survey
Question:
username_0: Next 10 years of Node.js - Understanding the the needs of the Node.js constituencies
TLDR; We need your help to make sure the Next 10 years of Node.js are as successful as the first. We are launching a survey, you can take [here]() to help us do that. To get a bit more context on why this survey is important, read on….
Node.js had a very successful first 10 years of Node.js and the project is working to make the next 10 years even better. As part of that we’ve kicked off the Next-10 effort to document what we think is important for that to happen. You can follow the ongoing work of that team in this repo: https://github.com/nodejs/next-10.
Without a handy crystal ball, it turns out that it’s a lot harder than just diving in and discussing our favorite technologies to see what the keys to success are going to be. Are things like WebAssembly, Typescript, etc. important to the people who use Node.js ? I guess we need to better understand/document who uses Node.js first…..
So far the team has spent most of its team laying the foundation on which we hope to base discussions around specific technologies.
We started by documenting our understanding of the project’s technical values as these will help us balance different aspects when necessary: https://github.com/nodejs/node/blob/master/doc/guides/technical-values.md. It’s not as simple as X overrides Y but instead highlight what key values need to be factored into decisions. For example, there was consensus that good developer experience has been a key part of the success of Node.js and that it’s important for future success that we maintain that.
Next the team documented the Node.js “Constituencies”: the people/groups who have a stake in the Node.js ecosystem. We captured these in [CONSTITUENCIES.md](https://github.com/nodejs/next-10/blob/main/CONSTITUENCIES.md) and they include:
* Direct end users
* Application operators
* Application Developers
* Back-end server authors
* Library/package authors
* Node.js core maintainers
* Organizations with investments in Node.js
We also documented what we thought was important to those “constituencies” in [CONSTITUENCY-NEEDS.md](https://github.com/nodejs/next-10/blob/main/CONSTITUENCY-NEEDS.md).
We think we’ve got a good start, but at this point it only reflects the understanding of the small number of Next-10 team members contributing. At this point we need your help to make sure we’ve got it right and/or update until we do. You can do that by:
* Taking the [survey]()
* Getting involved in the [Next-10](https://github.com/nodejs/next-10.) effort.
Thanks for reading and we hope to get your feedback through the survey or see you get involved in the ongoing work of the Next-10 team. Thanks in advance for your help.
Answers:
username_0: @nodejs/next-10 I'm forwarding this on to Rachel on the Foundation team to go along with the survey mentioned in https://github.com/nodejs/next-10/issues/48
Please take a look and give me any comments/suggestions so that we can be ready for the survey to go out soon.
username_1: Most of its time?
username_0: @username_1 thanks, fixed.
username_2: @username_0 I guess the link here for the survey is not up-to-date, right?
Should it be https://www.surveymonkey.com/r/86SSY9Q ?
username_3: The link is accurate in the blog post: [Next 10 years of Node.js](https://nodejs.medium.com/next-10-years-of-node-js-understanding-the-needs-of-the-node-js-constituencies-2f95a1df6a6f)
username_0: Added the link in the original post just in case people look there.
username_1: I took a look at the survey. In general it's pretty good, but I was a bit confused by the part that asked which priorities were important to me. Like, almost all are important, but not all equally so. I feel like this survey would generate more informative data if each priority was listed with a scale next to it, 1 to 5 not at all important to very important, something like that.
username_4: 2. I agree with @username_1 that we should have a scale for these options. Actually, I took a look at the question and the options are so many that it scares me to make a choice...
3. Has this been published already? I saw the medium post above. What is the due date of this survey, then?
4. Have you considered translating this into different languages, not just for the English-speaking Internet?
username_0: The blog is already published and the survey is live. We don't have a date to close the survey yet.
Thinking about translating surveys and potentially using a scale as suggested above is good feedback that we can think about for future surveys.
username_0: Opened this issue so that we discuss the issue of translations - https://github.com/openjs-foundation/user-feedback/issues/6
Status: Issue closed
username_0: Closing as we have published the blog post.
username_5: Are the results from this somewhere?
username_6: @username_5 the results of the survey are now in a PR https://github.com/nodejs/next-10/issues/48#issuecomment-832917723 |
Seneca-CDOT/telescope | 527498938 | Title: Move constants to .env file
Question:
username_0: link to issue #297, as per comment left by @humphd
Move constants from `src/index.js` file into the `.env` file.
```
await feedQueue.add(feedJob, {
  attempts: 8,
  backoff: {
    type: 'exponential',
    delay: 60 * 1000,
  },
});
```
Answers:
username_1: hey can i work on this please
Status: Issue closed
|
nodejs/help | 484883159 | Title: Error -13
Question:
username_0: <!-- Please remove this line and fill the template -->
* **Node.js Version**: 10.16.3
* **OS**:
* **Scope**: version checking
Following is error code I'm receiving. Even with hidden files shown, I can't find the relevant file to set permissions:
```
{ [Error: EACCES: permission denied, access '/usr/local/lib/node_modules/npm/node_modules/agentkeepalive']
npm ERR! stack:
npm ERR! 'Error: EACCES: permission denied, access \'/usr/local/lib/node_modules/npm/node_modules/agentkeepalive\'',
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'access',
npm ERR! path:
npm ERR! '/usr/local/lib/node_modules/npm/node_modules/agentkeepalive' }
```
Answers:
username_0: Mac OS 10.14.6
username_1: Please tell us which command you were running that led to the error.
username_2: ping @username_0
username_3: @username_0 , can you elaborate the issue please? so that it will be helpful for me to identify the bug.
Status: Issue closed
username_3: inactive. closing!! |
lovebai/hexo | 1040017244 | Title: Merged notes from learning Spring Boot - Kuangshen | Xiaobai Blog
Question:
username_0: https://19981115.xyz/posts/21119/#more
(1) 1. Introduction to Spring Boot: a review of what Spring is. Spring is an open-source framework, a lightweight Java development framework that emerged in 2003, author: <NAME>. Spring was created to tackle the complexity of enterprise application development and to simplify development. How does Spring simplify Java development? To reduce the complexity of Java development, Spring adopts the following four key strategies: 1. Lightweight and minimally invasive programming based on POJOs, where everything is a b
tdryer/hangups | 152891365 | Title: Filtering out typing self-notifications
Question:
username_0: Is there a way to filter out typing self-notifications coming from other clients? For example, logging in with the same account in hangouts.google.com and hangups, then typing in hangouts.google.com shows the 'User is typing' message in hangups even though it is the same user. Also, it looks like the typing notification has a user_id that is not listed in the conversation's UserList in hangups.
Answers:
username_1: You mean hangups is showing "Unknown is typing..." instead of your name?
Status: Issue closed
username_0: Maybe it is because I didn't load each conversation completely?
username_1: The contacts should be loaded when you run `build_user_conversation_list`. Are there any other warnings in the logs? Do your contacts appear correctly in the hangups UI?
username_1: I see the issue now. `Conversation.get_user` expects a `UserID` object as its argument rather than the `chat_id` integer.
You might want to use the `Conversation.on_typing` event since that gives you a `TypingStatusMessage` object which contains the `UserID`.
username_0: ```py
user_id = UserID(chat_id=typing_notification.sender_id.chat_id,
                 gaia_id=typing_notification.sender_id.gaia_id)
``` |
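Building on that, filtering out self-notifications could then look like this (a sketch; `self_gaia_id` is a hypothetical variable holding your own account's gaia id, obtained however your client tracks the logged-in user, and `handle_typing` is a placeholder):
```py
def on_typing(typing_message):
    # typing_message is the TypingStatusMessage delivered by Conversation.on_typing
    if typing_message.user_id.gaia_id == self_gaia_id:
        return  # our own typing, echoed from another client
    handle_typing(typing_message)
```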
RWTH-EBC/AixLib | 547546739 | Title: Propagate Parameter in HydraulicModule
Question:
username_0: **What is the problem?**
- The time constant of heat transfer of the temperature sensor in the hydraulic modules is not propagated
**How do we want to solve it? Describe the solution you'd like**
- Add an additional parameter for the time constant
Status: Issue closed
Answers:
username_1: Closed by PR #846 |
turingwars/turingwars | 290264488 | Title: Docker deployment
Question:
username_0: I've made a `Dockerfile` and a `docker-compose.yml` in c41e2156c333adc3592c31f22102083c4e7ed032
that seem to work, EXCEPT when running `docker-compose up`, it complains that it does not have the npm package `sqlite3`. If I install it manually on the running container, it works. It's strange, because the package is indeed declared in `web/packages.json`. Any ideas?
Answers:
username_0: Fixed 59055222f92a5c69cc24ffb04710da8d90a3174e
Status: Issue closed
|
fholzer/docker-nginx-brotli | 1147333459 | Title: Tag latest with a specific version
Question:
username_0: Hi!
After recent updates to the image's entrypoint/run commands https://github.com/username_1/docker-nginx-brotli/commit/2a6e0ebd0596a4ed75c67ed06bbed711468aa6fc#diff-dd2c0eb6ea5cfc6c4bd4eac30934e2d5746747af48fef6da689e85b752f39557R164-R165, our installation of the image became broken.
One of the reasons was that we weren't able to pin the version of the image that we use to a specific version. The latest tag just doesn't have any specific tag to pin to. Would be great to have such a tag that we can pin to the latest version and don't be afraid that it will become broken again.
Answers:
username_1: Hi,
sorry to hear that. I've been working on getting the image updated and introducing use of Github Actions the past couple of days. Regarding tags, have you tried using a tag for a specific version? E.g. `username_1/nginx-brotli:v1.18.0` or `username_1/nginx-brotli:v1.19.1`? These are very unlikely to be updated with breaking changes, while the `latest` and `mainline-latest` tags are. Or maybe I misunderstood how the issue relates to your use of tags.
At the moment there are still new builds running for both `latest` and `mainline-latest`; once these finish, I'll add tags for these new versions `v1.20.2` and `v1.21.6`
Cheers,
Ferdinand
username_0: so, we should consider latest as "preview" release and when they become "stable" you will label them with a specific version, right?
username_1: Well, no. `latest` simply points to the latest stable release of nginx. With each new version there could also be changes. I try to keep those to a minimum, though sometimes they could break things like in your case. What I was trying to say is that if you pin to a specific version tag, then you can expect there to be no such breaking changes. Though in that case you'll need to manually change the tag you use when a new version is released, but at least in that case it is less of a surprise that something could break when actively switching to a new version's tag. Does that make sense?
username_0: so, you are saying that we should choose between the latest nginx version (without pinning) or the previous version with the ability to pin to it? 🤔
username_0: the official nginx image is tagged with both "latest" and version tags:
<img width="435" alt="image" src="https://user-images.githubusercontent.com/1364479/155230275-2998ea35-cb60-4b86-87c0-64bca0908e17.png">
<img width="607" alt="image" src="https://user-images.githubusercontent.com/1364479/155230389-d80f0037-408a-49fe-8bd6-7a61375e1fa6.png">
username_1: Not quite. There'll also be tags for v1.20.2 (stable) and v1.21.6 (mainline) published shortly. I am just in the process of switching from TravisCI to Github Actions, and generally updating the Dockerfile. These version tags should be available about an hour from now.
username_1: image tags v1.20.2 (stable) and v1.21.6 (mainline) are now published.
username_0: thanks for the explanation, we pinned our images to the desired versions 👍
Status: Issue closed
|
yardnsm/vim-import-cost | 788259903 | Title: Plugin opens :ImportCost text in separate vim-window
Question:
username_0: Here's what happened after I did `:ImportCost` on a .mjs file
<img width="887" alt="Screen Shot 2021-01-18 at 14 11 18" src="https://user-images.githubusercontent.com/2758453/104919794-1c9e4400-5997-11eb-9dde-5d0de5b0486c.png">
How did I install:
```bash
$ cd .vim/bundle && git clone <repo>
$ npm i
```
Then I opened vim and the above is my result. Let me know if you need more information.
Answers:
username_1: Hey @username_0,
That's the desired behaviour. The virtual text rendering only works in Neovim. In Vim, the plugin will open a scratch buffer filled with the results.
username_0: Hey @username_1,
thanks for your reply. Is it possible to adjust the plugin to work with regular vim? If not, feel free to close the issue.
username_1: I'm not aware of a virtual text feature in regular Vim, so I think that's not possible at the moment :(
Status: Issue closed
|
wangeditor-team/wangEditor | 781082826 | Title: After adding a custom menu the paste handler runs twice, and pasting a Word document brings in CSS styles
Question:
username_0: ## bug 描述
*1.新增自定义菜单后复制粘贴,editor.config.pasteTextHandle粘贴方法执行了两次,把自定义菜单代码去掉后正常
2.粘贴word文档到富文本里会有css样式代码*
## 浏览器及版本号
*谷歌*
## wangEditor 版本
*4.6.0*
## 官网能否复现该 bug ?
能/不能
## 最小成本的复现步骤
(请告诉我们,如何最快的复现该 bug)
- 步骤一 新增自定义菜单
- 步骤二 复制粘贴
Answers:
username_1: Mate, at the very least you need to post a link to your custom menu code so we can take a look.
username_2: This issue has gone unanswered for a long time; closing it for now.
Status: Issue closed
|
spacetelescope/jwst | 832199298 | Title: 1.1.0 resample
Question:
username_0: _Issue [JP-1979](https://jira.stsci.edu/browse/JP-1979) was created on JIRA by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=acanipe):_
resample
- 1.1.0: Make inverse variance weight_type="ivm" the default weighting scheme for multiple exposures resampled into a single output. [#5738]
- 0.18.0:
  - Implement memory check in resample to prevent huge arrays [#5354]
  - Add pixel_scale_ratio parameter to allow finer output grid. [#5389]
  - Enable resample_spec for NIRSpec line lamp exposures [#5484]
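For context, running the step with the new default spelled out explicitly would look roughly like this (a sketch; `my_association.json` is a placeholder input):
```python
from jwst.resample import ResampleStep

# weight_type="ivm" is now the default weighting for combining exposures;
# it is passed explicitly here only for clarity
result = ResampleStep.call("my_association.json", weight_type="ivm")
```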
Answers:
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=cracraft) on [JIRA](https://jira.stsci.edu/browse/JP-1979?focusedCommentId=516271#comment-516271):_
The output of the attached MIRI notebook looks as expected. From the header, I see that the weighting used for the resample step was 'ivm'. This step passes our tests.
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=bsunnquist) on [JIRA](https://jira.stsci.edu/browse/JP-1979?focusedCommentId=517503#comment-517503):_
The output of this step looks as expected for the NIRCam coronagraphy notebook ([https://jwst-validation-notebooks.stsci.edu/jwst_validation_notebooks/calwebb_coron3/jwst_calcoron3_nircam_test.html]) - the PSF-subtracted images were combined properly with low residual RMS.
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=jotaylor) on [JIRA](https://jira.stsci.edu/browse/JP-1979?focusedCommentId=518760#comment-518760):_
The NIRISS team has not tested this step.
Status: Issue closed
username_0: _Comment by [<NAME>](https://jira.stsci.edu/secure/ViewProfile.jspa?name=acanipe) on [JIRA](https://jira.stsci.edu/browse/JP-1979?focusedCommentId=519282#comment-519282):_
Dry run is complete. |
stevekrouse/WoofJS | 221693040 | Title: code section block comment
Question:
username_0: We have students that are creating Woof projects of thousands of lines, like [this one](http://woofjs.com/create.html#wildiaxfirst). Woohoo!
In order to make these projects more readable and easy to find things, we should give students a template for code section block comments like the following in the Woof documentation to copy and paste:
```javascript
// ----------------------------------------------------------
// ------------ Instruction Screen Sprites ------------------
// ----------------------------------------------------------
```
Answers:
username_1: I'll start working on the issue, just tell me the exact details of what is to be done and i'll have it all fixed one by one.
Do we need comment block for each element in the Documentation block ? Should they be embedded along with the code or should be present outside the code block (for convenient copy) ?
Let me know the details please @username_0 :smile:
username_0: I was thinking that we could have this a "block" in one place in the documentation. Potentially it could go in the More Blocks menu, at the bottom?
You could explain what the comment block is for and put the comment where the code goes so they could copy it easily with the Copy button.
username_0: https://github.com/username_0/WoofJS/pull/477/files
Status: Issue closed
|
ravens-engine/core | 959195676 | Title: Allow asynchronous process action
Question:
username_0: At the moment, `processAction` is a synchronous function. This means that when the action of a player arrives.
Client-side, it may be useful to allow for an asynchronous `processAction`. This could be used to trigger animations during the processing of an action. In code, this would translate into:
```ts
async processAction(userId, action) {
if (action.type == "pay-money-for-item-in-shop") {
await decreaseMoneyOfPlayer();
this.state.money -= priceOfItem;
await animateItemFromShopToPlayerInventory();
transferItemFromShopToInventory();
}
}
```
This is useful because sometimes you may want to keep the game in an intermediary state between the processing of actions.
This would have a lot of impact on the architecture of Ravens, as all the code was done synchronously. There are also some questions that should be answered, like what to do when an action arrives from the server while `processAction` is waiting for a promise. |
SkygearIO/features | 276571623 | Title: Removal of record subscription
Question:
username_0: # Description
I would like to propose that we remove the record subscription feature and related APIs, because:
* it is rarely used
* it introduces a concept of “internal pubsub” which is problematic
  * no one understands why there is an internal pubsub anymore
  * the SDK keeps two connections to the server, with the internal pubsub rarely used
  * it complicates cloud deployment, because `/_/pubsub` routes to a different service in the case of multi-instance skygear apps
* the code has not been maintained for a long time and the developer who wrote that part has left the core team
  * it has related unresolved panics: https://github.com/SkygearIO/skygear-server/issues/136
* the performance characteristics for a Public DB are not clear
  * there are potentially a huge number of records in a Public DB, and subscribing for changes in that DB is something we have no performance guarantee for
* this is likely to interfere with https://github.com/SkygearIO/features/issues/141, which aims to remove the requirement for a user account from device registration, while record subscription requires both a device and a user
# Portal Design/Features
There are no changes to the portal.
# API Design
APIs related to record subscription are to be removed across all SDKs.
## Scenario
While record subscription has been part of skygear server for a long time, there is a lack of use cases for it. Developers are likely to send custom push notifications and/or pubsub messages via cloud code instead of relying on record subscription.
# Open Questions
The most likely question is probably “if there are no complaints regarding subscription, why not keep it?” I would like to argue that we do not have the necessary resources to maintain this part of the system, and removing it allows us to focus on other components of Skygear.
# Related Issues
- Server Issues
- https://github.com/SkygearIO/skygear-server/issues/136
- Client Issues
- Guides Issues
# Progress Tracker
- [ ] Specification Design Approval
- [ ] Write (code + tests + API docs) then get them merged. **Put All PR there**
- [ ] Code
- [ ] Minimal tests
- [ ] Minimal API Docs
- [ ] Guides. **Put All PR there**
- [ ] Release
- [ ] Update release notes
- [ ] Release
# Advice
- Specification Design Approval
- Once you get LGTM from another Skygear Core Team Member, you can check this checkbox. And apply the "workflow/design-complete" label.
- Coding
- Use as many PRs as you need. Write tests in the same or different PRs, as is convenient for you.
- API doc should goes in the same PR with the code.
- As each PR is merged, add a comment to this issue referencing the PRs.
- When you are done with the code, apply the "workflow/code-complete" label.
- Guides
- Write or modify guides and get them merged in https://github.com/skygeario/guides
- When the PR of guides is merged, check this checkbox and apply the "workflow/guides-complete" label.
Answers:
username_1: Sounds weird to me...
* it is rarely used
  * how do we know it is relatively rarely used? I know at least one paid user using it...
* on internal pubsub
  * is that something we should refactor if it is problematic? Say, share the same pubsub connections with the pubsub module.
* the code is not maintained for a long time and the developer who wrote that part has left the core team
  * I guess in most large software projects 80% of the code is never touched for years...
For performance, I guess it is a real issue that might need some attention in the future.
Status: Issue closed
|
angular/angular | 288324366 | Title: Compiler after angular 5.1.1 does not emit factories for external libraries
Question:
username_0: ## I'm submitting a...
[x] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
## Current behavior
Factories are not generated for external modules
## Expected behavior
Factories generated as per 5.1.1
## Minimal reproduction of the problem with instructions
https://github.com/username_0/ng-cli-issue
npm i && npm run build
See the ngfactory.js files present at generated/node_modules/ng-cli-submodule-issue.
Now remove generated directory.
Update to any newer version of angular (~5.1.2 or ~5.2.0).
Repeat. Now the files are missing; only metadata.json and maybe the d.ts files remain.
## Environment
Angular version: 5.1.1, 5.1.3, 5.2.0
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: 9.4.0
Answers:
username_1: I'm having this issue too, which causes a fatal "module not found" error when Webpack runs. Is there any workaround?
username_2: I'm unable to reproduce this - I get factories generated both with 5.1.2 and with the latest Angular (7.2.4). |
infor-design/enterprise-ng | 574765460 | Title: SohoModalService breaks when using the options() method of SohoModalRef
Question:
username_0: **Describe the bug**
It seems that the new **SohoModalService** breaks when using the **options()** method on the returned **SohoModalRef**.
**To Reproduce**
Edit the **modal-dialog.demo.ts** and replace the code starting on line 56 with the following code; then launch the demo and click the "Open New Modal" button:
```javascript
this.dialogRef = this.newModalService
.modal<ExampleModalDialogComponent>(ExampleModalDialogComponent)
.options({
buttons: [
{
id: 'cancel-button',
text: Soho.Locale.translate('Cancel'),
click: (e) => { this.dialogRef.close('CANCEL'); }
},
{
text: 'Submit', click: (e, modal) => {
this.dialogRef.close(this.dialogRef.componentDialog.model.comment);
}, isDefault: true
}
],
title: this.title
})
.isAlert(this.isAlert)
.apply((componentDialog) => {
componentDialog.model.header = 'Header Text Update!!';
})
.open();
```
All that we are doing is passing the buttons and title via the settings rather than using the separate methods.
**Expected behavior**
The dialog should open; instead, the demo page goes blank/breaks.
**Version**
- ids-enterprise-ng: 7.0.0-dev.20200219
**Platform**
- OS Version: Windows 10
- Browser Name: Chrome
- Browser Version: Latest
**Additional context**
The options method is present and used in the new **SohoModalRef**.
Answers:
username_1: @nbcp can the new service you added support options? Or does that reintroduce the memory leak?
Not 100% sure this is "breaking" because it's a new service vs. the old one.
Suggestions?
username_2: @username_1 - both the new and original modal have the same issue with options. The memory leak is fixed in both versions.
Status: Issue closed
|
fluentscheduler/FluentScheduler | 683236418 | Title: how to use it in .netcore 3.1
Status: Issue closed
Answers:
username_1: Just `dotnet add package FluentScheduler` and follow what's on README.
Here's a little console application example for you:
```cs
using FluentScheduler;
using System;
using System.Threading;
class Program
{
static void Main(string[] args)
{
JobManager.AddJob(() => Console.WriteLine("Hey, it's working!"), s => s.ToRunNow().AndEvery(1).Seconds());
JobManager.Initialize();
Thread.Sleep(-1);
}
}
``` |
pytorch/pytorch | 394109069 | Title: How to optimize my dataloader
Question:
username_0: I have dataloader code below. My data is 200gb, and loading one batch takes 5 sec; this leaves the gpu waiting for the next batch. How can I optimize my dataloader?
```python
import torch
import numpy as np
import os, random
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
from torch.utils.data import Dataset
from torch import nn
import torchvision.transforms as transforms
import torchvision.utils as vutils
import time
import easydict
import matplotlib.pyplot as plt
from PIL import Image
import os
import imageio
from torch.autograd import Variable

channel_num = 3
img_size = 96
Nd = 0
Np = 0
Nz = 50

multiPIE_train_transform = transforms.Compose([
    transforms.CenterCrop(160),
    transforms.Resize(img_size),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

def PIL_list_reader(fileList):
    imgList = []
    idList = []
    poseList = []
    with open(fileList, 'r') as file:
        for line in file.readlines():
            imgPath, id_label, pose_label = line.strip().rstrip('\n').split(' ')
            # use
            imgList.append(imgPath)
            idList.append(int(id_label.encode("utf-8")))
            poseList.append(int(pose_label.encode("utf-8")))
    Np = int(max(poseList) + 1)
    Nd = int(max(idList) + 1)
    return [imgList, poseList, idList, Np, Nd]

class multiPIE(Dataset):
    def __init__(self, fileList, transform_mode=None, list_reader=PIL_list_reader):
        self.channel_num = channel_num
        self.image_size = img_size
        self.Nz = 50
        self.imgList, self.pose_label, self.id_label, self.Np, self.Nd = list_reader(fileList)
        if transform_mode == 'train_transform':
            self.transform = multiPIE_train_transform

    def __getitem__(self, index):
```
[Truncated]
```python
args = easydict.EasyDict({
    "batch_size": 64,
    "save_freq": 1,
    "data_path": "/home/naveed/myDRGAN-master/data/imagessingle.txt",
    " Np": 9,
    "Nd": 200,
    "Nd": 50,  # note: duplicate key; this value overwrites the previous one
})
device = torch.device("cuda:0")
dataset = multiPIE(args.data_path, transform_mode='train_transform')
dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, num_workers=0)  # num_workers=0: all loading happens in the main process
start = time.time()
realBatch = next(iter(dataloader))
a = realBatch[0]
print("time for data loading", time.time() - start)
```
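(For what it's worth, the usual first knobs to turn on the loader above are sketched below; the right `num_workers` value depends on your CPU and storage.)
```python
dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=args.batch_size,
    shuffle=True,
    num_workers=4,    # decode images in background worker processes instead of the main process
    pin_memory=True,  # page-locked buffers speed up host-to-GPU copies
)
```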
Answers:
username_1: please use https://discuss.pytorch.org for questions
Status: Issue closed
|
cypress-io/cypress | 505359452 | Title: Windows 10 does not load cypress extension on Chrome 77 or Chromium 79
Question:
username_0: ### Current behavior:
When running a test in either Chrome 77 or Chromium 79 the browser opens but the extension does not load. This works on Ubuntu 18.04 with the same configuration files.
Troubleshooting steps taken:
- Tested on multiple Windows machines. 2 of the machines are administered by our company. The other machine was a system that was not on our network nor administered by our company.
- Tested Chromium
- Tested both cypress open and cypress run --browser Chrome/Chromium
- Installed a local copy of Cypress with examples on both administered and non-administered machines.
### Steps to reproduce: (app code and test code)
Install Chrome 77
Run cypress open
Run test
### Versions
<!-- Cypress, operating system, browser -->
Tested On:
Cypress 3.4.1
Chrome 77.0.3865.90
Chromium 79.0.3937.0
Ubuntu 18.04 - worked as expected
Centos 7 - worked as expected
Windows 10 64 bit Version 1803 OS build 17134.556
Windows 10 64 bit Version 1803 OS build 17134.1006
Answers:
username_1: @username_0 Thanks for opening this issue. Could you detail more about what you mean by 'the extension does not load'.
What are you seeing? How are you assessing the extension is 'not loading'? Are you getting an error somewhere? Screenshots are ideal. Thanks!
username_0: This is what I see when running it with either command:

username_0: Here is the console log:

username_2: @username_0 How do you have Chrome installed? It looks like none of the command-line arguments Cypress passes are being accepted by Chrome.
username_0: Downloaded and installed. I tried both Chrome and Chromium with the same warnings
username_2: Downloaded from where?
username_0: https://www.google.com/chrome/
and
https://www.chromium.org/Home
username_2: Can you visit `chrome://policy` in the browser and share the contents please?
username_0: 
username_2: Strange, it should work on the computer with no policies set... can you share the entire DEBUG logs from the time you start Cypress to when you launch Chrome on that computer?
Instructions: https://docs.cypress.io/guides/guides/debugging.html#Print-DEBUG-logs
username_0: This is the only thing I get when following the instructions:

Status: Issue closed
|
larrybotha/styleguide | 339477123 | Title: Convert Coffeescript to Node6.9.1 compliant js
Question:
username_0: Would you be open to me making a PR for this change?
Answers:
username_1: go for it - the less coffeescript in the world the better
Status: Issue closed
username_1: closed as per https://github.com/username_1/styleguide/pull/15
username_1: :tada: This issue has been resolved in version 1.0.0 :tada:
The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/struct-scss/v/1.0.0)
- [GitHub release](https://github.com/username_1/struct-scss/releases/tag/v1.0.0)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket: |
ContinuumIO/anaconda-issues | 22783295 | Title: anaconda.bat 32bit doesn't append to path given a 64bit install
Question:
username_0: my main installation is a 64bit installation. i also installed the 32 bit anaconda but i unchecked the options that alter PATH. so when i started the 32bit anaconda cmd prompt, it was still pointing to stuff in the 64 bit installation. the PATH wasn't changed.
Status: Issue closed
Answers:
username_1: Stale issue. Closing |
WarEmu/WarBugs | 225371259 | Title: Beastlord Hunt: Festitt
Question:
username_0: I've returned the "Beastlord Hunt: Festitt" quest with 30/30 enemy players killed and Slay Festitt 1/1, and received "Beastlord Wingmantle" (shoulders for Zealot). The problem is that the Brolgr (NPC) doesn't give me the next "stage" with players to kill (for example Slayers or Engineers).
I don't know what to do; I've killed Festitt multiple times and it doesn't work.
Answers:
username_0: UPDATE: When you have the quest done and someone wants to share it, the share works but bugs the NPC. It doesn't work with the other BL quests.
username_1: Your quests are probably bugged. Try contacting a GM in-game to have them reset.
Status: Issue closed
|
umijs/umi | 1146833629 | Title: [Bug] umi build output risks leaking information
Question:
username_0: 在umi的构建过程中,只要使用了process.env的地方,包含umi内部构件流程的模版代码和业务代码,process.env都会被转换成一个包含所有环境变量的巨大对象。
Answers:
username_0: <img width="741" alt="WeChata0a8692e23f091c7070535c479efd63f" src="https://user-images.githubusercontent.com/43140769/155130665-140d8399-0ba2-4fed-b244-e65d673cedb8.png">
username_0: Found the cause. Because `process.env` was defined with `define` in `.umirc`, the whole thing got bundled in.
Status: Issue closed
|
svgdotjs/svg.js | 493649846 | Title: Position isn't parsed correctly from element in imported svg
Question:
username_0: version 3.0.13
I'm importing svg markup to my canvas. In this markup I have a red rectangle. I get the red rectangle by id and I'm drawing a blue rectangle at its position. The problem is that the positions don't match; see this fiddle: https://jsfiddle.net/4p1u3gar/
Answers:
username_0: If I change the size of the canvas to be the same as my SVG code, it works as expected;
`const draw = SVG().addTo("#canvas").size(423, 247);`
How can I find the scaling factor used to recalculate the position? Or is this a bug?
username_1: It's not a bug. You are creating a new svg document, adding a rectangle, and moving it to the coordinates you got from a different svg (which has a viewbox applied to it).
The new svg does not have that viewbox. Viewboxes scale and translate the contents of an svg. So whatever coordinates you get from the original svg do not match the new ones.
You can try to use `myArea.rbox(newSVG)` to get the coordinates transformed into the newSVG. rbox will give you a box with x, y, width and height
Status: Issue closed
username_0: @username_1 Thanks, `rbox` fixed it :)
https://jsfiddle.net/aoqvt7se/ |
ContinuumIO/anaconda-issues | 180150839 | Title: Spyder application appears to cause screen flashes / glitches
Question:
username_0: Hello,
I am having an issue where the Spyder application appears to cause visual glitches, including recurring black screen flashes (like my monitor is powering on / off or the cable is unplugged / plugged). It seems unlikely that Spyder would be the cause, but I cannot replicate the issue in any other scenario except Spyder being open. It's been a problem for a few weeks now, including with Spyder 2 and 3.
I thought it was a hardware issue at first, but after repeated tests (e.g., changing monitor connectors, different video card drivers, trying video-driven applications (e.g., movies, games), and CPU and memory stress tests), I cannot replicate the issue.
I am running Anaconda / Spyder on Windows 10 (not Anniversary, but I tried Anniversary and rolled back, thinking that caused my issue) with the latest build of Anaconda / Spyder as well as the last two versions.
Thanks, and let me know if you need more info.
Answers:
username_1: I'd also say this is video drivers issue, specifically a wrong installation of DirectX or OpenGL. I don't know what else to say, given that you're clear that's not the case...
Status: Issue closed
|
facebook/react-native | 84095787 | Title: Chrome Debugger affects code behavior. (defines navigator.userAgent)
Question:
username_0: I ran into a situation where a 3rd-party library was working fine when I had the chrome debugger open, but failed when I ran it normally.
It looks like opening the chrome debugger ends up adding/modifying global state. In particular, navigator.userAgent is undefined when you run normally but if you have the chrome debugger open, it becomes Chrome's user agent.
I don't know how the chrome debugger integration works, but username_4ally it would not affect the runtime behavior of your app! Else you end up with these heisenbugs. :-/
Answers:
username_1: It actually affects the runtime behavior a lot more than this, since running it on chrome uses google's V8 engine, while the app runtime uses JavascriptCore. I don't know how this would be fixed though, since I don't think the chrome remote debugging protocol supports using a different engine :cry:
username_2: I'm seeing a similar issue. I wonder if it has something to do with ES6 class usage?

username_1: ES6 classes are supported by the packager AFAIK, but some other ES6 features aren't
The full list of supported features is here https://facebook.github.io/react-native/docs/javascript-environment.html#content
All of them are supported in master, because it's using babel.
username_3: I am using es6 classes with .4.4 and don't see that problem
username_2: Is there anywhere I can set breakpoints to see if JavascriptCore is failing on some bit of code? Not seeing anything in the console output of Xcode.
username_2: I'm not sure if I'm hunting in the right place but I tried to set some breakpoints on javaScriptDidLoad and bundleFinishedLoading in RCTRootView. They only fire after I enable Chrome debugging.
I thought it was could have been my simulator settings forcing debugging so I reset the sim, but still seeing the same results.
username_2: Turns out it was a const keyword!
If you look at this page you'll see that const in strict mode is not supported by iOS 8 (JavascriptCore). It does work if you're not using strict mode. In my case, I was in strict mode.
https://kangax.github.io/compat-table/es6/#ios8
@username_0 see if you're using that anywhere.
username_1: Cool! This will be supported once babel lands though. (Already in master if you wanna live on the edge)
username_3: Living on the edge with this project can cause bleeding sometimes. :)
John, any idea when the next version of RN will be cut?
@username_2, please feel free to close this issue.
username_1: I'm bleeding a bit from living on the edge, but nothing that's unsalvageable with responsible use of shrinkwrap and testing :+1:
@username_3 No idea, I'm not on the core team, but I'm sure 0.5.0 is soon to be released (you can already use the rc)
username_2: @username_3 can't close as I kind of highjacked @username_0 original post. Hopefully this thread helps him out.
Status: Issue closed
username_4: Closing since 0.5.0 is out |
odalic/odalic-ui | 178009464 | Title: Provide a cancel option to the properties dialog
Question:
username_0: When I change something in the dialog (e.g. change class, change relation) in Odalic UI, it is directly changed. It would be good to have the possibility to confirm the changes or withdraw (by pressing cancel or "x" in the corner of the dialog) at the end.
Answers:
username_0: Hi, @username_2 , for the 4th iteration, just try to think about it, how hard would it be to implement, what time would it take and so on...
username_1: Yes, seems like this is quite hard, @username_2 please rather help Kata with her issues for iteration 4. @username_0, move to 5th iter?
username_2: I don't think it's hard to implement... But I can't make the descision. @KataBoku directly works with the result object sent by the server, which is then mirrored by the GUI. (The binding of GUI to data is basically 1:1)
To implement this I suppose we would need to:
1) clone the result object and temporarily save it
2) provided the 'cancel' button was pressed, replace the current result object with the temporary clone
- I believe it should be this simple, but I'll have to discuss with @KataBoku
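In pseudo-code, those two steps amount to the following (a Python sketch of the idea only; the real code is AngularJS, and `edit_in_dialog`/`cancelled` are placeholders):
```python
import copy

snapshot = copy.deepcopy(result)  # 1) clone the result object when the dialog opens
edit_in_dialog(result)            # the GUI mutates `result` directly (1:1 binding)
if cancelled:
    result = snapshot             # 2) on cancel, fall back to the saved clone
```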
username_1: Ok, thanks for investigating that! We should address it, because it was also pointed out a couple of times during the workshop in September
username_0: @username_2 Thanks! Assigning @KataBoku and moving to 5th iteration.
username_0: @KataBoku @username_2 Reassigning beack to @username_2 , as @KataBoku has alrady too much high priority issues. You can still @username_2 consult her about it.
username_2: changing prio to low due to a shortage of time.
though not hard to implement from a logical point of view, will take a significant time (mainly because I would be dealing with a code that is not mine)
username_1: As we discussed, just please change the "OK" button to close and leave it as it is. Then it can be closed
username_1: overlapping with https://github.com/odalic/odalic-ui/issues/247
username_2: Seems like @KataBoku solved this one, so closed.
Status: Issue closed
|
elastic/ecctl | 641503795 | Title: Deployment instructions on www.elastic.co/downloads/ecctl don't support default macOS shell (zsh)
Question:
username_0: <!--- Thank you for taking the time to create a Bug Report! -->
<!--- Before creating an issue please make sure you are using -->
<!--- the latest version of our software, if possible. -->
<!--- Provide a general summary of the issue in the title above. -->
## Readiness Checklist
- [x] I am running the latest version
- [x] I checked the documentation and found no answer
- [x] I checked to make sure that this issue has not already been filed
- [x] I am reporting the issue to the correct repository (for multi-repository projects)
## Current/Expected Behavior
<!--- If you're describing a bug, tell us what should happen. -->

I followed these steps from www.elastic.co/downloads/ecctl today on macOS on 10.15.5 and run into this problem:
```
tony@tony-mbp ~ % source <(ecctl generate completions)
complete:13: command not found: compdef
tony@tony-mbp ~ % bash
bash-3.2$ source <(ecctl generate completions)
bash-3.2$ echo $?
0
```
The issue is [zsh is now the default shell on macOS](https://support.apple.com/en-us/HT208050). This works fine on bash.
Answers:
username_1: That's interesting, @username_0 I've got zsh and it works fine for me:
```sh
$ source <(ecctl generate completions)
$ echo $SHELL
/bin/zsh
$ ecctl version
Version: v1.0.0-beta3
Client API Version: 2.5.0-ms36
Go version: go1.13.10
Git commit: 4d5ca4e0
Built: Tue 12 May 00:30:44 2020
OS/Arch: darwin / amd64
```
username_0: Hrm.
```
tony@tony-mbp Desktop % source <(ecctl generate completions)
complete:13: command not found: compdef
tony@tony-mbp Desktop % echo $SHELL
/bin/zsh
tony@tony-mbp Desktop % ecctl version
Version: v1.0.0-beta3
Client API Version: 2.5.0-ms36
Go version: go1.13.10
Git commit: 4d5ca4e0
Built: Tue 12 May 00:30:44 2020
OS/Arch: darwin / amd64
```
It looks like this is the problem: https://apple.stackexchange.com/a/340718
I just followed those instructions, and now it's working for me on zsh:
```
tony@tony-mbp Desktop % autoload -Uz compinit
tony@tony-mbp Desktop % compinit
tony@tony-mbp Desktop % source <(ecctl generate completions)
tony@tony-mbp Desktop %
```
username_1: Ah good to know, I'll tag this issue as "Documentation" and see how we can address this. |
timc1/kbar | 1100640537 | Title: How to combine with user defined shortcuts
Question:
username_0: I have a custom keyboard shortcut handler on other parts of my app:
```ts
import { useWindowEvent } from "./useWindowEvent"
export function useKeyboardShortcut(key: string, callback: () => void, metaKey?: boolean) {
useWindowEvent("keydown", (event) => {
const activeElem = document.activeElement
if (
event.key === key &&
event.metaKey === !!metaKey &&
activeElem?.getAttribute("role") !== "menu" &&
activeElem?.tagName !== "INPUT" &&
activeElem?.tagName !== "TEXTAREA"
) {
event.preventDefault()
event.stopPropagation()
callback()
}
})
}
```
Most of the time this works fine, except when I want the kbar shortcut to take precedence over mine. For example, kbar has a navigation shortcut `g + r`; if my component also defines a custom shortcut for `r`, the kbar shortcut might be triggered, but so might the shortcut in my component.
Any good ideas for making both work? Maybe the `useKbar` hook exposes some state property I could use to avoid triggering my custom shortcuts?
Answers:
username_1: Hey @username_0 apologies for the late response – can you early-return within `useWindowEvent` if the event's default has been prevented?
```ts
useWindowEvent("keydown", event => {
if (event.defaultPrevented) return;
// const activeElem...
})
```
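Folded into the hook from your first message, that might look like this (just a sketch, assuming the same `useWindowEvent` helper):
```ts
import { useWindowEvent } from "./useWindowEvent"

export function useKeyboardShortcut(key: string, callback: () => void, metaKey?: boolean) {
  useWindowEvent("keydown", (event) => {
    // bail out if another handler (e.g. kbar) already claimed this event
    if (event.defaultPrevented) return

    const activeElem = document.activeElement
    if (
      event.key === key &&
      event.metaKey === !!metaKey &&
      activeElem?.getAttribute("role") !== "menu" &&
      activeElem?.tagName !== "INPUT" &&
      activeElem?.tagName !== "TEXTAREA"
    ) {
      event.preventDefault()
      event.stopPropagation()
      callback()
    }
  })
}
```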
username_0: Thanks @username_1, I'll give it a try, but this might not solve the issue. The behavior seemed non-deterministic: sometimes my custom shortcut would fire first and sometimes the kbar shortcut would, so checking the prevented flag might not always yield the expected result.
username_1: Let's get an understanding of why they're firing in arbitrary order before tackling this – events _should_ fire in a consistent order, so figuring that out will give us a good idea of how to approach a solution
Status: Issue closed
username_2: @username_0 did using `useWindowEvent` solve your problem?
username_0: Unfortunately I left the project where I was using kbar, so I didn't get to try it out; I just replaced my shortcut with a shift modifier in my local component.
pytorch/pytorch | 1001083877 | Title: `conda install` from pytorch-nightly channel installs CPU version even from a clean environment.
Question:
username_0: ## 🐛 Bug
A clean Python 3.9 environment fails to install the GPU version of PyTorch when running the following:
`conda install pytorch torchvision -c pytorch-nightly -c conda-forge`
Happens on both Windows and Linux.
## To Reproduce
Steps to reproduce the behavior:
1. Run `conda install pytorch torchvision -c pytorch-nightly -c conda-forge` in a Python 3.9 environment and observe the `cpuonly` version being installed
## Expected behavior
The GPU version of PyTorch is installed as expected.
## Environment
- PyTorch Version: Attempted 1.10
- OS (e.g., Linux): Linux/Windows
- How you installed PyTorch (`conda`, `pip`, source): Conda
- Python version: 3.9
- CUDA/cuDNN version: 11.1
- GPU models and configuration: RTX 3090, A100
Answers:
username_1: The [install instructions](https://pytorch.org/get-started/locally/) show:
```
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch-nightly -c nvidia
```
as the install command for the nightly with the CUDA11.1 runtime, so could you replace the `conda-forge` channel with `nvidia`? (I was running into the same issue last week)
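Once installed, a quick sanity check that the CUDA build actually landed (assuming a working NVIDIA driver):
```python
import torch

print(torch.__version__)          # nightly version string
print(torch.version.cuda)         # e.g. "11.1"; None on a cpuonly build
print(torch.cuda.is_available())  # should be True on a GPU machine
```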
username_2: Closing since this sounds like it is expected? The instructions indeed say to use the `nvidia` channel. Feel free to reopen if I'm missing something.
Status: Issue closed
|
striot/striot | 320623022 | Title: Cabal configure problem (Haskell 8.4.2 and striot-0.1.0.4)
Question:
username_0: I've just upgraded to Haskell 8.4.2 on Windows 10. When I try to run `cabal configure` I hit this problem:
-----
```
$ cabal configure
Resolving dependencies...
Warning: solver failed to find a solution:
Could not resolve dependencies:
[__0] trying: striot-0.1.0.4 (user goal)
[__1] next goal: base (dependency of striot)
[__1] rejecting: base-4.11.1.0/installed-4.1... (conflict: striot => base>=4.9
&& <4.10)
[__1] fail (backjumping, conflict set: base, striot)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: striot, base
Trying configure anyway.
Configuring striot-0.1.0.4...
cabal.exe: Encountered missing dependencies:
HTF -any, base ==4.9.*, time ==1.6.*
```
Does anyone have any ideas how to fix this?
Answers:
username_1: I believe the issue is caused by the base package for Haskell 8.4.2 being `base-4.11.1.0`, which our build file restricts to `<4.10`. Changing the dependency in the `striot.cabal` file at [L22](https://github.com/striot/striot/blob/master/striot.cabal#L22) and [L29](https://github.com/striot/striot/blob/master/striot.cabal#L29) would potentially fix this error, but could introduce others. I do not know which versions of the packages we use are compatible with one another, or whether they are all supported in Haskell 8.4.2, but I'll have a look tomorrow and get back to you.
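For reference, the relaxed bound might look roughly like this in the `build-depends` sections (an untested sketch; other pinned packages such as `time ==1.6.*` from the error above may need similar bumps for GHC 8.4.2):
```
build-depends: base >=4.9 && <4.12   -- was: base >=4.9 && <4.10
```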
username_0: Hi Adam
That’s really helpful thanks.. I’ll revert to 8.2.1
Regards
Paul
Status: Issue closed
|
GC-spigot/AdvancedEnchantments | 646654235 | Title: PlayerMoveEvent spam
Question:
username_0: Plugin version: v7.9.12
Error log: https://pastebin.com/fw3smHyT
Server: 1.12.2 (PaperSpigot)
Please fix.
Answers:
username_0: https://pastebin.com/7fhP9pHF please
Status: Issue closed
username_1: I already told you the issue; stop making new issues. Your enchantment 'plaguecarrier' has an issue with its conditions.
binghe819/study-effective-java | 842670448 | Title: Week 16 Study: Items 78-80
Question:
username_0: ### Content
* [Item 78. Synchronize access to shared mutable data](https://github.com/username_0/TIL/blob/master/JAVA/Effective%20Java/item78.md)
* [Item 79. Avoid excessive synchronization](https://github.com/username_0/TIL/blob/master/JAVA/Effective%20Java/item79.md)
* [Item 80. Prefer executors, tasks, and streams to threads](https://github.com/username_0/TIL/blob/master/JAVA/Effective%20Java/item80.md)
<br>
### Thoughts
* I haven't used threads much, so this chapter felt really difficult to approach
Answers:
username_1: ### Content
[Item 78. Synchronize access to shared mutable data](https://github.com/username_1/TIL/blob/master/Books/EffectiveJava/Effective%20Java%20item%2078.md)
[Item 79. Avoid excessive synchronization](https://github.com/username_1/TIL/blob/master/Books/EffectiveJava/Effective%20Java%20item%2079.md)
[Item 80. Prefer executors, tasks, and streams to threads](https://github.com/username_1/TIL/blob/master/Books/EffectiveJava/Effective%20Java%20item%2080.md)
### Thoughts
This was hard because I've never run into concurrency issues myself... especially the last item, since I've never used an executor before haha. Still, I think I learned a lot of new things by studying unfamiliar material.
username_2: - [Item 78. Synchronize access to shared mutable data](https://github.com/username_2/EffectiveJava/blob/main/item78.md)
- [Item 79. Avoid excessive synchronization](https://github.com/username_2/EffectiveJava/blob/main/item79.md)
- [Item 80. Prefer executors, tasks, and streams to threads](https://github.com/username_2/EffectiveJava/blob/main/item80.md)
- Thoughts
I've also mostly used threads and never properly used executors, so I should get into the habit of using executors from now on. And I think three items was just the right amount.. haha
username_3: - [Item 78. Synchronize access to shared mutable data](https://github.com/username_3/EffectiveJava/blob/master/src/concurrencyEx/item78/item78.md)
- [Item 79. Avoid excessive synchronization](https://github.com/username_3/EffectiveJava/blob/master/src/concurrencyEx/item79/item79.md)
username_4: ### Content
- [Item 80. Prefer executors, tasks, and streams to threads](https://github.com/username_4/study/blob/master/Effective%20Java/Item80.md)
### Thoughts
I definitely realized this is an area where I need more study.. lesson learned haha
Live-Charts/Live-Charts | 372696142 | Title: Mapper Only Applied On Hover on Stack Column Series
Question:
username_0: I'm not sure if I'm doing something wrong here, but basically I'm trying to change the colour of the stacked column series based on the value. It does change the colour, but only when the mouse hovers over the column.
```csharp
public partial class PointStateExample : UserControl
{
    public PointStateExample()
    {
        InitializeComponent();

        ColumnMapper = Mappers.Xy<ObservableValue>()
            .X((item, index) => index)
            .Y(item => item.Value)
            .Fill(item => item.Value > 5 ? GreenBrush : RedBrush)
            .Stroke(item => item.Value > 5 ? GreenBrush : RedBrush);

        ColumnMapperT = Mappers.Xy<ObservableValue>()
            .X((item, index) => index)
            .Y(item => item.Value)
            .Fill(item => item.Value > 0 ? TransBrush : TransBrush)
            .Stroke(item => item.Value > 0 ? TransBrush : TransBrush);

        SeriesCollection = new SeriesCollection
        {
            new StackedColumnSeries
            {
                Values = new ChartValues<ObservableValue> { new ObservableValue(4), new ObservableValue(6), new ObservableValue(1), new ObservableValue(8) },
                StackMode = StackMode.Values, // this is not necessary, values is the default stack mode
                DataLabels = true,
                Configuration = ColumnMapperT
            },
            new StackedColumnSeries
            {
                Values = new ChartValues<ObservableValue>
                {
                    new ObservableValue(2),
                    new ObservableValue(5),
                    new ObservableValue(6),
                    new ObservableValue(7)
                },
                StackMode = StackMode.Values,
                DataLabels = true,
                Configuration = ColumnMapper
            }
        };

        RedBrush = new SolidColorBrush(Color.FromRgb(238, 83, 80));
        GreenBrush = new SolidColorBrush(Color.FromRgb(127, 255, 0));
        TransBrush = Brushes.Transparent;

        DataContext = this;
    }

    public Brush RedBrush { get; set; }
    public Brush GreenBrush { get; set; }
    public Brush TransBrush { get; set; }
    public CartesianMapper<ObservableValue> ColumnMapper { get; set; }
    public CartesianMapper<ObservableValue> ColumnMapperT { get; set; }
    public SeriesCollection SeriesCollection { get; set; }
    public string[] Labels { get; set; }
    public Func<double, string> Formatter { get; set; }
}
```
and the XAML file:
```
[Truncated]
<lvc:CartesianChart.AxisX>
<lvc:Axis Title="Browser"
Labels="{Binding Labels}"
Separator="{x:Static lvc:DefaultAxes.CleanSeparator}" />
</lvc:CartesianChart.AxisX>
<lvc:CartesianChart.AxisY>
<lvc:Axis Title="Usage" LabelFormatter="{Binding Formatter}"></lvc:Axis>
</lvc:CartesianChart.AxisY>
</lvc:CartesianChart>
</Grid>
</UserControl>
```
It should look like the image below as soon as the window is loaded, without hovering.

version: LiveCharts.0.9.7
Answers:
username_1: Having the same issue; the mapper works correctly with most other series types, but not StackedColumn.
username_2: I know this thread is quite stale, but it's worth asking: did anyone figure this out?
dmwm/Docker | 196119251 | Title: Write vulture checker for changed code for PR
Question:
username_0: Vulture finds unused code.
We could check which files a PR changes and generate a vulture report restricted to those files. We should do this both with and without the testing directory, to find parts of the code that are only used by the tests.
It's not perfect, though: it misses things that are imported or called indirectly.
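A minimal sketch of what that checker could look like (assuming the PR branch is checked out, `vulture` is on the PATH, and `src`/`test` stand in for the real directory names):
```python
import subprocess

def changed_files(base: str = "origin/master") -> set[str]:
    """Python files touched by the current branch/PR, relative to the repo root."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {f for f in out.splitlines() if f.endswith(".py")}

def vulture_report(paths: list[str], restrict_to: set[str]) -> list[str]:
    """Run vulture over `paths`, keeping only findings in the changed files."""
    # vulture exits non-zero when it finds dead code, so no check=True here
    result = subprocess.run(["vulture", *paths], capture_output=True, text=True)
    # each report line looks like "path/to/file.py:42: unused function 'f' (60% confidence)"
    return [line for line in result.stdout.splitlines()
            if line.split(":", 1)[0] in restrict_to]

if __name__ == "__main__":
    changed = changed_files()
    for label, paths in [("including tests", ["src", "test"]),
                         ("excluding tests", ["src"])]:
        print(f"== {label} ==")
        print("\n".join(vulture_report(paths, changed)))
```
Findings that appear only in the "excluding tests" run point at code that is exercised solely by the test suite.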