Columns: repo_name (string, length 4–136), issue_id (string, length 5–10), text (string, length 37–4.84M)
talkncloud/tnc-cup-client
798125709
Title: use proper currency symbol Question: username_0: **Is your feature request related to a problem? Please describe.** Users can choose the currency from a list of currencies, not all currencies use $. The client should display the correct currency. **Describe the solution you'd like** show the correct currency e.g. USD = $, EUR = € **Describe alternatives you've considered** Just using $, this might be confusing for people using EUR Answers: username_0: symbol is now available in the response (dev): { "system": { "version": "0.0.7", "currencySymbol": "$" } }
frictionlessdata/frictionlessdata.io
170448037
Title: [super] Publishing data tutorials / patterns Question: username_0: _From @rgrp on December 1, 2013 20:28_ * [x] general instructions - http://data.okfn.org/publish * [x] tabular data #87 - http://data.okfn.org/doc/publish-tabular * [ ] geo data #90 - http://data.okfn.org/doc/publish-geo * [x] any kinds of data #123 - http://data.okfn.org/doc/publish-any Common: * [x] Putting your data packages online - http://data.okfn.org/doc/publish-online * [ ] Creating DataPackage.json * [ ] FAQs re standard conventions etc - #154 _Copied from original issue: frictionlessdata/project#91_ Answers: username_0: _From @rgrp on February 8, 2014 10:2_ @tlevine if you have done any data packaging please feedback here re improvements in process and instructions (current tabular tutorial is up at http://data.okfn.org/publish and could no doubt be improved). geo tutorial is in progress ... username_0: _From @tlevine on August 18, 2014 9:4_ I only just noticed this somehow. The main feedbacks are that 1. I'd like a thing that writes the package for, akin to `npm init`. 2. The data package specification should allow nested CSV files. I [did it anyway](http://thomaslevine.com/dada/open-data-500-data-package/), but I recall that this was improper. username_0: _From @rgrp on August 17, 2014 14:17_ Terminology question: should it be "publish" or "package" data tutorials etc (publish to me in this context = package up then put online). username_0: _From @rgrp on August 17, 2014 13:50_ @peterdesmet this is the main issue on the publishing documentation side that i'm working through @jalbertbowden any feedback you have based on your experience warmly welcomed here. username_0: _From @rgrp on August 18, 2014 9:9_ @tlevine great feedback - `npm init` already exists its `dpm init` - see https://github.com/okfn/dpm#initialize-create-a-data-package. Perhaps we should mention this better in the tutorial ... (dpm is linked from the tools page but ...) - not quite sure what you mean by nested csv - could you explain a bit more (link to previews of csv via datapipes very help here - you can even do a combo of webshot.okfnlabs.org + http://datapipes.okfnlabs.org/html to get csv screenshots on the fly ...) username_0: _From @tlevine on August 18, 2014 9:29_ It was a while ago that I did this, so it's possible the tutorial was different then. By "nested" CSV, I mean CSV with a column of type CSV. username_0: _From @jalbertbowden on August 20, 2014 18:49_ i'm going to catch up on all of this...but off the top of my head, now that dat is in alpha, i think we should all start dabbling with it and provide feedback username_0: _From @rgrp on August 18, 2014 9:38_ @tlevine ah ok - so this is like the existing thing of columns of type json ... Please keep the feedback coming - really useful! username_0: _From @rgrp on May 6, 2016 11:3_ Is this now a DUPLICATE of #182?
JuliaLang/julia
109138367
Title: Segfault in threading: complex numbers Question: username_0: Segfaults seem to occur for any type of operations on complex numbers. I ran the following code: ``` using Base.Threads z = 1 + 2im @threads all for i = 1:10 z + 2 end ``` This is the stack trace: ``` #0 0x00007ffdf016402b in julia_+_21471 ( z=<error reading variable: DWARF-2 expression error: DW_OP_reg operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>, x=<error reading variable: DWARF-2 expression error: DW_OP_reg operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at complex.jl:123 #1 0x00007ffff62c519b in jl_apply (nargs=2, args=0x7ffdf135cca0, f=<optimized out>) at julia.h:1337 #2 jl_apply_unspecialized (meth=<optimized out>, meth=<optimized out>, nargs=2, args=0x7ffdf135cca0) at gf.c:29 #3 jl_apply_generic (F=0x7ffdf2117410, args=0x7ffdf135cca0, nargs=2) at gf.c:1672 #4 0x00007ffdf0166130 in julia_#1###_threadsfor#6594_21470 () at threadingconstructs.jl:2 #5 0x00007ffdf016618d in jlcall_#1###_threadsfor#6594_21470 () #6 0x00007ffff633b15a in jl_apply (nargs=<optimized out>, args=0x7ffdf1be8018, f=0x7ffdf479db50) at julia.h:1337 #7 ti_run_fun (f=0x7ffdf479db50, args=0x7ffdf1be8010) at threading.c:149 #8 0x00007ffff633b416 in ti_threadfun (arg=0x6a8b00) at threading.c:202 #9 0x00007ffff5f1a182 in start_thread (arg=0x7ffdf135d700) at pthread_create.c:312 #10 0x00007ffff5c4747d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 ``` The stack trace for other operators, `-, *, /, ^` were all very similar (`julia_+_21471` changes depending on the operator) Answers: username_0: I guess this similar to issue #13255. Probably the same? username_0: Here is a gist to produce on instructions to reproduce these segfaults: https://gist.github.com/username_0/dc8f5912a1080415ff6b username_1: Update: I can reliably reproduce the crash in graph500. From looking at stack traces from several crashes, it seems the GC barrier is leaking somehow. I have seen thread 0 collecting without all other threads waiting in the barrier. A possible reason is that some threads are finishing their work function before hitting GC, when thread 0 hits jl_gc_collect. Still looking into it with @vtjnash. username_2: @username_1 Does #14190 makes any difference? username_2: With #14190, all the thread other than the one being scanned is waiting in `jl_wait_for_gc` correctly. I can still get a segfault sometimes and it looks like it's hitting the `FIXME` in `gc_mark_task_stack` since it can't handle stack address from other tasks correctly. username_2: Backtrace on #14190 https://gist.github.com/username_2/1eeec73226e9553b3a7c username_2: The following patch fixes the segfault for me on top of #14190. It now dead locks since one thread is holding a pthread lock while the GC is waiting for it. This is the problem I'm trying to address in #14190 and should be fixed when the codegen part is done. 
```diff diff --git a/src/gc.c b/src/gc.c index 2c486b3..db2c166 100644 --- a/src/gc.c +++ b/src/gc.c @@ -1774,20 +1774,22 @@ static void gc_mark_task_stack(jl_task_t *ta, int d) { int stkbuf = (ta->stkbuf != (void*)(intptr_t)-1 && ta->stkbuf != NULL); // FIXME - we need to mark stacks on other threads - int curtask = (ta == jl_all_task_states[0].ptls->current_task); + int tid = ta->tid; + jl_tls_states_t *ptls = jl_all_task_states[tid].ptls; + int curtask = (ta == ptls->current_task); if (stkbuf) { #ifndef COPY_STACKS - if (ta != jl_root_task) // stkbuf isn't owned by julia for the root task + if (ta != ptls->root_task) // stkbuf isn't owned by julia for the root task #endif gc_setmark_buf(ta->stkbuf, gc_bits(jl_astaggedvalue(ta))); } if (curtask) { - gc_mark_stack((jl_value_t*)ta, *jl_all_pgcstacks[0], 0, d); + gc_mark_stack((jl_value_t*)ta, *jl_all_pgcstacks[tid], 0, d); } else if (stkbuf) { ptrint_t offset; #ifdef COPY_STACKS - offset = (char *)ta->stkbuf - ((char *)jl_stackbase - ta->ssize); + offset = (char *)ta->stkbuf - ((char *)ptls->stackbase - ta->ssize); #else offset = 0; #endif ``` username_2: With the latest commit in https://github.com/JuliaLang/julia/pull/14190 I can run the graph5000 example many times without segfault or dead lock now. =) username_2: @username_0 All the tests in your gists passes on #14190 now. The last one is failing because of a bug in the test https://github.com/username_0/MT-Workloads/issues/3 . username_0: @username_2 Thanks for pointing that out. I have fixed that issue. username_0: @username_2 I have been getting a segfault on the `ALS.jl` gist, even with your fixes. Is that passing for you? username_2: I can also reproduce the segfault sometimes now. It seems that someone wrote a NULL pointer to the binding remset. Will check later. username_2: The issue I saw above should be fixed by https://github.com/JuliaLang/julia/pull/14307, not sure if it's the only issue though. There's also a few other fixes I haven't committed or finished testing yet to #14190 related to finalizers (move their execution outside gc and allow GC in them). username_2: @username_0 With #14060 merged, your tests doesn't work anymore. username_1: Should be straightforward to update them. username_0: @username_2 @username_1 My gists are now updated. I tried to do a `git pull` on this branch and I found that an automatic merge wasn't possible because of a conflict in `test/threads.jl`. These seem like extra test cases. Can I just stash this and pull your branch to test it further? username_2: I've rebased my branch on current master and also includes all the related fixed I've committed (https://github.com/JuliaLang/julia/pull/14301). You can simply reset your local branch unless you have some other fixes. username_0: @username_2 Thanks, all the gists seem to run now without segfaults. Status: Issue closed
zedshaw/mongrel2
37405227
Title: unnecessary(?) implementation of get/setcontext on arm breaks armhf Question: username_0: On armhf, the test suite fails with the following error message: ```` sh ./tests/runtests.sh Running unit tests: Segmentation fault ERROR in test tests/bstr_tests: make[1]: *** [tests] Error 1 ```` This is caused by the implementation of getcontext/setcontext embedded in src/task/. It's just not compatible with armhf. And it seems unnecessary even on armel, because by now these functions are implemented by the libc. (But I only verified that on debian - not sure about other linux distributions.) If it's true that all (relevant) linux distributions do implement these functions, I can provide a patch to disable the embedded version. Status: Issue closed Answers: username_1: Fixed by #253
fairlearn/fairlearn
679196764
Title: Interactive dashboard doesn't show on Azure Compute Instances, either JupyterLab or Jupyter Notebooks Question: username_0: If I setup a local Conda environment and run FairLearn dashboards, it works: ![image](https://user-images.githubusercontent.com/32428960/90259470-5d26ee00-de4a-11ea-883a-b77f33f2fd16.png) If I run the same notebook -and libraries- on an Azure Compute Instance, it doesn't. More specifically: - On JupyterLab, it reports "Error displaying widget: model not found" ![image](https://user-images.githubusercontent.com/32428960/90259651-aaa35b00-de4a-11ea-9318-21f486b6cdb1.png) - On a Jupyter Notebook (running on Azure Compute Instance), the dashboard shows up but when I try using it, it gets stuck on this window: ![image](https://user-images.githubusercontent.com/32428960/90259875-066de400-de4b-11ea-8962-899c3a73ad48.png) I checked the pip libraries, I simply have much more on Azure than locally but the common ones are the same (latest) version, installed today. I also read that JupyterLab is still non supported, but I supposed that Jupyter Notebooks are supported, even on Azure, aren't they? Thanks Answers: username_1: @username_0 you're right in that JupyterLab isn't support yet. It should work in Jupyter on Azure. You wrote "Azure Compute Instance" so I assume it's an [AzureML Compute Instance](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance). The dashboard itself is showing up, just the calculation seems to not finish. Sometimes it does take a little time to calculate if the dataset size is really large. I understand from your description that that can't be the case since it worked locally with the same data, right? Please confirm the size of the data (# of rows). Something that jumped out at me from your code is that you're passing the `sensitive_feature_names` as an `np.array` as opposed to a list of strings. It doesn't make a difference when I'm running in v0.4.6, though. Finally, I can't tell from your code what type `ys_pred` is. It should be a dictionary like `{"unmitigated": unmitigated_predictor.predict(X_test)}`, but I can't be sure from the partial snippet. If you can reproduce the same issue with a minimal example that would be very helpful. username_0: Hi @username_1 , thanks for your answer. I can't say why, but it works now. I just rebooted the VM. I confirm it's this AzureML Compute Instance: ![image](https://user-images.githubusercontent.com/32428960/90600276-8c48b100-e1f6-11ea-8f7b-6162e0f84d64.png), even after reboot. ys_pred is a dict whose first element is the model_id and the second one is the array with the predictions as described at [this link](url) ``` {'mauromi_model_classifier_LR:7': array([0, 0, 0, ..., 0, 0, 1]), 'mauromi_model_classifier_SVM:6': array([0, 0, 0, ..., 0, 0, 1]), 'mauromi_model_classifier_CBC:6': array([0., 0., 0., ..., 0., 0., 0.])} ``` Anyway, problem solved. Thanks again, Mauro Status: Issue closed
rust-lang/rust
133581938
Title: Confusing help message on an attempt to call a module-local macro Question: username_0: ```rust mod m { macro_rules! k { () => () } } fn main() { k!(); } ``` ``` <anon>:8:5: 8:6 error: macro undefined: 'k!' <anon>:8 k!(); ^ <anon>:8:5: 8:6 help: did you mean `k!`? ``` The help message is off the point. - https://play.rust-lang.org/?gist=f707c254c6feded72200&version=nightly Answers: username_1: I'm looking at it. The function in cause is [find_best_match_for_name](https://github.com/rust-lang/rust/blob/master/src/libsyntax/util/lev_distance.rs#L49). Status: Issue closed username_0: Already fixed by #31707, closing
jart/cosmopolitan
1041158597
Title: noinline attribute conflicts Question: username_0: When trying to define my own macro for `__attribute__((noinline))`, getting this error: ``` In file included from <command-line>: ./cosmopolitan/cosmopolitan.h:533:18: error: expected ')' before '__attribute__' 533 | #define noinline __attribute__((__noinline__)) | ^~~~~~~~~~~~~ ../../source/m3_config_platforms.h:77:40: note: in expansion of macro 'noinline' 77 | # define M3_NOINLINE __attribute__((noinline)) | ^~~~~~~~ ../../source/m3_compile.c:25:8: note: in expansion of macro 'M3_NOINLINE' 25 | static M3_NOINLINE | ^~~~~~~~~~~ compilation terminated due to -Wfatal-errors. ``` As a workaround, I can do this: ``` # if defined(noinline) # define M3_NOINLINE noinline # else # define M3_NOINLINE __attribute__((noinline)) # endif ``` Answers: username_1: @username_0, yes, it's an issue. This already [came up previously in the context of LuaJIT discussion](https://github.com/username_3/cosmopolitan/issues/272#issuecomment-921654759). @username_2, do you want to submit a PR to fix this, as you mentioned earlier? username_2: I did a find-and-replace to change `noinline` to `dontinline` but the change led to a complaint in a chibicc compilation `o/tokenize.chibicc.o` ``` libc/integral/c.inc:250: #define dontinline __attribute__((__noinline__)) ^ unknown function attribute third_party/chibicc/tokenize.c:123: dontinline int read_ident(char *start) { ^ unknown function attribute `make MODE= -j4 o//third_party/chibicc/tokenize.chibicc.o` exited with 1: o//third_party/chibicc/chibicc.com.dbg -fno-common -include libc/integral/normalize.inc -DIMAGE_BASE_VIRTUAL=0x400000 -c -o o//third_party/chibicc/tokenize.chibicc.o third_party/chibicc/tokenize.c ``` Also I thought it was `__attribute__((noinline))`? What difference do the underscores make? username_3: @username_2 The underscores simply help avoid conflicts. I'm surprised chibicc isn't working with `__noinline__`. We should fix that. @username_0 Please accept our apologies for the inconvenience. Playing fast and loose with names helped the project be rapidly developed. However as it grows the user needs to be the one in charge of names, so we're working to resolve these conflicts by either: 1. Adding underscore prefixes to the non-standard names we've defined, or 2. Using highly creative names, e.g. `printfesque`, that don't appear elsewhere In any case thanks for the report and we'll do the necessary refactorings shortly. @username_2 let me know if you need my help troubleshooting chibicc consume_attribute(). username_2: chibicc does work with `__noinline__`. the chibicc parser checks for attributes by removing the underscores and then checking for `noinline`, which I had blindly changed by find-and-replace. Anyways I submitted #312, it uses `dontinline` instead of `noinline`. Status: Issue closed username_3: Now that https://github.com/username_3/cosmopolitan/pull/312 is in I'm pushing new binaries to the website which are available here: - https://justine.lol/cosmopolitan/cosmopolitan.zip - https://justine.lol/cosmopolitan/download.html Let me know if you need a semver release on GitHub for your CI system. In which case we might call it 2.0 due to the breaking change with the `/zip/` vs. old `zip:path` way of doing things.
angular/angular
514593176
Title: Skip Location Change not working Question: username_0: # 🐞 bug report ### Affected Package The issue is caused by package @angular/.... ### Is this a regression? Yes, the previous version in which this bug was not present was: .... ### Description When executing code inside a guard to redirect: `return this.router.createUrlTree(['/view'], { skipLocationChange: true });` skipLocationChange is not working; that is to say, I want to keep the previous url. ## 🔬 Minimal Reproduction https://stackblitz.com/... ## 🔥 Exception or Error <pre><code> </code></pre> ## 🌍 Your Environment **Angular Version:** <pre><code> </code></pre> **Anything else relevant?** Answers: username_1: The functionality you need is still an open feature request at https://github.com/angular/angular/issues/27148. username_0: Ok, thanks. I can see https://github.com/angular/angular/issues/27148 Status: Issue closed username_0: # 🐞 bug report ### Affected Package The issue is caused by package "@angular/animations": "^8.2.5", "@angular/cdk": "^8.1.4", "@angular/common": "^8.2.5", "@angular/compiler": "^8.2.5", "@angular/core": "^8.2.5", "@angular/forms": "^8.2.5", "@angular/material": "^8.1.4", "@angular/platform-browser": "^8.2.5", "@angular/platform-browser-dynamic": "^8.2.5", "@angular/router": "^8.2.5", ### Is this a regression? Yes, the previous version in which this bug was not present was: .... ### Description When executing code inside a guard to redirect: `return this.router.createUrlTree(['/view'], { skipLocationChange: true });` skipLocationChange is not working; that is to say, I want to keep the previous url. ## 🔬 Minimal Reproduction https://stackblitz.com/... ## 🔥 Exception or Error <pre><code> </code></pre> ## 🌍 Your Environment **Angular Version:** <pre><code> </code></pre> **Anything else relevant?**
apache/cloudstack
442753354
Title: Volume name for failed and successful snapshots Question: username_0: ##### ISSUE TYPE * Feature Idea ##### COMPONENT NAME ~~~ API, UI ~~~ ##### CLOUDSTACK VERSION ~~~ 4.11.2 ~~~ ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY We would like to see additional information such as Volume name, supplied for events related to failed/successful snapshots so that these messages are distinguishable. e.g. the customer would like to see the Volume name for failed and successful snapshots: +--------------------------------------------------------------------+ | description | +--------------------------------------------------------------------+ | Successfully completed taking snapshot | | Error while taking snapshot | | creating snapshot for volume: fcc93103-605f-493a-882b-2d0fbd239a11 | +--------------------------------------------------------------------+ [Truncated] | description | +--------------------------------------------------------------------+ | Successfully completed taking snapshot | | Error while taking snapshot | | creating snapshot for volume: fcc93103-605f-493a-882b-2d0fbd239a11 | +--------------------------------------------------------------------+ ~~~ ##### ACTUAL RESULTS ~~~ ~~~ Answers: username_1: I will take a look @username_0 username_2: @shwstppr can you pick this up? Status: Issue closed
JuliaTime/TimeZones.jl
360517124
Title: Failure to download IANA database Question: username_0: I consistently get this error on Ubuntu -- and some have reported it to me on Windows. ```julia Building TimeZones → `~/.julia/packages/TimeZones/wytr8/deps/build.log` ┌ Error: Error building `TimeZones`: │ [ Info: Downloading 2018e tzdata │ ERROR: LoadError: IOError: could not spawn `curl -g -L -f -o /home/parallels/.julia/packages/TimeZones/wytr8/deps/tzarchive/tzdata2018e.tar.gz https://www.iana.org/time-zones/repository/releases/tzdata2018e.tar.gz`: no such file or directory (ENOENT) │ Stacktrace: │ [1] _jl_spawn(::String, ::Array{String,1}, ::Cmd, ::Tuple{RawFD,RawFD,RawFD}) at ./process.jl:367 │ [2] (::getfield(Base, Symbol("##495#496")){Cmd})(::Tuple{RawFD,RawFD,RawFD}) at ./process.jl:509 │ [3] setup_stdio(::getfield(Base, Symbol("##495#496")){Cmd}, ::Tuple{RawFD,RawFD,RawFD}) at ./process.jl:490 │ [4] #_spawn#494(::Nothing, ::Function, ::Cmd, ::Tuple{RawFD,RawFD,RawFD}) at ./process.jl:508 │ [5] _spawn at ./process.jl:504 [inlined] │ [6] #run#505(::Bool, ::Function, ::Cmd) at ./process.jl:652 │ [7] run at ./process.jl:651 [inlined] │ [8] download(::String, ::String) at ./download.jl:48 │ [9] tzdata_download(::String, ::String) at /home/parallels/.julia/packages/TimeZones/wytr8/src/tzdata/download.jl:90 │ [10] macro expansion at ./logging.jl:310 [inlined] │ [11] #build#32(::Bool, ::Function, ::String, ::Array{String,1}, ::String, ::String, ::String) at /home/parallels/.julia/packages/TimeZones/wytr8/src/tzdata/build.jl:28 │ [12] #build at ./none:0 [inlined] │ [13] build(::String, ::Array{String,1}) at /home/parallels/.julia/packages/TimeZones/wytr8/src/tzdata/build.jl:72 │ [14] #build#5(::Bool, ::Function, ::String, ::Array{String,1}) at /home/parallels/.julia/packages/TimeZones/wytr8/src/TimeZones.jl:116 │ [15] build at /home/parallels/.julia/packages/TimeZones/wytr8/src/TimeZones.jl:116 [inlined] (repeats 2 times) │ [16] top-level scope at none:0 │ [17] include at ./boot.jl:317 [inlined] │ [18] include_relative(::Module, ::String) at ./loading.jl:1038 │ [19] include(::Module, ::String) at ./sysimg.jl:29 │ [20] include(::String) at ./client.jl:388 │ [21] top-level scope at none:0 │ in expression starting at /home/parallels/.julia/packages/TimeZones/wytr8/deps/build.jl:6 └ @ Pkg.Operations /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/Operations.jl:1068 ``` Answers: username_0: The problem is that `curl` is not installed by default: ``` parallels@parallels-vm:~$ curl -g -L -f -o /home/parallels/.julia/packages/TimeZones/wytr8/deps/tzarchive/tzdata2018e.tar.gz https://www.iana.org/time-zones/repository/releases/tzdata2018e.tar.gz The program 'curl' is currently not installed. You can install it by typing: sudo apt install curl ``` username_1: I'm using the Julia [`Base.download`](https://github.com/JuliaLang/julia/blob/3c8119f4c1fe3a83a488e04caa834f26432317a2/base/download.jl#L25) and not using `curl` directly. Do you have `wget` or `fetch` installed on your system? username_0: Yes, `wget` comes bundled with the Ubuntu distro. username_1: Fixed by https://github.com/JuliaLang/julia/commit/d6e43e25ff7ca57ff85342243fd072e51ae303d5 which will be included in [Julia 1.0.1](https://github.com/JuliaLang/julia/pull/28764) Status: Issue closed
digital-asset/daml
711781473
Title: Improve codegen UX Question: username_0: One of the major feedback from users going through the getting started guide is that they run into issues with the JavaScript code generation. This can be either that they forget to run the first code generation step or that they subsequently leave out a code generation step or don't upload a new package to the in-memory sandbox. This results in DAML code and UI code being out of sync and usually manifests in a cryptic `unknown template ID` error. The goal of this improvement is a UX where a user going through the getting started guide does not need to touch the code generation and we have a guarantee that the JavaScript and the DAML code are in sync and that the sandbox is deployed with the newest version of the DAML code. Achieving this goal for all supported code generations might be rather hard, but we want this UX at least for the "recommended path". Ultimately we want the `daml start` command to have a hot-reload feature: On keypress `r` in terminal running `daml start`: 1. compile a new dar package 2. generate new JavaScript code for the new dar package (possibly other code generation specified via an option) 3. upload and replace the current dar to the in-memory sandbox (currently not possible to delete any package from the sandbox) The UX achieved thereby would be very similar to popular UI frameworks like `flutter` or `npm` devserver. Answers: username_1: A few comments: 1. As you mentioned most of the issues are from the JS code and the DAML code being out of sync. However, just regenerating the code is not sufficient you need to run `npm install`. 2. Your description doesn't describe how you reload the JS code. I assume the idea is to have `npm start` running which will do this? Does `npm start` pick up a call to `npm install`? I thought that you need to restart it in this case which means that changing `daml start` on its own isn't sufficient regardless of what you do unless it also restarts `npm start`. username_0: Just checked, `npm start` will reload automatically when we overwrite the generated JS code. username_1: Fantastic! That makes this much nicer. So given this I would suggest the following steps: 1. Change our tests and docs to no longer suggest the call to `npm install`. 2. Add the `codegen:js` section to `daml.yaml` and make `daml codegen js` read it so that you don't need to pass the arguments. 3. Change `daml start` to call the codegens of all sections listed in `daml.yaml` (without any arguments, they read it themselves). username_0: Agreed with 2 and 3! username_1: Good point, I was thinking of the following calls to `npm install`, e.g., https://github.com/digital-asset/daml/blob/master/daml-assistant/integration-tests/src/DA/Daml/Assistant/CreateDamlAppTests.hs#L114 username_2: Could we not call the reset service and rerun the initialization script? I think if you change your DAML, it would be OK to wipe ledger state. username_1: No reset service please :scream: The package store is also the one thing that doesn't get reset (just confirmed that). That's a significant reason why resets are so much faster than restarting sandbox when you have large DARs. username_2: I see, that's a shame. username_0: @username_1 Is there any chance that we will get some way to clear the packages from the in-memory sandbox for the above purpose? username_1: I'm not aware of any remotely concrete plans. 
My vague idea for something like this would be to have a mode similar to the scenario service where you have a single (or potentially multiple) packages with a fixed package id that you can mutate. Status: Issue closed username_0: Fixed by #7562 .
fsbolero/Bolero
750086789
Title: NullPointerException during initial Router.inferWithModel on .NET 5.0 Question: username_0: I have a [small test project](https://github.com/username_0/PmaBolero) that I tried to run after upgrading to .NET 5.0 via Visual Studio. The following exception was thrown: ``` An exception of type 'System.TypeInitializationException' occurred in PmaBolero.Client.dll but was not handled in user code: 'The type initializer for '<StartupCode$PmaBolero-Client>.$PmaBolero.Client.Pages.Main' threw an exception.' Inner exceptions found, see $exception in variables window for more details. Innermost exception System.NullReferenceException : Object reference not set to an instance of an object. at Bolero.RouterImpl.getCtor(FSharpFunc`2 defaultPageModel, UnionCaseInfo case) at Bolero.RouterImpl.parseEndPointCase(FSharpFunc`2 getSegment, FSharpFunc`2 defaultPageModel, UnionCaseInfo case) at Bolero.RouterImpl.unionSegment(FSharpFunc`2 getSegment, FSharpFunc`2 defaultPageModel, Type ty) at Bolero.RouterImpl.getSegment(Dictionary`2 cache, FSharpFunc`2 defaultPageModel, Type ty) at Bolero.Router.inferWithModel[ep,model,msg](FSharpFunc`2 makeMessage, FSharpFunc`2 getEndPoint, FSharpFunc`2 defaultPageModel) at <StartupCode$PmaBolero-Client>.$PmaBolero.Client.Pages.Main..cctor() ``` - I re-installed .NET Core 3.1 and confirmed that the code runs fine. - Upgrading Bolero from 14 to 15 and updating Blazor packages didn't resolve the issue. - On my computer a Hello World Bolero 14 application ran with no issue. The related defaultModel and router within [Main.fs](https://github.com/username_0/PmaBolero/blob/master/src/PmaBolero.Client/Pages/Main.fs): ```fsharp let defaultModel = function | Home -> () | SignIn model -> Router.definePageModel model SignIn.initModel | SignUp model -> Router.definePageModel model SignUp.initModel | ViewProjects model -> Router.definePageModel model ViewGroup.Project.initModel | ViewProject (_, model) -> Router.definePageModel model ViewItem.Project.initModel | CreateProject model -> Router.definePageModel model Create.Project.initModel | EditProject (_, model) -> Router.definePageModel model Edit.Project.initModel | ViewEmployees model -> Router.definePageModel model ViewGroup.Employee.initModel | ViewEmployee (_, model) -> Router.definePageModel model ViewItem.Employee.initModel | CreateEmployee model -> Router.definePageModel model Create.Employee.initModel | EditEmployee (_, model) -> Router.definePageModel model Edit.Employee.initModel | ViewDepartments model -> Router.definePageModel model ViewGroup.Department.initModel | ViewDepartment (_, model) -> Router.definePageModel model ViewItem.Department.initModel //... let router = Router.inferWithModel SetPage (fun model -> model.Page) defaultModel ``` The only idea I have is that it's related to `Home` not calling Router.definePageModel and instead returning unit. Any help is much appreciated. Answers: username_1: Yes, I have a Bolero version 0.16 for .NET 5 that will be released very soon and fixes this issue. I'll update this issue when it's out! username_1: Bolero 0.16 is now released, and with it, this issue is fixed. Status: Issue closed
kubevirt/kubevirtci
631015849
Title: Update ceph Question: username_0: The version of ceph that gets installed via the `--enable-ceph` param is old and should be updated to a version that supports the [beta VolumeSnapshot api](https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-cis-volume-snapshot-beta/). This PR: https://github.com/kubevirt/kubevirt/pull/3220 includes manifests that will deploy a rook/ceph installation when the environment variable `KUBEVIRT_STORAGE=rook-ceph` is set. This is similar to how storage providers are handled in CDI. But it would also be nice to bake the provider into kubevirtci for k8s 1.17 and above.
bazelbuild/bazel
208728684
Title: Tensorflow building from source - PYTHONPATH not respected Question: username_0: This is a repost of: https://github.com/tensorflow/tensorflow/issues/7668#issuecomment-280930018 So I'm trying to build tensorflow from source, main reason is that I do not have root access and the `GLIBC` version was incompatible with the binaries. Additionally, I can not install packages on the python3. Steps so far: 1. Build `gcc-4.9.1` from source - SUCCESS 2. Build `bazel-0.4.4` from source - SUCCESS 3. Install all CUDA stuff - SUCCESS 4. Install extra packages in a separate directory (protobuf, nose, argparse, numpy, six etc..) - SUCCESS 5. Build `tensorflow` with the `bazel` binary - FAIL OS: ``` LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Distributor ID: CentOS Description: CentOS release 6.5 (Final) Release: 6.5 Codename: Final ``` The new `gcc` is installed in `/share/apps/barber/system/` together with the other library dependencies `gcc` needs (gmp, mpfr, mpc, elf). The `bazel` binary is also copied into `/share/apps/barber/system/bin` CUDA is installed under `/share/apps/barber/cuda` and CuDNN under `/share/apps/barber/cudnn`. The python I'm using is not the default one and lives in `/share/apps/python-3.6.0-shared/bin/python3`. The alternative directory for my own packages is `/share/apps/barber/system/lib/python3.6/site-packages/` (it contains protobuf, argparse, nose etc...). Given all this my environment has the following modified deifnitions: ``` export BARBER_PATH=/share/apps/barber export LD_LIBRARY_PATH=${BARBER_PATH}/system/lib/:${BARBER_PATH}/system/lib64/:/opt/gridengine/lib/linux-x64:/opt/openmpi/lib/ export PATH=${BARBER_PATH}/system/bin/:$PATH export CC=${BARBER_PATH}/system/bin/gcc export CXX=${BARBER_PATH}/system/bin/g++ export CUDA_ROOT=${BARBER_PATH}/cuda export CUDA_HOME=${CUDA_ROOT} export CUDNN_PATH=${BARBER_PATH}/cudnn export CPATH=${CUDNN_PATH}/include:$CPATH export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CUDA_ROOT}/lib64/:${CUDA_ROOT}/nvvm/lib64/:${CUDA_ROOT}/extras/CUPTI/lib64:${CUDNN_PATH}/lib64/ export PYTHONPATH=${BARBER_PATH}/system/lib/python3.6/site-packages/ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/share/apps/python-3.6.0-shared/lib/ alias python=/share/apps/python-3.6.0-shared/bin/python3 alias pip=/share/apps/python-3.6.0-shared/bin/pip ``` Getting back to the tensorflow build, I'm selecting corretly the python to use and the gcc to use when using CUDA. The `./configure` completes fine and works (I think I only had to change something to `_async`). However, when I try to run ``` bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package --verbose_failures ``` I get the following error: ``` WARNING: Output base '/home/ausername_0/.cache/bazel/_bazel_ausername_0/9abd1d3abe11b8f0417e465a29633fc7' is on NFS. This may lead to surprising failures and undetermined behavior. INFO: Found 1 target... 
ERROR: /home/ausername_0/.cache/bazel/_bazel_ausername_0/9abd1d3abe11b8f0417e465a29633fc7/external/farmhash_archive/BUILD.bazel:12:1: C++ compilation of rule '@farmhash_archive//:farmhash' failed: crosstool_wrapper_driver_is_not_gcc failed: error executing command (cd /home/ausername_0/.cache/bazel/_bazel_ausername_0/9abd1d3abe11b8f0417e465a29633fc7/execroot/tensorflow && \ exec env - \ LD_LIBRARY_PATH=/share/apps/barber/system/lib/:/share/apps/barber/system/lib64/:/opt/gridengine/lib/linux-x64:/opt/openmpi/lib/:/share/apps/barber/cuda//lib64/:/share/apps/barber/cuda//nvvm/lib64/:/share/apps/barber/cuda//extras/CUPTI/lib64:/share/apps/barber/cudnn_5_1/lib64/:/share/apps/barber/arrayfire-3/lib/:/share/apps/python-3.6.0-shared/lib/ \ PATH=/share/apps/java/bin/:/share/apps/barber/system/bin/:/opt/openmpi/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/bio/ncbi/bin:/opt/bio/mpiblast/bin:/opt/bio/EMBOSS/bin:/opt/bio/clustalw/bin:/opt/bio/tcoffee/bin:/opt/bio/hmmer/bin:/opt/bio/phylip/exe:/opt/bio/mrbayes:/opt/bio/fasta:/opt/bio/glimmer/bin:/opt/bio/glimmer/scripts:/opt/bio/gromacs/bin:/opt/bio/gmap/bin:/opt/bio/tigr/bin:/opt/bio/autodocksuite/bin:/opt/bio/wgs/bin:/opt/eclipse:/opt/ganglia/bin:/opt/ganglia/sbin:/usr/java/latest/bin:/opt/rocks/bin:/opt/rocks/sbin:/opt/gridengine/bin/linux-x64:/home/ausername_0/bin \ external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -fPIE -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 '-std=c++11' -MD -MF bazel-out/host/bin/external/farmhash_archive/_objs/farmhash/external/farmhash_archive/src/farmhash.d '-frandom-seed=bazel-out/host/bin/external/farmhash_archive/_objs/farmhash/external/farmhash_archive/src/farmhash.o' -iquote external/farmhash_archive -iquote bazel-out/host/genfiles/external/farmhash_archive -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/farmhash_archive/src -isystem bazel-out/host/genfiles/external/farmhash_archive/src -isystem external/bazel_tools/tools/cpp/gcc3 -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fno-canonical-system-headers -c external/farmhash_archive/src/farmhash.cc -o bazel-out/host/bin/external/farmhash_archive/_objs/farmhash/external/farmhash_archive/src/farmhash.o): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1. Traceback (most recent call last): File "external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc", line 41, in <module> from argparse import ArgumentParser ImportError: No module named argparse Target //tensorflow/tools/pip_package:build_pip_package failed to build INFO: Elapsed time: 16.723s, Critical Path: 0.88s ``` Thi suggests that the `bazel` build for some reason is ignoring my `$PYTHONPATH` and can not find argparse. If I run my python argparse is imported with no problems. Now, I really am not that much concerned with why this is happening, but more of how could I can bypass it to build `tensorflow`? I'm more than happy if I have to modify/hard code something into the `bazel` source code and rebuild it. However, I do need some pointers of where to do this. Related issues: https://github.com/tensorflow/tensorflow/issues/190 - most related. 
However, the solution there is only for the case of uncompatible gcc, no resolution for the import error. http://stackoverflow.com/questions/15093444/importerror-no-module-named-argparse - I can't install system packages https://github.com/rg3/youtube-dl/issues/4483 - does not help me for tensorflow https://github.com/tensorflow/tensorflow/issues/2860 - does not resolve the issue, but seems pretty similar https://github.com/tensorflow/tensorflow/issues/2021 - this shows this might be a `bazel` issue Answers: username_1: @username_2 can you take a look at this? Have you encountered something similar before? username_2: @username_0 If you look at [`crosstool_wrapper_driver_is_not_gcc`](https://github.com/tensorflow/tensorflow/blob/master/third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl) script in TensorFlow repo. At the very first line, there is a shebang `#!/usr/bin/env python`, indicating this script is using the python in your PATH, so I guess you'll have to make sure you have the right version of python in your path or change this shebang. `PYTHONPATH` only affects the python for running a `py_binary` built by Bazel, howevery `crosstool_wrapper_driver_is_not_gcc` is a python wrapper script in TF's custom CROSSTOOL for cuda compilation. Hope this could help. username_3: i fix this by change `crosstool_wrapper_driver_is_not_gcc ` shebang `#!/usr/bin/env python` to my python bin username_0: Unfortunately we have moved now to a new infrastructure where I no longer observe the problem, so can't confirm the fix works, but the newer tensorflow we are compiling here works. Feel free to close the issue. Status: Issue closed username_4: '/home/puneet/MySoftwares/PYTHONPACKAGES/2.7.5_gnu4.8.5/NUMPY/1.16.1/lib64/python2.7/site-packages/numpy-1.16.1-py2.7-linux-x86_64.egg/numpy/__init__.pyc' Here are the error messages, seems i may have to create some soft links to numpy in /usr/lib64/python2.7/site-packages. ``` Traceback (most recent call last): File "/home/puneet/.cache/bazel/_bazel_puneet/3bba6e21d84de0cdd7c3b508ea615d20/sandbox/processwrapper-sandbox/2/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api_1.runfiles/org_tensorflow/tensorflow/python/tools/api/generator/create_python_api.py", line 27, in <module> from tensorflow.python.tools.api.generator import doc_srcs File "/home/puneet/.cache/bazel/_bazel_puneet/3bba6e21d84de0cdd7c3b508ea615d20/sandbox/processwrapper-sandbox/2/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api_1.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 47, in <module> import numpy as np ImportError: No module named numpy ERROR: /home/puneet/MySoftwares/INSTALLATION_ROOT/TF/python2/tensorflow-1.12.0/tensorflow/BUILD:533:1: Couldn't build file tensorflow/_api/v1/__init__.py: Executing genrule //tensorflow:tf_python_api_gen_v1 failed (Exit 1) bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped) ```
connectome-neuprint/neuPrintExplorer
522342158
Title: ROI breakdown and sunburst graph disagree on input ROI percentages Question: username_0: Body ID 549610772 has the ROI breakdown and sunburst graph of input shown below. The percentage of SMP input is shown in yellow in the ROI breakdown, and there is a clear discrepancy between the percentage of SMP input in the ROI breakdown vs. in the sunburst graph. In addition, the sunburst graph shows the b'L as well as the MB(R/L) ROI, but the b'L is inside of the MB, so should not be displayed separately. <img width="1345" alt="smp2" src="https://user-images.githubusercontent.com/51955732/68783949-c652aa00-0609-11ea-9026-571631ee1b8d.PNG"> <img width="910" alt="smp" src="https://user-images.githubusercontent.com/51955732/68784251-45e07900-060a-11ea-98aa-b3360c2ff1cb.PNG"> Answers: username_1: Fixed in bb3af70ec590032 Status: Issue closed
crossbario/autobahn-testsuite
199574594
Title: Possibly flaky case 6.4.3? Question: username_0: Description of the case looks like this: ``` Same as Case 6.4.1, but we send message not in 3 frames, but in 3 chops of the same message frame. MESSAGE PARTS: PART1 = cebae1bdb9cf83cebcceb5 PART2 = f4908080 PART3 = 656469746564 ``` But in my server I receive the following: ```ascii read: cebae1bdb9cf83cebcceb5(11) -> valid utf8 read: caaf0607(4) -> valid utf8 (?) read: 5b5beff35b5b(6) -> invalid utf8 ``` Or ``` read: cebae1bdb9cf83cebcceb5(11) -> valid utf8 read: 96df7f52(4) -> invalid utf8 ``` Note that the first chop is always the same as in the description, but the second and third are not. Status: Issue closed Answers: username_0: Sorry, my fault – bug in my implementation =)
8398a7/abilitysheet
54430415
Title: font-awesome Question: username_0: font-awesome is not working in the production environment. The cause is that assets are being referenced from iidxas.tk/assets instead of iidxas.tk/abilitysheet/assets/. Since this symptom does not occur for the other assets, it appears to be a problem with font-awesome alone. As a countermeasure, [running under a sub-directory with rails + passenger](http://blog.kz-dev.com/archives/335) looks like a valid approach. Answers: username_0: It was necessary to set the relative path in config/environments/production.rb, in the form ```ruby config.action_controller.relative_url_root = '/abilitysheet' ``` and then run precompile. Status: Issue closed
symfony/symfony
57166593
Title: bootstrap_3_layout.html.twig template should be traitable Question: username_0: shouldn't the template `bootstrap_3_layout.html.twig` be reusable in the same way as `form_table_layout.html.twig`, i.e. through traitable inheritance? Changing ``` {% extends 'form_div_layout.html.twig' %} ``` into ``` {% use 'form_div_layout.html.twig' %} ``` as it was made for the `form_table_layout.html.twig` here: https://github.com/symfony/symfony/commit/6737bd36bd39f491ea2936551d7db72e29b944fd would give the possibility to reference blocks from that template like it is described in the docs: http://symfony.com/doc/current/cookbook/form/form_customization.html#referencing-blocks-from-inside-the-same-template-as-the-form Answers: username_1: Would you mind submitting a PR? username_0: sure! I've just tried with quick-pull-request but I will prepare the proper one yet today username_2: +1 Status: Issue closed
vim-airline/vim-airline
356263929
Title: AirlineRefresh command is not working Question: username_0: #### expected behavior redraw airline properly. Status: Issue closed Answers: username_1: Your mapping is not correct. I guess you want something along: `nnoremap <leader>sv :source $MYVIMRC<cr><Bar>:AirlineRefresh<cr>` Closing, as this is not an airline issue. Please read the help for details.
Wanchai/FTPbucket
145598160
Title: Multiple bitbucket user Question: username_0: Hello, Sorry for the stupid question, I am new to this and wonder if it's possible to set up credentials for multiple Bitbucket users? There are 2 of us working on the same project, and if both of us push to the project, we want FTPbucket to recognize both of us and push the files to the server. Thanks Answers: username_1: Hello there, you just need the credentials of an account who has access to the repository. The user is unimportant because the webhook will always be triggered by any change. -Keyjin Status: Issue closed
symfony/symfony
1106059356
Title: Why is symfony/http-client only require-dev and not require in symfony/mailgun-mailer ? Question: username_0: ### Symfony version(s) affected 6.x ### Description The constructors for `Symfony\Component\Mailer\Bridge\Mailgun\Transport\AbstractApiTransport` and `MailgunHttpTransport` both have a `HttpClientInterface $client` argument, so therefore I don't understand why `symfony/http-client` is in require-dev, as it will always be needed, even in production, won't it? Shouldn't it be in the require section of composer.json instead? [I'm coming from the context of Laravel 9.x (currently in beta) by the way, where symfony/mailer is a standard dependency, but if you need to use Mailgun transports, you have to `composer require symfony/mailgun-mailer` (just like in Symfony), but unless you also manually require the HTTP Client, it won't work without it.] ### How to reproduce Try to use mailgun-mailer in Laravel 9.x without also requiring HTTP Client (as it's require-dev). ### Possible Solution _No response_ ### Additional Context _No response_ Answers: username_1: @username_0 because of MailgunSmtpTransport, which is the third transport available in the package. `symfony/http-client` is only needed by 2 or the 3 transports username_1: @fabpot maybe the `mailgun` scheme should fallback to SMTP rather than API if the transport factory does not have an HTTPClient (while `mailgun+api` would always force using the API) ? username_0: Understand, but that doesn't make having it as require-dev make sense still. Should it not therefore use a composer 'suggest' line, explaining that http-client is needed if you're using certain transports? username_0: I don't think that would be a sensible change. username_1: The suggest line makes sense (but I would add it in `symfony/mailer` rather than each mailer bridge, as it applies to any API-based or HTTP-based transport) username_2: I think the deps are fine according to our policies. Instead of adding `suggest` line, we usually prefer throwing a LogicException telling that the component is missing. Looks like this would be a nice idea here (and maybe in other bridges) PR welcome! username_0: Indeed, I was about to say exactly that .. the pattern is repeated in all the 3rd-party/API transports. username_0: Oh, I think it did throw a LogicException .. but I didn't see it until I checked my logs, as something odd was happening with the new `spatie/laravel-ignition` package around shutdown errors and sessions :( Was just flagging this up, as there might be a number of similar requests from Laravel + Mailgun users when Laravel 9.0 finally releases.. username_0: Here's the exception. https://github.com/symfony/mailer/blob/5.4/Transport/AbstractHttpTransport.php#L36 Guess that's good enough.. username_0: @username_2 Feel free to close! username_2: this might be the place to look at next thanks for the discussion Status: Issue closed username_0: Yeah, whatever `spatie/laravel-ignition` was getting wrong wasn't a problem for me once I'd seen the LogicException and installed HttpClient, but I'm sure it's a) tangental to this, and b) Spatie will get it fixed in time ;)
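Editor's note on the thread above: the resolution the maintainers describe is the usual Symfony approach of failing fast with a `LogicException` when an optional package is missing. The snippet below is only an illustrative sketch of that pattern, assuming a made-up helper name and message; it is not the actual Symfony source (the real guard is in the `AbstractHttpTransport` file linked in the thread).

```php
<?php

use Symfony\Component\HttpClient\HttpClient;

// Illustrative sketch of the "throw a LogicException when the optional
// component is missing" pattern discussed above. The helper name and the
// exact message are hypothetical.
function assertHttpClientIsInstalled(): void
{
    // class_exists() returns false when symfony/http-client is not installed,
    // which is exactly the situation described in this issue.
    if (!class_exists(HttpClient::class)) {
        throw new \LogicException(
            'The Mailgun API transport requires an HTTP client. '
            .'Try running "composer require symfony/http-client".'
        );
    }
}
```

Running such a check before building the transport surfaces the missing dependency at boot time rather than as a confusing failure at send time, which is the point raised by the reporter.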
vuetifyjs/vuetify
496703370
Title: [Bug Report] persistent property not working on fullscreen dialog Question: username_0: ### Environment **Vuetify Version:** 2.0.18 **Vue Version:** 2.6.10 **Browsers:** Chrome 76.0.3809.132 **OS:** Windows 7 ### Steps to reproduce Visit the reproduction link and click on the button; you see a fullscreen dialog and a snackbar. Click on the snackbar and you can see the dialog and the snackbar both disappear. ### Expected Behavior When persistent is enabled on a dialog it should stay open when clicking other components. ### Actual Behavior Clicking on other elements outside of the fullscreen dialog closes the dialog, and the persistent prop doesn't work as expected. ### Reproduction Link <a href="https://codepen.io/username_0-gholami/pen/mdbaWgv?&editable=true&editors=101#anon-signup" target="_blank">https://codepen.io/username_0-gholami/pen/mdbaWgv?&editable=true&editors=101#anon-signup</a> Status: Issue closed Answers: username_1: Duplicate of #8697 - fullscreen implies hide-overlay You might also want to follow #7310
cloudfoundry/php-buildpack
203123720
Title: getallheaders Question: username_0: What version of Cloud Foundry and CF CLI are you using? (i.e. What is the output of running `cf curl /v2/info && cf version`? PCF 1.9 & PCF 1.7 What version of the buildpack you are using? PHP bp 4.3.18 If you were attempting to accomplish a task, what was it you were attempting to do? use of PHP native function getallheaders() What did you expect to happen? http://php.net/manual/en/function.getallheaders.php What was the actual behavior? ``` 2017-01-25T14:48:37.000+00:00 [APP] OUT 14:48:37 httpd | [Wed Jan 25 14:48:37.878268 2017] [proxy_fcgi:error] [pid 49:tid 140605208389376] [client 172.16.1.1:35546] AH01071: Got error 'PHP message: PHP Fatal error: Call to undefined function getallheaders() in /home/vcap/app/htdocs/index.php on line 3\n', referer: https://XXX /organizations/d4dbf194-09dd-4980-b9cb-1809eca4ec6c/spaces/92860b17-4bc6-4f3c-9005-f887693d3f6f/applications/623bf335-12bd-48a6-a27d-ca46f6845a4e ``` code is ``` <?php echo getallheaders(); phpinfo(); echo "<hr/>"; echo getallheaders(); echo "<hr/>"; ?> ``` Please confirm where necessary: * [ ] I have included a log output * [ ] My log includes an error message * [ ] I have included steps for reproduction Answers: username_1: The `apache_*` methods are only available when you are running using mod_php. The PHP build pack does not install or use mod_php. It uses nginx or HTTPD talking to PHP via fastcgi. You'll need to use some other method to get access to the headers. username_0: That was unclear if the buildpack could be configured with mod_php - as I see there are different things (cfi, fm, pear, etc..). Could you confirm the only way is to manually impl the getheaders (which is easy) - and that the buildpack cannot be configured with mod_php. username_1: It would be challenging to use mod_php. The binaries that we produce don't build mod_php, so that would be the first step to making this work. From there, you need to reconfigure HTTPD to use the module and not php-fpm, and you'd need to change the start command which will start the php-fpm processes. All in all, it would require a lot of changes to the build pack, possibly even a fork. Not something I would recommend unless it's absolutely necessary to get your app running. Status: Issue closed username_1: `$_SERVER` or `$_ENV` would be an option. Headers are included there. They start with `HTTP_`. Ex: `$_SERVER['HTTP_X_FORWARDED_PROTO']` == `https` username_2: @username_1 i need to get **Authorization** from the response . in nodejs by doing this `req.get('Authorization')` i am getting **Authorization** code. how i can do the same in PHP with same setup on SIEMENS Mindsphere paltform. username_1: You should be able to get any header via `$_SERVER['HTTP_<header>']`. 
https://stackoverflow.com/questions/541430/how-do-i-read-any-request-header-in-php#541450 username_2: i tried but not getting the token nodejs response if you can guide by seeing this `"host": "tokenkeyapp-ipmindev.apps.eu1.mindsphere.io", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36", "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8", "accept-encoding": "gzip", "accept-language": "en-US,en;q=0.9", "authorization": "Bearer <KEY>` username_1: Sorry, I knew this sounded familiar but had to refresh my memory a bit. Most headers you can get using the method above. `Authorization` is special because it can contain user id & password. By default, it's not passed along to scripts. You should be able to allow it by adding this setting: https://httpd.apache.org/docs/2.4/en/mod/core.html#cgipassauth If you add a `.htaccess` file to your app & put `CGIPassAuth On` in that file, I think that should make the `Authorization` header pass through. Alternatively, you can configure this way -> https://docs.cloudfoundry.org/buildpacks/php/gsg-php-config.html#engine-configurations Other options are to have HTTPD handle this for you. Currently, Basic & digest auth can be done by dropping settings into a `.htaccess` file. When this PR is merged, you'll be able to make HTTPD perform Oauth2/OpenID authentication too. https://github.com/cloudfoundry/binary-builder/pull/41 username_1: I believe that we provide this module as well. In theory, it would allow you to create a FastCGI authorizer which can make authorization decisions. I haven't done this though, so it might take more work and customizations to make it actually work. If you go this route and do get it working, feel free to share what you find. Perhaps we could do something to make support of this scenario easier. https://httpd.apache.org/docs/2.4/mod/mod_authnz_fcgi.html username_1: @sclevine - We might want to consider doing this out-of-the-box. https://github.com/cloudfoundry/php-buildpack/issues/190#issuecomment-433998851 The main reason I've heard it is disabled by default is to prevent accidental disclosure of a username/password to the script that's running, but in our use case the HTTPD server is specifically set up to service the scripts, thus it's a fair assumption that the scripts are trusted. I don't think it would have any other implications. username_2: @username_1 You are the Man. Thanks you so much. by using your first suggestion related to **CGIPassAuth ** now i am able to get the token. now i am facing one more issue is that when i am trying to use **CF Sync Plugin** i am getting the **Application "cfphpapp" is not running on Diego** how i can run my application on Diego? i raise the issue here but no solution yet. https://github.com/cloudfoundry-attic/Diego-Enabler/issues/12 any help is appreciated username_1: All apps deployed to Cloud Foundry for probably the last three years have been on Diego. You'd be on a dangerously old version of CF if you're still using DEAs. I'm not sure where you are pushing your app, but if it's a public provider you're definitely on Diego. If your target is an on-premise CF deployment, you might want to check with your operator to see if they're still using DEAs. Aside from that, check with the cf sync plugin author cause it's possible there is a bug in that plugin.
JiwoonKim/The-Batman-Archive
465067798
Title: The Beginner Scarecrow Question: username_0: collection of fears as a beginner in programming, developing, engineering
------------------------------------
**FEARS**
1. The compulsion that I have to write code purely from memory
2. The impatience of feeling I have to know every language and framework
3. The fear that a problem will never get solved
-----------------------------------
**How to PECK that Scarecrow**
1. [생활코딩 - 'Do I really have to memorize all of this?'](https://opentutorials.org/course/1189/6341)
2.
3.
quasiben/dask-scheduler-performance
838727530
Title: DGX Nightly Benchmark run 20210323 Question: username_0: ## Benchmark history <img width="641" alt="Benchmark Image" src="https://raw.githubusercontent.com/username_0/dask-scheduler-performance/benchmark-images/assets/dgx-20210323-benchmark-history.png"> ## Raw Data <Client: 'tcp://127.0.0.1:34667' processes=10 threads=10, memory=540.94 GB> Distributed Version: 2021.03.0+22.g2ca82c01 simple 5.653e-01 +/- 2.819e-02 shuffle 2.775e+01 +/- 2.088e+00 rand_access 1.459e-02 +/- 1.259e-02 anom_mean 1.082e+02 +/- 1.462e+00 ## Raw Values simple [0.56979805 0.54139622 0.5177581 0.52951218 0.60028652 0.58074642 0.56507068 0.5647274 0.56969071 0.61360503] shuffle [27.95517103 26.31068145 26.56002967 32.09061405 25.02927266 25.80638943 30.55723476 26.84784676 28.8809704 27.48835342] rand_access [0.00948068 0.00940035 0.01050852 0.01083578 0.01054282 0.01063542 0.00997696 0.0120705 0.01017979 0.05228633] anom_mean [107.76325505 108.24565658 108.10769936 106.42280182 107.99309418 108.33229485 106.84017018 109.34635637 111.78673699 106.80892501] ## Dask Profiles - [Shuffle Profile](https://raw.githack.com/username_0/dask-scheduler-performance/benchmark-images/assets/20210323-shuffle-scheduler.html) - [Random Access Profile](https://raw.githack.com/username_0/dask-scheduler-performance/benchmark-images/assets/20210323-rand-access-scheduler.html) - [Simple Profile](https://raw.githack.com/username_0/dask-scheduler-performance/benchmark-images/assets/20210323-simple-scheduler.html) - [Anom Mean](https://raw.githack.com/username_0/dask-scheduler-performance/benchmark-images/assets/20210323-anom-mean-scheduler.html) ## Scheduler Execution Graph <img width="641" alt="Sched Graph Image" src="https://raw.githubusercontent.com/username_0/dask-scheduler-performance/benchmark-images/assets/20210323-sched-graph.png">
mattermost/mattermost-plugin-jira
716727682
Title: Jira v3 Post menu options do not update until refresh on Community Question: username_0: There is an issue occurring on Community that does not seem to repo on other servers. When a users connected / disconnected status changes, the post menu options for connecting or creating and attaching do not change until a refresh is done. I have tested this on my local and other servers and I do not see this issue. Post menu options update without a refresh based on my status. This is occurring on community with the 3.0.0 release applied. Seps: - with a connected user note that create and attach options are available - type /jira disconnect - Feedback post shows user is disconnected - Post menu now shows Connect to Jira option - type /jira connect - Authenticate user in the pop-up window - Feedback post show user has connected **Observed**: - Post menu still shows connect option until next refresh - No JS error seen in the console Answers: username_1: @username_2 to add thread. username_2: Here's some discussion https://community-daily.mattermost.com/core/pl/g9qdgr84qtr9brfm1pfw8gkbic The conclusion is to have `GetUserInfo` accept a second argument of `*User`, and avoid a trip to `MigrateV2User` if it is passed in. So far, this would only be for the one call from `disconnectUser`. Other places may need to be updated. Status: Issue closed
bastibe/annotate.el
869329551
Title: Accessing both annotated file and annotations database via TRAMP? Question: username_0: Hello! I just took a crack at working on an annotated file using TRAMP. I configured annotate-file with `(setq annotate-file "/ssh:username@host:/path/to/annotations/database")` and then opened the annotated file via command line: `emacs /ssh:username@host:/path/to/file.org` The annotations were nowhere to be found. Upon further investigation, annotate-show-annotation-summary revealed that the database was looking for the annotated file relative to the local home path. So, working from my Mac on a file stored on a Linux machine, it had a bunch of annotations listed for `/Users/mac-username/path/to/file.org` when the file it should have been looking for was in `/ssh:linux-username@host:/home/linux-username/path/to/file.org`. I feel like I'm missing something very obvious or maybe just not using TRAMP as intended. Can anyone point me in the right direction? Thank you for all your work on this excellent package! Answers: username_1: Hi! The problem you issued seems interesting, i just need some time to address it because we are working on a new release and there is another problem for this package waiting. Please allow some delay before i can check this report. Thank you. C. username_1: Hi @username_0 ! Unfortunately i can not reproduce the issue, these are the step i made to investigate the problem: 1. set the annotation file as remote: ```lisp (setq annotate-file "/ssh:user@host:/home/user/annotation-remote") ``` 2. annotate a remote file (using TRAMP of course :)) 3. save the annotated file and the annotations 4. the remote annotated file can be found in the database file and annotation are restored when the file is visited: contents of the database: ``` lisp (("/ssh:user@host:/home/user/test-ann.lisp" ((1 5 "annotation text" "annotated text")) "hashdigits")) ``` Can you help me how to reproduce this issue? Thanks! C. username_0: Hi! Thanks for your reply. To reproduce, you might try annotating a file using the remote machine first so that there is already an existing annotations database that you then try to access and edit to via TRAMP. My guess as to what is happening is that the database is recording file locations using a tilde to represent the home folder, but then when the database is accessed remotely, that tilde means something completely different. I hope this makes sense. If you still can't reproduce, let me know and I'll keep trying to figure it out. Thank you for your help! username_1: Your guessing is good, the database use tilde to abbreviate the patch so that "/home/user/foo" become "~/foo" the motivation for this kind of behaviour can be found here: https://github.com/bastibe/annotate.el/issues/89 Unfortunately when the database is updated the files are saved with paths abbreviated (with tilde) to the computer where Emacs is running. And i believe (correct me if i am wrong) that even if i got rid of the tilde using absolute path changing computer likely does not change the problem you are getting here, if a path is local is local respect of the host where the Emacs instance that loaded the annotation database is running. The best thing i can figure out is to always edit your remote file using TRAMP, but i understand this is not a proper solution so i am very open to any suggestion here. :) Bye! C. Status: Issue closed username_0: This solution works for me. Thanks for your help! username_1: Hi @username_0 ! 
I am happy that this solution was acceptable, I wish you happy hacking with annotate-mode! :) Bye! C.
quochoangvp/mvc_cms
71089344
Title: AutoLoad.php Question: username_0: It does not work.
Warning: This feature has been DEPRECATED as of PHP 5.3.0 and REMOVED as of PHP 5.4.0.
How to fix?
Answers: username_1: Can you explain more clearly?
username_2: Deprecated just advises you not to use this. Some additional information here: http://php.net/manual/en/function.spl-autoload-register.php
username_1: @username_2 I used it in the /index.php file
/**
 * Automatically load the file whose name matches the class being called
 * @param string $className The class name
 */
function __autoload($className) {
    $path = LIBSPATH . $className . '.php';
    if (file_exists($path) && is_file($path)) {
        require_once $path;
    }
}

/**
 * Call the Autoload library to run the application
 * @var Autoload
 */
$app = new Autoload();
You can see!
username_2: ```
function __autoload( $class )
{
    require_once( strtolower( $class ) . '.php' ) ;
}
```
is the only code you should need to use the deprecated autoload
obviously the classes that you're creating should match to the strtolower result, otherwise you can modify the code accordingly
username_1: @username_2 Oh thanks! I see :+1: I'm going to edit now!
username_2: make sure to comment out line 123 as it's still going to try and load your autoload class
```
$app = new Autoload();
```
jlippold/tweakCompatible
509563252
Title: `EmojiPort (iOS 12)` working on iOS 12.4 Question: username_0: ``` { "packageId": "com.ps.emojiportpe", "action": "working", "userInfo": { "arch32": false, "packageId": "com.ps.emojiportpe", "deviceId": "iPhone10,6", "url": "http://cydia.saurik.com/package/com.ps.emojiportpe/", "iOSVersion": "12.4", "packageVersionIndexed": true, "packageName": "EmojiPort (iOS 12)", "category": "Tweaks", "repository": "PoomSmart's Repo", "name": "EmojiPort (iOS 12)", "installed": "1.0.3~b9", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.", "id": "com.ps.emojiportpe", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Latest emojis for iOS 12", "latest": "1.0.3~b9", "author": "PoomSmart", "packageStatus": "Working" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
ConnectBox/connectbox-pi
200833154
Title: Improve structure of client html Question: username_0: Create build to combine and minify JS, CSS, etc Have build pull in current version of font awesome css assets automatically (or a tag) Answers: username_1: @username_0 , do you still want to do this? Perhaps it's already done in your new client interface? username_0: The react implementation does everything except automatically pull in the font awesome assets. If you think that's valuable, we can change this issue to just that. username_1: What we now have is sufficient. Thanks. Status: Issue closed
moondust46/sparta_myproject
597277081
Title: ๊ฐœ๋ฐœ์ผ์ง€_20200409~10 Question: username_0: 1. ํ•œ ์ฃผ ๋™์•ˆ์˜ ํšŒ๊ณ  2. ํ•œ ์ฃผ ๋™์•ˆ์˜ ๋ฐฐ์šด ๊ฒƒ๋“ค - ๋ชฝ๊ณ DB์— ๊ณต์ค‘ํ™”์žฅ์‹ค ๋ฐ์ดํ„ฐ ์ž…๋ ฅํ•˜๋Š” ๋ฒ•. ๋‹ค์šด๋กœ๋“œ ๋ฐ›์€ ์ž๋ฃŒ๋ฅผ ๋””๋ ‰ํ† ๋ฆฌ์— ๋„ฃ๊ณ  ํŒŒ์ด์ฌ์œผ๋กœ ํ˜ธ์ถœํ•ด์ฃผ๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋„ ์‰ฝ๊ฒŒ ๋ฐ์ดํ„ฐ๊ฐ€ ์ €์žฅ๋˜์—ˆ๋‹ค. - ํŒŒ์ด์ฌ find ๋ฒ”์œ„ ์ง€์ •ํ•ด์„œ ์ถœ๋ ฅํ•˜๊ธฐ ํ…Œ์ŠคํŠธ. ์ง€๋„๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์„œ๋น„์Šค๋ผ์„œ ๋‚ด ์œ„์น˜๊ธฐ์ค€ x,y์ขŒํ‘œ๋ฅผ ๋ฝ‘์•„์„œ ๋ณด์—ฌ์ค„ ์ˆ˜ ์žˆ์–ด์•ผ ํ–ˆ๋‹ค. Robo3T์—์„œ ํ…Œ์ŠคํŠธํ•ด๋ณธ ๊ฒฐ๊ณผ ์ž˜ ์ฐพ์•„์ฃผ์—ˆ๋‹ค. ` db.getCollection('toilet').find({x:{ $gt:'208770', $lt:'208771'}}) ` ์ด ์ฝ”๋“œ์—์„œ ์ค‘์š”ํ•œ ๊ฒƒ์€ ์ˆซ์ž์ž„์—๋„ ๋ฌธ์ž์—ด ํ‘œ์‹œ๋ฅผ ๊ผญ ํ•ด์ฃผ์–ด์•ผ ์ฐพ์•„์ค€๋‹ค๋Š” ๊ฒƒ. 3. ์ด๋ฒˆ์ฃผ์˜ ๋ชฉํ‘œ
rust-lang/rust
462305790
Title: Specify: int->float and f32->f64 round to nearest, overflow to infinity Question: username_0: As suggested by @username_4, this issue is for getting @rust-lang/lang approval for https://github.com/rust-lang-nursery/reference/pull/607 (Note: will update that patch to explicitly talk about overflow behavior, once approved here) cc #62175 ## Detailed proposal The following casts: - `(expr: iM) as fN` - `(expr: uM) as fN` - `(expr: f64) as f32` all can see values that cannot be represented losslessly in the destination type, e.g.: - `i32::MAX` is not exactly representable as f32 (though it is less than `f32::MAX`, so it can be rounded to a finite float) - `u128::MAX` is larger than `f32::MAX`, so it cannot even be rounded to a nearby finite float (it "overflows") - `f64` obviously has more precision than `f32`, so this cast has to round (infinities and NaNs can be carried over to f32, so this is only about finite numbers) These cases are not unsound any more (see #15536), but currently the rounding mode is explicitly "unspecified" in the reference. What happens on overflow (e.g. `u128::MAX as f32`) is also not specified explicitly anywhere AFAICT. I proposed to define these casts as follows: - rounding mode: to nearest (as usual, in the sense of IEEE 754-2008 ยง4.1, i.e., breaking ties by picking the result with the LSB of the significand equal to zero) - on overflow: return infinity (with the sign of the source) ## Rationale Leaving the behavior of this core part of the language unspecified only hinders Rust programmers in using them and causes uncertainty and (de jure) non-portability. They should be defined as *something*, and I argue (round to nearest, overflow to infinity) is the most obvious, most consistent, and most useful choice: - IEEE 754-2008 prescribes round-to-nearest as the default rounding mode and returning (+/-) infinity as the default behavior on overflow (see ยง4.1, ยง7.3). - This rounding mode and overflow behavior gives the closest possible result (which is of course why IEEE 754 makes it the default). - Other operations in Rust (e.g., string parsing) and conversions in other languages (e.g., [Java](https://docs.oracle.com/javase/specs/jls/se8/html/jls-5.html#jls-5.1.3)) also default to these behaviors. - LLVM and hardware support this rounding mode and overflow handling natively, so it is also efficient. Answers: username_1: So I wonder whether "panicking on overflow" (e.g. when debug-assertions=on) was considered as a potential alternative, and if so, what were the tradeoffs. username_0: I can't find the second part of your quote anywhere in the standard (and certainly not in ยง7.**3**), but here's some reasons for not panicking: - Panics are entirely unprecedented for `as` casts (see also discussion in #10184). - We'd need to add a new API for the conversion that allows people to choose infinity-on-overflow when they want it. - Arguably, if this conversion errors on overflow, it should do so by returning `Result`, not panicking. Of course, that's not possible for `as`. - Generally `as` casts are for unchecked, potentially quite lossy casts (consider casting to from larger to smaller integer types, which doesn't ever check for overflow), so it would be weird to start checking in this one particular case. - Panicking is arguably incompatible with the current wording in the reference, which implies that rounding happens, and overflow -> inf is arguably rounding. - It would slow down an operation that is probably common in some performance sensitive code paths. 
I should also note that IEEE 754-2008 cannot possibly serve as justification for such behavior, despite the suggestive quote. <details> In the context of that standard, "exception" and "signaling an exception" does not imply trapping, aborting, or otherwise diverting execution flow, but simply refers to an exceptional circumstance that by default is handled by supplying a sensible result (rounded result in the case of inexact, returning an infinity in the case of overflow). Other handling of these exceptions is possible, including diverting execution flow, but as far as IEEE 754-2008 is concerned, there is no distinction between exceptions raised from e.g. casts versus arithmetic. So in this framework, if you want to panic on overflow, it should happen on every result that overflows, including e.g. `exp(HUGE_VAL)`, `f64::MAX * f64::MAX`, and `"1e1000".parse::<f64>().unwrap()`. Needless to say, such a sweeping change to the default behavior is neither desirable as nor backwards compatible. The overflow *flag* that is to be raised refers to floating point exception flags, which we do not currently expose in any way in Rust (and this too would be a much broader discussion than about just casts). </details> username_1: That clarifies some of the doubts I had. Section 7.2 states that invalid operations are signaled using a NaN, so I supposed that when Section 7.4 said "signaling" a NaN would be returned, but then that same section states a default result is returned (+-INF), so I supposed overflow would be signaled in some other way, e.g. by setting an overflow flag in some register that one can check. Since otherwise one can't know whether overflow happen (e.g. if f64 to f32 returns +INF, did overflow happened because, e.g., f64 == f64::MAX, or was f64 == +INF?). username_0: The overflow flag is raised (set) as part of the default exception handling. But as I mentioned (and, I think, discussed with you in the past), Rust does not currently support users reading the flags, so they effectively don't exist in Rust. I'd like to change this eventually but again, far out of scope (and currently infeasible to implement due to LLVM limitations). username_2: Staring at the proposal for a while, and thinking about it further, I think if we document the assumption we're making (as well as noting all the cases we're covering like saturating to infinity) then I don't have a problem signing off on this. In addition to the documentation changes already requested in the other issues, this needs to explicitly document the assumption that it'll be possible to set hardware floating-point modes to match Rust's expectations on rounding, and that we understand this could potentially limit our ability to use hardware floating point on some future platform that can't support that assumption. username_3: Where would you document the assumption that it's possible to set the hardware FP modes? That seems to be more of a rule about developing Rust compilers than it does about the language itself. Are *other* similar restrictions documented anywhere? username_0: When I was discussing with @username_2 on Discord they said to put these notes in the reference. I don't have strong opinions on where it goes, I just want to know what I have to add where to get this issue settled. Can you two (& anyone else who has strong opinions on the matter) please find some consensus about that so I know what patch(es) to write? 
username_4: It's a bit odd to note in the reference what expectations on hardware that exist for Rust since that isn't necessary for the purposes of a definitional interpreter / abstract machine. That said, I suppose we can add a lightweight note about what it means for rustc? perhaps the rustc guide is a better place for this note? username_3: Wherever we note this assumption by Rust, we should also make note that we assume that pointer width is at least 16 bits (via `impl Into<isize> for i16`) as well. username_3: Actually, can we open a new issue for where these assumptions should be documented; it feels unrelated to actually making this choice and there are others we should do it for even if we choose not to specify the behavior in this issue. username_2: @username_3 There are many assumptions we might wish to document, but I don't think we should block documenting *one* thing that we know about on trying to document others. I'm just proposing that we have a quick footnote somewhere saying that by specifying this behavior we assume that we can efficiently implement such behavior on all hardware we might want to run on. username_0: When you put it like that it seems kind of tautological. I stand by not really caring one way or another but may I suggest reconsidering what the note is intended to achieve specifically? username_1: If documenting this is strictly necessary to make progress, @username_0 I personally would think that a tautological note: "Note: if your hardware doesn't support these rounding semantics floats will be _slow_." I'm not sure of the value of this note, but not documenting the semantics of `x_f32 as i32` has a cost. A user just asked precisely that on Discord, and I pointed them to the reference, and had then to point them to the PR to the reference, and then here. So I'll be fine with just merging @username_0 PR to the reference, and opening an issue on the reference about the Note. We could add them right in the middle of the `as`-expression section, but there are so many things like this in which if your hardware doesn't support something your programs might run much slower (e.g. using `i64` on hardware that doesn't support 64-bit integers), that if someone cares enough about documenting all of these it might be better to add an Appendix to the reference whose job is to document that kind of thing, and that we can hyperlink. username_2: Supporting 64-bit integers via 32-bit math isn't especially slow; software floating point can be *incredibly* slow, compared to people's expectations of floating-point performance. username_1: software floating point can be *incredibly* slow, compared to people's expectations of floating-point performance. My point is that there are many different levels of slowness for many of the features of the language depending on the target and that we don't document these anywhere - we don't even document that you should have an FPU to use f32 and f64 appropriately in the first place. I really don't see what value does adding this note here add. It requires people looking for information about whether their target can run Rust programs efficiently to go through the whole Rust reference skimming for performance-related footnotes. They won't even find this in the Numeric types section introducing `f32` or `f64`, and have to go to the section about `as` expressions instead. 
If you think these issues are important and should be documented, I think it would be better to open an issue in the reference, where these issues are collected, and an appropriate approach to document them can be discussed. username_0: I pushed an updated patch including performance note in https://github.com/rust-lang-nursery/reference/pull/607, please check it out. username_2: The updated version looks great! @rfcbot fcp merge username_0: ping @nikomatsakis @pnkfelix @withoutboats for checkboxes Status: Issue closed username_4: The reference PR https://github.com/rust-lang-nursery/reference/pull/607 has been merged. Closing this issue as "merged".
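For readers skimming this thread, here is a minimal Rust sketch of the semantics that were settled on (round to nearest for int-to-float and f64-to-f32, overflow to infinity). The concrete constants below are derived from IEEE 754 binary32 spacing rather than quoted from the thread, so treat them as an illustration, not as normative test vectors:

```rust
fn main() {
    // i32::MAX (2_147_483_647) is not exactly representable in f32;
    // the nearest representable value is 2_147_483_648.0, so
    // round-to-nearest picks that.
    assert_eq!(i32::MAX as f32, 2_147_483_648.0_f32);

    // u128::MAX rounds past f32::MAX, so the cast overflows to +inf
    // instead of saturating to f32::MAX or producing a NaN.
    assert_eq!(u128::MAX as f32, f32::INFINITY);

    // f64 -> f32 also rounds to nearest: this f64 (1 + 2^-52) is far
    // closer to 1.0 than to the next f32 above 1.0.
    assert_eq!(1.000_000_000_000_000_2_f64 as f32, 1.0_f32);

    // A finite f64 beyond the f32 range likewise becomes infinity.
    assert_eq!(f64::MAX as f32, f32::INFINITY);
}
```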
rust-lang/rust
770479940
Title: Better suggestion for closure that needs to capture bindings, but not all Question: username_0: [Given](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=887faf2cf5ecc01cfbc3cbb1b1a8078a) ```rust let numbers: Vec<i32> = (1..100).collect(); let len = numbers.len(); let _sums_of_pairs: Vec<_> = (0..len) .map(|j| ((j + 1)..len).map(|k| numbers[j] + numbers[k])) .flatten() .collect(); ``` we currently emit ``` error[E0373]: closure may outlive the current function, but it borrows `j`, which is owned by the current function --> src/main.rs:6:37 | 6 | .map(|j| ((j + 1)..len).map(|k| numbers[j] + numbers[k])) | ^^^ - `j` is borrowed here | | | may outlive borrowed value `j` | note: closure is returned here --> src/main.rs:6:18 | 6 | .map(|j| ((j + 1)..len).map(|k| numbers[j] + numbers[k])) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: to force the closure to take ownership of `j` (and any other referenced variables), use the `move` keyword | 6 | .map(|j| ((j + 1)..len).map(move |k| numbers[j] + numbers[k])) | ^^^^^^^^ ``` if we [apply the suggestion](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=9308ebec747edfad090b28a5e1aa5b9e) we get the following ``` error[E0507]: cannot move out of `numbers`, a captured variable in an `FnMut` closure --> src/main.rs:6:37 | 2 | let numbers: Vec<i32> = (1..100).collect(); | ------- captured outer variable ... 6 | .map(|j| ((j + 1)..len).map(move |k| numbers[j] + numbers[k])) | ^^^^^^^^ ------- | | | | | move occurs because `numbers` has type `Vec<i32>`, which does not implement the `Copy` trait | | move occurs due to use in closure | move out of `numbers` occurs here ``` because we're moving `numbers` into the closure and consuming it. Ideally, we would suggest to [introduce a new binding that borrows `numbers` to avoid moving it](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=fd2e909dfa52c5af9e70e13106108d82). _Example taken from https://stackoverflow.com/questions/65258521/how-do-i-write-a-lazily-evaluated-double-for-loop-in-a-functional-style_
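A sketch of the kind of fix the last paragraph asks the compiler to suggest; this is one reading of the linked playground rather than a copy of it. Shadowing `numbers` with a shared reference lets the inner closure be `move` (so it copies `j` and the reference) without consuming the `Vec`:

```rust
fn main() {
    let numbers: Vec<i32> = (1..100).collect();
    let len = numbers.len();
    // New binding that borrows `numbers`; `&Vec<i32>` is `Copy`, so the
    // `move` closure below copies the reference instead of moving the Vec.
    let numbers = &numbers;
    let _sums_of_pairs: Vec<_> = (0..len)
        .map(|j| ((j + 1)..len).map(move |k| numbers[j] + numbers[k]))
        .flatten()
        .collect();
}
```

With this change the inner closure captures only `Copy` data, so both the E0373 error quoted above and the E0507 error from the plain `move` suggestion go away.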
artbear/1commands
405549839
Title: ะ”ะพะฑะฐะฒะธั‚ัŒ ะฒะพะทะผะพะถะฝะพั‚ัŒ ะฝะต ะฟะตั€ะตั…ะฒะฐั‚ั‹ะฒะฐั‚ัŒ ะฒะฒะพะด ะธ ะฒั‹ะฒะพะด Question: username_0: ะกะตะนั‡ะฐั ะฟั€ะธะฝัƒะดะธั‚ะตะปัŒะฝะพ ะฟะตั€ะตั…ะฒะฐั‚ั‹ะฒะฐะตั‚ัั ะฒะฒะพะด ะธ ะฒั‹ะฒะพะด ะฟั€ะพั†ะตััะฐ ะŸั€ะตะดะปะฐะณะฐัŽ ัะดะตะปะฐั‚ัŒ ะฒะพะทะผะพะถะฝะพัั‚ัŒ ัั‚ะพ ะฟะตั€ะตะพะฟั€ะตะดะตะปัั‚ัŒ, ะดะปั ั‚ะพะณะพ ั‡ั‚ะพะฑั‹ ะทะฐะฟัƒั‰ะตะฝะฝั‹ะต ะฟั€ะพั†ะตััั‹ ะฟะธัะฐะปะธ ะฒ ั‚ะตะบัƒั‰ัƒัŽ ะบะพะฝัะพะปัŒ Answers: username_0: ะŸะปะพั… ั‚ะตะผ ั‡ั‚ะพ - ัƒะฑะธั€ะฐะตั‚ ะฒะพะทะผะพะถะฝะพัั‚ัŒ ะฒั‹ะฒะพะดะฐ ะฒ ั‚ะตะบัƒั‰ัƒัŽ ะบะพะฝัะพะปัŒ ะบะฐะบ ะตัั‚ัŒ! ะงั‚ะพ ัะพะฑัั‚ะฒะตะฝะฝะพ ะธ ะฝะฐะดะพ. ะ“ะพั‚ะพะฒ ัะดะตะปะฐั‚ัŒ -) ะะต ั…ะพั‡ัƒ ะดัƒะฑะปะธั€ะพะฒะฐั‚ัŒ ะบะพะด 1commands ะธะปะธ ะฟะธัะฐั‚ัŒ ะฐะฝะฐะปะพะณะธั‡ะฝัƒัŽ ะปะธะฑัƒ ) username_1: @username_0 ะธัˆัƒะท ั€ะตัˆะตะฝ? ะตัะปะธ ะดะฐ, ะทะฐะบั€ะพะน ะตะณะพ. ะ“ะพั‚ะพะฒะปัŽััŒ ะบ ะฒั‹ะฟัƒัะบัƒ ั€ะตะปะธะทะฐ Status: Issue closed
r0x0r/pywebview
724044916
Title: Proxy support for a window Question: username_0: ### Specification - pywebview version: 3.2 - platform / version: Any ### Description I think it would be greatly useful to be able to add http & https proxy support to each window instance. Perhaps it could be included as an argument like: window = webview.create_window("Example Window", "https://examplewebsite.com", proxy=(172.16.31.10, 1010)) Or something of that sort! ### Practicalities - NO I am not willing to work on this issue myself. - NO I am not prepared to support this issue financially. Answers: username_1: Just a head up on the realities of this project. I am the sole developer and I don't have much time due other work. While I have nothing against this, it is unlikely that I will find time to work on this in foreseeable future. As always you are welcomed to submit a pull request or sponsor this issue. username_2: @username_0 can you tell more info? username_2: GTK - https://stackoverflow.com/questions/6915840/python-webkit-with-proxy-support/19068804 username_2: Qt -https://github.com/jsoffer/eilat/
dotsam/homebridge-milight
282649559
Title: Unhandled rejection Error: no response timeout Question: username_0: I found this code after I done every thing and the homebridge isnt appear in my home app any idea about this issue Thanks, `Unhandled rejection Error: no response timeout at Timeout._onTimeout (/usr/local/lib/node_modules/homebridge-milight/node_modules/node-milight-promise/src/milight-v6-mixin.js:125:26) at ontimeout (timers.js:365:14) at tryOnTimeout (timers.js:237:5) at Timer.listOnTimeout (timers.js:207:5)` Answers: username_1: Make sure you have the right IP address for your bridge set in your config. Also try installing the latest version as I've updated the node-milight-promise dependancy and I believe it has slightly tweaked error handling for v6 communication timeouts now. username_2: I have the same issue: Unhandled rejection Error: no response timeout at Timeout._onTimeout (/usr/lib/node_modules/homebridge-milight/node_modules/node-milight-promise/src/milight-v6-mixin.js:128:26) at listOnTimeout (internal/timers.js:549:17) at processTimers (internal/timers.js:492:7) Is there any solution for that?
opensourcegamedev/SpaceChaos-Multiplayer
305887999
Title: Create an issue template Question: username_0: Create an issue template for GitHub.
Answers: username_0: Especially for bugs, though, we should flesh out the template in considerably more detail. OS, graphics card, etc.
username_2: # _[meaningful title that already gives a first hint of what the error is]_
[short description: things that do not fit well into the other sections but are important for understanding the bug; possibly supported with images/videos]

## **Steps to reproduce**
1. [First step]
2. [Second step]
3. [etc.]

## **System information**
**Software version:** [version of our software]

**Operating system:** [operating system and version]

**Hardware:**
[if relevant, e.g.:
-Graphics card:
-Processor:
-Free disk space:
-RAM: ]

**Error log**: [If the game/system produced an error log or printed an error message, attach it here]

## **Error description**
[What happened? Why should this be considered an error?]

### What changed?
[if applicable: What has always happened at this point so far? What has changed?]
username_2: not important for now -> so for the time being we will fall back on these templates
Status: Issue closed
wallabyjs/quokka
727031724
Title: Extension causes high cpu load Question: username_0: - Issue Type: `Performance`
- Extension Name: `quokka-vscode`
- Extension Version: `1.0.321`
- OS Version: `Windows_NT x64 10.0.18363`
- VSCode version: `1.49.3`

:warning: Make sure to **attach** this file from your *home*-directory: :warning:`C:\Users\kaavy\WallabyJs.quokka-vscode-unresponsive.cpuprofile.txt`

Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load
Answers: username_1: This is a one time operation so it is unlikely you'll see this warning again.

If we see that the high CPU is caused by your operating system's file operations then you will need to update your virus scanner to exclude Quokka application folders from being scanned, or else ignore this problem.
Status: Issue closed
decalage2/oletools
189385239
Title: olevba UnboundLocalError Question: username_0: I get the following error when attempting to extract macros from a file: ```python /usr/local/lib/python2.7/dist-packages/oletools/olevba.py in extract_macros(self) 2626 for stream_path, vba_filename, vba_code in \ 2627 _extract_vba(self.ole_file, vba_root, project_path, -> 2628 dir_path, self.relaxed): 2629 # store direntry ids in a set: 2630 vba_stream_ids.add(self.ole_file._find(stream_path)) /usr/local/lib/python2.7/dist-packages/oletools/olevba.py in _extract_vba(ole, vba_root, project_path, dir_path, relaxed) 1555 vba_codec = 'cp%d' % projectcodepage_codepage 1556 log.debug("ModuleName = {0}".format(modulename_modulename)) -> 1557 log.debug("ModuleNameUnicode = {0}".format(uni_out(modulename_unicode_modulename_unicode))) 1558 log.debug("StreamName = {0}".format(modulestreamname_streamname)) 1559 streamname_unicode = modulestreamname_streamname.decode(vba_codec) UnboundLocalError: local variable 'modulename_unicode_modulename_unicode' referenced before assignment ``` I noticed, when I looked at the code, that the variable `modulename_unicode_modulename_unicode` is only set if a certain condition is triggered, so this file must have not triggered that condition. Status: Issue closed Answers: username_0: You fixed it in a newer version than what I had installed. My mistake!
riot/riot
184048314
Title: Speed up the rendering process Question: username_0: According to my tests riot is too slow during the rendering and mounting process - [riot](http://plnkr.co/edit/V2mhy1CxwawCNjBA4qFS?p=preview) boot time ~1000ms - [vue](http://plnkr.co/edit/KTqvngteLKmX3ZXMDyZO?p=preview) boot time ~200ms - [react](https://plnkr.co/edit/TqrC3o5MIHQLWR0QIWaN?p=preview) boot time ~100ms Answers: username_0: Btw riot@next appears to be twice faster than the riot@2 but I am still not happy 100% username_1: Are you using the chrome devtool profiler? It could be useful to know where most time is spent https://i.imgsafe.org/8189f62f89.jpg username_2: Your test uses an `each`, which creates a new (anonymous) tag for every item in the list. Creating tags in riot is pretty slow, because it does so much: * Create new DOM using setInnerHtml (which the browser then has to parse) * Walk that DOM and parse expressions (regex against every attribute and text node) * Evaluate those expressions and update the DOM A while back I was kicking around the idea of "tag stamping" to avoid some of the redundant work here. Basically you'd do `setInnerHTML` and `parseExpressions` just once, to create a "pristine" version of the tag. Every time you wanted an instance, you'd `cloneNode` and copy the expressions. It's a pretty major change though, and I didn't really want to work on it until riot 3 was out. username_0: @username_2 I think we could already work on it for the following reasons: 1. The compiler is not ready yet and probably will require still a bit of time ( @username_4 could probably tell us more about it ) 2. I would prefer to make a release that solves all the performances issue riot had before, I was focused on the `update` method that was heavily improved, but the `mount` should be improved as well 3. This update is not a breaking change because it will change how riot handles the DOM creation internally so it has no side effects for our users username_3: Are the numbers in the OP the latest numbers? username_4: I think the compiler will took ~4 weeks. The tags can be instances of a real prototype. The compiler must emit (almost) pure JS-precompiled code, see (this comment)[https://github.com/riot/riot/issues/2283#issuecomment-308052204]. I'm not sure yet where to get `data` , maybe from `this.state` or from the closure. Status: Issue closed username_0: the riot loops rendering was heavily improved and the tests above show almost equivalent values across all the libraries except for polymer that has a really slow boot. I am closing this issue moving forward
forumone/generator-web-starter
167250526
Title: Generate a root composer.json Question: username_0: Drupal projects can use Composer to download PHP dependencies (including Drupal core). We should (optionally?) generate a root composer.json file for Drupal 8. We can base it off https://github.com/drupal-composer/drupal-project/blob/8.x/composer.json -- note that we'll also need https://github.com/drupal-composer/drupal-project/blob/8.x/scripts/composer/ScriptHandler.php. _Note: Drupal 7 also supports Composer, but Drupal 7 projects seem unlikely to use it. If you're reading this and would like to use Composer with Drupal 7, see <https://github.com/drupal-composer/drupal-project/tree/7.x>._ Answers: username_0: I've assigned this to myself to submit a PR. username_0: #72 might be blocking this. Without knowing the webroot, we can't tell Composer where to install Drupal. username_1: I concur, would be awesome to have the yeoman questions configure the resulting composer file you would get from `composer create-project drupal-composer/drupal-project`. IE choosing a host option (f1/pantheon/acquia) should set your docroot appropriately. That would resolve #72 as well. Status: Issue closed
CrossRef/event-data-query
273830299
Title: Allow whitelist for prefixes on ingestion Question: username_0: As we see a potentially diversifying set of RAs for Events, add an optional prefix whitelist filter. When supplied as a newline-separated Artifact ID, this will filter out Events that don't include a DOI with a whitelisted prefix in the subject or object position. Events that don't have a DOI (e.g. Wikipedia) are allowed through.<issue_closed> Status: Issue closed
nimble-dev/nimble
189860525
Title: Running the Rmcmc first causes compilation of Rmcmc to crash Question: username_0: If the Rmcmc algorithm is executed before calling compileNimble(Rmcmc), then compilation of the Rmcmc algorithm will fail. Reproducible example: ``` library(nimble) code <- nimbleCode({ a ~ dnorm(0, 1) }) Rmodel <- nimbleModel(code, inits = list(a=0)) Rmcmc <- buildMCMC(Rmodel) Rmcmc$run(10) ## IF THIS LINE IS EXECUTED Cmodel <- compileNimble(Rmodel) Cmcmc <- compileNimble(Rmcmc, project = Rmodel) ## THEN THIS LINE FAILS (see below) compiling... this may take a minute. Use nimbleOptions(showCompilerOutput = TRUE) to see C++ compiler details. Warning, mismatched dimensions in assignment: samplerTimes <<- initialize(0, 1, size(samplerFunctions)). Going to browser(). Press Q to exitFALSE Called from: sizeAssignAfterRecursing(code, symTab, typeEnv) Browse[1]> debug: if (assignmentTypeWarn(LHS$type, RHStype)) { message(paste0("Warning, RHS numeric type is losing information in assignment to LHS.", nimDeparse(code))) } Browse[2]> ``` Answers: username_1: This also occurs in 0.6-1 so not a result of recent changes... username_2: Well I see what is happening. In buildMCMC, samplerTimes is established as a vector in setup code. But in this case there is only one sampler. So if the Rmcmc is run, then in the run code, samplerTimes ends up with length of 1. And if the compilation happens after that, inspection of samplerTimes leads to the conclusion it should be a scalar. And that causes compilation of samplerTimes <- numeric(length(samplerFunctions)) to fail because it thinks the assignment is "scalar <- vector" Thinking about a solution. Ideas welcome. AFAIK this is not related to recent changes. username_2: I could see workarounds for buildMCMC so users don't encounter the problem in this particular case. But this is a glitch that could happen in other situations too. username_2: Here are a couple of more general options: 1. We could arrange to record some type information (dimensionality and double/integer/logical) of any numeric/integer/logical objects included in setupOutputs immediately after setup code is run. This could be preserved even if uncompiled execution modifies the objects. Then makeTypeObject (which inspects types of setupOutputs) can refer to that saved information. 2. We could allow a more explicit setup type declaration system, so a programmer can directly state that samplerTimes (e.g.) will be a vector double. Obviously the c(0,0) trick is a bit kludgy. These are not mutually exclusive options. username_0: Understood. Yes, I can also imagine work-arounds to put into buildMCMC to prevent this, but still could happen in other situations. I didn't mean to open a whole can of worms here. But as Perry pointed out, it does get at the existing awkwardness in trying to define vector quantities in setup code. A more refined system for this might be in order, but I didn't realize this problem would suggest such a (potentially) large change. username_2: Agreed: this is a can of worms. It relates to some other future directions I've been thinking of to be more flexible with numeric types. So for now I suggest the smallest workaround as follows: In run(): samplerTimes <<- numeric(length(x) + 1) ## add the +1 so it's always a vector In getTimes(): return(samplerTimes[1:(length(samplerTimes)-1)]) Then we can discuss the more general situation instead of rushing to a quick fix for 0.6-2. username_0: Sounds good. The suggested changes were made, and pushed to devel. 
It appears to work fine, pending a larger system-wide upgrade to address this. Closing issue.
Status: Issue closed
opencv/opencv_contrib
299187054
Title: cuda.hpp Question: username_0: <!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute). This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. --> ##### System information (version) <!-- Example - OpenCV => 3.1 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 --> - OpenCV => :grey_question: - Operating System / Platform => :grey_question: - Compiler => :grey_question: ##### Detailed description <!-- your description --> ##### Steps to reproduce <!-- to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file --><issue_closed> Status: Issue closed
gradle/gradle
808071404
Title: Plugin publish plugin publishes compile-only dependencies in POM Question: username_0: Compile only dependencies for a plugin project are included in the generated and published POM file as `compile` scope dependencies unnecessarily and undesirably. ### Expected Behavior Such dependencies should be omitted from the published POM file. Observed with version 0.12. Answers: username_0: This is actually not the case. For the case in question, the dependency is declared as compileOnlyApi, so the behaviour of the plugin publish plugin is valid. Status: Issue closed
dmustanger/7dtd-ServerTools
753642497
Title: Zones issue Question: username_0: Hi guys! I have a fully dedicated 7 days to die server. I have been using the zone messages for 3 years. Only the message, no pve or anything. but after this complete wipe when I enable the zone message, then a constant error appears in the terminal window. The eroor is this: INF [SERVERTOOLS] Error in ProcessDamage.ProcessPlayerDamage: Object reference not set to an instance of an object I wouldn't even care about the error, but it takes down the server fps a lot! I would appreciate it if anyone could help! thank you Answers: username_1: I believe this is fixed in the latest version 19.3.1 Sorry for the delay. Let me know if it continues Status: Issue closed
empus/armour
659970166
Title: update help commands for v4.0 changes Question: username_0: many new commands: access, addchan, data, ipqs, modchan, newuser, note, queue, register, remchan, showlog, team, whois almost every other command needs an update for multi-chan support Status: Issue closed Answers: username_0: many new commands: access, addchan, data, ipqs, modchan, newuser, note, queue, register, remchan, showlog, team, whois almost every other command needs an update for multi-chan support
zhenglibao/FlexLib
583582868
Title: FlexCollectionCell issue in Swift Question: username_0: In a Swift project or a mixed-language project, FlexCollectionCell's resName gets prefixed with the project name, so the xml file cannot be loaded. It is also not possible to customize the name of the xml file.

+(FlexRootView*)loadWithNodeFile:(NSString*)resName Owner:(NSObject*)owner
{
    if(resName==nil){
        resName = NSStringFromClass([owner class]);
    }
    FlexRootView* root = [[FlexRootView alloc]init];
    root->_owner = owner;
    FlexNode* node = [FlexNode loadNodeFromRes:resName Owner:owner];
    if(node != nil){
        UIView* sub ;
        @try{
            sub = [node buildViewTree:owner RootView:root];
        }@catch(NSException* exception){
            NSLog(@"Flexbox: FlexRootView exception occured - %@",exception);
        }
        if(sub != nil && ![sub isKindOfClass:[FlexModalView class]])
        {
            [root addSubview:sub];
        }
    }
    return root;
}
Answers: username_1: Subclasses of FlexCollectionCell need to be declared with the @objc keyword; that way the project-name prefix is not added. See the FlexBaseTableCell example in the demo
Status: Issue closed
alafr/SVG-to-PDFKit
647241574
Title: Pdf with black border Question: username_0: PDF and svg display inconsistency. What could be the reason? SVG as browser renders it: ![image](https://user-images.githubusercontent.com/25047438/85994002-be753800-ba29-11ea-927c-28db3bfd80ee.png) SVG-to-PDFKit: ![image](https://user-images.githubusercontent.com/25047438/85994043-cfbe4480-ba29-11ea-9ee4-8909c2537b7c.png) ` <svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path fill-rule="evenodd" clip-rule="evenodd" d="M14 15.0122C14 14.8003 14.1345 14.6132 14.3298 14.5309C16.4863 13.6214 18 11.4875 18 9C18 5.68629 15.3137 3 12 3C8.68629 3 6 5.68629 6 9C6 11.4875 7.5137 13.6214 9.67021 14.5309C9.86546 14.6132 10 14.8003 10 15.0122V19C10 20.1046 10.8954 21 12 21C13.1046 21 14 20.1046 14 19V15.0122Z" fill="#fff"/><path d="M11.25 12H12.75V19C12.75 19.4142 12.4142 19.75 12 19.75C11.5858 19.75 11.25 19.4142 11.25 19V12Z" fill="url(#map_location_dot_c_x_paint0_linear)"/><path fill-rule="evenodd" clip-rule="evenodd" d="M12 13.75C14.6234 13.75 16.75 11.6234 16.75 9C16.75 6.37665 14.6234 4.25 12 4.25C9.37665 4.25 7.25 6.37665 7.25 9C7.25 11.6234 9.37665 13.75 12 13.75Z" fill="url(#map_location_dot_c_x_paint1_linear)"/><path fill-rule="evenodd" clip-rule="evenodd" d="M12 12.95C14.1815 12.95 15.95 11.1815 15.95 9C15.95 6.81848 14.1815 5.05 12 5.05C9.81848 5.05 8.05 6.81848 8.05 9C8.05 11.1815 9.81848 12.95 12 12.95ZM16.75 9C16.75 11.6234 14.6234 13.75 12 13.75C9.37665 13.75 7.25 11.6234 7.25 9C7.25 6.37665 9.37665 4.25 12 4.25C14.6234 4.25 16.75 6.37665 16.75 9Z" fill="url(#map_location_dot_c_x_paint2_linear)" style="mix-blend-mode:overlay"/><g filter="url(#map_location_dot_c_x_filter0_f)"><path fill-rule="evenodd" clip-rule="evenodd" d="M12.5 11C13.8807 11 15 9.88071 15 8.5C15 7.11929 13.8807 6 12.5 6C11.1193 6 10 7.11929 10 8.5C10 9.88071 11.1193 11 12.5 11Z" fill="#FF9957"/></g><path fill-rule="evenodd" clip-rule="evenodd" d="M13.5 9C14.0523 9 14.5 8.55228 14.5 8C14.5 7.44772 14.0523 7 13.5 7C12.9477 7 12.5 7.44772 12.5 8C12.5 8.55228 12.9477 9 13.5 9Z" fill="#fff"/><defs><linearGradient id="map_location_dot_c_x_paint0_linear" x1="11.25" y1="12" x2="11.25" y2="19.75" gradientUnits="userSpaceOnUse"><stop stop-color="#C6C6C6"/><stop offset="1" stop-color="#D8D8D8"/></linearGradient><linearGradient id="map_location_dot_c_x_paint1_linear" x1="7.358" y1="4.467" x2="7.358" y2="13.75" gradientUnits="userSpaceOnUse"><stop stop-color="#FF5B49"/><stop offset="1" stop-color="#FF2E22"/></linearGradient><linearGradient id="map_location_dot_c_x_paint2_linear" x1="7.25" y1="4.25" x2="7.25" y2="13.75" gradientUnits="userSpaceOnUse"><stop stop-color="#999"/><stop offset="1"/></linearGradient><filter id="map_location_dot_c_x_filter0_f" x="5.973" y="1.973" width="13.054" height="13.054" filterUnits="userSpaceOnUse" color-interpolation-filters="sRGB"><feFlood flood-opacity="0" result="BackgroundImageFix"/><feBlend in="SourceGraphic" in2="BackgroundImageFix" result="shape"/><feGaussianBlur stdDeviation="2.014" result="effect1_foregroundBlur"/></filter></defs></svg> ` If you are not busy looking at it, thank you
jcabi/jcabi-http
188782832
Title: Is restriction from README about Java8 up-to-date? Question: username_0: At the end of `README` file I read the following: ``` Make sure you're using Maven 3.2+ and Java7 (in Java8 you won't be able to use Qulice, because of teamed/qulice#379). ``` As I can see teamed/qulice#379 is fixed, so, most likely, this isn't an issue anymore. Answers: username_1: @username_2 dispatch this issue please, see [par.21](http://at.teamed.io/policy.html#21) username_2: @username_0 fixed in 01f6498 thanks! Status: Issue closed
rundeck/rundeck
41420684
Title: Support custom Timezone in all views Question: username_0: Hi, I'm running Rundeck 2.2.1-1-GA on Ubuntu and I configured it with timezone EST5DST (https://groups.google.com/d/msg/rundeck-discuss/_NUBNfuS9tM/qUBT6G5HBG8J) It's showing the right time for the activity at Job's level but at the project's activity page (URL/project/SOME_PROJECT/activity) its showing an hour less. Could some please advice of any custom timezone settings that need to be done? Answers: username_1: Hi @username_0 ; You can fix this issue easily by yourself. You just need to modify in "/etc/init.d/rundeckd " like the following: from ` rundeckd="${JAVA_HOME:-/usr}/bin/java ${RDECK_JVM} -cp ${BOOTSTRAP_CP} com.dtolabs.rundeck.RunServer /var/lib/rundeck ${RDECK_HTTP_PORT}" ` To ` rundeckd="${JAVA_HOME:-/usr}/bin/java ${RDECK_JVM} -cp ${BOOTSTRAP_CP} -Duser.timezone=<TimeZone you want> com.dtolabs.rundeck.RunServer /var/lib/rundeck ${RDECK_HTTP_PORT}" ` username_2: It would be really convenient if rundeck used the /etc/defaults file to configure arguments like this. Right now I have to hack into the init.d scripts to insert that line, not really nice! username_3: added trello card, please vote there https://trello.com/c/W4M03gnm Status: Issue closed
DigitalRuby/IPBan
704043868
Title: {year}, {month} and {day} placeholders Question: username_0: How to use {year}, {month} and {day} placeholders for a log file? If i provide the path like this <PathAndMask>C:\Program Files\...\security_log\VPN\sec_{year}{month}{day}.log</PathAndMask> IPBan try to read "sec_{year}{month}{day}.log" file. `Adding log file to parse: C:\Program Files\...\security_log\VPN\sec_{year}{month}{day}.log` Answers: username_1: The log sends out the original string, internally the log file scanner will replace those with the correct values. Status: Issue closed
navikt/fpinfo-historikk
546879492
Title: Deploy of 20200108150041-0d5fccd Question: username_0: Comment with <b>/promote 20200108150041-0d5fccd cluster</b>, where <b>cluster</b> is a valid cluster name
<table>
<tr><th>Cluster</th></tr>
<tr><td>dev-fss</td></tr>
<tr><td>prod-fss</td></tr>
</table>
Answers: username_0: /promote 20200108150041-0d5fccd dev-fss
ivanjonas/login-manager
274592030
Title: Shortcut key to auto-add a Login via the normal page form Question: username_0: Suppose the target application login form has two fields (username and password) and a Submit button. Ensure that pressing, say, Shift+Enter while focus is inside the fields or on the Submit button will silently create a new Login and submit the form. This avoids the unpleasant situation where the user has just finished typing both UN and PW into the form fields and realizes that he needs to first click the Add New Login button inside the Login Manager and re-type those two fields.
yegor256/cactoos
265390819
Title: sonarcloud Question: username_0: Let's integrate sonarcloud. Answers: username_0: @username_1 release, tag is `0.20.1` username_1: @username_0 OK, I will release it now. Please check the progress [here](http://www.username_1.com/t/13140-336547805) username_1: @username_0 Done! FYI, the full log is [here](http://www.username_1.com/t/13140-336547805) (took me 12min) Status: Issue closed username_2: Oops! Job `gh:username_0/cactoos#438` was not in scope
github-vet/rangeloop-pointer-findings
772971181
Title: chuan717/gopub: src/controllers/p2p/check.go; 11 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/chuan717/gopub/blob/5b38181e7e8a152fd5e075cc5c0f4086af645688/src/controllers/p2p/check.go#L69-L79) <details> <summary>Click here to show the 11 line(s) of Go which triggered the analyzer.</summary> ```go for _, project := range projects { s := components.BaseComponents{} s.SetProject(&project) ips := s.GetHostIps() proRes := init_sever.P2pSvc.CheckAllClient(ips) for key, value := range proRes { if !common.InList(key, ss) { ss[key] = value } } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 5b38181e7e8a152fd5e075cc5c0f4086af645688
react-navigation/react-navigation
321426007
Title: DrawerNavigatior's Menu Is Not Showing Question: username_0: I am trying to implement DrawerNavigator in my React Native Application. I have already added some screens in StackNavigator then i have added DrawerNavigator inside StackNavigator. but Navigator menu is not showing. can anyone tell me why its not showing . thanks in advance This is my repo link **https://github.com/username_0/DrawerNavigatorSample**. please review my code and let me know where i have been wrong. Answers: username_1: can you put this example on https://snack.expo.io so it is easier to run, please? also please fill out the entire issue template in the future username_0: @username_1 I have put all code here **https://snack.expo.io** . please check it and do the needful as soon as possible. Thank you in advance. username_1: @username_0 - you need to save it and send me a link to the url on snack that has the code username_0: @username_1 i have saved all code in expo. this is the url **https://snack.expo.io/Hk7uQLZCG** . pleaselet me know you need anything else. Thank you! username_0: @username_1 . Its working now. this is snack url **https://snack.expo.io/@username_0/drawernavigator** you can see home screen but side menu is not showing. please check it and let me know where I have been wrong. I am just new to React Native. so please . username_0: @username_1 I have figured it out. Thanks for you support.
comunica/comunica
455156311
Title: Detach context key constants Question: username_0: #### Issue type: - :heavy_plus_sign: Feature request ____ #### Description: Currently, our context key constants are spread all over the place (actor-init-sparql, bus-query-operation, ...), which requires various imports for using the constants. Either we should: * move all constants into a single package, * or not export constants at all, and require each package to use the full string.<issue_closed> Status: Issue closed
jhalterman/lyra
52469223
Title: Consumers never recover if withRequestedHeartbeat() is used Question: username_0: If we set a request heartbeat > 0 and the connection goes down Lyra will only try to recover for the heartbeat duration. In the example below handleShutdownSignal will be called 10 seconds after the network goes down, once that have happened Lyra seems to stop trying. If the network goes up after 7 seconds the consumer recovers without any problem. If we remove the heartbeat the consumer is able to recover after 30 seconds. ``` java Config config = new Config() .withConnectionRecoveryPolicy(RecoveryPolicies.recoverAlways()) .withConsumerRecovery(true) .withChannelRetryPolicy(RetryPolicies.retryAlways()) .withRecoveryPolicy(RecoveryPolicies.recoverAlways()); final ConnectionOptions connectionOptions = new ConnectionOptions() .withUsername(username) .withPassword(<PASSWORD>) .withHost(url) .withRequestedHeartbeat(Duration.seconds(10)) .withVirtualHost(virtualHost); try { final ConfigurableConnection connection = Connections.create(connectionOptions, config); final Channel channel = connection.createChannel(); channel.basicConsume("foo", new DefaultConsumer(channel) { @Override public void handleCancel(final String consumerTag) throws IOException { super.handleCancel(consumerTag); log.info("handleCancel"); } @Override public void handleCancelOk(final String consumerTag) { super.handleCancelOk(consumerTag); log.info("handleCancelOk"); } @Override public void handleConsumeOk(final String consumerTag) { super.handleConsumeOk(consumerTag); log.info("handleConsumeOk"); } @Override public void handleDelivery(final String consumerTag, final Envelope envelope, final AMQP.BasicProperties properties, final byte[] body) throws IOException { final String message = new String(body); onCloudMessageHandler.onCloudMessage(message); log.info("Got a message"); } @Override public void handleRecoverOk(final String consumerTag) { super.handleRecoverOk(consumerTag); log.info("handleRecoverOk"); } @Override public void handleShutdownSignal(final String consumerTag, final ShutdownSignalException sig) { super.handleShutdownSignal(consumerTag, sig); log.info("handleShutdownSignal"); } }); } catch (IOException e) { e.printStackTrace(); } ``` Status: Issue closed Answers: username_1: Not enough information and @jhalterman could not reproduce => closing.
microsoft/onnxruntime
580380524
Title: How can I get the value of the multiple layer as output? Question: username_0: **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 - ONNX Runtime installed from (source or binary): binary (install by VS Nuget Package) - ONNX Runtime version: 1.1.0 - Visual Studio version (if applicable): VS 2015 - CUDA/cuDNN version: 10.0 / 7.3 - GPU model and memory: RTX 2080 --------------------------------------------------------------------------------------------- How can I get the value of the multiple layer as output? I want to use a network with multiple outputs from one input. (For example, input the face image, and receive the face posture (yaw, pitch, roll) as output.) We are using class as the code below. How can we use multiple output if we modify it? ``` onnx_module::onnx_module(std::string sModelPath, int nInputC, int nInputWidth, int nInputHeight, int nOutputC, int nOutputWidth, int nOutputHeight) { std::string sPath = sModelPath; wchar_t* wPath = new wchar_t[sPath.length() + 1]; std::copy(sPath.begin(), sPath.end(), wPath); wPath[sPath.length()] = 0; Ort::SessionOptions session_options; Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0)); session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED); session_ = new Ort::Session(env, wPath, session_options); //session_ = new Ort::Session(env, wPath, Ort::SessionOptions{ nullptr }); delete[] wPath; const int batch_ = 1; const int channel_in = nInputC; const int width_in = nInputWidth; const int height_in = nInputHeight; const int channel_out = nOutputC; const int width_out = nOutputWidth; const int height_out = nOutputHeight; input_image_.assign(width_in * height_in * channel_in, 0.0); results_.assign(nOutputWidth * nOutputHeight * nOutputC, 0.0); input_shape_.clear(); input_shape_.push_back(batch_); input_shape_.push_back(channel_in); input_shape_.push_back(width_in); input_shape_.push_back(height_in); output_shape_.clear(); output_shape_.push_back(batch_); output_shape_.push_back(channel_out); output_shape_.push_back(width_out); output_shape_.push_back(height_out); auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU); input_tensor_ = Ort::Value::CreateTensor<float>(memory_info, input_image_.data(), input_image_.size(), input_shape_.data(), input_shape_.size()); output_tensor_ = Ort::Value::CreateTensor<float>(memory_info, results_.data(), results_.size(), output_shape_.data(), output_shape_.size()); } void onnx_module::Run(std::vector<float>& vResults) { const char* input_names[] = { "input" }; const char* output_names[] = { "output" }; (*session_).Run(Ort::RunOptions{ nullptr }, input_names, &input_tensor_, 1, output_names, &output_tensor_, 1); vResults.assign(results_.begin(), results_.end()); } ``` Answers: username_1: See this as an example https://github.com/microsoft/onnxruntime/issues/3170#issuecomment-596613449. username_0: @username_1 I checked the comment and found output name. But also try in various ways, maybe because I was incorrectly set output shape causes the error still occurs. If you know how to generate output_tensor in this case, can you tell me how? For your information, the code below is the part I am testing. 
``` output_shape_.clear(); output_shape_.push_back(batch_); output_shape_.push_back(channel_out); output_shape_.push_back(nOutputDims); auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU); input_tensor_ = Ort::Value::CreateTensor<float>(memory_info, input_image_.data(), input_image_.size(), input_shape_.data(), input_shape_.size()); output_tensor_ = Ort::Value::CreateTensor<float>(memory_info, results_.data(), results_.size(), output_shape_.data(), output_shape_.size()); ``` username_2: What is the error that occurs? It's not necessary to pre-allocate the output buffers - they will be automatically allocated when the model runs with the correct size if needed. You would still need to release them though.
epics-base/pva2pva
688741508
Title: p2p: Deprecation notice missing in README Question: username_0: When the p2p application is being started (and then only if a config file is provided), the following notice will appear: https://github.com/epics-base/pva2pva/blob/cda2222ed5fa22d65bef2e1d033bcb655f58b4a6/p2pApp/gwmain.cpp#L262-L270 So basically after spending all the effort of downloading the repo, compiling the software and adapting a config file I got this message the moment before I actually wanted to use it for the first time. A bit unfortunate, I'd say. In my view, this deprecation notice is an important information that should be found prominently in the README of this project. Answers: username_1: This is a reasonable point. d100eac09eb0572c1270e986abf76e41405a6fa3 Status: Issue closed username_0: That was a quick fix. Thanks!
scieloorg/nurl-2g
228720103
Title: Persistent tracker Question: username_0: The second generation of Nurl introduces objects of type ``nurl.tracker.Tracker``, which store metadata about accesses to the short URLs. The only implementation available so far is ``nurl.tracker.InMemoryTracker``, which does not guarantee the durability of the stored data.<issue_closed> Status: Issue closed
hapijs/hoek
95831858
Title: Syntax error: unexpected token Question: username_0: I am attempting to install `electron-prebuilt` which has a dependency on hawk, which has a dependency on hoek. During that install I get an error while installing hoek: ``` e:\code\evolve\evolve-client\node_modules\request\node_modules\hawk\node_modules\hoek\lib\index.js:596 }); ^ SyntaxError: Unexpected token ) at exports.runInThisContext (vm.js:53:16) at Module._compile (module.js:413:25) at Object.Module._extensions..js (module.js:448:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Module.require (module.js:365:17) at require (module.js:384:17) at Object.<anonymous> (e:\code\evolve\evolve-client\node_modules\request\node_modules\hawk\node_modules\hoek\index.js:1:80) at Module._compile (module.js:430:26) at Object.Module._extensions..js (module.js:448:10) ``` Do you know anything about this? It seems like there is a syntax error in the latest version of hoek. Answers: username_1: What version of hoek are you installing? I just installed the latest from npm and was able to run all of the tests. Status: Issue closed username_2: i can't reproduce this, i'm going to close the issue but feel free to reopen it if you continue to have issues. username_3: Usual windows shit, something went wrong with npm when it installed. username_0: Sorry for not closing this myself :( I don't know the exact bug numbers anymore but there was a bug in the version of iojs that I happened to be using that was hitting a max path issue on windows. This issue was fixed and no longer happens on latest versions of iojs, thanks.
superpoweredSDK/Low-Latency-Android-iOS-Linux-Windows-tvOS-macOS-Interactive-Audio-Platform
321701215
Title: SuperpoweredDecoder chokes on an AAC encoded via Androidโ€™s native MediaCodec/MediaMuxer Question: username_0: After encoding recorded WAV using Androidโ€™s native MediaCodec/MediaMuxer, the code crashes with error โ€œ**Unknown file format**โ€ when trying to open that file with SuperPowered: _const char *error = _decoder->open(path().c_str(), false, 0, 0);_ Any idea why is that happening? Very annoying! Answers: username_1: What format are you encoding to? username_0: M4A username_1: That is a container format. What's the stream format inside? How do you set up the format with MediaCodec? username_0: It is an AAC format, I do it like this: MediaMuxer mux = new MediaMuxer(outputFile.getAbsolutePath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4); MediaFormat outputFormat = MediaFormat.createAudioFormat(COMPRESSED_AUDIO_FILE_MIME_TYPE,sampleRate, 2); outputFormat.setInteger(**MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC**); outputFormat.setInteger(MediaFormat.KEY_BIT_RATE, COMPRESSED_AUDIO_FILE_BIT_RATE); outputFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384); MediaCodec codec = MediaCodec.createEncoderByType(COMPRESSED_AUDIO_FILE_MIME_TYPE); codec.configure(outputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE); codec.start(); username_1: Please send the file so we can investigate. username_2: Attached file as example. It's encoded with Android's MediaCodec + MediaMuxer. It's AAC-LC in M4A container. It plays fine on PC and on Android. FFmpeg is happy with it. But AdvancedAudioPlayer and SuperpoweredDecoder crash on trying to open it. Claims: Error loading player: Unknown file format. [smfmryavaz.zip](https://github.com/username_1/Low-Latency-Android-iOS-Linux-Windows-tvOS-macOS-Interactive-Audio-Platform/files/1990774/smfmryavaz.zip) username_0: Thanks **username_2**. username_2: @username_1 Any clue? username_1: We had no time to investigate the file yet. username_1: This file has invalid data in the STTS atom, indicating strange samples per frame data. In other words, the encoder seems to be buggy. We will improve Superpowered for the next update to handle this kind of badly encoded format. Status: Issue closed username_0: Thanks for your message. Looking forward to next update.
googleapis/python-bigquery
677867342
Title: Adding OpenTelemetry Instrumentation Question: username_0: Adding OpenTelemetry instrumentation to all of the API calls made by the BigQuery Client. This will cover all API calls, and attributes will be added based on the call. Some unit tests for the tracing module will be added. Additional tests for individual calls will be added in a separate PR and issue. **Please assign to me.**
cloudfoundry/dotnet-core-buildpack
947126073
Title: **Release:** dotnet-core-buildpack 2.3.33 Question: username_0: **Dependency Changes:** ```diff + Added dotnet-runtime at version(s): 3.1.17 + Added dotnet-sdk at version(s): 3.1.411, 5.0.302 ``` **New Commits on Develop**: fff73d11 Updating github-config 80af6b4b Updating github-config 83094339 Auto merge pull request 397 5f4b2861 Add dotnet-sdk 3.1.411 16e94198 Auto merge pull request 396 a1faa8b1 Add dotnet-sdk 5.0.302 52fdad32 Auto merge pull request 395 fd037dbe Add dotnet-runtime 3.1.17 629b4884 Updating github-config 4a7689f7 Update libbuildpack f09994d8 Update libbuildpack Refer to [release instructions](https://docs.cloudfoundry.org/buildpacks/releasing_a_new_buildpack_version.html). Answers: username_1: Closing as outdated (correct me if I'm wrong @sophiewigmore) Status: Issue closed
christianvoigt/argdown
806652744
Title: sandbox gets a 404 error Question: username_0: https://argdown.org/ has lots of links to an online sandbox, at https://argdown.org/sandbox/. But this link gives a 404 error Answers: username_1: Thanks, I updated the docs this week because of issue #206. Somehow the sandbox was not correctly built/uploaded. I repeated the process and the sandbox is online again. It is so great to see that there are Argdown users like you now that instantly find these issues and post them here. Note to myself: If I have the time I should really try to automate the whole build process with Github actions. Status: Issue closed
MadimetjaShika/vuetify-google-autocomplete
498889354
Title: Modify selected Address display format. Question: username_0: Add an option to show only the street name, rather than the complete address, with the other details in other inputs. Answers: username_1: Hi, please have a look at #83, which makes it possible to only display the place name. Is this preferable for you, or do you specifically require the street name to be displayed? Alternatively, I'll only get a chance to look into this during the month of December. PS - #83 was published in version ``2.0.0-beta.8``. Side note - in addition to your request, it might be a good idea to allow the consumer to dictate the format of the result...
username_2: Hi, yes it would be really nice to show just the street name... :-)
username_1: @username_2 @username_0, have you tried using the ``placeName`` prop? Does this provide the output you expect, or do you expect to see something else? If you expect something different, please let me know which attribute from the Google Places response you'd like the text to map to. <img width="1680" alt="Screenshot 2019-12-31 at 13 14 22" src="https://user-images.githubusercontent.com/6643245/71619963-c154b480-2bcf-11ea-931d-43218ca403b3.png">
username_2: @username_1 for me, it would be really nice to have city, zip and country as a result. Right now, if I set types: (cities), I only see the zip for small cities that have a single zip code. But for a big city like Bonn, I can't search for "53111 Bonn"; it shows just "Bonn, Deutschland" without the zip.
jasontaylordev/CleanArchitecture
884468804
Title: So why is the Update Command not using AutoMapper? Question: username_0: I was working through my own code and converting to the IMapFrom interface when I noticed this. In the UpdateTodoItemCommand you use handcrafted a=b mappings.

https://github.com/username_4/CleanArchitecture/blob/d0f133ee026aec5cd5856c5592c307b5f20fa8e4/src/Application/TodoItems/Commands/UpdateTodoItem/UpdateTodoItemCommand.cs#L37

I tried to fix this in my own code, but when using the IMapFrom interface this would require IMapFrom in the Domain project, which has no project references, and referencing Application would be wrong. I previously had a custom MappingProfile method in the Application project that would set up all mappings. There I can create a map from the Application object to a Domain object. But you cannot do this using the IMapFrom interface? Is this a limitation of IMapFrom (defined in Application), or am I doing something wrong in requiring a mapping from an Application object to a Domain object (I use this to map the Command objects inside the Handler methods to the corresponding Domain object)? Or are we supposed to use two different mapping setups?
Answers: username_0: Since you add an IMapFrom<T> to map the object from the T, would it be OK to also include the reverse mapping in the Mapping method on the "from" object?
```
public class ApplicationCommand : IRequest<DomainObject>, IMapFrom<DomainObject>
{
    public void Mapping(Profile profile)
    {
        profile.CreateMap<DomainObject, ApplicationCommand>();
        profile.CreateMap<ApplicationCommand, DomainObject>();
    }
}
```
Now I do have both sides, and this code resides in Application. But is that obvious/expressive enough, since the interface is explicitly named *IMapFrom*? In my case there are some ForMember custom mappings in there as well; otherwise I would not need this explicit Mapping method.
username_1: But Application is already referencing Domain. And AutoMapper creates a dynamic map from the properties between two types, so what is the concern when I map my MediatR UpdateTodoItemCommand to the TodoItem domain object? What does "lose invariants and encapsulation" mean; what is the concern? I mean, UpdateTodoItemCommand and TodoItem are pretty much "related". If I want to add or change fields on TodoItem, I will need to update almost all Commands to do with storing TodoItem, i.e. the field has to be added in CreateTodoItem, UpdateTodoItem and probably even DeleteTodoItem. And vice versa, most of the time adding things to the Commands will require changes to TodoItem. And even when they don't, this will not break the dynamic mapping that AutoMapper creates. It might break code, but that will also happen when I add fields to TodoItem and forget to update the manually added mappings, as described in my reference to the source. I would say most of the time those mappings can be created by AutoMapper and hence will result in fewer mistakes. I am not a TDD or design-based developer, so I am trying to understand the reasoning behind *not* using AutoMapper for some things where it seems it could be used.
username_2: I too don't quite understand why AutoMapper is not used. I have been using much the same structure as Jason's template in the past, but I always used AutoMapper for query and command mapping as well. If someone could give some detailed insight into why this is bad I would very much appreciate it.
username_3: I already wrote a short version of why not to do this.
If you want a more detailed version, then you should read some books and understand the basic principles of OOP:
- Object Thinking, David West
- Object-Oriented Analysis and Design with Applications, Grady Booch
- Design Patterns, GoF
- Object-Oriented Software Engineering: A Use Case-Driven Approach, <NAME>
- Domain-Driven Design, <NAME>, etc.
- All of Martin Fowler's books and bliki entries

Or you could just read a couple of blog posts and see what the creator of AutoMapper has to say about this topic:
https://lostechies.com/jimmybogard/2009/09/18/the-case-for-two-way-mapping-in-automapper/
https://enterprisecraftsmanship.com/posts/on-automappers/
You should read at least these two blogs. But if you want to become a better craftsman, you should also read the above books. Software development is not just googling and copy-pasting working code from Stack Overflow. I am not saying that is a bad thing per se, but you should always give meaning to your code and review what you are doing. Status: Issue closed
willyelm/pug-html-loader
203135390
Title: Variable interpolation not working Question: username_0: My templates are rendering, but they can't see the variables in my component. What am I doing wrong? :/
```pug
main
  h1 Hello from Angular App with Webpack #{title}
  div.someButton
    button(md-raised-button) Basic Button
```
```ts
import { Component } from '@angular/core';
import '../../public/css/styles.css';

@Component({
  selector: 'mep-fe',
  templateUrl: './app.component.pug',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  public title = 'Some Title';
}
```
Answers: username_1: Since you are compiling it with Angular, it should be {{title}} instead of #{title} Status: Issue closed
just4fun/meaning
93195438
Title: [MEAN-005] Replace the whole UI of the front site Question: username_0: As the [current site](http://talent-is.me/) shows, the UI is very simple and even a little UGLY. We should replace it with an entirely amazing & responsive UI. Answers: username_0: Closing due to https://github.com/username_0/meaning#updated-in-2015. Status: Issue closed
AnthonyNahas/ngx-auth-firebaseui
437913189
Title: feat: add microsoft and yahoo as authentication providers Question: username_0: ### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
```
### Repro steps
<!-- Simple steps to reproduce this bug. Please include: commands run, packages added, related code changes. A link to a sample repo would help too. -->
### Desired functionality
If the Yahoo or Microsoft auth providers have been enabled via the Firebase console, the user should be able to authenticate via the library.
### Mention any other details that might be useful
Microsoft and Yahoo sign-in are now available for Firebase Auth
[microsoft](https://firebase.google.com/docs/auth/web/microsoft-oauth)
[yahoo](https://firebase.google.com/docs/auth/web/yahoo-oauth)
Answers: username_1: Anthony, this would be awesome.. Microsoft login is a most wanted feature... I am working a lot in education and a lot of schools in the Netherlands/Belgium have either Microsoft accounts or Google accounts
username_0: I will add these two auth providers in the next release
username_0: @username_1 #232 is ready for deploy
username_2: @username_0 I've been testing out your commit for the Microsoft login and it works well. However, is there support for specifying the tenant in the setCustomParameters? I'm using your commit right now in an internal application I've been developing, and I just modified the microsoftAuthProvider to have a tenant (which was required for login).
username_0: @username_2 can u provide a short example? PS: I would appreciate a PR from you ❤️
Status: Issue closed
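The short example asked for above never appears in the thread. A minimal sketch of what username_2 describes (pointing the Microsoft provider at a single Azure AD tenant) using the Firebase JS SDK's generic OAuth provider documented at the linked microsoft-oauth page; the tenant ID is a placeholder:

```ts
import * as firebase from 'firebase/app';
import 'firebase/auth';

// 'microsoft.com' is the provider id documented by Firebase Auth.
const provider = new firebase.auth.OAuthProvider('microsoft.com');

// Restrict sign-in to one Azure AD tenant (placeholder value).
provider.setCustomParameters({ tenant: '00000000-0000-0000-0000-000000000000' });

// Assumes an already-initialized Firebase app.
firebase.auth().signInWithPopup(provider)
  .then((result) => console.log('signed in as', result.user && result.user.uid))
  .catch((err) => console.error('Microsoft sign-in failed', err));
```

How (or whether) ngx-auth-firebaseui exposes these custom parameters is not shown in the thread; the snippet only illustrates the underlying Firebase call the library would need to make.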
solana-labs/solana
388431410
Title: [storage mining] Validator needs to respond to RPC request for storage last id Question: username_0: #### Problem Validator needs to respond to RPC request for storage last id so validators can use it to pick the block to store. #### Proposed Solution Add shared storage state to the bank and add RPC interface to query for storage mining last id.<issue_closed> Status: Issue closed
birkir/gatsby-source-prismic-graphql
482339905
Title: Plugin is leaking absolute file paths Question: username_0: When I'm building my site on a local machine, the source code contains the absolute paths to the components which were used to generate pages. How can I avoid this? ![path](https://user-images.githubusercontent.com/10587159/63272755-f4d81780-c29c-11e9-9da1-f5e931a20df3.PNG) Answers: username_1: There isn't really a fix. But you can try to use relative paths like this:
```js
{
  resolve: 'gatsby-source-prismic-graphql',
  options: {
    pages: [{
      component: 'src/templates/article.js',
    }],
  }
}
```
We need to pass the object to the client side, but we don't need the pages object, so we could fix at least that. username_0: This doesn't seem to work.
```bash
The plugin "gatsby-source-prismic-graphql" must set the absolute path to the page component when create creating a page.
The (relative) path you used for the component is "src/templates/index.jsx"
You can convert a relative path to an absolute path by requiring the path module and calling path.resolve()
e.g.
const path = require("path")
path.resolve("src/templates/index.jsx")
```
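The error output itself points at how to make the plugin accept the component again: resolve the relative path to an absolute one at config time. A minimal sketch combining the plugin options from the comment above with that hint (template filename taken from the error output); note this satisfies the plugin but does not remove the absolute path from the build output, which was the original complaint:

```ts
// gatsby-config.js (written as plain CommonJS, which is also valid TypeScript)
const path = require('path');

module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-prismic-graphql',
      options: {
        pages: [{
          // path.resolve() turns the relative path into the absolute path
          // the plugin insists on, per the error message above.
          component: path.resolve('src/templates/index.jsx'),
        }],
      },
    },
  ],
};
```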
spatie/laravel-cors
369855304
Title: Does this support laravel 5.7? Question: username_0: Hi, I checked, it is not working in laravel 5.7. Answers: username_1: Yes username_0: Hi, I checked, it is not working in laravel 5.7. username_1: Can you submit a failing test? username_0: $composer test [Symfony\Component\Console\Exception\CommandNotFoundException] Command "test" is not defined. username_1: Try vendor/bin/phpunit username_2: Looks like there is a general issue with Laravel 5.7 whatever someone use, package or a custom-made middleware. I guess is Laravel issue and will be fixed in the next releases Status: Issue closed
dart-lang/markdown
229214511
Title: Track compliance with GFM (GitHub) spec Question: username_0: Very exciting that GitHub [released a spec](https://github.com/blog/2333-a-formal-spec-for-github-flavored-markdown) of GitHub-flavored Markdown (GFM), that we can finally work toward, rather than opening GitHub comment boxes and trying out different syntaxes. Status: Issue closed Answers: username_0: Fixed by #165
uspki/policies
271760359
Title: Section 6.1.1.3 Question: username_0: **Organization / Program:** DoD <br>**Section:** 6.1.1.3 <br>**PDF Page:** 60<br>**PDF Line(s):** 1547-1549<br>**Comment:** What are the requirements for subscribers when generating keys? Also, how does a CA know if the key is known to be weak?<br><br>**Suggested Change:** State that CA keys shall not be archived Answers: username_0: Related to #141 discussions during draft policy development. Summary is we removed the FIPS 140 requirement for subscriber key generation as this is an auditable event **and** unenforceable from the CA's perspective. Other compensating controls exist in USG and are based on the Risk Management Framework. username_0: [suggested change does not match comment] Status: Issue closed
CocoaPods/CocoaPods
827706583
Title: Error when trying to run a flutter project on iOS. Android project is working absolutely fine but getting errors when running for iOS. Question: username_0: Launching lib/main.dart on iPhone 12 Pro Max in debug mode... Running pod install... 2.3s CocoaPods' output: โ†ณ Preparing Analyzing dependencies Inspecting targets to integrate Using `ARCHS` setting to build architectures of target `Pods-Runner`: (``) Fetching external sources -> Fetching podspec for `Flutter` from `Flutter` -> Fetching podspec for `advance_pdf_viewer` from `.symlinks/plugins/advance_pdf_viewer/ios` -> Fetching podspec for `cloud_firestore` from `.symlinks/plugins/cloud_firestore/ios` -> Fetching podspec for `firebase_auth` from `.symlinks/plugins/firebase_auth/ios` -> Fetching podspec for `firebase_core` from `.symlinks/plugins/firebase_core/ios` -> Fetching podspec for `fluttertoast` from `.symlinks/plugins/fluttertoast/ios` -> Fetching podspec for `hexcolor` from `.symlinks/plugins/hexcolor/ios` -> Fetching podspec for `path_provider` from `.symlinks/plugins/path_provider/ios` -> Fetching podspec for `shared_preferences` from `.symlinks/plugins/shared_preferences/ios` -> Fetching podspec for `sqflite` from `.symlinks/plugins/sqflite/ios` -> Fetching podspec for `stripe_payment` from `.symlinks/plugins/stripe_payment/ios` -> Fetching podspec for `url_launcher` from `.symlinks/plugins/url_launcher/ios` -> Fetching podspec for `video_player` from `.symlinks/plugins/video_player/ios` -> Fetching podspec for `wakelock` from `.symlinks/plugins/wakelock/ios` -> Fetching podspec for `webview_flutter` from `.symlinks/plugins/webview_flutter/ios` Resolving dependencies of `Podfile` โ€•โ€•โ€• MARKDOWN TEMPLATE โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€•โ€• ### Command ``` /usr/local/bin/pod install --verbose ``` ### Report * What did you do? * What did you expect to happen? * What happened instead? ### Stack [Truncated] Search for existing GitHub issues similar to yours: https://github.com/CocoaPods/CocoaPods/search?q=dlsym%280x7fc68350bd60%2C+In it_ffi_c%29%3A+symbol+not+found+-+%2FLibrary%2FRuby%2FGems%2F2.6.0%2Fgems%2F ffi-1.14.2%2Flib%2Fffi_c.bundle&type=Issues If none exists, create a ticket, with the template displayed above, on: https://github.com/CocoaPods/CocoaPods/issues/new Be sure to first read the contributing guide for details on how to properly submit a ticket: https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md Don't forget to anonymize any private data! Looking for related issues on cocoapods/cocoapods... Found no similar issues. To create a new issue, please visit: https://github.com/cocoapods/cocoapods/issues/new Error running pod install Error launching application on iPhone 12 Pro Max. Status: Issue closed Answers: username_1: Please search for issues regarding `ffi` gem installation. There are countless here.
robotframework/RIDE
326521742
Title: Python not responding - robotframework 3.0.4 Question: username_0: Hello everybody, I have prepared everything to start using robotframework: ``` chromedriver==2.24.1 Pygments==2.2.0 robotframework==3.0.4 robotframework-ride==1.7.1 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==3.1.1 selenium==3.12.0 six==1.11.0 wxPython==4.0.1 ``` I can start it, but after few seconds everything is hanging, I am getting messages: ![image](https://user-images.githubusercontent.com/28060263/40546135-a7e7212c-602e-11e8-9583-30d34035d026.png) In task manager: ![image](https://user-images.githubusercontent.com/28060263/40546147-ae08ed42-602e-11e8-8e33-ad0b528e6cfd.png) In console I got that: ``` Traceback (most recent call last): File "c:\program files\python36\lib\site-packages\wx\core.py", line 2158, in Notify self.notify() File "c:\program files\python36\lib\site-packages\wx\core.py", line 3315, in Notify self.result = self.callable(*self.args, **self.kwargs) File "c:\program files\python36\lib\site-packages\robotide\editor\editors.py", line 154, in _collabsible_changed self._store_settings_open_status() File "c:\program files\python36\lib\site-packages\robotide\editor\editors.py", line 89, in _store_settings_open_status self._settings.IsExpanded() RuntimeError: wrapped C/C++ object of type Settings has been deleted Traceback (most recent call last): File "c:\program files\python36\lib\site-packages\wx\core.py", line 2158, in Notify self.notify() File "c:\program files\python36\lib\site-packages\wx\core.py", line 3315, in Notify self.result = self.callable(*self.args, **self.kwargs) File "c:\program files\python36\lib\site-packages\robotide\editor\editors.py", line 154, in _collabsible_changed self._store_settings_open_status() File "c:\program files\python36\lib\site-packages\robotide\editor\editors.py", line 89, in _store_settings_open_status self._settings.IsExpanded() RuntimeError: wrapped C/C++ object of type Settings has been deleted ``` Does anybody know what is going on? I was trying to fix it in may different ways, eg set highest priority or change Minimum processor state to biggesr value I also upgrade all stuff and still same bug. Thanks for helping me Answers: username_1: @username_0 The version of RIDE you are using is from my fork at [here](https://github.com/username_1/RIDE). Issues should be reported there. Please see the Wiki on my project, for the known problems when using wxPython different from 2.8.12.1. You could install and try a newer version, but there is no assurance of hangings or crashes. (Problems happen frequently when using the Grid Editor and navigating in the project tree. It is safer to use the Text Editor.) username_2: @username_0 Have you tried out latest release? Does the issue still occur? If it is, then please provide more info about when RIDE becomes non responsive. Status: Issue closed username_1: Closing because is too old and we cannot reproduce, or lack of information.
mbraak/jqTree
15211735
Title: Dotted lines for the tree Question: username_0: Hi, thanks for this wonderful plugin. I am using it easily with an AngularJS directive. I am using it to replace a legacy tree, and the user wanted the new tree to show dotted lines like the old one. Is it possible to add dotted lines to show the hierarchy of the nodes?<issue_closed> Status: Issue closed
pinkywafer/Calendarific
925262064
Title: Dates not being returned Question: username_0: For some reason the dates are not being returned. When I pull the state for a date I get the following...
date: '-'
description: NOT FOUND
attribution: Data provided by calendarific.com
unit_of_measurement: Days
friendly_name: Independence Day
icon: mdi:calendar-blank
Answers: username_1: I'm also getting this - on startup I can see the integration is saying that I've exceeded my usage limits, even though I've used 2/1000 at the moment Status: Issue closed
bibtex/bibsleigh
96774680
Title: visualise venues in a group Question: username_0: Since now the โ€œvenueโ€ (next-to-top level in BibSLEIGH) can contain multiple only vaguely related true venues, would be nice to put several icons of inner venues on the page. Simple traversal of children with collection of venue attribute (report missing ones while we're at it?) at visualisation stage.<issue_closed> Status: Issue closed
google/go-containerregistry
933675814
Title: How to set SSL_CERT_DIR and SSL_CERT_FILE with crane command in crane push. Question: username_0: Yeah thanks the above command worked for me.. Answers: username_1: Based on [these docs](https://golang.org/pkg/crypto/x509/#SystemCertPool), it sounds like you can set these environment variables directly, though I've never tested it. What behavior are you seeing when you try it? ``` SSL_CERT_DIR=/path/to/certs crane push some.tar some/image ``` Status: Issue closed username_0: Yeah thanks the above command worked for me..
stellar/go
642478238
Title: Bad feebump signature gives result code `tx_insufficient_fee` Question: username_0: #2353 # What version are you using? ``` "horizon_version": "1.4.0-d535a7970fa8f82f31d00eff35d442b23add9ba7", "core_version": "stellar-core 13.1.0 (469b2e70dec29ade2c57d39bc9db129579e63207)", "network_passphrase": "<PASSWORD> ; September 2015", "current_protocol_version": 13, "core_supported_protocol_version": 13 ``` ### What did you do? Submitted fee bump transaction with bad fee bump signature. https://laboratory.stellar.org/#txsubmitter?input=AAAABQAAAABkRWYVA4NxmiZUibHxDNXohsINnVWEc7tfktZd3rrrUQAAAAAAvrwgAAAAAgAAAAB5M7UAjmmALSznmFU8nqnSS1TOIRgHWTzeOhfLlu4DHACYloAADLwrAAAAAQAAAAAAAAAAAAAAAQAAAAAAAAABAAAAAGRFZhUDg3GaJlSJsfEM1eiGwg2dVYRzu1%2BS1l3euutRAAAAAAAAAAABMS0AAAAAAAAAAAGW7gMcAAAAQJPj50gRrhyvuMRHtFhT3Lcjnf%2BNCx1XGqYdR%2BmwrzXjfQGfCqL40H5CoOlzexMzsPWcMFcsJpGD74azLmrPyQMAAAAAAAAAAd6661EAAABAeCxe6mF833titr0DlatXp7CAGqqkAmQvj2kxMMEjWAVN%2F91mYNaagFBysFRLp8Ra%2BbLdFf80dXPkVZGZhAKaCA%3D%3D&network=test ### What did you expect to see? Error denoting bad signature. ### What did you see instead? Error denoting insufficient fee. ``` extras.result_codes: {"transaction":"tx_insufficient_fee"} Result XDR: AAAAAAAAAMj////3AAAAAA== ``` Answers: username_0: This may be how it's specified and it's my expectation that is wrong. username_1: I think that checking signature is an expensive operation so it first checks if the fee is sufficient. It seems it's also correct according to [CAP-15](https://github.com/stellar/stellar-protocol/blob/master/core/cap-0015.md#validity). Moving to stellar-core for confirmation.
henare/henare.github.io
329717336
Title: Imported image captions not working Question: username_0: It seems like they use a WordPress `[caption]` shortcode, so they're not coming through properly in the imported data. Page example: https://username_0.github.io/blog/2009/08/06/introducing-openaustralia-devlive/<issue_closed> Status: Issue closed
cdnjs/cdnjs
192923568
Title: [Request] Add react-textarea-autosize Question: username_0: **Library name:** react-textarea-autosize
**Git repository url:** https://github.com/andreypopp/react-textarea-autosize
**npm package url(optional):** https://www.npmjs.com/package/react-textarea-autosize
**License(s):** The MIT License (MIT)
**Official homepage:** http://andreypopp.github.io/react-textarea-autosize/
**Wanna say something? Leave message here:** Thanks a million! 👍
=====================
Notes from cdnjs maintainer: You are welcome to add a library by sending a pull request; it'll be faster than just opening a request issue. Please don't forget to read the guidelines for contributing, thanks!!<issue_closed> Status: Issue closed
pythonprofilers/memory_profiler
333857438
Title: Does it monitor memory usage on GPU from TensorFlow python script? Question: username_0: Can I use mprof to monitor the amount of GPU RAM being used by my TensorFlow python script? Or will it only monitor the CPU RAM? Thanks. -Tony Answers: username_1: It doesn't monitor GPU memory, unless the GPU memory is copied back to main memory, which it often is but depends on the program username_0: Thanks! Status: Issue closed
getkirby/kql
760811353
Title: Block/Layout fields return type confusion Question: username_0: I'm seeing block and layout fields returned as a string of the raw object. I'd have expected either a string of HTML or the raw object. What should the return type be for block and layout fields? Answers: username_1: Blocks are not supported in KQL yet. I will add support asap. username_0: @username_1 Any updates on this? KQL not supporting blocks is becoming quite a blocker(!). username_1: Sorry for the delay! It's finally here :) Blocks and layouts are now fully supported. ๐ŸŽ‰ ![Screenshot 2021-02-12 at 10 43 16](https://user-images.githubusercontent.com/24532/107754383-b0a6c580-6d21-11eb-92dd-12b90c651fa9.png) ![Screenshot 2021-02-12 at 10 39 26](https://user-images.githubusercontent.com/24532/107754375-ae446b80-6d21-11eb-956d-e036259478e1.png) Status: Issue closed
haskell/haskell-language-server
675749914
Title: GHC-8.8.3 compile error Question: username_0: Running `stack build --stack-yaml=stack-8.8.3.yaml` on the latest master (`d36bb9929`) yields ``` haskell-language-server> configure (lib + exe) haskell-language-server> Configuring haskell-language-server-0.3.0.0... haskell-language-server> build (lib + exe) haskell-language-server> Preprocessing library for haskell-language-server-0.3.0.0.. haskell-language-server> Building library for haskell-language-server-0.3.0.0.. haskell-language-server> Preprocessing executable 'haskell-language-server-wrapper' for haskell-language-server-0.3.0.0.. haskell-language-server> Building executable 'haskell-language-server-wrapper' for haskell-language-server-0.3.0.0.. haskell-language-server> Preprocessing executable 'haskell-language-server' for haskell-language-server-0.3.0.0.. haskell-language-server> Building executable 'haskell-language-server' for haskell-language-server-0.3.0.0.. haskell-language-server> [3 of 3] Compiling Main haskell-language-server> haskell-language-server> /home/david/projects/haskell-language-server/exe/Main.hs:179:104: error: haskell-language-server> โ€ข Couldn't match expected type โ€˜IO IdeStateโ€™ haskell-language-server> with actual type โ€˜p0 -> IO IdeStateโ€™ haskell-language-server> โ€ข The lambda expression โ€˜\ getLspId haskell-language-server> event haskell-language-server> vfs haskell-language-server> caps haskell-language-server> wProg haskell-language-server> wIndefProg haskell-language-server> _getConfig haskell-language-server> -> ...โ€™ haskell-language-server> has 7 arguments, haskell-language-server> but its type โ€˜IO LspId haskell-language-server> -> (FromServerMessage -> IO ()) haskell-language-server> -> VFSHandle haskell-language-server> -> haskell-lsp-types-0.22.0.0:Language.Haskell.LSP.Types.ClientCapabilities.ClientCapabilities haskell-language-server> -> WithProgressFunc haskell-language-server> -> WithIndefiniteProgressFunc haskell-language-server> -> IO IdeStateโ€™ haskell-language-server> has only six haskell-language-server> In the second argument of โ€˜($)โ€™, namely haskell-language-server> โ€˜\ getLspId event vfs caps wProg wIndefProg _getConfig haskell-language-server> -> do t <- t haskell-language-server> hPutStrLn stderr $ "Started LSP server in " ++ showDuration t haskell-language-server> ....โ€™ haskell-language-server> In a stmt of a 'do' block: haskell-language-server> runLanguageServer haskell-language-server> options haskell-language-server> (pluginHandler plugins) haskell-language-server> getInitialConfig haskell-language-server> getConfigFromNotification haskell-language-server> $ \ getLspId event vfs caps wProg wIndefProg _getConfig haskell-language-server> -> do t <- t haskell-language-server> hPutStrLn stderr $ "Started LSP server in " ++ showDuration t haskell-language-server> .... haskell-language-server> | haskell-language-server> 179 | runLanguageServer options (pluginHandler plugins) getInitialConfig getConfigFromNotification $ \getLspId event vfs caps wProg wIndefProg _getConfig -> do haskell-language-server> | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^... 
haskell-language-server> -- While building package haskell-language-server-0.3.0.0 using: /home/david/.stack/setup-exe-cache/x86_64-linux-tinfo6/Cabal-simple_mPHDZzAJ_3.0.1.0_ghc-8.8.3 --builddir=.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0 build lib:haskell-language-server exe:haskell-language-server exe:haskell-language-server-wrapper --ghc-options " -fdiagnostics-color=always" Process exited with code: ExitFailure 1 ``` Answers: username_0: Whoops, didn't update submodules. Never mind Status: Issue closed
pytorch/pytorch
1106394256
Title: Error in `torch.trapz` documentation Question: username_0: ### 📚 The doc issue
In the documentation of `torch.trapz`, it is an alias of `torch.trapezoid`. However, their signatures in the doc are different:
```python
torch.trapz(y, x, *, dim=- 1)
torch.trapezoid(y, x=None, *, dx=None, dim=- 1)
```
### Suggest a potential alternative/fix
They should have the same signature Answers: username_1: Hi all, I am new to open source and want to contribute; if you could allow me to work on this issue, it would be a great start for me. Thanks :) username_1: Hi @username_4, I am really interested in working on this case; I would like to ask your permission to go ahead. username_2: Fixing the signature (and verifying the updated signature works as expected) would be great. username_4: Hi @username_3, if you're looking for a docs issue to start with, this would be a great one. We would especially accept a patch, like @username_2 said, if you're also able to verify the updated signature works. If you're looking for more technical fixes, the [good first issue label](https://github.com/pytorch/pytorch/labels/good%20first%20issue) may have some good issues to sink your teeth into
tensorflow/tensorflow
467411094
Title: tf.distribute.MirroredStrategy incompatible with tf.estimator training when defining tf.train.Scaffold with saver Question: username_0: <em>Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template</em> **System information**: - Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 - Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No - TensorFlow installed from (source or binary): binary - TensorFlow version (use command below): 1.14.0 - Python version: 3.7.3 - CUDA/cuDNN version: 10.0/7.1 - GPU model and memory: TitanXp 12G x 4 **Describe the current behavior** When I use `tf.estimator` together with `tf.distribute.MirroredStrategy()` for single worker multiple GPUs training, I meet the following error if I try to define `tf.train.Scaffold` for `tf.estimator.EstimatorSpec()` to configure the saver parameters. Everything works fine JUST I remove the scafflold and the multiple gpu training for estimator is referred this [tutorial](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/distribute_strategy.ipynb#scrollTo=_098zB3vVhuV). ``` ... File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 126, in _require_cross_replica_or_default_context_extended raise RuntimeError("Method requires being in cross-replica context, use " RuntimeError: Method requires being in cross-replica context, use get_replica_context().merge_call() ``` **Code to reproduce the issue** Here is my minimum snippet of code to reproduce this error. 
```python import tensorflow as tf from tensorflow.python.keras.applications import MobileNetV2 l = tf.keras.layers def input_fn(): dataset = tf.data.Dataset.from_tensor_slices({"feature": tf.random_normal(shape=(1, 224, 224, 3), dtype=tf.float32), "label": tf.random.uniform(shape=[1], minval=0, maxval=2, dtype=tf.int32)}) dataset = dataset.repeat() dataset = dataset.batch(2) return dataset def model_fn(features, labels, mode): input_tensor = features['feature'] label = features['label'] if mode == tf.estimator.ModeKeys.TRAIN: model = MobileNetV2(input_shape=(224, 224, 3), classes=2, weights=None) output = model(input_tensor) loss = tf.losses.sparse_softmax_cross_entropy(label, output) train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=tf.train.get_global_step()) # define scaffold saver = tf.train.Saver( sharded=True, keep_checkpoint_every_n_hours=1, save_relative_paths=True) tf.add_to_collection(tf.GraphKeys.SAVERS, saver) scaffold = tf.train.Scaffold(saver=saver) # remove scaffold this code could work return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op, scaffold=scaffold) # multiple gpu configuration for estimator devices = ["/device:GPU:0", "/device:GPU:1"] strategy = tf.distribute.MirroredStrategy(devices=devices) config = tf.estimator.RunConfig(model_dir="test_multi_gpu",) [Truncated] File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 404, in _GroupByDevices pydev.canonical_name(spec.tensor.device) for spec in saveable.specs) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 404, in <genexpr> pydev.canonical_name(spec.tensor.device) for spec in saveable.specs) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/training/saving/saveable_object.py", line 52, in tensor return self._tensor() if callable(self._tensor) else self._tensor File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/values.py", line 1358, in tensor return strategy.extended.read_var(sync_on_read_variable) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/mirrored_strategy.py", line 768, in read_var return replica_local_var._get_cross_replica() # pylint: disable=protected-access File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/values.py", line 1424, in _get_cross_replica axis=None) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 832, in reduce return super(StrategyV1, self).reduce(reduce_op, value, axis) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 552, in reduce _require_cross_replica_or_default_context_extended(self._extended) File "/data/fanzong/miniconda3/envs/tf_cuda10/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 126, in _require_cross_replica_or_default_context_extended raise RuntimeError("Method requires being in cross-replica context, use " RuntimeError: Method requires being in cross-replica context, use get_replica_context().merge_call() ``` Answers: username_0: This error seems to be due to a redundant initialization of Saver when using MirroredStrategy for multi-gpu training since TF estimator would initialize `Saver` based on its 
`RunConfig`. In this case, commenting out the saver inside the EstimatorSpec could solve the problem, but I would welcome any ideas about the detailed mechanism of this conflict.
username_1: I also hit this problem; have you found any way to solve it? @username_0
username_1: I fixed this bug when training mobilenetv2_ssd in the TensorFlow Object Detection API with TF 1.15 by adding `inplace_batchnorm_update: true` in pipeline.config. I hope this can help anyone else.
username_2: I also hit it; removing the scaffold and saver in the EstimatorSpec can fix it. It is caused by keras + estimator.
username_0: This [thread](https://github.com/tensorflow/models/issues/5421#issuecomment-510821659) may help. I'm not sure whether you are hitting this error in the tf models repo. @username_1
username_3: "What's supported now? In TF 2.0 release, there is limited support for training with Estimator using all strategies except TPUStrategy. **_Basic training and evaluation should work, but a number of advanced features such as scaffold do not yet work._** There may also be a number of bugs in this integration." Quoted from the link below:
https://www.tensorflow.org/guide/distributed_training#whats_supported_now_3
username_0: @username_3 Yep, maybe it's time to move to the Keras approach. Status: Issue closed
alibaba/weex
179953517
Title: Android testing: the weex page is a list; list items go missing and are not displayed Question: username_0:
```
data:{
  listdata:[ ],
  testdata:[1,1,1,1,1,1],
},
```
As above: at this point listdata is empty, and then in created I run this.listdata = this.testdata; to assign testdata to listdata. Sometimes the list items are then lost, sometimes everything displays completely, and sometimes the first item or the first few items are not shown at all, leaving blank space.
Answers: username_0: Adding the list code:
```
<list class="listclass" loadmoreoffset="20">
    <refresh class="refresh-view" display="{{refresh_display}}" onrefresh="onrefresh">
      <loading-indicator style="height:60;width:60;color:#3192e1;margin-top:10"></loading-indicator>
    </refresh>
    <cell class="cell" append="tree" >
      <confer-item repeat="{{item in listdata}}" >
      </confer-item>
    </cell>
    <loading class="loading-view" display="{{loading_display}}" onloading="onloading">
      <text style="text-align: center; color:black;font-size:30;margin-top:30">{{loadtext}}</text>
    </loading>
  </list>
```
username_0: It turned out to be related to where I put the repeat; it seems fine now. Sorry about that. Status: Issue closed
IgniteUI/help-topics
413482228
Title: Add note for igTimePicker that setting the value at runtime should use the display format Question: username_0: When setting the value in igTimePicker at runtime, a string value must match the display format; string ISO values like "2019-02-21T00:00:00.000Z" are not accepted. In that case the value should be passed as a [date](http://jsfiddle.net/uwkpx8b0/) in order to work: $("#timePicker").igTimePicker("option","value", new Date("2019-02-21T00:00:00.000Z")); This should be added as a note in the API value description or in the igTimePicker topics. Answers: username_1: Added a note in the time picker overview topic and not in the API, because the time picker value property is inherited from the date picker. Status: Issue closed
exastro-suite/oase
989638900
Title: ๆœช็Ÿฅไบ‹่ฑกใฎๆ™‚ใซITAใงใƒญใ‚ฐๅŽ้›†ใ‚’ๅฎŸๆ–ฝใงใใ‚‹ใ‚ˆใ†ใซใ—ใŸใ„(SERVER_LIST+Conductor) Question: username_0: โ€ปๆฉŸ่ƒฝๆ”นไฟฎใ—ใชใใฆใ‚‚ใ€ไปŠใฎOASEใงๅฎŸ็พใงใใใ†ใงใ‚ใ‚Œใฐใ€ใใฎๆ‰‹ๆณ•ใ‚’็ขบ็ซ‹ใ•ใ›ใŸใ„ ๆœช็Ÿฅไบ‹่ฑกใฎๅ ดๅˆใ€ใƒญใ‚ฐใ‚„ใ‚ทใ‚นใƒ†ใƒ ๆƒ…ๅ ฑใ‚’ๅ–ๅพ—ใ—ใฆใ€ใใ‚Œใ‚’ๅŸบใซ่ชฟๆŸปใ‚’้€ฒใ‚ใ‚‹ใ‚ฑใƒผใ‚นใŒๅคšใ„ใ€‚ ใใฎใŸใ‚ใ€ใใ†ใ„ใฃใŸใ“ใจใ‚’ITAใงๅฎŸๆ–ฝใ—ใ€ใใฎๆƒ…ๅ ฑใ‚’่ชฟๆŸปๆ‹…ๅฝ“ใซๆŠ•ใ’ใ‚‹ใ‚ˆใ†ใชไป•็ต„ใฟใ‚’ๆไพ›ใ—ใŸใ„ใจ่€ƒใˆใฆใ„ใ‚‹ (ๆทปไป˜่ณ‡ๆ–™ๅ‚็…ง) โ– ๅ—ใ‘ๅ…ฅใ‚ŒๅŸบๆบ– ใƒปๆœช็Ÿฅไบ‹่ฑกใฎๅ ดๅˆใ€ITAใ‚’ไธ€ๅพ‹ใงใ‚ญใƒƒใ‚ฏใ—ใ€ใƒฆใƒผใ‚ถใซ้€š็Ÿฅใงใใ‚‹ใ“ใจ ใƒปๆœช็Ÿฅไบ‹่ฑกใฎๅ ดๅˆใฎใ‚ขใ‚ฏใ‚ทใƒงใƒณใฏใ€ๅฟ…้ ˆใจใฏใ—ใŸใใชใ„(ๅฟ…่ฆใชใ‚‰่จญๅฎšใ™ใ‚‹ใ‚ˆใ†ใซใ—ใŸใ„) ใƒปๆœช็Ÿฅไบ‹่ฑกใฎๅ ดๅˆใ€ใ‚ขใ‚ฏใ‚ทใƒงใƒณๅ‡บๆฅใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจ ใƒปITAใงใ‚ญใƒƒใ‚ฏใ—ใŸใ“ใจใ‚’ใ‚ขใ‚ฏใ‚ทใƒงใƒณๅฑฅๆญดใง็ขบ่ชใงใใ‚‹ใ‚ˆใ†ใซใ™ใ‚‹ใ“ใจ ใƒปๆœช็Ÿฅไบ‹่ฑกใฎๅ ดๅˆใ€ใƒ›ใ‚นใƒˆๅใ‚’ๆŠฝๅ‡บใงใใ‚‹ใ“ใจ ใƒปใ‚ขใ‚ฏใ‚ทใƒงใƒณๅฑฅๆญดใงๆ—ขๅญ˜ใ‹ๆœช็Ÿฅใ‹ใŒใ‚ใ‹ใ‚‹ใ“ใจ ใƒกใƒข conductorๅ„ชๅ…ˆ
webpack/webpack.js.org
316403706
Title: None Answers: username_1: Question: if this is true, how can you determine if you need one chunk to skip the hash, and the rest to have the hash? The problem is we're loading one single common file first, before any of the other entry files in a JSP which doesn't konw anything about the hash. What's the recommended solution in that case? For example: ``` <script type="text/javascript" src="${requestScope.staticContentPath}/dist/webpack.common.js" data-order="1" data-relative="/dist/webpack.common.js" onerror="onScriptError(event)" onload="onScriptLoad(event)"></script> <script type="text/javascript" src="${requestScope.staticContentPath}/dist/home.entry.js" data-order="2" data-relative="/dist/home.entry.js" onerror="onScriptError(event)" onload="onScriptLoad(event)"></script> ``` In that instance, I don't want the webpack.common.js to have the hash as the Java server doesn't know what it is. username_2: What is the actual technical reason blocking this? I've seen it referenced a few times. We don't necessarily need our chunknames to need to be a function, but having the path capable of being a function would work fine. **The use case:** We have multiple repos being joined together at build time and are generating 1 common chunk, and 1 library chunk using splitChunks. For the moment these files need to be stored in their respective repos because these outputted files are version controlled (we are trying to move away from this though). Even after doing that though, we'd like the files to be put in their respective repo folders so that the application can add preload tags for them. To do that it needs to know they are in a predictable location based on the addon belong to. Right now this is being worked around by adding the requisite file paths into the name and chunkname of each entry. This automated for the initial entries, which I crawl and generate long names with filepaths in them, but the chunknames require them to be manually typed. This works for now but is not feasible long term. Eg. ```ts if (richEditor) { const mountEditor = await import(/* webpackChunkName: "plugins/rich-editor/js/webpack/chunks/mountEditor" */ "@rich-editor/mountEditor"); mountEditor.default(richEditor); } ``` I'd like to be able to use a function either for the path or the chunkName so we don't need to add bits like `plugins/rich-editor/js/webpack/chunks` to the chunkname comment. username_3: I'll add my use case where we want to protect some code behind authentication inside an SPA. So we need to move some lazy loaded chunk into a protected folder (based on modules it contains) and keep the rest public. Current workaround is using a plugin to edit chunk name like above to include full path and using `chunkFilename: [name].js`. username_4: Had anyone discovered a workaround for this issue? I've just bumped into the same problem. username_5: I would like to see this too. My use case is when the `chunkFilename` becomes very long, longer than 128 characters, I cannot upload the sourcemaps to our error reporting service. I'm working to lift that constraint, but something like this would be great. 
```js
// if slicing on name was supported
config.output.chunkFilename = 'assets/[name:10].[contenthash].chunk.js';

// or
config.output.chunkFilename = (chunkData) => {
  if (chunkData.chunk.name.length > 120) {
    return `assets/${truncate(chunkData.chunk.name)}.[contenthash].chunk.js`
  }

  return 'assets/[name].[contenthash].chunk.js';
};
```
username_6: @username_4 @username_5 https://webpack.js.org/plugins/source-map-dev-tool-plugin/ can be a great workaround
username_5: @username_6 I was missing `config.output.sourceMapFilename`, I can just update this.
```js
// attempt to get our sourcemap filenames under 128 characters
config.output.sourceMapFilename = 'assets/sourcemaps/[contenthash].js.map';
```
username_7: at the moment I use https://www.npmjs.com/package/chunk-rename-webpack-plugin as a workaround, but if `config.output.filename` can be a function, `config.output.chunkFilename` should be handled the same way. thx! greetings, username_7
username_8: also need a function chunkFilename to give a conditional chunkFilename
username_8: I used this plugin but got a `bootstrap:811 Uncaught ChunkLoadError: Loading chunk [mymodulename] failed.` error. I'm using webpack 4. Is that the problem? thx!
username_7: I'm using webpack 4 too, mhh...
username_9: @username_8, @username_7 For Webpack 4 I would recommend https://www.npmjs.com/package/enhanced-webpack-chunk-rename-plugin as a workaround.
Status: Issue closed
username_10: Let's close in favor of https://github.com/webpack/webpack/pull/11530; after merge we will open a new issue
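For reference, a sketch of the shape the thread is asking for, assuming the webpack 5 behaviour introduced by the linked PR (chunkFilename accepting the same `(pathData) => string` form as `output.filename`); the chunk name and the stable/hashed split mirror the JSP use case described earlier in the thread, not a real config:

```ts
// webpack.config.ts - sketch only, assuming webpack 5 after PR 11530.
import type { Configuration } from 'webpack';

const config: Configuration = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: (pathData) =>
      pathData.chunk?.name === 'webpack.common'
        ? '[name].js'                       // stable name the JSP can hard-code
        : '[name].[contenthash].chunk.js',  // hashed name for everything else
  },
};

export default config;
```

The returned string is still a template, so placeholders like `[name]` and `[contenthash]` are resolved per chunk as usual.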
michaelkourlas/voipms-sms-client
80525067
Title: [Feature Request] UI Enhancements Question: username_0: Thanks again Michael for the latest update with the GCM notifications. That was quite unexpected and much appreciated. I do, however, have a few minor suggestions from a UX perspective which aren't critical in any way, but which others have hopefully noticed as well.
1) Message scrolling when opening a chat. When a chat is opened, rather than immediately showing me the last message, it starts from the beginning and scrolls down to the most recent message. Normally not a big deal, but if there are quite a few messages in the history it takes several seconds for the scrolling to finish.
2) Screen repaint when beginning to type. Normally when viewing a chat, messages are displayed in fullscreen portrait mode with the last message at the bottom. However, when I click on the text field to begin typing a message, it would be nice to have the last message remain directly above the text field, but currently the screen does not adjust. You have to wait for the keyboard to appear and then scroll again to see the last message.
3) Delay when sending messages. Obviously the amount of time it takes to deliver a message depends on things beyond the app's control; I understand the app is merely waiting for an OK response from the API. However, from a UX perspective it might be better if, once the send button is pressed, the message were moved to the history and the entry field released so the user can continue typing. Meanwhile, a thread in the background does the normal delivery process and waits for the OK. As a user I personally don't need absolute confirmation that the message is delivered; I just assume it will be at some point and I'm okay with that. However, if users are concerned about that, then perhaps a "D" (delivered) flag on each message could be implemented to address it (see the sketch below).
Answers: username_1: Thanks for your suggestions! I'm happy to say that I've already fixed 1 and 2 in my test build. 3 will take a little more work, but it's next on my list.
username_1: Fortunately, 3 looks like it will also be possible. You should also see it in the next release. Status: Issue closed
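A minimal sketch of the optimistic-send flow described in point 3 follows. It is written in TypeScript purely for illustration (the app itself is an Android client, so this is not its real code), and the `Message` shape and `api.sendSms` call are hypothetical.
```ts
// Hypothetical illustration of the optimistic-send pattern from point 3;
// the Message shape and api.sendSms() are made up for this sketch.
interface Message {
  id: string;
  text: string;
  delivered: boolean; // the "D" flag suggested above
}

const history: Message[] = [];
let nextId = 0;

async function sendOptimistically(
  text: string,
  api: { sendSms(text: string): Promise<void> },
): Promise<void> {
  const msg: Message = { id: String(++nextId), text, delivered: false };
  history.push(msg); // show the message immediately and free the input field
  try {
    await api.sendSms(text); // wait for the API's OK in the background
    msg.delivered = true;    // flip the delivered flag once the OK arrives
  } catch {
    // mark the message as failed and offer a retry instead of blocking the compose box
  }
}
```
The key design point is that the UI thread never blocks on the network call; delivery status is reconciled after the fact.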
thephpleague/oauth2-client
68144180
Title: Move to guzzle 5 Question: username_0: When installing this client, I get a message that Guzzle 3 is deprecated. Would it be an idea to see whether an up-to-date version of Guzzle can be supported?
```
composer require league/oauth2-client
Using version 0.10.* for league/oauth2-client
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing guzzle/guzzle (v3.9.3)
    Downloading: 100%
  - Installing league/oauth2-client (0.10.1)
    Downloading: 100%
guzzle/guzzle suggests installing guzzlehttp/guzzle (Guzzle 5 has moved to a new package name. The package you have installed, Guzzle 3, is deprecated.)
```
Answers: username_1: We are moving to Guzzle 5 for the 1.0 release. The 0.x releases will remain on Guzzle 3. Thanks! Status: Issue closed
lh3/minimap2
254667632
Title: segmentation fault when using an index file Question: username_0: When using an index file as the target input, I get segmentation faults caused by main.c attempting to check that the k-mer and minimizer lengths in the index agree with those set on the command line. The seg fault occurs on the second pass through the loop, when the index is not loaded. I've sent a pull request with a simple fix, but it may be worth checking that this isn't causing any other issues, particularly if there are multiple query files. Answers: username_1: Thanks! PR #23 was merged. Status: Issue closed
Wynncraft/Issues
148848670
Title: /armor (/armour) Not working Question: username_0: So, I have armor skins toggled off, and I only recently learned about this command. I wanted to toggle the skins on, but the command doesn't work.
Version: 1.8.9
Mods: OptiFine
Answers: username_1: Armor skins were disabled server-wide some time after the release of Gavel; hopefully the texture-making team will finish making skins for all of the new armor soon.
username_0: Oh. Thanks! Status: Issue closed
swimlane/ngx-graph
422480034
Title: Bundle size reduction Question: username_0: **I'm submitting a ...** (check one with "x")
```
[x] feature request
```
**Current behavior**
Currently `@swimlane/ngx-graph` imports the full `@swimlane/ngx-charts/release` dependency, which causes an excessive increase in the final bundle size. Please check the picture below: the full @angular framework stat size is 1.31 MB, while the full import of @swimlane/ngx-charts/release adds 1.16 MB.
![bundle_size](https://user-images.githubusercontent.com/2125279/54570169-e039df80-49dd-11e9-9408-b59539b0c0e3.png)
**Expected behavior**
Import only the required modules/code. Please check issue https://github.com/swimlane/ngx-charts/issues/699, where a similar problem was solved in @swimlane/ngx-charts.
**Reproduction of the problem**
* **Install / use:** Install and use the ngx-graph module according to the documentation
* **Build:** Create a production build (in my case with the Angular CLI): `ng build --prod --statsJson=true`
* **Check stats:** Use (for example) the [webpack-bundle-analyzer](https://www.npmjs.com/package/webpack-bundle-analyzer) to see the production package content based on the generated `stats.json`
**What is the motivation / use case for changing the behavior?**
Reduce the final bundle size to improve application load performance.
**Please tell us about your environment:**
* **ngx-graph version:** `@swimlane/ngx-graph 5.5.0`
* **Angular version:** `@angular 7.2.7` `@angular/cli 7.3.4`
* **Browser:** All browsers are affected, as the feature has an impact on the bundle size. It is not a runtime problem but rather a packaging issue (tree shaking?).
* **Language:** `typescript 3.2.4`
Answers: username_1: We have deprecated the dependency on ngx-charts in version 6, and will completely remove it in version 7. Status: Issue closed
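For readers reproducing the analysis above, a small consumer-side sketch is below. The module names (`NgxGraphModule`, `NgxChartsModule`) reflect the libraries' usual exports but are not quoted from this issue, so treat them as assumptions; the point is only that a deep import of a rolled-up `release` file cannot be tree-shaken, while package-root ES module imports can (provided the package ships ES module builds).
```ts
// app.module.ts: sketch only; module names are assumptions, not quoted from the issue.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { NgxGraphModule } from '@swimlane/ngx-graph';

// A deep import of a pre-rolled bundle (what ngx-graph itself did internally at the
// time of this issue) drags the entire chart library into the output, because the
// bundler cannot tree-shake a single concatenated file:
// import { NgxChartsModule } from '@swimlane/ngx-charts/release';

@NgModule({
  imports: [BrowserModule, NgxGraphModule],
})
export class AppModule {}
```
Re-running `ng build --prod --statsJson=true` and the bundle analyzer after upgrading past the release noted in the answer should show the ngx-charts payload gone from the output.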