egeniq/android-tv-program-guide
780843925
Title: Not able to call setState(State.Content); Question: username_0: Hi again, I'm currently using the library from an Android TV project written in Java. So far I'm able to show the ProgramGuideFragment inside my Android TV project, and I'm now populating the channels and schedule. I believe I have to call setState(State.Content) in order to have the ProgramGuideFragment show the data (before this I'm also calling setData)? But it seems as if I'm not able to call setState(State.Content) or setState(State.Loading) from my Fragment which inherits from the ProgramGuideFragment. `error: cannot find symbol setState(State.Content);` Perhaps I'm doing something wrong here? :) Kind regards /Mattias Answers: username_1: Hi! Since sealed classes don't have a perfect match in Java, they translate to a class which has static instances for the objects, and static classes with constructors for the data classes. So this is how you can use `setState()` when calling it from Java:
```java
// Set content state
setState(State.Content.INSTANCE);
// Set error state
setState(new State.Error("Error message"));
// Set loading state
setState(State.Loading.INSTANCE);
```
username_0: Hi, once again, many thanks for the quick response and help. I very much appreciate it. Kind regards /Mattias Status: Issue closed
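As background for the Java call sites above, a Kotlin sealed class with roughly this shape would produce exactly that Java API. This is an illustrative sketch, not the library's actual source; the member names are assumptions:
```kotlin
// Kotlin `object` members compile to singletons exposed to Java as
// State.Loading.INSTANCE / State.Content.INSTANCE; the class member keeps
// a normal constructor, hence `new State.Error(...)` from Java.
sealed class State {
    object Loading : State()
    object Content : State()
    class Error(val errorMessage: String) : State()
}
```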
XiaoFaye/WooCommerce.NET
541008139
Title: https request exception Question: username_0: The API URL is "https", and I set authorizedHeader to false, but the response is an error: {"code":"woocommerce_rest_cannot_view","message":"Sorry, you cannot list resources.","data":{"status":401}}. Is there any way to solve it?<issue_closed> Status: Issue closed
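The thread was closed without a posted fix, so purely as a hedged starting point: with WooCommerce.NET, the usual first step for this 401 is to flip how the credentials are sent (Authorization header vs. query string), since some hosts strip the Authorization header. The constructor below follows the project's README, but treat the authorizedHeader semantics as an assumption to verify; the URL and keys are placeholders:
```csharp
using System;
using System.Threading.Tasks;
using WooCommerceNET;
using WooCommerceNET.WooCommerce.v3;

class Program
{
    static async Task Main()
    {
        // Assumption: authorizedHeader: true sends the consumer key/secret in
        // the HTTP Authorization header; false sends them as query parameters.
        // If one mode returns woocommerce_rest_cannot_view (401), try the other.
        var rest = new RestAPI("https://example.com/wp-json/wc/v3/",
                               "ck_your_key", "cs_your_secret",
                               authorizedHeader: true);
        var wc = new WCObject(rest);
        var products = await wc.Product.GetAll();
        Console.WriteLine($"Fetched {products.Count} products");
    }
}
```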
quasarframework/quasar
526283598
Title: Odd QImg corners with width == height == 2 * border-radius Question: username_0: **Describe the bug** When a q-img element has equal width and height and the border-radius set to half of these (i.e. to make a circle), in Chrome 78.0.3904.97 the bottom left corner of the image shows briefly, and in Safari 13.0.1 all four corners show briefly before the image is displayed as a circle. This is on a Mac, in case it makes any difference. (It works as expected in Firefox 70.0.1) **Codepen/jsFiddle/Codesandbox (required)** https://codepen.io/username_0/pen/yLLwXvd **To Reproduce** Steps to reproduce the behavior: 1. Go to https://codepen.io/username_0/pen/yLLwXvd in Chrome or Safari. 2. Click on the IMAGE tab and observe the weird behaviour in the corner(s). 3. Click on the NO IMAGE tab. 4. Click on the IMAGE tab and observe the weird behaviour in the corner(s). 5. Click on the CHANGE IMAGE button and observe the weird behaviour in the corner(s): in Chrome, the bottom left corner appears twice, once briefly before the image is changed, and again after the image has changed; in Safari all 4 corners appear while the image is being changed. 6. Click on the NO IMAGE tab. 7. Go to step 2. **Expected behavior** The image in the q-img element should have the border radius applied without any weird effects in the corners. **Screenshots / videos** Strangely, when I recorded this behaviour using the Nimbus extension for Chrome, it was the top left corner that was recorded as acting weirdly. (In reality it looked even more glitchy when I was using Nimbus; what was recorded was not quite as glitchy but still bad.) Switching tabs: https://youtu.be/rv37HumYfnE Changing the image: https://youtu.be/0cclbo5XP40 **Platform (please complete the following information):** OS: Mac Node: NPM: Yarn: Browsers: Chrome, Safari iOS: Android: Electron: Status: Issue closed Answers: username_1: Fix will be available in "quasar" v1.7.0
tinderjs/tinderjs
191636445
Title: "Failed to authenticate: Access Denied" Question: username_0: I am trying to setup https://github.com/PBartrina/loltinder/tree/develop but getting the following error. I saw that one issue of the same type is already open but I was still curious to know if there is any workaround of this. ``` **C:\Users\<NAME>\Downloads\loltinder\node_modules\tinderbot\node_modules \tinderjs\tinder.js:162 throw "Failed to authenticate: " + body.error ^ Failed to authenticate: Access Denied C:\Users\<NAME>\Downloads\loltinder>** ``` I am a novice coder and working on this since yesterday but to no avail. Any help would be appreciated. Answers: username_1: Hm, I haven't worked with the `tinderbot` codebase before Do you want to write up an example using this library and I can help you if you get stuck on something?
bburky/playnite-non-steam-shortcuts
774778005
Title: Suggestion: Add an option to choose cover art for exported games Question: username_0: I wonder if it's possible to somehow implement some sort of GUI (maybe something like what Steam ROM Manager has) that allows users to choose different covers for added games (Portrait, Background, Logo, Horizontal images). Maybe use SteamGridDB as a source for covers? Answers: username_1: You can check out my fork of it. You can't choose which cover to export; it will take the one you have in Playnite, but it at least exports the cover and background to Steam. Something is better than nothing, I guess. It's a bit hacky and it will fail if the cover or background image is missing from a Playnite entry, but if you take care of that it works fine
hotosm/tasking-manager
225249343
Title: All tab fails Question: username_0: If you have used the filters to get only beginner, advanced, etc. and then try a search on All, you are returned no projects Status: Issue closed Answers: username_0: It also appears as though no filters work if the All tab is selected username_0: If you have used the filters to get only beginner, advanced, etc. and then try a search on All, you are returned no projects username_1: Hi there, Thank you for reporting this. This hasn't been set up yet; we will be working on this in the current sprint. Thank you, Zlata username_2: @username_0 the 'all' filter should now work username_0: Thanks Linda - I will check it out and get back to you with any issues username_0: all good now Status: Issue closed
firstcontributions/first-contributions
539845095
Title: Enhance Arabic translation for ReadMe.md Question: username_0: I think the Arabic translation needs some improvements. 🐞 **Problem** Some words look like they were translated by Google Translate. 🎯 **Goal** Change some words to be more understandable for Arabic speakers. Answers: username_1: I am contributing!! username_2: Hi @username_0, I am Egyptian and speak Arabic, and I will contribute to editing the ReadMe.md. I just want to know if @username_1 is working on it or not? username_1: No.. I am not working on it!!! username_3: Hi @username_0, thank you for opening this issue. Could you please elaborate on which words need to be improved? @username_2, We have an Arabic translation and a different Egyptian translation https://github.com/firstcontributions/first-contributions/blob/master/translations/README.ar.md https://github.com/firstcontributions/first-contributions/blob/master/translations/README.eg.md They're slightly different from each other. I think @username_0 would be referring to `Readme.ar.md`. You can find the first contributors to the language / reviewers in [CONTRIBUTING.md](https://github.com/firstcontributions/first-contributions/blob/master/.github/CONTRIBUTING.md). We could bring them into the conversation and get their input as well. username_2: Hi @username_3, Yes I'd like to contribute, could you guide me on what to do?
0xProject/0x-launch-kit-frontend
451512836
Title: Estimate wait times are too long Answers: username_1: UPDATE: although we merged a little improvement PR, the feature is working as expected: - On mainnet we get the estimated transaction time. - Other networks show the time we retrieve for mainnet (:grimacing:). - If any error happens, we show the default 120s. We asked internally how this error came up but didn't get much more info. It probably makes sense to close this issue or change its priority. Status: Issue closed
brbeaird/SmartThings_MyQ
1112713838
Title: Controlling lights with Z-wave relay Question: username_0: I have outside lights being controlled by an ENERWAVE Z-Wave Plus relay. This relay is in parallel with a single-pole toggle switch. It is also controlled by a SmartThings app via a routine in the Automation section of the app. The lights are set up to come on at sunset and off at 8 PM. This schedule is used to allow people safe passage after dark. Sometimes a meeting will end before 8 PM and someone, being energy conscious, shuts off the lights. They do this by turning the toggle switch ON, which sends power to the relay, and the lights shut off. The problem is that when the program reaches 8 PM it sends a signal to the relay, and the lights that are off will now turn on. I'm wondering if there is a way to solve this issue. Can an IF condition be set up to sense whether the lights are actually on or off and make sure the lights are in the proper state? I know it would be nice if I could just eliminate the switch, but the switch needs to be in place in case someone schedules a meeting during a time when the outside lights are not programmed; they can then turn the lights on and off via the toggle switch. Any solution would be appreciated
ncbi/ngs
549840948
Title: Threading model used for ReadIterator? Question: username_0: I am currently using the ngs::ReadIterator to extract sequence data from SRA files. I notice that when I am reading sequence data from an SRA file, there is an extra thread that appears to be part of the SRA toolkit (i.e. under Linux, top reports 200% CPU activity for my application). Where is this extra thread coming from, and is there a way to control the number of threads (e.g. environment variable, API call, etc.)? Thanks! Answers: username_1: If you are using NGS, there is no SRA Toolkit involved. NGS is an API built upon VDB (sorry for all the acronyms), and VDB does create additional threads when accessing data. We have no configuration for enabling or disabling this. Out of curiosity, what would be your use case for disabling it? username_0: I am actually trying to enable it under MPI (which I am using to parallelize my analysis across multiple servers). When I run my program with mpirun with a single rank (i.e. "mpirun -np 1 ..."), the NGS API does **not** spawn an extra thread (i.e. CPU usage is 100%). If I run my program **without** mpirun, then the extra thread is spawned. I am already using a hybrid programming model that combines MPI and OpenMP without any issue (I have disabled processor affinity for this test). Finally, I have lots of data to read, and lots of cores per node, so I was wondering if I could further improve performance by getting NGS to utilize even more threads ... username_1: I see - I should have realized you wanted more threads rather than fewer. It is possible to open multiple read iterators on different runs using different threads and run them in parallel. Alternatively, use more processes. In any event, we would be interested in hearing more about your experiments in this area! username_0: I have tried to accelerate the reading of SRA files by using OpenMP to read non-overlapping slices of an SRA file in different threads. If I run with N OpenMP threads, I observe CPU activity in top that indicates N+1 cores are being used (I'm only using a subset of the available CPU cores on this machine). In addition, one of the OpenMP threads finishes much faster than the rest, which suggests that the NGS API is providing an extra thread to just one of the OpenMP threads (which leads to a significant load imbalance). Can you point me to the code that controls how and when the "helper" thread is spawned when iterating through a ReadCollection? username_1: It may not be quite so simple _(it never is)_. Are you opening several iterators? Or several ReadCollections? Internally, a `VCursor` is used which opens a background thread to help with blob decompression. Additionally _(and quite dependent upon the software version)_, there is a thread that assists with background reading which is probably the most important piece. The SRA has moved from a POSIX file system to a cloud-like object-store, and this has caused some performance issues. `VDB` _(the underlying database system)_ is in the process of getting upgrades to try to reduce the impact there. Retrieving in parallel might not be the answer. One of the things you can do to improve performance is to retrieve data series independently. For example, do *not* try to get both bases and qualities as a single row _(at least not with current release software)_ because this can cause random access within the SRA object - and this is the problem we have with an object-store, because it's really, really bad about random access.
`fasterq-dump` is faster exactly because it retrieves bases and qualities and names separately and then joins them into row-wise fastq on output. username_0: I have tried (a) one global read collection and then spawn threads that each open one read iterator and (b) spawn threads that each open one read collection and one read iterator. The result is the same in both cases (i.e. N+1 threads are spawned). In my use case, I **only** need the sequence data (and am currently **not** reading the qualities or read names). Here is an example of how my code is reading sequences:
```
#pragma omp parallel num_threads
{
const size_t num_thead = omp_get_num_threads();
const size_t tid = omp_get_thread_num();

ngs::ReadCollection run( ncbi::NGS::openReadCollection("input_file.sra") );

const size_t num_read = run.getReadCount ();

const size_t chunk = max( size_t(1), num_read/num_thead );

size_t start = 1 + chunk*tid;
size_t stop = start + chunk;

if( tid == (num_thead - 1) ){
stop = num_read;
}

ngs::ReadIterator run_iter( run.getReadRange ( start, stop, ngs::Read::all ) );

while( run_iter.nextRead() ){
if( run_iter.nextFragment() ){
const string seq = run_iter.getFragmentBases().toString();
/*process sequence here*/
}
}
}
```
username_0: For the sake of correctness, the following code fixes the off-by-one error in the start/stop calculation and the incomplete reading of both reads in a pair:
```
#pragma omp parallel num_threads(NUM_SRA_THREADS)
{
	const size_t num_thread = omp_get_num_threads();
	const size_t tid = omp_get_thread_num();

	// Read from a local file downloaded using prefetch
	ngs::ReadCollection run( ncbi::NGS::openReadCollection("filename.sra") );

	const size_t num_read = run.getReadCount(ngs::Read::all);

	const size_t chunk = max( size_t(1), num_read/num_thread );

	// Each thread is assigned a non-overlapping slice of the SRA
	// file to read
	const size_t start = 1 + chunk*tid;

	size_t stop = start + chunk - 1;

	if( tid == (num_thread - 1) ){
		stop = num_read;
	}

	ngs::ReadIterator run_iter( run.getReadRange ( start, stop, ngs::Read::all ) );

	while( run_iter.nextRead() ){
		while( run_iter.nextFragment() ){
			const string seq = run_iter.getFragmentBases().toString();

			/* Do stuff with the sequence */
		}
	}
}
```
username_0: Hopefully one last bug fix to the code I posted above -- I did not realize that getReadRange takes the starting read as the first argument and the **number of reads** as the second argument (as opposed to the index of the last read to iterate over).
```
#pragma omp parallel num_threads(NUM_SRA_THREADS)
{
	const size_t num_thread = omp_get_num_threads();
	const size_t tid = omp_get_thread_num();

	// Read from a local file downloaded using prefetch
	ngs::ReadCollection run( ncbi::NGS::openReadCollection("filename.sra") );

	const size_t num_read = run.getReadCount(ngs::Read::all);

	size_t chunk = num_read/num_thread;

	// Each thread is assigned a non-overlapping slice of the SRA
	// file to read
	const size_t start = 1 + chunk*tid;

	if( tid == (num_thread - 1) ){
		chunk += num_read%num_thread;
	}

	ngs::ReadIterator run_iter( run.getReadRange ( start, chunk, ngs::Read::all ) );

	while( run_iter.nextRead() ){
		while( run_iter.nextFragment() ){
			const string seq = run_iter.getFragmentBases().toString();

			/* Do stuff with the sequence */
		}
	}
}
```
username_0: While the code I posted above works well for most SRA files, it crashes when trying to read SRA runs that contain aligned reads (i.e. have one or more associated files that start with ReSeq accessions and a file that has the ".vdbcache" file extension).
The crash appears to happen after the last OpenMP thread has finished reading (in the destructor for the iterator or the ReadCollection). Any insights into where to look in the API code would be appreciated!
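Since the VDB helper threads are not user-controllable, the maintainer's "use more processes" suggestion is the remaining lever. Purely as an illustration of that route, here is a rough process-parallel version of the slicing pattern above using the ngs Python bindings. The method names (openReadCollection, getReadRange with (first, count) arguments) mirror the published ngs-python examples, but verify them against your installed version; the filename is a placeholder:
```python
from multiprocessing import Pool

from ngs import NGS        # NCBI ngs-python bindings
from ngs.Read import Read


def scan_slice(args):
    """Open a private ReadCollection in this process and scan one slice."""
    acc, first, count = args
    bases = 0
    with NGS.openReadCollection(acc) as run:
        # As noted above, getReadRange takes (first, count); IDs are 1-based.
        with run.getReadRange(first, count, Read.all) as read_iter:
            while read_iter.nextRead():
                while read_iter.nextFragment():
                    bases += len(read_iter.getFragmentBases())  # process here
    return bases


if __name__ == "__main__":
    acc = "filename.sra"  # placeholder: local file fetched with prefetch
    workers = 4
    with NGS.openReadCollection(acc) as run:
        num_read = run.getReadCount(Read.all)
    chunk = num_read // workers
    slices = [(acc, 1 + i * chunk,
               chunk + (num_read % workers if i == workers - 1 else 0))
              for i in range(workers)]
    with Pool(workers) as pool:
        print("total bases:", sum(pool.map(scan_slice, slices)))
```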
influxdata/telegraf
1140552542
Title: OPC-UA input plugin collection interval changes after bad quality Question: username_0: ### Relevant telegraf.conf ```toml [agent] interval = "1s" debug = false round_interval = true flush_interval = "1s" flush_jitter = "0s" collection_jitter = "0s" metric_batch_size = 1000 metric_buffer_limit = 100000 quiet = true [[inputs.opcua]] endpoint = "opc.tcp://127.0.0.1:12345" connect_timeout = "60s" security_policy = "auto" security_mode = "auto" #certificate = "C:/Source/telegraf/telegraf-selfsigned.crt" #private_key = "C:/Source/telegraf/telegraf-selfsigned.key" #auth_method = "Certificate" ## Entire configuration includes 42 opcua.groups with 21 nodes each [[inputs.opcua.group]] name="anodic_current" namespace="2" identifier_type="s" tags=[["device","1001"]] nodes = [ {name="current", identifier="ABCD_1234_RA.ID.I01", data_type="float",tags=[["anode","01"]],description="Current Anode 01 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I02", data_type="float",tags=[["anode","02"]],description="Current Anode 02 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I03", data_type="float",tags=[["anode","03"]],description="Current Anode 03 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I04", data_type="float",tags=[["anode","04"]],description="Current Anode 04 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I05", data_type="float",tags=[["anode","05"]],description="Current Anode 05 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I06", data_type="float",tags=[["anode","06"]],description="Current Anode 06 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I07", data_type="float",tags=[["anode","07"]],description="Current Anode 07 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I08", data_type="float",tags=[["anode","08"]],description="Current Anode 08 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I09", data_type="float",tags=[["anode","09"]],description="Current Anode 09 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I10", data_type="float",tags=[["anode","10"]],description="Current Anode 10 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I11", data_type="float",tags=[["anode","11"]],description="Current Anode 11 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I12", data_type="float",tags=[["anode","12"]],description="Current Anode 12 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I13", data_type="float",tags=[["anode","13"]],description="Current Anode 13 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I14", data_type="float",tags=[["anode","14"]],description="Current Anode 14 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I15", data_type="float",tags=[["anode","15"]],description="Current Anode 15 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I16", data_type="float",tags=[["anode","16"]],description="Current Anode 16 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I17", data_type="float",tags=[["anode","17"]],description="Current Anode 17 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I18", data_type="float",tags=[["anode","18"]],description="Current Anode 18 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I19", data_type="float",tags=[["anode","19"]],description="Current Anode 19 device 1001"}, {name="current", identifier="ABCD_1234_RA.ID.I20", data_type="float",tags=[["anode","20"]],description="Current Anode 20 device 1001"}, {name="total_current", 
identifier="ABCD_1234_RA.ID.I_Total", data_type="float",description="Current Total device 1001"}, ] [Truncated] 1. Run Telegraf connecting to 40+ group, 20+ nodes 2. Get `status not OK for node current: Bad (0x80000000)` error 3. Data only collects in 3s intervals ... ### Expected behavior Running with an agent interval of 1s. Everything works fine until one or more nodes comes back with a `BAD Quality (0x80000000)` value. Once a BAD Quality node is observed, the acquisition interval changes to 3s. ### Actual behavior Acquisition should remain at 1s as specified in agent interval ### Additional info _No response_ Answers: username_0: To do: Need to determine what does the `Bad (0x80000000)` mean that's causing this cascade of issues Other Notes: 1. The warning messages about "Collection took longer than expected; not complete after interval of 1s" are the cause of the collection changing from every second to every three seconds. The interval of one second does not complete in time, because the input is still waiting on the output possibly due to the bad value, as a result the collection interval is skipped until things clear up. This results in intervals of three seconds that the customer is seeing. 2. The "status not OK for node current: Bad (0x80000000)" error is from the OPC UA plugin attempting to read from their device and getting 'The value is bad but the reason is unknown'. username_1: @username_0 this is outside of my area of contribution, sorry. I have no experience with opcua. 🙇 username_2: What setting do you use for `request_timeout`? username_3: I do not believe the user has it set. Is there a suggested value? I have wondered about the interval being 1 second as well if there is any delay and if that is causing issues. username_2: The default value is `5s` but I'd say it always should be less than the collection interval. No clue if this is actually part of the cause for the problem. @username_0 could you retry while setting `request_timeout = "500ms"`? username_3: This is what started showing up in the logs: ``` 2022-02-25T14:21:50Z W! [inputs.opcua] Collection took longer than expected; not complete after interval of 1s 2022-02-25T14:21:50Z D! [inputs.opcua] Previous collection has not completed; scheduled collection skipped 2022-02-25T14:21:50Z D! [outputs.influxdb] Buffer fullness: 0 / 100000 metrics 2022-02-25T14:21:50Z E! [inputs.opcua] Error in plugin: get Data Failed: RegisterNodes Read failed: The operation timed out. StatusBadTimeout (0x800A0000) 2022-02-25T14:21:51Z D! [outputs.influxdb] Buffer fullness: 0 / 100000 metrics 2022-02-25T14:21:52Z D! [inputs.opcua] Previous collection has not completed; scheduled collection skipped 2022-02-25T14:21:52Z W! [inputs.opcua] Collection took longer than expected; not complete after interval of 1s 2022-02-25T14:21:52Z D! [outputs.influxdb] Buffer fullness: 0 / 100000 metrics 2022-02-25T14:21:52Z E! [inputs.opcua] Error in plugin: get Data Failed: RegisterNodes Read failed: The operation timed out. StatusBadTimeout (0x800A0000) 2022-02-25T14:21:53Z D! [outputs.influxdb] Buffer fullness: 0 / 100000 metrics 2022-02-25T14:21:54Z W! [inputs.opcua] Collection took longer than expected; not complete after interval of 1s 2022-02-25T14:21:54Z D! [inputs.opcua] Previous collection has not completed; scheduled collection skipped 2022-02-25T14:21:54Z D! [outputs.influxdb] Buffer fullness: 0 / 100000 metrics 2022-02-25T14:21:54Z E! [inputs.opcua] Error in plugin: get Data Failed: RegisterNodes Read failed: The operation timed out. 
StatusBadTimeout (0x800A0000) ``` The user also said that anything with a timeout was under 4 seconds and Telegraf didn't appear to get the data. Does this mean the device may not be able to be queried every second? username_2: What I understand from the comments on the Python bug mentioned by username_0 is that the Kepware server does not return for 3-4 seconds when a bad quality node is part of the registered read request. Currently we do not support subscriptions or events, which is the solution mentioned (https://github.com/influxdata/telegraf/issues/8083). For now, a hacky work-around could be to split the node groups between several opcua input plugins. Only the plugin with the bad read request would drop a few values, while the others can continue at the once-per-second rate. username_3: I agree, without an event-based plugin, this does seem like the right way forward for now. username_2: I indeed think the sensor itself is assigned a bad quality. If the server takes this long to reply, there is not much we can do as a client. A Wireshark capture or similar could provide definitive evidence that the server is causing the delay.
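To make that work-around concrete, the split would look roughly like this in telegraf.conf: two independent [[inputs.opcua]] instances, so that a node going Bad can only stall its own read request. The group and node values below are shortened, illustrative versions of the configuration above:
```toml
# Healthy nodes keep the 1s cadence in their own plugin instance.
[[inputs.opcua]]
  endpoint = "opc.tcp://127.0.0.1:12345"
  request_timeout = "500ms"
  [[inputs.opcua.group]]
    name = "anodic_current"
    namespace = "2"
    identifier_type = "s"
    nodes = [
      {name="current", identifier="ABCD_1234_RA.ID.I01", data_type="float"},
    ]

# Nodes that sometimes report Bad quality are isolated here; a slow read
# only delays this instance's metrics.
[[inputs.opcua]]
  endpoint = "opc.tcp://127.0.0.1:12345"
  request_timeout = "500ms"
  [[inputs.opcua.group]]
    name = "anodic_current_flaky"
    namespace = "2"
    identifier_type = "s"
    nodes = [
      {name="current", identifier="ABCD_1234_RA.ID.I13", data_type="float"},
    ]
```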
CodeForBaltimore/Bmore-Responsive
593474010
Title: Add attribute columns to all types Question: username_0: ### User Story As a user I'd like to have an attributes column on all types So that I can track various data points not covered by other columns dynamically ### Acceptance Criteria - [ ] Each applicable type (user, contact, entity) has an attributes column of type JSON - [ ] PR merged to `master`<issue_closed> Status: Issue closed
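If an implementer picks this up: assuming the service manages its schema with Sequelize migrations (an assumption about this codebase; the table names below are illustrative), the acceptance criteria could be satisfied with a migration along these lines:
```javascript
'use strict';

// Hypothetical migration: add a JSON attributes column to each type.
// Table names are assumptions -- match them to the actual models.
const TABLES = ['Users', 'Contacts', 'Entities'];

module.exports = {
  up: async (queryInterface, Sequelize) => {
    for (const table of TABLES) {
      await queryInterface.addColumn(table, 'attributes', {
        type: Sequelize.JSONB, // plain Sequelize.JSON outside Postgres
        allowNull: true,
      });
    }
  },
  down: async (queryInterface) => {
    for (const table of TABLES) {
      await queryInterface.removeColumn(table, 'attributes');
    }
  },
};
```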
brunch/brunch
217072499
Title: Improved static assets pipeline Question: username_0: Currently, we don't have any means to `lint` and `optimize` assets. Use cases: - Lint HTML (syntax, valid attributes, etc.). - Optimize JSON, SVG, raster images. Currently, we only have the `compileStatic` method for assets, but it is getting misused for optimization (see https://github.com/brunch/brunch/issues/1688). However, `compileStatic` is great for templates: we can have a single plugin for both JS and HTML templates. Use cases for asset compilation: - Jade/Handlebars/Nunjucks to HTML (we have it). - YAML to JSON. Solution: with `plugin.type == 'asset'`, all listed methods work on assets. `optimize` happens per-file.<issue_closed> Status: Issue closed
wix/react-native-ui-lib
297144732
Title: Proposal of channel discussion Question: username_0: Hi, I took the liberty of creating a public Discord chat for React Native, with the intention of uniting the React Native community and improving communication. I'm creating a channel for public discussions about your project, to avoid flooding the "chat" on your GitHub. If you can support the initiative, great! Just share the link below xD I'm adding RN-only projects to this public chat service, and I commit to managing the chat rooms. You would just need to watch the channel related to your project. Channel #wix-react-native here's the link https://discord.gg/RvFM97v Cheers. Answers: username_1: Hey @username_0, Thanks for the initiative. We'll try to keep an eye on the channel (: Status: Issue closed
JuliaDebug/Debugger.jl
701143041
Title: Debugger does not break on @bp when using threads Question: username_0: First of all, thanks for this great tool! Unfortunately, breakpoints don't appear to function correctly when using `Threads.@threads`. In particular, breakpoints are never hit, even when explicitly placed in the code. `bp on error` seems to appropriately break if there is an error, but the stack trace just shows a call to `ccall(:jl_threading_run, Cvoid, (Any,), threadsfor_fun)`, not the stack for the actual thread. I am happy to break this into two separate issues if they should be tracked separately. Here is a MWE:
```julia
module ModuleFoo
import Debugger

function foo()
    println("Running with $(Threads.nthreads()) threads.")
    Threads.@threads for i = 1:Threads.nthreads()
        foo_with_bp()
    end
end

function foo_with_bp()
    Debugger.@bp
    error("I should never get here")
end

end

using Debugger
@run ModuleFoo.foo()
```
Note: This fails to break on `@bp` even when you aren't explicitly using more than one thread (i.e., it does not break on `@bp` even when `JULIA_NUM_THREADS=1`). Version Info: julia version 1.4.2 [31a5f54b] Debugger v0.6.6 Answers: username_1: I think this is expected for now. The threading system goes into the Julia runtime where the interpreter loses track of it. username_2: See also https://github.com/JuliaDebug/JuliaInterpreter.jl/issues/413. username_1: Closing in favor of the upstream issue. Status: Issue closed
IonDen/ion.rangeSlider
166588149
Title: Display the number after the decimal place irrespective of its value Question: username_0: Hi, I am trying to get the slider to show decimal values irrespective of the digit after the decimal point. I am able to do so when that digit is non-zero, e.g. 3.6/4.2 etc. But if it is equal to 0, e.g. 3.0/5.0 etc., then it just shows the whole number. Can we force the slider to display the number after the decimal place irrespective of its value? Please find the JSFiddle link below [http://jsfiddle.net/aalok/726L0aj5/7/](http://jsfiddle.net/aalok/726L0aj5/7/) Answers: username_1: Hi, you should use the prettify function for that. Example: http://jsfiddle.net/Lh6aoaep/ username_0: Hi, This is exactly what I was looking for. Not sure why I hadn't thought about it. Thanks a ton! Status: Issue closed
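For readers who can't open the fiddles, the prettify approach boils down to formatting every displayed value with a fixed number of decimals. A minimal sketch; the selector and range values are illustrative, not taken from the linked fiddle:
```javascript
// Always render one digit after the decimal point,
// so 3 displays as "3.0" rather than "3".
$(".js-range-slider").ionRangeSlider({
    min: 0,
    max: 10,
    step: 0.1,
    prettify: function (num) {
        return num.toFixed(1);
    }
});
```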
kubernetes/kubernetes
193933769
Title: "Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes" in Question: username_0: Test has been failing [33% of the time](https://k8s-testgrid.appspot.com/google-1.3-1.5-upgrade#gke-container_vm-1.3-container_vm-1.5-upgrade-cluster&width=20&sort-by-failures=&show-stale-tests=&include-filter-by-regex=should%20be%20able%20to%20delete%20nodes) in [ci-kubernetes-e2e-gke-container_vm-1.3-container_vm-1.5-upgrade-cluster](https://k8s-gubernator.appspot.com/builds/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-container_vm-1.5-upgrade-cluster/). Sample failure: [link](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-container_vm-1.3-container_vm-1.5-upgrade-cluster/3 ) Based on [spreadsheet](https://docs.google.com/spreadsheets/d/1sAZqyWE--0fvN1PIuKTw9JwmcXNz6tQIm1MrjdILtm4/edit#gid=606512630&vpid=A1) tracking 1.5 upgrade test failures created by @username_5. Answers: username_1: @username_2 Can you help triaging this one? username_2: Working on it now. username_2: Here are the steps taken during the test, with the error that triggered the test failure at the end. I omitted ``` STEP: creating replication controller my-hostname-delete-node STEP: ensuring each pod is running STEP: trying to dial each unique pod STEP: decreasing cluster size to 2 STEP: verifying whether the pods from the removed node are recreated STEP: ensuring each pod is running STEP: trying to dial each unique pod INFO: Controller my-hostname-delete-node: Got expected result from replica 1 [my-hostname-delete-node-kcgpn]: "my-hostname-delete-node-kcgpn", 1 of 3 required successes so far INFO: Controller my-hostname-delete-node: Got expected result from replica 2 [my-hostname-delete-node-krx3h]: "my-hostname-delete-node-krx3h", 2 of 3 required successes so far INFO: Controller my-hostname-delete-node: Failed to GET from replica 3 [my-hostname-delete-node-rk4fh]: an error on the server has prevented the request from succeeding (get pods my-hostname-delete-node-rk4fh) ``` Final Error: `failed to wait for pods responding: pod with UID 7553b69a-b98b-11e6-ab97-42010af00042 is no longer a member of the replica set. Must have been restarted for some reason.` username_2: The test has changed from this run to head: with the difference being (added by #35235, and updated by #36199): ``` By("waiting 1 minute for the watch in the podGC to catch up, remove any pods scheduled on " + "the now non-existent node and the RC to recreate it") time.Sleep(time.Minute) ``` username_3: This shouldn't be a release blocker. The behavior has changed in 1.5, and the Node Controller no longer deletes pods. It's the PodGC's responsibility to delete them. So, without the extra wait added to let the PodGC cleanup the pods, the test is expected to fail. username_3: @username_0, backporting the test fix that was added in https://github.com/kubernetes/kubernetes/pull/36199 mentioned above, to 1.3 would fix it. username_2: We can close this because of #38324. username_4: @username_0 @username_3 Is it appropriate to move this to the next milestone? (and remove the non-release-blocker tag as well) username_5: @username_4 I think it's appropriate to just close it once a similar change to #38324 is put into 1.3 as well. username_2: /close
geneontology/go-annotation
270946354
Title: All F-P annotations missing from inferred GAF Question: username_0: Contains CC annotations only Answers: username_1: possible false alarm - may refer to pombase-prediction.gaf rather than gene_association.pombase.inf.gaf username_0: I'm sooo confused......;) username_0: Are these file changes documented/announced anywhere? Did I miss it? username_0: Hello, can somebody help us to locate the inferred gaf? We asked last week, and followed @username_3's instructions, but this wasn't the file we were looking for. Now it seems that we are all thoroughly confused ;) More here: https://github.com/pombase/pombase-chado/issues/639#issuecomment-342778920 username_0: Start at the end of pombase/pombase-chado#639 (comment) username_2: We get the TAIR inf file (gene_association.tair.inf.gaf). This is the file our automatic loading scripts are built to ingest. http://build.berkeleybop.org/view/GAF/job/gaf-check-tair/lastBuild/artifact/ I didn't know the other file (like %-prediction.gaf) existed. http://build.berkeleybop.org/job/go-gaf-release-snapshot/lastSuccessfulBuild/artifact/pipeline/target/groups/tair/ When did these appear? username_0: We decided in the meantime to go back to the Jenkins location, not the one we were directed to recently. I reopened the tickets where I thought the redundancy issues were fixed because I was looking at a different file, it seems. Note that the issue *still* exists (reported independently by me and Midori), that some expected annotations are missing. username_0: See also https://github.com/geneontology/go-annotation/issues/1336 username_2: @username_0 @username_1 I checked our inferred file (the Jenkins linked one, gene_association.tair.inf.gaf) and F-P inferred annotations are still present there as expected. I see process and component annotations. FYI. username_1: Yes, the gene_association.pombase.inf.gaf at GO Jenkins has BP annotations inferred from MF-BP links (and CC annotations from whatever??). But it doesn't have _as many_ inferred BP annotations as we expect, per #1336. Are any missing from yours? username_2: id: GO:0005484
name: SNAP receptor activity
namespace: molecular_function
alt_id: GO:0005485
alt_id: GO:0005486
def: "Acting as a marker to identify a membrane and interacting selectively with one or more SNAREs on another membrane to mediate membrane fusion." [GOC:mah, PMID:14570579]
subset: goslim_chembl
synonym: "Q-SNARE activity" NARROW []
synonym: "R-SNARE activity" NARROW []
synonym: "SNAP-25" NARROW []
synonym: "SNARE" EXACT []
synonym: "t-SNARE activity" NARROW []
synonym: "v-SNARE activity" NARROW []
is_a: GO:0005515 ! protein binding
**relationship: part_of GO:0061025 ! membrane fusion** Here's one example of a gene with IDA for GO:0005484 but no IDA annotation that's inferred for 'membrane fusion'. There is, however, a TAS annotation for 'membrane fusion'. Is there some sort of trumping going on? If annotation exists for term X (part_of term Y), then inferred annotation for term X is not added, regardless of evidence code of annotation to term Y? username_0: Nope, I don't think so, since we have examples of missing F-P annotations with no existing annotation. Conversely, we have lots of examples of redundant annotation where we get less specific annotations than the one we made manually from the same paper from the F-P pipeline.... username_2: One hypothesis squashed. username_0: Hi @username_3, could you confirm where we are currently supposed to get the inferred gaf data from.
It might be a non-issue, but I got confused by the thread. Once we know we are absolutely using the correct file, we can proceed to check whether we are still missing the reported F-P link generated annotations. Val username_0: see also https://github.com/geneontology/go-annotation/issues/1674 username_0: @username_3 @cmungall could you please help us by confirming where we are currently supposed to get these files from. The location might have changed, I am not sure. Once we have established which files are the current up-to-date files, we can then look again to see if the issue with the missing F-P links is still current. username_3: Hi @username_0, Thanks for your patience. We've been developing the new pipeline, and I've had to consult with Chris and Seth on where things should be. So our current pipeline produces a -prediction.gaf, and I can provide you with a link to the pombase-prediction.gaf: http://snapshot.geneontology.org/annotations/pombase-prediction.gaf It is produced daily, and our current monthly release can be found at http://current.geneontology.org/annotations/pombase-prediction.gaf We currently don't produce a *.inf.gaf like the one you refer to in this ticket and in others. username_0: so to be clear, this replaces *.inf.gaf? This is our internal thread: https://github.com/pombase/pombase-chado/issues/639 Summary (I think) 1. There used to be a file called inf.gaf that we were told to use 2. This changed to pombase-prediction.gaf and the location changed (not announced) 3. Now pombase-prediction.gaf has fully replaced inf.gaf (not announced) Is this correct? If so, the replacement file pombase-prediction.gaf ONLY contains component annotations, which is the problem I reported originally before all of the confusion about which file we should be using... It seems possible that the F-P links have disappeared, which probably explains #1674 username_3: Yes, I just confirmed with @cmungall that *-prediction.gaf is a different name for *.inf.gaf, but the contents are the same (the name changed to -prediction). Apologies for the confusion! As for the missing F-P links, I will look into what owltools is doing (that's what produces this file). I haven't ventured into the owltools prediction gaf before, but I'll see what I can do! username_0: Ok thanks! It would be good to get the missing inferences back. It would also be good to announce the file name and location change; most groups will not know, we only discovered this by chance. Val username_3: Hi Val, I was wondering if you could give specific examples of annotations you're expecting to see given what your GAF looks like? And you're saying you used to have the F-P links in these -prediction (.inf.gaf) files? username_0: so, what I am looking for is the correct location for the file which contains the F-P (function-process) inferences. It does not appear to be in here with the other files: http://build.berkeleybop.org/job/go-gaf-release-snapshot/lastSuccessfulBuild/artifact/pipeline/target/groups/pombase/ username_0: There are 2 files of "inferences": pombase-prediction.gaf (11.47 KB) contains ONLY cellular component data; pombase-prediction-experimental.gaf (0 B) is empty username_0: This is the file I am looking for. Is this the current version? http://build.berkeleybop.org/view/GAF/job/gaf-check-pombase/gene_association.pombase.inf.gaf username_0: If we are getting the F-P inferences file from the correct (current) location, this ticket can be closed.
Once we know we are using the correct (i.e. current) file, we can check whether the issue of missing annotations still exists, reported here. https://github.com/geneontology/go-annotation/issues/1336 Status: Issue closed username_0: I'll close this ticket, it's way too confusing. All we want to know is whether we are using the correct inf.gaf (the one with processes inferred from F-P links data) ...because there appear to be numerous problems with the contents of the file we are using. This may be because we are not using the correct file. I opened a new ticket for this. Hopefully it is clear. inferred gaf (F-P) links, correct location and possible bugs #524
ICRAR/ijson
492197188
Title: Parsing non-UTF-8 data Question: username_0: [RFC 8259](https://tools.ietf.org/html/rfc8259#section-8.1) allows non-UTF-8 data in "a closed ecosystem". I am using ijson to iteratively read JSON from stdin, and I don't presently know a way to change its encoding without either causing an error in ijson or having to buffer the entire input. Among other attempts, I tried monkey-patching `b2s` in `ijson.compat` to use a different encoding, but it led to a different error than UnicodeDecodeError. Is there a way (or a desire) to parse non-UTF-8 data? Answers: username_1: Addressing the question: no, I don't think there's much value in modifying ijson to have it work in non-standard ways. For your particular problem, you could write a generator that reads data off stdin and yields UTF-8 encoded bytes; then use the generator as the input to ijson.
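Following the maintainer's suggestion, here is a concrete sketch: a generator that incrementally re-encodes stdin to UTF-8 and is passed straight to ijson. This assumes an ijson version that accepts byte iterables as input, as the reply implies; "latin-1" stands in for whatever closed-ecosystem encoding is actually in play, and the "item" prefix assumes a top-level JSON array:
```python
import codecs
import sys

import ijson


def utf8_chunks(stream, source_encoding, chunk_size=65536):
    """Yield UTF-8 encoded bytes from a byte stream in another encoding,
    without buffering the whole input."""
    decoder = codecs.getincrementaldecoder(source_encoding)()
    while True:
        chunk = stream.read(chunk_size)
        text = decoder.decode(chunk, final=not chunk)
        if text:
            yield text.encode("utf-8")
        if not chunk:
            return


# Adjust the "item" prefix to match your document's structure.
for obj in ijson.items(utf8_chunks(sys.stdin.buffer, "latin-1"), "item"):
    print(obj)
```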
golang/vscode-go
1103778475
Title: noDebug mode: unable to process 'evaluate' request Question: username_0: I use fmt.Scanln() to read a number from keyboard input and want to use dlv to debug my Go code, but I get this error: ``` noDebug mode: unable to process 'evaluate' request ``` Does anyone know about this, and can anyone tell me how to resolve it? Answers: username_1: It appears you are running the program in noDebug mode, which will not allow you to set breakpoints or inspect the state of the program. Make sure you are starting the program with debugging enabled (F5). What steps were you taking when you got that error? How did you launch the program (in the terminal, etc.)? Where did you try to supply the keyboard input? username_0: ![image](https://user-images.githubusercontent.com/28972707/149603544-83f0ea92-4787-40ec-89c7-e062cd5d58b9.png) I want to supply keyboard input here, and I have configured vscode F5 debug mode; I don't know if this is the right configuration for my vscode ![image](https://user-images.githubusercontent.com/28972707/149603632-4789b9c8-c22a-4961-9046-3f599f25a1e6.png) If this is the wrong way to configure the vscode-go debug mode, can you show me the right way? username_2: The default DEBUG CONSOLE does not provide access to tty. Currently we are experimenting with the `console` attribute to address this issue. That experimental feature is available in the Nightly version (find golang.go-nightly from the marketplace). If you provide `"console": "integratedTerminal"`, the debug session will start the debugger & the program in a terminal where your program will have access to stdin. Can you give it a try and provide feedback? https://github.com/golang/vscode-go/issues/124#issuecomment-1006122877 username_0: Firstly, I have disabled the standard Go extension and enabled go-nightly. Secondly, I added `"console": "integratedTerminal"` to my vscode ---> settings.json ![image](https://user-images.githubusercontent.com/28972707/150047819-1d2f07c2-cc3c-4c13-a637-370d2de087ec.png) Then I tried F5 to debug my code, but I got the same result. The difference is that instead of showing the NoDebug mode, it now shows the Debug mode. ![image](https://user-images.githubusercontent.com/28972707/150047604-b605bb97-9d00-42e1-a943-a18a872d5c2d.png) username_2: The console property should be in the debug configuration in launch.json username_0: Thank you @username_2, your advice helped me solve this problem. I had always thought configuring it in the Go extension settings was appropriate, but for now, if I want to use F5 to debug code in vscode and supply keyboard input from a terminal, the go-nightly extension works better username_2: Great to hear that the new console mode worked for you. A new version of the golang.go extension will be released with the `console` feature this week. Closing this issue. Status: Issue closed
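For readers who hit the same wall: the resolution was moving the attribute into the debug configuration in .vscode/launch.json rather than settings.json. A minimal sketch of such a configuration; the name and program path are illustrative:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch with terminal stdin",
            "type": "go",
            "request": "launch",
            "mode": "debug",
            "program": "${workspaceFolder}",
            "console": "integratedTerminal"
        }
    ]
}
```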
ytolstyk/hema
92931181
Title: Active match page Question: username_0: Have a timer. Registering a new exchange pauses the timer. After the exchange is registered, the user is presented with options to: * Continue the match * Finish the match Ending the match: * The match ends when one of the fighters has enough points to win (a setting of the tournament) * The match ends when the timer runs out * In case of a tie, the timer might be unpaused for the sudden death round.
uvdesk/api-bundle
807995858
Title: API GET tickets ActAsType customer Question: username_0: **Description** Fetching all tickets with actAsType: customer responds with the wrong list, the same as for agent: the full, long list of all tickets. **How to reproduce** API GET tickets ActAsType customer **Possible Solution** On line 34, $data['actAsEmail'] is not defined in api-bundle/API/Tickets.php; line 45 is similar. Answers: username_1: @username_0 Updated this [here](https://github.com/uvdesk/api-bundle/commit/1873be43b53812d6521b3f8d95565e7929429737). username_0: It will still fail at line 46. Change $data['actAsEmail'] to $request->query->get('actAsEmail') username_1: @username_0 Fixed [here](https://github.com/uvdesk/api-bundle/commit/a0e9f852f6bde1adae0875fdae857bdc3bc1ff3f). Status: Issue closed
HHK1/PryntTrimmerView
833591750
Title: Question about right handle when it's not moved Question: username_0: When the right handle is still and only the left handle is moved, the video is not played again but stops. Is there any way to fix this? Answers: username_1: Could you expand a bit please? When you move a handle, the asset shows the image associated with the current position of the handle (to help you get a precise position for trimming), and playback should resume once you lift your finger up. username_0: I know what you're talking about. What I want to say is that the video does not repeat when the right handle is located at the end of the trimmer. ![Mar-17-2021 19-07-56](https://user-images.githubusercontent.com/52123195/111450864-286a7480-8754-11eb-9634-8367b82b031b.gif) Not all videos are like this, but only a few videos seem to have this problem. And if the right handle moves at least once and goes back to the end of the trimmer, the video repeats itself. username_1: Got it, thanks for the video, makes it clearer. I currently don't have time to investigate this, but feel free to open a PR and I'll review it 🙏 username_0: Thanks! 👍 username_0: I'm currently looking for the bug, and I want to know which part is responsible for repeating the video. Please help... username_2: PR that fixes this: https://github.com/username_1/PryntTrimmerView/pull/81/files username_0: Sorry for the late reply! I changed the code and tested it, but it stops the same 😢. I also get the same result in PryntTrimmerViewExample. I added an observer to the part where I put the AVAsset into the AVPlayer and added itemDidFinishPlaying to check if the player was stopped; did I apply it wrong? username_2: Hi @username_0, if you are talking about the fix in my PR, since it has been merged you can just pull the newest code from this repo. You should not need to manually change code username_0: I was really stupid! I declared a new Notification name 😅 (AVPlayerItemDidPlayToEndTime) Thanks @username_2!! Status: Issue closed
beaupreda/clear_mot_metrics
543401914
Title: How do you generate the XML file for the GT? Question: username_0: @username_1 could you please tell me which annotation tool is used to generate XML ground truth from a custom video dataset for object tracking. Answers: username_1: Hi, I used this annotation tool (https://www.jpjodoin.com/urbantracker/tools.html). Also, I heard good things about Viper-GT (http://viper-toolkit.sourceforge.net/docs/quickstart/) Good luck! username_0: thank you so much for your help Happy New Year Status: Issue closed
proyecto26/RestClient
476490749
Title: GET Array in Unity Question: username_0: I use the "GetArray" method to get the array, but I don't get it back; instead I get an error in Unity: **Unexpected node type**. Please tell me what I might be doing wrong. I'm working with the Firebase database. **The link path I pass is** https://[database-name].firebaseio.com/players.json. **JSON** {"ttt1":{"maxScore":34,"name":"ttt2"},"aaaa":{"maxScore":46,"name":"ttt3"}} **My class:**
[System.Serializable]
public class PlayerModel
{
    public string name;
    public int maxScore;

    public override string ToString()
    {
        return UnityEngine.JsonUtility.ToJson(this, false);
    }
}
Status: Issue closed Answers: username_1: Because Firebase works only with objects (key-value maps) instead of arrays; check the response using Postman or another tool
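Since the payload above is a JSON object keyed by player ID rather than an array, one workaround is to fetch the raw response text and deserialize it as a dictionary. A minimal sketch using the issue's own PlayerModel; it assumes Json.NET (Newtonsoft.Json) is available in the project, which is not part of RestClient itself, and the URL is a placeholder:
```csharp
using System.Collections.Generic;
using Newtonsoft.Json;   // assumption: Json.NET is installed in the project
using Proyecto26;        // proyecto26/RestClient
using UnityEngine;

public class PlayerLoader : MonoBehaviour
{
    void Start()
    {
        RestClient.Get("https://your-db.firebaseio.com/players.json").Then(response =>
        {
            // Firebase returns {"ttt1":{...},"aaaa":{...}} -- an object, not
            // an array -- which is why GetArray reports "Unexpected node type".
            var players = JsonConvert.DeserializeObject<Dictionary<string, PlayerModel>>(response.Text);
            foreach (var entry in players)
            {
                Debug.Log($"{entry.Key}: {entry.Value.name} ({entry.Value.maxScore})");
            }
        });
    }
}
```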
Mixxx-a/Exam_SladkovM_M3103
288259041
Title: The 10 clearly should be extracted into a named constant Question: username_0: https://github.com/username_1/Exam_SladkovM_M3103/blob/7b2be12d49359174b38cadfcc9487492ca0099f7/Exam_SladkovM_M3103/Exam_Main.c#L8 Status: Issue closed Answers: username_1: https://github.com/username_1/Exam_SladkovM_M3103/blob/7b2be12d49359174b38cadfcc9487492ca0099f7/Exam_SladkovM_M3103/Exam_Main.c#L8 Status: Issue closed
TeamSpen210/HammerAddons
628149715
Title: prop_portal's starting portal color and Hammer skin are inconsistent Question: username_0: The default Hammer skin for prop_portal is an orange portal, but the default in-game portal color is blue. This has never been an issue for me personally, but could cause problems if someone just placed the entity and assumed it was an orange portal, only to get a blue portal in-game. Rather than make the Hammer skin default to blue, it would probably be better to make the in-game portal default to orange, since those autoportals are generally more common.<issue_closed> Status: Issue closed
mholgatem/GPIOnext
429471643
Title: Failed to add edge detection Question: username_0: I have a SpotPear 1.54 display with some GPIO buttons in an SNES configuration. When I try to run the config, I get the error Failed to add edge detection Answers: username_1: It is failing to add edge detection because the pins are already in use by your display. Run the config with the flag `--pins 29,31,32,33,35,36,37,38,40` or list whatever pins you would like to use. Status: Issue closed
l3tnun/EPGStation
491109851
Title: [Question] WebUI no longer displays after update Question: username_0: ### Environment * Version of EPGStation: `1.5.8` * Version of Mirakurun: `2.11.0` * Version of Node: `v10.13.0` * Version of NPM: `6.9.0` * OS: Windows 10 Home x64 * Architecture: x64 ### Issue ... When I updated EPGStation to 1.5.8 today, the WebUI stopped displaying, as the title says. Requests succeed, but it looks as if nothing is being rendered. ![image](https://user-images.githubusercontent.com/35391643/64535737-c2c54d00-d352-11e9-98e6-5e001057a108.png) On the other hand, the SwaggerUI displays normally, and recording and encoding also seem to work without problems. The following log remains in service\access.log: ~~~ [2019-09-09T22:29:02.456] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET / HTTP/1.1" 304 - "" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:02.494] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /css-ripple-effect/ripple.min.css HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:02.499] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /material-design-lite/material.min.css HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:02.502] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /material-design-lite/material.min.js HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:02.503] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /css/style.css HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:02.506] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /js/app.js HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:03.292] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /icon/android.png HTTP/1.1" 200 10335 "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" [2019-09-09T22:29:03.307] [INFO] access - fc00:db20:35b:7399::5:192.168.12.42 - - "GET /icon/favicon.png HTTP/1.1" 304 - "http://192.168.12.60:8880/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" ~~~ Did the update fail?
Answers: username_1: If rebuilding with `npm run build` doesn't help, please open the browser's web console and post the error messages username_0: I ran `npm run build`, but nothing seems to have changed. The web console shows the following errors: ``` Invalid asm.js: Unexpected token app.js:16 Uncaught TypeError: r.route.prefix is not a function at Object.<anonymous> (app.js:16) at i (app.js:1) at Object.<anonymous> (app.js:16) at i (app.js:1) at app.js:1 at app.js:1 ``` Thanks in advance. username_1: The libraries may not have been installed correctly. Delete node_modules and run the installation steps again from the beginning. If that still doesn't work, you may be hitting some kind of bug. In that case, build with the command below and post the web console errors; that will help track down the cause: ``` ./node_modules/.bin/gulp build --max_old_space_size=768 --env development ``` username_0: ~~~
[15:07:59] Node flags detected: --max_old_space_size=768
[15:07:59] Respawned to PID: 9116
[15:08:15] Using gulpfile C:\TV\EPGStation\gulpfile.js
[15:08:15] Starting 'build'...
[15:08:15] Starting 'build-server'...
[15:08:15] Starting 'clean-server'...
[15:08:16] Finished 'clean-server' after 667 ms
[15:08:16] Starting 'tslint-server'...
[15:08:37] Finished 'tslint-server' after 21 s
[15:08:37] Starting '<anonymous>'...
[15:09:27] Finished '<anonymous>' after 50 s
[15:09:27] Finished 'build-server' after 1.2 min
[15:09:27] Starting 'build-client'...
[15:09:27] Starting 'clean-client'...
[15:09:27] Finished 'clean-client' after 22 ms
[15:09:27] Starting 'tslint-client'...
[15:09:42] Finished 'tslint-client' after 14 s
[15:09:42] Starting '<anonymous>'...
[15:12:58] Version: webpack 4.39.2
Built at: 2019-09-10 15:12:57
Asset Size Chunks Chunk Names
app.js 1.66 MiB 0 [emitted] [big] main
Entrypoint main [big] = app.js
[15:12:58] Finished '<anonymous>' after 3.27 min
[15:12:58] Finished 'build-client' after 3.5 min
[15:12:58] Starting 'client-css-build'...
[15:13:01] Finished 'client-css-build' after 2.66 s
[15:13:01] Finished 'build' after
~~~ Next, I built with `./node_modules/.bin/gulp build --max_old_space_size=768 --env development` and collected the web console errors: ~~~ main.ts:30 Uncaught TypeError: m.route.prefix is not a function at Object../src/client/main.ts (main.ts:30) at __webpack_require__ (bootstrap:19) at Object.0 (main.ts:47) at __webpack_require__ (bootstrap:19) at bootstrap:83 at bootstrap:83 ~~~ Thanks for your continued help. username_1: It looks like mithril.js is not working correctly. The npm cache may be the cause. Clear the npm cache with the commands below, then run the installation steps again (from the node_modules deletion mentioned above through ./node_modules/.bin/gulp build --max_old_space_size=768 --env development): ``` npm cache clean npm cache ls rm -rf ~/.npm ``` username_0: Thanks for your help. I was able to run `npm cache clean --force` and `npm cache ls`, but `rm -rf ~/.npm` cannot be executed in a Windows PowerShell environment. The npm cache on Windows is apparently located at `%AppData%\npm-cache`; does that mean I should delete the hidden npm files under that folder? However, I can't find them when searching in Explorer, so I suspect that's not it. ![image](https://user-images.githubusercontent.com/35391643/64612588-24e38800-d40f-11e9-969a-3ce19612c145.png) I understand this is experimental, but information on the net is scarce, so I'm asking here. I would appreciate any guidance. username_1: I don't know where the cache is stored in a Windows environment, so I can't answer that. In this case only the browser side is broken, so if you build EPGStation in another environment (Linux is fine) and swap in the ```dist/client``` directory, it should work without problems. username_0: Following that advice, I restored a backup of `dist/client` and it started working. I'm not entirely sure whether this was the right fix, or what to do about future updates, but in any case it works now, so I'm relieved. I probably should be able to handle more of this myself; sorry, and thank you as always. username_1: The behavior around the socket.io port was changed recently, so a very old build may have trouble working. If it's 1.5.7, there should be almost no impact. By the way, do you know what version of EPGStation the backup is from?
username_0: I checked package.json in the backup as well, and it was 1.5.2. It occurred to me afterwards that instead of deleting the `%AppData%\npm-cache` folder outright, I could just rename it first and try that. I'll give it a try this afternoon if I can find the time. username_0: After running ~~~
npm cache clean --force
npm cache ls
~~~ I renamed the `%AppData%\npm-cache` folder. I believe this cleared the cache. I then deleted node_modules and ran `npm install` and `npm run build`, but unfortunately nothing changed. I ran `./node_modules/.bin/gulp build --max_old_space_size=768 --env development` and am pasting the web console errors I collected again. The content does not appear to have changed. ~~~ Invalid asm.js: Unexpected token main.ts:30 Uncaught TypeError: m.route.prefix is not a function at Object../src/client/main.ts (main.ts:30) at __webpack_require__ (bootstrap:19) at Object.0 (main.ts:47) at __webpack_require__ (bootstrap:19) at bootstrap:83 at bootstrap:83 ./src/client/main.ts @ main.ts:30 __webpack_require__ @ bootstrap:19 0 @ main.ts:47 __webpack_require__ @ bootstrap:19 (anonymous) @ bootstrap:83 (anonymous) @ bootstrap:83 ~~~ Please let me know if there is anything else I can try. username_1: There's nothing more I can do here; about all that's left is to isolate the problem by checking whether it works correctly in another environment
jitpack/jitpack.io
401285769
Title: since their licenses or those of the packages they depend on were not accepted: Question: username_0:
```
Accept? (y/N): Skipping following packages as the license is not accepted:
Android SDK Build-Tools 27.0.3
The following packages can not be installed since their licenses or those of the packages they depend on were not accepted:
  build-tools;27.0.3
Found gradle
Gradle build script
```
Answers: username_1: Hi, Should be fixed now
username_1: @username_2 Seems like that build succeeded. Does it not work for you?
username_2: Following https://github.com/jitpack/jitpack.io/issues/3687#issuecomment-455901608, my build succeeded @username_1
Status: Issue closed
username_0: worked now
home-assistant/core
593992016
Title: Couldn't start EZSP coordinator Question: username_0:
## The problem

## Environment
- Home Assistant Core release with the issue:
- Last working Home Assistant Core release (if known):
- Operating environment (Home Assistant/Supervised/Docker/venv):
- Integration causing this issue:
- Link to integration documentation on our website:

## Problem-relevant `configuration.yaml`
```yaml
```

## Traceback/Error logs
```txt
```

## Additional information
Answers: username_1: This is ZHA not deconz
username_2: What is host os?
username_0: Host is Ubuntu 19.10, with the modem stuff removed.
username_0: Decided to try with a HUSBZB-1 stick I had laying around and got pretty much the same error:
```
2020-04-05 14:00:18 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 142, in async_initialize
    res = await self.application_controller.startup(auto_form=True)
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 135, in startup
    await self.initialize()
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 72, in initialize
    await e.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/ezsp.py", line 57, in reset
    await self._gw.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/uart.py", line 222, in reset
    return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT)
  File "/usr/local/lib/python2020-04-05 14:00:24 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator
```
username_2: Does it start at all with husbzb-1? After it fails on restart, does it try to start again?
username_3: I've got the same issue with an Elelabs stick. After failing on reboot, it tries to start again after 80 seconds.
```
2020-04-08 11:07:55 WARNING (MainThread) [homeassistant.config_entries] Config entry for zha not ready yet. Retrying in 80 seconds.
2020-04-08 11:09:21 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 149, in async_initialize
    res = await self.application_controller.startup(auto_form=True)
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 137, in startup
    await self.initialize()
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 74, in initialize
    await e.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/ezsp.py", line 79, in reset
    await self._gw.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/uart.py", line 220, in reset
    return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT)
  File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
2020-04-08 11:09:21 WARNING (MainThread) [homeassistant.config_entries] Config entry for zha not ready yet. Retrying in 80 seconds.
2020-04-08 11:10:47 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 149, in async_initialize
    res = await self.application_controller.startup(auto_form=True)
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 137, in startup
    await self.initialize()
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 74, in initialize
    await e.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/ezsp.py", line 79, in reset
    await self._gw.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/uart.py", line 220, in reset
    return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT)
  File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
2020-04-08 11:10:47 WARNING (MainThread) [homeassistant.config_entries] Config entry for zha not ready yet. Retrying in 80 seconds.
```
username_2: Is it failing on reboot or is it failing on restart? How did you configure it?
username_3: I'm running home-assistant in docker on a CentOS 7 host. This is my docker-compose:
```
homeassistant:
  container_name: home-assistant
  image: homeassistant/home-assistant:rc
  volumes:
    - /home/docker/homeassistant/config/:/config
    - /etc/localtime:/etc/localtime:ro
  environment:
    - TZ=Europe/Amsterdam
  ports:
    - 8123:8123
  devices:
    - /dev/ttyACM0:/dev/ttyACM0
    - /dev/ttyUSB0:/dev/ttyUSB0
  restart: always
```
ttyACM0 is a Z-Wave stick and ttyUSB0 is a Zigbee stick. This morning I updated to RC because in issue 32726 you said that there are some changes made in handling port disconnects. In the configuration.yaml I configured the ZHA component:
```
zha:
  usb_path: /dev/ttyUSB0
  database_path: zigbee1.db
```
It is failing on both. I've tried to reboot the host, restart the docker container and did a restart via the home-assistant interface.
username_2: Check elelabs documentation. They've changed the default baudrate. Add `baudrate: 115200` to the zha config section in `configuration.yaml`.
username_3: I'm getting the same error with `baudrate: 115200`. I've had the USB stick for two years and it has always worked with the default baud rate.
username_2: If it is an old stick, then leave the default baudrate, as I think they've only changed it with newer sticks.
can you try installing `bellows-homeassistant` in python venv on the host machine and run `bellows -d /dev/ttyUSB0 info` ? username_4: ``` /config # bellows -d /dev/ttyUSB0 info Traceback (most recent call last): File "/usr/local/bin/bellows", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func return f(get_current_context(), *args, **kwargs) File "/usr/local/lib/python3.7/site-packages/bellows/cli/util.py", line 36, in inner loop.run_until_complete(f(*args, **kwargs)) File "/usr/local/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete return future.result() File "/usr/local/lib/python3.7/site-packages/bellows/cli/ncp.py", line 66, in info s = await util.setup(ctx.obj["device"], ctx.obj["baudrate"]) File "/usr/local/lib/python3.7/site-packages/bellows/cli/util.py", line 102, in setup await s.reset() File "/usr/local/lib/python3.7/site-packages/bellows/ezsp.py", line 79, in reset await self._gw.reset() File "/usr/local/lib/python3.7/site-packages/bellows/uart.py", line 220, in reset return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT) File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for raise futures.TimeoutError() concurrent.futures._base.TimeoutError ``` username_0: For everyone here, if you are using docker and a custom command to start home assistant, 0.107 adds a new init system that makes it so that you are effectively running two HA instances at the same time. This means you have two instances trying to connect to your ZHA devices which will definitely cause trouble. This seems to have been my issue and now it looks like it is fixed (after I removed the custom docker `command`). For more details see #32992 username_2: The stick is not responding. Is ttyUSB0 the right port? Have you removed modem manager? What is in /dev/serial/by-id ? username_5: I'm seeing a similar issue (but with EZSP instead) after upgrading homeassistant from 0.104.3 to 0.108.2 with a HUSBZB-1 stick. I'm running on FreeBSD 12.1 i386. The ZHA component no longer works, the zwave(/dev/cuaU0) device on the same usb stick does work (confirmed with pyozw_check). I've also tried the bellows command on the CLI to no effect. (Setting a baudrate of 57600 doesn't change the outcome). ``` $bellows -v debug -d /dev/cuaU1 info debug: Using selector: KqueueSelector debug: Using selector: KqueueSelector debug: Connected. Resetting. 
debug: Resetting EZSP
debug: Resetting ASH
debug: Sending: b'1ac038bc7e'
Traceback (most recent call last):
  File "/usr/home/hass/.hass-venv/bin/bellows", line 10, in <module>
    sys.exit(main())
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/cli/util.py", line 38, in inner
    loop.run_until_complete(f(*args, **kwargs))
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 583, in run_until_complete
    return future.result()
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/cli/ncp.py", line 66, in info
    s = await util.setup(ctx.obj["device"], ctx.obj["baudrate"])
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/cli/util.py", line 102, in setup
    await s.reset()
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/ezsp.py", line 57, in reset
    await self._gw.reset()
  File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/uart.py", line 222, in reset
    return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT)
  File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
```
The following is a logging excerpt showing HASS trying to reset the coordinator but never succeeding (raising the TimeoutError).
``` 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Loading application state from /home/hass/config/zigbee.db 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 4 value: b'Jasco Products' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 5 value: b'45853' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 4 value: b'Jasco Products' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 5 value: b'45853' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 4 value: b'Jasco Products' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 5 value: b'45853' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 5 value: b'45853' 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.appdb] Attribute id: 4 value: Jasco Products 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.quirks.registry] Checking quirks for Jasco Products 45853 (00:22:a3:00:00:01:58:8d) 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'bellows.zigbee.application.EZSPCoordinator'> 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.quirks.registry] Fail because endpoint list mismatch: {1} {1, 2} 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.gledopto.soposhgu10.SoposhGU10'> 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.quirks.registry] Fail because endpoint list mismatch: {11, 13} {1, 2} [Truncated] 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.ota] Initialize OTA providers 2020-04-11 18:08:26 DEBUG (MainThread) [zigpy.ota.provider] OTA image directory '/home/hass/config/zigpy_ota/' does not exist 2020-04-11 18:08:31 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator Traceback (most recent call last): File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/homeassistant/components/zha/core/gateway.py", line 152, in async_initialize res = await self.application_controller.startup(auto_form=True) File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/zigbee/application.py", line 137, in startup await self.initialize() File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/zigbee/application.py", line 74, in initialize await e.reset() File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/ezsp.py", line 79, in reset await self._gw.reset() File "/usr/home/hass/.hass-venv/lib/python3.7/site-packages/bellows/uart.py", line 220, in reset return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT) File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for raise futures.TimeoutError() concurrent.futures._base.TimeoutError 2020-04-11 18:08:31 WARNING (MainThread) [homeassistant.config_entries] Config entry for zha not ready yet. Retrying in 5 seconds. 2020-04-11 18:08:31 DEBUG (bellows.thread_0) [bellows.uart] Closed serial connection ``` username_6: I've been pulling my hair out over this issue for the past two weeks! Can't thank you enough! username_2: @username_5 there was a similar report for FreeBSD that was fixed by un-plugging/plugging stick back 🤷 username_5: @username_2 so weird. I did try a reboot thinking the stick was in a bad state. But I did not ever reseat it. I don't think I did anything in trying to debug it that would have meaningfully changed the behavior. Thank you for letting me know though. I'll keep that in mind if it happens again. To be honest it's been rock solid since I've installed it a year and a half ago or so. 
username_0: Should we close this issue? Or is it still something going on?
Status: Issue closed
username_7: I have the same problem:
```
2020-04-28 13:48:36 WARNING (MainThread) [homeassistant.components.sensor] Platform rest not ready yet. Retrying in 180 seconds.
2020-04-28 13:48:36 ERROR (MainThread) [homeassistant.components.zha.core.gateway] Couldn't start EZSP coordinator
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/zha/core/gateway.py", line 152, in async_initialize
    res = await self.application_controller.startup(auto_form=True)
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 137, in startup
    await self.initialize()
  File "/usr/local/lib/python3.7/site-packages/bellows/zigbee/application.py", line 74, in initialize
    await e.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/ezsp.py", line 79, in reset
    await self._gw.reset()
  File "/usr/local/lib/python3.7/site-packages/bellows/uart.py", line 220, in reset
    return await asyncio.wait_for(self._reset_future, timeout=RESET_TIMEOUT)
  File "/usr/local/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
2020-04-28 13:48:36 WARNING (MainThread) [homeassistant.config_entries] Config entry for zha not ready yet. Retrying in 80 seconds.
```
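For anyone who wants to run the `bellows` check username_2 suggested earlier in this thread, the steps amount to roughly the following (a sketch assuming a Linux host and `/dev/ttyUSB0` as the stick's port; adjust the device path to match `/dev/serial/by-id`):
```bash
# Create an isolated Python environment and install the bellows CLI
python3 -m venv ~/bellows-test
source ~/bellows-test/bin/activate
pip install bellows-homeassistant

# Query the coordinator; a healthy stick prints its EZSP/stack info
bellows -d /dev/ttyUSB0 info
```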
ScoopInstaller/Main
1093765619
Title: App "gcc" does not contain gdb anymore. On purpose? Question: username_0: <!-- By opening this issue you confirm that you have searched for similar issues/PRs here already. Failing to do so will most likely result in closing of this issue without any explanation. Incomplete form details below might also result in closing of the issue. --> ## Bug Report **Package Name:** gcc used to contain gdb.exe with former version 8.1.0: <details> ``` PS C:\> scoop list Installed apps: ... gcc 8.1.0 [main] ... PS C:\> (gcm gdb).Path C:\Users\manna\scoop\apps\gcc\current\bin\gdb.exe PS C:\> gdb --version GNU gdb (GDB) 8.1 ``` </details> There is also an other app with name "gdb". It contains (only) gdb.exe So my QUESTION: Is the removal of gdb.exe in app "gcc" on purpose? Both apps where updated recently; see * https://github.com/ScoopInstaller/Main/commits/master/bucket/gdb.json * https://github.com/ScoopInstaller/Main/commits/master/bucket/gcc.json ### Current Behaviour Since Version 11.2.0 (head as of this writing) it does not contain gdb anymore The change (removal of gdb.exe in gcc) *breaks* at least the automatic creation of `launch.json` inside Visual Studio Code (created by C++ extension) because the extension expects gdb in the directory where gcc.exe resides. <details> ``` Todo: Add snippets from other laptop with most recent scoop apps installed. ``` </details> ### Expected Behaviour App gcc and visual studio extension creating the `launch.json` with paths to both `gcc.exe` and `gdb.exe` work seamlessly together. <!-- A clear and concise description of what you expected to happen. --> ### Possible Solution My suggestion is to add gdb.exe to app "gcc" again. Rationale: Using debugger suitable to compile/link toolchain is very common. ### System details **Windows version:** 10 **OS architecture:** 64bit **Additional software:** Visual Studio code (from bucket extras the app vscode) ``` Answers: username_1: The GCC manifest you're familiar with has been renamed to mingw.json. It was done because gcc.json name was slightly misleading, it contained make, binutils, gdb and some other stuff too. So you can just do `scoop install mingw` to get back everything. Only the manifest name has changed. Status: Issue closed
RayBenefield/dev-xp
500952818
Title: Merge test file eslint rules with other rules using overrides Question: username_0: ## Expected Behavior We can combine the eslint rules using the overrides key in configs: https://eslint.org/docs/user-guide/configuring.html#disabling-rules-only-for-a-group-of-files We should do this to simplify our project.
gbrks/docker-syncthing
107259375
Title: Issues launching with instructions given for Edge Question: username_0: # Issues with edge With your example for Edge it fails and loops. [start] 19:02:57 INFO: Generating RSA key and certificate for syncthing... [start] 19:02:58 FATAL: save cert: open /config/cert.pem: permission denied 19:03:00 WARNING: chmod /config: operation not permitted [start] 19:03:00 INFO: Generating RSA key and certificate for syncthing... [start] 19:03:02 FATAL: save cert: open /config/cert.pem: permission denied 19:03:05 WARNING: chmod /config: operation not permitted [start] 19:03:05 INFO: Generating RSA key and certificate for syncthing... [start] 19:03:07 FATAL: save cert: open /config/cert.pem: permission denied ## My config docker run -d --name=syncthing \ --restart=on-failure:20 \ -v ./appdata/syncthing:/config \ -v ./data/:/sync/data/ \ -p 8384:8384/tcp \ -p 22000:22000/tcp \ -p 21025:21025/udp \ username_1/syncthing:edge Answers: username_1: What are your permissions for ./appdata/syncthing? They should be read/writable by the host user with uid 1000 username_0: So far whatever the container comes with, but I will take that into account next time I get the chance. Maybe if its permissions are so fragile the container could enforce file permissions on boot? username_1: That directory is not in the container, it is the folder on your host. The container runs as user 1000 (which will generally be your default user. That user needs rw access to that for. You should also be using full paths rather than relative, eg /home/username_0/appears/syncthing (or where ever you wish to store the config). It is also possible to run the container as root by including -u="root" in the docker run command, however ownership of the config files will taken by root. There is an open issue to update the image to allow nomination of different user during container creation (in case you need to run as a user other than 1000). I can progress this issue if there is need for it. https://github.com/username_1/docker-syncthing/issues/2 username_0: Ah thank you for all of this great information I had an issue with file permissions similar to this in a MySQL container earlier this year turns out that specifically MySQL just absolutely will not operate with the wrong ownership let alone permissions as well. Would you be interested in a PR with some more documentation in your README to cover this issue or will/can you update it. username_2: Thank you for making this image, it works great ! Just so you know, I was trying to use named volumes with this image and encountered the same issue with permissions. To solve it I had to run the container with `--user="root"` I'll list the commands I ran here just for reference: ```bash sudo docker volume create --name syncthing-config sudo docker volume create --name syncthing-data sudo docker run -d --name=syncthing \ --restart=on-failure:20 \ -v syncthing-config:/config \ -v syncthing-data:/sync \ --user="root" \ -p 8384:8384/tcp \ -p 22000:22000/tcp \ -p 21025:21025/udp \ username_1/syncthing:latest ``` username_1: Hi, One way you could solve this, would be to `exec` into your container, and change the ownership of the /config and /sync directories (and therefore the volumes) to what UID you needed. Then destroy and re-create the container using the user you wanted. However to be honest, I would recommend trying out the image made by the guys at Linuxserver.io https://hub.docker.com/r/linuxserver/syncthing/ Well maintained, and allows you to easily apply whatever user/group you need to. 
I made this container, based on Alpine, because I wanted a lean image for syncthing. Since then, linuxserver.io have produced some fantastic images, and I've found myself running a few of them, so there doesn't end up being any additional overhead by using the linuxserver/syncthing image in this case.
username_2: Hi, thanks, in fact it did work once I ran with `--user="root"`. I tried the linuxserver image before but yours is smaller and works just as well, so I'll be using it :)
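Spelled out, the `exec` suggestion above would look something like this (a sketch assuming the container name `syncthing` and the UID 1000 used earlier in the thread):
```bash
# Fix ownership of the mounted volumes from inside the running container,
# then recreate the container with the matching non-root user.
docker exec -u root syncthing chown -R 1000:1000 /config /sync
docker stop syncthing && docker rm syncthing
# ...then re-run the original docker run command, without --user="root"
```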
neo4j-contrib/neo4j-tableau
351177404
Title: Currently not usable Question: username_0: Hey, currently I'm using Neo4j Desktop 1.1.8 with a database version of neo4j 3.4.1:
![image](https://user-images.githubusercontent.com/3354824/44207342-20419700-a15d-11e8-9051-12ec549fad89.png)
However, I was not able to use the connector (3.0.0) given the following setups:
- using the github.io version
- using my own localhost version
- using a dockerized Apache deployed version in our intranet

I'm guessing that the error concerns the QT-Webtoolkit browser security settings, but I have no idea how to activate the development mode of the Webtoolkit window in Tableau or extract the logs of the application.
This is a screenshot with my detailed setup:
![image](https://user-images.githubusercontent.com/3354824/44207294-f1c3bc00-a15c-11e8-88e2-954678f04be8.png)
This is the error message:
![image](https://user-images.githubusercontent.com/3354824/44207258-d5c01a80-a15c-11e8-9c22-980a5ef10022.png)
Any ideas? Thx for your help!
Answers: username_1: If it uses SSL, could you try to open the Neo4j SSL URL (port 7473) in the Tableau WDC UI and accept the certificate?
/cc @ralfbecher You're also both in Leipzig :)
username_0: Actually, I know Ralf from my internship times at TIQ Solutions! We can meet after my holidays in September; I would enjoy that. Thx for your help!
ccxt/ccxt
812508531
Title: KuCoin retrieve balances in Pool-X Question: username_0: It's not clear to me how to retrieve the balance of "staked" coins that I have under KuCoin's "pool" account type. `fetch_total_balance()` shows what is in the trading account type, and `fetchAccounts()` only shows the balance of the staking rewards I've earned.

I'd like to know how to query the API so I can see that I have 25 EOS staked, and that I've received 0.03 EOS in rewards. I understand that because I've staked these coins they're basically out of my possession. I'm hoping there's a way to query programmatically just how much I've lent out to the pool.

- OS: Manjaro Linux 5.10.15-1-MANJARO
- Programming Language version: Python 3.9
- CCXT version: 1.42.8

```python
import os
import sys
import pprint
import ccxt

pp = pprint.PrettyPrinter(indent=4)

kucoin = ccxt.kucoin({
    'apiKey': " ",
    'secret': " ",
    'verbose': False,  # switch it to False if you don't want the HTTP log
    'password': ' '
})

total_bal = kucoin.fetch_total_balance()
pp.pprint(total_bal)

accounts = kucoin.fetchAccounts()
for holding in accounts:
    symbol = holding['currency']
    available = float(holding['info']['available'])
    balance = float(holding['info']['balance'])
    account_type = holding['type']
    if balance != 0 and available != 0:
        print(f"""KuCoin {account_type} {symbol}: {balance} / {available} """)
        if symbol == 'EOS':
            pp.pprint(holding)
```

```
{   '1INCH': 7.0584,
    'EOS': 1.7656,
    'XRP': 142.48667749}
KuCoin trade EOS: 138.7656 / 138.7656
{   'currency': 'EOS',
    'id': '6022d352422b69000630b4a4',
    'info': {   'available': '1.7656',  # In Trading Account
                'balance': '1.7656',
                'currency': 'EOS',
                'holds': '0',
                'id': '6022d352422b69000630b4a4',
                'type': 'trade'},
    'type': 'trade'}
KuCoin trade 1INCH: 7.0584 / 77.0584
KuCoin trade XRP: 142.48667749 / 142.48667749
KuCoin pool EOS: 0.03082185 / 0.03082185
{   'currency': 'EOS',
    'id': '601b752d9064a600066e919c',
    'info': {   'available': '0.03082185',
                'balance': '0.03082185',  # Rewards from staking (highlighted in red below)
                'currency': 'EOS',
                'holds': '0',
                'id': '601b752d9064a600066e919c',
                'type': 'pool'},
    'type': 'pool'}
```

I'm able to pull back the `Available` balance; I'd like to get back `Locked Up` as well.
![image](https://user-images.githubusercontent.com/46991/108583289-1ab10300-72fe-11eb-90f4-18614921e878.png)
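No answer was recorded here, but one thing the script above can already surface: per the account fields shown in the output, `balance` equals `available` plus `holds`, so the held (locked) amount per account can be derived directly. A sketch reusing the `kucoin` client from the script above; note this only covers amounts the accounts endpoint reports, so staked principal that the endpoint omits will not appear this way:
```python
# Sketch: derive locked amounts from the fetchAccounts() payload shown above.
for holding in kucoin.fetchAccounts():
    info = holding['info']
    balance = float(info['balance'])
    available = float(info['available'])
    locked = balance - available  # equals the 'holds' field
    if locked:
        print(f"KuCoin {holding['type']} {info['currency']}: locked {locked}")
```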
spring-cloud/spring-cloud-skipper
340197636
Title: support for hashicorp nomad? Question: username_0: There is a spring cloud deployer available: https://github.com/username_1/spring-cloud-deployer-nomad

Happy to help / contribute, but need some guidance.
Answers: username_1: @username_0 My initial requirement to deploy to Nomad has gone away, so the Nomad deployer is no longer as actively maintained (happy for contributions!)

As for a Skipper Nomad platform implementation, you could use https://github.com/username_1/spring-cloud-skipper-platform-openshift as a reference. Note that Skipper has moved on since that was written, so you might need to adjust. You could also use the [Kubernetes platform](https://github.com/spring-cloud/spring-cloud-skipper/tree/master/spring-cloud-skipper-platform-kubernetes) as a more up-to-date reference.
magda-io/magda
491517599
Title: Team Dropdown & data custodian dropdown should list items in alphabetical order Question: username_0: Team Dropdown & data custodian dropdown should list items in alphabetical order ![image](https://user-images.githubusercontent.com/674387/64595884-32d8df00-d3f6-11e9-8558-e1fedd4ca371.png) ![image](https://user-images.githubusercontent.com/674387/64595912-408e6480-d3f6-11e9-82b8-87db669295d9.png)
FrenetGatewaydeFretes/frenet_magento
602732382
Title: Does not calculate for the "Recurring Profile" product type Question: username_0: Has anyone run into this? I'm trying to solve it but haven't managed to yet...
Status: Issue closed
Answers: username_1: Hello, some changes were made to the Frenet module, and this module https://github.com/FrenetGatewaydeFretes/frenet_magento has been discontinued; this GitHub repository is being kept for compatibility with very old Magento versions.

All of our implementations and improvements are being made in the following module: https://github.com/FrenetGatewaydeFretes/frenet-magento

Some important changes were developed with the module's performance in mind, for example switching the API call method from SOAP to REST.

We have updated the module reference in all of our communication channels; if you are using this obsolete module, please migrate to our new technology. Remember to test in a staging environment of your store before deploying to production.

Best regards,
quintel/etmodel
254950762
Title: Heat demand and production chart shows charging of buffers, even though buffer size = 0 Question: username_0: [This scenario](https://beta-pro.energytransitionmodel.com/scenarios/719385) has 100% electric heat pumps voor both space and water heating, the buffer size is set to 0 kWh. The heat demand and production chart still shows charging of the buffers. This shouldn't be the case. ![image](https://user-images.githubusercontent.com/19907658/30014826-9a3322b0-914e-11e7-9620-7901915525e7.png) notifying @ChaelKruip Status: Issue closed Answers: username_1: This is not buffering, but is demand being time-shifted (deferred) into the future when the space heating producers have insufficient capacity to meet demand. I intend for a future improvement to the chart to show this differently, which should avoid confusion with buffering. username_0: thanks for the info!
findcomrade/isbio
169633598
Title: DoesNotExist: UserProfile matching query does not exist. Question: username_0: View details in Rollbar: [https://rollbar.com/fclem/Breeze/items/258/](https://rollbar.com/fclem/Breeze/items/258/) ``` Traceback (most recent call last): File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/contrib/admin/options.py", line 366, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/utils/decorators.py", line 91, in _wrapped_view response = view_func(request, *args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/views/decorators/cache.py", line 89, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/contrib/admin/sites.py", line 196, in inner return view(request, *args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/utils/decorators.py", line 25, in _wrapper return bound_func(*args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/utils/decorators.py", line 91, in _wrapped_view response = view_func(request, *args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/utils/decorators.py", line 21, in bound_func return func(self, *args2, **kwargs2) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/db/transaction.py", line 209, in inner return func(*args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/contrib/admin/options.py", line 1054, in change_view self.save_model(request, new_object, form, True) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/contrib/admin/options.py", line 709, in save_model obj.save() File "/homes/breeze/code/isbio/breeze/models.py", line 1188, in save self.institute = UserProfile.objects.get(pk=self.author_id).institute_info File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/db/models/manager.py", line 131, in get return self.get_query_set().get(*args, **kwargs) File "/homes/breeze/code/venv/local/lib/python2.7/site-packages/django/db/models/query.py", line 366, in get % self.model._meta.object_name) DoesNotExist: UserProfile matching query does not exist. ```
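The failing line in the trace is `self.institute = UserProfile.objects.get(pk=self.author_id).institute_info`, which raises `DoesNotExist` whenever no `UserProfile` row matches `author_id`. A minimal sketch of a conventional guard; the model name below is hypothetical (the real model is whatever defines `save` at breeze/models.py:1188), and the `None` fallback is an assumption:
```python
from django.db import models


class Report(models.Model):  # hypothetical name for the model in the traceback
    def save(self, *args, **kwargs):
        try:
            self.institute = UserProfile.objects.get(pk=self.author_id).institute_info
        except UserProfile.DoesNotExist:
            self.institute = None  # fallback behaviour is an assumption; log or skip as appropriate
        super(Report, self).save(*args, **kwargs)
```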
jlippold/tweakCompatible
706421261
Title: `Little11` working on iOS 14.0 Question: username_0:
```
{
  "packageId": "com.ryannair05.little11",
  "action": "working",
  "userInfo": {
    "arch32": false,
    "packageId": "com.ryannair05.little11",
    "deviceId": "iPhone8,1",
    "url": "http://cydia.saurik.com/package/com.ryannair05.little11/",
    "iOSVersion": "14.0",
    "packageVersionIndexed": false,
    "packageName": "Little11",
    "category": "Tweaks",
    "repository": "Packix",
    "name": "Little11",
    "installed": "1.5.1",
    "packageIndexed": true,
    "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
    "id": "com.ryannair05.little11",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.1.5",
    "shortDescription": "iPhone 11 gestures and more for iOS 13",
    "latest": "1.5.1",
    "author": "<NAME>",
    "packageStatus": "Unknown"
  },
  "base64": "<KEY>",
  "chosenStatus": "working",
  "notes": ""
}
```
Answers: username_1: This issue is being closed because your review was accepted into the tweakCompatible website. Tweak developers do not monitor or fix issues submitted via this repo. If you have an issue with a tweak, contact the developer via another method.
Status: Issue closed
jquery/download.jqueryui.com
165292582
Title: Error 504. Gateway time-out. Question: username_0: I haven't been able to download any scripts for the last two days. I need to download this combination: [http://jqueryui.com/download/#!version=1.12.0&components=111000000100100000000000010000000000000000000000]. Please do not send me to closed issue #313, because there is no solution for my case there: I downloaded the jquery-ui-themes-1.12.0 archive but I didn't find any JavaScript files in it. I followed the links and didn't find any other solutions there...
Status: Issue closed
Answers: username_1: Please give us some time to address the known issues. Opening more tickets isn't helping.
KeyBridge/lib-jose
328757934
Title: JSON Web Key (JWK) Thumbprint support Question: username_0: ``` Abstract This specification defines a method for computing a hash value over a JSON Web Key (JWK). It defines which fields in a JWK are used in the hash computation, the method of creating a canonical form for those fields, and how to convert the resulting Unicode string into a byte sequence to be hashed. The resulting hash value can be used for identifying or selecting the key represented by the JWK that is the subject of the thumbprint. ``` [RFC link](https://tools.ietf.org/html/rfc7638) Status: Issue closed Answers: username_1: won't fix
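The issue was closed as "won't fix", but for reference the RFC 7638 computation itself is small: build a JSON object containing only the required members for the key type, serialize it with lexicographically ordered keys and no whitespace, hash the UTF-8 bytes with SHA-256, and base64url-encode the digest. A sketch in Python rather than this library's Java:
```python
import base64
import hashlib
import json

# Required JWK members per key type (RFC 7638, section 3.2)
REQUIRED = {"RSA": ("e", "kty", "n"), "EC": ("crv", "kty", "x", "y"), "oct": ("k", "kty")}


def jwk_thumbprint(jwk):
    members = {name: jwk[name] for name in REQUIRED[jwk["kty"]]}
    # sort_keys gives lexicographic order; separators strip all whitespace
    canonical = json.dumps(members, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```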
internetofwater/WQP-Mapper
794438129
Title: possible vix element Question: username_0: another possible viz, from <NAME> • If we could incorporate an easy visualization of change in values of interest from two time points and see where sites are changing most, that would be extremely valuable to ECWA. Certainly, we don't have enough sites and data to do any sort of spatial interpolation, but could be colored points. For example, if we are interested in how much conductivity has changed from 2018 - 2021 at stream monitoring sites, displaying a map that has more change in red and yellow/green as less change. Something like that could be cool.
vais-ral/CCPi-Framework
410787040
Title: Scalar multiplication of Operator Question: username_0: We need to be able to do scalar multiplication (from the left) on an operator. For example, this is needed in the generic Tikhonov formulation. As we do not necessarily have the operator's elements present, and multiplying the scalar onto all operator elements would not be efficient anyway, we need to handle this carefully.

One approach would be to introduce a new attribute to the Operator class to hold a scalar that will be multiplied on as part of the calls to direct, adjoint, norm. This is similar to the `c` constant in the `norm2sq` function. By default it would be 1 or empty and only be applied if non-one/nonempty. If an Operator already has such a non-one scalar, and the user wants to multiply with another scalar, then the scalar should be updated to the product of the existing scalar and the new scalar.
Answers: username_1: I think this would be nice:
```python
c = 0.5
op = c * TomoIdentity(geometry)
```
and this would return another Operator with the internal scalar value set to `c`. What do you think?
username_0: Agree.
username_1: Notice that this would force you to take care of the presence of the scalar value inside the `Operator`
username_1: currently implemented [here](https://github.com/vais-ral/CCPi-Framework/blob/algorithm_class/Wrappers/Python/ccpi/optimisation/ops.py#L56)
username_0: Good start. As I see it, an operator will now be initialised to have the scalar attribute set to the default of 1. Then if one does a multiplication from the left by a scalar, the scalar attribute will be overwritten by the scalar input "other". This is fine the first time, but if one does another multiplication, it will again overwrite, when it really should multiply the existing one.
Another thing, which I don't see implemented yet, is the use of this scalar. It needs to be multiplied onto the output of both the direct and adjoint methods. Specifically, if we have an operator B=c*A, then we could evaluate B.direct(x) as c*A.direct(x), and similarly for the adjoint. I think this would belong in the general Operator definition of direct and adjoint, so that one does not need to worry about this when implementing specific operators. Also, if the scalar remains the default value of 1, then we would prefer to avoid carrying out the multiplication by one, so perhaps there should be an "if" to only do it if the scalar is different from 1.
When implementing the operator "norm" method, the scalar also needs to be taken into account. Will write about this separately.
username_1: I am a bit worried for the CIL/SIRF integration as they won't have the scalar in there.
username_0: I am thinking it is okay. This scalar is only relevant in our optimisation framework. Our Operator will hold a default of 1, which will be the correct choice if we wrap their projectors in an Operator. We can then actually change to non-one and do Tikhonov for SIRF.
username_1: The problem is that with @kristhielemans we decided to make sure that SIRF's `AcquisitionModel` behaves like an `Operator`. This change has to happen in a way that SIRF won't need to update anything.
username_0: Ok, I think we can do this instead. We do not have a scalar attribute in the Operator. Instead we implement a special ScaledOperator which takes the scalar and Operator as inputs when constructing, and for direct/adjoint returns the scalar times the (unscaled) Operator's direct/adjoint.
That will in fact be more elegant since we avoid various ifs to check whether the scalar attribute is 1 and therefore shouldn't be used. username_1: I guess this'll work with SIRF classes too. username_2: I think indeed much cleaner. username_0: I should think so, yes. Similarly, later on, should we need it, we could implement operator addition/subtract/multiplication... Status: Issue closed
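A minimal sketch of that `ScaledOperator` idea; the base-class interface here is a stand-in for illustration, not the framework's actual class:
```python
class Operator(object):
    """Stand-in interface: concrete operators implement these methods."""
    def direct(self, x):
        raise NotImplementedError
    def adjoint(self, x):
        raise NotImplementedError
    def norm(self):
        raise NotImplementedError
    def __rmul__(self, scalar):
        # Enables `c * A` without touching the wrapped operator's elements
        return ScaledOperator(scalar, self)


class ScaledOperator(Operator):
    """Represents scalar * A lazily; A itself carries no scalar attribute."""
    def __init__(self, scalar, operator):
        self.scalar = scalar
        self.operator = operator
    def direct(self, x):
        return self.scalar * self.operator.direct(x)
    def adjoint(self, x):
        return self.scalar * self.operator.adjoint(x)
    def norm(self):
        # ||c A|| = |c| * ||A||
        return abs(self.scalar) * self.operator.norm()
```
Because `ScaledOperator` inherits `__rmul__`, a second multiplication such as `c2 * (c * A)` simply nests, which realises the product-of-scalars behaviour discussed above without any special-casing.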
prometheus/prometheus
192991985
Title: SIGPIPE killed prometheus and alertmanager (journald restart) Question: username_0: **What did you do?**
restarted systemd-journald.service

**What did you expect to see?**
prometheus should not be killed

**What did you see instead? Under which circumstances?**
prometheus was killed after I restarted the above service. Only prometheus, alertmanager, grafana and the [prometheus mailexporter](https://github.com/cherti/mailexporter) were killed, on different servers.

**Environment**
* System information:
Linux 3.16.0-4-amd64 x86_64
* Prometheus version:
prometheus, version 0.20.0 (branch: master, revision: f8bb0ee)
  build user: <EMAIL>
  build date: 20160710-00:19:46
  go version: go1.6.2
* Alertmanager version:
alertmanager, version 0.3.0 (branch: master, revision: d263b7ab9aa8ae6213b45ba3c959d02c9600955c)
  build user: j<EMAIL>
  build date: 20160905-16:00:39
  go version: go1.7
* Prometheus configuration file:
I think this is not relevant.
* Alertmanager configuration file:
I think this is not relevant.
* Logs:
```
Dec 01 17:54:40 hostname systemd[1]: prometheus.service holdoff time over, scheduling restart.
Dec 01 17:54:40 hostname systemd[1]: Stopping Prometheus Server Instance...
Dec 01 17:54:40 hostname systemd[1]: Starting Prometheus Server Instance...
Dec 01 17:54:40 hostname systemd[1]: Started Prometheus Server Instance.
```
```
Dec 01 18:03:35 hostname systemd[1]: Starting Prometheus Alertmanager Server Instance...
Dec 01 18:03:35 hostname systemd[1]: Started Prometheus Alertmanager Server Instance.
```
The difference between prometheus.service and prometheus-alertmanager.service is that the following option is set in the prometheus unit file:
```
Restart=always
```
Answers: username_1: Can't reproduce. Please reopen if you have evidence this is a bug in Prometheus.
Status: Issue closed
mengyushi/LeetCode
543274007
Title: 707. Design Linked List Question: username_0:
```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None


class MyLinkedList:
    '''
    head->1->3->...6->None
                  tail
    '''
    def __init__(self):
        """
        Initialize your data structure here.
        """
        self.head = Node(0)  # sentinel node; real elements start at head.next
        self.tail = None
        self.size = 0

    def get(self, index: int) -> int:
        """
        Get the value of the index-th node in the linked list. If the index is invalid, return -1.
        """
        if index < 0 or index >= self.size:
            return -1
        pointer = self.head.next
        if not index:
            return self.head.next.val
        # walk to the node just before the requested index
        while index - 1:
            pointer = pointer.next
            index -= 1
        return pointer.next.val

    def addAtHead(self, val: int) -> None:
        """
        Add a node of value val before the first element of the linked list. After the insertion, the new node will be the first node of the linked list.
        """
        n = Node(val)
        pre_head = self.head.next
        self.head.next = n
        n.next = pre_head
        if not self.size:
            self.tail = n
        self.size += 1

    def addAtTail(self, val: int) -> None:
        """
        Append a node of value val to the last element of the linked list.
[Truncated]
        # walk to the node just before the one being removed
        pointer = self.head.next
        while index - 1:
            pointer = pointer.next
            index -= 1
        pointer.next = pointer.next.next
        self.size -= 1
        return


# Your MyLinkedList object will be instantiated and called as such:
# obj = MyLinkedList()
# param_1 = obj.get(index)
# obj.addAtHead(val)
# obj.addAtTail(val)
# obj.addAtIndex(index,val)
# obj.deleteAtIndex(index)
```
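A short usage sketch exercising only the methods whose bodies are fully visible above (it assumes the truncated part of `deleteAtIndex` holds nothing beyond the usual bounds check):
```python
ll = MyLinkedList()
ll.addAtHead(3)
ll.addAtHead(2)
ll.addAtHead(1)          # list is now 1 -> 2 -> 3
assert ll.get(0) == 1
assert ll.get(2) == 3
ll.deleteAtIndex(1)      # list becomes 1 -> 3
assert ll.get(1) == 3
assert ll.get(2) == -1   # out-of-range index returns -1
```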
spapadopoulos/EnergyPlusOpt
1170592280
Title: Index exceeding matrix Dimension Question: username_0: I have tried to run the program using building.idf file and San Francisco Weather data file, and after running it's giving index exceeds matrix dimensions. Can you look into the problem or can you upload the Baltimore_MD File which you have provided for the simulation? ![image](https://user-images.githubusercontent.com/55843800/158528388-9c05d6c9-ff7b-4708-bb0b-4be81cc30516.png)
jlippold/tweakCompatible
418949076
Title: `LS EW110` working on iOS 12.1.1 Question: username_0:
```
{
  "packageId": "com.evynw.lsew110",
  "action": "working",
  "userInfo": {
    "arch32": false,
    "packageId": "com.evynw.lsew110",
    "deviceId": "iPhone10,4",
    "url": "http://cydia.saurik.com/package/com.evynw.lsew110/",
    "iOSVersion": "12.1.1",
    "packageVersionIndexed": false,
    "packageName": "LS EW110",
    "category": "Lockscreen Widgets",
    "repository": "Evelyn's Collection",
    "name": "LS EW110",
    "installed": "1.0",
    "packageIndexed": false,
    "packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
    "id": "com.evynw.lsew110",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.1.4",
    "shortDescription": "[Enter generic LS widget description ^_^]",
    "latest": "1.0",
    "author": "Evelyn (ev_ynw)",
    "packageStatus": "Unknown"
  },
  "base64": "<KEY>",
  "chosenStatus": "working",
  "notes": ""
}
```
Status: Issue closed
Plazide/node-twitch
933588301
Title: Wrong JSON.parse on request to Twitch Question: username_0: Hello, my
Answers: username_1: Hey, thanks for reporting. I cannot reproduce. Could you provide the error message and an example of the parameters that cause this issue? I'm a bit confused as to what you are saying, but if you mean that the issue has disappeared, it might have been a temporary error with the Twitch API. I don't know, though.
username_0: Yeah! I'm sorry for the wait.
![screenshot](https://cdn.discordapp.com/attachments/794480379796520971/859730046074748928/unknown.png)
username_1: Okay, that's strange. There are two errors with the query.

1. There should not be two ampersands (&)
2. Channel is not a valid option for the streams endpoint

It seems like there is something wrong with your options for the `getStreams` method. Check your code where you are calling `twitch.getStreams`, it should look like this:
```js
twitch.getStreams({ channel: "ultrasaucisse" })
```
username_0: I have this code:
```js
const stream = await twitch.getStreams({ channel: stmr[0] });
```
stmr[0] is the channel name. It's strange because I don't get any errors, but my client has a VPS and he gets this error. I don't know why; it's just strange. Thank you anyway.
username_1: Okay, so that query is actually normal, but still a bug. I just checked the output of the query when running the tests (which are passing), and it looks exactly like the one in your request. That means the error is not with the query.

The error indicates that the result was an HTML file (based on the `<`). Twitch should never return HTML files, they always return JSON. Based on that, it's either a temporary Twitch error or an incorrectly configured VPS (i.e. not allowing outbound calls). It doesn't seem to be an error with the package, but let me know if you find something that suggests otherwise.
Status: Issue closed
username_0: I think you're right. I'm not sure whether it's possible with node-fetch, if that's what this package uses, because I've never looked too closely. But why not get the HTML code with `res.text()` and then do a typeof on the response to see whether it is an object or not? That could rule out that kind of mistake. But thank you for the verification; that explanation is very likely.
dask/dask
327800140
Title: Bag items silently disappear on error when mapping Question: username_0: StopIteration Traceback
Answers: username_1: Thanks for opening up this issue @username_0! I was able to reproduce your issue using the current dask `master` (i.e. version `0.17.4+38.ge1c48e0c.dirty`) with Python 3.6.4.

Modifying your example slightly, it looks like the dask bag `map` method ignores just the single function call that raises a `StopIteration`, while the builtin `map` function is truncated after the `StopIteration` exception is raised.

<details>
<summary> Code snippet:</summary>

```python
from dask import bag

bins = [10, 40, 100, 1000]
discretize = lambda price: next(i for i, v in enumerate(bins) if v > price)

seq = [12.49, 22.19, 39.99, 49.00, 1000.00, 50.55, 220.00]
print('len(seq) = {}'.format(len(seq)))

# Using dask bag map
db = bag.from_sequence(seq)
dask_map_discretize = db.map(discretize).compute()
print('\ndask_map_discretize = {}'.format(dask_map_discretize))
print('len(dask_map_discretize) = {}'.format(len(dask_map_discretize)))

# Using builtin Python map
map_discretize = list(map(discretize, seq))
print('\nmap_discretize = {}'.format(map_discretize))
print('len(map_discretize) = {}'.format(len(map_discretize)))
```
</details>
<br>
<details>
<summary>Python 3.6.4 output: </summary>

```
len(seq) = 7

dask_map_discretize = [1, 1, 1, 2, 2, 3]
len(dask_map_discretize) = 6

map_discretize = [1, 1, 1, 2]
len(map_discretize) = 4
```
</details>
<br>

I'm not exactly sure how this should be addressed. Perhaps a try-except to catch if a mapped function call raises a `StopIteration`? Maybe others can comment on what they think should be done here.
username_0: Well, my opinion is that Dask should adhere to its standard behavior of propagating exceptions from user-provided functions up the call stack. It was quite a daunting experience to have Dask just silently eat away items from the bag, after a long chain of commands, with no insight as to why this was happening. The issue was discovered by accident, and only because the result was obscenely wrong (it returned 60 items instead of 11k); it would be very hard to detect this in a standard test case, and even harder to debug.
username_2: I agree that this is a bug and should be resolved. If anyone has time to look into this that would be quite welcome.
username_1: @username_2 I'd be happy to help out with this :)
username_2: I've really appreciated you stepping up recently to help answer questions like this. Please let me know if there is anything I can do to encourage your involvement in the future :)
username_0: My guess is that Dask is probably using the `StopIteration` internally to signal that the bag is done processing. If you guys need any help let me know, as long as I can get a kickstart as to where I should start looking.
username_3: Is this a non-issue for Python 3.7 and above, due to https://www.python.org/dev/peps/pep-0479/?
With the original code, I get ```python In [4]: >>> len(db.map(discretize).compute()) # notice how length changes ...: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-6e1c9c9c3bf9> in <module> ----> 1 len(db.map(discretize).compute()) # notice how length changes ~/sandbox/dask/dask/base.py in compute(self, **kwargs) 154 dask.base.compute 155 """ --> 156 (result,) = compute(self, traverse=False, **kwargs) 157 return result 158 ~/sandbox/dask/dask/base.py in compute(*args, **kwargs) 396 keys = [x.__dask_keys__() for x in collections] 397 postcomputes = [x.__dask_postcompute__() for x in collections] --> 398 results = schedule(dsk, keys, **kwargs) 399 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)]) 400 ~/sandbox/dask/dask/multiprocessing.py in get(dsk, keys, num_workers, func_loads, func_dumps, optimize_graph, pool, **kwargs) 190 get_id=_process_get_id, dumps=dumps, loads=loads, 191 pack_exception=pack_exception, --> 192 raise_exception=reraise, **kwargs) 193 finally: 194 if cleanup: ~/sandbox/dask/dask/local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs) 460 _execute_task(task, data) # Re-execute locally 461 else: --> 462 raise_exception(exc, tb) 463 res, worker_id = loads(res_info) 464 state['cache'][key] = res ~/sandbox/dask/dask/compatibility.py in reraise(exc, tb) 109 def reraise(exc, tb=None): 110 if exc.__traceback__ is not tb: --> 111 raise exc.with_traceback(tb) 112 raise exc 113 ~/sandbox/dask/dask/local.py in execute_task() 228 try: 229 task, data = loads(task_info) --> 230 result = _execute_task(task, data) 231 id = get_id() 232 result = dumps((result, id)) ~/sandbox/dask/dask/core.py in _execute_task() 117 func, args = arg[0], arg[1:] 118 args2 = [_execute_task(a, cache) for a in args] --> 119 return func(*args2) 120 elif not ishashable(arg): 121 return arg ~/sandbox/dask/dask/bag/core.py in reify() 1589 def reify(seq): 1590 if isinstance(seq, Iterator): -> 1591 seq = list(seq) 1592 if seq and isinstance(seq[0], Iterator): 1593 seq = list(map(list, seq)) RuntimeError: generator raised StopIteration ```
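As username_3's output shows, PEP 479 (Python 3.7+) turns the leaked `StopIteration` into a `RuntimeError` rather than silently dropping items. Independent of any scheduler fix, the mapped function itself can avoid the problem by never letting `StopIteration` escape; a sketch of a safer `discretize` for the example above (note that the `1000.00` item, which has no bin with `v > price`, is exactly the call that raised):
```python
bins = [10, 40, 100, 1000]


def discretize(price):
    for i, v in enumerate(bins):
        if v > price:
            return i
    # raise an ordinary exception instead of leaking StopIteration via next()
    raise ValueError("price {} has no matching bin".format(price))
```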
EDDiscovery/EDDiscovery
642398453
Title: Estimated value not updating after full scan Question: username_0: Hi there, I have updated from version 10.* to the current release 11.5.3. I was used to the est. value field in the estimated value window updating after I scanned a body. Now it doesn't, and instead stays on the initial value. Is this new behavior normal, or is something wrong?

Best Regards, Rasti
Answers: username_1: From the look of things, it will update on a new entry or a move of position on the history panel. I'll have to check it out.
username_0: Yes, that is what it used to do: update the value and move the scanned entry to the top. If there is anything I can do to help you investigate, let me know; I will be back online at 20:00 CEST.
username_1: So, this was changed so we compute the estimated values of the body without scan, with discovered, with mapped, etc. You can see all the values in the report that is given on a body in the scan panel. The estimated value shows the best estimate of the maximum value you can get, based on the info we have. So it won't change if it's not mapped and you then map it, since that is already taken into account. We had to pick one value and we picked the best result you can get. Hope this explains it.

rob
Status: Issue closed
username_0: thanks for the info!
department-of-veterans-affairs/caseflow
249665967
Title: [Backend] Hearing Worksheet | Update database columns Question: username_0: Add new columns such that we can record whether a judge marks an issue as either allow or deny.

#### Acceptance Criteria
- Delete the current `hearing_worksheet_status` and related code from the Issue model/table
- Add 4 new boolean columns to the `Issue` table: `allow`, `deny`, `remand`, and `dismiss`
Status: Issue closed
Answers: username_1: **PASSED**
Crappy AC leads to crappy validation:
```
#<ActiveRecord::Relation [#<Issue id: 1, appeal_id: nil, vacols_sequence_id: nil, hearing_worksheet_reopen: false, hearing_worksheet_vha: false, allow: nil, deny: nil, remand: nil, dismiss: nil>]>
```
firasdib/Regex101
275177476
Title: Sometimes, escaped string is what you need Question: username_0: I found this closed issue: #392
But if I have a JS/TS system that stores regexes as strings, I feel uneasy using the Java code generator - what if I introduce some hard-to-find bug by relying on the wrong tool?
Also, I see this as a feature separate from testing boilerplate generation. I'd like it to be a preview/copy feature somewhere on the main page.
Or, as a minimal passable solution, for languages that may need it, at least add the escaped string as a comment in the generated code.
Answers: username_1: Why are you using the Java code generator for JavaScript/TypeScript? What is wrong with using the Javascript code generator? Or am I understanding your suggestion wrong.
username_0: @username_1 The JavaScript code generator produces a native JavaScript RegExp (the same as written in the editor page). But I need a string that will be stored and then used to reconstruct the RegExp at runtime with `new RegExp(regexString, flagsString)`.
username_2: @username_0 Why?
Status: Issue closed
username_0: Example: an Electron-based app stores include/exclude and some other patterns as regex strings in a json file. Many vscode extensions store include/exclude patterns like this, and there are some internals there using json with regexes for other needs.
username_2: I see. Well there is no reason for you to actually use this if you're coding javascript, so the code generated by the site is the best you can use (for javascript). If you really need escaped strings, you could try copying the Java flavor's regex output.
Status: Issue closed
username_0: That's really sad to hear; you don't see my point. Committing to a big repository and trying not to cause issues for a lot of people is the reason why I want the right tool to do escaping and to avoid:
* mistakes from manual escaping;
* mistakes from differences between languages.
JavaScript is not Java. Even though the regex syntax is mostly the same in simple cases, I have to be always cautious not to run into issues in more complex cases.
I can't change the big system I'm just contributing small things to. And it is really fine the way it is.
Your refusal to support a full-fledged language feature just because you can't agree there is a use for it is what upsets me most.
username_2: @username_0 The code generator does not manipulate your regex besides escaping stuff properly. So the Java regex, if developed in the js flavor, will work fine in JS. I'm not denying you anything; you're asking me to add another entry which uses `new RegExp()` instead of the regex literals, which I don't agree with. I don't think you should use the constructor unless you have a very good reason, especially since the regex literals are easier to read, to work with and (used to?) offer a performance boost.
username_0: I explained my reason. I think it's good enough. You can't pass regex literals around in json. And now I think I should be concerned about the quality of the generated Java code instead...
electron/electron
583544593
Title: Object passed via contextBridge equality check fails Question: username_0: <!-- As an open source project with a dedicated but small maintainer team, it can sometimes take a long time for issues to be addressed so please be patient and we will get back to you as soon as we can. --> ### Preflight Checklist <!-- Please ensure you've completed the following steps by replacing [ ] with [x]--> * [ ] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project. * [ ] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to. * [ ] I have searched the issue tracker for an issue that matches the one I want to file, without success. ### Issue Details * **Electron Version:** * 8.1.1 * **Operating System:** * Windows 10 * **Last Known Working Electron version:** * 8.0.2 ### Expected Behavior <!-- A clear and concise description of what you expected to happen. --> Same object passed via contextBridge allows equality check ### Actual Behavior <!-- A clear and concise description of what actually happened. --> Equality check fails ### To Reproduce <!-- Your best chance of getting this bug looked at quickly is to provide an example. --> - clone https://github.com/username_0/ctx-bridge-reference - `npm run prev` : shows `true` in console - `npm run curr` : shows `false` in console When preload script exposes fn to return certain object's reference (i.e array), ``` // preload.js const ref = [] contextBridge.exposeInMainWorld('desktop', { store: { getState: () => ({ a: ref }) } }); ``` Renderer process could check if object is same or not: ``` const y = window.desktop.store.getState().a; const state = { noti: window.desktop.store.getState().a } [Truncated] <!-- If Fiddle is insufficient to produce an example, please provide an example REPOSITORY that can be cloned and run. You can fork electron-quick-start (https://github.com/electron/electron-quick-start) and include a link to the branch with your changes. --> <!-- If you provide a URL, please list the commands required to clone/setup/run your repo e.g. ```sh $ git clone $YOUR_URL -b $BRANCH $ npm install $ npm start || electron . ``` --> ### Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ![image](https://user-images.githubusercontent.com/1210596/76938775-75eba180-68b4-11ea-93ec-7caed7e9b53f.png) ### Additional Information <!-- Add any other context about the problem here. --> Answers: username_0: This issue is more of to get clarification if this is expected behavior, not requesting to change any behavior in electron. username_1: This is expected. Object identity was never part of the API spec and by nature of it being present it caused significant issues when mutated objects were sent multiple times across the bridge. i.e. if you sent an object over the bridge, modified a property and then sent it again, it wouldn't know about the updated property and would just use the old cached version. This was incredibly unpredictable and fundamentally unsolvable so we reduced object identity caching to per-transaction so that recursive objects still work but multiple transactions will generate multiple objects. This aligns with how IPC and postMessage and other cross-boundary APIs work. Status: Issue closed
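Since identity caching is per-transaction, each call across the bridge yields a fresh copy, so comparisons between calls have to be by value, or the value has to be read once and kept on the renderer side. A minimal sketch, assuming the `window.desktop` API from the report above:

```js
// Two separate bridge calls return two distinct copies.
const first = window.desktop.store.getState().a;
const second = window.desktop.store.getState().a;
console.log(first === second);                                  // false: new copy per call
console.log(JSON.stringify(first) === JSON.stringify(second));  // true: same contents

// Within a single transaction, identity still holds: read once, reuse the copy.
const snapshot = window.desktop.store.getState();
const state = { noti: snapshot.a };
console.log(state.noti === snapshot.a);  // true: same renderer-side object
```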
rapidsai/dask-cugraph
463424792
Title: [FEA] Return vertex IDs along with pagerank result Question: username_0: Since the whole pagerank is on GPU 0 after the C++ call we should just generate the corresponding sequence [0,V_global) Answers: username_0: The change should be in cuGraph actually. Option 1 (faster to run) call from python `cugraph::sequence<int>((int)num_vertices, (int*)identifiers->data);` Option 2 (slow to run but faster to implement) ` x['vertex'] = np.ones(pr_ptr.size,dtype=int)` -> ` x['vertex'] = np.arange(pr_ptr.size,dtype=int)` username_0: Option 1 was implemented. Keeping this open until we have profiling results in case we need option 2 for performance reasons.
Workfront/workfront-api
111916593
Title: Implement Upload
Question: username_0: @username_1 I love what you've done so far. I find that I need to make use of the upload feature to attach some documents to some issues for my script. I'm assuming that you're okay with it, as you have a stub out there. How would you recommend I proceed? Or should I just start working on a PR?
Answers: username_1: @username_0, the document upload feature was also mentioned multiple times on Stack Overflow and is certainly a feature the customers will love. I was also thinking about implementing it, I just didn't manage to find time to do that. If you have time to work on the implementation, I'd be happy to accept your PR :)
I have some thoughts regarding how it should be implemented:
1. A new file named `upload.js` should be added to the `src/plugins/` folder. It will contain the method for uploading a file and getting its handle.
2. Document the new method with JSDoc. The JSDoc comment will be used to generate API docs afterwards.
3. The file `upload.js` should be excluded from the browser bundle, because upload can't work in browser-based environments. In order to exclude it, tweak the `build` task in gulpfile.js.
4. Make sure to add an example in the `examples/node/` folder illustrating usage of the new method.
5. I was thinking about using the [form-data](https://www.npmjs.com/package/form-data) package, but if you have a better choice, feel free to use it.
username_0: Thanks @username_1! I'll start working on it now!
Status: Issue closed
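A rough sketch of what step 1 could look like with the form-data package; the endpoint path, method name, and response shape here are assumptions for illustration, not the final Workfront API surface:

```js
// upload.js: hypothetical sketch of a file upload returning a document handle.
var fs = require('fs');
var FormData = require('form-data');

function upload(apiUrl, filePath, callback) {
    var form = new FormData();
    form.append('uploadedFile', fs.createReadStream(filePath));
    // form.submit() posts multipart/form-data; it cannot run in browsers,
    // hence the exclusion from the browser bundle mentioned in step 3.
    form.submit(apiUrl + '/upload', function (err, res) {
        if (err) return callback(err);
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            callback(null, JSON.parse(body)); // assumed to contain the file handle
        });
    });
}

module.exports = upload;
```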
PyTables/PyTables
349432918
Title: ENH: BUG: multidimensional tables are not supported
Question: username_0: ### Problem

Given an HDF5 file with a multidimensional table, when opened with PyTables

* the table dimensions are incorrect
* accessing data in the table
  - raises an exception on Windows
  - on macOS causes an `Abort trap: 6` and quits Python unexpectedly

```
HDF5ExtError: HDF5 error back trace

  File "C:\ci\hdf5_1525883595717\work\src\H5Dio.c", line 216, in H5Dread
    can't read data
  File "C:\ci\hdf5_1525883595717\work\src\H5Dio.c", line 471, in H5D__read
    src and dest data spaces have different sizes

End of HDF5 error back trace

Problems reading records.
```

### reproducible example

```python
import h5py
import tables
import numpy as np

x = np.array([[(1, 2, 3), (4, 5, 6)],
              [(7, 8, 9), (10, 11, 12)]],
             dtype=[('a', float), ('b', float), ('c', float)])
x.shape  # (2, 2)

with h5py.File('multidim_table.h5','w') as f:
    f['x'] = x

y = tables.open_file('multidim_table.h5')  # with or without read mode
# Abort trap: 6
```

### relevant info
I reported this issue in the [PyTables-users Google group](https://groups.google.com/forum/#!topic/pytables-users/I81IgIDUOO0).

### versions
tables-3.4.4
python-3.6.6 (Anaconda)
h5py-2.8.0
macOS-10.13.6
numpy-1.15.0
Status: Issue closed
Answers: username_1: Yes, multidimensional tables have never been supported in PyTables and will likely stay like this for the foreseeable future (unless there is a nice PR contributing it). Closing for now.
atom-community/sync-settings
599245236
Title: sync-settings: Error restoring settings
Question: username_0: When trying to do a restore, I get the error "packages.filter is not a function".

![Screen Shot 2020-04-13 at 8 08 19 PM](https://user-images.githubusercontent.com/2837721/79178467-9c70ff80-7dc2-11ea-91ae-b7b88f13e448.png)
Answers: username_1: What version of Atom are you using?
username_1: Also, what version of sync-settings are you using?
username_1: It looks like that error message is from before version 4. Try updating sync-settings and try it again. If it still gives you an error message, feel free to reopen this issue.
Status: Issue closed
username_0: Updating sync-settings did the trick. Sorry, I should have realized! Thanks!
flutter/flutter
606647777
Title: Allow platforms to save user input for future autofill
Question: username_0: Continuation of https://github.com/flutter/flutter/issues/13015.

Android and many web browsers save user input for future autofill when a form is finalized.

Android Documentation: https://developer.android.com/guide/topics/text/autofill-optimize#ensure
Answers: username_1: Thank you for your efforts. Will it be available in the first quarter of 2020? We are all looking forward to it.
username_2: Note: Required for the web implementation of autofill.
username_3: Hi @username_0, did you have time to have a look at this issue? Please let me know if there are any parts I can help with.
username_4: Assigning to @username_0 since you have a PR out for this.
username_5: Is autofill save now available for iOS and Android? If yes, how do I enable autofill save for iOS?
username_0: @username_5 you should be able to manually trigger it by calling `TextInput.finishAutofillContext()` (or `TextInput.finishAutofillContext(shouldSave: false)` if you wish to create a new context without saving the current user input).
username_6: Those methods are not available for me, and I just upgraded my Flutter to stable 1.20. What am I doing wrong?
username_7: @username_6 The same goes for me... I too was not able to use those methods
username_8: Looks like it's been merged to `master`, not yet in `stable`. You'll need to switch channels if you require the autofill save feature immediately
username_0: As username_8 pointed out, this was merged recently, so it's not part of the 1.20 release.
username_6: I might be wrong, but since there is no way to save user input right now, there is no reason to use autofill right now, right? Or am I wrong?
username_9: The SMS code autofill on iOS is really nice and works now. If you use an external password manager where you add an entry manually, that works too.
username_0: @username_3 I think we can close this one now that the web PR has been merged?
username_3: Yes, that is correct. Thanks for letting me know; I closed the [web issue](https://github.com/flutter/flutter/issues/59378) but forgot about this one.
Status: Issue closed
renode/renode
982416986
Title: How to load more than one elf or binary file using renode? Answers: username_1: It's actually quite simple - you just repeat the command. For binaries you have to provide load addresses, for ELFs that's not required. Please note that Renode will automatically set starting PC based on the **last** ELF you loaded, so keep this in mind. So ``` sysbus LoadELF @pathToElf sysbus LoadBinary @pathToBinary 0x1000 sysbus LoadELF @pathToElfThatWillSetThePC ``` You can see an example in the [ARVSOM script](https://github.com/renode/renode/blob/master/scripts/single-node/arvsom.resc) username_1: I'll close this issue but feel free to reopen or file a new one if you have additional questions Status: Issue closed
stormpath/stormpath-rails
125085767
Title: NoMethodError in Stormpath::Rails::SessionsController#create
Question: username_0: ruby 2.2.3p173 (2015-08-18 revision 51636) [x86_64-darwin14]
Rails 4.2.5
![46220b61-cff8-4a1c-9ab4-b220cc2a10c5](https://cloud.githubusercontent.com/assets/3199093/12131930/b3fdb50a-b3cb-11e5-88b6-229f31b7d92d.png)
![47b05a2f-4ffe-47d6-baf2-be5a32520187](https://cloud.githubusercontent.com/assets/3199093/12131996/5cb026ec-b3cc-11e5-8e24-18275c41c6b7.png)
Status: Issue closed
Answers: username_1: That was the old integration; we are out with 2.0.0 now. Closing because it's not relevant anymore.
element-plus/element-plus
1147548554
Title: [Bug Report] el-dialog not working in nuxt3
Question: username_0: <!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->

### Element Plus version
2.0.2

### OS/Browsers version
Irrelevant

### Vue version
3.2.31

### Reproduction Link
https://stackblitz.com/edit/nuxt-starter-q7pamn?file=components/Navbar.vue

### Steps to reproduce
Click the Open Dialog button

### What is Expected?
The Dialog to open on screen

### What is actually happening?
Nothing. I saw it adds `.el-popup-parent--hidden` to the body, but no Dialog is rendered

<!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
Answers: username_1: Please try this project https://github.com/element-plus/element-plus-nuxt-starter
username_0: I get the same issue when cloning the base `element-plus-nuxt-starter` and calling the dialog
username_1: set `append-to-body`
<img width="1116" alt="image" src="https://user-images.githubusercontent.com/44761321/155258705-1305b7a1-fb48-4f82-873c-b5518e503dcf.png">
username_0: Got it working! Thank you so much!
Status: Issue closed
username_0: <!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->

### Element Plus version
2.0.2

### OS/Browsers version
Irrelevant

### Vue version
3.2.31

### Reproduction Link
https://stackblitz.com/edit/nuxt-starter-q7pamn?file=components/Navbar.vue

### Steps to reproduce
Click the Open Dialog button

### What is Expected?
The Dialog to open on screen

### What is actually happening?
Nothing. I saw it adds `.el-popup-parent--hidden` to the body, but no Dialog is rendered

<!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
username_2: You can temporarily wrap it in ClientOnly; we will focus on this issue

```vue
<ClientOnly>
  <el-dialog v-model="state.dialogVisible" title="Test">
    <div class="custom">Hello</div>
  </el-dialog>
</ClientOnly>
```
project-oak/oak
604662411
Title: Add `tonic` as a crate in `cargo-raze` Question: username_0: In order to finish https://github.com/project-oak/oak/issues/806 we need to add [`tonic`](https://docs.rs/tonic/0.2.0/tonic/) as a dependency using [`cargo-raze`](https://github.com/google/cargo-raze). cc @username_1 @daviddrysdale Answers: username_0: First there were the following errors: ```shell error: couldn't read external/raze__ring__0_16_12/src/ec/curve25519/ed25519/ed25519_pkcs8_v2_template.der: No such file or directory (os error 2) --> external/raze__ring__0_16_12/src/ec/curve25519/ed25519/signing.rs:266:12 | 266 | bytes: include_bytes!("ed25519_pkcs8_v2_template.der"), | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: couldn't read external/raze__webpki__0_21_2/src/data/alg-ecdsa-p256.der: No such file or directory (os error 2) --> external/raze__webpki__0_21_2/src/signed_data.rs:292:43 | 292 | asn1_id_value: untrusted::Input::from(include_bytes!("data/alg-ecdsa-p256.der")), | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` The reason was that `cargo-raze` does not automatically add non-`*.rs` files to the Bazel workspace. The problem was solved by adding the following lines to `cargo/Cargo.toml`: ```toml [raze.crates.ring.'0.16.12'] data_attr = "glob([\"**/*.der\"])" [raze.crates.webpki.'0.21.2'] data_attr = "glob([\"**/*.der\"])" ``` They add `data` attributes to `rust_library` rules in the generated `BUILD` files. username_0: Currently there are 2 types of failures: ```shell Use --sandbox_debug to see verbose messages from the sandbox error[E0432]: unresolved import `prost1` --> external/raze__tonic__0_2_0/src/codec/prost.rs:4:5 | 4 | use prost1::Message; | ^^^^^^ use of undeclared type or module `prost1` ``` ```shell error[E0599]: no method named `next` found for struct `std::pin::Pin<&mut _>` in the current scope --> external/raze__tonic__0_2_0/src/codec/encode.rs:47:5 | 47 | / async_stream::stream! { 48 | | let mut buf = BytesMut::with_capacity(BUFFER_SIZE); 49 | | futures_util::pin_mut!(source); 50 | | ... | 74 | | } 75 | | } | |_____^ method not found in `std::pin::Pin<&mut _>` | = note: `$ crate :: AsyncStreamHack` is a function, perhaps you wish to call it = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info) ``` username_0: The first error is caused by an alias for `prost` library, defined in `tonic`: ```toml [features] prost = ["prost1", "prost-derive"] [dependencies] prost1 = { package = "prost", version = "0.6", optional = true } prost-derive = { version = "0.6", optional = true } ``` https://github.com/hyperium/tonic/blob/6f378e2bd0cdf3a1a3df87e1feff842a8a599142/tonic/Cargo.toml#L38 https://github.com/hyperium/tonic/blob/6f378e2bd0cdf3a1a3df87e1feff842a8a599142/tonic/Cargo.toml#L60 I have updated Bazel `rules_rust` library to the latest version (that should [support crate aliases](https://github.com/bazelbuild/rules_rust/pull/285)): ```starlark http_archive( name = "io_bazel_rules_rust", sha256 = "275f0166e61e6cad3e29b0e37c21ecbb66880c049dbeea6e574d74a8ec4775c5", strip_prefix = "rules_rust-e285f2bd8be77712e6b80ccb52918b727d10d70e", urls = [ # Master branch as of 2020-04-21. "https://github.com/bazelbuild/rules_rust/archive/e285f2bd8be77712e6b80ccb52918b727d10d70e.tar.gz", ], ) ``` But there is no way to automatically generate an `alias` attribute to `rust_library`, because the current version of `cargo-raze` [does not support crate aliases](https://github.com/google/cargo-raze/pull/123). 
Even if I manually add an alias to a generated `BUILD` file: ```starlark aliases = { ":prost1": "prost", } ``` But it leads to the following error: ```shell (10:55:31) ERROR: /opt/my-project/bazel-cache/clang/external/raze__tonic__0_2_0/BUILD.bazel:28:1: in aliases attribute of rust_library rule @raze__tonic__0_2_0//:tonic: rule '@raze__tonic__0_2_0//:prost1' does not exist ``` username_0: Looks like we only can use an old version of `tonic`: ```toml tonic = { version = "0.1.1", features = ["tls"] } ``` List of supported versions is here: https://github.com/google/cargo-raze/issues/41#issuecomment-592274128 username_1: That's annoying but I guess we can live with it. @username_0 does it require any changes to our own code, to use that old version? Also, is there a path forward at some point, or are we stuck with it until someone else figures out what the problem is and how to solve it? username_0: The old version of `tonic` compiled with the new version of gRPC pseudo-Node, so I think we can use it until we will completely move to Rust and will stop using `cargo-raze`. username_0: Currently there are different errors: ```shell error[E0425]: cannot find value `server` in this scope --> external/raze__tonic__0_1_1/src/transport/server/incoming.rs:28:5 | 28 | / async_stream::try_stream! { 29 | | futures_util::pin_mut!(incoming); 30 | | 31 | | while let Some(stream) = incoming.try_next().await? { ... | 48 | | } 49 | | } | |_____^ not found in this scope | = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info) error[E0599]: no method named `next` found for struct `std::pin::Pin<&mut _>` in the current scope --> external/raze__tonic__0_1_1/src/codec/encode.rs:47:5 | 47 | / async_stream::stream! { 48 | | let mut buf = BytesMut::with_capacity(BUFFER_SIZE); 49 | | futures_util::pin_mut!(source); 50 | | ... | 74 | | } 75 | | } | |_____^ method not found in `std::pin::Pin<&mut _>` | = note: `$ crate :: AsyncStreamHack` is a function, perhaps you wish to call it = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info) ``` username_1: Do you mean there are still errors even when using the old version? username_0: Yes, gRPC pseudo-node compiles successfully with `cargo build` and without Bazel, but `cargo raze; ./scripts/build_server -s base` gives these errors. Looks like we need to downgrade other packages too, but it's not obvious which ones. username_0: So after adding the following flag to the generated `tonic-0.1.1.BUILD` file: ```starlark rustc_flags = [ "-Zexternal-macro-backtrace", ], ``` It showed the following error: ```shell 1 | / ($ ($ body : tt) *) => 2 | | { 3 | | { 4 | | let (mut __yield_tx, __yield_rx) = $ crate :: yielder :: pair () ; $ ... | 8 | | # [derive ($ crate :: AsyncStreamHack)] enum Dummy | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | | | not found in this scope | | in this expansion of `stream_2!` (#59) 9 | | { Value = $ crate :: scrub ! { $ ($ body) * } } $ crate :: | ___|- 10 | | | dispatch ! 
(($ ($ body) *)) | |___|- in this macro invocation (#2) 11 | | }) 12 | | } 13 | | } | |_- in this expansion of `async_stream::stream!` (#1) | ::: <::async_stream::dispatch macros>:1:2 ``` And this error is originated from the following macro: https://github.com/tokio-rs/async-stream/blob/91f6a380cbc1181f2de5bcd1e4a369c03f1c8277/async-stream/src/lib.rs#L266 Plus it also showed a couple of very strange errors: ```shell 1 | / () => { stream_0 ! () } ; (!) => { stream_1 ! () } ; (! !) => 2 | | { stream_2 ! () } ; (! ! !) => { stream_3 ! () } ; (! ! ! !) => | | ------------- in this macro invocation (#59) 3 | | { stream_4 ! () } ; (! ! ! ! !) => { stream_5 ! () } ; (! ! ! ! ! !) => 4 | | { stream_6 ! () } ; (! ! ! ! ! ! !) => { stream_7 ! () } ; (! ! ! ! ! ! ! !) ... | 95 | | (! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! 96 | | ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !) => { stream_64 ! () } ; | |- in this expansion of `$crate::count!` (#58) | ::: external/raze__tonic__0_1_1/src/codec/encode.rs:47: ``` The problem was solved by adding `"--cfg=wrap_proc_macro"` to the `cargo/Cargo.toml` file: ```toml [raze.crates.proc-macro2.'1.0.10'] additional_flags = [ "--cfg=use_proc_macro", "--cfg=wrap_proc_macro", ] ``` Status: Issue closed username_0: In order to finish https://github.com/project-oak/oak/issues/806 we need to add [`tonic`](https://docs.rs/tonic/0.2.0/tonic/) as a dependency using [`cargo-raze`](https://github.com/google/cargo-raze). cc @username_1 @daviddrysdale username_0: ``` Looks like `cargo raze` doesn't link C++ code with Rust that runs it via FFI: https://github.com/briansmith/ring/blob/4c392ad338f61ea166a29c83d4208e8edfecc6ca/src/cpu.rs#L43-L48 username_0: The `C` code is being compiled by [`build.rs`](https://github.com/briansmith/ring/blob/92f936bc3b76163ef49baa6b9593811e7ddfc4c0/build.rs#L344). `cargo raze` can generate a `genrule` for running `build.rs` files via `gen_buildrs = true`flag: ``` [raze.crates.ring.'0.16.12'] gen_buildrs = true data_attr = "glob([\"**/*.der\"])" ``` username_0: After adding this flag a new error appeared: ```shell ERROR: /opt/my-project/bazel-cache/clang/external/raze__ring__0_16_12/BUILD.bazel:47:1: Executing genrule @raze__ring__0_16_12//:ring_build_script_executor failed (Exit 101) thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 17, kind: AlreadyExists, message: "File exists" }', external/raze__ring__0_16_12/build.rs:301:5 stack backtrace: ``` This problem is caused by the fact, that `ring` creates a directory for generated `.asm` files and fails, if it already exists (which is true in case of Bazel since everything is saves in `bazel-cache`): https://github.com/briansmith/ring/blob/92f936bc3b76163ef49baa6b9593811e7ddfc4c0/build.rs#L306 username_0: After adding a manual removal of the `pregenerated` directory to `ring-0.16.12.BUILD`: ```starlark + " cd $$(dirname $(location :Cargo.toml)) && rm -r pregenerated && $$BINARY_PATH && tar -czf $$OUT_TAR -C $$OUT_DIR .)" ``` The following problem showed up: ```shell ERROR: /opt/my-project/bazel-cache/clang/external/raze__ring__0_16_12/BUILD.bazel:47:1: Executing genrule @raze__ring__0_16_12//:ring_build_script_executor failed (Exit 101) thread 'main' panicked at 'failed to execute ["yasm.exe" "-X" "vc" "--dformat=cv8" "--oformat=win64" "--machine=amd64" "-o" "pregenerated/aes-x86_64-nasm.obj" "pregenerated/tmp/aes-x86_64-nasm.asm"]: No such file or directory (os error 2)', external/raze__ring__0_16_12/build.rs:638:9 ... 
running "yasm.exe" "-X" "vc" "--dformat=cv8" "--oformat=win64" "--machine=amd64" "-o" "pregenerated/aes-x86_64-nasm.obj" "pregenerated/tmp/aes-x86_64-nasm.asm" ``` The problem is that `build.rs` is trying to generate `.asm` files for all possible architectures: https://github.com/briansmith/ring/blob/92f936bc3b76163ef49baa6b9593811e7ddfc4c0/build.rs#L310 And ends up running `yasm.exe` on linux. username_0: So it looks like `.asm` files are not supposed to be generated via `cargo build`, since `pregenerate_asm_main` doesn't run if `CARGO_PKG_NAME` is set to `ring`: https://github.com/briansmith/ring/blob/92f936bc3b76163ef49baa6b9593811e7ddfc4c0/build.rs#L256 But `cargo-raze` doesn't initialize this environment variable (and a lot of other variables too). So this problem was solved by adding the following list of environment variables to the `genrule`: ```starlark genrule( name = "ring_build_script_executor", ... cmd = ... + " export CARGO_PKG_NAME=ring;" + " export CARGO_CFG_TARGET_ARCH=x86_64;" + " export CARGO_CFG_TARGET_OS=linux;" + " export CARGO_CFG_TARGET_ENV=musl;" + " export OPT_LEVEL=3;" + " export PROFILE=release;" + " export DEBUG=false;" + " export HOST=host;" ] ``` username_0: ``` username_0: Looks like `ring` requires a static `ring-core` library that was compiles from [`C` sources](https://github.com/briansmith/ring/tree/521218897a109ac1cf13a1014b64126698ec3680/crypto). But `cargo raze` doesn't provide it with a directory containing an `.a` library file. ```shell _: /opt/my-project/bazel-cache/clang/execroot/oak/bazel-out/host/bin/external/raze__ring__0_16_12/ring_build_script cargo:rustc-link-lib=static=ring-core cargo:rustc-link-lib=static=ring-test cargo:rustc-link-search=native=/opt/my-project/bazel-cache/clang/execroot/oak/bazel-out/k8-fastbuild-ST-5e74b77704d3a70b08875590eb0f067cbb9a6e09f41f090f307cf0d79d4b2461/bin/external/raze__ring__0_16_12/ring_out_dir_outputs ``` username_0: In order to fix this we need to add two additional flags (`"-lstatic=ring-core"` and `"-Lnative=STATIC_LIB_DIR"`) to `rustc_flags` (https://doc.rust-lang.org/rustc/command-line-arguments.html#-l-add-a-directory-to-the-library-search-path). The problem is that by default `.a` file is generated in the `./bazel-cache/clang/execroot/oak/bazel-out/k8-fastbuild-ST-5e74b77704d3a70b08875590eb0f067cbb9a6e09f41f090f307cf0d79d4b2461/bin/external/raze__ring__0_16_12/ring_out_dir_outputs`, and this directory may change in different environments. So we need to make it use an `execroot/oak` directory, and thus change the following lines from this: ```starlark genrule( name = "ring_build_script_executor", cmd = "mkdir -p $$(dirname $@)/ring_out_dir_outputs/;" ... # + " export OUT_DIR=$$PWD/$$(dirname $@)/ring_out_dir_outputs;" ``` to this: ```starlark genrule( name = "ring_build_script_executor", cmd = "mkdir -p ring_out_dir_outputs/;" ... + " export OUT_DIR=$$PWD/ring_out_dir_outputs;" ``` In order to refer to this directory from a sandbox we need to add the following relative path as a `native` library path: ```starlark rust_library( name = "ring", ... rustc_flags = [ ... "-lstatic=ring-core", "-Lnative=../../../../../execroot/oak/ring_out_dir_outputs/", ], ) ``` Status: Issue closed username_2: Folks, I didn't even know what is project oak before. Just was googlin around how to build tonic with Bazel. And you know, I would like to give thanks @username_0. These detailed explanations are awesome, it is a rare example of wellcrafted GitHub issue giving its fruits even after close.
yansongda/pay
281641761
Title: Alipay payment succeeds, but returning to the callback URL gives: The Response content must be a string or object implementing __toString(), "boolean" given.
Question: username_0: ## Problem description
After a successful Alipay payment, when returning to the callback URL the following error appears: The Response content must be a string or object implementing __toString(), "boolean" given.

## Code
**Routes:**
```php
// Alipay return URL
Route::get('alipay/return','AliPayController@return');
// Alipay async notification
Route::post('alipay/notify','AliPayController@notify');
// Payment URL
Route::get('alipay','AliPayController@pay');
```

**Controller:**
```php
<?php

namespace App\Http\Controllers;

use Pay;
use Illuminate\Http\Request;

class AliPayController extends Controller
{
    public function pay()
    {
        $config_biz = [
            'out_trade_no' => time(),
            'total_amount' => '0.01',
            'subject' => 'test',
        ];

        return Pay::driver('alipay')->gateway()->pay($config_biz);
    }

    public function return(Request $request)
    {
        return Pay::driver('alipay')->gateway()->verify($request->all());
    }

    public function notify(Request $request)
    {
        if (Pay::driver('alipay')->gateway()->verify($request->all())) {
            file_put_contents(storage_path('notify.txt'), "Received an async notification from Alipay\r\n", FILE_APPEND);
            file_put_contents(storage_path('notify.txt'), 'Order number: ' . $request->out_trade_no . "\r\n", FILE_APPEND);
            file_put_contents(storage_path('notify.txt'), 'Order amount: ' . $request->total_amount . "\r\n\r\n", FILE_APPEND);
        } else {
            file_put_contents(storage_path('notify.txt'), "Received an async notification\r\n", FILE_APPEND);
        }
        echo "success";
    }
}
```

## Error details
**Error message:**
```php
     * @param mixed $content Content that can be cast to string
     *
     * @return $this
     *
     * @throws \UnexpectedValueException
[Truncated]
        $this->content = (string) $content;

        return $this;
    }

    /**
     * Gets the current response content.
     *
     * @return string Content
     */
    public function getContent()
    {
        return $this->content;
    }

    /**
     * Sets the HTTP protocol version (1.0 or 1.1).
     *
```
Arguments: "The Response content must be a string or object implementing __toString(), "boolean" given."
Answers: username_0: Oops, why did the code end up looking like this?
username_0: @username_1 I just copied your Alipay example and got this.
username_1: My initial judgment is that this doesn't seem to be an SDK problem, because the SDK code does not contain the error code you posted.
Do you have the exact error information? For example, what exception is thrown, in which file, on which line?
username_0: @username_1 `public function return(Request $request) { return Pay::driver('alipay')->gateway()->verify($request->all()); }`
The problem should be in this line. When I write it like this: `return json_encode(Pay::driver('alipay')->gateway()->verify($request->all()));` it returns false!
The payment has already succeeded...
username_1: This is a signature verification error.
Please refer to #5, #8, #12.
Thanks for your support!
username_0: @username_1 Indeed, I had written the application public key where the Alipay public key should be. It works now, thanks!
Status: Issue closed
apache/shardingsphere
1083260237
Title: last_insert_id not returned when using update
Question: username_0: ## Bug Report

### Which version of ShardingSphere did you use?
master branch

### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
proxy

### Expected behavior
update id_genterator set max_id = last_insert_id(max_id + 1) where type = 1;
the last_insert_id should be returned to the user

### Actual behavior
can't get last_insert_id like with an insert statement

### Reason analyze (If you can)
insert_id is only returned on insert statements
Answers: username_0: `update id_gen set max_id = last_insert_id(max_id + 1) where type = 1` followed by `select last_insert_id()` is the approach recommended in the MySQL [manual](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id)
![image](https://user-images.githubusercontent.com/1700820/146628056-a3b2430f-97dd-4824-ad0f-1830145f95f0.png)
For ShardingSphere there's a problem: we use a connection pool, so without a transaction the `select last_insert_id` may be sent to a connection other than the one used for the `update`, leading to a wrong last_insert_id.
ISISComputingGroup/IBEX
415589949
Title: OPI: Format checker script should check for objects out of bounds.
Question: username_0: The `check_opi_format_tests.py` tests should be extended so that they check the x,y positions of widgets/objects and ensure they are within sensible bounds.
Answers: username_0: - https://github.com/ISISComputingGroup/ibex_gui/pull/1056
username_0: I added additional functionality to determine if a widget within an OPI is "out of bounds" and included a new test that will report the offending widgets.
- I have defined the following: `<x, y>_min, <x, y>_max = [0, 1000]`
A number of OPIs fail this test. In some cases, like the Lakeshore 460, it has detected valid issues:
![lakeshore460](https://user-images.githubusercontent.com/43140680/75043504-18805280-54b8-11ea-9218-5a49bec531cd.PNG)
However, there are also cases where a widget has a position (such as x=-2) that is aesthetically fine but ideally should be >0. We should think about whether my arbitrary maximum value of 1000 is sensible.
username_0: I made some changes to existing OPIs that failed the test, namely fixing the obvious major issues and some minor ones. There are a few instances where no changes have been made, which we should consider skipping:
## Specific requirements
- Template - Spectra Plot
- Detector Motion System Motors
- Reflectometry - Motor Details
## Old style (Here be dragons)
- SCIMAG3D ()
- USBspectrometers
## Untested
- Unable to open Stress Rig -> I get a NullPointerException when trying to edit, even on CSS refresh(?)
Status: Issue closed
Difegue/LANraragi
859643690
Title: Add quick link to source (url) of archive
Question: username_0: Most downloaders and programs of this type usually have a link somewhere that leads back to the original URL. As far as I'm aware, on LANraragi the only way to get the original URL from the thumbnail view is to make the tooltip show up, select the URL in the source category with your mouse, and copy it to another tab.
Even something as simple as a new entry in the right-click menu to open the URL in a new tab would be a great improvement. Not sure if it's a good idea to put it in the tooltip, because of how easy it is to accidentally make it disappear while you're trying to bring your mouse inside it, which can be frustrating if all you're trying to get is the source.
Status: Issue closed
monero-project/monero-gui
387823713
Title: [Feature request] Initial sync throttle
Question: username_0: Monerod can really hog resources during the initial sync. The CLI has all the flags necessary to do a little throttling. Perhaps there could be a checkbox during the getting-started walkthrough screens that has these options, and it would auto-populate the daemon flags field in the settings area.

Fast Sync - this would add no flags.

Medium Sync - this would add --limit-rate 1000 to the daemon flags. 1 MB/s is probably enough to throttle everything downstream to some extent.

Slow Sync - this would add --limit-rate 500 and --max-concurrency 1. This would throttle the bandwidth and the CPU usage.
Answers: username_1: +feature
username_2: IMO quite a niche feature that adds unnecessary extra complexity. We already add the `--max-concurrency` flag so that the daemon only uses half of the threads available. See here: https://github.com/monero-project/monero-gui/pull/1920
Status: Issue closed
username_0: sure.
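The mapping the request proposes is small enough to state as code. A sketch in JavaScript (the helper and mode names are invented; the flag values are the ones from the request):

```js
// Map a chosen sync mode to the monerod flags it would add.
function daemonFlagsForSyncMode(mode) {
    switch (mode) {
        case 'fast':
            return []; // no throttling
        case 'medium':
            return ['--limit-rate 1000']; // ~1 MB/s bandwidth cap
        case 'slow':
            return ['--limit-rate 500', '--max-concurrency 1']; // bandwidth + CPU throttle
        default:
            return [];
    }
}

console.log(daemonFlagsForSyncMode('slow').join(' '));
// --limit-rate 500 --max-concurrency 1
```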
boulanlo/pma
467307021
Title: Some configurations lead to a rebalance loop
Question: username_0: Using a pma of 16 elements (say 0..15) with densities 0.3..0.7 and 0.08..0.92, and inserting the same elements again (0..15), will eventually lead to a rebalance loop.
# Diagnosis
After the insertion of elements 0 to 4, the data looks like this:
```
0 0 1 1 2 2 x x
3 3 4 4 5 x x x
6 7 8 9 10 x x x
11 12 13 14 15 x x x
```
where `x` are gaps.
When we try to insert the element `5`, the density bounds for the window 0..2 will be violated (12 elements over 11 authorized elements), triggering a rebalance on the upper window (which happens to be the whole PMA). But this rebalance will achieve nothing, as the elements will be distributed evenly as they already are.
After this, during the second (recursive) call to insert, there will be the same out-of-bounds detection, leading to an infinite loop of rebalances and eventually a stack overflow.
# Solutions
At the moment, I have identified two solutions, neither of them good.
- The one I implemented for now is to detect a recursion of level 1 or more in the `insert` function, and to trigger a size doubling for the PMA when it happens. It may break the PMA bounds though, so it is a solution to study.
- The other one would be to somehow take the position of the element to be inserted into account during the rebalance: it would, in our example, shift the leftover element on the second window to the right, leaving 10 elements on the left side of the PMA and allowing the insertion. This may be the fix to use, will have to see.
Status: Issue closed
Answers: username_0: The fix was a bit tricky: we decided to adapt the `rebalance` function, and to insert BEFORE rebalancing, even though the PMA is in an unstable state between the two operations.
The way we corrected the rebalance was to use the following pattern for adding leftover elements when there is not a divisible number of elements to rearrange: instead of shifting the leftover elements to the left, we decided to spread them across the window. An example: if we had a window of size 8 and had 5 leftovers, we would insert a leftover at indices 0, 4, 2, 6, and 1.
If we look at this pattern a little closer, we can see the following pattern if we reverse the bits of the sequence 0..8:
```
0: 000 --> 000 : 0
1: 001 --> 100 : 4
2: 010 --> 010 : 2
3: 011 --> 110 : 6
4: 100 --> 001 : 1
5: 101 --> 101 : 5
6: 110 --> 011 : 3
7: 111 --> 111 : 7
```
We can see that by reversing the bits of the sequence of integers between 0 and the size of a window (in segments), we get the indices where each leftover should go. We only need to take the correct amount (5 in our case) and add 1 to the future size of each segment whose index matches.
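The bit-reversal pattern from the last comment is easy to check in a few lines. A sketch in JavaScript (the project itself is Rust; this is only an illustration of the index pattern, with the window size assumed to be a power of two):

```js
// Reverse the bits of 0..windowSize to get the order in which segments
// receive one leftover element each.
function leftoverSegments(windowSize, leftovers) {
    const bits = Math.log2(windowSize); // assumes a power-of-two window size
    const order = [];
    for (let i = 0; i < windowSize; i++) {
        let reversed = 0;
        for (let b = 0; b < bits; b++) {
            if (i & (1 << b)) reversed |= 1 << (bits - 1 - b);
        }
        order.push(reversed);
    }
    // The first `leftovers` indices each get +1 on their future segment size.
    return order.slice(0, leftovers);
}

console.log(leftoverSegments(8, 5)); // [ 0, 4, 2, 6, 1 ], matching the example above
```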
yati-raz/oriens
124024350
Title: [DatabaseHandler.php] update_userprofile - Incorrect binding parameters Question: username_0: $checkVar = $statement->bind_param('ssisisisi', $name, $facultyid, $gender, $aboutme, $mobile, $email,$locationshared, $photourl, $userId); $facultyid is an integer. Answers: username_1: $checkVar = $statement->bind_param('siisisisi', $name, $faculty_id, $gender, $about_me, $mobile, $alt_email,$location_shared, $photo_url, $user_id);
OpenTreeOfLife/ot-ansible
628560733
Title: locations of things
Question: username_0: Looking at the locations of where things get cloned / downloaded / installed / etc.:
* `$HOME/repo` directory where we clone + install peyotl, ws-wrapper, opentree, phylesystem, amendments, collections; created in [functions.sh](https://github.com/OpenTreeOfLife/germinator/blob/1def898e2181250385b86402cce4a432409927f3/deploy/setup/functions.sh) and used in [install-api.sh](https://github.com/OpenTreeOfLife/germinator/blob/38cb084f8c39c2ed895090b794c8589038a10fe8/deploy/setup/install-api.sh) and [install-common.sh](https://github.com/OpenTreeOfLife/germinator/blob/78c800288631fa4e06efa160e1c3b1c06049cf5b/deploy/setup/install-common.sh)
* `$HOME/Applications` directory where we clone and install otcetera, restbed in [install-otcetera.sh](https://github.com/OpenTreeOfLife/germinator/blob/1def898e2181250385b86402cce4a432409927f3/deploy/setup/install-otcetera.sh)
Is this the same structure that we want going forward with ansible?
Answers: username_2: I think we can go with either. Having both was (I think) an attempt to make sure that experiments with ansible were not over-writing germinator deployments. I think that was @username_1.
So, I don't think we need both. On "nexttree" we (read: @username_2) had a different directory for building the synth tree and a different one for serving the tree. (Neither was `repo` nor `Applications`.)
We might want to maintain the convention of having a dir for synth and another dir for serving the tree, but I don't care what the names are.
username_0: From [conversation on gitter](https://gitter.im/OpenTreeOfLife/ot-private?at=5ed53e24ff7a920a7226620a), going to keep everything that we download (+ install) from GitHub in `repos`. The `build` dir(s) should not be inside the source code repos (either sister to them, or in a separate parent, perhaps `Applications`).
username_0: I think I left my comment without refreshing this page. I like both @username_1's suggestion about separating the C++ apps and also @username_2's structure on nexttree.
On dev / production, `ls $HOME`:
`cpp_apps` - otcetera and restbed, each in their own dir with the subdirectories suggested by @username_1
`repos` - all other repos
`downloads` - destination for various wgets
`unpacked` - destination for unpacked tarballs
on nexttree, `ls $HOME`:
`synth_dir` - everything related to building the synthetic tree
`ws_dir` - everything related to serving the tree
Directory structures within `ws_dir` will be the same as / similar to `$HOME` on dev/prod.
username_1: Sounds good. If you want to move the git source for cpp apps under `repo`, so that `ls otcetera` would show `build/` and `local/`, that seems equally good. Or I guess you could make a symlink from `otcetera/git` to `repos/otcetera`....... OK, enough bike-shedding!
numba/numba
637366846
Title: Error when install numba by pip Question: username_0: I have the following error when trying to install by `pip install numba`. Could you show me ho to fix it. ``` ERROR: Command errored out with exit status 1: command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-U3yvuq cwd: /tmp/pip-install-eulNd9/llvmlite/ Complete output (7 lines): running bdist_wheel /usr/bin/python /tmp/pip-install-eulNd9/llvmlite/ffi/build.py File "/tmp/pip-install-eulNd9/llvmlite/ffi/build.py", line 122 raise ValueError(msg.format(_ver_check_skip)) from e ^ SyntaxError: invalid syntax error: command '/usr/bin/python' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for llvmlite Running setup.py clean for llvmlite Failed to build llvmlite Installing collected packages: llvmlite, numba Running setup.py install for llvmlite ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-rUBf7g/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/lva/.local/include/python2.7/llvmlite cwd: /tmp/pip-install-eulNd9/llvmlite/ Complete output (10 lines): running install running build got version from file /tmp/pip-install-eulNd9/llvmlite/llvmlite/_version.py {'version': '0.32.1', 'full': 'aa11b129c0b55973067422397821ae6d44fa5e70'} running build_ext /usr/bin/python /tmp/pip-install-eulNd9/llvmlite/ffi/build.py File "/tmp/pip-install-eulNd9/llvmlite/ffi/build.py", line 122 raise ValueError(msg.format(_ver_check_skip)) from e ^ SyntaxError: invalid syntax error: command '/usr/bin/python' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eulNd9/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-rUBf7g/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/lva/.local/include/python2.7/llvmlite Check the logs for full command output. ``` Answers: username_1: Thanks for the report. The latest Numba requires Python >= 3.6. I would guess that your system Python is 2.7 as it doesn't support the `raise <Exception> from <Exception>` syntax (Introduced in PEP 3134, here https://www.python.org/dev/peps/pep-3134/#explicit-exception-chaining). I'd suggest getting newer version of Python, which is >= 3.6, would solve your problems. If this isn't possible, the last version of Numba that supports Python 2.7 is Numba version 0.48, explicitly installing this could also work but you'd miss new features. 
Status: Issue closed username_2: @username_1 Wouldn't it be better if the error message would say exactly what you said instead of failing with a syntax error? username_1: @username_2 yes, definitely, @username_3 also suggested similar (but can't find a xref) as IIRC NumPy did something along these lines and then switched to explicit rejection of unsupported pythons. Pull requests are welcomed. Thanks. username_3: The relevant numpy commit is https://github.com/numpy/numpy/commit/dabf31c74f6f3153ef4e7c72ad969c37f8652c8a#diff-2eeaed663bd0d25b7e608891384b7298 username_4: unable to uninstall numba on ubuntu 18.04 username_5: @username_4 thank you for adding to this issue. Since this issue has been closed and resolved, please do open a new issue with the details of your problem: a) What did you do? b) What did you expect to happen? c) What happened instead? And please include relevant commands and any tool output of error messages that you encounter, thanks. username_4: @username_5 I was trying to run [Voxelnet ROS](https://github.com/AbangLZU/VoxelNetRos) package. Numba was one of the package dependencies. My python version is 3.6.9 so run ``` pip3 install numba ``` add encountered with this error "Failed building wheel for numba" As per my understanding numba depends on **llvmlite**, I was able to install llvmlite 0.32.0 version separately but when installing numba it creates problem. username_5: @username_4 as I wrote above: *please open a new issue*. username_6: Collecting numba Using cached numba-0.51.2.tar.gz (2.1 MB) Requirement already satisfied: numpy>=1.15 in c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages (from numba) (1.19.4) Requirement already satisfied: setuptools in c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages (from numba) (51.0.0) Collecting llvmlite<0.35,>=0.34.0.dev0 Using cached llvmlite-0.34.0.tar.gz (107 kB) Building wheels for collected packages: numba, llvmlite ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR parameter number 4 had an illegal value Building wheel for numba (setup.py) ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\abhiram shetty\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Abhiram Shetty\\AppData\\Local\\Temp\\pip-install-npqmg67r\\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\\setup.py'"'"'; __file__='"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\<NAME>\AppData\Local\Temp\pip-wheel-3ecop48g' cwd: C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\ Complete output (11 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\setup.py", line 354, in <module> metadata['ext_modules'] = get_ext_modules() File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\setup.py", line 87, in get_ext_modules import numpy.distutils.misc_util as np_misc File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages\numpy\__init__.py", line 305, in <module> _win_os_check() File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages\numpy\__init__.py", line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('c:\\users\\abhiram shetty\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. 
See this issue for more information: https://tinyurl.com/y3dm3h86 ---------------------------------------- ERROR: Failed building wheel for numba Running setup.py clean for numba ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR parameter number 4 had an illegal value ERROR: Command errored out with exit status 1: command: 'c:\users\abhiram shetty\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\\setup.py'"'"'; __file__='"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all cwd: C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0 Complete output (11 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\setup.py", line 354, in <module> metadata['ext_modules'] = get_ext_modules() File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\numba_0261f68d3cdd4dddaf6b2c592c3e42b0\setup.py", line 87, in get_ext_modules import numpy.distutils.misc_util as np_misc File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages\numpy\__init__.py", line 305, in <module> _win_os_check() File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\site-packages\numpy\__init__.py", line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('c:\\users\\abhiram shetty\\appdata\\local\\programs\\python\\python39\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86 ---------------------------------------- ERROR: Failed cleaning build dir for numba Building wheel for llvmlite (setup.py) ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\abhiram shetty\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\\setup.py'"'"'; __file__='"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\<NAME>\AppData\Local\Temp\pip-wheel-k7jqb0y5' cwd: C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ Complete output (24 lines): running bdist_wheel c:\users\abhiram shetty\appdata\local\programs\python\python39\python.exe C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ffi\build.py Trying generator 'Visual Studio 15 2017 Win64' Traceback (most recent call last): [Truncated] main() File "C:\Users\Abhiram Shetty\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ffi\build.py", line 179, in main main_win32() File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ffi\build.py", line 88, in main_win32 generator = find_win32_generator() File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ffi\build.py", line 76, in find_win32_generator try_cmake(cmake_dir, build_dir, generator) File "C:\Users\<NAME>\AppData\Local\Temp\pip-install-npqmg67r\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\ffi\build.py", line 28, in try_cmake subprocess.check_call(['cmake', '-G', generator, cmake_dir]) File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\subprocess.py", line 368, in check_call retcode = call(*popenargs, **kwargs) File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\subprocess.py", line 349, in call with Popen(*popenargs, **kwargs) as p: File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\subprocess.py", line 947, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "c:\users\abhiram shetty\appdata\local\programs\python\python39\lib\subprocess.py", line 1416, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified error: command 'c:\\users\\abhiram shetty\\appdata\\local\\programs\\python\\python39\\python.exe' failed with exit code 1 ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\abhiram shetty\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\\setup.py'"'"'; __file__='"'"'C:\\Users\\<NAME>\\AppData\\Local\\Temp\\pip-install-npqmg67r\\llvmlite_8a3a37d3d0dd4fe2a83f6f0abd50c1a6\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Abhiram Shetty\AppData\Local\Temp\pip-record-384f6zsx\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\abhiram 
shetty\appdata\local\programs\python\python39\Include\llvmlite' Check the logs for full command output. username_5: @username_6 thank you for submitting this. I can see from the log that you are trying to install Numba on Python 3.9 -- however, this Python version is currently not supported. Please try to use 3.6/3.7/3.8. Thank you for using Numba. username_7: I have tried this to solve this issue. $pip install wheel (or pip3 install wheel) This made some more progress. But the final solution I found was below. $sudo apt install python3-numba
slimkit/plus
419828371
Title: [PC bug] Feed list
Question: username_0: **Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]

**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
Answers: username_1: @mutoe Points 2 and 3 are not fixed
username_1: @mutoe Point 2 is not fixed
2. After @-mentioning someone and entering a lot of text, once the post is published successfully, the "view more" button is not highlighted
Status: Issue closed
appium/appium
305505060
Title: .manage().logs().get("server") returns null
Question: username_0: ## The problem
Doing driver.manage().logs().get("server");
Returns error: org.openqa.selenium.UnsupportedCommandException: malformed response to remote logs command
And in the logs it appears:
[debug] [BaseDriver] Retrieving 'server' logs
[debug] [BaseDriver] Retrieving supported log types
[debug] [MJSONWP] Responding to client with driver.getLog() result: null

## Environment
* Appium version (or git revision) that exhibits the issue: 1.8.0-beta3
* Desktop OS/version used to run Appium: Windows
* Mobile platform/version under test: Tried Android 8 & Android 6 (Logs from this one)
* Real device or emulator/simulator: Real Device
* Java Client: 6.0.0-BETA4

## Link to Appium logs
https://gist.github.com/username_0/4d246282814a95452d62e2a7324554de

## Code To Reproduce Issue [ Good To Have ]
```
Set<String> logTypes = driverA.getDriver().manage().logs().getAvailableLogTypes();
LogEntries logEntries = driverA.getDriver().manage().logs().get("server");
```
Answers: username_1: Can you please replace the line `return logger.record;` with `return logger.unwrap().record;` inside the `node_modules/appium/node_modules/appium-uiautomator2-driver/node_modules/appium-android-driver/lib/command/logs.js` file where your node modules are installed, execute `gulp transpile` and restart appium?
username_0: Not sure if I did it right. I had return log.record; and updated that to: return log.unwrap().record;
And the server logs seem to be there in the last line of the response: https://gist.github.com/username_0/a3c1e497504b1894d4750f5170f08dbb
The code breaks with the exception: java.lang.IllegalArgumentException: Bad level "info"
username_1: Perfect. There's one more thing to do: replace the line with:
`return log.unwrap().record.map(x => x.message)`
And transpile it again
username_0: Sorry, I was kind of busy. With that change, the following exception appears: java.lang.String cannot be cast to java.util.Map
username_1: https://github.com/appium/appium-android-driver/pull/341 should do the job
username_0: It works 🥇 Thanks.
Status: Issue closed
username_2: Hi, I apologise for hijacking this closed issue, but I am not able to get the server logs while using Appium 8.0 and the Java bindings 6.0.0. This is what I see in the log:
```
[debug] [W3C] Calling AppiumDriver.getLogTypes() with args: ["41356621-3634-412f-b361-e344c1771c37"]
[debug] [BaseDriver] Retrieving supported log types
[debug] [W3C] Responding to client with driver.getLogTypes() result: ["logcat","bugreport","server"]
[HTTP] <-- GET /wd/hub/session/41356621-3634-412f-b361-e344c1771c37/log/types 200 8 ms - 103
[HTTP]
[HTTP] --> POST /wd/hub/session/41356621-3634-412f-b361-e344c1771c37/log
[HTTP] {"type":"server"}
[debug] [W3C] Calling AppiumDriver.getLog() with args: ["server","41356621-3634-412f-b361-e344c1771c37"]
[debug] [BaseDriver] Retrieving 'server' logs
[debug] [BaseDriver] Retrieving supported log types
[HTTP] <-- POST /wd/hub/session/41356621-3634-412f-b361-e344c1771c37/log 500 46 ms - 1272
```
I am starting Appium like this:
```
appium --port 4444 --relaxed-security
```
I am not sure if this is a new bug or if I am doing something wrong. If I should open a new issue, please just let me know.
username_1: @username_2 Try appium@beta
username_2: @username_1 I am sorry, I was using the wrong startup script, and I just noticed while checking it to change the version. I basically forgot to do git pull on the machine where I was testing. Thank you for the quick reply though!
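For context on the errors seen in this thread: the Java client expects `driver.getLog()` to return a list of log-entry objects, not `null` or bare strings. The shape below follows the JSON Wire Protocol log format; the field names are standard, but the values are invented examples:

```js
// A well-formed response body for POST .../log with {"type": "server"}.
const wellFormedResult = [
    { timestamp: 1521540000000, level: 'INFO', message: 'example server line' },
    { timestamp: 1521540000500, level: 'DEBUG', message: 'another line' },
];
// Returning null triggers "malformed response to remote logs command",
// an array of plain strings triggers "String cannot be cast to Map", and
// a lowercase level such as "info" triggers 'Bad level "info"', since the
// Java side parses levels with case-sensitive java.util.logging names.
```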
rollup/rollup
633716071
Title: add top level await support for iife
Question: username_0: <!--
⚡️ katchow! We 💛 issues.

Please - do not - remove this template.
Please - do not - skip or remove parts of this template.
Or your issue may be closed.

👉🏽 Need help or tech support? Please don't open an issue!
Head to https://gitter.im/rollup/rollup or https://stackoverflow.com/questions/tagged/rollupjs

❤️ Rollup? Please consider supporting our collective:
👉 https://opencollective.com/rollup/donate
-->

### Feature Use Case

### Feature Proposal

If the input is:
```js
const a = await import('...');
export default a;
```
The output could seem like:
```js
(
  async function () {
    'use strict';

    const a = await import('...');

    return a;
  }
)();
```
Answers:
username_1: Would a PR implementing this be accepted? Is there anything that makes this more difficult than it sounds?
username_2: There is one problem I see: If the IIFE is creating a global variable (because there are entry exports as in your example), it would create code like this:
```js
var myBundle = (async function () {
  'use strict';
  const { a } = await import('...');
  const b = await loadFile('...');
  return a + b;
})();
```
That would mean the global variable (here `myBundle`) would no longer be the default export but a Promise resolving to the default export, which is very likely NOT what users expect. What users *might* expect is that the variable is not yet assigned until the promise is resolved.

But if the use-case is not related to creating global variables but just for stand-alone modules we could say: This is allowed for IIFE unless there are exports in the entry module, in which case we throw an error.

This might all be handled in `iife.ts`: You already have the `hasExports` flag, and you also get `usesTopLevelAwait` from the `FinaliserOptions`. You will probably need to adjust some other safe-guards in the code-base, though.

What do you think, would that fit your use-case?
username_3: Yes, that is also true for other loaders like Webpack (when using top level await). That is simply how top-level await works, [Webpack has exactly the same problems](https://github.com/tc39/proposal-top-level-await/pull/61#issuecomment-475892445), and [Webpack even proposed to add a new `import await` syntax to help fix that](https://github.com/tc39/proposal-top-level-await/pull/60) (but that proposal was rejected). This issue has already been decided at the standards level (for better or worse).

The most ideal situation is to make IIFE *always* use a Promise (regardless of whether top level await is used or not), but that would be a breaking change in Rollup, so it would need a major version bump.
username_2: Thank you for explaining to me what in your opinion IIFE is all about, having maintained Rollup for 5 years I surely have no idea. Just note that "whereas import() works in both ES6 modules and ES5" is just nonsense, please get your terminology right (dynamic import is an ES2020/ES11 feature, "script context" was actually the only correct term). Apparently you did not understand what I was talking about, or did not take the time to have a look at or understand the implications of the [REPL link](https://rollupjs.org/repl/?version=2.66.1&shareable=<KEY> provided which should explain to you how Rollup works at the moment. OF COURSE there is no import in the output! There are imports in the INPUT!
And there is a semantic to translate those to global variable accesses in Rollup that needs to be respected in some way when implementing this feature because thousands of libraries are depending on this for their browser builds. (Side note: The Webpack issue speaks to my point, and incidentally I have been part of those discussions from the beginning though I did not participate too actively. The result was that within a module graph, you can import TLA modules without any need for dynamic import(). This will NOT be possible if you put two IIFEs in separate script tags next to each other because you cannot "encode" this information in the static import). And when the imported library is providing a Promise as a global variable, that is also fine if you know it, but I am quite sure that many current Rollup users would be very unhappy if we just forced exports to be Promises for all of them.

So my question: Just make it a Promise when there is a TLA in the source, which may be even hidden and can lead to unexpected results? Or just postpone the question by forbidding exports in the first iteration (we can always add a solution later without a breaking change, that would be a sensible solution for fast iteration)? Or control it via a config option (I would like that, makes things explicit, no surprises here)? Other ideas?
username_0: After some thinking, I think there could be too many use cases, and it seems not good to try to handle them all in the rollup engine. An arbitrary choice would be worse. For example, people may want:
```js
var myBundle = (
  async function () {
    return xxx;
  }
)();
```
```js
var myBundle = (
  async function () {
    return xxx;
  }
)();
myBundle.then(xxx => {
  myBundle = xxx;
});
```
```js
var myBundle = (
  async function () {
    return xxx;
  }
)();
myBundle.then(xxx => {
  myBundle = xxx;
});
myBundle = undefined;
```
...
And what's more, none of the above can cover the amd/cjs formats. And when considering your repl (with an external import), things become more complex.
So, this feature seems not easy to add. Personally, the issue could be closed at present. (Or of course it could still be kept open to collect other users' suggestions. I can't give more suggestions now.)
username_2: Well, there are two IIFE situations
* When there are exports, a global variable is created. This is the problematic one (but could be solved in some way)
* When there are no exports, it is [just a function](https://rollupjs.org/repl/?version=2.66.1&shareable=<KEY>JDJTIyZXhhbXBsZSUyMiUzQW51bGwlN0Q=) that usually no other script depends upon. If this were an async function, I think this would not cause any problems or defeat expectations.

So we could still add value for users when we implement it for the "no exports" case and for now throw when a variable would be created.
username_4: iife can not really have top level await. top level await is a ESM feature that is easy to implement in the ESM world as every ESM module is a Promise by default. so in an ESM context you're already in a Promise and an iife for ESM is not needed — everything is already an iife if it is ESM code as it is instantiated after import. i hope that makes some sense
username_4: i would opt for what @username_2 said: you should wrap your code with async iife functions so it is clear for you that you're getting back promises. you can then produce normal iife builds with that code.
username_0: If the feature doesn't mean a lot of work, I think what you said is the best thing that could be done for now.
username_3: Your REPL shows an external module `bar` which is outside of the Rollup system (and loaded by the user in some other way). Supporting asynchronous external imports is a separate issue from top level await, however it is quite easy to support. Rollup would simply have to generate code like this:
```js
(async function (bar) {
  'use strict';

  bar = await bar;

  console.log(bar.foo);
})(bar);
```
Now it will work regardless of whether `bar` is a plain value or a Promise, so even if `bar` was generated with top-level await that will be transparent to the user.

This ties back into what I was saying before, that it would be ideal to *always* generate a Promise regardless of whether top-level await is used, so that way top-level await modules can be seamlessly combined with regular modules.
username_2: Always injecting an `await` would be problematic as it would
- generate output that relies on working `await`, making it incompatible with older browsers without another transform step, even if you do not require this feature
- change execution semantics as it would insert a microtask wait inside synchronous code, making it no longer synchronous. If another consumer relies on synchronous execution (e.g. a library that does not know about this feature or was not updated to support it), it becomes unusable.

Which gets me back to one of the things I suggested: Make this an opt-in. That means on the other hand that such a flag could control three things:
* support top-level await
* make the exported variable a promise instead of the actual value
* await imports to external variables

E.g. `output.asyncIife: true`. And throw an error if TLA is used without the flag, pointing users to use it.
username_3: I personally don't think that's necessary, since these changes would only happen if top-level await is used, but there isn't anything wrong with doing things like that, it's a safer choice.

However, the end goal should be that top-level await is a first class citizen, there shouldn't be a distinction between top-level await modules and regular modules. So in the long term, it would be good to *always* generate a Promise regardless of whether top-level await is used or not (this will require a major version bump).
username_3: If the user writes code which uses top level await, you have to compile it to something. Using AMD or SystemJS doesn't fix that. For example, the SystemJS output [uses `async` + `await`](https://rollupjs.org/repl/?version=2.66.1&shareable=<KEY> so it has exactly the same issue with browser compatibility. This has nothing to do with IIFE vs SystemJS. If `async` is good enough for SystemJS, it should be good enough for IIFE too.
username_2: Actually I only wanted to argue against enabling it "by default" without a way to get the old behaviour. For an opt-in feature, I have no worries about compatibility because then it is a conscious decision. So in the end I have the impression we mostly agree now.
username_3: In general, yes, though if you're going to have a flag I think it should be a single flag for all the modes (including SystemJS), not specifically for IIFE, since the compatibility and library issues are the same.
username_2: Maybe, though SystemJS would be the only other format affected at the moment anyway. But the implications are not the same for SystemJS: We do not need to await imports for SystemJS. So consumers do not need to know if a module uses TLA to be able to use it, the SystemJS runtime will take care of it, just like the ES runtime for ESM.
On the other hand for IIFE, import awaiting can also break imports if having a Promise as the only export is the intended interface for a module. So I am not sure having the flag for SystemJS as well would provide value for enough people to warrant the additional complexity.
username_4: i have 0 real usecases for tla at all: in general, as every module is a Promise, i can always export Promises. i feel like implementing TLA leads in general to confusion as it is only syntax sugar — it adds nothing to promise.then(success, fail). i think that would even be the better translation for TLA. there is no real TLA as everything is a Promise and TLA is syntax sugar.
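To make the consumer-side implication of the discussion above concrete: under the opt-in idea, an IIFE bundle's global would be a Promise for the exports rather than the exports themselves, so a separate script would have to consume it roughly like this (`myBundle` is the illustrative name from the thread, not an implemented API):
```js
// Sketch: consuming a promise-valued IIFE global from another script tag.
myBundle.then((exports) => {
  // Only here are the exports actually available.
  console.log(exports.default);
});
```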
sequelize/sequelize
167449666
Title: model.schema(....) not setting model.$schema Question: username_0: ## What you are doing? ```js let myModel = sequelize.define('foo', {...}); myModel.schema('test'); myModel.find({...}); ``` ## What do you expect to happen? I want generated SQL statements to prefix the correct schema ("foo") to the model's table name. e.g. `select ... from "foo"."test"...` ## What is actually happening? The schema provided is not found in the generated SQL. This used to work in a previously used version... Updating my code to call `myModel.$schema = 'foo';` makes this work as expected. But the documentation should reflect that and/or the `.schema()` should be fixed/removed __Dialect:__ postgres __Database version:__ 9.4 __Sequelize version:__ 4.0.0-0 Answers: username_1: If this worked previously it sounds like it might be a bug, `schema` is supposed to return a cloned object that you then use. username_0: Although the [latest docs](http://docs.sequelizejs.com/en/latest/api/model/#schemaschema-options-this) state "Apply a schema to this model.", which is how it used to work username_2: Is there a reason why it clones the model class? It may have unintended side effects with the inheritance in v4 username_1: @username_2 So you can have multiple schemas active, IIRC. @username_0 Hmm, yeah - maybe you're right. username_2: @username_1 maybe this could be solved better by a `Model.clone()` method
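For anyone skimming this later, a minimal sketch of the usage pattern username_1 describes — `schema()` returns a scoped clone that you must use, rather than mutating the original model (names taken from the question):
```js
// schema() hands back a schema-scoped copy; queries must go through it.
const myModel = sequelize.define('foo', { /* attributes */ });
const scoped = myModel.schema('test');

// Generates: SELECT ... FROM "test"."foos" ...
scoped.find({ /* options */ });
```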
kubernetes/autoscaler
384441027
Title: AWS cloud provider tests take 120 seconds
Question: username_0: The problem started to occur after merging https://github.com/kubernetes/autoscaler/commit/a36f8007afa65fcb2ff7ae291ee815be5512e01c
Each of the 6 test methods tries to get the region from the AWS API (see the aws_manager.go `getRegion` method). This call times out after 20 seconds.
@username_1 Is there a chance that you could fix that?
Answers:
username_1: @username_0, certainly! Inside of AWS this returns instantly, and outside, it will always time out, so I'm thinking of adding a 1 second timeout around `svc.Region()`.
username_0: I was thinking more that you could mock the http call in tests so it returns immediately. Depending on the build environment and sleeping (even 1s) in unit tests are both bad practices.
username_2: This. Or set the env var in tests. Or read the env var earlier, e.g. when constructing the AWS manager, and pass it as a parameter. Etc.
username_1: @username_0 @username_2 I like the environment variable idea. I had already mocked out another environment variable lookup when it occurred to me: If the tests (indirectly) relying on `getRegion` supply an `AWS_REGION` env variable, there is no timeout. Is this approach acceptable? Or will this throw something off? I think tests using `createAWSManagerInternal` would need it, and that's about it.
```
$ git grep createAWSManagerInternal
aws_manager.go:func createAWSManagerInternal(
aws_manager.go:	return createAWSManagerInternal(configReader, discoveryOpts, nil, nil)
aws_manager_test.go:	m, err := createAWSManagerInternal(nil, do, &autoScalingWrapper{s}, nil)
aws_manager_test.go:	m, err := createAWSManagerInternal(nil, cloudprovider.NodeGroupDiscoveryOptions{}, nil, &ec2Wrapper{s})
aws_manager_test.go:	m, err := createAWSManagerInternal(nil, cloudprovider.NodeGroupDiscoveryOptions{}, nil, &ec2Wrapper{s})
aws_manager_test.go:	m, err := createAWSManagerInternal(nil, do, &autoScalingWrapper{s})
```
I could set and reset it for that test alone, unless it's okay to set it for _all_ tests. Is there some central location where environment variables for tests are configured?
username_2: please do it just for tests that require it (can be part of sth like createTestAWSManager ofc)
username_3: @username_0 @username_1 shouldn't it be closed once #1490 is merged?
Status: Issue closed
username_0: Yeah, closing. Thanks.
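A small sketch of the per-test environment-variable approach settled on above (Go; the helper name is hypothetical — the point is that setting `AWS_REGION` lets `getRegion` resolve without ever calling the metadata endpoint and timing out):
```go
package aws

import (
	"os"
	"testing"
)

// withTestRegion pins AWS_REGION for the duration of one test so that
// getRegion never queries the EC2 metadata service (and never times out).
func withTestRegion(t *testing.T, region string, fn func()) {
	t.Helper()
	old, had := os.LookupEnv("AWS_REGION")
	os.Setenv("AWS_REGION", region)
	defer func() {
		if had {
			os.Setenv("AWS_REGION", old)
		} else {
			os.Unsetenv("AWS_REGION")
		}
	}()
	fn()
}
```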
dependabot/feedback
482382743
Title: K8s, Helm, Repos as Code recommendations
Question: username_0: Dear Dependabot Team
We are starting to use dependabot and are super happy! Thanks for creating this awesome tool!
That said, our repo will soon be migrated into a GitHub Enterprise account. We can no longer use the hosted application and need to host a dependabot ourselves.
Given:
- Kubernetes landscape
- Helm charts
- Mostly node/java developers
Wanted:
- Helm-deployed dependabot using your `.Dockerfile`
- Reading the GH token from a sidecar-mounted Vault container
- Checking repos which are defined "as code", like a config file that lists all repos to check
Seen:
I've checked your code `.Dockerfile` and your `dependabot-script` repo. I think what we want is somewhere in between, but we don't really have a clue how to start, and should have it if possible by Aug 26th.
Can you point us in the right direction for the wanted bullets? Happy to provide more information if needed!
Thanks and best regards
Answers:
username_1: @username_0 hey sorry for missing this one! We currently don't have any good tools for self-hosting Dependabot so won't be able to help with much beyond what we have in `dependabot-core` and `dependabot-script`.
We have some plans on making it easier to run `dependabot-core` given a `.dependabot/config.yml` file but can't promise when we'll get to this. You'll have to write your own runner that wraps something similar to the example in `dependabot-script` in a docker container and execute it with your credentials and the repository to check.
Sorry to not be of more help! 😢
username_2: Thanks anyway, our infra-rock ⭐️'s are already setting up the self-hosted approach 👍 Can let you know as soon as we have it how it was done 💯
username_3: I'm interested to hear how you set it up too.
username_2: It is scheduled for this sprint, let you know more asap! 👍🏼
username_4: Any news here @username_2? Interested here as well!
username_2: @username_4 Sadly not. I postponed checking it as I have been promised that Actions come to GHE in August and with that Dependabot.... Let's see.
username_4: right on, all good :) thanks for the quick response~
DFortun81/AllTheThings
374736219
Title: Stormsong Valley
Question: username_0: 1. Achievement: Adventurer of Stormsong Valley does not list all creatures you need to kill if you only hover over the achievement itself.
2. World quests: 52347 and 52344 are displayed instead of the quest names.
3. Sister Lilyana -> Storm's Wake Tabard displays several "Missing in ATT" entries when hovering over it.
Answers:
username_1: 1. Not all mobs are listed there because the criteria will be listed with the rare. I'm in the process of moving them around to match the new format going forward.
2. There's nothing we can do about that. That's because when we ping the server for the quest name Blizzard essentially returns "Invalid Quest" or a "not active" quest. So you'll see that from time to time unfortunately because they coded their world quests for BfA weirdly. If it's a quest to kill a specific rare we can bypass those, but only those.
3. If it says "Retrieving Data (Missing in ATT)" this is because Blizzard originally created an item that shared the appearance, but deleted the item and kept it in the Share Appearance database (which isn't always good), so you see the result of their database returning an invalid ID. Which we can't do anything about other than to ask Blizzard to clean up their databases.
Status: Issue closed
flutter/flutter
612646023
Title: I'm trying to run the basic App Brewery code from section 1 on the emulator and it gives me the following error
Question: username_0: * Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1m 20s
Finished with error: Gradle task assembleDebug failed with exit code 1
Status: Issue closed
python-spain/asociacion
833122538
Title: Email sending from the Discourse forum is not working
Question: username_0: I think there may be some problem with Discourse's mail configuration.
- A new member signed up but the activation email never arrived, and I had to activate them by hand.
- If I go to the settings panel and try to send a test email, it gives me an error.
I can see that the last email sent is from January 14th, so something definitely seems to be broken.
Answers:
username_0: Could it be https://www.scaleway.com/en/faq/why-can-i-not-send-any-email/ ?
The logs show the following:
![image](https://user-images.githubusercontent.com/41953/111370707-d7e21100-8698-11eb-9e10-d8bba40aa6f4.png)
https://meta.discourse.org/t/issue-with-smtp-server-setup/139035
Status: Issue closed
username_0: Fixed! It was https://meta.discourse.org/t/discourse-smtp-sends-ehlo-localhost-instead-of-domain-breaking-google-smtp-relay/176755/19. From now on Google no longer accepts generic domains in the greeting.
We fixed it by adding this variable in `containers/new.yml`:
```
DISCOURSE_SMTP_DOMAIN: comunidad.es.python.org
```
Important: do NOT set a username and password, since we have it configured to use the relay without authentication. If you try with authentication it always fails, and we don't know why :(
username_0: Note: you can set `DISCOURSE_SMTP_ENABLE_START_TLS: true` and it works, so the content is encrypted.
username_0: We've run into this problem again. When updating Discourse to the latest version on October 28th we used the `app.yml` template instead of `new.yml`, which created a container with the configuration that had this problem. What I did:
- Renamed `app.yml` to `old-do-not-use.yml` in the `containers` folder.
- Renamed `new.yml` to `app.yml` in the same folder.
- Ran `./launcher rebuild app`, crossed my fingers and... voilà!
username_1: Great, thanks @username_0!
microsoft/botframework-sdk
1046750726
Title: Gateway Timeout message intermittently
Question: username_0: Hi, I'm experiencing this issue from time to time, as shown in the screenshot below. The bot works fine most of the time, except when this "Gateway Timeout" happens. I'll be glad if you can take a look! Thanks.
![Captura de tela 2021-11-05 191504](https://user-images.githubusercontent.com/21314049/140647467-d3bc63d4-9484-4563-b6cc-4decfb9f8e45.png)
Answers:
username_1: Hi @username_0, Can I get some more information from you so we can try to narrow down your problem?
- What channel are you using?
- Is this a .NET or Node bot?
- Is this a Composer bot?
- What versions of each tool/SDK are you using?
- How are you testing this?
- Is this an example bot or a custom bot?
- When did this issue begin?
- Is there any pattern to this problem that you're aware of?
- Any other relevant info that might be helpful?
username_0: What channel are you using? **Direct Line**
Is this a .NET or Node bot? **.NET**
Is this a Composer bot? **No**
What versions of each tool/SDK are you using?
"Microsoft.Bot.Builder.Azure/4.1.5": {
"dependencies": {
"Microsoft.Azure.DocumentDB.Core": "2.1.2",
"Microsoft.Bot.Builder": "4.1.5",
"Newtonsoft.Json": "10.0.3",
"SourceLink.Create.CommandLine": "2.8.1",
"System.Threading.Tasks.Extensions": "4.4.0",
"WindowsAzure.Storage": "9.3.2"
}
How are you testing this? **This is a customer production environment, and I believe the problem occurs when the bot stays idle for long periods, but I'm just guessing.**
Is this an example bot or a custom bot? **Custom bot**
When did this issue begin?
![image](https://user-images.githubusercontent.com/21314049/140922692-c74df63d-3c88-4ec1-ba4d-f491a0f5828c.png)
Is there any pattern to this problem that you're aware of? **I believe the problem occurs when the bot stays idle for long periods, but I'm just guessing.**
Any other relevant info that might be helpful? **This bot has been in production since May and was never redeployed.**
username_1: Ok then, two more things.
1. Your Bot Builder SDK version is on the older side, and a lot has changed through subsequent development. It's possible that upgrading the version might help.
2. It could also be that some API call or something within your code is timing out, and that's causing your bot to time out due to an inability to respond. Is there some way we can take a look at your bot's code? (Devoid of any keys or secrets of course)
Status: Issue closed
username_1: Closing due to inactivity
qvacua/vimr
593632242
Title: Interactive shell setting save will not persist
Question: username_0: I want to uncheck the interactive shell setting, but whenever I relaunch vimr it is back to being turned on.
Also, as an aside, I noticed that my coc.nvim colorscheme stops working in the preview window (the show_documentation window for functions). The text that is supposed to be colored has backticks around it.
Answers:
username_0: Temporary fix: `rm ~/Library/Preferences/com.qvacua.VimR.plist && killall cfprefsd`
username_1: I also had this problem, was able to work around it like so:
- open vimr
- check (or uncheck, whatever you want) this checkbox
- open a new vimr window
- close the old one
- close the new one (and quit)
- open vimr again, your pref should be persisted
axnsan12/drf-yasg
444307338
Title: Add DRF Token Auth to swagger
Question: username_0: How do I add DRF [token auth](https://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication) to the swagger docs? It is not clear from the [documentation](https://drf-yasg.readthedocs.io/en/stable/security.html) how it should be declared.
Answers:
username_1: ```python
SWAGGER_SETTINGS = {
   'SECURITY_DEFINITIONS': {
      'DRF Token': {
            'type': 'apiKey',
            'name': 'Authorization',
            'in': 'header'
      }
   }
}
```
should be enough. You'll have to manually add the `Token` prefix (or whatever you set) to the token input in `swagger-ui`.
Status: Issue closed
username_2: @username_1 the class `TokenAuthentication` from drf already uses the `Token` prefix, so shouldn't it be used as the default in drf-yasg? Is there anything in the docs that shows how to add the `Token` prefix to the ui input?
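For reference, with DRF's default `TokenAuthentication` the value typed into the swagger-ui authorize dialog (or sent by any client) must carry the `Token` keyword itself, since the apiKey scheme above passes the field through verbatim. A quick client-side illustration (the token value and URL here are made up):
```python
import requests

# DRF's TokenAuthentication expects: Authorization: Token <key>
headers = {"Authorization": "Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b"}
response = requests.get("https://example.com/api/things/", headers=headers)
print(response.status_code)
```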
ch2i/LoraGW-Setup
366437849
Title: Monitor doesn't start due to bad/not compiled _rpi_ws281x.so file Question: username_0: Hi, 1st of all, thanks for the great work you've done (and are still doing), the main function, working as a TTN Lorawan gateway, is working perfectly. I have the Pi0W + RAK831 board, PCB version 1.3 (small PCB). There is one issue though, the monitor doesn't properly start because the WS281X lib is not properly compiled. The file is in the expected location but is 0 bytes. A bit too short :) Below is the log for the monitor service. What could I check/do to make it work? Compile something again differently/by itself? Thanks, Geert ``` Oct 03 18:48:42 loragw-ac6d systemd[1]: Started LoraGW monitoring service. Oct 03 18:48:46 loragw-ac6d monitor[1438]: Traceback (most recent call last): Oct 03 18:48:46 loragw-ac6d monitor[1438]: File "/opt/loragw/monitor.py", line 25, in <module> Oct 03 18:48:46 loragw-ac6d monitor[1438]: from neopixel import * Oct 03 18:48:46 loragw-ac6d monitor[1438]: File "build/bdist.linux-armv6l/egg/neopixel.py", line 5, in <module> Oct 03 18:48:46 loragw-ac6d monitor[1438]: File "build/bdist.linux-armv6l/egg/_rpi_ws281x.py", line 7, in <module> Oct 03 18:48:46 loragw-ac6d monitor[1438]: File "build/bdist.linux-armv6l/egg/_rpi_ws281x.py", line 6, in __bootstrap__ Oct 03 18:48:46 loragw-ac6d monitor[1438]: ImportError: /root/.cache/Python-Eggs/rpi_ws281x-1.0.0-py2.7-linux-armv6l.egg-tmp/_rpi_ws281x.so: file too short Oct 03 18:48:46 loragw-ac6d systemd[1]: monitor.service: Main process exited, code=exited, status=1/FAILURE Oct 03 18:48:46 loragw-ac6d systemd[1]: monitor.service: Unit entered failed state. Oct 03 18:48:47 loragw-ac6d systemd[1]: monitor.service: Failed with result 'exit-code'. Oct 03 18:48:52 loragw-ac6d systemd[1]: monitor.service: Service hold-off time over, scheduling restart. Oct 03 18:48:52 loragw-ac6d systemd[1]: Stopped LoraGW monitoring service. ``` Answers: username_0: make: Entering directory '/opt/nodejs/lib/node_modules/rpi-ws281x-native/build' CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/ws2811.o CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/pwm.o CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/dma.o CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/mailbox.o CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/board_info.o AR(target) Release/obj.target/rpi_libws2811.a COPY Release/rpi_libws2811.a CXX(target) Release/obj.target/rpi_ws281x/src/rpi-ws281x.o SOLINK_MODULE(target) Release/obj.target/rpi_ws281x.node COPY Release/rpi_ws281x.node COPY ../lib/binding/rpi_ws281x.node TOUCH Release/obj.target/action_after_build.stamp make: Leaving directory '/opt/nodejs/lib/node_modules/rpi-ws281x-native/build' + [email protected] updated 1 package in 81.539s /home/loragw/node_modules/rpi-ws281x-native -> /opt/nodejs/lib/node_modules/rpi-ws281x-native ``` username_0: UPDATE 2: It works !! I removed the rpi_ws281x subdirectory, ran the led.sh part of the setup again, let it re-compile all and now the LEDs are happily flashing green in turns. And the switch function to shutdown the OS properly is also working fine! Just some physical work now to properly box it. Very nice solution! Geert Status: Issue closed
benibela/internettools
159860155
Title: MimeType Question: username_0: Hello, In Synapse I define MimeType like this : ``` hsend : THTTPSend; hsend.MimeType := 'application/x-www-form-urlencoded'; ``` Trying to figure out how to do that here with httpRequest using synapseinternetaccess, tried few things but no luck. Any tips ? Answers: username_1: Hi, try something like: ```pascal defaultInternet.additionalHeaders.Text:= 'Content-Type: application/x-www-form-urlencoded'; ``` or ```pascal defaultInternet.contentTypeForData:= 'application/x-www-form-urlencoded'; ``` see: http://www.username_2.de/documentation/internettools/internetaccess.TInternetAccess.html#additionalHeaders username_0: Tried both, still doesn't work. username_2: Are you sending some data as 2nd parameter to httpRequest? The one-argument function only sends GET requests and with GET the content type is ignored. For the 2-argument version, urlencoded is actually the default content type username_0: Yes I am, and I have 2nd argument, that all works fine when there is no need for 'application/x-www-form-urlencoded' But when there is a need for 'application/x-www-form-urlencoded' it works the same way as with Synapse without hsend.MimeType := 'application/x-www-form-urlencoded'; username_2: Why is there a need? Perhaps something else is wrong username_0: When you send data with more pairs separated by & then there's a need. For example : username=foo&password=bar So, if you send just username=foo then there's no need for 'application/x-www-form-urlencoded' , but if you send username=foo&password=bar then there's a need. Nothing else is wrong :) username_0: Maybe if I give you example it would be clearer what the problem is. This is using synapse unit : ``` procedure TForm1.Button1Click(Sender: TObject); var hsend : THTTPSend; res : string; obj: ISuperObject; begin hsend:=THTTPSend.Create; hsend.MimeType := 'application/x-www-form-urlencoded'; WriteStrToStream(hsend.Document,'username=whatever'); hsend.HTTPMethod('POST', 'https://steamcommunity.com/login/getrsakey/'); obj := TSuperObject.ParseStream(hsend.Document, true); res := obj.AsString; memo1.lines.add(res); end; ``` Using superobject here for json parser but you can replace it with any other parser. Now, you don't need to change username and bother with that, just live it as it is and run the code. When you run it you will see something like this : `{"success":true,"token_gid":"a45581039c52a5c","timestamp":"562165850000","publickey_mod":"B80E1A6F0D54B4643D7872A57FA04CC1E3C9AF5345E813CA9FB01E35188175745189B1B49CEDE084AF1ED7DE99E178771999CEA2CC5F6D2D60EE32FAD3CE5BE6A0A4E0BBA6D38375463889A05688CCAF748DE24521857240460783519D82B556597FE78969292C0B3948D569B2E44F4C5D99EE41B07E771B3356448E72DBF83AB783945E8B3FA733C5E650D20F3D506AB7AC525D54DF2B1D982D6F2D6DA2CF281D3B9740AC62DDE7D041ADC044F3D94A53E3E5F5E336FA0966815C25EAE93B3B7362BDD5D81E5019737D7E2346AC32C0140D1E80BC98A9510CAD62C65D377540D03FCEB4C4F612614B29C1605CE194AFD97CC1683B50AF8282AE738122ED15AB","publickey_exp":"010001"} ` Now, if you remove or comment this line : hsend.MimeType := 'application/x-www-form-urlencoded'; you will see this result : {"success":false} And that's all I get by using internet tools, no matter what I try. Okay, so if you can get "{"success":true....." by using synapseinternetaccess that would be great and I would love to see how you did that. username_0: So, by using synapseinternetaccess like this it works actually, but I'll get back to you with the problem later. 
``` procedure TForm1.Button1Click(Sender: TObject); var LoginData : TStringlist; res : string; obj: ISuperObject; begin LoginData := TStringList.Create; LoginData.Add('username=whatever'); res := httpRequest('https://steamcommunity.com/login/getrsakey/', LoginData); memo1.lines.add(res); LoginData.Free; end; ``` username_0: Actually it works, sorry about this, it was my mistake ! Status: Issue closed
taivo/parse-push-plugin
250915354
Title: unsubscribe does not work (android only..) Question: username_0: I don't know why, but in iOS this works perfectly, and in Android, the unsubscribe is ignored without error messages. Any ideas? ``` if (notify_me_NOTE==true){ try{ ParsePushPlugin.subscribe(GlobTable, function(msg) { console.log("Subscription ParsePlugin OK") }, function(e) { alert("Notification setting failed: "+e); }); } catch(e) {console.log ("do not know Parse Push Plugin: "+e);} } if (notify_me_NOTE==false){ try{ ParsePushPlugin.unsubscribe(GlobTable, function(msg) { console.log("Unsubscribe ParsePlugin OK") }, function(e) { alert("Unsubscribe failed: "+e); }); } catch(e) {console.log ("do not know Parse Push Plugin: "+e);} } ```<issue_closed> Status: Issue closed
pulumi/pulumi-awsx
1009684469
Title: statistic error in cloudwatch metric Question: username_0: I am using pulumi version v3.13.2 with Typescript. While trying to create a metric for an autoScalingGroup, I encountered a strange error. The metric should count the amount of requests during the last 60 minutes. This is the code for my metric: ``` const call_count_metric = new awsx.cloudwatch.Metric({name: 'Count', namespace: 'AWS/ApiGateway', dimensions: { Resource: '/{proxy+}', Stage: '$default', Method: 'ANY', ApiId: apiGateway.id }, statistic: 'Sum', period: 3600 }) ``` But I get the following error: ` Error: [args.metric.statistic] must be one of "Minimum", "Maximum" or "Average", but was: Sum` This is strange, since I can create this very metric using the AWS UI without any issue, as seen in the following screenshot. ![bug](https://user-images.githubusercontent.com/2683549/135085782-c7abfca4-adf6-4d7c-bdb0-df9b208d8f6f.png) Is this a bug, or am I missing something? Thank you in advance! Answers: username_1: This error seems to come from this line: https://github.com/pulumi/pulumi-awsx/blob/af311d33f20aef8462ada78b494ea17ec16e767c/nodejs/awsx/autoscaling/stepScaling.ts#L224 Are you using the `autoscaling.StepScalingPolicy` resource? username_0: Thank you for the quick answer! Yes, I am indeed using a step scaling policy. The goal is to set the amount of fargate tasks based on the metric count. ``` myScalingGroup.scaleInSteps("scale", { metric: call_count_metric, adjustmentType: "ExactCapacity", steps: { lower: [{ value: 60, adjustment: 3 }, { value: 40, adjustment: 2 }, { value: 20, adjustment: 1 }], }, }); ``` Since the aforementioned error was specifically referring to the metric, I didn't think I would be a problem with the scaling method. Again, configuring this scaling setup in the UI works like a charm. But I figure this solution may be suboptimal as a scaling concept, so I am open for better ways to do it. username_1: I'm not super familiar with this part of AWS to be honest. The error above was created explicitly to limit the possible types of statistic aggregation type to those three values. If you say this is wrong, I'm open to removing that check altogether, it was added almost 3 years ago after all. What do you think? username_0: Sadly I am far from being an expert on the matter. So I am not sure what possible implications this may have for other metrics.
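For context, the guard being discussed boils down to an allow-list check along these lines, and the fix would be to extend or drop that list (a sketch only, not the actual awsx source):
```typescript
// Hypothetical shape of the validation in stepScaling.ts.
const allowedStatistics = ["Minimum", "Maximum", "Average", "Sum"]; // "Sum" added

function validateStatistic(statistic: string): void {
    if (allowedStatistics.indexOf(statistic) < 0) {
        throw new Error(
            `[args.metric.statistic] must be one of ${allowedStatistics.join(", ")}, ` +
            `but was: ${statistic}`);
    }
}
```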
opencpu/opencpu
718425962
Title: webhooks: GitHub default branch "main" (instead of "master") broke opencpu.io deployment
Question: username_0: Hi @username_1, just realised that the GitHub webhooks (https://www.opencpu.org/api.html#api-ci) are not working for my latest repo (`main` as default branch for GitHub since 2020-10-01), as you only allow builds for `master`.
Would be great if you could allow `main` for deploying on opencpu.io. Many thanks in advance!
Answers:
username_1: Hmm I thought it was working. I pushed a small fix, can you try again?
username_0: Thanks. Now I get `We couldn’t deliver this payload: timed out`
Any ideas?
username_1: That happens if the installation takes a while, but usually it works anyway! You should get an email when it has deployed.
username_0: Ok. Got Response 400. So this should be ok, right, despite `Rscript failed: Loading config from /usr/lib/opencpu/library/opencpu/config/defaults.conf` ?
headers:
```
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Origin, Content-Type, Accept, Accept-Encoding, Cache-Control, Authorization
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location, X-ocpu-session, Content-Type, Cache-Control
Cf-Cache-Status: DYNAMIC
Cf-Ray: 5e000bd87ff7f9f7-IAD
Cf-Request-Id: 05b3e5bb4c0000f9f72bbd3200000001
Content-Type: text/plain; charset=utf-8
Date: Sat, 10 Oct 2020 11:36:53 GMT
Expect-Ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Nel: {"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?lkg-colo=16&lkg-time=1602329814"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
Set-Cookie: __cfduid=da7726dc60d19052d2b0a02ff74877acd1602329813; expires=Mon, 09-Nov-20 11:36:53 GMT; path=/; domain=.opencpu.org; HttpOnly; SameSite=Lax
Vary: Accept-Encoding
X-Ocpu-Locale: en_US.UTF-8
X-Ocpu-R: R version 4.0.2 (2020-06-22)
X-Ocpu-Server: rApache
X-Ocpu-Time: 2020-10-10 11:35:05 UTC
X-Ocpu-Version: 2.2.0
```
body:
```
Rscript failed: Loading config from /usr/lib/opencpu/library/opencpu/config/defaults.conf
Loading config from /etc/opencpu/server.conf
Downloading GitHub repo kwb-r/kwb.heatsine.opencpu@main
Downloading GitHub repo kwb-r/kwb.heatsine@62125b3b652e823a1ec543e2e2ca04b531e0ca43
Downloading GitHub repo kwb-r/kwb.utils@HEAD
Running `R CMD build`...
* checking for file ‘/tmp/ocpu-temp/remotes475ee38b8fa97/KWB-R-kwb.utils-c9f447a/DESCRIPTION’ ... OK
* preparing ‘kwb.utils’:
* checking DESCRIPTION meta-information ... OK
* installing the package to build vignettes
* creating vignettes ... OK
* checking for LF line-endings in source and make files and shell scripts
* checking for empty or unneeded directories
* building ‘kwb.utils_0.7.0.tar.gz’
Installing package into '/usr/local/lib/opencpu/apps/ocpu_github_kwb-r_kwb.heatsine.opencpu_00TMP'
(as 'lib' is unspecified)
* installing *source* package ‘kwb.utils’ ...
** using staged installation ** R ** inst ** byte-compile and prepare package for lazy loading ** help *** installing help indices ** building package indices ** installing vignettes ** testing if installed package can be loaded from temporary location ** testing if installed package can be loaded from final location ** testing if installed package keeps a record of temporary installation path * DONE (kwb.utils) Installing 51 packages: BH, base64enc, lazyeval, htmltools, generics, dplyr, ps, processx, backports, prettyunits, callr, rstudioapi, rprojroot, pkgbuild, desc, praise, pkgload, colorspace, viridisLite, RColorBrewer, munsell, labeling, farver, testthat, withr, scales, isoband, gtable, zoo, intervals, plyr, FNN, spacetime, reshape, maptools, sp, automap, gstat, e1071, xts, data.table, crosstalk, hexbin, tidyr, htmlwidgets, httr, ggplot2, hydroTSM, plotly, lubridate, hydroGOF [Truncated] trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/farver_2.0.3.tar.gz' Content type 'binary/octet-stream' length 2334202 bytes (2.2 MB) ================================================== downloaded 2.2 MB trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/testthat_2.3.2.tar.gz' Content type 'binary/octet-stream' length 2952006 bytes (2.8 MB) ================================================== downloaded 2.8 MB trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/withr_2.3.0.tar.gz' Content type 'binary/octet-stream' length 201830 bytes (197 KB) ================================================== downloaded 197 KB trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/scales_1.1.1.tar.gz' Content type 'binary/octet-stream' length 546791 bytes (533 KB) ====================== ``` username_0: Got now response 200 but again `Ignoring non-master: refs/heads/gh-pages (default/master branch is 'main')` ![grafik](https://user-images.githubusercontent.com/11964451/95654761-7bcffb00-0b02-11eb-8e7e-46e3e77ab2b5.png) username_0: Works now. Many thanks! Status: Issue closed
dalen/puppet-puppetdbquery
105673158
Title: Support for the v4 api ?
Question: username_0: PuppetDB 3.1 no longer supports the v1, v2, and v3 API versions so I was wondering if there are plans to add support for v4.
Answers:
username_1: Maybe something like this?
```
# Logic to support PuppetDB API versions 3 and 4
# and the associated PuppetDB terminus interfaces
if Puppet::Util::Puppetdb.config.respond_to?("server_urls")
  uri = URI(Puppet::Util::Puppetdb.config.server_urls.first)
  server = uri.host
  port = uri.port
  endpoint = "/pdb/query/v4/resources"
else
  server = Puppet::Util::Puppetdb.server
  port = Puppet::Util::Puppetdb.port
  endpoint = "/v3/resources"
end
```
username_2: Will also need to change all instances of `select-` to `select_` as the key names have changed in v4.
username_3: :+1: . Is there any plan to support the v4 API in the near future?
username_4: Looks like partial support was added in e20eea84d2d6ee29698625c19de727c089f414e4. I submitted pull request https://github.com/username_5/puppet-puppetdbquery/pull/62
It replaces
```
Puppet::Util::Puppetdb.server
Puppet::Util::Puppetdb.port
```
with
```
uri = URI(Puppet::Util::Puppetdb.config.server_urls.first)
uri.host
uri.port
```
username_5: I just pushed the release of 2.0.0 now. And it actually has full support for structured facts as well :)
Hope it works well (it does in my testing). But report any bugs you find.
Status: Issue closed
ScanNet/ScanNet
823162274
Title: Can't browse and query ScanNet data online!
Question: username_0: Hello, @username_1
These two links are not available on the ScanNet website:
http://kaldir.vc.in.tum.de/scannet_browse/scans/scannet/querier
http://kaldir.vc.in.tum.de/scannet_browse/scans/scannet/grouped
![image](https://user-images.githubusercontent.com/50020414/110130544-6c12c500-7dc9-11eb-88a9-1f7404db6eaf.png)
I would like to check whether certain object labels are contained in the scans before downloading them. Is there any way to do that besides querying the data online?
Status: Issue closed
Answers:
username_1: browser should work now
kwonoj/hunspell-asm
485932436
Title: Lib not working in IE
Question: username_0: When I try to load the module in IE11 I see the warning "No WebAssembly support found. Build with -s WASM=0 to target JavaScript instead." in the browser console. Can you rebuild hunspell as asm.js and use it as a fallback?
flori/json
353951336
Title: Better error message for "JSON::ParserError at"
Question: username_0:
```
json (2.1.0) lib/json/common.rb, line 156
```
The error message is coming from here:
```
https://github.com/flori/json/blob/master/lib/json/common.rb#L155
```
Is there a way to tell it's invalid JSON instead of throwing JSON::ParserError? It seems like JSON::ParserError means it's invalid JSON. But is that always true?
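For anyone with the same question, the usual call-site pattern is to treat `JSON::ParserError` itself as the "invalid JSON" signal (plain Ruby, nothing beyond the stdlib):
```ruby
require 'json'

# Returns true if str parses as JSON; JSON::ParserError is the library's
# way of saying the input is not valid JSON.
def valid_json?(str)
  JSON.parse(str)
  true
rescue JSON::ParserError
  false
end
```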
Sosuke115/paper-reading
783141497
Title: MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices [ACL, 2020]
Question: username_0: ## In one line
Making BERT smaller and faster.

## Paper link
https://www.aclweb.org/anthology/2020.acl-main.195

## Summary
Uses knowledge distillation from BERT-large to build a model with fewer parameters and faster inference that matches BERT-base performance on GLUE and SQuAD.

## Differences from prior work
Task-agnostic (it can be fine-tuned on downstream tasks just like a regular BERT).
The teacher is used only during pre-training.
Training shrinks the hidden dimensions rather than the depth of the network.

## Key points of the method
![Screen Shot 2021-01-11 at 16 26 07](https://user-images.githubusercontent.com/44390274/104155067-b90d9700-5429-11eb-8e09-666d0ac28626.png)
Introduces linear transformations with matched dimensions (bottlenecks) so that the feature-map dimensions of teacher and student line up.
Feature Map Transfer: knowledge distillation via the squared error between feature maps (the output of each transformer layer).
Attention Transfer: knowledge distillation via the KL divergence between attention maps.
![Screen Shot 2021-01-11 at 16 29 00](https://user-images.githubusercontent.com/44390274/104155327-4bae3600-542a-11eb-83c8-fae518909f02.png)
Progressive Knowledge Transfer distills one layer at a time, starting from the bottom layers.
Pre-training Distillation: training with MLM + NSP + KD-MLM.

## Evaluation
Performance on par with BERT-base on GLUE and SQuAD.

## Related papers
[TinyBERT](https://www.aclweb.org/anthology/2020.findings-emnlp.372.pdf)
[DistilBERT](https://arxiv.org/abs/1910.01108)
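As a hedged rendering of the feature-map transfer objective summarized above (notation assumed from the paper: $T$ is the sequence length, $N$ the feature-map size, $\ell$ the layer index, and $tr$/$st$ mark teacher and student):

$$\mathcal{L}_{FMT}^{\ell} = \frac{1}{TN}\sum_{t=1}^{T}\sum_{n=1}^{N}\left(H^{tr}_{t,\ell,n} - H^{st}_{t,\ell,n}\right)^{2}$$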
wirepas/c-mesh-api
463556843
Title: comparison of uint8_t with constant 256
Question: username_0: In dsap.c[[1](https://github.com/wirepas/c-mesh-api/blob/60343e0d5e15a073927711a6f336fc46aed6cb7e/lib/wpc/dsap.c#L250)], there is a comparison between a uint8_t and the constant 256 which is always true. Is this intentional?
Answers:
username_1: Yes it is intentional. The main reason is that the 256 value is a constant that could be reduced in the future if we want to reduce the number of EPs to register.
I guess it generates a warning in your compiler.
I can see two options to fix it:
- Remove the test completely. It could lead to bugs in the future if the constant has a value lower than 256, but it will probably never happen
- Cast the variable to a uint16_t
I think the first option is fine
Status: Issue closed
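For readers outside the thread, the pattern in question looks roughly like this (a sketch with made-up names, not the actual dsap.c code):
```c
#include <stdint.h>

#define MAX_NUMBER_EP 256u /* deliberately kept as a tunable constant */

/* While MAX_NUMBER_EP is 256, a uint8_t can never reach it, so the guard
 * below is always true and compilers flag the comparison as vacuous. */
static int is_valid_endpoint(uint8_t ep)
{
    return ep < MAX_NUMBER_EP;
}
```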
mennooo/orclapex-modal-lov
408064312
Title: Problem with special char
Question: username_0: Hi, I'm glad to use your plug-in, but I have a problem with a specific entry in my list. For example, if I have:
Display: W1WW+123
Return: 123456
I cannot find it in the list; if I paste it, I see no results. I came up with the idea that I could be having a problem with the "+" character. The workaround for this is to add a "/" before the "+"; then browsing the list is possible, but as you know it's very time-consuming.
Is it possible to fix it?
Kind regards,
MsK
barberscore/barberscore-api
461725047
Title: penalty footnote
Question: username_0: A penalty applied for a group leaving in the semis shows up on the Finals unexpectedly. [BHS Spring 2019 International Convention Quartet Finals OSS.pdf](https://github.com/barberscore/barberscore-api/files/3336441/BHS.Spring.2019.International.Convention.Quartet.Finals.OSS.pdf)
Status: Issue closed
pingcap/dm
855763827
Title: can't continue sync when switching GTID from on to off
Question: username_0: ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a recipe for reproducing the error.
https://asktug.com/t/topic/69319/36?u=username_0
2. What did you expect to see?
Continue syncing from the previous location
3. What did you see instead?
Syncing from the start
Answers:
username_0: closed by https://github.com/pingcap/dm/pull/1723
relay's issue has been separated into https://github.com/pingcap/dm/issues/1460
Status: Issue closed