repo_name: stringlengths 4-136 | issue_id: stringlengths 5-10 | text: stringlengths 37-4.84M
macournoyer/neuralconvo
158547018
Title: /rnn/SequencerCriterion.lua:42: expecting target table

Question: username_0: Hello, I tried to train convo with some variations of the following:

$ th train.lua --opencl --dataset 5000 --hiddenSize 100

I am running with OpenCL (ATI Radeon) or plain CPU, both causing the following error:

```
libthclnn_searchpath /home/jack/torch-cl/install/lib/lua/5.1/libTHCLNN.so
-- Loading dataset
Loading vocabulary from data/vocab.t7 ...
Dataset stats:
Vocabulary size: 10682
Examples: 15877
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Oland
-- Epoch 1 / 50
/home/jack/torch-cl/install/bin/luajit: ...k/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:42: expecting target table
stack traceback:
	[C]: in function 'assert'
	...k/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:42: in function 'forward'
	./seq2seq.lua:80: in function 'train'
	train.lua:85: in main chunk
	[C]: in function 'dofile'
	...k/torch-cl/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d30
```

Would you know what the issue is here? Unfortunately I am quite new to Lua.

Answers:

username_1: Had the same issue. It happened right after I updated a couple of modules (including rnn). Trying to work through it now...

username_1: The issue seems to be that the decoder is outputting a table of tensors instead of a tensor, and the criterion wants just tables or just tensors. The weird thing is that in the example on the rnn (encoder-decoder example) page, the decoder correctly outputs the tensor, even though the example is basically the same model as this. This is a tough one!

username_2: I had the same problem. This was broken by this commit to the rnn dependency (https://github.com/Element-Research/rnn/commit/a1373c4aaf8c3a40c41332b35559fd77a64b815b), so if you check out rnn directly from GitHub, revert to the commit before this one (git checkout 14aff64132aa90339b6d510604a2b090f6509300), and then copy and paste all the .lua files from your rnn checkout directory to the torch rnn dependency subdirectory (in your case /torch/install/share/lua/5.1/rnn/), it will work.

username_1: I got it working again with the newer commit, but I had to rewrite the input data so that it is a seqlength x 1 tensor as opposed to just a seqlength tensor. The code is something like this: torch.Tensor({table_of_wordids}):t() I also integrated some of the newer features from the rnn example. I'm testing it now... if it works I'll post the fork (feel free to message me if you'd like to join in the testing fun). As a side note, I'm also rewriting the import script... it can't handle files larger than the LuaJIT VM size (there are tricks to get around this), and there is a bug if you try to shrink the vocab size.

username_3: Cool, I've just been training on my 2.7 GHz CPU without a GPU, which is slow even when you limit --dataset 5 --hiddenSize 5. It takes 2 hours to finish 1 epoch, and I got impatient waiting. And got this ![screenshot from 2016-06-06 10-26-31](https://cloud.githubusercontent.com/assets/1732471/15821582/5417c6a2-2bf9-11e6-9c6a-4564f5951559.png) which means I need a good GPU.

username_1: Yeah, it takes a long time to run these models without a good GPU. The smallest decent seq2seq model in the literature is 1 layer of 1000 units... so maybe it's time to buy yourself an early Christmas present!

username_3: By the way, I am planning on creating a neural net that predicts matches after training it on previous games data.
I don't know how hard this will be as a beginner in neural nets, since the maths involved is way beyond me. Otherwise I'm not such a bad programmer. If you can hint me on how to go about it (some math topics), that would be great.

username_0: Sorry, I have now dug into the suggestions extensively. For the first option, downgrading rnn: I suppose it is dependent on changes to nn as well, and that you need to check out rnn together with an nn from before the last commit to this repository to make it work. The second option is not clear to me either; could you be somewhat more specific regarding your implementation, username_1?

username_1: So I don't know where the change came from, but the long and short of it is that if you just change the inputs from a 1-dimensional tensor to a 2-dimensional tensor, it fixes things. So any place where you see torch.Tensor(table), you change it to torch.Tensor({table}):t(), and once you fix some of the outputs, it works..... If you guys can hold on a couple of days I can post my fork that has AdaGrad, runs a test set every few runs, and can do multilayer LSTMs.

username_0: Can't wait for it!!!

username_4: @username_1 Can't wait for the change!! Let us know when it's ready.

username_1: It's in the testing stage right now. Trying it with 9M talk turns from OpenSubs. I had some success with 200k talk turns, 2 layers, 1000 LSTM cells per layer, AdaGrad and 4 epochs:

Hi : Hello.
What is your name : What happened to you.
How old are you : Thirteen
What is the meaning of life : The getman is a sin of the earth.
Do you like swimming : I have to go to bed
It's been a long day : Like a woman, a star
goodbye : Good evening, maam.

username_1: If anyone wants to play with it while the bugs are still being fixed, I'd be happy to put it up now. But I should warn that the code has been going through big changes on a daily basis.

username_4: Hey @username_1, the results are looking nice. I would like to try it out. Can you please put it up (with a warning though)?

username_1: Sure... https://github.com/username_1/torchneuralconvo Let me know if you hit any bugs. I won't be able to handle a lot of pull requests until it's stabilized, though.

username_4: Thanks @username_1.. this is great! Will keep you posted with any bugs that I encounter.

username_4: Hi @username_1 @username_0 @username_6, I just checked out the 13th May commit and it seems to be working perfectly without any changes. I think some issue was introduced in the commits after 13th May. Thanks, Vikram

username_4: I checked out the 13th May commit and it works perfectly without any changes. I think the issue might have been introduced in the commits after 13th May.

username_5: Hi @username_4, what error are you getting? Did you try updating torch + the rnn package (we are using the new nn.Select)? If not, try:

luarocks install torch
luarocks install nn
luarocks install rnn

username_4: Thanks @username_5, it worked after the updates, but I am getting an out of memory error on the same dataset now.
```
-- Epoch 1 / 50
THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-2235/cutorch/lib/THC/generic/THCStorage.cu line=41 error=2 : out of memory
/home/ubuntu/DeepLearning/torch/install/bin/lua: ...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:458: cuda runtime error (2) : out of memory at /tmp/luarocks_cutorch-scm-1-2235/cutorch/lib/THC/generic/THCStorage.cu:41
stack traceback:
	[C]: in function 'resizeAs'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:458: in function 'momentumGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:485: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	...DeepLearning/torch/install/share/lua/5.2/dpnn/Module.lua:478: in function 'updateGradParameters'
	./seq2seq.lua:81: in function 'train'
	train.lua:88: in main chunk
	[C]: in function 'dofile'
	...ning/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: in ?
```

username_5: @username_4 The default now is a batch size of 10; before it was 1. I suggest trying to decrease --hiddenSize or use a smaller --batchSize. Try to find a balance between speed and size (batchSize 1 is very slow).

username_4: Thanks @username_5. Are you aware of any fundamental reasons/solutions for the out of memory issue? The Google papers mention that they are using 1000 hidden units, and that with a much bigger batch size and 4 layers. Do we need to use a better machine than an Amazon EC2 instance with a 4GB Nvidia GPU? Are you able to run this on your machines with 1000 hidden units and batch size 10? If yes, what configuration are you using?

username_4: @username_5 Correct me if I am wrong, but batch size would also impact the quality of the models. With a batch size of 1, we are doing online training, which may be noisy!

username_5: The Google paper uses a number (I think 8..) of GPUs. A big issue with the current settings is the vocabulary size, which makes the softmax layer huge, and due to a bug we can't control the vocabulary size. I'm working on it and will submit a PR soon.

username_4: @username_5 Yes, you are right. Anyway, let's not change the topic of this thread. If you don't mind, please share your email id at vikram.nov.1<EMAIL> so that I can discuss with you over email. Will delete the comment after you have noted the email.
username_1: Hey all, I've got a fork that can control vocab size: https://github.com/username_1/torchneuralconvo It seems to be working OK at this point, but let me know.

username_3: If anyone can visualize the seq2seq model with nngraph, please post back the graph + the code used to achieve that. I've tried, but I can't get it to display anything; only when I do `print(model)` do I get `neuralconvo.Seq2Seq`. Sorry, but I am new to this stuff. Thanks in advance.

username_1: OK... confirmed, this sucker is working: https://github.com/username_1/torchneuralconvo If you're working with these models, AdaGrad does wonders. The multilayer LSTM is nice as well. I've added a fixed vocab size and train/test splits. In the next month or so I'm planning to add beam search to the decoding. Let me know if anyone tries this and whether you see any issues.

username_4: Great work @username_1! It would be great if you can share some nice replies from the bot, or the perplexity?

username_6: @username_1 I'm also curious about your results. Could you share a sample conversation? Also, why AdaGrad?

username_1: I posted a conversation earlier in this thread from a set of 200k examples. Right now I'm on the second epoch of a 9M example set, so it has a little way to go (early on, these models end up answering 'I don't know' to a lot of prompts...). I'll share the results when I get another epoch or two down the road. On perplexity: this is pretty dataset- and vocab-size-specific. But on my current set (9 million movie examples + 300k of a domain-specific dataset, with a 30k vocab size) I have a perplexity of 7.46 at epoch 1.5 on the test set. But this won't be comparable to other datasets, or to the same dataset with a different vocab size. AdaGrad is pretty awesome. It has made drastic improvements on basically every NLP problem I've ever put it to. The reason is that it makes SGD updates that are inversely proportional to the running gradient total for that parameter. In other words, if you have seen the word "I" 1,000 times, the updates to the parameters associated with this word are much smaller than for a new word that you've never seen before. And it is so simple to implement. (Duchi's paper looks messy, but the algorithm is crazy simple... and torch can do it for you if you use the optim package.) Many people think this should now just be the default for SGD-type problems, because it isn't any harder to implement than momentum. I've even found it improves the solutions to sparse max-ent models!

username_6: Interesting. I'll take a look at AdaGrad. Thx @username_1. Can't wait to see your results!

username_7: I followed the thread and I am still getting the same issue:

th train.lua --dataset 50000.0 --hiddenSize 1000

```
-- Loading dataset
Loading vocabulary from data/vocab.t7 ...
Dataset stats:
Vocabulary size: 25931
Examples: 83632
-- Epoch 1 / 50
/home/wasim/torch/install/bin/luajit: ...m/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:42: expecting target table
stack traceback:
	[C]: in function 'assert'
	...m/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:42: in function 'forward'
	./seq2seq.lua:74: in function 'train'
	train.lua:85: in main chunk
	[C]: in function 'dofile'
	...asim/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d70
```

I tried updating via luarocks, and I tried reinstalling torch, but no luck. Please, can someone help me?

username_8: I also met this problem.. now I have fixed it..
I found it's because the SequencerCriterion only accepts tables: in the source code of SequencerCriterion.lua, most functions use code like this: **_for i, input in ipairs(inputTable) do ... end_** So, currently, you have to feed data into the SequencerCriterion class in the form of tables (not tensors).

username_7: When I converted the tensor to a table in Lua,

```
local inputTable = torch.totable(input)
print(inputTable)
local err = model:train(inputTable, target)
```

I am getting this error:

```
/home/wasim/torch/install/bin/luajit: /home/wasim/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
/home/wasim/torch/install/share/lua/5.1/nn/LookupTable.lua:59: attempt to call method 'isContiguous' (a nil value)
stack traceback:
	/home/wasim/torch/install/share/lua/5.1/nn/LookupTable.lua:59: in function 'makeInputContiguous'
	/home/wasim/torch/install/share/lua/5.1/nn/LookupTable.lua:71: in function </home/wasim/torch/install/share/lua/5.1/nn/LookupTable.lua:68>
	[C]: in function 'xpcall'
	/home/wasim/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
	/home/wasim/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
	./seq2seq.lua:71: in function 'train'
	train.lua:88: in main chunk
	[C]: in function 'dofile'
	...asim/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d70

WARNING: If you see a stack trace below, it doesn't point to the place where this error occured. Please use only the one above.
stack traceback:
	[C]: in function 'error'
	/home/wasim/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
	/home/wasim/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
	./seq2seq.lua:71: in function 'train'
	train.lua:88: in main chunk
	[C]: in function 'dofile'
	...asim/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d70
```

username_6: I think the original problem was resolved by updating nn and rnn:

```
luarocks install nn
luarocks install rnn
```

If not, feel free to re-open.

Status: Issue closed

username_3: Hi again. Is it possible to resume training, to save time in case of an interruption?

username_6: @username_3 Not at the moment, but it could be an option to load the model like in eval.th and resume training on it instead of starting from scratch. Shouldn't be too hard to implement.

username_9: Hi, I did update the nn and rnn packages as recommended. It still results in this error:

```
/home/picassoct/torch/install/bin/luajit: ...t/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:47: expecting target table
stack traceback:
	[C]: in function 'assert'
	...t/torch/install/share/lua/5.1/rnn/SequencerCriterion.lua:47: in function 'forward'
	./seq2seq.lua:74: in function 'train'
	train.lua:86: in main chunk
	[C]: in function 'dofile'
	...soct/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405d50
```

Can we have this reopened?

username_3: @username_9 That means that Lua is not able to access a file from the main script; make sure all your files exist and the names are OK.
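To make the shape requirements above concrete, here is a minimal Lua/Torch sketch (my own illustration with made-up sizes; it assumes the Element-Research rnn package, and `word_ids` is a hypothetical example table):

```lua
require 'rnn'

-- SequencerCriterion iterates over its arguments with ipairs, so predictions
-- and targets are parallel tables with one entry per time step.
local criterion = nn.SequencerCriterion(nn.ClassNLLCriterion())

local seqlen, nClasses = 3, 10
local outputs, targets = {}, {}
for t = 1, seqlen do
  outputs[t] = torch.randn(1, nClasses)                -- one batch row of scores per step
  targets[t] = torch.LongTensor{math.random(nClasses)} -- one class id per step
end
local loss = criterion:forward(outputs, targets)

-- The related shape fix from this thread: turn a flat table of word ids
-- into a seqlen x 1 tensor instead of a 1-D tensor.
local word_ids = {5, 12, 7}
local input = torch.Tensor({word_ids}):t()  -- 1 x seqlen becomes seqlen x 1
```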
dynamisdao/dynamisapp
183823172
Title: Bug: where is P2P identity verification?

Question: username_0: When a new user account is created + a new policy is submitted, we should have: 1) ONE **identity verification** task 2) ONE employment verification task for each job. Where are the 'identity verification' tasks? When the user goes to /policy (main screen), on one side panel: [![Screen Shot 2016-10-19 at 01.31.59.png](https://s21.postimg.org/4ngudd1jb/Screen_Shot_2016_10_19_at_01_31_59.png)](https://postimg.org/image/62if232mb/) 1) The name of this panel should be "Verification Tasks" 2) Show the list of tasks. Each item has only one field, **task type**, plus a PLUS at the end (like in the picture). Use this API: http://docs.dynamis1.apiary.io/#reference/0/peer-review-tasks This is how it looks in the OLD frontend: [![Screen Shot 2016-10-19 at 01.45.12.png](https://s9.postimg.org/43mi0rz9r/Screen_Shot_2016_10_19_at_01_45_12.png)](https://postimg.org/image/6xpne81fv/) I saw it working well before. Maybe we need to test with a NEW USER and it will be OK?

Status: Issue closed
choderalab/perses
274595040
Title: Very high energies in null hybrid system at intermediate lambda values

Question: username_0: @username_1, you told me on a call yesterday that you had very high energies in the null (where both endpoints are the same) hybrid system after 1000 steps of Langevin dynamics at the endpoint. Could you clarify/post the code you used to get that here, so that I can start working on this?

Answers:

username_1: Sorry for the delay. The code below is taken from [test_elimination](https://github.com/choderalab/perses/blob/master/perses/tests/test_elimination.py). I found that the SmallMoleculeSetProposalEngine._get_mol_atom_map function does not map the bridgehead carbons of naphthalene, leading to huge errors in the custom and harmonic bond forces:

```
-----------------------
Energy components at lambda=0
CustomBondForce        5562433841603.15 kJ/mol
HarmonicBondForce      1.559132968215998e+16 kJ/mol
CustomAngleForce       1047.314638742342 kJ/mol
HarmonicAngleForce     7102.562500935552 kJ/mol
CustomTorsionForce     9.495888666549268e-07 kJ/mol
PeriodicTorsionForce   2.8461500720252927e-05 kJ/mol
NonbondedForce         5.683903055989541e-05 kJ/mol
CustomNonbondedForce   -0.004548577877067871 kJ/mol
CustomNonbondedForce   -9.847170273164524e-20 kJ/mol
CustomBondForce        0.007901860449835528 kJ/mol
-----------------------
Energy components at lambda=1
CustomBondForce        5562433841603.15 kJ/mol
HarmonicBondForce      1.559132968215998e+16 kJ/mol
CustomAngleForce       1047.314638742342 kJ/mol
HarmonicAngleForce     7102.562500935552 kJ/mol
CustomTorsionForce     9.495888666549268e-07 kJ/mol
PeriodicTorsionForce   2.8461500720252927e-05 kJ/mol
NonbondedForce         5.683903055989541e-05 kJ/mol
CustomNonbondedForce   -0.004547877224575666 kJ/mol
CustomNonbondedForce   -9.847170029376088e-20 kJ/mol
CustomBondForce        0.007901860449835528 kJ/mol
------------------------
```

I believe I have fixed the MCS code in that function to avoid this, and will open a pull request with my implementation.
```
from simtk import openmm, unit
from simtk.openmm import app
import os, os.path
import sys, math
from unittest import skipIf
import numpy as np
from functools import partial
from pkg_resources import resource_filename
from openeye import oechem

if sys.version_info >= (3, 0):
    from io import StringIO
    from subprocess import getstatusoutput
else:
    from cStringIO import StringIO
    from commands import getstatusoutput

from openmmtools.constants import kB
from perses.annihilation.new_relative import HybridTopologyFactory

temperature = 300.0 * unit.kelvin
kT = kB * temperature
beta = 1.0 / kT

def simulate_hybrid(hybrid_system, functions, lambda_value, positions, nsteps=100,
                    timestep=1.0*unit.femtoseconds, temperature=temperature,
                    collision_rate=5.0/unit.picoseconds):
    platform = openmm.Platform.getPlatformByName("OpenCL")
    integrator = openmm.LangevinIntegrator(temperature, collision_rate, timestep)
    context = openmm.Context(hybrid_system, integrator, platform)
    for parameter in functions.keys():
        context.setParameter(parameter, lambda_value)
    context.setPositions(positions)

[Truncated]

# make the hybrid topology factory:
topology_proposal, positions = generate_hybrid_test_topology()
factory = HybridTopologyFactory(topology_proposal, positions, positions)
hybrid_system = factory.hybrid_system
hybrid_topology = factory.hybrid_topology
initial_hybrid_positions = factory.hybrid_positions

functions = {
    'lambda_sterics' : '2*lambda * step(0.5 - lambda) + (1.0 - step(0.5 - lambda))',
    'lambda_electrostatics' : '2*(lambda - 0.5) * step(lambda - 0.5)',
    'lambda_bonds' : 'lambda',
    'lambda_angles' : 'lambda',
    'lambda_torsions' : 'lambda'
}

equil_positions, initial_energy, final_energy = simulate_hybrid(hybrid_system, functions, 0.0, initial_hybrid_positions)
print("Initial energy: ", initial_energy)
print("Final energy: ", final_energy)
```

username_0: Thanks, taking a look now.

username_0: Ah, maybe because the default atom mapping criteria include `OEExprOpts_HvyDegree`, and those atoms have a higher heavy degree? Thanks for finding this, btw.

username_0: ![napthalene_null](https://user-images.githubusercontent.com/4674712/32917718-17fa2210-caee-11e7-8d74-0ba4d73882b8.png) OK, I'm noticing (see above) that the sterics force is going crazy as we switch lambda from 0 to 1. I also noticed that the regular `NonbondedForce` starts off at `nan` for some reason. I'm digging into this now. PS: apologies for the crappy legend.

username_1: @username_0 Yes, I have noticed this trend across multiple ligands as well. I have not been able to determine the cause yet, unfortunately.

username_2: Just a few observations to summarize our phone call with @username_1:

* Ring closures (like benzene -> naphthalene) are NOT currently supported by the HybridTopologyAlchemicalFactory. This should raise an exception, though we'd have to figure out how to detect ring opening/closing transformations. We should create a new issue for that.
* The `functions` you use above are meant for an absolute alchemical protocol. We should only be using the linear protocol (everything is set to `lambda`) for now. I think your example is based on [this test](https://github.com/choderalab/perses/blob/master/perses/tests/test_elimination.py#L167-L241) that should be running correctly on travis (but also uses the weird functions above).
* It would be great if you folks could provide a list of druglike transformations that we can add a test for, to make sure that the transformation engine works well for these. We can add a travis test to make sure a few steps of dynamics don't explode for lambda = 0 or 1. You can also annotate the transformations with a mapping so we can manually check we're not screwing these up.
* We can prioritize the ability to specify manual maps or manually specify atom mapping rules if you'd like. We already have [this issue](https://github.com/choderalab/perses/issues/379) open but hadn't prioritized it. We can make it high priority if you like!

username_0: I think we determined that this is not an issue at this point. Reopen if needed.

Status: Issue closed
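For readers following the protocol discussion: the "linear protocol" mentioned above would look like the following sketch (my illustration, not code from the thread; the keys mirror the `functions` dict in the reproducer, with everything set to `lambda`):

```python
# Linear alchemical protocol: every alchemical parameter tracks lambda directly.
linear_functions = {
    'lambda_sterics': 'lambda',
    'lambda_electrostatics': 'lambda',
    'lambda_bonds': 'lambda',
    'lambda_angles': 'lambda',
    'lambda_torsions': 'lambda',
}
```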
shxliang/chinese-sequence-labeler
360956960
Title: http://tcci.ccf.org.cn/conference/2016/papers/119.pdf is no longer accessible

Question: username_0: 7 Character-Based LSTM-CRF with Radical-Level Features for Chinese Named Entity Recognition. State-of-the-art model on the SIGHAN2006 NER task. http://tcci.ccf.org.cn/conference/2016/papers/119.pdf This link now returns a 404; I suggest removing it.
google/mediapipe
822265268
Title: Mediapipe Face detection in Python

Question: username_0: But when I try to import mediapipe using a .py file, I get the following error:

```
(pymediapipe) santhanalakshmi@santhanalakshmi:~/Documents/SG_pj$ python face_detection.py
Traceback (most recent call last):
  File "face_detection.py", line 2, in <module>
    import mediapipe
ModuleNotFoundError: No module named 'mediapipe'
```

NOTE: I have the mediapipe folder installed in /pymediapipe/lib/python3.7/site-packages/mediapipe/

What is the issue? Why am I getting the error? Please help to resolve the error.

Answers:

username_1: Do your `python` and `python3` point to the same thing? Note that you do `$ python face_detection.py` rather than `$ python3 face_detection.py`.

username_0: Thanks for the response. Yes: when I run with `python3 face_detection.py` it worked, but when I run with `python face_detection.py` it failed.

username_2: @username_0 Closing this issue as it was resolved in the above comments. Thanks!!

Status: Issue closed
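A generic way to check for the interpreter mismatch discussed above (standard shell and pip commands, not taken from the thread):

```
# Which binaries do `python` and `python3` resolve to?
which python python3

# Which interpreter actually runs the script?
python -c "import sys; print(sys.executable)"

# Install mediapipe into exactly the interpreter you invoke:
python -m pip install mediapipe
```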
hydroshare/hydroshare
413498798
Title: Getting rid of page refresh when updating Resource Landing page items

Question: username_0: In this issue we will document our plan to get rid of page refreshes when updating Resource Landing page items. We can also discuss the possibility of turning this form into a single page application with well known tools like Redux (https://redux.js.org/).

Answers:

username_0: @username_1, any general ideas on how to proceed on this? I think this page form will be a great candidate to turn into a SPA.

username_0: We will move this conversation to Slack and post our decisions back in this issue after. ![image](https://user-images.githubusercontent.com/2448568/53259819-52dcc700-368d-11e9-97c3-58cbd967b97d.png)

username_1: I would like to look into Vue.js together with you, @username_0. Let's talk soon.

username_2: @username_0 The direction you set is bright and promising. I am in. Thanks.

username_1: https://github.com/hydroshare/hydroshare/issues/3120

username_0: The plan to turn the Resource Landing page into a single page application (no unnecessary page refreshes) is to instantiate each component of the page as a Vue instance. While doing so, any server side responses will be changed to handle AJAX requests and return appropriate JSON data. We will list the components below so we can keep track of the work and overall progress.

- [x] Keywords (Completed)
- [ ] Resource Title (In progress @username_1)
- [ ] Manage Access window (In progress @username_0)

username_3: Stale and in large part completed, closing

Status: Issue closed
ferdikoomen/openapi-typescript-codegen
1125769720
Title: Incorrect complex query parameters

Question: username_0: Hey, I'm trying to generate a GET request with a complex query object. I've made a test route.

```
"/booking/test": {
  "get": {
    "operationId": "testing",
    "parameters": [
      {
        "name": "name",
        "in": "query",
        "required": true,
        "schema": { "type": "string" }
      },
      {
        "name": "employees",
        "in": "query",
        "required": true,
        "schema": {
          "type": "array",
          "items": { "$ref": "#/components/schemas/Person" }
        }
      }
    ],
    "responses": { "200": { "description": "" } },
    "tags": [ "booking" ]
  }
},
```

```
"Responsibility": {
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "priority": { "type": "number" },
    "manager": { "type": "string", "nullable": true }
  },
  "required": [ "name", "priority", "manager" ]
[Truncated]
  "location": [ "Las Vegas", "New York" ],
  "responsiblity": {
    "name": [ "Thing A", "Thing B" ],
    "priority": [ "1", "9001" ],
    "manager": "Tom"
  }
}
}
```

I'm not sure if this is a bug in this library, a limitation of query parameters, or incorrect parsing server-side. Please let me know what you think.

Answers:

username_1: @username_0 That is a good question. I had a similar discussion in this thread: https://github.com/username_1/openapi-typescript-codegen/issues/917 It seems there is no strict definition of how query params should be handled. However, in general practice (from a REST API design perspective) it's much better to keep query params as simple types and have complex properties as request bodies. This makes it much easier for other clients to interact with your REST API.

username_1: I'm closing this one; let's continue in https://github.com/username_1/openapi-typescript-codegen/issues/917

Status: Issue closed
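To illustrate the suggestion above, here is a hypothetical redesign of the test route (my sketch, not part of the thread): keep query parameters as simple types and move the complex array into a request body.

```yaml
# Sketch only: the complex `employees` array becomes a JSON request body.
post:
  operationId: testing
  parameters:
    - name: name
      in: query
      required: true
      schema:
        type: string
  requestBody:
    required: true
    content:
      application/json:
        schema:
          type: array
          items:
            $ref: '#/components/schemas/Person'
```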
fluttercandies/extended_text
832922244
Title: type 'List<InlineSpan?>' is not a subtype of type 'List<InlineSpan>?' in type cast

Question: username_0: Using Flutter with sound null safety is throwing this error:

```
══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
The following _CastError was thrown building ExtendedText-[GlobalKey#cba4d](null,
╞═╦══ textSpan ═══
║ TextSpan:
║   "Ola mundo!"
, debugLabel: (((blackMountainView bodyText1).apply).copyWith).copyWith, inherit: true, color: Color(0xff333333), family: Merriweather, size: 20.0, height: 1.5x, decoration: TextDecoration.none, textHeightBehavior: TextHeightBehavior(applyHeightToFirstAscent: true, applyHeightToLastDescent: true), dirty, dependencies: [DefaultTextStyle, MediaQuery]):
type 'List<InlineSpan?>' is not a subtype of type 'List<InlineSpan>?' in type cast
The relevant error-causing widget was:
  ExtendedText-[GlobalKey#cba4d] file:///home/username_0/Projects/biblebox/app/lib/widget/fragment_widget/fragment_widget.dart:128:27
When the exception was thrown, this was the stack:
#0 ExtendedText.build (package:extended_text/src/extended_text.dart:224:11)
```

The code is at: https://github.com/fluttercandies/extended_text/blob/575489c9aed1698d11f1ed398e3c4cb3d45d2ca9/lib/src/extended_text.dart#L223

It looks like it should be:

```
children: (textSpan != null ? <InlineSpan>[textSpan] : null) as List<InlineSpan>?,
```

Answers:

username_1: OK, I see it. I will fix it in the next version. Thanks for your report.

username_0: extended_text is awesome. Thank you!

username_1: Closing, if that's OK with you.

Status: Issue closed
uhop/node-re2
677700824
Title: Not able to import re2 in Angular project

Question: username_0:

### Issue

Cannot use `re2` in an Angular project.

---

### Steps Followed

- Create a new Angular project or use an existing project.
- Install re2: `npm i re2`
- Import it in a component or service: `import 're2';`

---

### Error

```
ERROR in ./node_modules/re2/re2.js
Module not found: Error: Can't resolve './build/Release/re2'
```

Answers:

username_1: `re2` is a Node binary extension written in C++. As such it cannot be used in browsers directly.

Status: Issue closed
TeamSQL/desktop-app
225684326
Title: No indication of wrong query syntax before execution Question: username_0: There's no indication that the query is wrong before executing. ![image](https://cloud.githubusercontent.com/assets/4396672/25618138/b4677a02-2f13-11e7-8e94-bb2d12861d02.png) Status: Issue closed Answers: username_1: Hi @username_0 , we're working on Syntax check and better code-completion. Thanks for the feedback.
DataDog/dd-trace-py
493180907
Title: Consider creating a changelog Question: username_0: Consider creating a changelog so that it would be possible to easily determine what are the most important changes made between the released versions. Answers: username_1: We could set up https://pypi.org/project/reno/ username_2: We do create release notes in https://github.com/DataDog/dd-trace-py/releases but adding a changelog to the repo could also help people find that information easier. username_1: It's also that using something like reno makes sure notes are part of the commits. username_3: Can you link to the releases page prominently at least? I don't think to check GitHub releases because very few Python projects use them for their release notes. Status: Issue closed
stasya72008/highway-to-hell
337428169
Title: Create method for formatting month in calendar

Question: username_0:

```
def month_format(days_in_month, first, prev_month_days):
    calendar = ''
    day = 1
    day_of_week = 1
    prev_month = prev_month_days - first + 1
    while first > day_of_week:
        calendar += '%d.' % prev_month
        prev_month += 1
        day_of_week += 1
    while day < days_in_month + 1:
        calendar += '%d.' % day
        day += 1
        if day_of_week % 7 == 0:
            calendar = calendar[:-1]
            calendar += ' | '
            day_of_week = 0
        day_of_week += 1
    day = 1
    while day_of_week - 1 < 7:
        calendar += '%d.' % day
        day += 1
        day_of_week += 1
    calendar = calendar[:-1]
    calendar += ' | '
    return calendar  # return the formatted month string
```

Status: Issue closed
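A quick usage sketch of the method above (the argument values are my own examples: a 30-day month whose first day falls on weekday 4, preceded by a 31-day month):

```
weeks = month_format(30, 4, 31)
print(weeks)  # day numbers, with ' | ' separating the weeks
```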
pingcap/tidb
766656989
Title: wrong result of sql `select * from t where (a = 2 or a = 2) and (a = 2 or a = 2)`

Question: username_0:

## Bug Report

Please answer these questions before submitting your issue. Thanks!

### 1. Minimal reproduce step (Required)

```
drop table if exists t;
create table t (a int, b int, index a_b(a, b));
insert into t values(2, 3);
select * from t where ((a = 2 or a = 2) and (a = 2 or a = 2));
```

### 2. What did you expect to see? (Required)

```
+------+------+
| a    | b    |
+------+------+
|    2 |    3 |
+------+------+
1 row in set (0.00 sec)
```

### 3. What did you see instead (Required)

```
Empty set (0.00 sec)
```

### 4. What is your TiDB version? (Required)

```
Release Version: v4.0.0-beta.2-1814-gab9cd019b
Edition: Community
Git Commit Hash: ab9cd019be921b4d7ece3cca883b7cac7796e314
Git Branch: master
UTC Build Time: 2020-12-14 15:47:20
GoVersion: go1.15.5
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
```

<!-- Paste the output of SELECT tidb_version() -->

Answers:

username_1: This slightly more common variant of the query also fails:

```sql
mysql> select * from t where ((a = 2 or a = 1) and (a = 3 or a=4));
Empty set (0.00 sec)
```

`EXPLAIN` shows this as a query against dual:

```sql
mysql> explain select * from t where ((a = 2 or a = 1) and (a = 3 or a=4));
+-------------+---------+------+---------------+---------------+
| id          | estRows | task | access object | operator info |
+-------------+---------+------+---------------+---------------+
| TableDual_5 | 0.00    | root |               | rows:0        |
+-------------+---------+------+---------------+---------------+
1 row in set (0.00 sec)
```

username_2: /assign

Status: Issue closed
ku-vertnet/kubi-paleo
53269211
Title: Monthly VertNet data use report for December, 2014, resource kubi-paleo Question: username_0: Your monthly VertNet data use report is ready! You can see the HTML rendered version of the reports through this link http://htmlpreview.github.io/?https://github.com/ku-vertnet/kubi-paleo/blob/master/reports/KU-kubi_vertpaleo_2015_01_02.html or you can see and download the raw report via GitHub as a text file (https://github.com/ku-vertnet/kubi-paleo/blob/master/reports/KU-kubi_vertpaleo_2015_01_02.txt) or HTML file (https://github.com/ku-vertnet/kubi-paleo/blob/master/reports/KU-kubi_vertpaleo_2015_01_02.html). To download the report, please log in to your GitHub account and view either the text or html document linked above. Next, click the "Raw" button to save the page. You can also right-click on "Raw" and use the "Save link as..." option. The txt file can be opened with any text editor. To correctly view the HTML file, you will need to open it with a web browser. You can find more information on the reporting system, along with an explanation of each metric, here: http://www.vertnet.org/resources/usagereportingguide.html Please post any comments or questions to http://www.vertnet.org/feedback/contact.html Thank you for being a part of VertNet.
hathach/tinyusb
494028143
Title: Create usbd_open_ep() function

Question: username_0: Some class drivers directly call `dcd_edpt_open`, which seems to be the only spot where a class driver directly calls a driver function. The usbd module does have an open-EP function, but it only opens pairs, not single EPs. To this end, can a `usbd_edpt_open()` function be defined (probably just a simple wrapper), so that we can remove uses of `dcd_*` functions from class drivers? Also note that the classes are including the "device/usbd_pvt.h" header.

The changes requested:

[ ] Create `usbd_open_ep(...)`
[ ] Remove uses of the dcd functions in class drivers
[ ] Remove inclusion of the usbd_pvt header in class drivers

(For the moment, my USBTMC class is also including the pvt header file; I'm not working on implementing these changes, as it is usable at the moment.)

Answers:

username_1: Yeah, it is on my plan. usbd could do even more than that. I just need to find more time 😃😃

username_0: Yeah, this is not high priority, it just makes me feel dirty. I might do a quick, not-so-intellectual fix...

username_1: It is not too intellectual, though you probably need to spend more time troubleshooting bugs within the stack to be aware of edge cases :)

username_0: I started looking at this, and realized that they can just call `usbd_open_edpt_pair` with `count=1`. Do they do that, or do we make a new function to open a single edpt, and make the open_pair function always open two (as its name implies)?

username_1: implemented by #379 :smiley:

Status: Issue closed
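For illustration, the requested wrapper could be as small as the following sketch (my code, not from the tinyusb tree; where it lives and what it validates is up to the maintainers):

```c
#include "common/tusb_common.h"
#include "device/dcd.h"

// Thin usbd-level wrapper so class drivers stop calling dcd_* directly;
// it simply forwards to the existing device-controller entry point.
bool usbd_edpt_open(uint8_t rhport, tusb_desc_endpoint_t const *desc_ep)
{
  return dcd_edpt_open(rhport, desc_ep);
}
```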
ibmdb/python-ibmdb
61989359
Title: ibm_db_django 1.0.7 does not work on django version 1.5 and below Question: username_0: ``` What steps will reproduce the problem? 1. Install ibm_db_django 1.0.7 2. Modify settings.py to point towards an ibm_db_django instance 3. Execute random query What is the expected output? What do you see instead? Expected output: Query being successfully executed. Actual output: Error message 'ImproperlyConfigured' in regards to the database backend. Resolution: After digging into this for a little bit, the FieldInfo import needs to be guarded against the wrong version. Patch will follow shortly. ``` Original issue reported on code.google.com by `<EMAIL>` on 9 Dec 2014 at 2:25 Answers: username_1: Django 1.7 and up are now required. Status: Issue closed
cryptexfinance/community-coordination
920892834
Title: Request for Author - Our Network Guest Post Question: username_0: If you aren't familiar with it, here is the Our Network newsletter: https://ournetwork.substack.com/ Here is some feedback from Spencer (creator of Our Network) about the style of his newsletter and the interests of his audience - write about network health and key KPIs. Introduce what Cryptex does, focus on the DAO, TCAP, LPs, how the stakeholder ecosystem is evolving. Answers: username_0: Content was completed, now getting it over to Spencer to add it to an upcoming Our Network post. username_0: haven't made good progress with this, putting as low priority for now.
riesgos/dlr-riesgos-frontend
1164904927
Title: New schema Medina: adjust UI to fit multiple possible schemata

Question: username_0:
- Popups: don't group by material; there is no material information in the class name.
- Legend: currently shows D5 & D6, which don't exist in Medina. Same thing with transitions.

Answers:

username_0: Awaiting some additional data: Deus needs taxonomy data in the mapping file so that Medina can also be used in the Chile scenario.
dotnet/roslyn-analyzers
118225793
Title: Port FxCop rule CA3058: DoNotUseSetInnerXml Question: username_0: **Title:** Do not use SetInnerXml **Description:** Do not use the unsafe setter of InnerXml property of System.Xml.XmlDocument/XmlDataDocument. This API internally enables DTD processing on the XML reader instance used, and uses UrlResolver for resolving external XML entities. The outcome is information disclosure. Content from file system or network shares for the machine processing the XML can be exposed to attacker. In addition, an attacker can use this as a DoS vector. **Proposed analyzer:** System.Xml **Notes:** Status: Issue closed Answers: username_0: Incorporated into [CA3075](https://github.com/dotnet/roslyn-analyzers/issues/758).
JKorf/Binance.Net
726547219
Title: BinanceStreamKlineData missing Interval

Question: username_0: Hi, I just got back to coding my app after over a year away and am trying to get it running with the new updates. Spot.MarketStream.BinanceStreamKlineData, which used to be BinanceStreamKlineData, used to have an Interval indicating the KlineInterval (xx.Data.Interval), but this is no longer there?? Sorry for the question.

Status: Issue closed

Answers:

username_1: Hi, it seems like it was missing in the interface. It should be fixed in version 6.3.1, which I just pushed.
XX-net/XX-Net
287414222
Title: X-Tunnel is very hard to connect to, keeps dropping, and lags badly. I just purchased a new X-Tunnel period yesterday, and today my account shows this. Please help, thank you.

Question: username_0: X-Tunnel is very hard to connect to, keeps dropping, and lags badly. I just purchased a new X-Tunnel period yesterday, and today my account shows this. Please help, thank you. ![image](https://user-images.githubusercontent.com/21957820/34772476-f7d42c58-f642-11e7-9078-4edd5c426cdc.png) ![image](https://user-images.githubusercontent.com/21957820/34772498-0cc8a062-f643-11e7-85ce-f9f804bc3444.png) ![image](https://user-images.githubusercontent.com/21957820/34772520-257f1c8a-f643-11e7-825e-b29ed5715cf7.png)

Answers:

username_1: Admins, please also answer my issue: https://github.com/XX-net/XX-Net/issues/9404#issuecomment-356277614 My X-Tunnel situation is very similar.

username_2: My situation is exactly the same too.
prometheus/node_exporter
525841289
Title: Add machine_id value information to collectors

Question: username_0: Hi guys, before opening a PR for this, I would like to ask whether you agree to introduce the `machine-id` value information into node_exporter, because we recently had a major outage regarding this. The weave-net CNI implementation, one of the most commonly used in Kubernetes, uses this UUID to reach quorum; if the machine-id is the same across nodes (due to a misconfiguration, a clone, or whatever) this ends up causing issues: https://github.com/weaveworks/weave/issues/2767 With the value of each machine-id exposed, it is pretty simple to write a check that ensures each machine-id in the cluster is unique, and so avoid further major issues. I thought that this could be the best place to gather this kind of information, so I am asking whether it is acceptable to add this kind of collector.

Answers:

username_1: To be more precise, machine-id is also used by Calico (we had problems with it as well).

username_2: What is "machine-id"? Where does it come from?

username_3: source: https://www.freedesktop.org/software/systemd/man/machine-id.html

username_4: This would essentially just return the content of the file as a label; I don't think it's worth adding a collector for that unless it's available in the state that the systemd collector already reads. I'd suggest you use the textfile collector, basically just writing a file:

```
systemd_machine_id{id="abc"} 1
```

..to your textfile collector dir.

username_0: @username_4, yes, that could also be a solution, although it requires deploying a file on the hosts as well; on the other hand, a new collector gives the value out of the box without adding anything on the host. I understand your point of view, though: a new collector could be a duplication of something that can be achieved with an existing one. Thanks for your tip, I will try it soon.

username_2: Thanks for the link to the docs. So this is a systemd-specific feature. I would suggest that if we did want to add this it would go in the systemd collector module. I wonder if we can read this information over systemd dbus.

username_0: @username_2 By running `systemd-machine-id-setup --print` you get the currently configured machine-id.

username_2: @username_0 That is a binary, not dbus. The binary only appears to interact with the `/etc/machine-id` file. We do not allow forking processes in the exporter.

username_5: I have it on my Gentoo system, it's timestamped from 2013, and I have never had systemd.. :) So it rather seems to be a part of dbus, simply hijacked by systemd later on.

username_4: I think for people who need this it should be easy enough to run `echo "systemd_machine_id{id=\"$(cat /etc/machine-id)\"} 1" > /path/to/textfile/foo.prom` somewhere.
intel-ctrlsys/actsys
245146386
Title: [actsys-1283] [Control Timeout] Timeout cmd line option will not work if provided at the end

Question: username_0: [reporter="anpatel1", created="Thu, 1 Jun 2017 16:09:01 -0700", resolved="Fri, 23 Jun 2017 15:36:28 -0700"]

The current implementation will not allow users to put the timeout cmd line option at the end, as shown below:

$ctrl power on <device-name> -t 10

Rather, the cmd line expects it to be right after the 'ctrl' keyword. Ex:

$ctrl -t 10 power on <device-name>

For a better user experience we should allow the user to put it at the end also.

Status: Issue closed

Answers:

username_0: [author="kewang", created="Fri, 23 Jun 2017 15:35:53 -0700"]

python argparse parses the input list (sys.argv[1:]) in order, matching the strings with the Actions (add_argument objects). So if the command is

python foo.py --arg1=3 cmd --arg2=4

it tries to handle '--arg1', then 'cmd'. If 'cmd' matches a subparser name, it then delegates the parsing to that parser, giving the remaining strings to it. If the cmd subparser cannot handle --arg2, it returns that as an unrecognized argument.

The main parser does not resume parsing. Rather, it just handles the unrecognized arguments as it normally would: raising an error if using parse_args, and returning them in the extras list if using parse_known_args.

So if you want to put -t at the end, you have to define it as a subparser argument, or do some further parsing after parse_known_args. This will end up with complicated parser logic.
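The behavior kewang describes can be reproduced with a few lines of standalone argparse code (a hypothetical `ctrl`-like parser, not the actsys source):

```python
import argparse

parser = argparse.ArgumentParser(prog="ctrl")
parser.add_argument("-t", "--timeout", type=int)

subparsers = parser.add_subparsers(dest="command")
power = subparsers.add_parser("power")
power.add_argument("action", choices=["on", "off"])
power.add_argument("device")
# Defining -t on the subparser as well lets it appear at the end; SUPPRESS
# keeps the subparser from overwriting a value set on the main parser.
power.add_argument("-t", "--timeout", type=int, default=argparse.SUPPRESS)

print(parser.parse_args(["-t", "10", "power", "on", "node1"]))  # leading -t
print(parser.parse_args(["power", "on", "node1", "-t", "10"]))  # trailing -t
```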
phev-remote/phevctl
778213069
Title: MY18 unable to register

Question: username_0: I am unable to complete the register step on my MY18. phevctl does connect to the car, as it is able to fetch the VIN and other data, but then nothing happens. I tried setting the msg-core/logger.h log level to debug, but it seems I can't get debug output from phevcore. Here's what I have: [out.log](https://github.com/phev-remote/phevctl/files/5765721/out.log)

Answers:

username_1: Hi @username_0, in case it is of any help: I seem to have the same problem registering my 2014 car, but I've found that simply spoofing the MAC address of my already-registered phone was enough to let me use phevctl (no need to register in this case).
pygeos/pygeos
568101972
Title: Distribute wheels on PyPI

Question: username_0: Maybe we could use this: https://gertjanvandenburg.com/blog/wheels/

Answers:

username_1: Windows is the tricky one. Over at Shapely I recently reworked an appveyor setup to compile/cache GEOS, then build and check the .whl files, which can be uploaded to PyPI. I'm not sure if it's the best solution, but it's what I was able to make work, as there is a big demand to install shapely on Windows via pip. If you're interested, I can rework a similar Windows setup for pygeos. Sean also has [shapely-wheels](https://github.com/shapely/shapely-wheels) based on [multibuild](https://github.com/matthew-brett/multibuild) for Linux and MacOS (I think?). Perhaps a workflow could be gathered into a new https://github.com/pygeos/pygeos-wheels repo.

username_0: Thanks for the offer on the appveyor part. I was thinking of just getting the GEOS binaries from https://trac.osgeo.org/osgeo4w/. Out of curiosity, why did you choose to compile it yourself? Anyway, I'd be happy if you want to port your Windows build recipe to pygeos 👍

username_1: I chose to compile GEOS within Appveyor to have full control of the version of GEOS, and which compiler is used. For instance, CPython versions 3.5, 3.6, 3.7, 3.8 were compiled with Visual C++ 14.X ([see ref](https://wiki.python.org/moin/WindowsCompilers)), so I get to ensure that the same tools are used to compile objects intended to work with CPython. This is a bigger deal if/when a different version of Python is released that depends on a different Visual C++ version. I suppose the other options are to use an upstream precompiled package, like vcpkg's [geos port](https://github.com/microsoft/vcpkg/tree/master/ports/geos), and conda-forge's [geos](github.com/conda-forge/geos-feedstock). Perhaps these could work, I haven't checked. I'm always open to changing the workflow, however.

username_0: I see. I think we should minimize ABI incompatibilities with the shapely wheels, so I am :+1: on following the same approach.

username_0: A work in progress on the manylinux part (based on https://github.com/shapely/shapely-wheels):
- https://github.com/pygeos/pygeos-wheels
- https://travis-ci.com/pygeos/pygeos-wheels

They still fail with `ImportError: No module named 'pygeos.lib'`. If anybody has an idea why it is not working, please tell me :) I will try to find some time in the coming weeks to debug.

username_0: An update: the multibuild now passes (although the bug in #100 is blocking sometimes). Now I need a place to put the artefacts. I was thinking of making a 'production' branch (or 'pypi') and letting Travis send the wheels to PyPI directly if the build is on that branch. Then (in the future) we will just need to bump the version number, make a PR to that branch, merge, and the CIs will publish the wheels automatically. More or less like it's done on conda-forge.

Status: Issue closed

username_0: Closing this, thanks again @username_1 for your contribution on the windows side
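For the branch-gated publishing idea, a Travis CI config fragment could look like this sketch (hypothetical; the branch name and token handling are my assumptions, not something decided in the thread):

```yaml
# .travis.yml fragment: upload built wheels to PyPI only from the 'pypi' branch.
deploy:
  provider: pypi
  user: __token__
  password:
    secure: <encrypted PyPI token>
  skip_cleanup: true
  on:
    branch: pypi
```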
rancher/rancher
292031282
Title: Remember username, default to generating a random password on first login Question: username_0: - Remember username on login: ![image](https://user-images.githubusercontent.com/753917/35460468-ff43d990-0298-11e8-8d6e-af4709a8c948.png) - Default to generating random password: ![image](https://user-images.githubusercontent.com/753917/35460563-55d2c1b8-0299-11e8-8bcd-6fc949db010c.png) Separately, the backend will require you to change the password to something not the same (again) soon, and will provide a way for the UI to know that this is a new install so that it can bring you straight to setting the password. Status: Issue closed Answers: username_1: Version - 2.0 master 1/29 Verified fixed
pola-rs/polars
947207663
Title: Provide an interface for R language

Question: username_0: Hello, have you considered providing an interface for the R language? R has packages for calling Rust. I don't know the difficulty level. If you want, I hope we can work together or find someone who is very good at R to complete this task. This would be a great piece of work, and I have observed that polars is faster than data.table. Thanks.

Answers:

username_1: @username_2 & @jeroen might be able to leave some suggestions

Status: Issue closed

username_0: Sorry, I clicked close by mistake. Of course, they are all outstanding contributors to the R language, and extendr provides a good way for R to call Rust. It would be great if they were willing to join.

username_2: This looks like an interesting project. I made a fairly large contribution to Arrow a couple of years ago, as we were using it for bioinformatics data at work. It should be fairly easy to generate bindings using **extendr**, and you would be welcome to raise an issue and get some help from the team. I'll try to look over your project in the next few days.

username_3: Has any progress on this topic been made? I am a full-time R developer with an interest in high-performance data frame libraries, as well as an inkling to learn Rust. If someone has already started an R project, I would love to contribute.

username_0: Because of my lack of programming skills and busy work, I did not start this project.

username_2: We (the extendr team) would be happy to help interface anything to R. https://github.com/extendr/extendr

username_4: Happy to help with this as well. I have _some_ but not a lot of Rust experience, and a fair amount of R programming background. One project to look to for inspiration would be https://dtplyr.tidyverse.org, since it would be cool to keep as much of the dplyr syntax as possible with a faster polars backend (eventually).
gabrielpacheco23/google-translator
996954367
Title: Instance of '_Future<dynamic>' instead of text

Question: username_0:

```
Future<dynamic> translate(String input) async {
  final translator = GoogleTranslator();
  var result;
  var translation = await translator
      .translate(input, to: 'tr')
      .then((value) => {result = value});
  return result;
}
```

This gives me Instance of '_Future<dynamic>', but I want to get the translated text.
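For reference, a corrected sketch of the snippet above (my own fix; the import path and the `.text` field on the returned object are assumptions about the package's API): awaiting the future directly avoids mixing `await` with `.then`, which is what left the `Future` unresolved.

```
import 'package:translator/translator.dart';

Future<String> translate(String input) async {
  final translator = GoogleTranslator();
  // Await the Future directly instead of combining await with .then().
  final translation = await translator.translate(input, to: 'tr');
  return translation.text; // the translated text (assumed field)
}
```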
emailjs/emailjs-imap-client
194380663
Title: Disable logger

Question: username_0: Correct me if I'm wrong, but there is no way to disable the logger? I don't want to log every action on my production environment ;)

Answers:

username_1: You can set the logger to LOG_LEVEL_NONE: https://github.com/emailjs/emailjs-imap-client/blob/master/src/emailjs-imap-client.js#L1829

username_1: Or you could create a no-op logger. A PR would be welcome :)

username_0: Due to the fact that the log level is hardcoded to `LOG_LEVEL_ALL` in https://github.com/emailjs/emailjs-imap-client/blob/master/src/emailjs-imap-client.js#L69, I will create a PR for that. Give me some days ;)

username_1: But why not just do `imap.logLevel = LOG_LEVEL_NONE`?

username_0: Thanks! That works as expected.

Status: Issue closed
microsoft/snmalloc
476490252
Title: Lazy TLS initialization breaks stats counting Question: username_0: Bisection reveals that b8a5d7fca96fd442eb74df8db59666bd1704696c breaks the idea that stat counters don't go negative. In particular, adding `assert(sizeclass[sc].count.current > 0);` to `sizeclass_dealloc` (and applying the fix for #79) will cause at least `func-malloc` to fail on that commit and after. The problem is that `small_alloc`'s call to `stats().sizeclass_alloc` happen before the check that we're on the fast path, and so apply to the stub global allocator's stat structure. This does not appear to be a problem in `medium_alloc` or `large_alloc`, as the replacement path will have tail-called before attempting to update the stub's stats structure. Answers: username_1: Thanks for reporting this. I see you have a commit in your fork to fix this. Have you managed to get the University to approve the CLA? If so, can you PR it in? Status: Issue closed username_1: Resolved by #84
rrousselGit/provider
554095822
Title: Possible memory leak on StreamProvider

Question: username_0: I'm using Provider with MultiProvider on basically all my views. I use `StreamProvider`, which returns a List of Models (class instances) that are created while listening to a `Firebase` subscription (deserialization). I've noticed a steady memory increase while navigating to different routes. It turns out that those Models are still in the heap (according to DevTools). Even when forcing GC, they are not removed from the heap. As far as I can tell, those Models are not used by any widget after navigating to a different view; `maintainState` is also set to `false`. What am I missing here? Shouldn't StreamProvider dispose of all values emitted by the stream?

Answers:

username_1: It's unlikely to be StreamProvider; more likely your stream is the issue.

username_0: Not yet 100% sure, but maybe I've discovered the issue... In the same MultiProvider I have a `StreamProvider` (multiple providers, in fact) which returns a stream (BehaviorSubject) from a class instance (let's call it `ProjectViewModel`, which is, in fact, a `ChangeNotifier` class). In the same MultiProvider there is also a `ChangeNotifierProvider` which provides this whole class (`ProjectViewModel`). Inside `ProjectViewModel` I manually cancel subscriptions and close all streams when `ProjectViewModel` is disposed of, but somehow all objects that were emitted on the stream are still present in the heap. When I remove the `ChangeNotifierProvider` from the MultiProvider, this issue disappears (objects are removed from memory). Maybe I'm abusing ChangeNotifierProvider to pass around / provide the class instance?

username_1: Do you mind making a code snippet to reproduce your issue? A description is too abstract to understand what is happening.

username_0: After extensive investigation, I came to the conclusion that the issue is not related directly to StreamProvider. It was a case of streams and database connections (SQLite) inside the ViewModel class that were never completed/closed, so there was still an existing reference to the emitted instances.

Status: Issue closed
bmorris3/aesop
344988448
Title: Review - License doesn't match Question: username_0: The license on the repo is MIT; the review paper for JOSS has CC-BY (probably the template default?) Answers: username_1: I'm a bit stumped by this one. Hey @jakevdp/@username_2 – should the JOSS submission license match the software license? username_2: JOSS _papers_ are CC-BY, the license choice of the software (MIT in this case) is picked by the author and respected. username_1: Thanks for the clarification @username_2! Status: Issue closed
shownb/shownb.github.com
596129553
Title: Pitfall 2: Mechanical keyboards Question: username_0: Quoted from https://www.showdoc.cc/TuKeyboard?page_id=3846046570596123

### Choosing a size
First, pick the keyboard size you prefer based on your own needs. There are 20%, 40%, 60%, 80% and 100% sizes.
If you depend on the numeric key area, e.g. finance staff, get a 20% keyboard.
If you like small keyboards and care about footprint and portability, get a 40% or 60% keyboard.
If you like big keyboards, get an 87% or 100% keyboard.

### Choosing a layout
1. **40% layouts**

Standard layout
![92540662ly1gcysg341kvj20io0683ya](https://user-images.githubusercontent.com/2890741/78715746-2251fe00-791e-11ea-8462-94f5061adbe7.jpg)

Planck layout
![92540662ly1gd0ejhz083j20io068jr5](https://user-images.githubusercontent.com/2890741/78715787-2ed65680-791e-11ea-9a05-c29058963ec5.jpg)

2. **60% layouts**

POKER layout
![92540662ly1gczxmszlzvj20ic064we9](https://user-images.githubusercontent.com/2890741/78715931-66450300-791e-11ea-95c4-0515c4abea04.jpg)

Answers: username_0: Common basic BLE concepts

- Central: responsible for scanning, discovering advertising Peripherals, and initiating connections to them. For example a phone or a tablet.
- Peripheral: can advertise, and can be discovered and connected by a Central. For example a heart-rate monitor or a watch; in our case, the keyboard our ESP32 emulates.
- Service: describes which services a Peripheral offers. A heart-rate monitor Peripheral has the Heart Rate Service, a glucose meter has the Glucose Service. One Peripheral can expose several different Services.
- Characteristic: a finer-grained service unit and the carrier of the actual data exchange; a Service contains Characteristics. For example, the Heart Rate Service has three Characteristics: Heart Rate Control Point, Body Sensor Location, and Heart Rate Measurement. Reading Body Sensor Location tells you where the sensor is placed; heart-rate values are pushed out as notifications on Heart Rate Measurement; to control the monitor, you write to Heart Rate Control Point.
- Characteristic Property: different Characteristics support different properties, such as read, write, and notify. For example Heart Rate Control Point is writable, Body Sensor Location is readable, and Heart Rate Measurement supports notifications.

username_0:

```json
[
  { "pcb": false },
  [ "Num Lock", "/", "*", "-" ],
  [ "7", { "a": 5 }, "8", { "a": 4 }, "9", "+" ],
  [ "4", "5", "6", "+" ],
  [ "1", "2", "3", "Enter" ],
  [ "0", "0", ".", "Enter" ]
]
```

username_0: 05030201010200086f6b00000000000000000000 05030202000000000000000000 username_0: 05030201010200086f6b00000000000000000000
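The BLE glossary above is the groundwork for making the ESP32 show up as a keyboard: the ESP32 is the Peripheral advertising the standard HID service, and the phone or PC is the Central. A minimal Arduino sketch, assuming the community ESP32-BLE-Keyboard library (an assumption; any HID-over-GATT implementation follows the same pattern):

```cpp
#include <BleKeyboard.h>

// Advertised device name; the HID Service and its Characteristics
// (input/output report Characteristics) are set up by the library.
BleKeyboard bleKeyboard("DIY 60% board");

void setup() {
  bleKeyboard.begin();  // start advertising as a BLE HID Peripheral
}

void loop() {
  if (bleKeyboard.isConnected()) {  // a Central has connected and subscribed
    bleKeyboard.print("ok");        // send key reports, like the captures above
    delay(5000);
  }
}
```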
sigurdsvela/wp-require
110797290
Title: Version Class: Allow separator between modifier and modifier-number Question: username_0: As of now, the version class only handles versions with modifiers that are structured like `1.0.0-b1`, with nothing between the `b` and the `1`. The class should also allow a `.` between them, as in `1.0.0-b.1`, since that form is used by many projects.
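A hypothetical sketch of what the relaxed parsing could look like (this is not the project's actual parser; the function name and return shape are made up for illustration). The only change needed is making the separator optional in the pattern:

```php
<?php
// Accepts both "1.0.0-b1" and "1.0.0-b.1" by treating the "." between
// modifier and modifier-number as optional.
function parseVersion(string $version): ?array
{
    if (!preg_match('/^(\d+)\.(\d+)\.(\d+)(?:-([a-z]+)\.?(\d+))?$/i', $version, $m)) {
        return null;
    }
    return [
        'major'          => (int) $m[1],
        'minor'          => (int) $m[2],
        'patch'          => (int) $m[3],
        'modifier'       => $m[4] ?? null,
        'modifierNumber' => isset($m[5]) ? (int) $m[5] : null,
    ];
}

var_dump(parseVersion('1.0.0-b1') == parseVersion('1.0.0-b.1')); // bool(true)
```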
Saul3d/ComicReadingList
479239662
Title: Infinite Scrolling Question: username_0: ### User Story When the user scrolls down using the scroll wheel or scrollbar, they should see more comics. ### AC WHEN the user scrolls down the page THEN more data (comics) should be loaded onto the page. ### Developer Notes - update the `limit`/`offset` state when more data is loaded - `limit`, `offset`, and `dateRange` should be passed to the `getComics` function - call the `getComics` function on scroll when `document.scrollTop >= document.height` (pseudocode; see the sketch below)
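A minimal browser-side sketch of the scroll check from the developer notes. `getComics`, `limit`, `offset`, and `dateRange` are the hypothetical names used above; the real signature may differ.

```js
let limit = 20;
let offset = 0;
let loading = false;

window.addEventListener('scroll', () => {
  // true when the viewport is within 200px of the bottom of the page
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;

  if (nearBottom && !loading) {
    loading = true;
    offset += limit;
    // assumed to return a Promise that appends comics to the page
    getComics(limit, offset, dateRange).finally(() => { loading = false; });
  }
});
```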
matiaskorhonen/nordea
67363821
Title: Wrong currencies Question: username_0: Hi, some currencies are not correctly calculated. As far as I've tested converting a currency (EUR or USD) to Belarusian ruble (BYR) and Vietnamese dong (VND) adds extra digits to the integer part. ```ruby Money.new(100, "USD").exchange_to('USD').to_f # => 1.0 OK Money.new(100, "USD").exchange_to('EUR').to_f # => 0.98 OK Money.new(100, "USD").exchange_to('BYR').to_f # => 1427345.0 but should be 14273.45 Money.new(100, "USD").exchange_to('VND').to_f # => 2158000.0 but should be 21580.0 ``` Nevertheless, the reverse operation gives a correct exchange: ```ruby Money.new(1427345.0, "BYR").exchange_to('USD').to_f # => 1.0 Money.new(2158000.0, "VND").exchange_to('USD').to_f # => 1.0 ``` Answers: username_1: :+1: same problem
apache/camel-kafka-connector
713442450
Title: [camel-main-support] Reordering props should not be required anymore Question: username_0: Back in the early days, it was required to re-order the properties to have `#class:` be resolved before trying to perform bean binding but it should not be required any more thus [this](https://github.com/apache/camel-kafka-connector/blob/f66a9211d6a31deae9e5d26a944f7262ceb872cc/core/src/main/java/org/apache/camel/kafkaconnector/utils/CamelMainSupport.java#L79-L86) can be removed<issue_closed> Status: Issue closed
GothamElections2017/RandomThoughts
412187548
Title: Republic of Slovenia : Selected Issues https://t.co/nOYIy5qwO1 Question: username_0: Tweet by <NAME> (@ge_ReedRichards), February 20, 2019:

"Republic of Slovenia : Selected Issues https://t.co/nOYIy5qwO1"

February 19, 2019 at 04:25PM via Twitter
the-infocom-files/suspect
570522290
Title: "LOOK DOWN HALLWAY" doesn't work as intended Question: username_0: Michael is off to the east. Ostmann is off to the north. There's no one there. Linda, off to the south, disappears from sight to the east. ``` It showed Michael and Ostmann, who were standing still, but not Linda, who was moving. And it still printed "There's no one there." I guess for this to work ```CORRIDOR-LOOK``` needs another optional parameter telling it that you're looking for everyone, and that you want to see them even if they're moving. It then needs to return ```T``` if it does find someone.
airbnb/knowledge-repo
188019111
Title: semantic possibilities Question: username_0: Auto-reviewers: @NiharikaRay @username_1 @earthmancash @danfrankj Just wanted to drop an idea: since this is all about knowledge, it would be nice to have semantic markup and a way to query it afterwards (e.g. using SPARQL). I was just thinking of something like a small Wikidata: https://www.wikidata.org/wiki/Wikidata:Introduction In theory you could also connect the knowledge in the repo to the knowledge of Wikidata, etc. Answers: username_1: This sounds like a really cool idea; but at this stage I have no idea what that kind of integration would look like (and we have other more pressing things to work on within the knowledge repository). Feel free to continue fleshing out how this might look, and we can see where we can go with it. I'm going to allocate it to the "Post 1.0.0" milestone for now, though. Thanks for helping to make the Knowledge Repository awesome!
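To make the idea concrete, here is a toy SPARQL query of the kind such markup could enable. The `:mentions` predicate is purely hypothetical; the `wdt:P31 wd:Q9143` pattern ("instance of: programming language") is standard Wikidata vocabulary:

```sparql
# "Which posts mention a programming language, and which one?"
SELECT ?post ?langLabel WHERE {
  ?post :mentions ?lang .          # hypothetical repo-level predicate
  ?lang wdt:P31 wd:Q9143 .         # instance of: programming language
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
```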
godotengine/godot
621141563
Title: Duplicating a custom VisualScript node makes an instance/duplicate of its script Question: username_0: **Godot version:** Godot 3.2.1 **OS/device including version:** Windows 10 64bit 1909 **Issue description:** When duplicating a custom VS node, Godot makes an instance of it instead of linking it to the original script. **Steps to reproduce:** 1. Make a custom node in any VisualScript and link it with a custom VS script written in GDScript. 2. Press Ctrl+D to duplicate the node. 3. Check the script of the new node: it will be NAME.vs::NUMBER when it should be ScriptName.gd. This means that when the main script is changed, the new instance node will not be affected. Here are some images: https://i.ibb.co/rm4nRxF/Annotation-2020-05-19-175752.png https://i.ibb.co/KWrMCP1/Annotation-20d20-05-19-175752.png By duplicating the script in the first image we get a new instance or duplicate of the script, not a link to the same one. Hope this helps with addressing the bug. Thanks Answers: username_1: Spooky ghosts. Will archive. Feel free to ask to reopen. Status: Issue closed
laravel/nova-issues
408522319
Title: Question: How to remove the '/nova' URL prefix Question: username_0: Hi, I just bought Nova and started customizing it for my requirements. I have an admin application that I would like to expose as the default application at a certain domain, e.g. myapp-admin.com. Is it possible to serve Nova as the default application at the domain, i.e. without the /nova prefix? When I browse to myapp-admin.com I want to see the Nova login screen. What I've tried so far: I've tried adding a `domain` property in nova/config.php and setting the path to '/' but it doesn't seem to work. Any ideas/suggestions would be appreciated. Answers: username_1: There's an entry in the `app/config/nova.php` file called `path` which you can change to update the URI path that Nova is accessible from. It defaults to `'path' => '/nova'`, but if you change it to `'path' => '/'`, that should get you what you're looking for. Status: Issue closed username_3: Hi, Laravel version = 5.8.12 Nova version = 2.0.0 I changed to 'path' => '/' but the path stays in the app at '/nova' and gives a 404 Not Found error when trying to use the path '/'. Anything else to change apart from the app/config/nova.php file? Any ideas/suggestions would also be appreciated. username_4: I fixed this problem by checking the REQUEST_URI. If the URI has the word '/nova' in it, it will skip the route. ![image](https://user-images.githubusercontent.com/746279/58271679-e016e100-7d8c-11e9-8d2b-00cecacffa83.png) username_3: Thanks @username_4 ! :-) It works! NB1: it has to be added in routes/web.php. NB2: Here is the code to copy/paste:

```
if ($_SERVER['REQUEST_URI'] != '/nova') {
    Route::get('/{page}', function($page) {
        return view("pages.{$page}");
    });
}
```

username_5: Here is an alternative that I use in this situation:

```
if ( ! Request::is('admin*')) {
    Route::get('/{page}', 'PageController')->name('pages.show');
}
```

username_3: Thanks @username_5 , but when I try your solution, I receive an error at login and at logout: UnexpectedValueException Invalid route action: [App\Http\Controllers\PageController]. But if I log in using @username_4's solution and switch to yours afterwards, navigation inside the Nova site works. username_3: Some feedback @username_4 : I have to comment out your solution before each migration (and uncomment it after) or I receive this error: For example: ➜ xxxxx git:(master) ✗ php artisan make:migration:pivot attachments files In web.php line 24: Undefined index: REQUEST_URI ➜ xxxxx git:(master) username_4: ![image](https://user-images.githubusercontent.com/746279/62138942-ce037280-b2e8-11e9-83d1-01b41d1b38fc.png) This is a better way to solve the issue, by fetching the Request object. There is some code in the middle which you can ignore. username_3: Thanks @username_4 ! It works now smoothly.

```
if (!\Request::is('nova')) {
    Route::get('{criteria}', function ($criteria) {
        $page = \App\Page::where('slug', $criteria)->first();
        if (!$page) return abort(404);
        return view('page.show', ['page' => $page]);
    });
}
```

username_6: After changing the path in `app/config/nova.php`, I found that simply running the following solved my issues without needing to add any code like the above examples.

```
php artisan config:cache
php artisan route:cache
```
openshift/origin
249573014
Title: k8s_storage containers is exited Question: username_0: [provide a description of the issue] ##### Version ``` CentOS:CentOS Linux release 7.3.1611 (Core) DOCKER:docker-1.12.6-32.git88a4867.el7.centos.x86_64 [root@master master]# oc version oc v3.6.0+c4dd4cf kubernetes v1.6.1+5115d708d7 features: Basic-Auth GSSAPI Kerberos SPNEGO Server https://127.0.0.1:8443 openshift v3.6.0+c4dd4cf kubernetes v1.6.1+5115d708d7 ``` ##### Steps To Reproduce ``` [root@master master]# oc cluster up Starting OpenShift using openshift/origin:v3.6.0 ... OpenShift server started. The server is accessible via web console at: https://127.0.0.1:8443 You are logged in as: User: developer Password: <any value> To login as administrator: oc login -u system:admin ``` ##### Current Result the storage container is exited: ``` [root@master log]# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 628cf0ad97b9 docker.io/openshift/origin-docker-registry@sha256:541135fa70edb3f6a0020c197e9bd670bcb0dd019496a9c12f3e136b36d29041 "/bin/sh -c '/usr/bin" About a minute ago Up About a minute k8s_registry_docker-registry-1-kjhl8_default_f9be566f-7e70-11e7-b037-000c29e466a1_0 e14d8fd3ca60 openshift/origin-pod:v3.6.0 "/usr/bin/pod" About a minute ago Up About a minute k8s_POD_docker-registry-1-kjhl8_default_f9be566f-7e70-11e7-b037-000c29e466a1_0 4d123d01a417 docker.io/openshift/origin-haproxy-router@sha256:46746338b5964097b659c9ddff33705b3212815eb7bb5ccde72bce790c2deb63 "/usr/bin/openshift-r" About a minute ago Up About a minute k8s_router_router-1-5fthp_default_f5956912-7e70-11e7-b037-000c29e466a1_0 f7b8fc863f90 openshift/origin-pod:v3.6.0 "/usr/bin/pod" About a minute ago Up About a minute k8s_POD_router-1-5fthp_default_f5956912-7e70-11e7-b037-000c29e466a1_0 faf2d34e83f5 docker.io/openshift/origin@sha256:b19a59b84d88b09854937b50eb54aff840287816059ffdbfd87f7d8b04a4bff4 "/bin/bash -c '#/bin/" 2 minutes ago Exited (0) 44 seconds ago k8s_storage-setup-job_persistent-volume-setup-07x57_default_ebb637ff-7e70-11e7-b037-000c29e466a1_0 69f2338fb290 openshift/origin-pod:v3.6.0 "/usr/bin/pod" 2 minutes ago Exited (0) 42 seconds ago k8s_POD_persistent-volume-setup-07x57_default_ebb637ff-7e70-11e7-b037-000c29e466a1_0 590ce64a0675 openshift/origin:v3.6.0 "/usr/bin/openshift s" 2 minutes ago Up 2 minutes origin [root@master log]# ls ``` Answers: username_1: Seeing something similar in Fedora 26. Fedora 26: 4.12.5-300.fc26.x86_64 Docker version 1.13.1, build 27e468e/1.13.1 oc cluster up --version="v3.6" ``` []$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dd874d0aa317 registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 1 second ago Created k8s_storage-setup-job_persistent-volume-setup-k2dfb_default_8e005368-8417-11e7-9526-54e1ad534907_0 9b18247477c9 registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 1 second ago Up Less than a second k8s_POD_persistent-volume-setup-k2dfb_default_8e005368-8417-11e7-9526-54e1ad534907_0 16186fd10118 registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 
2 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-d9p8n_default_8d5b422b-8417-11e7-9526-54e1ad534907_0 a5d3593ab977 registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 2 seconds ago Exited (0) 1 second ago k8s_POD_persistent-volume-setup-d9p8n_default_8d5b422b-8417-11e7-9526-54e1ad534907_0 58741af4da1d registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 3 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-gfc43_default_8cb749f4-8417-11e7-9526-54e1ad534907_0 b6e20161378a registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 3 seconds ago Exited (0) 2 seconds ago k8s_POD_persistent-volume-setup-gfc43_default_8cb749f4-8417-11e7-9526-54e1ad534907_0 a3511007e393 registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 4 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-35vtl_default_8bfaadda-8417-11e7-9526-54e1ad534907_0 05afb0121270 registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 4 seconds ago Exited (0) 3 seconds ago k8s_POD_persistent-volume-setup-35vtl_default_8bfaadda-8417-11e7-9526-54e1ad534907_0 4532e00acf07 registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 5 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-qq46n_default_8b5cfcef-8417-11e7-9526-54e1ad534907_0 3dce88e0f4da registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 6 seconds ago Exited (0) 4 seconds ago k8s_POD_persistent-volume-setup-qq46n_default_8b5cfcef-8417-11e7-9526-54e1ad534907_0 6c3558d64525 registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 6 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-5l1vk_default_8abdb0db-8417-11e7-9526-54e1ad534907_0 d6b66477df1b registry.access.redhat.com/openshift3/ose-pod:v3.6 "/usr/bin/pod" 7 seconds ago Exited (0) 5 seconds ago k8s_POD_persistent-volume-setup-5l1vk_default_8abdb0db-8417-11e7-9526-54e1ad534907_0 4afd8384035c registry.access.redhat.com/openshift3/ose@sha256:b29e129d432bdedc32ff832f8c3f8e20decdde4577608ebe1790580bd7a8303f "/bin/bash -c '#/b..." 7 seconds ago Created k8s_storage-setup-job_persistent-volume-setup-45cjq_default_8a21001d-8417-11e7-9526-54e1ad534907_0 ``` etc. etc. ``` []$ docker logs 03983efa4d68 container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/origin/openshift.local.volumes/pods/767a3c04-8417-11e7-9526-54e1ad534907/volumes/kubernetes.io~secret/pvinstaller-token-htgc2\\\" to rootfs \\\"/var/lib/docker/overlay2/1e86cfbc34dc9e0e41294c218f8dc747c066f28e189725f609da050055f8f1c4/merged\\\" at \\\"/var/lib/docker/overlay2/1e86cfbc34dc9e0e41294c218f8dc747c066f28e189725f609da050055f8f1c4/merged/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /var/lib/docker/overlay2/1e86cfbc34dc9e0e41294c218f8dc747c066f28e189725f609da050055f8f1c4/merged/run/secrets/kubernetes.io: read-only file system\\\"\"" ``` username_2: See also #15038.
platformio/platformio-core
304908255
Title: Local path for packages Question: username_0: What kind of issue is this? - [x] PlatformIO Core. ------------------------------------------------------------------ ### Configuration **Operating system**: MacOS 10.13.3 **PlatformIO Version** (`platformio --version`): PlatformIO, version 3.5.2rc4 ### Description of problem I would like to be able to specify a local path for packages. Currently, it seems the way to do that is to have a local copy of the platform and specify the path for that platform in the platformio.ini. Then, inside the platform's platform.json, set the version of the package to the path for that package (prepended with `file://`). However, when I do that, it copies the folder of the package to the `~/.platformio/packages` folder, preventing me from making further changes to my package's original package location, without deleting the copied version between every build or editing the version in the `~/.platformio/packages` folder. Ideally, I would like to be able to keep the original location of the package that I specified and keep editing from that location. Answers: username_1: Do you mean https://github.com/platformio/platformio-core/issues/1367 ? username_0: Ah yes, I think that will do it. Thank you Status: Issue closed
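For reference, a sketch of how this reads in `platformio.ini` once the feature from #1367 landed in later PlatformIO releases: a `symlink://` spec links the package in place instead of copying it into `~/.platformio/packages`, so edits to the original location keep taking effect. The package name and path are examples only; verify the syntax against the current PlatformIO docs:

```ini
[env:myboard]
platform = ststm32
platform_packages =
    framework-arduinoststm32 @ symlink:///home/me/dev/framework-arduinoststm32
```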
cypress-io/cypress
650253854
Title: Sporadic "Cannot set property 'err' of undefined" error occurs in place of other errors during automatic test-reruns on file save. Question: username_0: When Cypress detects uncaught errors originating from your application it will automatically fail the current test. This behavior is configurable, and you can choose to turn this off by listening to the uncaught:exception event. Learn more ``` The error is not really originating from the application; it's occurring within `hookFailed` at https://github.com/cypress-io/cypress/blob/4cfcae28f097e013ffe7dc7419905eccf1022709/packages/driver/src/cypress/runner.js#L526. Both `getTest()` and `getTestFromHookOrFindTest(hook)` end up returning `undefined`. I've seen this on two very different projects. - At work, which is a custom server + Webpack setup, webpack/React, running on MacOS. - At home, on a personal project running Next.js, running on Windows (Powershell). This project is linked below. ### Desired behavior: The test fails with the normal readable stack trace. ### Test code to reproduce This is easily reproducible on this project (created a branch with a failing spec): https://github.com/primitiveconcept/ludumdare46/tree/cypress/err-of-undefined/client - Run `npm run cy:dev` for server + Cypress - Run the `filesystem.spec.ts` spec - Save the `filesystem.spec.ts` while the test is running. Reproduces around 25% of the time, may require a few tries. If needed, I can try to come up with a minimal reproduction, but this isn't a very complex project. The server isn't required for the tests. ### Versions <!-- Cypress, operating system, browser --> Cypress versions: Can reproduce on 4.9.0. Cannot reproduce on 4.5.0. Other versions are unstable for various reasons on the example project. At work, this started occurring on 4.6.0. OS: MacOS + Windows Browser: Chrome + Electron Answers: username_1: I can confirm that this issue shows up (not sporadic, but as soon as something is "hot-reloaded/refreshed"), and does **not** appear in 4.5.0. username_2: I can verify this behavior - if you save the file really early in the test running, the `test` is not defined when reading the global `afterEach` hook, so it throws this error. Can reproduce from the code in this repo as explaind: https://github.com/primitiveconcept/ludumdare46/tree/cypress/err-of-undefined/client If I run this in 4.5, the tests keep locking up for me, so I wouldn't say 4.5 is a working version. username_3: I always get this when it's detecting a change and re-running the tests. Even if the previous test-run has finished running a long time ago username_0: Should I open a separate issue for the `The following error originated from your application code, not from Cypress.` message? At work, this was introduced with an upgrade to 4.6.0, but we just assumed it was something in our code causing it. I'm guessing that's why it hasn't been reported until now. username_4: The bug is annoying, but I'm more concerned that the error message is lying to us, implying that the error came from our code when it really came from cypress. Should there be another issue to address that? username_5: Same issue with 4.11.0. username_2: This error is being thrown from here: https://github.com/cypress-io/cypress/blob/develop/packages/driver/src/cypress/runner.js#L550 Sometimes the `test` is not defined at this point when we are trying to set `test.err`. So the question is, why is it failing to find the test here in this hook? 
It just doesn't have this `currentTest` property defined when we're trying to read it. <img width="381" alt="Screen Shot 2020-07-22 at 5 40 21 PM" src="https://user-images.githubusercontent.com/1271364/88169765-80db8780-cc42-11ea-87de-c76c98c1558b.png"> I can get this `test` to be `undefined` in a lot of circumstances, when saving the file, but there's a special time and situation where it hits that it makes the error bubble up incorrectly as an application error. I'm having a harder time reducing the use case to a small example for this. It is reproducible from here: https://github.com/primitiveconcept/ludumdare46/tree/cypress/err-of-undefined/client username_6: For me it happens every time after the first re-run. The initial run passes with no issues and all the subsequent are failing with this error, even if no files are changed - just a simple re-run triggered by the button in cypress UI. I end up returning false from the 'uncaught:exception' hook to suppress this error temporarily, so the tests are passing on re-run. "cypress": "4.10.0" username_0: I could cut that example down quite a bit if that would help. username_7: This is happening to me after upgrading from v4.10.0 to v4.11.0. username_8: I'm working on this. We might have already fixed as part of https://github.com/cypress-io/cypress/pull/8113, and I'm working on getting a reproducible test case to verify username_8: confirmed the work for this is done in #8113 Scheduled to release in version `4.12.1` Status: Issue closed username_2: If you're still experiencing this issue after upgrading to 4.12.1, please see this issue: #8189 and the new PR proposed to fix the issue: #8193
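For anyone stuck on an affected version, the temporary mitigation username_6 describes above looks like this (the message-check string is an assumption; match whatever your runs print). The real fix shipped in 4.12.1:

```js
// cypress/support/index.js
Cypress.on('uncaught:exception', (err) => {
  // suppress only this specific spurious error; let everything else fail tests
  if (err.message.includes("Cannot set property 'err' of undefined")) {
    return false;
  }
});
```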
christiansandberg/canopen
920435525
Title: Need example for SDO server Question: username_0: Hi, I need to implement unit tests for my software, and for that I want to create a simulator that will respond to SDO requests from a client. Do you have an example of a dialogue between a client and a server, for example over a virtual CAN interface? A sketch of one possible setup follows.
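A minimal sketch of such a dialogue using two `Network` objects joined by python-can's in-process virtual bus: a `LocalNode` answers SDO requests from its object dictionary, and a `RemoteNode` plays the client. The EDS file name is a placeholder, and index 0x1008 (Manufacturer device name) assumes your EDS defines it:

```python
import canopen

EDS = 'sample.eds'  # any EDS describing the simulated device

# SDO server side
server_net = canopen.Network()
server_net.connect('test', bustype='virtual')
server_node = server_net.create_node(2, EDS)  # LocalNode: answers SDO requests

# SDO client side, a second bus instance sharing the same virtual channel
client_net = canopen.Network()
client_net.connect('test', bustype='virtual')
client_node = client_net.add_node(2, EDS)     # RemoteNode: sends SDO requests

# The read below goes out as an SDO upload request on the virtual bus
# and is answered by the LocalNode from the EDS defaults.
print(client_node.sdo[0x1008].raw)

client_net.disconnect()
server_net.disconnect()
```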
BeamMW/beam
379158609
Title: Multiple-GPU support Question: username_0: we are working on it right now @username_1, it should be added soon Answers: username_1: when is multiple-GPU support coming? username_0: we are working on it right now @username_1, it should be added soon username_1: before mainnet launch? Is there a specific time? username_0: we hope so. No specific date for now; follow us on Medium and in Telegram to see all news username_0: already supported Status: Issue closed
T4MVC/T4MVC
723414866
Title: T4 files not added to project on package install Question: username_0: I installed the latest package v4.2.4, but the files `T4MVC.tt` and `T4MVC.tt.hooks.t4` weren't added to my VS 2019 MVC5 project. I suspect this may have something to do with the fact that a file referenced in the package's install script (`install.ps1`) doesn't exist in the package: `$project.ProjectItems.Item("T4MVC.tt.settings.xml").Properties.Item("BuildAction").Value = 0` The script may be failing entirely because of its absence, but I'm not certain of this. Questions: 1. Am I correct with the above? 2. Is the workaround to simply manually add these files to the project? 3. Where can I obtain the `T4MVC.tt.settings.xml` file? Answers: username_1: Nothing has changed in a long time, so it shouldn't be broken. Note that T4MVC is only for .NET Classic, and not .NET Core. Maybe you are using Core? username_0: Apparently not. I just discovered the `T4MVCVB` package. But it seems far behind the C# version. Is there anything critical that I'll be missing by using it? username_0: p.s. Yes, I'm using .NET Classic, v4.8, in a VB.NET project username_1: It was maintained by someone else a while back, but has not been updated since 2016. That being said, things have not changed fundamentally since, so it may still work. But I won't be able to support it if it doesn't :) username_0: Hm, I just installed the `T4MVCVB` package. Same result. Nothing was added to my project. So, question 2 again: is the workaround to manually add these files to the project? username_0: I've just put it on my to-do list to work with you to update that VB.NET version. Trouble is, I don't know when I'll be able to get to it. I'm swamped. It'll be well into the Spring months of next year, I'm sure. But you've got someone out here who wants to help :-) username_1: I can't explain why nothing gets added. I'm a bit out of the .NET world, so I haven't tried any of this in a while. Maybe some setting changed in VS, or something that affects it. Thanks for the offer to support VB! username_0: Sure thing, happy to do it. But can you confirm these: 1. Is the workaround to manually add these files to the project? 2. Where can I obtain the `T4MVC.tt.settings.xml` file? username_1: It would be best to figure out why it's not working. I just tried creating a new VB MVC project and I was able to add T4MVCVB:

```
Attempting to gather dependency information for package 'T4MVCVB.3.7.8' with respect to project 'WebApplicationVBT4MVC', targeting '.NETFramework,Version=v4.7.2'
Gathering dependency information took 15.36 sec
Attempting to resolve dependencies for package 'T4MVCVB.3.7.8' with DependencyBehavior 'Lowest'
Resolving dependency information took 0 ms
Resolving actions to install package 'T4MVCVB.3.7.8'
Resolved actions to install package 'T4MVCVB.3.7.8'
GET https://api.nuget.org/v3-flatcontainer/t4mvcvb/3.7.8/t4mvcvb.3.7.8.nupkg
GET https://api.nuget.org/v3-flatcontainer/t4mvcextensions/3.7.4/t4mvcextensions.3.7.4.nupkg
OK https://api.nuget.org/v3-flatcontainer/t4mvcvb/3.7.8/t4mvcvb.3.7.8.nupkg 250ms
OK https://api.nuget.org/v3-flatcontainer/t4mvcextensions/3.7.4/t4mvcextensions.3.7.4.nupkg 259ms
Installing T4MVCExtensions 3.7.4.
Installing T4MVCVB 3.7.8.
Adding package 'T4MVCExtensions.3.7.4' to folder 'C:\Users\david\source\repos\WebApplicationVBT4MVC\packages'
Added package 'T4MVCExtensions.3.7.4' to folder 'C:\Users\david\source\repos\WebApplicationVBT4MVC\packages'
Added package 'T4MVCExtensions.3.7.4' to 'packages.config'
Successfully installed 'T4MVCExtensions 3.7.4' to WebApplicationVBT4MVC
Adding package 'T4MVCVB.3.7.8' to folder 'C:\Users\david\source\repos\WebApplicationVBT4MVC\packages'
Added package 'T4MVCVB.3.7.8' to folder 'C:\Users\david\source\repos\WebApplicationVBT4MVC\packages'
Added package 'T4MVCVB.3.7.8' to 'packages.config'
Executing script file 'C:\Users\david\source\repos\WebApplicationVBT4MVC\packages\T4MVCVB.3.7.8\tools\install.ps1'...
Successfully installed 'T4MVCVB 3.7.8' to WebApplicationVBT4MVC
Executing nuget actions took 10.71 sec
Time Elapsed: 00:00:27.0991883
```

I have not updated VS in a while though, in case that makes a difference. Can you try on a clean new project to isolate? I'll update my VS... username_0: The answer is that it's automatically generated at design time. In fact, I think I remember reading that somewhere in the docs. username_0: Here's the controller signature:

```vb
<HttpPost>
<ActionName("SignUp")>
<ValidateAntiForgeryToken>
Public Async Function SignUpAsync(Model As SignInOrSignUp) As Task(Of ActionResult)
...
End Function
```

Is this because I'm running these as `Async`? username_1: Yes, I think that was never supported. See #108 username_1: Actually, not sure that bug is related. It's been a long time, so I don't recall whether there was an issue with async methods on the C# side. It's possible that it's VB specific. username_0: Can you confirm? username_0: Found it. It's a bug. On line 819 of `T4MVCVB.tt` (the template code itself is C#), I changed this:

```cs
if (!method.Type.CodeType.get_IsDerivedFrom("System.Web.Mvc.ActionResult") && method.Type.CodeType.FullName !="System.Threading.Tasks.Task<System.Web.Mvc.ActionResult>")
```

...to this:

```cs
if (!method.Type.CodeType.get_IsDerivedFrom("System.Web.Mvc.ActionResult") && method.Type.CodeType.FullName !="System.Threading.Tasks.Task(Of System.Web.Mvc.ActionResult)")
```

Also: as it turns out, the generator strips out the `Async` when adding the `Overrides`. I'll dig through and see if I can't fix that as well. username_0: As far as I can tell, the problem seems to be related to this: https://github.com/T4MVC/T4MVC/blob/6edfc803513d521accc3fd39b2a5cde84dc49337/T4MVCHostMvcApp/T4MVC%20Files/T4MVC.tt#L941 As `method` is a COM object, I'm unable to discover its members. Is there something like an `IsAsync` property, to ensure that the `Async` modifier is retained? username_0: Oops, I just noticed I'm drifting off topic. Shall I move these to a new issue? username_1: About manually adding the files to the project, yes, you can do that if all else fails. For the ActionResult issue you found, it's likely something that was copied from the C# version and was not tested. The doc can help to discover methods, e.g. I think this is the right doc: https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.vccodemodel.vccodefunction?view=visualstudiosdk-2019. But not sure about detecting async. username_0: That looks to be the right doc, but there's nothing in there about `Async`. Hm. It looks like we'll have to manually restore that modifier after the `Overrides` insertion steps on it. It'll only be once per new controller, so that's not too bad. username_0: OK, getting back on topic...
The original problem of the files not being added to a `PackageReference` project... do you prefer to keep this issue open for tracking? Or is the workaround of adding them manually sufficient to close? If the latter, I'll go ahead and close it. username_1: It's fine to keep it open, but realistically, it's unlikely to get fixed. As time goes, more and more people move to Core, so T4MVC is less and less used. username_0: I understand. I'd love to move to Core myself, but unfortunately Microsoft is dropping the ball with VB.NET. username_1: Oh, I didn't realize that. I've left MS and I'm quite disconnected from this world now. username_0: I fantasize about becoming disconnected from this world as well ;-)
hackoregon/openelections
474334464
Title: Update Forgotten Password form is not centered Question: username_0: When I try to reset my password via email, the form should be centered. <img width="1197" alt="Open_Elections" src="https://user-images.githubusercontent.com/99731/62095051-0d23bc00-b234-11e9-968d-fa526434b786.png"><issue_closed> Status: Issue closed
jdf2e/nutui
1178030777
Title: [suggest] Question: username_0: ## Addrees 组件转换用户参数时丢失了用户的代理属性 <!--对你想要发生的事情的清晰而简洁的描述--> address 组件中的 regionList 在定义时使用了 reactive 函数(vue 会深度遍历里面的属性,更改为 proxy)。 如果用户提供的参数比如 props.province 本身就已经是 proxy 时,且该 proxy 内定义了一个计算属性 `id: (target) => target.row.uuid`,reactive 遍历时无法遍历到 id 这个属性 ,因此 reactive 函数执行后,regionList 中的 province 数据将不能访问到 id。 从而导致用户传入的参数和选择结果的数据在访问时会出现差异。 如果你能用 shallowReactive 去构造 regionList 它将保证组件输入的数据和选择结果输出的数据的一致性 ## 其它 现有场景 ```js const originalProvinces = [{ row: { uuid: '1' name: '第一级', }, }] function parseNode(node) { return new Proxy(node, { get(target, prop) { if (prop === 'id') return target.row.uuid if (prop === 'name') return target.row.name return target[prop] } }) } function onClose({ data }) { data.province.id // undefined } return () => { h(NutAddress, { province: originalProvinces.map(parseNode), onClose, ... }) } ```
microsoft/fhir-server
863100718
Title: Add cancellation logic to SearchParameterStatusDataStore operations Question: username_0: **User story** The search parameter status operations GetSearchParameterStatuses() and UpsertStatuses() should take in a cancellation token as input. **Acceptance criteria** Cancellation logic is added to search parameter status operations. Answers: username_0: AB#84443 Status: Issue closed
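A sketch of the shape of the change; the member and type names are taken from the story, and the actual signatures in the repo may differ:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public interface ISearchParameterStatusDataStore
{
    // Both operations accept a CancellationToken so callers can abort
    // long-running status reads and writes.
    // ResourceSearchParameterStatus is assumed to be the existing domain type.
    Task<IReadOnlyCollection<ResourceSearchParameterStatus>> GetSearchParameterStatuses(
        CancellationToken cancellationToken);

    Task UpsertStatuses(
        IReadOnlyCollection<ResourceSearchParameterStatus> statuses,
        CancellationToken cancellationToken);
}
```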
go-git/go-git
630431072
Title: Worktree Checkout behavior is different from git Question: username_0: From what I've noticed, the `Checkout` method on `Worktree` behaves differently than standard git. For example, given an untracked file in the working directory of some branch tree, calling `Checkout` with the default options to another branch would remove that file, which would not happen with a standard `git checkout other-branch`. Using the `Keep` option wouldn't help much, as files that were added and committed would remain in the working directory after a checkout. I suppose this has to do with the current implementation, where `Checkout` acts somewhat like `git reset`. Is this expected, or is it something to look into? Thanks! Answers: username_1: go-git should behave always as the original git username_2: Also I've noticed the lack of ability to specify concrete files to checkout, like original git allows with `--` username_3: I need this function; is there another way to do that? username_4: I have the same issue. I'm checking out into a bare repo and use `git --git-dir=$HOME/.git --work-tree=$HOME checkout`. With vanilla git this works, but when using go-git it would remove my complete $HOME.
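For reference, the options involved, using go-git's actual `CheckoutOptions` fields; whether `Keep` fully matches git's semantics is exactly the open question in this issue:

```go
package main

import (
	"github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	repo, err := git.PlainOpen(".")
	if err != nil {
		panic(err)
	}
	w, err := repo.Worktree()
	if err != nil {
		panic(err)
	}
	// Keep asks go-git to leave local changes in place instead of
	// resetting the worktree, which is what the default behavior does.
	err = w.Checkout(&git.CheckoutOptions{
		Branch: plumbing.NewBranchReferenceName("other-branch"),
		Keep:   true,
	})
	if err != nil {
		panic(err)
	}
}
```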
anonaddy/browser-extension
1021761348
Title: Header link always points to app.anonaddy.com Question: username_0: This is more of a small usability nuisance than a bug, yet I feel it is worth mentioning. When using this extension with a self-hosted instance of AnonAddy (see #9), I would expect the link in the header to point me to that instance. However, I noticed the value is hard-coded to app.anonaddy.com. This may be appropriate for most users, but having custom settings reflected would definitely be a nice touch. I'm not familiar with Vue, so I probably won't be much help myself, but I assume the setting is already persisted *somehow* for generic functionality (yet I was unable to figure out the data retention mechanism as of now). Reading the stored value and using it instead of the currently hardcoded string feels like it should be rather straightforward, though. Related code: https://github.com/anonaddy/browser-extension/blob/3934857059bcb2be1ad3e6d67dd4e84d82c7eadd/src/assets/js/components/App.vue#L119-L125 https://github.com/anonaddy/browser-extension/blob/3934857059bcb2be1ad3e6d67dd4e84d82c7eadd/src/assets/js/components/App.vue#L31-L52 Answers: username_1: Good spot, I'll sort that out shortly. username_1: This has been fixed in v2.0.10. Status: Issue closed
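The fix presumably boils down to a lookup like the following sketch. The `settings`/`instance` storage keys are assumptions, since the actual persistence mechanism wasn't identified above; only the fallback URL comes from the original code:

```js
// WebExtension storage read with a fallback to the hosted instance.
async function getInstanceUrl() {
  const { settings } = await browser.storage.sync.get('settings')
  return (settings && settings.instance) || 'https://app.anonaddy.com'
}
```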
mheyman/Isopoh.Cryptography.Argon2
300655904
Title: Netcore2 on mac error Question: username_0: I am using this on .NET Core 2 and get this error when trying to run the application on macOS: the type initializer for Isopoh.Cryptography.SecureArray.SecureArray threw an exception. Answers: username_1: I believe I have fixed the issue. I have never been completely certain when the CLR evaluates static functions, and apparently it changed in .NET Core at some point. I don't have easy access to a Mac to test on, so the current code hasn't been tested on a Mac. Good luck! :-) username_2: I am experiencing the same issue. username_2: But on Linux instead. username_2: Actually, I wasn't on the latest version. The problem is indeed fixed, and the issue can be closed! Status: Issue closed
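For anyone verifying the fix, a minimal usage sketch of this library's documented high-level API; hashing is where the `SecureArray` type initializer runs, so this exercises the failing path:

```csharp
using System;
using Isopoh.Cryptography.Argon2;

class Demo
{
    static void Main()
    {
        // Argon2.Hash allocates SecureArray buffers internally, which is
        // where the reported type-initializer exception surfaced.
        string encoded = Argon2.Hash("my password");
        bool ok = Argon2.Verify(encoded, "my password");
        Console.WriteLine($"verified={ok}");
    }
}
```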
rkkr/simple-keyboard
687386781
Title: [Feature Request] add Theme Question: username_0: A black AMOLED theme with high contrast :) Answers: username_0: the menu can't be tapped.. :( username_0: it already opens when I change it to "material dark", but the color hasn't changed username_1: Set it after changing the theme. username_0: it can be changed when on "material light", but the font color is less bright :( username_1: It works for dark themes the same way... Status: Issue closed
diegomura/react-pdf
434778494
Title: Forcing Line Break in Text component Question: username_0: Hi, excellent solution. I am sure this is an obvious answer, but I cannot see how to get react-pdf to produce the following layout:

```
Section 1
Paragraph 1.1
Paragraph 1.2
Section 2
```

Using the styled components example on the website:

```
["0","1"].map(s => (
  <Paragraph>Section s
    <Paragraph>Paragraph s.1</Paragraph>
    <Paragraph>Paragraph s.2</Paragraph>
  </Paragraph>
))
```

it renders to:

```
Section 1 Paragraph 1.1 Paragraph 1.2 Section 2 Paragraph 2.1 Paragraph 2.2
```

Any help/suggestions appreciated. Thanks, Finbar Status: Issue closed Answers: username_0: Oops, I got that wrong. It was meant to be:

```
["0","1"].map(s => (
  <Paragraph>Section s
    <Paragraph>Paragraph s.1</Paragraph>
    <Paragraph>Paragraph s.2</Paragraph>
  </Paragraph>
))
```

and then I found I can use the Fragment feature in React:

```
["0","1"].map(s => (
  <Fragment>Section s
    <Paragraph>Paragraph s.1</Paragraph>
    <Paragraph>Paragraph s.2</Paragraph>
  </Fragment>
))
```

and it works as expected :)
Squirrel/Squirrel.Windows
153855955
Title: squirrel Update.exe --uninstall fails to uninstall package Question: username_0: We have developed an Electron application with autoupdate using electron-windows-installer. We didn't have any problems before, but today we generated a new installer and I get this error when running Update.exe --uninstall:

```
2016-05-09 21:36:32> Program: Starting Squirrel Updater: --install .
2016-05-09 21:36:32> Program: Starting install, writing to C:\Users\luis\AppData\Local\SquirrelTemp
2016-05-09 21:36:32> Program: About to install to: C:\Users\luis\AppData\Local\TractisPki
2016-05-09 21:36:32> Program: Install path C:\Users\luis\AppData\Local\TractisPki already exists, burning it to the ground
2016-05-09 21:36:32> Unhandled exception: System.AggregateException: Se han producido uno o varios errores. ---> System.IndexOutOfRangeException: Índice fuera de los límites de la matriz.
en Squirrel.UnsafeUtility.<>c__DisplayClass2.<EnumerateProcesses>b__0(Int32 i)
en System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
en System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
en System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
en Squirrel.UnsafeUtility.EnumerateProcesses()
en Squirrel.UpdateManager.InstallHelperImpl.KillAllProcessesBelongingToPackage()
en Squirrel.Update.Program.<Install>d__38.MoveNext()
--- Fin del seguimiento de la pila de la excepción interna ---
en System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
en System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
en System.Threading.Tasks.Task.Wait()
en Squirrel.Update.Program.executeCommandLine(String[] args)
en Squirrel.Update.Program.main(String[] args)
---> (Nº de excepción interna 0) System.IndexOutOfRangeException: Índice fuera de los límites de la matriz.
en Squirrel.UnsafeUtility.<>c__DisplayClass2.<EnumerateProcesses>b__0(Int32 i)
en System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
en System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
en System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
en Squirrel.UnsafeUtility.EnumerateProcesses()
en Squirrel.UpdateManager.InstallHelperImpl.KillAllProcessesBelongingToPackage()
en Squirrel.Update.Program.<Install>d__38.MoveNext()<---
```

Please, could you help us? We can't find any information about this error. Thanks Answers: username_0: So if the number of processes is high (more than 2048/sizeof(DWORD)), you will go out of the bounds of the int[2048]. username_1: Subscribe, same sporadically here. Whenever a user has many programs open and starts setup, it fails. username_2: Just had this in 1.7.8, so the bug still exists. username_3: Same problem (2019, 3 years after this report). Anybody actually using this?
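username_0's diagnosis above points at a fixed 2048-byte PID buffer. A sketch of the usual `EnumProcesses` pattern that avoids it (not Squirrel's actual code): grow the buffer until the API reports it needed less space than was offered:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class ProcessEnumerator
{
    [DllImport("psapi.dll", SetLastError = true)]
    static extern bool EnumProcesses([Out] int[] processIds, int arraySizeBytes, out int bytesReturned);

    public static int[] GetProcessIds()
    {
        var pids = new int[512];
        int bytesReturned;
        while (true)
        {
            if (!EnumProcesses(pids, pids.Length * sizeof(int), out bytesReturned))
                throw new Win32Exception();
            if (bytesReturned < pids.Length * sizeof(int))
                break;                        // the buffer was large enough
            pids = new int[pids.Length * 2];  // completely full: retry bigger
        }
        Array.Resize(ref pids, bytesReturned / sizeof(int));
        return pids;
    }
}
```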
wumouren/react
335418242
Title: React Knowledge Review (3): Hand-Writing Your Own react-redux Question: username_0: ## Foreword ([sample code is here](https://github.com/username_0/react/tree/master/my-redux-notes/my-redux))

#### I recommend reading 胡子大哈's [《React.js 小书》](http://huziketang.mangojuice.top/books/react/)

When Redux comes up, what we think of most is the React-redux library, but Redux and React-redux are actually not the same thing. Redux is an architectural pattern that originates from Flux. For a detailed introduction see [here](https://www.zhihu.com/question/47686258), [here](http://www.ruanyifeng.com/blog/2016/01/flux.html), or [here](https://segmentfault.com/a/1190000006742449). React-redux is one concrete implementation that combines the Redux idea with React.

When using React, we often run into deeply nested components that need to pass values around. Passing values through props is clearly painful. To solve this problem, React provides the native context API, but the solution we use most is React-redux, a library built on top of the context API.

This article does not cover how to use React-redux; instead, through a small example, we'll get a feel for what Redux is.

With that said, let's get down to business and implement our own Redux.

#### 1. The beginning

First, create a project with create-react-app, delete the redundant parts under src keeping only index.js, and modify the DOM structure of index.html:

```
# index.html
<div id="root">
  <div id="head"></div>
  <div id="body"></div>
</div>
```

In index.js we create an object that stores and manages the data state of our whole application, and render the data to the page with render functions:

```
const appState = {
  head: {
    text: '我是头部',
    color: 'red'
  },
  body: {
    text: '我是body',
    color: 'green'
  }
}

function renderHead (state){
  const head = document.getElementById('head')
  head.innerText = state.head.text;
  head.style.color = state.head.color;
}

function renderBody (state){
  const body = document.getElementById('body')
  body.innerText = state.body.text;
  body.style.color = state.body.color;
}

function renderApp (state){
  renderHead(state);
  renderBody(state);
}

renderApp(appState);
```

Run the code now and open the page, and we can see '我是头部' ("I am the header") in red in the head, and '我是body' ("I am the body") in green in the body.

![](https://user-gold-cdn.xitu.io/2018/5/21/163830e9312b03e3?w=525&h=73&f=png&s=1632)

If we regard head and body as two components inside root, then we have already implemented a globally unique state. This state is shared globally and can be accessed anywhere.

We can modify head's render function to see the effect:

```
function renderHead (state){
  const head = document.getElementById('head')
  head.innerText = state.head.text + '--' + state.body.text;
  head.style.color = state.head.color;
  state.body.text = '我是经过 head 修改后的 body';
}
```

[Truncated]

```
  if(store === oldStore) return;
  store.head !== oldStore.head && renderHead(store.head);
  store.body !== oldStore.body && renderBody(store.body);
  console.log('render app',store, oldStore);
}

// first render
subscribe((store, oldStore) => renderApp(store, oldStore));
renderApp(store);

dispatch({ type: 'BODY_TEXT', text: '我是调用 dispatch 修改的 body' });
```

In the code above, we modified storeChange so that it no longer mutates the original store directly; instead it computes and returns a new store. We also modified createStore so that it receives the new store returned by storeChange and, after dispatch changes the data and the page refreshes, assigns the new store to the previous one. While refreshing the page, we compare newStore and oldStore to figure out which parts need re-rendering, which gives us a small performance optimization.

Reopen the console, and we can see that when we modify body, head is not re-rendered:

![](https://user-gold-cdn.xitu.io/2018/5/24/16391f996f3b7fa6?w=1384&h=386&f=png&s=29116)

### Finally

Through a simple code example we've taken a quick look at Redux. Although the code is still crude, we have already implemented several of Redux's core ideas:

* All state of the application is stored as an object tree inside a single store.
* The only way to change the store is to dispatch an action; an action is an abstraction of a behavior.

The above is my own summary after reading 《React.js 小书》. Due to length limits, the next article will combine this with React and implement our own react-redux.
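The article's `createStore` itself falls in the truncated portion above, so here is a hypothetical minimal version consistent with the API the remaining text describes (`subscribe`, `dispatch`, and a `storeChange(state, action)` that returns a new store); it is an illustration, not the author's original code:

```js
function createStore(state, storeChange) {
  const listeners = []
  const subscribe = (listener) => listeners.push(listener)
  const dispatch = (action) => {
    const newStore = storeChange(state, action)
    // notify with both stores so renderers can diff newStore vs oldStore
    listeners.forEach((listener) => listener(newStore, state))
    state = newStore // keep the new store for the next comparison
  }
  return { subscribe, dispatch, getStore: () => state }
}
```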
dgreif/ring
834593870
Title: Large number of 'Snapshot request handler provided empty image buffer' messages in the log Question: username_0: <!-- Please DO NOT DELETE THIS TEMPLATE or your issue may be closed immediately. Before opening an issue, search for open and closed GitHub issues that match your situation. If this is a Live Stream issue please go back and use the Live Streaming issue template. --> # Bug Report ### Describe the Bug I'm constantly receiving loads of "Snapshot request handler provided empty image buffer" messages, and the Home app doesn't always show an image. ### To Reproduce Nothing special: the plug-in was installed and it seems to get images, but I can see loads of these 'snapshot request handler' messages in the log. ### Expected behavior I expected it would be more tolerant and obtain image updates without issue. ### Screenshots/Logs I have a screenshot, but can't seem to see how to post it here? ### Additional context. ### Homebridge Ring Config It's no longer installed. ### Environment - OS: QNAP NAS - Node.js: v14.15.3 - NPM: v6.14.9 - homebridge-ring: v9.15.4 - homebridge: v1.3.3 Answers: username_1: This is a duplicate of #592. These messages are generated by homebridge and not something I have direct control over. I'm working with the homebridge developers to see if there is a way to get around them, but no promises. As far as snapshots not always being available, please read through https://github.com/username_1/ring/wiki/Snapshot-Limitations for more information on why that's the case. Status: Issue closed username_1: I talked to the homebridge developers and we were able to find a workaround to get rid of these logs. Released in v`9.15.5` username_0: Hi @username_1 - great, many thanks!
google/CausalImpact
282084301
Title: Causal impact inconsistent performance Question: username_0: Running CausalImpact for the same time series on the same version of R (installed with the same conda environment) on two similar machines (i7-6700HQ with Ubuntu 16.10 and gcc6, i7-7700HQ with Ubuntu 16.04 and gcc5), the elapsed time is ~3.5 seconds vs ~21 seconds. I tried to create several VMs with Ubuntu 17.04 and 17.10 using gcc6 and gcc7, but I couldn't get the same performance as the first machine. Do I need to install some special library on my machine to get the best from CausalImpact? Answers: username_1: I'm not aware of any configuration positively or negatively influencing the performance of CausalImpact. What runtime did you measure on the virtual machines? Closer to the 3.5s or closer to the 21s? (It's delicate to compare runtimes on virtual machines to runtimes on real machines, though; but at least the order of magnitude would be interesting.) Did you profile the CausalImpact calls, e.g. using `Rprof()`? This would help to understand where the bottleneck is. username_0: Hi @username_1, I'll try to use `Rprof()`; anyway, the elapsed times I reported are not on VMs. I'm comparing the performance on my machine vs my colleague's. username_1: According to [this thread](https://github.com/ContinuumIO/anaconda-issues/issues/7792), it seems more related to Anaconda than to CausalImpact. Status: Issue closed
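For reference, the profiling run suggested above looks like this in base R; `data`, `pre.period`, and `post.period` are placeholders for your own inputs:

```r
library(CausalImpact)

Rprof("ci_profile.out")                    # start the sampling profiler
impact <- CausalImpact(data, pre.period, post.period)
Rprof(NULL)                                # stop profiling

# Self-time per function shows where the slow machine spends its 21 seconds.
summaryRprof("ci_profile.out")$by.self
```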
qawolf/qawolf
853741538
Title: Test branches Question: username_0: Allow creating a branch for a test to mirror a branch for your app Answers: username_1: Having a single set of tests is becoming a bit of a blocker for us, hopefully this helps illustrate: We do all our manual QA and user acceptance testing on a branch called `staging`, the last stop before production deploys. We have all of our tests set up for this branch and right now they are passing. We are working on a branch (we'll call it) `feature-a` which will break a number of the tests. When we merge `feature-a` our build/tests will be failing. There's really no good way to get out ahead of this. We write unit tests on our branches and ensure those are passing before merge, but with a single set of QAE tests there's no way to use QAWolf to manage these new tests. The one idea that keeps rolling around in my head is if I could maintain tests in the repo - then QAWolf would be able to pull those in based on a branch I've selected in the UI to test against, maybe a structure similar to other testing practices:

```
project_root
  __qawolf__
    _helpers
      helper-a.js
      helper-b.js
    _tests
      test-a.test.js
      test-b.test.js
```

username_2: This is really critical for us too. Now that developers are using an http-tunnel to be able to run the tests on their own branch, we can see where there are broken tests due to the new changes (not bugs). If a developer edits the tests so that they do not break on their branch, that same instance of tests will then fail in CI for any merges that come in until the PR with the updates is merged in. When it was just me this was doable, but as soon as I added a second developer to the mix everything got very confusing. We need to be able to sync instances of the test code with matching code from our app, and the only way I can really think to do that properly is by keeping the tests in our repo next to our app in github. Maybe a drop down somewhere in the QAwolf gui that selects a branch for editing and test running? Developers are going to want these e2e tests to work like unit tests, in that they will want all the lights green on their PR before submitting it, and they will want all the lights green after it merges. Without the above system there is a mandatory period where all the tests are broken after every merge. Imagine if all 5 developers were updating the same set of tests to work with their PR branches... the chaos that would ensue for each other developer having the tests on their branches change, as well as the tests covering the main branch. If the code was stored in the app repo this would also allow the option of editing the tests in your code editor of choice, already set up with linting, prettier, and the ability to split the code windows and view more than 1 file at a time. I can see myself using the site editor most of the time, but occasionally formatting and debugging in the full editor. username_3: I agree, to support this we need to establish a file structure that works well with git, then use that as the source of truth for the dashboard / editor. This will allow you to use git to handle merge conflicts, track version history, and include test changes as part of the standard git flow (open PR, etc). Here is a proposed file structure.

- `qawolf/` folder at the root of the git repo
- A test file would be anything with the `.test.js` suffix. You can put these in a folder or not.
- `qawolf/helpers` is a folder that will store the helpers. For now it will just be one global helpers file.
In helpers v2 you can add as many helpers as you want.

```
qawolf/createAccount.test.js
qawolf/editor/addSnippet.test.js
qawolf/helpers/global.js
// after helpers v2
qawolf/helpers/signIn.js
```

username_3: The second part of this is how we manage this file syncing / saving / etc. I can think of two approaches.

A. Connect a GitHub/GitLab repo one time via settings. On the dashboard you choose which branch you want to work off of (it defaults to your main branch). When you edit a test file you click "Save/Commit" and that will write a commit to your current branch.

**Pros**
- Lowest friction. Create and edit tests from the dashboard without needing to install and run anything locally.
- Easy for less-technical users

**Cons**
- Cannot run it locally without a server connection to qawolf.com or a self-hosted version of qawolf.

B. Update `npx qawolf` to support using the file system as the source of truth, then have the dashboard / editor read directly from your local filesystem.

**Pros**
- We could refactor our dashboard / editor, bundle it with the npm package, and air-gap this. Privacy-conscious teams could create and maintain tests without creating an account at qawolf.com. They can opt into that later if they want the fast test infrastructure / online editing capability.

**Cons**
- More steps / higher friction. This requires running a `cli` command every time you want to create / edit tests.
- Difficult for less technical team members. Need to learn git for committing; requires installing node / dealing with potential environment issues that a hosted solution does not have.

username_3: Since our goal is to be the easiest tool for testing, our plan is to go with A. Let me know if you think we should be doing something differently than what I outlined above. ^ Status: Issue closed username_3: Allow connecting a github project to use as the source of truth. Allow choosing a branch when editing and running tests. Allow test name, code changes, and helpers changes to be committed to a branch. Status: Issue closed username_3: This is shipped 🚀
JoleneOL/market-manage
242388599
Title: 07.12 Front-end priority plan Question: username_0: 1. Upgrade the agent admin backend 2. Modify product prices, etc. 3. Split "My" into My Team & My Commission & My Devices, like this ![image](https://user-images.githubusercontent.com/6067222/28120943-318d072a-674c-11e7-8376-da1b66f30f2e.png) 4. Modify the order page to add 投融家 installment payments; prototype pending 5. #50 Add invoice management to the company backend 6. ...... @username_1 Jiang, for the admin pages due between now and early August, please tell me which ones have higher priority and I'll handle those first Answers: username_0: Split "My" (`personalCenter.html`) into My Team `myTeam.html`, My Commission `commission.html`, and My Devices `maintain.html` (requirements have changed and that page has no up-to-date prototype, so don't show it for now) Status: Issue closed
telerik/kendo-ui-core
964773177
Title: Grid selectable column header element does not render an id Question: username_0: ### Bug report When .Navigatable() is enabled, the Grid renders an **aria-describedby** attribute in each td element. The value of the attribute should match the id value of the respective column header. This works for standard columns bound to fields in the data, but doesn't work for a selectable column: ``` columns.Select(); ``` The selectable column header element (th) does not render an id. The td elements of that column render an **aria-describedby** attribute, the value of which does not match any element id. This causes an accessibility issue (Ticket ID: 1530928). ### Reproduction of the problem Reproducible with the MVC helper: ``` @(Html.Kendo().Grid<TelerikMvcApp1.Models.OrderViewModel>() .Name("grid") .Columns(columns => { columns.Select(); columns.Bound(p => p.OrderID).Filterable(false); columns.Bound(p => p.Freight); columns.Bound(p => p.OrderDate).Format("{0:MM/dd/yyyy}"); columns.Bound(p => p.ShipName); columns.Bound(p => p.ShipCity); }) .Pageable() .Navigatable() .Scrollable() .HtmlAttributes(new { style = "height:550px;" }) .DataSource(dataSource => dataSource .Ajax() .PageSize(20) .Read(read => read.Action("Orders_Read", "Grid")) ) ) ``` ### Current behavior The selectable column header element (th) does not render an id. ### Expected/desired behavior The selectable column header element (th) renders an id that matches the **aria-describedby** attribute value of the td elements in the column. ### Environment * **Kendo UI version:** 2021.2.616 * **jQuery version:** x.y * **Browser:** [all ]<issue_closed> Status: Issue closed
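Until the widget renders the id itself, a possible client-side patch is to assign the selectable column's header the id that its cells already reference, so the `aria-describedby` values resolve. A sketch only, not an official fix; the selectors assume the select column is the first column and that each cell's `aria-describedby` holds a single id:

```js
$(function () {
    var grid = $("#grid").data("kendoGrid");

    // the id that cells in the select column point at
    var describedBy = grid.tbody.find("tr:first td:first").attr("aria-describedby");

    // give the matching header that id if nothing else claims it
    if (describedBy && !document.getElementById(describedBy)) {
        grid.thead.find("th:first").attr("id", describedBy);
    }
});
```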
klubbdinmamma/klubbdinmamma.github.io
258312485
Title: Missing text in the feed Question: username_0: ![screen shot 2017-09-17 at 17 45 29](https://user-images.githubusercontent.com/42626/30522427-047503a6-9bd0-11e7-974d-10e62d6c7864.png) See also https://validator.w3.org/feed/check.cgi?url=https%3A%2F%2Fklubbdinmamma.com%2Ffeed.xml<issue_closed> Status: Issue closed
RSS-Bridge/rss-bridge
1175357488
Title: New team members Question: username_0: Greetings, I would like to suggest to add more team members to the rss-bridge team. There are currently 5 members: @ArthurHoaro @mitsukarenai @teromene @LogMANOriginal and @username_3 Independent of previous achievements and additions (and some of those team members have written most of the application) to rss-bridge, the only currently active team member is @username_3. Since 01-01-2021, only teromene (1 commit) and mitsu (2 commits, same topic) have worked on rss-bridge at all, with arthur and logman having 0 commits. In order to keep the software up to date and in a good state, I strongly suggest to add more members to take the load off of em's shoulders. Looking at the [list of contributors who added things to rss-bridge since the beginning of 2021](https://github.com/RSS-Bridge/rss-bridge/graphs/contributors?from=2021-01-01&to=2022-03-21&type=c), this is the top 10 list: 1. @username_3 2. @VerifiedJoseph 3. @username_0 4. @username_5 5. @username_1 6. @csisoap 7. @username_6 8. @theScrabi 9. @username_4 10. @somini I would suggest to give these people the opportunity to join the team (of course only if they want to :) ) to keep the project alive and well. Thoughts from everyone involved (team members or contributors) please Answers: username_1: I'd love to help, but at the moment I can't even cope with the flow of issues on my own repositories 😕 username_0: All good, this is purely as "these should be given the opportunity to join", nothing more :) If you dont want to, its totally fine :) Also, this is a suggestion, doesnt mean the team is going to agree :D username_2: I've tried to make a dent in the right direction to modernize the codebase (mostly in form of comments, code reviews and architecture suggestions), but failed on the lack of motivation on the core team. I eventually gave up. Even contacting one of the team members was a challenge. So, if you need a helping hand, I am all in. username_0: I just think that, when people moved on to other things, the process of "handing over to others" didnt happen as well as it could. So if we can get a few more people who are active in the community to be able to drive the process forward, it would also motivate people like you again to contribute :) Btw: I know that the Top10 thing is scewed because it only validates commits that are done, not PRs in progress. Just from my own open PRs, there are about 100 changed files in #2296 and 4000 added loc in #2494 . And even that number is scewed because its content that was present already and is just moved. So the list isnt perfect, but I want to start somewhere. username_3: @username_6 and @username_0, I plan to add you to maintainers. Before that I want to discuss in meet.jit.si some things about maintaining rss-bridge. Discussion will be recorded and pasted here. Do you have time to discuss Today 23:59 UTC+5? Link to discussion will be posted here or in official irc channel username_0: Hi, Midnight UTC+5 is 8pm in my and username_6s timezone. So it would work for me. Before we do this though, what about the other people on the list? username_3: You both seem to be more active than or others. username_4: I think this is a great idea. I would love to become a maintainer, but I only have limited time to contribute at the moment. In any case, I would definitely be happy to help in reviving old PRs and helping maintainers with the project. 
username_5: To be honest, I'm not sure that I will be able to help with the PR; I may lack knowledge about the project structure. I could help sort bugs if needed? username_0: so @username_6 : date in 45 minutes? @username_3 link? username_6: I'm available at 7PM UTC which is in 30 mins. username_3: https://meet.jit.si/PostWarProspectsBetrayTenderly username_3: https://gist.github.com/username_3/d4e1a9ec27d4eb16e6e287272aacc652 username_0: Alright, @username_6 and I are now also officially team members. Thanks to @username_3 for the intro. We will of course call on everyone to keep improving this app. So with 200% more active manpower, we'll see where we can go. username_3: https://feed.eugenemolotov.ru/var/2022-03-23.mp4
rodekruis/shelter-database
219243767
Title: Issues_drawings Question: username_0: First, one of the main issues is the absence of proper dimensions in the drawings. Also, there are no detailed construction drawings, such as section drawings, so actually implementing these drawings on a site is quite challenging. Second, I tried to add a new shelter, and after the 7 steps were completed I tried to upload some drawings in the structure tab. But it already shows me drawings from another shelter rather than an option to add the data. Furthermore, I am not sure what types of files are permitted here. It would be great if more file formats such as AutoCAD, SketchUp, Revit etc. could be added so that people can download them and make alterations as they need. Status: Issue closed Answers: username_1: Drawings are pre-created by the SRU and these are just supportive, to correctly understand the different data requested. I've allowed upload of AutoCAD and SketchUp. Revit is an octet-stream and this is a dangerous MIME type.
tensorflow/tensorflow
527694395
Title: tf.keras fit is incompatible with tf.function Question: username_0: **System information** - TensorFlow version (use command below): 2.0.0 - Python version: 3.7 **Describe the current behavior** I'm not sure if this is a bug or a documentation issue. The [migration guide](https://www.tensorflow.org/guide/migrate#mixed_variables_v1layers) recommends adding the `@tf.function` decorator to the `call` method of subclassed Keras models or layers. However, this causes problems when using the `Layer.add_loss` method with input-dependent losses as described in the [Keras guide](https://www.tensorflow.org/guide/keras/custom_layers_and_models#layers_recursively_collect_losses_created_during_the_forward_pass). In retrospect it makes sense that this doesn't work since `tf.function` requires all outputs to be returned from the function, but this is not made very clear in the docs and the resulting error message is not very helpful. **Code to reproduce the issue** ```python import numpy as np import tensorflow as tf from tensorflow.keras import layers class MyModel(tf.keras.Model): def __init__(self, num_classes): super().__init__() self.num_classes = num_classes self.dense = layers.Dense(num_classes, activation='sigmoid') @tf.function def call(self, inputs): self.add_loss(tf.reduce_sum(inputs), inputs=True) return self.dense(inputs) data = np.random.random((1000, 32)) labels = np.random.random((1000, 10)) model = MyModel(num_classes=10) model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001), loss='categorical_crossentropy') model.fit(data, labels, epochs=50) ``` And the resulting error: ``` TypeError: An op outside of the function building code is being passed a "Graph" tensor. It is possible to have Graph tensors leak out of the function building context by including a tf.init_scope in your function building code. For example, the following function will fail: @tf.function def has_init_scope(): my_constant = tf.constant(1.) with tf.init_scope(): added = my_constant * 2 The graph tensor has name: Sum:0 ``` **Expected behavior** Ideally `add_loss` and `tf.function` would somehow be compatible. This would give me the flexibility to use the model directly by calling it or use the keras `fit` and `predict` functions as I please without hurting performance. Alternatively, at least the `tf.keras` guide should make the incompatibility clear, and perhaps an explicit check should be added to make sure `add_loss` is not called inside a `tf.function`. Answers: username_1: The issue reproduces with tf-nightly version 2.1.0-dev20191125. username_2: Was able to reproduce the issue in TF 2.6.0-dev20210529,please find the gist [here](https://colab.research.google.com/gist/username_2/7a8c1ce66ad5eb7a990c64829dfcb0d7/untitled87.ipynb)..Thanks !
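A minimal sketch of the workaround implied by the report (an assumption, not an official fix: dropping the decorator is acceptable here because `Model.fit` already wraps the training step in its own graph function in TF 2.x, so `call` still runs in graph mode):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class MyModel(tf.keras.Model):
    def __init__(self, num_classes):
        super().__init__()
        self.dense = layers.Dense(num_classes, activation='sigmoid')

    # no @tf.function here: add_loss needs to run inside the function
    # context that Keras itself builds around the forward pass
    def call(self, inputs):
        self.add_loss(tf.reduce_sum(inputs), inputs=True)
        return self.dense(inputs)

model = MyModel(num_classes=10)
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy')
model.fit(np.random.random((1000, 32)), np.random.random((1000, 10)), epochs=1)
```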
lampepfl/dotty
278672738
Title: Functions params syntax is confusing Question: username_0: It's possible to pass a tuple as a function param without enclosing the tuple in parentheses. It is confusing. Example:
```
scala> List(1,2,3).foldLeft(7,8,9,10){ case (a,b) => a }
val res1: (Int, Int, Int, Int) = (7,8,9,10)
```
Another example:
```
scala> Option("abc").fold(-1,"hello",123L)
val res5: String => (Int, String, Long) => (Int, String, Long) = Lambda$3317/21098686
```
Confusing, as `Option.fold` has the signature `fold: (ifEmpty: => B)(f: String => B): B`, but it looks like I am cramming in more args. Writing:
```
Option("abc").fold(-1,{s:String => s.length})
val res6: String => (Int, String => Int) => (Int, String => Int) = Lambda$3318/1972520442@392dcdc3
```
is totally confusing, as I made a typo instead of …
Answers: username_1: Note that this is not new to Dotty. Your first example also compiles with scalac. Regarding the other two examples, they compile with Dotty due to [improved eta-expansion](http://dotty.epfl.ch/docs/reference/changed/eta-expansion.html). You can also make them compile with scalac:
```scala
scala> Option("abc").fold(-1,"hello",123L) _
res0: (String => (Int, String, Long)) => (Int, String, Long) = $$Lambda$1264/598183031@4119346d

scala> Option("abc").fold(-1,{s:String => s.length}) _
res1: (String => (Int, String => Int)) => (Int, String => Int) = $$Lambda$1266/389226553@776e7dfb
```
username_0: Well, true to the extent that with scalac you must add an underscore to make it an unapplied method -- so it's intended. For dotty it looks confusing to me. username_0: Thanks for pointing that out, I did not realize I was able to pass a tuple this way as an arg to a function using scalac. Status: Issue closed
kedacore/keda
562665263
Title: HPA does not update data as Keda Operator does Question: username_0: **What happened:** I configured a scaledobject for kafka and it is not updating the HPA info. This is the scaledobject configuration:
```yaml
spec:
  cooldownPeriod: 300
  maxReplicaCount: 5
  minReplicaCount: 1
  pollingInterval: 30
  scaleTargetRef:
    deploymentName: absolutegrounds-helper-processors
  triggers:
  - metadata:
      brokerList: bootstrap.kafka11:9092
      consumerGroup: int.absolutegrounds.helper.processor.datapipeline
      lagThreshold: "500"
      topic: INT-AG_TASK_SOURCE_DP
    type: kafka
```
Adding debug log level shows that the consumer has a certain lag:
```
{"level":"debug","ts":1581350124.0215647,"logger":"kafka_scaler","msg":"Group int.absolutegrounds.helper.processor.datapipeline has a lag of 7931 for topic INT-AG_TASK_SOURCE_DP and partition 2\n"}
```
But the HPA created shows this information:
```
NAME                                         REFERENCE                                      TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-absolutegrounds-helper-processors   Deployment/absolutegrounds-helper-processors   500/500 (avg)   1         5         5          3h35m
```
With this info:
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-02-10T12:41:40Z","reason":"ReadyForNewScale","message":"recommended size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-02-10T12:41:40Z","reason":"ValidMetricFound","message":"the HPA was able to successfully calculate a replica count from external metric lagThreshold(\u0026LabelSelector{MatchLabels:map[string]string{deploymentName: absolutegrounds-helper-processors,},MatchExpressions:[],})"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-02-10T13:03:38Z","reason":"DesiredWithinRange","message":"the desired count is within the acceptable range"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"External","external":{"metricName":"lagThreshold","metricSelector":{"matchLabels":{"deploymentName":"absolutegrounds-helper-processors"}},"currentValue":"0","currentAverageValue":"500"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"External","external":{"metricName":"lagThreshold","metricSelector":{"matchLabels":{"deploymentName":"absolutegrounds-helper-processors"}},"targetAverageValue":"500"}}]'
  creationTimestamp: "2020-02-10T12:23:13Z"
  labels:
    app.kubernetes.io/managed-by: keda-operator
    app.kubernetes.io/name: keda-hpa-absolutegrounds-helper-processors
    app.kubernetes.io/part-of: helpers-absolutegrounds-processor-intadaptive-lag
    app.kubernetes.io/version: 1.2.0
  name: keda-hpa-absolutegrounds-helper-processors
  namespace: intadaptive-cb
  ownerReferences:
  - apiVersion: keda.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ScaledObject
    name: helpers-absolutegrounds-processor-intadaptive-lag
    uid: 1b5e600d-4c00-11ea-8a9e-005056a2317c
  resourceVersion: "388821268"
  selfLink: /apis/autoscaling/v1/namespaces/intadaptive-cb/horizontalpodautoscalers/keda-hpa-absolutegrounds-helper-processors
  uid: 1bb53f18-4c00-11ea-9fd9-ecebb8956d60
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    [Truncated]
    name: absolutegrounds-helper-processors
status:
  currentReplicas: 5
  desiredReplicas: 5
  lastScaleTime: "2020-02-10T13:03:39Z"
```
**What you expected to happen:** Something like 7931/500 (avg) in the HPA. Instead it says that currentValue is 0 but currentAverageValue is 500 ¿¿?¿?¿?¿ and it has been like this for a long time.
**Anything else we need to know?:** Noticed that the currentReplicas and desiredReplicas info is not updated:
```
keda-hpa-cancellation-helper-processors           Deployment/cancellation-helper-processors   0/500 (avg)   1   5   4   3h42m
cancellation-helper-processors-7f56f97c84-b6h2h   1/1   Running   0   7d
```
**Environment:**
```
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
```
Keda version 1.2.0 Answers: username_1: @username_0 Could you please format the `ScaledObject` and `HPA` posted above, so they are more readable? e.g. https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#fenced-code-blocks username_0: Of course, sorry! username_1: @username_2 might have an idea? username_2: How many partitions does your topic have? username_0: The Keda log only shows 1 partition in the example given:
"Group int.absolutegrounds.helper.processor.datapipeline has a lag of 7931 for topic INT-AG_TASK_SOURCE_DP and partition 2\n"
Kafka Tool shows 5 partitions:
![image](https://user-images.githubusercontent.com/4446361/74535061-58ce5680-4f35-11ea-9a7d-0625055a431a.png)
And so does Kafka Manager:
![image](https://user-images.githubusercontent.com/4446361/74535170-8e733f80-4f35-11ea-9e2d-861484dd9e21.png)
username_2: well, that log is not clear because it's in a for loop which breaks as soon as there is a lag higher than the lagThreshold. So it just says that you have lag 7931 on partition 2, but maybe you have other lags on the other partitions. I think we should change this log somehow. Regarding the value shown by the HPA, 500/500, I would expect more like 2500 due to this code snippet:
```go
// don't scale out beyond the number of partitions
if (totalLag / s.metadata.lagThreshold) > int64(len(partitions)) {
	totalLag = int64(len(partitions)) * s.metadata.lagThreshold
}

metric := external_metrics.ExternalMetricValue{
	MetricName: metricName,
	Value:      *resource.NewQuantity(int64(totalLag), resource.DecimalSI),
	Timestamp:  metav1.Now(),
}
```
it caps the `totalLag` based on the number of partitions (otherwise more consumers would be idle). So I would expect `totalLag = 5 * 500`, and this value is passed as the external metric value for the HPA. Strange it reports 500/500 ... username_0: Thanks @username_2, this explains everything. Sorry, I was expecting exact values, but as you say it makes no sense to run idle consumers. Do you have another explanation for the currentReplicas and desiredReplicas info that is not aligned with the real number of replicas of the managed deployment? This was the info about the HPA:
```
NAME                                      REFERENCE                                   TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-cancellation-helper-processors   Deployment/cancellation-helper-processors   0/500 (avg)   1         5         4          3h42m
```
And this was the list of PODs, only 1 deployed, not 4:
```
NAME                                              READY   STATUS    RESTARTS   AGE
cancellation-helper-processors-7f56f97c84-b6h2h   1/1     Running   0          7d
```
The HPA was true, the target was correct, the number of replicas diminished to the minpods value, but the number of replicas shown was the one reached when the lag was over the target of the HPA. It was not updated to match the real number of replicas until quite a long time. Thanks for your help!!!!!!! username_2: @username_0 tbh KEDA should not be involved in updating the HPA values, it's all about Kubernetes. I have no clue right now.
Status: Issue closed username_0: Thanks @username_2, you are right, but it is strange because other HPAs we have configured do not show this misbehavior. We will continue using your application while monitoring the results. It is far easier to work with your solution than with our previous one. Regards, Alberto.
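One plausible reading of the 500/500 display, complementing the capping explanation above (this is an assumption about standard HPA mechanics, not something confirmed in the thread): for `targetAverageValue` metrics the HPA divides the reported metric by the current replica count before displaying it, so the numbers work out as:

```python
# illustrative arithmetic only; variable names are hypothetical
partitions, threshold, total_lag, replicas = 5, 500, 7931, 5

capped_lag = min(total_lag, partitions * threshold)  # 5 * 500 = 2500
hpa_average = capped_lag / replicas                  # 2500 / 5 = 500 -> shown as "500/500 (avg)"
```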
timburgan/timburgan
1120431112
Title: chess|move|d7d1|19178 Question: username_0: Just push 'Submit new issue'. You don't need to do anything else. Answers: username_0: @username_1 it looks like the game is broken https://github.com/username_1/username_1/actions username_1: Thanks for letting me know. Looks like it's back on again now https://github.com/username_1/username_1/issues/19225#issuecomment-1026686494 Status: Issue closed
discordjs/discord.js
419086487
Title: <Channel>.overwritePermissions(...) changes name Question: username_0: **Please describe the problem you are having in as much detail as possible:**
When `<Channel>.overwritePermissions(<Role>, ...)` is used, it will change the name of the channel to a `.toLowerCase()` version of `<Role>.name`.
**Include a reproducible code sample here, if possible:**
```js
const mutedRole = message.guild.roles.find(r => r.name === "Muted");
message.guild.channels.forEach(async (channel, id) => { // eslint-disable-line
  await channel.overwritePermissions(mutedRole, {
    SEND_MESSAGES: false,
    ADD_REACTIONS: false
  });
});
```
**Further details:**
- discord.js version: v12 Master Branch (Commit: `<PASSWORD>`)
- Node.js version: v11.9.0
- Operating system: Ubuntu 16.04 LTS
- Priority this issue should have – please be realistic and elaborate if possible: Medium
- [x] I have also tested the issue on latest master, commit hash: 1673b6f8f5bc53a30e2f2ef1123057d4e50c37c8
Status: Issue closed Answers: username_1: You are passing incorrect parameters to [`GuildChannel#overwritePermissions`](https://discord.js.org/#/docs/main/master/class/GuildChannel?scrollTo=overwritePermissions); you are supposed to pass an options object with `permissionOverwrites` and `reason` properties. Under the hood this method will just call [`GuildChannel#edit`](https://discord.js.org/#/docs/main/master/class/GuildChannel?scrollTo=edit) with the object you passed. Since [`Role`](https://discord.js.org/#/docs/main/master/class/Role) has a [`name`](https://discord.js.org/#/docs/main/master/class/Role?scrollTo=name) (and [`position`](https://discord.js.org/#/docs/main/master/class/Role?scrollTo=position)) property, doing so will edit both of those properties. You meant to use [`GuildChannel#updateOverwrite`](https://discord.js.org/#/docs/main/master/class/GuildChannel?scrollTo=updateOverwrite), which behaves like [`GuildChannel#overwritePermissions`](https://discord.js.org/#/docs/main/stable/class/GuildChannel?scrollTo=overwritePermissions) on stable.
conda-forge/conda-smithy
621494510
Title: use proper feedstock name in CFEP-13 validation Question: username_0: Right now we are not always getting the proper feedstock name in the CFEP-13 validation. See e.g. https://github.com/conda-forge/lalframe-feedstock/pull/17 This is likely due to rerendering in a clone of the repo to a dir that does not have the same name as its parent repo. Answers: username_1: For the `lalframe-feedstock` I have it cloned into a directory called `lalframe` on my host. It looks like @jakirkham tried to support this in #323: https://github.com/conda-forge/conda-smithy/blob/360b332df9ae771e1ce6ebff6fff6eaa82d87103/conda_smithy/configure_feedstock.py#L1553-L1555 but that isn't used (perhaps as of #1295): https://github.com/conda-forge/conda-smithy/blob/360b332df9ae771e1ce6ebff6fff6eaa82d87103/conda_smithy/configure_feedstock.py#L1827-L1828 username_0: Right, it is more complicated than this due to how the code works. We actually need the exact name of the github repo, which may never match the cloned dir even after adding "-feedstock". We also cannot compute it from the meta.yaml because the top-level package name can change over time. We'll have to extract the right name from the CI service live, which should be possible and always work. username_0: We can use the following.
- **circle**: CIRCLE_PROJECT_REPONAME
- **travis**: TRAVIS_REPO_SLUG
I still need to look up the vars for drone and azure. Status: Issue closed
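A minimal sketch of the approach outlined in the last comment (illustrative only: this helper is hypothetical, not the actual conda-smithy code, and it only covers the two CI services named above):

```python
import os

def feedstock_name_from_ci(fallback):
    """Prefer the repository name reported by the CI service over the
    name of the local clone directory."""
    slug = os.environ.get("TRAVIS_REPO_SLUG")  # e.g. "conda-forge/lalframe-feedstock"
    if slug:
        return slug.split("/", 1)[1]
    reponame = os.environ.get("CIRCLE_PROJECT_REPONAME")  # e.g. "lalframe-feedstock"
    if reponame:
        return reponame
    return fallback  # e.g. the clone-dir heuristic used previously
```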
umbraco/Umbraco-CMS
654997870
Title: Boolean property editor default value not set when content is created using ContentService Question: username_0: ## Umbraco version I am seeing this issue on Umbraco version: 8.6.3 ### Bug summary When creating a node using the ContentService the default value of the boolean property editor isn't applied. ### Specifics When content is created in the back end the values for the boolean property editor are empty. Content created in the front end sees no value on the model and applies the default based on the config but content created in the back end never sees the angular controller. Since the angular controller is setting the value of the property editor when the content is loaded it appears as if the default value is set to True (the checkbox is checked and there is no green + icon indicating a saved draft) but until you save and publish from the content tree the value is still empty and rendered as false in views. I created [a forum post](https://our.umbraco.com/forum/using-umbraco-and-getting-started/102866-default-property-editor-value-not-published-when-creating-content-using-content-service) when I first came across it with images of what I was experiencing but after looking into it a bit more I think it's a bug. This seems related to [#6252](https://github.com/umbraco/Umbraco-CMS/issues/6252) but occuring when content is created in the back end not when a property editor is added to existing content. ### Steps to reproduce * Create a document type with an Umbraco.TrueFalse property editor and set the default value to true. * Create a new IContent node of your document type using the Create method of the ContentService. * SaveAndPublish created node ### Expected result I expected the default values to be set, if not possible I'd expect the content in the back office to reflect the value that's rendered to views. ### Actual result No values were set but the created node in the content tree appears to set up correctly. Answers: username_1: Yeah so this is how it works - setting "default value" to "true" only tells the AngularJs when the page loads: if this checkbox has no value (null), then set it to true in the UI. Indeed, this default value is not wired up in the `ContentService` which could be considered a bug. However, this setting was really not implemented in the way we would like it to be implemented (and we would like to have a better way to be able to set "default" or "initial" values for all property editors where it makes sense (https://github.com/umbraco/Umbraco-CMS/issues/7859). So for now: you are responsible for setting the default to `true` yourself if you create items using the `ContentService` and in the future we hopefully have a robust system for doing this automatically. Status: Issue closed
APIOps/portal
111543600
Title: Describe APIOps concept Question: username_0: As a developer of APIs I want to see profound description of APIOps iterative Design First principles and practices. Status: Issue closed Answers: username_0: This issue was moved to APIOps/apiops-meetup#25 username_0: As a developer of APIs I want to see profound description of APIOps iterative Design First principles and practices. Status: Issue closed username_0: This issue was moved to APIOps/apiops-meetup-website#34
ArctosDB/arctos
550332654
Title: [CONTACT] Question: username_0: Answers: username_0: I am trying to upload images using the Batch Tools > Upload Images route. I was able to upload a zipped file with ~50 images. Received confirmation via e-mail of successful upload and was able to download the csv with 'remote paths' to images and previews (example remote path from the csv: https://web.corral.tacc.utexas.edu/arctos-s3/andres_lopez/2020-01-14/2954.JPG). I then put together a csv to link the images to their source records via the Batch Tools > Bulkload Media Metadata. I loaded that csv with no issues but all the uploaded records now report invalid status for the remote paths. I did not modify the paths from the csv download... What am I doing wrong? Attached is the csv file as uploaded...
[media_bulk_capelin_1.txt](https://github.com/ArctosDB/arctos/files/4066432/media_bulk_capelin_1.txt) username_1: Please test a manual upload - go to http://arctos.database.museum/MediaSearch.cfm Click Attach/Upload Media then drag any image to the green box and, 1) attach a screenshot of the result 2) if it claims to have worked, test the links username_0: I get an error. See screenshot of the result....
<img width="1064" alt="Image_upload_error" src="https://user-images.githubusercontent.com/9117656/72472271-ba61a080-3790-11ea-8370-ba0e3e67de5d.png"> username_1: Please try the single-image upload again and let me know what happens.
mchorse/metamorph
259471585
Title: Suggestion For blaze morph Question: username_0: Hi could you maby make it so that you can turn off blaza particals whene you are morph into one becaus they block your field of view a bit much and that not nice whene building. Answers: username_1: I'll see what I will able to do. Although, I don't have really much time for modding, at the moment. username_2: Hi, username_0, i suggest you one option: Put particles to Minimal. Trust me, those smoke particles won't block your view anymore.
easylist/easylist
603150781
Title: live video stream not working when ABP enabled Question: username_0: <!-- Note: If you're a website owner that has been specifically targeted, fix the site before reporting. Remove revolving ad servers, popup ads, adblock countering etc. Only then will this request be reviewed. --> <!-- Any additions, changes or removals is at the Authors discretion. You're free to counterargue (to a certain point) if you disagree with the decision. To avoid being banned, don't constantly re-open or create new (related) issue reports. --> <!-- Just include the website URL in the Title line of this issue report --> ### List the website(s) you're having issues: https://www.zeebiz.com/live-tv <!-- URL(s) for issue on a specific site are **mandatory** --> <!-- To prevent tracking, wrap the website URL in a Code tag please. **mandatory** --> ### What happens? The video stream is not working when ABP is enabled. ### List Subscriptions you're using: EasyList IndianLIst ### Your settings <!-- Just to ensure there is no issues or conflicts with other webbrowser extensions. Disable Noscript, Ghostery, Disconnect, HTTPS Everywhere, Privacy Badger before reporting (and re-test with them disabled). Just ensure you're running just one Adblock extension only --> - OS/version: - Browser/version: - Adblock Extension/version: ### Other details: <!-- If you suspect certain filters (this helps spending time to debug it manually). If you have a screen shot of the issue or advert, this will help to highlight it. --> Answers: username_1: https://github.com/easylist/easylist/commit/6049d21366e6d4d268969829e0fa30496deecfac Status: Issue closed
aurelia/store
813534960
Title: All user defined PLATFORM.performance marks and measures are cleared after store.dispatch Question: username_0: **I'm submitting a bug report**
* **Library Version:** 1.6.0
**Please tell us about your environment:**
* **Operating System:** Windows 10
* **Node Version:** 10.15.3
* **NPM Version:** 6.4.1
* **JSPM OR Webpack AND Version** JSPM 0.16.55
* **Browser:** Chrome 88
* **Language:** TypeScript
**Current behavior:** `PLATFORM.performance` app/user defined marks and measures are cleared after requesting `store.dispatch`.
**Expected/desired behavior:**
* **What is the expected behavior?** `PLATFORM.performance` app/user defined marks and measures should not be cleared after requesting `store.dispatch`.
* **What is the motivation / use case for changing the behavior?** We use `PLATFORM.performance` to measure application/API performance. When an application is using aurelia-store, its dispatch method may clear all marks/measures and reporting is broken. Looking at the aurelia-store code, it looks like it clears all marks/measures regardless of key owner, while it should clear only the aurelia-store-owned marks/measures. Answers: username_1: Ouch, thanks for catching that. Would you provide a PR to fix that? username_0: Let me see what I can do. username_0: Around next week I'll be able to contribute. username_0: Hey, I got an error pushing the fix branch: `ERROR: Permission to aurelia/store.git denied to username_0.` Please guide me through to get this done. username_1: OK ouch. Yeah, just fork the repo and create the PR from there onto this upstream username_0: PR is created https://github.com/aurelia/store/pull/112 Can't link to the issue. Unit tests have passed, but some reporting has failed, please take a look. Status: Issue closed username_0: What is the process of publishing a fixed version to NPM? When could I expect it? username_1: I've just pinged @EisenbergEffect for this as he takes care of the v1 releases, whereas v2 already happens through a larger crowd. username_1: @username_0 1.7.0 fresh out of the bakery. Thanks again for your contribution
ffuf/ffuf
778193115
Title: I want to bypass 429 Too Many Requests Question: username_0: Could you add an option to sleep for a specific time after a specific number of requests? Answers: username_1: The `-rate` flag? Status: Issue closed username_2: Yeah, there are multiple ways of doing it. `-rate` as mentioned by @username_1 above, as well as `-p`, which basically pauses the execution of a thread for the set time after a response.
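For reference, both suggestions map onto documented ffuf flags, for example:

```
# cap the request rate at ~50 requests per second
ffuf -u https://target/FUZZ -w wordlist.txt -rate 50

# or pause each thread for a random 1-3 seconds between requests
ffuf -u https://target/FUZZ -w wordlist.txt -p 1.0-3.0
```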
bcitdatacomm/game
293302490
Title: Implement danger zone Question: username_0: Create and shrink the danger zone. See the Shrink Danger Zone Pseudocode: https://docs.google.com/document/d/1hOUWyDaES2KyRVVhkWPAtfL-4nF97YO2QJ41akT9i6U/edit?usp=sharing Answers: username_1: @username_0 is it confirmed that we will close the zone at a constant rate to force the game to end by a certain point? Can't find this anywhere. username_0: @username_1 It was never discussed in detail. The code I wrote does a constant decrease to simulate creeping in on the players, but it could be modified however we want. It's something everyone needs to agree on. Status: Issue closed
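A rough sketch of the constant-rate shrink described above (names and values are illustrative, not taken from the linked pseudocode doc):

```python
SHRINK_PER_TICK = 0.5  # world units removed from the zone radius each tick
MIN_RADIUS = 0.0

def shrink_danger_zone(radius):
    """Constant-rate shrink: the zone hits zero at a fixed, predictable
    time, which forces the game to end by a certain point."""
    return max(MIN_RADIUS, radius - SHRINK_PER_TICK)
```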
zmkfirmware/zmk
925215175
Title: Add explicit registry to our docker image references Question: username_0: Podman (used by default on Fedora) is an alternative to the local docker/docker-daemon that, out of the box, includes multiple configured registries that are searched. We should move to explicit registries in our docker image selectors, e.g. `docker.io/zmkfirmware/zmk-build-arm`, in order to make sure that users of Podman don't need to select them manually when doing things like running the VSCode docker plugin w/ podman.
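For illustration (the `stable` tag is an assumption, not taken from the issue), the difference looks like this:

```
# unqualified: podman has to guess, or prompt for, the registry
podman pull zmkfirmware/zmk-build-arm:stable

# fully qualified: unambiguous under both docker and podman
podman pull docker.io/zmkfirmware/zmk-build-arm:stable
```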
bazelbuild/bazel
274469699
Title: repository_rule local=True not executed every time on mac Question: username_0: ### Description of the problem / feature request / question:
repository_rule with local=True is only executed on changes, not every time. I use repository_ctx.execute() to validate locally cached artefacts. This should be done every time bazel starts to build.
### Environment info
* Operating System: macOS 10.12.6
* Bazel version (output of `bazel info release`): 0.7.0-homebrew Answers: username_1: This is working as intended: https://docs.bazel.build/versions/master/skylark/repository_rules.html#when-is-the-implementation-function-executed. It would be *very* costly to rerun repository rules on every build. If you share more details on what you are trying to do, we can try to suggest a solution. Status: Issue closed username_2: Found this issue while looking for solutions. Came up with a solution where I create a timestamp file on every bazel execution, but I think bazel should have a flag like unmanaged_resource=True which works in conjunction with local to indicate the rule has a non-hermetic dependency which needs to be evaluated on every execution. https://stackoverflow.com/questions/61955644/is-there-a-way-to-execute-a-repository-rule-with-local-true-on-every-bazel-invoc/61965711#61965711
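A rough sketch of the timestamp-file workaround from the linked Stack Overflow answer (the file name and wrapper are illustrative assumptions): a wrapper script refreshes a stamp file before delegating to bazel, and the repository rule reads that file (e.g. via `repository_ctx.read` on a label pointing at it), which registers a dependency so the changed contents invalidate the rule on every invocation.

```sh
#!/bin/sh
# invoke this wrapper instead of plain `bazel`
date +%s > .bazel_invocation_stamp
exec bazel "$@"
```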
apache/couchdb
730116615
Title: Pods are not getting into ready state with Error in process <0.275.0> on node Question: username_0: ## Description
I installed a couchdb instance on my Kubernetes cluster and used this platform to back up the volumes and the workload manifests as well. Now when I try to restore the database using those volumes and manifests, the pods are not getting into ready state, and this is the error that I can see in the pod logs:
```
[error] 2020-10-27T05:03:22.610384Z couchdb@my-release-couchdb-0.my-release-couchdb.justcouch-restored.svc.cluster.local emulator -------- Error in process <0.275.0> on node 'couchdb@my-release-couchdb-0.my-release-couchdb.justcouch-restored.svc.cluster.local' with exit value: {{rexi_DOWN,{'couchdb@my-release-couchdb-1.my-release-couchdb.justcouch-ns.svc.cluster.local',noconnect}},[{mem3_rpc,rexi_call,3,[{file,"src/mem3_rpc.erl"},{line,394}]},{mem3_seeds,'-start_replication/1-fun-0-',1,[{file,"src/mem3_seeds.erl"},{line,99}]}]}
```
```
[notice] 2020-10-27T05:03:42.400712Z couchdb@my-release-couchdb-0.my-release-couchdb.justcouch-restored.svc.cluster.local <0.529.0> fb5c28d307 10.244.0.133:5984 10.244.0.242 undefined GET /_up 404 ok 0
[info] 2020-10-27T05:03:42.416955Z couchdb@my-release-couchdb-0.my-release-couchdb.justcouch-restored.svc.cluster.local <0.32.0> -------- SIGTERM received - shutting down
[error] 2020-10-27T05:03:42.421902Z couchdb@my-release-couchdb-0.my-release-couchdb.justcouch-restored.svc.cluster.local <0.486.0> -------- gen_server <0.486.0> terminated with reason: killed
last msg: {'EXIT',<0.373.0>,killed}
state: {state,#Ref<0.61916457.2739535873.205975>,couch_replicator_doc_processor,nil,<<"_replicator">>,#Ref<0.61916457.2739404801.205976>,nil,[],true}
extra: []
```
I was not able to figure out what exactly the reason could be.
## Steps to Reproduce
## Expected Behaviour
I should get all the couchdb pods up and running.
## Your Environment
This is the helm release that we installed:
```
my-release justcouch-ns 1 2020-10-26 19:10:40.322422397 +0530 IST deployed couchdb-3.3.4 3.1.0
```
## Additional Context
```
» kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.11", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-08-13T15:11:47Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
```
Status: Issue closed
StrixIT/StoryScript
477436013
Title: Additional concepts to add to the tutorial Question: username_0:
- Customizing UI by replacing or moving blocks by copying components
- Using attribute numbers in character creation
- Using the inactive flag
Answers: username_0: Customizing UI by replacing or moving blocks by copying components is added username_0: Add using snippets, using example source code and publishing your own source code for others to see. Status: Issue closed username_0: Now part of https://github.com/StrixIT/StoryScript/issues/113
Fahey-McLay/xalt
197136361
Title: XALT: Error in python setup Question: username_0: Hi, compiling Python/2.7.12 while xalt is loaded:
```
XXX=/users/jenscscs/easybuild/daint/haswell/software
cc -L/opt/cray/pe/libsci/16.11.1/GNU/5.1/x86_64/lib \
-L$XXX/bzip2/1.0.6-CrayGNU-2016.11/lib \
-L$XXX/zlib/1.2.8-CrayGNU-2016.11/lib \
-L$XXX/libreadline/6.3-CrayGNU-2016.11/lib \
-L$XXX/ncurses/6.0-CrayGNU-2016.11/lib \
-L$XXX/freetype/2.6.3-CrayGNU-2016.11/lib \
-L$XXX/libpng/1.6.23-CrayGNU-2016.11/lib \
-L$XXX/SQLite/3.9.2-CrayGNU-2016.11/lib \
-L$XXX/Tk/8.6.4-CrayGNU-2016.11/lib \
-L$XXX/GMP/6.1.1-CrayGNU-2016.11/lib \
-Wl,--rpath=$XXX/Python/2.7.12-CrayGNU-2016.11/lib \
-Xlinker -export-dynamic -o python \
Modules/python.o \
-L. -lpython2.7 -ldl \
-lutil $XXX/libreadline/6.3-CrayGNU-2016.11/lib/libreadline.a \
$XXX/ncurses/6.0-CrayGNU-2016.11/lib/libncurses.a \
-lm -dynamic
```
fails with:
```
/usr/bin/ld: no input files
XALT: Error in users' python setup. Please report this error!
/usr/bin/ld: no input files
collect2: error: ld returned 1 exit status
```
```
piccinal@daint103:/dev/shm/piccinal/python.eff$ which ld
/apps/daint/UES/xalt/0.7.6/bin/ld
```
`module unload xalt` fixes the issue. Any hint on this issue? Thanks, jg. Status: Issue closed
umn-asr/peoplesoft_course_class_data
925174799
Title: Change default branch Question: username_0: - [ ] create `main` branch - [ ] set as default - [ ] search project for mentions of `master` and update to `main` - [ ] delete `master` Answers: username_0: Didn't find any mentions of `master` outside of `bundle`. username_0: Complete. Status: Issue closed
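For reference, the checklist roughly corresponds to these commands (the `gh repo edit` step assumes a GitHub CLI version that supports `--default-branch`):

```sh
git checkout master
git checkout -b main                  # create `main` from `master`
git push -u origin main               # publish it
gh repo edit --default-branch main    # or flip it in the repo settings UI
git grep -n master                    # find remaining mentions to update
git push origin --delete master
git branch -d master
```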
Mimetis/Dotmim.Sync
604084072
Title: Trigger error creation Question: username_0: Hi. I am testing the framework with a SQL Server data model that needs to be synchronized through the trigger model. I am very happy with the results so far. However, I am facing a little problem that seems to be related to the fact that the model has some tables with an identical name, a different schema and the same primary key. When I look at the PrimaryKeys properties, the info seems to be duplicated. When I reinitialized the agent I recorded these errors. Any guess where to look specifically?
![Exception](https://user-images.githubusercontent.com/64089571/79883385-156aeb00-83b1-11ea-9550-0f605d824ee2.png)
![pk duplicated](https://user-images.githubusercontent.com/64089571/79883387-16038180-83b1-11ea-8939-f78dc48b3cfd.png)
Answers: username_0: Thanks! I attached the script (.tsql) for creating the database and the c# used for sync.
[Demo.zip](https://github.com/username_1/Dotmim.Sync/files/4511742/Demo.zip) username_1: Ok, got it. Seems I forgot to add the schema parameter when getting primary keys... It's fixed in the last commit. Here is the code I used, and it's working fine. It's a little bit different from your code, for testing purposes, but you will see it's almost the same as yours:
``` cs
var clientProvider = new SqlSyncProvider(DbHelper.GetDatabaseConnectionString(clientDbName));
var serverProvider = new SqlSyncProvider(DbHelper.GetDatabaseConnectionString("Demo"));

// Tables involved in the sync process:
var tables = new string[] { "dbo.Tipo_Zona", "dbo.Institucion_SIRHP", "pag.Institucion_Sirhp" };

var syncSetup = new SyncSetup(tables);
var syncOptions = new SyncOptions();

syncOptions.ScopeInfoTableName = "sscopeInfo";
syncSetup.TrackingTablesPrefix = "s";
syncSetup.StoredProceduresPrefix = "s";
syncSetup.TriggersPrefix = "s";

syncSetup.Tables["dbo.Tipo_Zona"].SyncDirection = SyncDirection.DownloadOnly;
syncSetup.Tables["dbo.Institucion_SIRHP"].SyncDirection = SyncDirection.DownloadOnly;

var agent = new SyncAgent(clientProvider, serverProvider, syncOptions, syncSetup);

do
{
    Console.Clear();
    Console.WriteLine("Web sync start");
    try
    {
        var progress = new SynchronousProgress<ProgressArgs>(pa => Console.WriteLine($"{pa.Context.SessionId} - {pa.Context.SyncStage}\t {pa.Message}"));

        var s = await agent.SynchronizeAsync(progress);
        Console.WriteLine(s);
    }
    catch (SyncException e)
    {
        Console.WriteLine(e.Message);
    }
    catch (Exception e)
    {
        Console.WriteLine("UNKNOWN EXCEPTION : " + e.Message);
    }

    Console.WriteLine("Sync Ended. Press a key to start again, or Escape to end");
} while (Console.ReadKey().Key != ConsoleKey.Escape);
```
You can test with the source code from master, otherwise wait for the next version `v0.5.3`, which I will publish later today, hopefully. username_1: Can you try with the latest version `v0.5.3`? It's now available on **nuget**: https://www.nuget.org/packages?q=dotmim.sync Status: Issue closed username_0: Hi! The new version worked perfectly. If I face any other situation I'll let you know. Great job, thanks!
anchore/anchore-cli
530026823
Title: Anchore CLI stuck at "not_analyzing" and other things Question: username_0: When I try to `anchore-cli image add ...` it gives me a failure of `Error: failed post url=http://engine-catalog:8228/v1/images HTTP Code: 500 Detail: {'error_codes': []}` I then do a `docker-compose ps` and I see `aevolume_engine-catalog_1 /docker-entrypoint.sh anch ... Up (unhealthy)` I try to fix the above with a `docker-compose up -d` but it just says everything is up to date. So I have to restart my computer, then run `docker-compose up -d` again, and it starts everything up. I then run the `anchore-cli image add ...` again, but it gets stuck on `Status: not_analyzed Waiting 5.0 seconds for next retry.` It does this for about 10 minutes, and then it says `Error: Requested image not found in system` ... I'm then stuck back at square 1. Anyone know what is wrong here? I'm using `anchore-cli, version 0.4.1` Answers: username_1: Could you extract the logs from `engine-catalog` ? Do an `exec` in that container first: ``` $ docker exec -u root -ti aevolume_engine-catalog_1 bash ``` And then look into the log file: ``` $ less /var/log/anchore/anchore.log ``` Would be interesting to check what is going on in there if anything. Status: Issue closed
cli-table/cli-table3
345659849
Title: Create @types/cli-table3 Question: username_0: https://www.npmjs.com/package/@types/cli-table2 https://www.npmjs.com/package/@types/cli-table3 (404) I don't have any experience creating @types packages, so unfortunately I don't have further pointers. Status: Issue closed Answers: username_1: We already have typings published for this package username_0: Ah, didn't see that, awesome!
UMM-CSci-3601/3601-iteration-template
832356983
Title: Migrate from JCenter to Maven Central Question: username_0: The JCenter repository we use in our `build.gradle` is [shutting down](https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/). We should move to using Maven Central:
```gradle
repositories {
  mavenCentral()
}
```
Status: Issue closed
opensourcewebsite-org/opensourcewebsite-org
644338233
Title: Telegram bot for groups. Greeting. Question: username_0: Add a new "Greeting" function, as the second button in the list of functions after Language.
Add an on/off button to the function. Off by default.
Add a "Message" button to the function. On click, require input (multi-line text input, like a textarea, should be possible).
Add the ability to clear the message. If the message is empty and the function is enabled, the function is considered disabled for the bot and it does not send a message to the group. If a message is set, show its text on the function's view screen.
Add an option: the deletion time for the greeting message in minutes, 2 minutes by default. Less than 1 minute cannot be specified. More than 24*60 cannot be specified. The bot ignores all invalid user inputs and simply deletes the user's input.
Add the ability to add custom buttons with links under the greeting. Status: Issue closed Answers: username_0: MILESTONE 1 (DONE):
Add a new "Greeting" function, as the second button in the list of functions after Language.
Add an on/off button to the function. Off by default.
Add a "Message" button to the function. On click, require input (multi-line text input, like a textarea, should be possible).
If a message is set, show its text on the function's view screen.
MILESTONE 2 (DONE):
Add the ability to clear the message in its settings.
Enable the use of markdown in the message text.
----
MILESTONE 3:
Add the ability to add custom buttons with links under the greeting.
----
Analogues: https://t.me/hellouserbot
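A small sketch of the deletion-time validation described above (the real bot lives in this PHP project; this Python helper is purely illustrative and its names are hypothetical):

```python
DEFAULT_DELETE_MINUTES = 2
MIN_MINUTES = 1
MAX_MINUTES = 24 * 60  # one day

def parse_delete_minutes(user_input):
    """Returns a valid value in [1, 1440], or None for invalid input,
    in which case the bot just deletes the user's message."""
    try:
        minutes = int(user_input)
    except (TypeError, ValueError):
        return None
    return minutes if MIN_MINUTES <= minutes <= MAX_MINUTES else None
```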
jordanbray/chess
1089490406
Title: Display Game as PGN Question: username_0: Hello, To ease the task of analysing a game (after it is played using this crate), displaying a Game using PGN could be interesting, so that it can be plugged into the Lichess analysis board for example (https://lichess.org/analysis). I have started working on this but I am a very new Rust dev so mistakes are bound to be made. Would this project still be interested in a PR for this? Since the Game struct is already not as efficient as Board and not recommended for chess engines, I went for something very simple, but not so efficient: implementing Display for the Game struct, iterating over all the actions, ignoring every MakeMove except ChessMove, applying the move in a Board from the starting position (so that I can easily get the Piece that is in each Square), and with this creating the full PGN string. However, this does not work if a game is not started from the standard initial position. I don't quite know how to fix this, maybe saving the Board from the new_from_board method as a global variable in Game? This way my Board can be initialized from that position. If another approach should be preferred, I would be interested in details and I'll try to do it! Thanks a lot, username_0
cse-sim/cse
1185084019
Title: Improvements to automated documentation Question: username_0:
- Instead of parsing comments from the *.DEF files, add actual fields (e.g., for descriptions, units, etc.) that are processed by RCDEF and output through the CSE command line (similar to `CSE -p`).
- Create tables for choice fields from CNDTYPES.DEF
DefinitelyTyped/DefinitelyTyped
413586416
Title: This should be <SegmentedControlProps> not <SearchFieldProps> Question: username_0: https://github.com/DefinitelyTyped/DefinitelyTyped/blob/0d4dfe2e4642dccb8d078342f55bf125477f0b57/types/gestalt/index.d.ts#L941 Answers: username_1: Looks like this got fixed. https://github.com/DefinitelyTyped/DefinitelyTyped/blob/6e926863e4f727ae44332a7dcc11a45ea47f680c/types/gestalt/index.d.ts#L1394 Status: Issue closed username_2: Hi thread, we're moving DefinitelyTyped to use [GitHub Discussions](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/53377) for conversations the `@types` modules in DefinitelyTyped. To help with the transition, we're closing all issues which haven't had activity in the last 6 months, which includes this issue. If you think closing this issue is a mistake, please pop into the [TypeScript Community Discord](https://discord.gg/typescript) and mention the issue in the `definitely-typed` channel.
razorpay/razorpay-magento
423583981
Title: How to add redirection_url Question: username_0: I can create a short_url using invoice creation in Razorpay from Java code. After payment success, how can I redirect to my site? How can I add a redirection_url? Any solution for this? Status: Issue closed Answers: username_1: The short URL is generally not used for redirection, but exists for sharing with the customer. You can do so yourself, or you can use the [sms_notify/email_notify attributes](https://razorpay.com/docs/invoices/api/#create-an-invoice) while creating the invoice, in which case Razorpay will send the notification. If you find yourself needing to accept payment from the customer directly, without any notification, you can pass invoice_id as one of the [parameters to checkout.js](https://razorpay.com/docs/payment-gateway/integrations-guide/checkout/standard/#checkout-form).
Manav-Ram19/wi21-cse110-lab3
792405672
Title: Add Box Model, Texts, Fonts Question: username_0: # Sub Tasks:
1. Box Model
2. Texts
3. Fonts
## Sub Task Descriptions:
Box Model
1. Margins
- Long (margin-top, margin-bottom, margin-left, margin-right)
- Shorthand (margin: top right bottom left)
- auto
2. Padding
- Long (padding-top, padding-bottom, padding-left, padding-right)
- Shorthand (padding: top right bottom left)
3. Height / Width
- Set the height and width for an element
Texts
1. color
2. text-decoration
3. text-align
Fonts
1. Include and use a 3rd party font (https://fonts.google.com/). You can load the font in either your HTML or your CSS
Status: Issue closed
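A compact sketch touching each required property (the class name and values are illustrative, not prescribed by the lab):

```css
.card {
  margin: 10px auto 20px;      /* shorthand: top, left/right (auto centers), bottom */
  padding: 8px 12px 8px 12px;  /* shorthand: top right bottom left */
  width: 300px;
  height: 150px;
  color: #333;
  text-decoration: underline;
  text-align: center;
  font-family: 'Roboto', sans-serif;  /* 3rd-party font loaded from Google Fonts */
}
```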
epiforecasts/forecast.vocs
1016040712
Title: cmdstanr install failing Question: username_0: `cmdstanr` is failing to install using the default install instructions. This may be a temporary issue as they roll out a new version, so just watch for now.
```r
install.packages("cmdstanr", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))
Installing package into ‘/workspaces/evaluate-delta-for-forecasting/renv/library/R-4.1/x86_64-pc-linux-gnu’
(as ‘lib’ is unspecified)
Warning message:
package ‘cmdstanr’ is not available for this version of R

A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
```
omnetpp/omnetpp
1092925069
Title: Hello sir, Question: username_0: Status: Issue closed
ralovich/quickroute-linux
199266593
Title: Ubuntu package lacks dependencies on a variety of mono files Question: username_0: amongst others, but maybe not only, mono-csharp-shell Answers: username_1: Please answer the following questions to give someone else a chance to reproduce your problem:
* In your report please include:
  * The log of the program in question.
  * What steps will reproduce the problem?
  * What is the expected output? What do you see instead?
  * What version of Quickroute are you using? On what operating system? Is it pre-built or from sources?
Did you take a look at https://github.com/username_1/quickroute-linux/issues/1 already? Does that solve your problem?
Atvaark/ddda-save-editor
196671514
Title: Interested but it doesn't seem to work Question: username_0: I clicked the link and uploaded my save, but then nothing happens; it asks for my Steam number and that's it... A shame, I would have liked an editor for this... Answers: username_1: Hey, sorry to disappoint you. I abandoned this project because my employer ended up not using this toolchain. The only feature I wanted to add was changing the SteamID linked to the savegame so that you could share them. Would you mind uploading your savegame so I could see if the format was changed? username_0: hey, ah well, back to the text editor it is then XD Sure though, how do I do that? <NAME> <EMAIL> username_1: Could you pack your DDDA.sav as a ZIP archive and attach it to your post here? Or upload it to mediafire.com and send me the link. It doesn't contain anything personal besides your SteamID, which is public to all people that can view your Steam profile. username_0: Sure, here you go :) <NAME> <EMAIL> username_2: This github needs to be closed then, or updated, for those of us looking for a working save editor. Thanks. username_1: I don't think you understand how this open source stuff works. You aren't in the position to demand anyone to do anything, be it removing a project or making changes to it. If a project doesn't do something you want, **it's up to you to do it yourself**. You might as well pay a professional to do it for you if you actually want it done and you don't have the skills yourself. username_3: I mean, it would be a lot nicer if you had just indicated on the readme that it isn't working as intended, right? So people wouldn't waste their time testing it. Thanks anyway.