Columns: repo_name (string, lengths 4–136), issue_id (string, lengths 5–10), text (string, lengths 37–4.84M)
Jintin/andle
116567188
Title: Always returns "set sdk path first"
Question: username_0: `andle setsdk -p /Users/username_0/Library/Android/sdk` returns "set sdk path first"
Answers: username_1: What is the message returned by the setsdk command?
username_1: If it worked correctly, it'll show 'setsdk:/Users/username_0/Library/Android/sdk' and you'll find a file named '.andle' in your home dir with content '/Users/username_0/Library/Android/sdk'
username_1: Thank you for your feedback, I'll find out what is going on ASAP.
username_1: As a workaround, you can add a file '.andle' in your home dir with content '/Users/username_0/Library/Android/sdk' first. Thank you for your feedback
username_0: Sorry, it didn't work
username_0:
```
username_0$ pwd
/
username_0$ cat .andle
/Users/username_0/Library/Android/sdk
```
Status: Issue closed
username_0: `andle setsdk -p /Users/username_0/Library/Android/sdk` `andle setsdk --help` `andle --version` all return "set sdk path first"
username_1: Home dir, not root dir.
username_0: Yes, pardon me. The workaround worked.
username_1: You're welcome. Bug fixed in v1.4.2. Please run the command below later: `sudo pip install andle --upgrade`
Status: Issue closed
username_2: I had the same issue with a clean install as of today. Every command I tried reported "set sdk path first". I created the .andle file in my home directory manually with the correct SDK path and it worked.
username_1: Ok, I'll check again.
username_1: `andle setsdk -p /Users/username_0/Library/Android/sdk` `andle setsdk --help` `andle --version` all return "set sdk path first"
username_1: It seems the 1.4.2 release was not complete, so your download is still 1.4.1. Please update or reinstall. Thank you.
Status: Issue closed
username_2: Can confirm, after updating it works as expected. Thanks.
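The workaround discussed in this thread boils down to a couple of shell commands. A minimal sketch (the SDK path is the one quoted in the thread; substitute your own, and note the file goes in the home directory, not the filesystem root):

```shell
# Workaround sketch: write the Android SDK path into a '.andle' marker file
# in the *home* directory, as described above.
SDK_PATH="/Users/username_0/Library/Android/sdk"  # path from the thread
printf '%s' "$SDK_PATH" > "$HOME/.andle"

# Verify the marker file contains the SDK path.
cat "$HOME/.andle"
```

Once the fixed release is installed (`sudo pip install andle --upgrade`), the manual file should no longer be necessary.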
VSH2-Devs/Vs-Saxton-Hale-2
204820559
Title: weapon changes
Question: username_0: A to-do list with several balance changes and bug fixes I've compiled over the past few weeks.
TODO:
- [ ] Write out TODO list (mostly just lots of minor stuff, also an experimental Eureka Effect change)
Answers: username_1: adding this link here as a backdrop for more weapon changes to make: https://www.skial.com/threads/vsh-ff2-weapon-balancing.82287/
username_1: think of better changes for these weapons:
- [ ] Razorback
- [ ] Darwin's Danger Shield
- [ ] Fists of Steel
- [ ] Homewrecker
rrrene/credo
241959585
Title: `--only` and `--ignore` not working as expected
Question: username_0:
### Environment
* Credo version (`mix credo -v`): 0.8.1 and 0.8.2
* Erlang/Elixir version (`elixir -v`): Elixir 1.4.2
* Operating system: macOS 10.12.5
### What were you trying to do?
I was trying to run only selected checks to reduce credo's runtime, using the `--only` and `--ignore` options. For example, running `mix credo` on [phoenixframework](https://github.com/phoenixframework/phoenix) took ~12 seconds. Running `mix credo --only consistency` reports `Analysis took 0.6 seconds (0.1s to load, 0.5s running checks)` but the overall runtime of the command is still ~12 seconds.
### Expected outcome
The expected behaviour (cf. https://github.com/username_2/credo/issues/84#issuecomment-221386058) would be that the runtime of the `mix credo` command is reduced to something close to 1 second for the example above.
### Actual outcome
Running `mix credo --only consistency` reports `Analysis took 0.6 seconds (0.1s to load, 0.5s running checks)` but the overall runtime of the command is ~12 seconds. The reporting of the line `Analysis took 0.6 seconds (0.1s to load, 0.5s running checks)` was fast (~1 second), but the command keeps running for ~11 more seconds until it reports `845 mods/funs, found 3 consistency issues`.
Answers: username_1: As for the command still running after it reports its analysis time: running with ExProf shows that most time is spent in `'Elixir.Credo.Code.Scope':find_scope/4`. This function is used to print the "845 mods/funs" statistic. Could one solution be to add a `--stats`/`--no-stats` switch? For example, the patch pasted below (which still needs work) allows me to run `mix credo --only consistency` on phoenix in...
```bash
$ time mix credo --only consistency --no-stats
Checking 72 source files (this might take a while) ...
[snip]
Analysis took 1.5 seconds (0.1s to load, 1.3s running checks)
found 3 consistency issues.
mix credo --only consistency --no-stats  3.76s user 0.30s system 52% cpu 7.805 total
```
and `mix credo --only consistency --stats` in...
```bash
$ time mix credo --only consistency --stats
Checking 72 source files (this might take a while) ...
[snip]
Analysis took 1.5 seconds (0.1s to load, 1.4s running checks)
845 mods/funs, found 3 consistency issues.
mix credo --only consistency --stats  15.74s user 0.29s system 108% cpu 14.809 total
```
The patch:
```diff
diff --git a/lib/credo/cli/options.ex b/lib/credo/cli/options.ex
index c3b9521..126a753 100644
--- a/lib/credo/cli/options.ex
+++ b/lib/credo/cli/options.ex
@@ -21,6 +21,7 @@ defmodule Credo.CLI.Options do
       min_priority: :integer,
       only: :string,
       read_from_stdin: :boolean,
+      stats: :boolean,
       strict: :boolean,
       verbose: :boolean,
       version: :boolean
diff --git a/lib/credo/cli/output/summary.ex b/lib/credo/cli/output/summary.ex
index 83505d8..fb519c4 100644
--- a/lib/credo/cli/output/summary.ex
+++ b/lib/credo/cli/output/summary.ex
@@ -1,6 +1,7 @@
 defmodule Credo.CLI.Output.Summary do
   alias Credo.CLI.Filter
   alias Credo.CLI.Output
+  alias Credo.CLI.Options
   alias Credo.CLI.Output.UI
   alias Credo.Execution
   alias Credo.Check.CodeHelper
@@ -33,7 +34,7 @@ defmodule Credo.CLI.Output.Summary do
     UI.puts
     UI.puts [:faint, format_time_spent(time_load, time_run)]
-    UI.puts summary_parts(source_files, shown_issues)
+    UI.puts summary_parts(source_files, shown_issues, exec)
     # print_badge(source_files, issues)
     UI.puts
@@ -120,7 +121,7 @@
     |> Enum.count
[Truncated]
-  defp summary_parts(source_files, issues) do
+  defp summary_parts(source_files, issues, %Execution{cli_options: %Options{switches: switches}}) do
     parts =
       @category_wording
       |> Enum.flat_map(&summary_part(&1, issues))
@@ -137,9 +138,11 @@
       parts
     end
+    print_stats = Map.get switches, :stats, false
+
     [
       :green,
-      "#{scope_count(source_files)} mods/funs, ",
+      (if print_stats, do: 
"#{scope_count(source_files)} mods/funs, ", else: ""),
       :reset,
       "found ",
       parts,
```
Status: Issue closed
urbit/arvo
283923409
Title: Strange prompt error triggered while activating lines Question: username_0: ``` --------------| ;2.600 ? 2.600 0v4.rj3ii.n9qlp.oc9pt.2gc3p.j3p49 at ~2017.12.21..14.48.35..b757 ~taglux-nidsep to + ~binzod/urbit-meta https://www.youtube.com/watch?v=wlLYM2IbBXA --------------| ;2.700 ? 2.700 0v7.35che.8ob12.1bf00.5gotg.8sane.17077 at ~2017.12.21..15.32.38..ca7e ~tonweb-pilseb to :, + ~binzod/urbit-meta i think these are 2 bugs --------------| ;2.700 ? 2.700 0v7.35che.8ob12.1bf00.5gotg.8sane.17077 at ~2017.12.21..15.32.38..ca7e ~tonweb-pilseb to :, + ~binzod/urbit-meta i think these are 2 bugs ; ~marzod not responding still trying ~tonweb-pilseb:talk[+ ~binzod/urbit-meta] ``` As you can see the '+' character is inside brackets. It goes away only after activating some line: ``` ~tonweb-pilseb; test ~rivseb_hadref+^ :: test successful! --------------| ;2.600 ? 2.600 0v4.rj3ii.n9qlp.oc9pt.2gc3p.j3p49 at ~2017.12.21..14.48.35..b757 ~taglux-nidsep to + ~binzod/urbit-meta https://www.youtube.com/watch?v=wlLYM2IbBXA ~tonweb-pilseb:talk+ ``` Answers: username_0: What is even more strange, activating lines seems to randomly change the prompt, disregarding the glyph: ``` --------------| ;2.725 ? 2.725 0va2m2g.rrger.va5qu.c3ipa.7de0a at ~2017.12.21..15.43.02..ab03 ~palfun-foslup to + ~binzod/urbit-meta seems to get the increasingly smaller head of the path --------------| ;2.730 ? 2.730 0ve9uth.ulbfv.rfhr5.lssrm.m7ff8 at ~2017.12.21..15.48.54..603e ~tonweb-pilseb to :, + /urbit-meta ee ~tonweb-pilseb:talk[+ /urbit-meta] ;2.730 ``` username_0: Is it a feature? It seems to be taking the destination of sent messages... username_1: Activating basically puts you in reply-mode yes, though if + and /urbit-meta are the same channel they definitely shouldn't both be in your prompt. username_2: The behavior you're seeing is not incorrect. Activating a message sets your audience to match that message's. 
When it shows the audience in brackets, it means you're going to be sending messages to both your inbox and the shown target. This is done automatically when talk detects you're sending to a target you're not subscribed to, to make sure you see your message echoed locally. There is a bug where talk sometimes doesn't recognize that you've joined a channel properly; #513 should fix that. username_0: Okay, closing then. Status: Issue closed
sonata-project/SonataAdminBundle
204560382
Title: SonataAdminBundle::ajax_layout.html.twig is broken with Twig ^2.0 Question: username_0: ### Environment #### Sonata packages ``` sonata-project/admin-bundle 3.12.0 The missing Symfony Admin Generator sonata-project/block-bundle 3.3.0 Symfony SonataBlockBundle sonata-project/cache 1.0.7 Cache library sonata-project/core-bundle 3.2.0 Symfony SonataCoreBundle sonata-project/doctrine-orm-admin-bundle 3.1.3 Symfony Sonata / Integrate Doctrine ORM into the SonataAdminBundle sonata-project/exporter 1.7.0 Lightweight Exporter library ``` #### Symfony packages ``` symfony/monolog-bundle v3.0.3 Symfony MonologBundle symfony/phpunit-bridge v3.2.2 Symfony PHPUnit Bridge symfony/polyfill-apcu v1.3.0 Symfony polyfill backporting apcu_* functions to lower PHP versions symfony/polyfill-intl-icu v1.3.0 Symfony polyfill for intl's ICU-related data and classes symfony/polyfill-mbstring v1.3.0 Symfony polyfill for the Mbstring extension symfony/polyfill-php54 v1.3.0 Symfony polyfill backporting some PHP 5.4+ features to lower PHP versions symfony/polyfill-php56 v1.3.0 Symfony polyfill backporting some PHP 5.6+ features to lower PHP versions symfony/polyfill-php70 v1.3.0 Symfony polyfill backporting some PHP 7.0+ features to lower PHP versions symfony/polyfill-util v1.3.0 Symfony utilities for portability of PHP codes symfony/security-acl v3.0.0 Symfony Security Component - ACL (Access Control List) symfony/swiftmailer-bundle v2.4.2 Symfony SwiftmailerBundle symfony/symfony v3.2.2 The Symfony PHP framework ``` #### PHP version ``` PHP 7.1.1 (cli) (built: Jan 24 2017 18:33:47) ( NTS ) Copyright (c) 1997-2017 The PHP Group Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies with Zend OPcache v7.1.1, Copyright (c) 1999-2017, by Zend Technologies ``` ## Steps to reproduce 1. Create an Admin class with a `ModelType` 2. When clicking on the "add" button, nothing happens except an error popping in the Symfony Dev Profiler ## Expected results Working. 
## Actual results With Twig 1.31, it's working. With Twig 2.0 and 2.1 : ``` [1] Twig_Error_Runtime: Block "list_table" on template "SonataAdminBundle::ajax_layout.html.twig" does not exist. at n/a in /srv/app/vendor/sonata-project/admin-bundle/Resources/views/ajax_layout.html.twig line 13 at Twig_Template->displayBlock('list_table', array('action' => 'create', 'form' => object(FormView), 'object' => object(Quiz), 'admin' => object(QuizAdmin), 'base_template' => 'SonataAdminBundle::ajax_layout.html.twig', 'admin_pool' => object(Pool), 'wrap_fields_with_addons' => true, 'app' => object(AppVariable), 'sonata_block' => object(GlobalVariables), 'sonata_admin' => object(GlobalVariables)), array('content' => array(object(__TwigTemplate_aa18bf995e0942bddb5f4104e4cdf68779fd801a3f8af107d927823f1626d9b1), 'block_content'), 'preview' => array(object(__TwigTemplate_aa18bf995e0942bddb5f4104e4cdf68779fd801a3f8af107d927823f1626d9b1), 'block_preview'), 'form' => array(object(__TwigTemplate_d1f017271cc594985f4137c7b702ac890addcdb36f46b1ae8c2e8e8ddd82eb8f), 'block_form'), 'list' => array(object(__TwigTemplate_aa18bf995e0942bddb5f4104e4cdf68779fd801a3f8af107d927823f1626d9b1), 'block_list'), 'show' => array(object(__TwigTemplate_aa18bf995e0942bddb5f4104e4cdf68779fd801a3f8af107d927823f1626d9b1), 'block_show'), 'sonata_form_action_url' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_form_action_url'), 'sonata_form_attributes' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_form_attributes'), 'sonata_pre_fieldsets' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_pre_fieldsets'), 'sonata_tab_content' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_tab_content'), 'sonata_post_fieldsets' => 
array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_post_fieldsets'), 'formactions' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_formactions'), 'sonata_form_actions' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_sonata_form_actions'), 'parentForm' => array(object(__TwigTemplate_155526ba8b6c7aea03919f514cdddf3e756851f71c33bee2c1bc732a31ca9a3f), 'block_form'), 'title' => array(object(__TwigTemplate_d1f017271cc594985f4137c7b702ac890addcdb36f46b1ae8c2e8e8ddd82eb8f), 'block_title'), 'navbar_title' => array(object(__TwigTemplate_d1f017271cc594985f4137c7b702ac890addcdb36f46b1ae8c2e8e8ddd82eb8f), 'block_navbar_title'), 'actions' => array(object(__TwigTemplate_d1f017271cc594985f4137c7b702ac890addcdb36f46b1ae8c2e8e8ddd82eb8f), 'block_actions'), 'tab_menu' => array(object(__TwigTemplate_d1f017271cc594985f4137c7b702ac890addcdb36f46b1ae8c2e8e8ddd82eb8f), 'block_tab_menu')), true) [Truncated] in /srv/app/vendor/sonata-project/admin-bundle/Controller/CRUDController.php line 78 at Sonata\AdminBundle\Controller\CRUDController->render('SonataAdminBundle:CRUD:edit.html.twig', array('action' => 'create', 'form' => object(FormView), 'object' => object(Quiz), 'admin' => object(QuizAdmin), 'base_template' => 'SonataAdminBundle::ajax_layout.html.twig', 'admin_pool' => object(Pool)), null) in /srv/app/vendor/sonata-project/admin-bundle/Controller/CRUDController.php line 566 at Sonata\AdminBundle\Controller\CRUDController->createAction() in line at call_user_func_array(array(object(CRUDController), 'createAction'), array()) in /srv/app/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/HttpKernel.php line 153 at Symfony\Component\HttpKernel\HttpKernel->handleRaw(object(Request), 1) in /srv/app/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/HttpKernel.php line 68 at 
Symfony\Component\HttpKernel\HttpKernel->handle(object(Request), 1, true) in /srv/app/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php line 168 at Symfony\Component\HttpKernel\Kernel->handle(object(Request)) in /srv/app/web/app.php line 26 ``` Status: Issue closed
pjueon/JetsonGPIO
1130292755
Title: Compilation error when using add_event_detect()
Question: username_0: Hi, I'm trying to use this library on Jetson Xavier with JetPack 4.6 and ROS-Melodic. It works fine, but... When I add the function `add_event_detect()` to my code I always get compilation errors. Even with your example "button_interrupt.cpp". So I made a new small test code:
```
#include <iostream>
#include <string>
#include <JetsonGPIO.h>
#include <signal.h>

bool shutdown = false;

void signalHandler(int s)
{
    shutdown = true;
}

void callback_fn(const std::string& channel)
{
    std::cout << "Callback called from channel " << channel << std::endl;
}

int main()
{
    signal(SIGINT, signalHandler);

    GPIO::setmode(GPIO::BOARD);
    GPIO::setup(12, GPIO::IN);

    GPIO::add_event_detect(12, GPIO::RISING, callback_fn); // also tried GPIO::Edge::RISING

    while(!shutdown)
    {
        //
    }

    GPIO::remove_event_detect(12);
    GPIO::cleanup();

    return 0;
}
```
When I remove `add_event_detect(...)` I can compile and run the program without errors. Here is a screenshot of the compilation error message: ![compilation_error](https://user-images.githubusercontent.com/98911185/153438583-43503f6d-c9a4-4695-8242-10197673cd31.png) Does anyone know how to fix this? Greetings Marcel
Answers: username_1: From your error:
```
static_assert(... "Callback return type: void, argument type: int");
```
The callback argument type changed in a recent build (old version: int, latest version: const std::string&). I'm assuming that you installed an old version and are trying the new example code. Could you re-install the library and try it again? Status: Issue closed
jordonwow/omnibar
721887957
Title: Shadowland hiccups Question: username_0: `Message: Interface\AddOns\OmniBar\Options.lua:108: Usage: NonEmptySpell:ContinueOnLoad(callbackFunction) Time: Wed Oct 14 18:50:28 2020 Count: 1 Stack: Interface\AddOns\OmniBar\Options.lua:108: Usage: NonEmptySpell:ContinueOnLoad(callbackFunction) [string "=[C]"]: in function `error' [string "@Interface\FrameXML\ObjectAPI\Spell.lua"]:51: in function `ContinueOnSpellLoad' [string "@Interface\AddOns\OmniBar\Options.lua"]:108: in function <Interface\AddOns\OmniBar\Options.lua:53> [string "@Interface\AddOns\OmniBar\Options.lua"]:557: in function `AddBarToOptions' [string "@Interface\AddOns\OmniBar\OmniBar.lua"]:104: in function <Interface\AddOns\OmniBar\OmniBar.lua:70> [string "=[C]"]: ? [string "@Interface\AddOns\OmniBar\Libs\AceAddon-3.0\AceAddon-3.0.lua"]:70: in function <...ce\AddOns\OmniBar\Libs\AceAddon-3.0\AceAddon-3.0.lua:65> [string "@Interface\AddOns\OmniBar\Libs\AceAddon-3.0\AceAddon-3.0.lua"]:527: in function `EnableAddon' [string "@Interface\AddOns\OmniBar\Libs\AceAddon-3.0\AceAddon-3.0.lua"]:620: in function <...ce\AddOns\OmniBar\Libs\AceAddon-3.0\AceAddon-3.0.lua:605> [string "=[C]"]: in function `LoadAddOn' [string "@Interface\FrameXML\UIParent.lua"]:495: in function `UIParentLoadAddOn' [string "@Interface\FrameXML\UIParent.lua"]:618: in function `TimeManager_LoadUI' [string "@Interface\FrameXML\UIParent.lua"]:1373: in function <Interface\FrameXML\UIParent.lua:1258> Locals: <none>` Answers: username_1: got the same issue the addon is working fine unless you dont have a custom setup/profile with multiple bars also its impossible to change anything /settings username_2: I made a pull request for the fix, but in the meantime here is what to change, and a link to the download: Delete this on line 107 in Options.lua: ``` local s = Spell:CreateFromSpellID(spellID) s:ContinueOnSpellLoad(function() descriptions[spellID] = s:GetSpellDescription() end ``` Replace with this: `descriptions[spellID] = 
GetSpellDescription(spellID)` Or download here: [OmniBar.zip](https://github.com/jordonwow/omnibar/files/5395200/OmniBar.zip)
username_1: Ok, I changed it. Thank you ;)
username_0: Thanks, that fixed the issues.
username_3: I attempted the fix written by Aurelion314 and now I cannot access the OmniBar menu UI at all. Does this fix work on the Twitch version? If not, why not? What is the difference between the version on GitHub and the version on Twitch? Thanks.
username_1: @username_3 Are you sure you replaced line 107 with: descriptions[spellID] = GetSpellDescription(spellID) You need to delete: local s = Spell:CreateFromSpellID(spellID) s:ContinueOnSpellLoad(function() descriptions[spellID] = s:GetSpellDescription() end and make sure to have/keep the spacing between the other lines. You download the addon from Curse and change the code in Options.lua: https://www.curseforge.com/wow/addons/omnibar It's a quick fix and you won't need more than that. We need to wait for SL, and the devs are waiting with a lot of updates since a lot of spells will be changed/added.
username_4: @username_1 So what does this do? Can I configure inside the OmniBar then? I miss the 'lock' button, so the OmniBar isn't shown all the time.... Zzzzz
username_1: @username_4 Yes, as already said, it's that simple. After the quick fix the addon will work as intended. Status: Issue closed
rsachetto/MonoAlg3D_C
882403884
Title: Problems with the new GPU double precision
Question: username_0: Hello Sachetto, When I tried to update my repository to the latest version with the GPU double precision, I was not able to compile the Bondarenko model. I started to receive the following error:
```sh
~/Github/MonoAlg3D_C/src/models_library/bondarenko/bondarenko_2004_GPU.cu(266): error: calling a host function("std::pow<double, float> ") from a device function("RHS_gpu") is not allowed
~/Github/MonoAlg3D_C/src/models_library/bondarenko/bondarenko_2004_GPU.cu(266): error: identifier "std::pow<double, float> " is undefined in device code
```
In order to solve this issue, I checked the 'build.sh' file and commented out the CFLAGS for double precision. After this change, I was able to build everything. However, when I tried to run the "purkinje_with_fibrosis.ini" example I received several warnings with the message:
```sh
Solving EDO 1 times before solving PDE
Starting simulation
t = 0.00000, Iterations = 6, Error Norm = 2.642679e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 6207 us , Iterations = 12, Error Norm = 4.517128e-17, Number of Purkinje Cells:582, Purkinje CG Iterations time: 642 us, Total Iteration time: 485641 us
Accepting solution with error > 1.021493
Accepting solution with error > 1.059086
Accepting solution with error > 1.100347
Accepting solution with error > 1.142385
Accepting solution with error > 1.188148
Accepting solution with error > 1.237055
```
Then after some iterations the solver returned NaN for the Purkinje solution. 
```sh
t = 130.00000, Iterations = 10, Error Norm = 1.558992e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 7125 us , Iterations = 12, Error Norm = 6.437341e-17, Number of Purkinje Cells:582, Purkinje CG Iterations time: 717 us, Total Iteration time: 389427 us
t = 132.00000, Iterations = 10, Error Norm = 1.469444e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 6988 us , Iterations = 0, Error Norm = nan, Number of Purkinje Cells:582, Purkinje CG Iterations time: 97 us, Total Iteration time: 385947 us
t = 134.00000, Iterations = 10, Error Norm = 1.300754e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 7097 us , Iterations = 0, Error Norm = nan, Number of Purkinje Cells:582, Purkinje CG Iterations time: 96 us, Total Iteration time: 394784 us
t = 136.00000, Iterations = 10, Error Norm = 1.414033e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 7262 us , Iterations = 0, Error Norm = nan, Number of Purkinje Cells:582, Purkinje CG Iterations time: 91 us, Total Iteration time: 384568 us
t = 138.00000, Iterations = 10, Error Norm = 1.212957e-17, Number of Tissue Cells:132322, Tissue CG Iterations time: 7005 us , Iterations = 0, Error Norm = nan, Number of Purkinje Cells:582, Purkinje CG Iterations time: 89 us, Total Iteration time: 379773 us
```
I don't know if this is an error in my version of CUDA or if I need to include something new in the Purkinje cellular models, because I see that there are some changes in the structure of the cellular models from now on. Can you help me with this problem? Answers: username_1: Hi Lucas. This is a problem with both your CUDA version and your GPU. Older GPUs don't support double-precision calculations, and I changed the code to compile both the CPU and GPU versions of the models with the same precision; Bondarenko fails with single precision. Please send me your CUDA version so I can change the code to allow different precisions with old versions. 
username_0: My current CUDA version is this one:
```sh
berg@localhost:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
```
username_1: @username_0, Could you please try this new version?
lwblackledge/file-watcher
104545758
Title: Does not work in 1.0.7
Question: username_0: Atom claims 1.0.7 is the latest version and will not update beyond that. I opened a test document in dev mode, then the same document in notepad. I edited the notepad file and saved it, and Atom just updates without throwing any console errors or notifications. Also, I have the box checked in the extension.
Answers: username_1: Hi @username_0, can you be a bit more specific please? I tested in the latest version I have (1.0.10) and it was working as expected. Can you check for any error messages in the dev console?
username_0: Atom claims 1.0.7 is the latest version and will not update beyond that. I opened a test document in dev mode, then the same document in notepad. I edited the notepad file and saved it, and Atom just updates without throwing any console errors or notifications. Also, I have the box checked in the extension.
username_2: Using version 1.0.11 on Linux Mint Debian Edition with KDE 4, this bug happens here as well. Opened a file in Atom, opened the same file in another editor, added some text and saved the file. Atom instantly updates the file, the new text appears and no message whatsoever appears in Atom; nothing appears in the dev console either.
Status: Issue closed
username_1: This is a duplicate of issue #2. This package just hooks into the conflict event through `atom\text-buffer`, which does not fire an event if the file changes and there are *no* changes pending within the Atom editor. I am looking into adding this as an option but so far I have been unsuccessful.
username_0: My coworker helped me find the proper error, and FYI I am on Windows: 'diff' is not recognized as an internal or external command, operable program or batch file.
username_1: Interesting, you might have a deeper problem with your Atom install. I do not use `diff` anywhere in `file-watcher` but it is used in [text-buffer](https://github.com/atom/text-buffer/blob/master/src/text-buffer.coffee) for a couple of operations. 
Can you try a clean install of Atom? You might need to raise an issue for atom or text-buffer directly. However, the scenario you described in [this comment](https://github.com/username_1/file-watcher/issues/3#issuecomment-137498511) is the same as issue #2, and not something that I can fix right now since it involves intercepting or overriding the [text-buffer](https://github.com/atom/text-buffer/blob/master/src/text-buffer.coffee) code.
username_0: Disregard my last comment... I confused this problem with another. I was trying to get atom-cli-diff to work properly. As for this package, this morning I did a reinstall and have yet to see any new errors that would indicate it's broken, but it still does not work.
username_1: Hi @username_0, The scenario you described [here](https://github.com/username_1/file-watcher/issues/3#issuecomment-137498511) is actually not what this package was created to do: Atom by default will reload a file if you have **not** made any changes. This is controlled by [text-buffer](https://github.com/atom/text-buffer/blob/master/src/text-buffer.coffee) and something I am working on as part of issue #2. I made this originally to show a prompt if you **have** made changes and something changes the file on disk. By default Atom will just ignore disk changes and not show a prompt. Consider this scenario:
1. Open a document in Atom
2. Add text (unsaved changes)
3. Open the same document in notepad
4. Add text in notepad and save
5. Check Atom and see that the text from step 2 is still there
I created this package to add a prompt in Atom between steps 4 and 5 because I want to know if something changes the file I'm working on. If you **do not** add text at step 2, at step 5 you will see that Atom reloaded the file because there were *no* unsaved changes. I hope that makes this clearer.
username_3: I can't get the package to work either. Using Ubuntu 16.10 and Atom 1.12.7. I make external changes, nothing happens. No prompt, no reload, no nothing :'(
Dklein2/TheSprinkleboneInitiative1
73479738
Title: (BUG) The player is able to climb walls without coming into contact with them. Question: username_0: This may be fixable by making the m_grounded variable inside the player controller register contact with walls. Also make sure the grounded circle is not affected by walls. Status: Issue closed Answers: username_0: Fixed in the latest update, 2:25 am 5/9/2015
documentationjs/documentation
139443909
Title: Unable to run in babel project Question: username_0: I am still seeing the behavior from #211 where documentationjs attempts to read my project's `.babelrc` and fails. I am not sure if this is expected or not, but I haven't seen any other issues filed, so I wanted to raise it in case anyone else has this problem / has a solution. ```js ReferenceError: [BABEL] /path/to/project/app/index.js: Unknown option: /path/to/project/.babelrc.presets while parsing file: /path/to/project/app/index.js at Logger.error (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/logger.js:58:11) at OptionManager.mergeOptions (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/options/option-manager.js:126:29) at OptionManager.addConfig (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/options/option-manager.js:107:10) at OptionManager.findConfigs (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/options/option-manager.js:168:35) at OptionManager.init (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/options/option-manager.js:229:12) at File.initOptions (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/index.js:147:75) at new File (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/file/index.js:137:22) at Pipeline.transform (/path/to/project/node_modules/babelify/node_modules/babel-core/lib/transformation/pipeline.js:164:16) at Babelify._flush (/path/to/project/node_modules/babelify/index.js:28:24) at Babelify.<anonymous> (_stream_transform.js:117:12) ``` ```sh ❯ documentation --version 4.0.0-beta ``` Here are the commands I am running, both fail with the same error: ```sh ./node_modules/.bin/documentation build app/index.js -o docs2 -f html ``` ```sh documentation build app/index.js -o docs2 -f html ``` Answers: username_1: I'm 
confirming this, same here username_2: Hello, same here. ``` ReferenceError: [BABEL] /Users/pleasurazy/git/projects/my-cms/app/assets/services/i18n/index.js: Unknown option: /Users/pleasurazy/git/projects/my-cms/.babelrc.presets while parsing file: /Users/pleasurazy/git/projects/my-cms/app/assets/services/i18n/index.js ``` .babelrc: ``` { "presets": ["es2015", "stage-0"], "plugins": [ "transform-es2015-modules-umd", "transform-runtime" ] } ``` node v4.2.6 npm 3.8.1 documentation 4.0.0-beta [email protected] username_3: Will look into this tonight, I'm not sure what's going on yet. username_4: This issue also occurs in version 3.x, seems like a problem of babel. username_5: Getting the same error on `babel-core: 6.6.0` username_6: Can confirm I'm getting something similar. Appears to be trying to hook into my `.babelrc` file. Will run when I rename that file. ```sh /Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/lib/commands/build.js:61 throw err; ^ TypeError: The plugin ["transform-es2015-modules-commonjs",{"loose":true}] didn't export a Plugin instance while parsing file: /Users/ryan/src/trib/newsapps-js/src/loadScript.js at PluginManager.validate (/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babel-core/lib/transformation/file/plugin-manager.js:164:13) at PluginManager.add (/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babel-core/lib/transformation/file/plugin-manager.js:213:10) at File.buildTransformers (/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babel-core/lib/transformation/file/index.js:237:21) at new File (/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babel-core/lib/transformation/file/index.js:139:10) at Pipeline.transform (/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babel-core/lib/transformation/pipeline.js:164:16) at Babelify._flush 
(/Users/ryan/.nvm/versions/node/v5.9.0/lib/node_modules/documentation/node_modules/babelify/index.js:28:24) at Babelify.<anonymous> (_stream_transform.js:118:12) at Babelify.g (events.js:273:16) at emitNone (events.js:80:13) at Babelify.emit (events.js:179:7) ``` My dependencies: ```js "devDependencies": { "babel-cli": "^6.6.5", "babel-plugin-transform-es2015-modules-commonjs": "^6.7.0", "rimraf": "^2.5.2", "rollup": "^0.25.4", "rollup-plugin-commonjs": "^2.2.1", "rollup-plugin-node-resolve": "^1.4.0", "uglify-js": "^2.6.2" }, ``` And running `documentation` as a global install: `[email protected]` username_3: Okay: I confirmed this problem with 4.0.0-beta and also confirmed the fix with current master, which is updated to babel 6. I'm going to release 4.0.0-beta1 in a moment. username_3: **4.0.0-beta1 is released** - please try it out & confirm whether this bug is fixed, thanks! username_6: Worked for me! My error is gone. :+1: Thanks, @username_3! username_0: Confirmed it is working now :+1: Thanks for the quick turn around. Status: Issue closed
subdavis/Tusk
363068105
Title: Enable cloud sync for chrome options Question: username_0: ### This issue is a feature request As the number of options increases, we should consider adding Google account sync for Chrome extension options, using `chrome.storage.sync` to save option values. In the future we can do a similar thing for the storage information.
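A minimal sketch of what the option saving could look like with `chrome.storage.sync`. The option name (`hotkeyEnabled`) is made up, not one of Tusk's real settings, and the in-memory fallback object below only stands in for the real chrome API so the snippet can run outside a browser.

```javascript
// Hypothetical option sync via chrome.storage.sync.
const storage = (typeof chrome !== 'undefined' && chrome.storage)
  ? chrome.storage.sync
  : (() => {
      // In-memory stand-in so the sketch runs outside a browser.
      const data = {};
      return {
        set(items, cb) { Object.assign(data, items); if (cb) cb(); },
        get(keys, cb) { cb(Object.fromEntries(keys.map((k) => [k, data[k]]))); },
      };
    })();

// Save an option; with the real API this follows the user's Google account.
storage.set({ hotkeyEnabled: true }, () => {
  storage.get(['hotkeyEnabled'], (items) => {
    console.log(items.hotkeyEnabled); // true
  });
});
```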
MaibornWolff/codecharta
498163559
Title: Additional help on usage of SCMLogParser Question: username_0: # Feature request Currently the SCMLogParser has functions implemented that print help on how to obtain the different input log formats we can parse (example commands), as well as which metrics will be calculated upon importing a log in such a format. However, these functions cannot currently be called by the user and are therefore useless in their current state. If we keep the functionality, we should make the feature available for use. We should also provide a parameter to the SCMLogParser which lists all of the input formats we can process. Status: Issue closed Answers: username_1: I am going to close this as won't-fix for now. We want to automate most parsers, including the SCMLogParser. That would be done by such automation, and no user interaction would be required anymore. If any user interaction is required, just ask a question on the command line directly.
erinepolley/Front-End-Capstone
530498801
Title: "My Racks" in Nav Bar Question: username_0: # Story As a user, I should have the ability to see, edit, and delete the racks that I've added to the app. ## "My Racks" **Given** I want to see the racks I've added **When** I click on the "My Racks" button in the nav bar **Then** a list is rendered on the left side of my screen **And** each rack is a card with a picture, name, location, capacity, and comments **And** I can click on the "Edit" or "Delete" buttons **Given** I'd like to edit the bike rack **When** I click the edit button **Then** a form appears in the card with the information pre-populated **And** when I'm finished editing, I click "Submit" **And** the information is changed in JSON **And** the new information, along with all the other bikes, is rendered on the page **Given** I'd like to delete the bike rack **When** I click the delete button (Stretch goal: an alert box appears; clicking "No" returns to the screen, clicking "Yes" continues) **Then** the rack is deleted from JSON **And** the list is re-rendered without the rack
jlippold/tweakCompatible
530623998
Title: `Bourne-Again SHell` working on iOS 13.2.3 Question: username_0: ``` { "packageId": "bash", "action": "working", "userInfo": { "arch32": false, "packageId": "bash", "deviceId": "iPhone10,6", "url": "http://cydia.saurik.com/package/bash/", "iOSVersion": "13.2.3", "packageVersionIndexed": true, "packageName": "Bourne-Again SHell", "category": "Terminal Support", "repository": "apt.bingner.com", "name": "Bourne-Again SHell", "installed": "5.0.3-2", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "bash", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "the best shell ever written by <NAME>", "latest": "5.0.3-2", "author": "(null)", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
ProgrammersOfVilnius/pov-server-page
479382373
Title: FileNotFoundError: [Errno 2] No such file or directory: 'netstat': 'netstat' Question: username_0: This happens on Ubuntu 19.04 because netstat is not installed by default. (It comes from the net-tools package.) The netstat replacement is ss, but I don't know if it's suitable for my use cases -- it can show you pids, but not program names. Do I need program names? Maybe not. Status: Issue closed Answers: username_0: Should be fixed in commit a208c9dba1ca03889beece54acb48d2db88d2bcb.
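A small sketch of the fallback direction discussed above (prefer `netstat`, fall back to `ss` when only iproute2 is installed); the function name is illustrative and the real flag handling is not shown.

```python
# Pick whichever connection-listing tool is installed; Ubuntu 19.04 ships
# ss (iproute2) by default but not netstat (net-tools).
import shutil


def pick_connection_tool():
    for tool in ("netstat", "ss"):
        if shutil.which(tool):
            return tool
    raise FileNotFoundError("neither netstat nor ss is installed")
```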
algolia/algoliasearch-client-scala
487345282
Title: feat: add contextual rules Question: username_0: **TL;DR**: The `condition` field on rules is now optional. Actions required: - Update the `cts` with the information on : [#71](https://github.com/algolia/algoliasearch-client-specs-internal/pull/71/files) - On strongly typed clients, update types in query rules, as the `condition` field in rules is now **optional** - [Python test suite update](https://github.com/algolia/algoliasearch-client-python/pull/450/files) Answers: username_1: As [confirmed by @username_0](https://github.com/algolia/algoliasearch-client-specs-internal/issues/42#issuecomment-526526444), not only `condition` becomes optional, but also `condition.anchoring`, `condition.pattern` and `condition.context`.
custom-components/sensor.avanza_stock
869790305
Title: Unable to add topics Question: username_0: I'm not able to add topics to this repo, there should be gear icon next to about. I can see it in my own repos but not in this organization. ![image](https://user-images.githubusercontent.com/9336788/116388697-64492d00-a81c-11eb-9693-7a1b51a335c8.png) @username_1 @iantrich Seems you two are the owners of the organization so perhaps you can help me out? Status: Issue closed Answers: username_1: github changed some stuff a while back, should be possible now
qiuxiang/react-native-amap3d
242452985
Title: Can a map pan event be implemented? Question: username_0: Get the new coordinates when the map is panned, similar to react-native-maps' onRegionChange; bike-sharing apps can serve as a reference. Answers: username_1: This interface is planned; I can work on it this weekend if I have time. username_1: @username_0 This has been implemented as `onStatusChange` and `onStatusChangeComplete`; the event data passed is:
```json
{
  "zoomLevel": number,
  "tilt": number,
  "rotation": number,
  "latitude": number,
  "longitude": number,
}
```
Status: Issue closed
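A hedged usage sketch for the `onStatusChangeComplete` payload described in the reply above; only the event fields listed there are assumed, and the handler itself is illustrative.

```javascript
// Format the camera status delivered by onStatusChangeComplete.
// Field names follow the event data in the reply; values here are made up.
function onStatusChangeComplete(event) {
  const { latitude, longitude, zoomLevel } = event;
  return `center=${latitude.toFixed(1)},${longitude.toFixed(1)} zoom=${zoomLevel}`;
}

console.log(onStatusChangeComplete({
  zoomLevel: 10, tilt: 0, rotation: 0, latitude: 39.9, longitude: 116.4,
}));
// -> center=39.9,116.4 zoom=10
```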
tue-robotics/tue-env
250392453
Title: Slow sourcing of target Question: username_0: It is caused by sourcing setup.bash in the dev space. That setup file sources setup.sh, which is linked to ```/home/amigo/ros/kinetic/dev/devel/.private/catkin_tools_prebuild/setup.sh``` The lines that are the real problem are the following:
```
# source all environment hooks
_i=0
while [ $_i -lt $_CATKIN_ENVIRONMENT_HOOKS_COUNT ]; do
  eval _envfile=\$_CATKIN_ENVIRONMENT_HOOKS_$_i
  unset _CATKIN_ENVIRONMENT_HOOKS_$_i
  eval _envfile_workspace=\$_CATKIN_ENVIRONMENT_HOOKS_${_i}_WORKSPACE
  unset _CATKIN_ENVIRONMENT_HOOKS_${_i}_WORKSPACE
  # set workspace for environment hook
  CATKIN_ENV_HOOK_WORKSPACE=$_envfile_workspace
  . "$_envfile"
  unset CATKIN_ENV_HOOK_WORKSPACE
  _i=$((_i + 1))
done
```
Status: Issue closed Answers: username_1: These are generated by catkin hooks; we can't do anything about it
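The cost of that loop is easiest to see by reproducing it with dummy hook files; everything below is a throwaway stand-in (the hook count and contents are made up, a real catkin devel space generates its own).

```shell
# Reproduce the env-hook sourcing loop with generated dummy hooks so the
# per-hook overhead can be measured in isolation.
tmpdir=$(mktemp -d)
count=50
i=0
while [ $i -lt $count ]; do
  printf 'export DUMMY_HOOK_%s=1\n' "$i" > "$tmpdir/hook_$i.sh"
  i=$((i + 1))
done

start=$(date +%s)
i=0
while [ $i -lt $count ]; do
  . "$tmpdir/hook_$i.sh"
  i=$((i + 1))
done
end=$(date +%s)
echo "sourced $count hooks in $((end - start))s"
rm -rf "$tmpdir"
```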
WX-JIN/JGallery
335734256
Title: Garbage Answers: username_1: Brother, this isn't right of you. I watch this project, and seeing you say something like this makes me sad. Whether an open-source project is good or not, it should not be judged this way. What an open-source project needs is encouragement and patience. Besides, since you were able to find this project, it is probably because you haven't implemented this requirement yourself. The open-source community doesn't need coders like you who lash out indiscriminately. If something is unreasonable, you can point it out and propose a more reasonable, optimized implementation. Those who want to discuss things and technology enthusiasts are welcome here; friends like you who pollute the open-source atmosphere are not. username_2: The commenter above is right.
anthonydresser/testissues
724796492
Title: href attribute removed from untrusted notebooks Question: username_0: Links are only clickable on Mac and Trusted Windows Notebooks. Encountered this issue while fixing [12899](https://github.com/microsoft/azuredatastudio/-/12899). On Windows, the href attribute is removed from the element when sanitizing the contents if it contains a URI-encoded absolute path. This only happens with untrusted notebooks on Windows.
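One possible direction, sketched here purely as an illustration (this is not azuredatastudio's actual sanitizer): instead of stripping `href` outright, keep it whenever the resolved URL uses an allowed scheme, which would also keep URI-encoded paths that resolve to plain links.

```javascript
// Hypothetical allowlist check for href values during sanitization.
function sanitizeHref(href) {
  try {
    // Resolve relative values (including URI-encoded paths) against a base.
    const url = new URL(href, 'https://example.invalid/');
    return ['http:', 'https:'].includes(url.protocol) ? href : null;
  } catch {
    return null; // unparseable -> drop the attribute
  }
}

console.log(sanitizeHref('https://github.com'));  // kept
console.log(sanitizeHref('javascript:alert(1)')); // null (dropped)
```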
velopert/react-tutorial
493699203
Title: None Question: username_0: Hello. I'm following the course, and it seems the react devtools version has changed, so I can't find a way to check when re-rendering happens. I'd appreciate it if you could let me know.
Answers: username_1: @username_0 You can proceed by downgrading the devtools version. For reference, https://blog.woolta.com/categories/1/posts/159 should be helpful. :)
username_2: It seems this was changed as of 4.2.
username_3: Is it correct that there is no difference in the rendering boxes before and after using useCallback??!! The rendering check is about the difference between CreateUser and UserList, right??
username_1: @hkhch 1. Judging from nextId.current, nextId appears to be manipulated through a ref. A ref always shows the value at the moment it is accessed, so even without being listed in the dependencies it reflects the current value at call time; that is presumably why it wasn't added. 2. If that calculate should also be included in the dependencies (that is, if it is an externally declared/defined function that needs to change), add it after making that judgment.~
username_4: I have a question. Why does the passage below say that the latest value cannot be guaranteed? If you reference a value, wouldn't you simply be looking at that value? /* Quoted passage */ Note that if there is state or props used inside the function, it must be included in the deps array. If you don't put the values the function uses into the deps array, you cannot guarantee that the function will reference the latest values when accessing them. If there is a function received via props, it also needs to go into deps.
username_5: Hello, I bought the revised edition of "리액트를 다루는 기술" and ended up here while searching because I couldn't resolve a question. I'm curious about the same point as @username_4 above. I understand that reusing a function is important, but why does the function need to be re-created when it uses state or props? Reusing a function ultimately means calling the same function again, so doesn't it re-read the state or props in the process of being called??
username_6: If you don't reference them in deps, the function is not re-created even when props change. When the function is not re-created, the latest props values are not used even if the props inside the function have changed; the props from when the function was last created are used instead.
username_7: # useCallback - Similar to useMemo --> because it was built on top of useMemo - Pass a function as the first argument and, as the second argument, an array of the state and props the function uses.
username_8: Thank you!!
username_9: useCallback: reuse a specific function. useMemo: reuse a specific computed value. To do the optimization where, if a component's props haven't changed, it skips even re-rendering to the Virtual DOM and reuses the component's previous output, ***reusing functions is essential. What must be included in the deps array: the state or props used inside the function. Performance is only optimized once you do this component rendering optimization work: useCallback & React.memo
username_10: Those using **devtools 4** should look for the *components* tab and use that. If it doesn't show up properly, close the currently active tab and run it in another tab.
If *hook* names don't appear, check *Always parse hook names from source (may be slow)* in the *settings* option inside the *components* tab.
username_11: A question came up while reading 리액트를 다루는 기술, so I read this post, but it still isn't resolved. There was an example explaining that useCallback is derived from useMemo and that the two snippets below are exactly the same code, but I don't understand why, in useMemo's first parameter, you have to create yet another function inside the function and return it.
```
useCallback( ()=> {
  console.log('hello world');
}, [])

useMemo( ()=>{
  const fn = ()=>{
    console.log('hello world')
  };
  return fn;
},[])
```
I thought there would be no problem using it like useMemo( ()=> {console.log("hello world")}, [] ) or changing return fn to fn(), but when I actually run it, a "~~ is not a function" error appears. The function must be returned; why is this part different from useCallback?
username_12: @username_11 useMemo's first parameter must be a factory function, not an ordinary function. A factory function is a function that returns an object. Since useMemo's purpose is to reuse a specific value, it needs to return something in order to work properly. () => { const fn = ()=>{ console.log('hello world') }; return fn; } You can think of the function above as the factory function.
username_13: Classy and elegant.
username_14: @username_11 You can think of useMemo as using the 'return value' of the function in its first parameter. In the example, the reason a function is created and returned inside the function is to use the inner function as the outer function's return value. If you just pass a single function, its return value is used. To use the function itself, you have to create and pass a function that returns that function.
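The relationship the thread keeps circling can be shown without React at all. Below is a deliberately simplified, single-slot memoisation (not React's real implementation) just to demonstrate why `useCallback(fn, deps)` equals `useMemo(() => fn, deps)`: `useMemo` memoises the factory's return value, so to memoise a function you must return the function.

```javascript
// Toy single-slot useMemo: recompute only when deps change.
function makeUseMemo() {
  let prevDeps;
  let prevValue;
  return function useMemo(factory, deps) {
    const unchanged =
      prevDeps !== undefined &&
      deps.length === prevDeps.length &&
      deps.every((d, i) => d === prevDeps[i]);
    if (!unchanged) {
      prevValue = factory();
      prevDeps = deps;
    }
    return prevValue;
  };
}

const useMemo = makeUseMemo();
// useCallback is just useMemo whose factory returns the function itself.
const useCallback = (fn, deps) => useMemo(() => fn, deps);

const a = useCallback(() => 'hello', [1]);
const b = useCallback(() => 'hello', [1]); // deps unchanged -> same object
console.log(a === b); // true
```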
OData/ODataConnectedService
589883949
Title: Client side entity validation support Question: username_0: Hi, I was wondering what it would take to have the client-side generator emit property and entity attributes, e.g. DataAnnotations validation attributes. In addition, I was wondering if it's also possible to forward interface implementations, `IValidatableObject` for instance. Otherwise (I guess that's what's going to happen), is it possible to use shared entities instead? Is there a guide or tutorial that shows the recommended way to use the Microsoft OData Client with shared entities? I'm willing to contribute to this, and I'm wondering whether it is a massive job. Another thing to consider is support for shared entities, but then the work will have to deal with non-`ObservableCollection` entities etc. I'd be happy to contribute to this feature, and would appreciate a [twitter DM](http://bit.ly/2m22VNr). I also posted this question [here](https://bit.ly/2WQHDCL). Answers: username_0: Related: https://github.com/OData/odata.net/issues/1347. username_1: @username_0 I think most of the required annotations are already [standardized](https://github.com/oasis-tcs/odata-vocabularies/blob/master/vocabularies/Org.OData.Validation.V1.md). It shouldn't be too hard to write the required conventions for the ODataConventionModelBuilder.
wallabyjs/public
463284023
Title: Wallaby Start throwing an error: command wallaby.start not found Question: username_0: ### Issue description or question Wallaby suddenly stopped working on my machine and throws the error `command wallaby.start not found` I have tried the following - Uninstalled and reinstalled - Removed wallaby folders from .vscode extensions folder - Restarted VS Code and Extension Host After reinstalling, I see a notification of updating core, but that does not fix it either. On the developer tools the following errors are thrown ``` workbench.main.js:238 [Extension Host] (node:65012) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead. ``` I am not sure whether this is related to wallaby. ### Wallaby.js configuration file
```javascript
const fs = require('fs');

const nodePath = '/usr/local/n/versions/node/10.10.0/bin/node';
const nodeLocal = fs.existsSync(nodePath) ? nodePath : 'node';

module.exports = wallaby => {
  process.env.wallabyTest = true;
  return {
    files: ['src/**/*.ts', 'doc/sample-data/*.json', '!src/**/*.spec.ts'],
    tests: ['src/**/*.spec.ts', 'test/*.e2e-spec.ts'],
    lowCoverageThreshold: 80,
    name: 'someName',
    testFramework: 'jest',
    reportUnhandledPromises: false,
    debug: false,
    setup() {},
    workers: {
      initial: 1,
      regular: 1,
    },
  };
};
```
### Code editor or IDE name and version Visual Studio Code v1.35.1 ### OS name and version OSX Status: Issue closed Answers: username_0: This got resolved after I received another `Updating Wallaby Core` message, and soon after a `Core Updated` message.
RedHatInsights/insights-frontend-builder-common
389430391
Title: deployment scripts blow away history of the target branch Question: username_0: This happens because the script does a `git init` in the directory, then an initial commit, then a force push into the target branch. Since the commit was not built on previous history, everything in history is lost. cc @username_1 Answers: username_1: Yeah, when I first saw this it was jarring for me too. We are just using Git as a staging area here to have some light ACLs in place before pushing things forward (though they can be worked around atm). We were going to rsync the data straight to the destination w/o landing it on Git initially. I guess given that... why do you want there to be history? What is it stopping you from doing? username_0: The history is useful for seeing changes over time in what's deployed. If we have to roll back to a particular build, or see what changed between a good and bad build, this repo is the only place that those "compiled" dist directories are stored. Rebuilding from a particular SHA on the source repo is doable, but given dependency changes (both in packages and in the building tools like npm, webpack, etc), you can never truly get back to where you were. Status: Issue closed
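A self-contained sketch of the alternative: commit each deploy on top of the existing branch instead of re-running `git init`, so earlier builds stay in history. Everything runs in a throwaway temp repo; file names and commit messages are made up.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email ci@example.com
git config user.name ci

# deploy 1
echo build-1 > dist.txt
git add dist.txt
git commit -q -m "deploy 1"

# deploy 2: committed on top of the previous deploy, not re-initialised
echo build-2 > dist.txt
git add dist.txt
git commit -q -m "deploy 2"

git rev-list --count HEAD   # 2 -> deploy 1 is still in history
```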
spinnaker/spinnaker
316833754
Title: Igor fails to become healthy with K8s V2 Provider Question: username_0: Seems to be returning a 503 ``` $ kubectl exec -it spin-igor-589db64748-7z6cl --namespace spinnaker -- bash bash-4.4$ bash-4.4$ bash-4.4$ wget localhost:8088/health Connecting to localhost:8088 (127.0.0.1:8088) wget: server returned error: HTTP/1.1 503 ``` Answers: username_1: Which image of igor do your deployment use? I have seen the same issue, but not with newer versions of hal 1.1.0. Also if you are running an old version of hal, please update hal and `hal deploy apply` again. It is worth to note that that even if the health check fail it works fine (at least if it is the same thing I was seeing before version 0.49.0. username_2: The root of the issue is that Igor (in the default deployment) has no trigger sources to poll unless you enable Jenkins, Travis CI, or a Docker registry. W/o those Igor has no work to do, and comes up as unhealthy. Halyard should just be smart enough to know that it doesn't have to deploy Igor in that case. Status: Issue closed
ministryofjustice/cloud-platform
354820702
Title: probatesearch.service.gov.uk ssl certificate expiry Question: username_0: probatesearch.service.gov.uk ssl certificate is expiring Answers: username_0: <NAME>, Tribal Group is installing the certificate username_0: probatesearch.service.gov.uk is up and running now. Steps carried out:
- generated key, csr
- renewed the cert using csr
- provided validation file to tribalgroup
- after domain got validated, sent the crt files & key to them
- they installed the crt on the IIS server (after fixing some weird microsoft issues)
- site working!
Status: Issue closed
hybridauth/hybridauth
424485793
Title: Can't log in with Yahoo. Retrieving email failed Question: username_0: ## Bug I have created a Yahoo app and added it to my site. I want to retrieve the user's email address when they log in, but both the email and email-verified fields seem to be empty. Answers: username_1: should be fixed in #986, and released in 3.0-RC10 today Status: Issue closed
MicrosoftDocs/azure-docs
693316045
Title: Access Azure SQL Database from Azure AppService via Managed Identity on different Subscriptions Question: username_0: Is it possible to access an Azure SQL Database on one subscription from an Azure AppService hosted on a different subscription via Managed Identity? I followed the steps below, which have no example for a different subscription. https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi#grant-permissions-to-managed-identity Other links https://stackoverflow.com/questions/62003073/using-azure-managed-identities-to-access-azure-sql-db https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql **Update** How to reference `identity-name` when it is in a different subscription? As shown in the link above https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi#grant-permissions-to-managed-identity CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER; ALTER ROLE db_datareader ADD MEMBER [<identity-name>]; ALTER ROLE db_datawriter ADD MEMBER [<identity-name>]; ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>]; --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 31b0d33c-f88e-d50c-2926-7933940f3b92 * Version Independent ID: 4aaef906-f501-6947-5d23-1b704470f09d * Content: [Tutorial: Access data with managed identity - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi#grant-permissions-to-managed-identity) * Content Source: [articles/app-service/app-service-web-tutorial-connect-msi.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-connect-msi.md) * Service: **app-service** * GitHub Login: @cephalin * Microsoft Alias: **cephalin** Answers: username_1: Thanks for the feedback and for bringing this to our notice.
At this time we are reviewing the feedback and will update the document as appropriate. username_2: @username_0 unfortunately, this isn't possible by nature: a managed identity is a trusted service principal that's tied to the subscription. As a workaround, I suggest using a [user assigned identity](https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet#add-a-user-assigned-identity). From the Identity blade of your app service, you can create a user identity in any of your available subscriptions. From the Access Control blade of your server, you can reference that created user identity from the subscription you chose when you created the identity. If you have any further questions, please feel free to tag me. Status: Issue closed
nulib/images
168914000
Title: Complete IIIF Functionality Question: username_0: ## DONE IIIF is implemented across the application and does not introduce bugs in editing. - [ ] IIIF is used in PPT - [ ] IIIF is used in image download - [ ] ensure IIIF implementation does not block editing existing records in MENU Status: Issue closed
pcdshub/pcdshub.github.io
323717306
Title: Make a more robust Travis CI configuration Question: username_0: 1. Build the documentation on all Travis runs but only push if on `pcdshub/source` branch. 2. If we fail to build the docs do not push to `pcds/master`. Answers: username_1: Part 2 can just be done with a `set -e` as noted in the warning [here](https://drdoctr.github.io/#edit-your-travis-file). For Part 1, I believe we do this in pcdsdevices and other repos with [this](https://github.com/pcdshub/pcdsdevices/blob/6dd968b5bd323446b8ba13cd5190165df9bbdf16/.travis.yml#L105-L109): ``` - | if [[ -n "$DOCTR_DEPLOY_ENCRYPTION_KEY_PCDSHUB_PCDSDEVICES" && $BUILD_DOCS ]]; then echo "Deploying docs" doctr deploy . --built-docs docs/build/html --deploy-branch-name gh-pages fi ``` although I don't understand how this doesn't run during every `BUILD_DOCS=1` run since the deploy key is defined at the top of the file. In short, I don't understand how the doctr deploy key works. @username_3 do you? username_2: I think we should take care of these in `pcds-ci-helpers` username_1: Ahh, thanks! Totally missed that. Status: Issue closed
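A hedged sketch of the gate for part 1: build on every run, but only push when the run is on the source branch of the main repo (not a fork or pull request). The Travis variable names are the standard ones; the slug/branch values are taken from the issue, and the simulated environment below exists only so the sketch runs standalone.

```shell
# Deploy gate: docs are pushed only from pcdshub/source, never from PRs.
should_deploy() {
  [ "$TRAVIS_REPO_SLUG" = "pcdshub/pcdshub.github.io" ] &&
  [ "$TRAVIS_BRANCH" = "source" ] &&
  [ "$TRAVIS_PULL_REQUEST" = "false" ]
}

# Simulated CI environment for the sketch:
TRAVIS_REPO_SLUG="pcdshub/pcdshub.github.io"
TRAVIS_BRANCH="source"
TRAVIS_PULL_REQUEST="false"

if should_deploy; then
  echo "deploying docs"
else
  echo "skipping deploy"
fi
```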
rhiemer/beerpoints
207623518
Title: Create the types screen - JSF Question: username_0:
GEOS-ESM/MAPL
617415540
Title: GEOSgcm does not compile with MAPL develop: 2020-05-13 Question: username_0: As of last night, the GEOSgcm no longer builds with MAPL `develop`. The issue seems to be something with the error handling:
```
/discover/nobackup/username_0/SystemTests/builds/AGCM_MAPLDEV/CURRENT/GEOSgcm/src/Shared/@GMAO_Shared/GEOS_Shared/Lightning_mod.F90(421): error #6404: This name does not have a type, and must have an explicit type. [MAPL_VRFY]
if(MAPL_VRFY(STATUS,Iam,421 ,rc)) return
-------^
/discover/nobackup/username_0/SystemTests/builds/AGCM_MAPLDEV/CURRENT/GEOSgcm/src/Shared/@GMAO_Shared/GEOS_Shared/Lightning_mod.F90(421): error #6341: A logical data type is required in this context. [MAPL_VRFY]
if(MAPL_VRFY(STATUS,Iam,421 ,rc)) return
-------^
```
I'm assigning this to, well, everybody since I'm not sure what caused it. I'm going to try moving back in time through `develop` commits to see if I can find what caused it. Answers: username_0: Well, it looks like 6d9554a4a2bbcd222d6f03955bc43c810ebcb8e2 (aka #361) might have done it. I'll keep looking... username_1: I will take care of it username_1: It seems the new MAPL does not support the macro "VERIFY_" any more. @username_2, did you remove MAPL_VRFY? Should we change all suffix _ to prefix _ across the board? @tclune username_2: This was fixed by https://github.com/GEOS-ESM/MAPL/pull/371
clarat-org/clarat
133202693
Title: Make the code word mandatory Question: username_0: In the backend Answers: username_1: @username_0 We wanted to have a quick chat with @KonstantinKo first about whether this causes problems, because many offers would then be invalid. username_0: @username_1: This should be attached to the code word version. In that context, please check how many offers of this logic version are invalid. Then send the list of IDs to Julian, who will take care of prompt follow-up editing. username_1: Talking with @KonstantinKo, we both felt that we want to move such features to Backend 2.0. It also seems to be working well as it is: of the 809 offers in the CodeWord version, just 21 have none assigned. Sorted IDs: 2080, 2097, 2151, 5881, 6086, 6110, 6406, 6452, 6502, 6504, 6506, 6527, 6546, 6547, 6562, 6688, 6692, 6776, 6818, 7208, 7210 username_0: OK, let's wait on this then. Status: Issue closed
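A minimal plain-Ruby sketch (no Rails/Mongoid) of the proposed rule, i.e. an offer in the code-word logic version must have a code word. Class and field names here are illustrative; the real app would express this as a model validation.

```ruby
# Toy stand-in for the offer model: valid only if a code word is present
# whenever the offer uses the code-word logic version.
Offer = Struct.new(:logic_version, :code_word) do
  def valid?
    return true unless logic_version == :code_word
    !(code_word.nil? || code_word.strip.empty?)
  end
end

puts Offer.new(:code_word, "clarat").valid? # true
puts Offer.new(:code_word, nil).valid?      # false
puts Offer.new(:other, nil).valid?          # true (rule only applies to code-word offers)
```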
tomasbjerre/dictator-cypress-example
611200802
Title: CYPRESS_INSTALL_BINARY relative to npmrc? Question: username_0: It currently needs to be `../../cypress.zip` but this implementation will set it to `cypress.zip`. Opened an issue to see if they are willing to change it: https://github.com/cypress-io/cypress/issues/7205<issue_closed> Status: Issue closed
onlykey/onlykey.github.io
237932578
Title: Need to replace temp function with a pop up Question: username_0: Right now there is a temporary 20-second delay to give the user time to enter the challenge code and for the decryption to occur. var cb = finalPacket ? setTimeout(function(){enroll_polling(3);}, 20000) : u2fSignBuffer.bind(null, cipherText.slice(maxPacketSize)); What we eventually want instead of this is a pop-up that prompts the user for the challenge code and says something like: "Please enter the 3 digit challenge code on OnlyKey and then click continue". When they click continue, it should call enroll_polling(3) Status: Issue closed
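A sketch of the replacement flow; `enroll_polling` comes from the snippet above, and `confirmFn` is injected so the real UI can pass `window.confirm` (or a proper modal) while the sketch stays runnable.

```javascript
// Prompt-driven continuation instead of a fixed 20 s timeout.
function promptChallenge(confirmFn, onContinue) {
  const ok = confirmFn(
    'Please enter the 3 digit challenge code on OnlyKey and then click continue'
  );
  if (ok) onContinue(); // e.g. onContinue = () => enroll_polling(3)
}

let called = false;
promptChallenge(() => true, () => { called = true; });
console.log(called); // true
```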
homermultitext/hmt-utils
89031848
Title: Parsing error urn:cite:hmt:msA.239r: ἀναστρεπτεον Question: username_0: passage: urn:cts:greekLit:tlg5026.msA.hmt:18.6@ἀναστρεπτεον surface form ἀναστρεπτεον a Byzortho has been submitted, normalizing this form to ἀναστρεπτέον, but it does not parse in Perseus even with the normalized form. We are assuming that it will still fail in MOM when the byzortho equivalent has been added Status: Issue closed Answers: username_0: will reopen IF it fails in MOM after it has been passed through the byzortho
jquery/jquery
208131916
Title: I can see html before $(document).ready(function(). Is this correct? Status: Issue closed Question: username_0: I already posted in that crap but there was no answer. http://stackoverflow.com/questions/42252985/avoid-cascade-div-showing-with-javascript-jquery-code. It's very strange that all issues must be directed to one site... Answers: username_1: Please look for programming help on Stack Overflow. If you find that an issue you have is, indeed, caused by jQuery and you have a simple test case to share on https://jsbin.com you can report an issue then. username_0: Now I see why there are very few issues here. It's not because jquery is closest to perfection but because you seem to be happy closing questions. username_2: @username_0 this is the bug tracker, it is not a place for general support. The development team has limited time as it is, we cannot provide even more time for free support for everyone who needs basic information on how jQuery works. There is a very large community of jQuery users who should be able to help you, or you can pay someone who knows jQuery well to help or write the code for you.
osrg/gobgp
409216036
Title: Empty NeighborSet condition always evaluates to true Question: username_0: I'm not sure if this is intended behaviour, but it definitely counts as confusing. For example, this policy always matches when the nodes NeighborSet is empty. StatementName export_accept_nodes: Conditions: PrefixSet: any default-nodes NeighborSet: any nodes Actions: MED: 180 Nexthop: self accept There's actually a comment on the Evaluate function in policy.go: "// If NeighborList's length is zero, return true." But I'm wondering - why? I would assume: an empty NeighborSet never matches if match = any; an empty NeighborSet always matches if match = invert. Answers: username_0: This actually also affects NextHopCondition, but it is handled correctly for PrefixCondition username_1: Sounds like v1.X works in the above way. Then we cannot change the behavior (even if it's confusing) because the configuration file format for v1.X is supposed to work in the same way with v2.X. username_0: I would rather think it's a bug that no one noticed before, but it really produces weird behaviour. Also, the config file format was already changed, as far as I know, with node sets allowing only CIDR notation. I'm populating the nodes via gRPC; if the second service isn't started yet, all routes are published to everyone -> not what you would want. A workaround would be to add a fake host to the nodeSet -> we could put that as a note in the documentation, so that the behaviour is documented. username_1: The format was changed but the behavior wasn't. The point is that we cannot change the behavior. Why do you use an empty NeighborCondition? If you don't have anything about neighbors, you don't need to create a NeighborCondition. username_0: Please read the last comment: I'm prepopulating my rules with sets. The rules are static, the sets are not, and they might be empty at some point, or filled. With that bug, at one moment the rules are applied to all hosts, and at another only to the hosts in the set.
I understand your concern about changing behaviour, but don't you think that this behavior is wrong? username_1: If I started the project today, I might do it differently. But changing the behavior is not an option. By the way, IMHO, modifying the existing Conditions isn't a good idea. I'd prefer creating a new Conditions type. username_0: OK, then let's agree on adding a big fat warning to the documentation so that the behaviour is documented. username_1: Absolutely. Please add the description of the behavior. Status: Issue closed
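The semantics the reporter expected can be sketched in a few lines of Go. This illustrates the *proposed* behaviour only, not GoBGP's actual code (which returns true for an empty set).

```go
package main

import "fmt"

// matches illustrates the expected semantics: an empty set matches nothing
// under "any" and everything under "invert".
func matches(set []string, option string, neighbor string) bool {
	found := false
	for _, n := range set {
		if n == neighbor {
			found = true
			break
		}
	}
	if option == "invert" {
		return !found
	}
	return found // "any": an empty set can never match
}

func main() {
	fmt.Println(matches(nil, "any", "10.0.0.1"))    // false
	fmt.Println(matches(nil, "invert", "10.0.0.1")) // true
}
```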
rs-pro/mongoid-audit
114727556
Title: AmbiguousRelationship error Question: username_0: Not sure if this is an issue with
```ruby
class User
  include Trackable

  has_many :given_gifts, class_name: 'Gift', inverse_of: :giver
  has_many :received_gifts, class_name: 'Gift', inverse_of: :recipient
end

class Gift
  include Trackable

  belongs_to :giver, class_name: '::User', inverse_of: :given_gifts, index: true
  belongs_to :recipient, class_name: '::User', inverse_of: :received_gifts, index: true
end
```
This used to work just fine with Mongoid 3 / `mongoid-audit` 0.1.7, but once I upgraded to Mongoid 5 / `mongoid-audit` 1.1.0, I'm getting this error:
```
Mongoid::Errors::AmbiguousRelationship:
message: Ambiguous relations :given_gifts, :received_gifts defined on User.
summary: When Mongoid attempts to set an inverse document of a relation in memory, it needs to know which relation it belongs to. When setting :updater, Mongoid looked on the class Gift for a matching relation, but multiples were found that could potentially match: :given_gifts, :received_gifts.
resolution: On the :updater relation on Gift you must add an :inverse_of option to specify the exact relationship on User that is the opposite of :updater.
```
I don't have an `:updater` relation defined in my model, but I do have `include Trackable` in my `Gift` and `User` models. I'm not quite sure how to properly define the `inverse_of` option on the `:updater` relation, since it's buried away somewhere else. Any idea what I could do to address this? Answers: username_1: @username_0 This is because when you track `updater` in Gift, it is by default associated with `User` (effectively `belongs_to :updater, class_name: 'User'`), but you don't have an inverse relationship from `User`: `User` is associated with `Gift`, but through `given_gifts` and `received_gifts`. If you don't need the association from `User`, set `inverse_of: nil` on `Gift` as follows:
```ruby
track_history :modifier_field_inverse_of => :nil # adds an ":inverse_of" option to the "belongs_to :modifier" relation; default is not set
```

If you want to find out which gifts were updated by a user, then you need to set `inverse_of` in gifts:

```ruby
track_history modifier_field_inverse_of: :updated_gifts
```

And in the `User` model:

```ruby
has_many :updated_gifts, class_name: 'Gift', inverse_of: :updater
```

Hope this helps.
username_0: @username_1 This is really great, thanks. Very helpful. Right now I'm just adding `include Trackable` to the `Gift` model and the `User` model. Where would I add the snippet:

```ruby
track_history modifier_field_inverse_of: :updated_gifts
```

Should I just add that in the `User` model below `include Trackable`?
bitcoin/bitcoin
156561416
Title: Rare sendheaders.py failure Question: username_0: From here: https://github.com/bitcoin/bitcoin/pull/8090#issuecomment-221294192 More specifically, we have one travis failure (https://travis-ci.org/bitcoin/bitcoin/jobs/132557240) in `sendheaders.py` that looks like this:
```
sendheaders.py: Initializing test directory /tmp/test8bijuqpx
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC succesfully started
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC succesfully started
MiniNode: Connecting to Bitcoin Node IP # 127.0.0.1:11072
MiniNode: Connecting to Bitcoin Node IP # 127.0.0.1:11072
Part 1: headers don't start before sendheaders message...
Part 1: success!
Part 2: announce blocks with headers after sendheaders message...
Part 2: success!
Part 3: headers announcements can stop after large reorg, and resume after headers/inv from peer...
Unexpected exception caught during testing: ConnectionResetError(104, 'Connection reset by peer')
Stopping nodes
WARN: Unable to stop node: CannotSendRequest('Request-sent',)
Cleaning up
Failed
stderr:
bitcoind: main.cpp:2807: void PruneBlockIndexCandidates(): Assertion `!setBlockIndexCandidates.empty()' failed.
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/test_framework/test_framework.py", line 142, in main
    self.run_test()
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/sendheaders.py", line 372, in run_test
    new_block_hashes = self.mine_reorg(length=7)
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/sendheaders.py", line 241, in mine_reorg
    all_hashes = self.nodes[1].generate(length+1) # Must be longer than the orig chain
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/test_framework/coverage.py", line 49, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/test_framework/authproxy.py", line 137, in __call__
    response = self._request('POST', self.__url.path, postdata)
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/test_framework/authproxy.py", line 119, in _request
    return self._get_response()
  File "/home/travis/build/bitcoin/bitcoin/bitcoin-x86_64-unknown-linux-gnu/qa/rpc-tests/test_framework/authproxy.py", line 152, in _get_response
    http_response = self.__conn.getresponse()
  File "/usr/lib/python3.4/http/client.py", line 1147, in getresponse
    response.begin()
  File "/usr/lib/python3.4/http/client.py", line 351, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.4/http/client.py", line 313, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.4/socket.py", line 371, in readinto
    return self._sock.recv_into(b)
Pass: False, Duration: 16 s
```
At the moment I have no idea what could cause this; my first guess is something funny with `invalidateblock` that hopefully would never occur in the wild, this needs to be tracked down. So far I've been unable to reproduce.
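The repeated "waiting for RPC to come up" lines in the log come from the test framework polling bitcoind's JSON-RPC endpoint until the node answers. As a rough, generic sketch of that poll-until-ready pattern (illustrative only; these are not the actual `test_framework` helpers):

```python
import time

def wait_until_ready(check, timeout=60, interval=0.25):
    """Poll check() until it succeeds or `timeout` seconds pass.

    check() is expected to raise an exception (e.g. a refused connection)
    while the service is still starting up; its return value is passed
    through once it succeeds.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            return check()  # e.g. rpc.getblockcount()
        except Exception as err:
            last_error = err
            time.sleep(interval)
    raise TimeoutError(f"service not ready after {timeout}s: {last_error!r}")
```

A startup hang like the one reported below would surface in such a loop as the deadline being exhausted while the probe keeps raising connection errors.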
Answers: username_1: Thanks for reporting, I was not able to reproduce either.
username_2: I'm also not able to reproduce this, but can reproduce this very often (0.1 probability):
```
Initializing test directory /<KEY>
Unexpected exception caught during testing: timeout('timed out',)
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/test_framework.py", line 140, in main
    self.setup_network()
  File "./sendheaders.py", line 218, in setup_network
    self.nodes = start_nodes(self.num_nodes, self.options.tmpdir, [["-debug", "-logtimemicros=1"]]*2)
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/util.py", line 323, in start_nodes
    rpcs.append(start_node(i, dirname, extra_args[i], rpchost, binary=binary[i]))
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/util.py", line 304, in start_node
    wait_for_bitcoind_start(bitcoind_processes[i], url, i)
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/util.py", line 177, in wait_for_bitcoind_start
    blocks = rpc.getblockcount()
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/coverage.py", line 49, in __call__
    return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/authproxy.py", line 137, in __call__
    response = self._request('POST', self.__url.path, postdata)
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/authproxy.py", line 118, in _request
    self.__conn.request(method, path, postdata, headers)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1083, in request
    self._send_request(method, url, body, headers)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1128, in _send_request
    self.endheaders(body)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1079, in endheaders
    self._send_output(message_body)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 911, in _send_output
    self.send(msg)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 854, in send
    self.connect()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 826, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 711, in create_connection
    raise err
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/socket.py", line 702, in create_connection
    sock.connect(sa)
Stopping nodes
Traceback (most recent call last):
  File "./sendheaders.py", line 518, in <module>
    SendHeadersTest().main()
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/test_framework.py", line 164, in main
    wait_bitcoinds()
  File "/private/tmp/bitcoin-master/qa/rpc-tests/test_framework/util.py", line 355, in wait_bitcoinds
    bitcoind.wait(timeout=BITCOIND_PROC_WAIT_TIMEOUT)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/subprocess.py", line 1643, in wait
    raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '['bitcoind', '-datadir=/var/folders/65/fn0h49r55k7779vg1b_h461r0000gn/T/testeqexi0jw/node1', '-server', '-keypool=1', '-discover=0', '-rest', '-mocktime=0', '-debug', '-logtimemicros=1']' timed out after 60 seconds
```
username_3: Was triggered in travis in #9246 (which has no code changes). Fails at a different stage than the ones above so may be a different issue.
```
sendheaders.py: Initializing test directory /tmp/testmyq6oa3l/319
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC successfully started
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC successfully started
MiniNode: Connecting to Bitcoin Node IP # 127.0.0.1:13552
MiniNode: Connecting to Bitcoin Node IP # 127.0.0.1:13552
Part 1: headers don't start before sendheaders message...
Assertion failed: False != True
Stopping nodes
Not cleaning up dir /tmp/testmyq6oa3l/319
Failed
stderr:
  File "/home/travis/build/bitcoin/bitcoin/qa/rpc-tests/test_framework/test_framework.py", line 145, in main
    self.run_test()
  File "/home/travis/build/bitcoin/bitcoin/build/../qa/rpc-tests/sendheaders.py", line 291, in run_test
    assert_equal(inv_node.check_last_announcement(inv=[tip]), True)
  File "/home/travis/build/bitcoin/bitcoin/qa/rpc-tests/test_framework/util.py", line 529, in assert_equal
    raise AssertionError("%s != %s"%(str(thing1),str(thing2)))
Pass: False, Duration: 4 s
```
username_4: @username_3, is there a link to the failing travis run? The only run I see related to #9246 in the last 14 days at https://travis-ci.org/bitcoin/bitcoin/pull_requests seems to have finished successfully. I just tried running `sendheaders.py` myself in a loop on travis. It ran 328 times with no failures before timing out: https://travis-ci.org/username_4/bitcoin/builds/180127519
username_3: The travis link was invalidated when I restarted the build.
username_1: No follow ups in a year. Closing. Status: Issue closed
neosimsim/dotfiles
411421173
Title: Add guillemot Question: username_0: https://github.com/username_0/dotfiles/blob/a9cdab16f5ef878b78b978848212796c30140f7c/Xmodmap.symlink#L11
```
keysym comma = comma less guillemotleft
keysym period = period greater guillemotright
```
Status: Issue closed
oligot/go-mod-upgrade
1185174393
Title: support Multi-Module workspace upgrades Question: username_0: `go-mod-upgrade` does not support go1.18 workspace-enabled projects.

$ go-mod-upgrade
⨯ upgrade failed error=Error running go command to discover modules: exit status 1
stderr=go: -mod may only be set to readonly when in workspace mode, but it is set to "mod"
Remove the -mod flag to use the default readonly value, or set GOWORK=off to disable workspace mode.

I have a multi-module workspace repo.
```
$ go list -json -m
{
        "Path": "github.com/tz/schannel",
        "Main": true,
        "Dir": "/Users/schintha/Developer/Work/go/schannel",
        "GoMod": "/Users/schintha/Developer/Work/go/schannel/go.mod",
        "GoVersion": "1.18"
}
{
        "Path": "github.com/tz/schannel/apps/lockboard",
        "Main": true,
        "Dir": "/Users/schintha/Developer/Work/go/schannel/apps/lockboard",
        "GoMod": "/Users/schintha/Developer/Work/go/schannel/apps/lockboard/go.mod",
        "GoVersion": "1.18"
}
{
        "Path": "github.com/tz/schannel/cmd/agent",
        "Main": true,
        "Dir": "/Users/schintha/Developer/Work/go/schannel/cmd/agent",
        "GoMod": "/Users/schintha/Developer/Work/go/schannel/cmd/agent/go.mod",
        "GoVersion": "1.18"
}
{
        "Path": "github.com/tz/schannel/service/engine",
        "Main": true,
        "Dir": "/Users/schintha/Developer/Work/go/schannel/service/engine",
        "GoMod": "/Users/schintha/Developer/Work/go/schannel/service/engine/go.mod",
        "GoVersion": "1.18"
}
```
`GOWORK=off go-mod-upgrade` can only update the root module. Looking forward to a `Multi-Module` upgrade feature.
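Supporting workspaces would mean handling the fact that `go list -json -m` emits a stream of concatenated JSON objects (one per module, as shown above) rather than a single JSON array. A minimal sketch of parsing such a stream, in Python purely for illustration (go-mod-upgrade itself is written in Go, and this is not its actual implementation):

```python
import json

def parse_module_stream(output):
    """Parse the concatenated JSON objects printed by `go list -json -m`."""
    decoder = json.JSONDecoder()
    modules = []
    text = output.strip()
    idx = 0
    while idx < len(text):
        # raw_decode returns the object plus the index just past it
        obj, idx = decoder.raw_decode(text, idx)
        modules.append(obj)
        # skip whitespace between consecutive objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
    return modules
```

Each parsed object carries the module's `Dir`, which a workspace-aware tool could then visit (e.g. with `GOWORK=off` set) to upgrade that module's own go.mod.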
OpenITI/Annotation
625765299
Title: 0151IbnIshaq.Sira.Shamela0009862-ara1.completed Question: username_0:
**OLD URI:** 0151IbnIshaq.Sira.Shamela0009862-ara1.completed
**NEW URI:** 0151IbnIshaq.Sira.Shamela0009862-ara2.completed
**REASON:** Different edition from the Shia and JK version.
**NOTE:** Please paste the *old* URI into the header. If you think the death date or the author's name should be changed in all URIs of that author, please fill in the author URI only (e.g., 0255Jahiz). If you think something needs to change in the title of a book, in all versions, please fill in the book URI only (e.g., 0255Jahiz.Hayawan). Finally, if a URI change relates only to a specific version of the book, fill in the Version URI (e.g., 0255Jahiz.Hayawan.Shamela0000001-ara1).
Answers: username_0: 0151IbnIshaq.Sira.Shamela0009862-ara1 > 0151IbnIshaq.Sira.Shamela0009862-ara2
Status: Issue closed
percy/percy-cypress
929783549
Title: Snapshot failing with "Your callback function returned a promise that never resolved" Question: username_0: Percy snapshot seems to be failing with the following error
```
CypressError: `cy.then()` timed out after waiting `15000ms`. Your callback function returned a promise that never resolved.

The callback function was:

async () => {
  if (Cypress.config('isInteractive') &&
      !Cypress.config('enablePercyInteractiveMode')) {
    return cylog('Disabled in interactive mode', {
      details: 'use "cypress run" instead of "cypress open"',
      name
    });
  }

  // Check if Percy is enabled
  if (!await utils.isPercyEnabled()) {
    return cylog('Not running', { name });
  }

  // Inject @percy/dom
  if (!window.PercyDOM) {
    // eslint-disable-next-line no-eval
    eval(await utils.fetchPercyDOM());
  }

  // Serialize and capture the DOM
  return cy.document({ log: false }).then(dom => {
    let domSnapshot = window.PercyDOM.serialize({ ...options, dom });

    // Post the DOM snapshot to Percy
    return utils.postSnapshot({
      ...options,
      environmentInfo: ENV_INFO,
      clientInfo: CLIENT_INFO,
      domSnapshot,
      url: dom.URL,
      name
    }).then(() => {
      // Log the snapshot name on success
      cylog(name, { name });
    }).catch(error => {
      // Handle errors
      log.error(`Could not take DOM snapshot "${name}"`);
      log.error(error);
    });
  });
}

https://on.cypress.io/then
```
**Code:**
```
cy.visit('/')
cy.url().should('include', '/auth/signup')
cy.percySnapshot()
```
Answers: username_1: This is also happening on our end, and it started about two days ago. We first thought that it could have been due to upgrading from `1.0.0-beta.54` to `1.0.0-beta.56`, but quickly realized that reverting back to `1.0.0-beta.54` did not fix the issue. Any advice?
username_2: Thanks for the issue! The most helpful thing would be logs from the test run (`--verbose`). Weird to see this, I hope there isn't another networking error in Cypress (like 7.0 to 7.2 had in #325). It's either hanging on serializing the DOM _or_ `POST`'ing the DOM to the local Percy server.
username_1: I hope I am not doing anything wrong on my end, but the `--verbose` tag has not helped. Command used: `percy exec --verbose -- cypress run --headless "--spec" "specFile"`
```
CypressError: `cy.then()` timed out after waiting `4000ms`. Your callback function returned a promise that never resolved.

The callback function was:

dom => {
  let domSnapshot = window.PercyDOM.serialize({ ...options, dom });

  // Post the DOM snapshot to Percy
  return utils.postSnapshot({
    ...options,
    environmentInfo: ENV_INFO,
    clientInfo: CLIENT_INFO,
    domSnapshot,
    url: dom.URL,
    name
  }).then(() => {
    // Log the snapshot name on success
    cylog(name, { name });
  }).catch(error => {
    // Handle errors
    log.error(`Could not take DOM snapshot "${name}"`);
    log.error(error);
  });
}

https://on.cypress.io/then
    at http://localhost:3001/__cypress/runner/cypress_runner.js:136215:24
    at tryCatcher (http://localhost:3001/__cypress/runner/cypress_runner.js:10798:23)
    at http://localhost:3001/__cypress/runner/cypress_runner.js:5920:41
    at tryCatcher (http://localhost:3001/__cypress/runner/cypress_runner.js:10798:23)
    at Promise._settlePromiseFromHandler (http://localhost:3001/__cypress/runner/cypress_runner.js:8733:31)
    at Promise._settlePromise (http://localhost:3001/__cypress/runner/cypress_runner.js:8790:18)
    at Promise._settlePromise0 (http://localhost:3001/__cypress/runner/cypress_runner.js:8835:10)
    at Promise._settlePromises (http://localhost:3001/__cypress/runner/cypress_runner.js:8911:18)
    at _drainQueueStep (http://localhost:3001/__cypress/runner/cypress_runner.js:5505:12)
    at _drainQueue (http://localhost:3001/__cypress/runner/cypress_runner.js:5498:9)
    at Async.../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:3001/__cypress/runner/cypress_runner.js:5514:5)
    at Async.drainQueues (http://localhost:3001/__cypress/runner/cypress_runner.js:5384:14)
```
username_0: On running with `--verbose` I could see the following extra logs apart from the ones posted above
```
(node:13894) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'end' of undefined
    at IncomingMessage.request.on (*/testFilePath/node_modules/@percy/core/dist/server.js:88:69)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
(node:13894) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:13894) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
username_3: yup, happening here as well.
```
cy.then() timed out after waiting 4000ms. Your callback function returned a promise that never resolved.

The callback function was:

dom => {
  let domSnapshot = window.PercyDOM.serialize({ ...options, dom });

  // Post the DOM snapshot to Percy
  return utils.postSnapshot({
    ...options,
    environmentInfo: ENV_INFO,
    clientInfo: CLIENT_INFO,
    domSnapshot,
    url: dom.URL,
    name
  }).then(() => {
    // Log the snapshot name on success
    cylog(name, { name });
  }).catch(error => {
    // Handle errors
    log.error(`Could not take DOM snapshot "${name}"`);
    log.error(error);
  });
}
```
any updates?
username_2: Can we get a full log output? What @username_0 provided was the closest (there's a real error there), but having full logs would help figure out what is causing the error.
username_2: Also what version of Node is everyone using? The error lines up with changes in an unsupported version of Node (we support only the current LTS, which is Node 12+ right now).
![image](https://user-images.githubusercontent.com/2072894/123671168-97e80980-d803-11eb-9049-642a5ec7d0f4.png)
```
Cannot read property 'end' of undefined
at IncomingMessage.request.on
```
username_1: We are using `v14.16.1` on our end
username_3:
```
     | ^
  56 |       let domSnapshot = window.PercyDOM.serialize({ ...options, dom });
  57 |
  58 |       // Post the DOM snapshot to Percy
```
username_2:
```
(node:28987) ExperimentalWarning: The fs.promises API is experimental
[percy] Successfully downloaded Chromium 812851
[percy] Percy has started!
[percy] Running "cypress run"
====================================================================================================

  (Run Starting)

  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Cypress:    7.6.0                                                                              │
  │ Browser:    Electron 89 (headless)                                                             │
  │ Specs:      1 found (todo_spec.js)                                                             │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘

────────────────────────────────────────────────────────────────────────────────────────────────────

  Running:  todo_spec.js                                                                    (1 of 1)

  TodoMVC
2021-06-28 11:55:54.179 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
2021-06-28 11:55:54.179 Cypress Helper (Renderer)[29404:116121] CoreText note: Set a breakpoint on CTFontLogSystemFontNameRequest to debug.
(node:28987) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'end' of undefined
    at IncomingMessage.request.on ( /examples/example-percy-cypress/node_modules/@percy/core/dist/server.js:88:69)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
(node:28987) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:28987) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
    1) Loads the TodoMVC app
2021-06-28 11:55:58.838 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
    ✓ With no todos, hides main section and footer (145ms)
2021-06-28 11:55:58.989 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
2021-06-28 11:55:59.328 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
(node:28987) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'end' of undefined
    at IncomingMessage.request.on ( /examples/example-percy-cypress/node_modules/@percy/core/dist/server.js:88:69)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
(node:28987) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
    2) Accepts a new todo
2021-06-28 11:56:03.945 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
2021-06-28 11:56:04.330 Cypress Helper (Renderer)[29404:116121] CoreText note: Client requested name ".AppleSymbolsFB", it will get Times-Roman rather than the intended font. All system UI font access should be through proper APIs such as CTFontCreateUIFontForLanguage() or +[NSFont systemFontOfSize:].
(node:28987) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'end' of undefined
    at IncomingMessage.request.on ( /examples/example-percy-cypress/node_modules/@percy/core/dist/server.js:88:69)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
(node:28987) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 3)
    3) Lets you check off a todo

  1 passing (16s)
  3 failing

  1) TodoMVC
       Loads the TodoMVC app:
     CypressError: `cy.then()` timed out after waiting `4000ms`. Your callback function returned a promise that never resolved.

The callback function was:

async () => {
[Truncated]
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @ test: `start-server-and-test start:server 8000 percy:cypress`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the @ test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /.npm/_logs/2021-06-28T16_56_09_410Z-debug.log
```
I can only see that error (which matches what @username_0 shared a snippet of) on Node 11.9.x or earlier:
```
$ node -v
v11.9.0
```
Seems to strongly suggest it's a Node version issue (or the version of node being used has been altered _maybe_?). A reproduction would be really helpful since I can't get it to break with that error without going down to an unsupported version of Node.
username_3: @username_2 sorry I can't share the code as it's a private repo, but here's the build gist
```
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      **- uses: actions/setup-node@v2
        with:
          node-version: '14'**
      - name: Get yarn cache directory path
        id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - uses: actions/cache@v2
        id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`)
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      # we use the exact restore key to avoid Cypress binary snowballing
      # https://glebbahmutov.com/blog/do-not-let-cypress-cache-snowball/
      - name: Cache Cypress binary
        uses: actions/cache@v1
        with:
          path: ~/.cache/Cypress
          key: cypress-${{ runner.os }}-cypress-${{ github.ref }}-${{ hashFiles('**/package.json') }}
          restore-keys: |
            cypress-${{ runner.os }}-cypress-${{ github.ref }}-${{ hashFiles('**/package.json') }}
      - name: Install dependencies and verify Cypress
        env:
          # make sure every Cypress install prints minimal information
          CI: 1
        run: |
          yarn install
          yarn cypress cache path
          yarn cypress cache list
          yarn cypress verify
          yarn cypress info
      **- name: Cypress run
        uses: cypress-io/github-action@v2
        with:
          record: true
          build: yarn build
          start: yarn serve
          wait-on: http://localhost:3000
          command: yarn cross-env PERCY_TOKEN=${{secrets.PERCY_TOKEN_WWW}} percy exec -- cypress run --record --key ${{ secrets.CYPRESS_RECORD_KEY_WWW }}
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY_WWW }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}**
```
username_3: @username_2 by the way, funny part is screenshots are being uploaded to percy, it's just the tests that are failing
username_4: Having same issue with cypress 5.6.0 and node 14.16.0. Need this to be fixed asap.
username_2: A reproduction would be the most helpful, given I can't reproduce on Node 12+. Using the example app with Node 14.16.1: https://github.com/percy/example-percy-cypress/runs/2934658148?check_suite_focus=true Using the example app with Node 11.9.x fails with the error provided in this issue: https://github.com/percy/example-percy-cypress/runs/2934668779?check_suite_focus=true The error really seems to suggest it's an issue with the version of Node being used. I can't break it in the same way (yet?) & there's only been one log posted with an error stack trace, so I'm assuming everyone is hitting this error:
```
(node:28987) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'end' of undefined
    at IncomingMessage.request.on ( /examples/example-percy-cypress/node_modules/@percy/core/dist/server.js:88:69)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
```
username_2: Published a new version of the CLI which will hard exit if it detects a version of Node that's not supported: https://github.com/percy/cli/releases/tag/v1.0.0-beta.57 Curious if that errors or not. If not, I'm keen to get a reproduction or more details on _how_ to reproduce this issue 👀
username_4: This is what I have in logs in latest version `v1.0.0-beta.57` [logs.txt](https://github.com/percy/percy-cypress/files/6728719/logs.txt)
username_3: here's my latest update
1. node version **14.16.0**
2. "@percy/cli": **"^1.0.0-beta.57"**
3. ran with verbose `yarn cross-env PERCY_TOKEN=${{secrets.PERCY_TOKEN_WWW}} percy exec --verbose -- cypress run`

results:
```
1) / take a percy snapshot:
     CypressError: `cy.then()` timed out after waiting `4000ms`.
Your callback function returned a promise that never resolved.

The callback function was:

dom => {
  let domSnapshot = window.PercyDOM.serialize({ ...options, dom });

  // Post the DOM snapshot to Percy
  return utils.postSnapshot({
    ...options,
    environmentInfo: ENV_INFO,
    clientInfo: CLIENT_INFO,
    domSnapshot,
    url: dom.URL,
    name
  }).then(() => {
    // Log the snapshot name on success
    cylog(name, { name });
  }).catch(error => {
    // Handle errors
    log.error(`Could not take DOM snapshot "${name}"`);
    log.error(error);
  });
}

https://on.cypress.io/then
    at http://localhost:3000/__cypress/runner/cypress_runner.js:136215:24
    at tryCatcher (http://localhost:3000/__cypress/runner/cypress_runner.js:10798:23)
    at http://localhost:3000/__cypress/runner/cypress_runner.js:5920:41
    at tryCatcher (http://localhost:3000/__cypress/runner/cypress_runner.js:10798:23)
    at Promise._settlePromiseFromHandler (http://localhost:3000/__cypress/runner/cypress_runner.js:8733:31)
    at Promise._settlePromise (http://localhost:3000/__cypress/runner/cypress_runner.js:8790:18)
    at Promise._settlePromise0 (http://localhost:3000/__cypress/runner/cypress_runner.js:8835:10)
    at Promise._settlePromises (http://localhost:3000/__cypress/runner/cypress_runner.js:8911:18)
    at _drainQueueStep (http://localhost:3000/__cypress/runner/cypress_runner.js:5505:12)
    at _drainQueue (http://localhost:3000/__cypress/runner/cypress_runner.js:5498:9)
    at Async.../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:3000/__cypress/runner/cypress_runner.js:5514:5)
    at Async.drainQueues (http://localhost:3000/__cypress/runner/cypress_runner.js:5384:14)

From Your Spec Code:
    at Context.eval (http://localhost:3000/__cypress/tests?p=cypress/support/index.js:150:40)
```
all the tests that fail have the same logs, then:
```
[percy] Stopping percy...
[percy] Finalized build #9: https://percy.io/f916ec5a/www.get***.com/builds/11202278
[percy] Done!
verbose 422.144894774 Error: Command failed with exit code 17.
    at ProcessTermError.ExtendableBuiltin (/usr/local/lib/node_modules/yarn/lib/cli.js:721:66)
    at ProcessTermError.MessageError (/usr/local/lib/node_modules/yarn/lib/cli.js:750:123)
    at new ProcessTermError (/usr/local/lib/node_modules/yarn/lib/cli.js:790:113)
    at /usr/local/lib/node_modules/yarn/lib/cli.js:34550:30
    at Generator.throw (<anonymous>)
    at step (/usr/local/lib/node_modules/yarn/lib/cli.js:310:30)
    at /usr/local/lib/node_modules/yarn/lib/cli.js:323:13
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
error Command failed with exit code 17.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Error: The process '/usr/local/bin/yarn' failed with exit code 17
```
I've also noticed it always fails on the same pages. Hope this helps @username_2.
username_4: For me the number of failed tests is different. Sometimes it varies from 31 to 35. So my guess is something wrong with percy servers not with the code....
username_2: @username_4 your issue isn't at all related -- looks like you have a duplicate snapshot:
```
[percy:core] Error: The name of each snapshot must be unique, and this name already exists in the build: 'Login page filled out - 2' -- You can fix this by passing a 'name' param when creating the snapshot. See the docs for more info on identifying snapshots for your specific client: https://percy.io/docs
```
if the tests are retried, that won't work with Percy (for now, in the future we'll be updating the SDKs to handle retries).
username_2: @username_3 hm, that's interesting. I don't think we're talking about the same error as the original issue. I'm going to rename this one to be about the Node error (`TypeError: Cannot read property 'end' of undefined`). There aren't any other logs that provide a stack trace from Percy or anything? This is pretty generic and coming from Yarn 🤔
```
verbose 422.144894774 Error: Command failed with exit code 17.
    at ProcessTermError.ExtendableBuiltin (/usr/local/lib/node_modules/yarn/lib/cli.js:721:66)
    at ProcessTermError.MessageError (/usr/local/lib/node_modules/yarn/lib/cli.js:750:123)
    at new ProcessTermError (/usr/local/lib/node_modules/yarn/lib/cli.js:790:113)
    at /usr/local/lib/node_modules/yarn/lib/cli.js:34550:30
    at Generator.throw (<anonymous>)
    at step (/usr/local/lib/node_modules/yarn/lib/cli.js:310:30)
    at /usr/local/lib/node_modules/yarn/lib/cli.js:323:13
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
error Command failed with exit code 17.
```
username_4: @username_2 I have more than 30 failing tests and others don't have this problem. And these tests have been working for ages without any changes on our side
username_2: With the logs you provided, it's because `cy.percySnapshot('Login page filled out - 2')` was called twice.
username_4: Here is another test failure [logs2.txt](https://github.com/percy/percy-cypress/files/6728903/logs2.txt) Do all tests fail because of this? But in the code `cy.percySnapshot('Login page filled out - 2')` is called only once.
username_3: for everyone having this issue, add the following line to *cypress.json*:
{
  "projectId": "....",
  "defaultCommandTimeout": 10000 <------------ add this
}
this solves the problem. @username_2 pretty sure its related to the cli
username_2: Let's open another issue, the original issue here was addressed (https://github.com/percy/percy-cypress/issues/367#issuecomment-868945251) These all have the same Cypress error, but any response that doesn't resolve from the CLI will have that Cypress error -- it's generic. See #371
Status: Issue closed
desktop/desktop
316725024
Title: 1 Question: username_0: <!-- First and foremost, we’d like to thank you for taking the time to contribute to our project. Before submitting your issue, please follow these steps: 1. Familiarize yourself with our contributing guide: * https://github.com/desktop/desktop/blob/master/.github/CONTRIBUTING.md#contributing-to-github-desktop 2. Check if your issue (and sometimes workaround) is in the known-issues doc: * https://github.com/desktop/desktop/blob/master/docs/known-issues.md 3. Make sure your issue isn’t a duplicate of another issue 4. If you have made it to this step, go ahead and fill out the template below --> ## Description <!-- Provide a detailed description of the behavior you're seeing or the behavior you'd like to see **below** this comment. --> ## Version <!-- Place the version of GitHub Desktop you have installed **below** this comment. This is displayed under the 'About GitHub Desktop' menu item. If you are running from source, include the commit by running `git rev-parse HEAD` from the local repository. --> * GitHub Desktop: <!-- Place the version of your operating system **below** this comment. The operating system you are running on may also help with reproducing the issue. If you are on macOS, launch 'About This Mac' and write down the OS version listed. If you are on Windows, open 'Command Prompt' and attach the output of this command: 'cmd /c ver' --> * Operating system: ## Steps to Reproduce <!-- List the steps to reproduce your issue **below** this comment ex, 1. `step 1` 2. `step 2` 3. `and so on…` --> ### Expected Behavior <!-- What you expected to happen --> ### Actual Behavior <!-- What actually happens --> ## Additional Information <!-- Place any additional information, configuration, or data that might be necessary to reproduce the issue **below** this comment. If you have screen shots or gifs that demonstrate the issue, please include them. 
If the issue involves a specific public repository, including the information about it will make it easier to recreate the issue. If you are dealing with a performance issue or regression, attaching a Timeline profile of the task will help the developers understand the runtime behavior of the application on your machine. https://github.com/desktop/desktop/blob/master/docs/contributing/timeline-profile.md --> ### Logs <!-- Attach your log file (You can simply drag your file here to insert it) to this issue. Please make sure the generated link to your log file is **below** this comment section otherwise it will not appear when you submit your issue. macOS logs location: `~/Library/Application Support/GitHub Desktop/logs/*.desktop.production.log` Windows logs location: `%APPDATA%\GitHub Desktop\logs\*.desktop.production.log` The log files are organized by date, so see if anything was generated for today's date. --> Status: Issue closed
MEGA65/mega65-user-guide
526695342
Title: VIC-IV documentation: hot registers Question: username_0: According to Paul's advice, requesting detailed documentation of all hot registers/bits and the exact effect on other native VIC-IV registers when writing them. Answers: username_1: List of stuff to be added:
- CHARPTR MSB **is not** reset by HOTREG write! (see [mega65-core#308](https://github.com/MEGA65/mega65-core/issues/308) and [mega65-core#368](https://github.com/MEGA65/mega65-core/issues/368))
- SPRPTR MSB **is** reset by HOTREG write! (see [mega65-core#267](https://github.com/MEGA65/mega65-core/issues/267))
aplbrain/saber
497687300
Title: Webserver docker container constantly restarting Question: username_0: Newest version of webserver container is constantly restarting once SABER is installed, possibly due to the use of deprecated functionality. Log trace: webserver_1 | DeprecationWarning, webserver_1 | Traceback (most recent call last): webserver_1 | File "/usr/local/bin/airflow", line 22, in <module> webserver_1 | from airflow.bin.cli import CLIFactory webserver_1 | File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 68, in <module> webserver_1 | from airflow.www_rbac.app import cached_app as cached_app_rbac webserver_1 | File "/usr/local/lib/python3.6/site-packages/airflow/www_rbac/app.py", line 25, in <module> webserver_1 | from flask_appbuilder import AppBuilder, SQLA webserver_1 | File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/__init__.py", line 5, in <module> webserver_1 | from .base import AppBuilder webserver_1 | File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/base.py", line 5, in <module> webserver_1 | from .api.manager import OpenApiManager webserver_1 | File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/api/__init__.py", line 11, in <module> webserver_1 | from marshmallow_sqlalchemy.fields import Related, RelatedList webserver_1 | File "/usr/local/lib/python3.6/site-packages/marshmallow_sqlalchemy/__init__.py", line 1, in <module> webserver_1 | from .schema import TableSchemaOpts, ModelSchemaOpts, TableSchema, ModelSchema webserver_1 | File "/usr/local/lib/python3.6/site-packages/marshmallow_sqlalchemy/schema.py", line 3, in <module> webserver_1 | from .convert import ModelConverter webserver_1 | File "/usr/local/lib/python3.6/site-packages/marshmallow_sqlalchemy/convert.py", line 36, in <module> webserver_1 | class ModelConverter: webserver_1 | File "/usr/local/lib/python3.6/site-packages/marshmallow_sqlalchemy/convert.py", line 52, in ModelConverter webserver_1 | postgresql.MONEY: fields.Decimal, webserver_1 | 
AttributeError: module 'sqlalchemy.dialects.postgresql' has no attribute 'MONEY' webserver_1 | /usr/local/lib/python3.6/site-packages/airflow/logging_config.py:99: DeprecationWarning: task_log_reader setting in [core] has a deprecated value of 'file.task', but no handler with this name was found. Please update your config to use 'task'. Running config has been adjusted to match Status: Issue closed
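The `AttributeError` above is the kind of failure you get when the installed SQLAlchemy predates `sqlalchemy.dialects.postgresql.MONEY` (the attribute appears to have been introduced around SQLAlchemy 1.2), so `marshmallow-sqlalchemy` blows up at import time and the container crash-loops. A quick diagnostic sketch (Python; `check_money_support` is a made-up helper name, not part of any library) that could be run inside the webserver container:

```python
def check_money_support():
    """Return True/False for whether sqlalchemy exposes postgresql.MONEY,
    or None if SQLAlchemy is not importable at all."""
    try:
        from sqlalchemy.dialects import postgresql
    except ImportError:
        return None
    return hasattr(postgresql, "MONEY")


if __name__ == "__main__":
    result = check_money_support()
    if result is False:
        # Likely cause of the crash-loop: SQLAlchemy too old for
        # marshmallow-sqlalchemy's converter table.
        print("postgresql.MONEY missing -- SQLAlchemy may be too old")
    else:
        print("postgresql.MONEY check:", result)
```

If it reports that `MONEY` is missing, pinning a newer SQLAlchemy release inside the image would be the likely fix.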
mschubert/clustermq
341384622
Title: Consider leaving the global seed alone in workers() Question: username_0: Looking through the code and tests, I am glad to see that `clustermq` values reproducibility with respect to pseudo-randomness. Along those lines, would it be possible to keep `workers()` from modifying the `.Random.seed` of the calling R session? ```r library(digest) tmp <- rnorm(1) digest(.Random.seed) #> [1] "8f6cbceb3880ed84a3f699cd296605a9" options(clustermq.scheduler = "multicore") library(clustermq) digest(.Random.seed) #> [1] "8f6cbceb3880ed84a3f699cd296605a9" w <- workers(2, data = list(seed = .Random.seed)) #> Submitting 2 worker jobs (ID: 7921) ... digest(.Random.seed) #> [1] "f2aaa2a70102e8447637854974d48a8e" ``` `drake::make()` tries to preserve `.Random.seed` by building targets inside `withr::with_seed()`. [This unit test](https://github.com/ropensci/drake/blob/e99f96cb63a118ab087508a28645af54094ea852/tests/testthat/test-random.R#L19-L28) succeeds for all parallel backends except the new `make(parallelism = "clustermq_staged")` functionality. Answers: username_1: It took me a while to understand the issue here, because I first assumed I inadvertently used `set.seed(...)` in multicore processing, which then ends up changing the seed on the master. However, a change of `.Random.seed` seems to happen every time we sample from any distribution, e.g. when generating data tokens to identify the `common_data` being set at the master. I'm happy to be convinced that this is something we should control for, but as it stands I would rather put the responsibility on whichever tool is running `clustermq`. username_0: What role do these tokens play? Could a deterministic `digest()` hash do the job instead? On the other hand, I suppose even then `clustermq` calls `sample()` to select ports. From the user's side, would it be sufficient to wrap `workers()` and `Q()` with `withr::with_preserve_seed()`, or does `clustermq` have other side effects on the seed? 
username_1: Yes, that should be sufficient. Status: Issue closed username_0: Actually, when I wrap `clustermq` functions in `withr::with_preserve_seed()`, I get "Address already in use" errors. Now, I understand why. I think we really do want the seed to change. Not a problem for `drake`, though, because it has its own way of setting seeds for targets.
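For reference, the save/restore pattern that `withr::with_preserve_seed()` implements can be sketched like this — in Python rather than R, using the standard-library RNG as a stand-in for `.Random.seed`:

```python
import random
from contextlib import contextmanager


@contextmanager
def preserve_seed():
    # Snapshot the global RNG state, let the wrapped code draw from it,
    # then restore the snapshot -- the same idea as with_preserve_seed().
    state = random.getstate()
    try:
        yield
    finally:
        random.setstate(state)
```

Note that, as the thread points out, this only helps when the wrapped code has no *other* side effects tied to the random draws — restoring the state around `clustermq`'s port selection is exactly what produced the "Address already in use" errors.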
konvajs/konva
382694304
Title: Update versions in konva-node package Question: username_0: The version numbers specified in the konva-node package are rather old: "dependencies": { "canvas": "^1.6.7", "konva": "^1.7.3" } This just caught me out as I'm using node.js / npm for automated testing and have been experiencing bugs that were actually fixed long ago. Is there any plan to update these to the latest versions? Thanks for all your work on this project by the way 👍 Answers: username_1: Updated. Status: Issue closed username_0: That's great I can see the updated version on npm, thank you. The code on github still shows the old version numbers: https://github.com/konvajs/konva/blob/master/konva-node/package.json but perhaps this doesn't matter?
Zeex/samp-plugin-crashdetect
803503377
Title: My server only connects with crashdetect Question: username_0: This error appears right on the call. ``` [08:39:15] Long callback execution detected (hang or performance issue) [08:39:15] AMX backtrace: [08:39:15] #0 00009ea0 in Opcode:UnrelocateOpcode (Opcode:opcode=-147921579) at C:\Users\Pichau\Desktop\SA-MP\dependencies\Pawn.RakNet\src\..\..\amx_assembly\opcode.inc:744 [08:39:15] #1 00010c74 in CodeScanRunFastPrescanRelocate (&proc=@02585824 -5947932, &nextaddr=@02585820 -5951856, searchFuncAddr=43916) at C:\Users\Pichau\Desktop\SA-MP\dependencies\Pawn.RakNet\src\..\..\amx_assembly\codescan.inc:827 [08:39:15] #2 000110d0 in bool:CodeScanRunFast (csState[CodeScanner:164]=@02586040, searchFuncAddr=43916) at C:\Users\Pichau\Desktop\SA-MP\dependencies\Pawn.RakNet\src\..\..\amx_assembly\codescan.inc:867 [08:39:15] #3 00012ed4 in AddressofResolve () at C:\Users\Pichau\Desktop\SA-MP\dependencies\Pawn.RakNet\src\..\..\amx_assembly\addressof_jit.inc:130 [08:39:15] #4 000137b0 in public AMX_OnCodeInit () at C:\Users\Pichau\Desktop\SA-MP\dependencies\YSI-Includes\YSI_Data\y_foreach\..\..\YSI_Coding\y_va\..\..\YSI_Core\y_core\y_thirdpartyinclude.inc:367 [08:39:15] #5 00003098 in public Debug_OnCodeInit () at C:\Users\Pichau\Desktop\SA-MP\dependencies\YSI-Includes\YSI_Data\y_foreach\..\..\YSI_Core\y_core\y_amx_impl.inc:206 [08:39:15] #6 000029ec in public ScriptInit_OnCodeInit () at C:\Users\Pichau\Desktop\SA-MP\dependencies\YSI-Includes\YSI_Data\y_foreach\..\..\YSI_Core\y_core\y_debug_impl.inc:644 [08:39:15] #7 00001524 in bool:ScriptInit_CallOnCodeInit (bool:jit=false, bool:fs=false) at C:\Users\Pichau\Desktop\SA-MP\dependencies\YSI-Includes\YSI_Data\y_foreach\..\..\YSI_Core\y_core\y_scriptinit_impl.inc:375 [08:39:15] #8 00001e48 in public OnGameModeInit () at C:\Users\Pichau\Desktop\SA-MP\dependencies\YSI-Includes\YSI_Data\y_foreach\..\..\YSI_Core\y_core\y_scriptinit_impl.inc:624 ``` When I don't crashdetect plugins in server.cfg, GameMode won't load. 
Answers: username_1: Looks like that backtrace appears because the init of the YSI-Includes is taking up too much time. As far as I know, Y_Less recently made use of a feature available in crashdetect, so it won't report those code init callbacks anymore, as they always take too long to execute (lots of things are done there, so that can't really be improved). It is hard to tell why your server won't load without crashdetect, but there are two things to try:
1. Try loading your server without any filterscripts and see if that works. If it works, it means that there's a crash when initializing one of the filterscripts, so you would have to debug that. If it still doesn't work, try debugging your `OnGameModeInit`. You can do some basic debugging by placing lots of `print` calls in your code. After every 15 lines or so in your callback, you could add some kind of `OnGameModeInit 1/2/3/[and so on, each one being an increment of the last one]`. If the server goes into your `OnGameModeInit` but won't come out of it without crashing, then the problem is somewhere between the last `print` that got its output written to the `server_log.txt` file and the next `print`, which didn't get written there. If you find an interval like that, try adding more `print` calls between those lines, to find the exact line that is crashing.
2. If none of the `print`s mentioned in the previous step are displayed, including a `print` placed on the first line of your `OnGameModeInit`, it probably means that it is crashing in one of your libraries' `OnGameModeInit` hooks. [Maybe you have a pretty old YSI-Includes library? Try updating it. Old library versions can be unstable, so it is worth a try.](https://github.com/pawn-lang/YSI-Includes) If it still doesn't work (after you manage to successfully upgrade the version), try updating more of your libraries. This one may take more time if you are currently using ancient library versions.
Converting from ancient versions to the latest versions may require plenty of code changes. You may also need to do the `print` debugging in your libraries.
oracle-quickstart/oci-quickstart
604269338
Title: use clean action for update-listing Question: username_0: Use dockerfile and entrypoint.sh from here: https://github.com/oracle-quickstart/oci-quickstart/tree/dev_ben_lackey/actions/update-listing Scope action down to just update a stack (not handle images, not create new) Use . instead of / Refactor python to be a single file and drop unused functionality. Drop comment block and start of code. Move main entry point to a main function. Refactor global variables as local. Ensure json is handled natively, not as strings. Simplify variable and function names to ensure readability. Answers: username_1: please split this into separate issues. some of these are by design Status: Issue closed
w3c/encrypted-media
52408519
Title: Initialization Data Types SHOULD be supported independent of content types and other capabilities Question: username_0: The Get Supported Configuration algorithm checks `initDataTypes` first. If an implementation was capable of supporting, for example, `"keyids"` and `"cenc"` but `"keyids"` was only supported for `"*/webm"`, a `MediaKeySystemConfiguration` that contained the following would fail because none of the videoCapabilities supports both of the supported `initDataTypes`. ``` initDataTypes: ["keyids", "cenc"], videoCapabilities: [ { contentType: "video/mp4; codecs=avc1.42E01E" } ] ``` Even if the following was added to the `videoCapabilities` sequence, the configuration would still fail unless `"video/webm`" can be used with `"cenc"`. ``` { contentType: "video/webm; codecs='vp9'" } ``` Moving the `initDataTypes` check to the end of the algorithm does not solve this problem. To solve this, we would need to significantly complicate the already complex algorithm. Fortunately, this should not be a problem in practice since license request generation, which implements Initialization Data Type support, and demuxing, which extracts the Initialization Data, are independent of the codecs. The only scenario that might be problematic is if different codecs were supported by different pipelines that supported different Initialization Data Types. We should avoid adding complexity for such cases and instead recommend that implementations not restrict Initialization Data Types to specific content types and non-normatively explain the consequences. Answers: username_1: I am not sure what the proposed text would look like here. Is the implication that a UA is required to be able to demux every possible initData type from every contentType? username_0: The intended implication was that the CDM must support all `initDataTypes` for which a positive result is returned _regardless of the content being decoded_. 
As I noted above, this should not be a problem since the license request generation is independent of decryption. You bring up a good point about demuxing the initData in the UA. I think the implication is that the UA supports demuxing all the listed initData types from the containers that support them. For now, there is a 1:1 pairing for WebM and MP4. (It doesn't mean that the UA must be able to parse "webm" from an MP4 or "keyids" from WebM.) If a UA added support for a new BMFF protection scheme, it seems reasonable to expect it to support both demuxing of, and license generation based on, the initData format for that scheme.
username_1: Ok, I agree with the recommendation. Hopefully this will not be an issue in practice but we should avoid it anyway. Status: Issue closed
SaturnFramework/Saturn
334068755
Title: Reasoning for tupled handler args controller update/delete? Question: username_0: Reasoning for tupled handler args controller update/delete? Makes it a bit nasty to do nice generic things over all these handlers. If there's no objection I'd like to have it as normal curried args. Answers: username_1: Yeah, @isaacabraham pointed this out in #57. I'd be fine with the change. Status: Issue closed
IHE/publications
745845284
Title: Sidebar navigation improvements Question: username_0: Tracking of sidebar navigation improvements - add deeper levels - add dynamic open/close of sub-levels Answers: username_1: The current rendering of the sidebar (and the TOC) exposes an issue with the section numbering and organization that could potentially be confusing to users. 1. The level 2 headers for the transactions are not listed in the sidebar (3.71, 3.72, etc.) 2. Due to their placement in the document, the MHD Profile and the ATNA Profile are listed under Volume 2b - Transactions. (May want to move Volume 1 info under Volume 1 and leave Volume 2 info under Volume 2. The way it is now, Volume 1 and 2 info appears under Volume 2.) username_0: For 1., do you mean https://ihe.github.io/publications/ITI/TF/Volume2/index.html? My current plan is to make it look like https://ihe.github.io/publications/ITI/TF/Volume1/index.html with callouts for - Introduction - Transactions - Appendices Where the transactions will be like the profiles in the Volume 1 index. username_1: Sorry, for the IUA supplement.
Taxel/PlexTraktSync
628071528
Title: astimezone() cannot be applied to a naive datetime Question: username_0: When I run this sync I get this:
```
Processing section Movies
Traceback (most recent call last):
  File ".\main.py", line 324, in <module>
    main()
  File ".\main.py", line 305, in main
    section, trakt_watched_movies, ratings, listutil, trakt_movie_collection)
  File ".\main.py", line 117, in process_movie_section
    m.mark_as_seen(seen_date.astimezone(datetime.timezone.utc))
ValueError: astimezone() cannot be applied to a naive datetime
```
Then the script stops. Any idea what this could mean? Status: Issue closed Answers: username_1: duplicate of #34
OpenChemistry/avogadroapp
978081693
Title: QtGui directory Question: username_0: Hi, I found that at least one file `mainwindow.cpp` requires the presence of `avogadro/qtgui` folder: ``` avogadroapp-1.95.0/avogadro/mainwindow.cpp:34:10: fatal error: avogadro/qtgui/layermodel.h: No such file or directory 34 | #include <avogadro/qtgui/layermodel.h> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. make[2]: *** [avogadro/CMakeFiles/avogadro.dir/build.make:163: avogadro/CMakeFiles/avogadro.dir/mainwindow.cpp.o] Error 1 make[1]: *** [CMakeFiles/Makefile2:118: avogadro/CMakeFiles/avogadro.dir/all] Error 2 make: *** [Makefile:156: all] Error 2 ``` Could you please put it in the distribution? <NAME>. Answers: username_1: You need to compile both `avogadroapp` and `avogadrolibs` together - this is why they are released together. http://two.avogadro.cc/build.html Status: Issue closed
termux/x11-packages
386545234
Title: RedNoteBook Question: username_0: <!-- Important note: If you don't provide information 'why this package is needed' then it's importance will be determined by repository owner ;) --> **Package description** RedNotebook is a modern desktop journal. It lets you format, tag and search your entries. You can also add pictures, links and customizable templates, spell check your notes, and export to plain text, HTML, Latex or PDF. I do not have a clue whether some of the requirements are satisfiable at all (for example GtkSourceView). **Link to home page and sources** 1. Home page: https://rednotebook.sourceforge.io/ 2. Source code: https://github.com/jendrikseipp/rednotebook Answers: username_1: Main problem here is that *rednotebook* is a python package with dependencies like `python-gobject`. Status: Issue closed
runelite/runelite
374760175
Title: info panels for Gem Mining? Question: username_0: Anything about info panels for Gem Mining? Maybe how many gems of each type that got mined/deposited(underground/above Shilo Village gems) Answers: username_1: We have #4126 for tracking stuff like this Status: Issue closed username_0: umm i dont see the feature of using for Gem mining? username_2: i believe he means something more like how mlm shows gems recieved username_0: ^
holoviz/panel
1071373024
Title: Height of Str pane very small when object is none or empty string. Question: username_0: Panel: 0.12.5 I am trying to output multiple results to multiple panels. Some of these results can be None or "". I want to show a loading indicator while (re-)calculating the results. In some cases the loading indicator did not work. I became aware it's because it was very, very small. More specifically, the `Str` pane height is small if its object is `None` or `""`. I would have expected the height to be the same. ![image](https://user-images.githubusercontent.com/42288570/144734273-a2bb49d0-b205-46bb-a009-3dd94176bc15.png)
```python
import panel as pn

pn.extension(sizing_mode="stretch_width")

pn.Column(
    "# None",
    pn.pane.Str(object=None, loading=True, background="lightgray", margin=25),
    "# Empty String",
    pn.pane.Str(object="", loading=True, background="lightgray", margin=25),
    "# One Space",
    pn.pane.Str(object=" ", loading=True, background="lightgray", margin=25),
    "# Some Text",
    pn.pane.Str(object="Some Text", loading=True, background="lightgray", margin=25),
).servable()
```
## Workaround For now I will catch `None` and `""` values and just set it to `" "` instead. Status: Issue closed
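The workaround described above can be wrapped in a tiny helper (a sketch; `safe_str` is a made-up name, not part of Panel's API):

```python
def safe_str(value):
    # Map None and "" to a single space so pn.pane.Str keeps its normal
    # line height and the loading overlay stays visible.
    if value is None or value == "":
        return " "
    return value
```

Usage would then look like `pn.pane.Str(object=safe_str(result), loading=True)`.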
operator-framework/operator-sdk
575581806
Title: Controller fails on empty manifest files from a Helm Chart Question: username_0: ## Bug Report <!-- Note: Make sure to first check the prerequisites that can be found in the main README file! Thanks for filing an issue! Before hitting the button, please answer these questions. Fill in as much of the template below as you can. If you leave out information, we can't help you as well. --> **What did you do?** For each of the following cases I did these steps: 1. Created an Operator from a local helm chart: `$ operator-sdk new my-operator --type=helm --helm-chart <local_helm_chart_path>` 2. Deployed the CRD `$ kubectl create -f deploy/crds/<resource>_crd.yaml` 3. Started local operator `$ operator-sdk run --local` 4. Create the CR `kubectl create -f deploy/crds/<resource>_cr.yaml` Case 1: A file containing a resource is empty due to a conditional * myvalue is not set ``` {{ if .Values.myvalue }} # entire resource goes here # ... {{ end }} ``` Case 2: There is an empty manifest from `---` markers ``` --- --- # valid resources go here # ... ``` **What did you expect to see?** * The controller ignores the empty resources **What did you see instead? Under which circumstances?** Some of the resources were deployed but I saw the following error messages repeatedly printed... 
Case 1: ``` {"level":"error", "ts":1583340099.203393, "logger":"helm.controller", "msg":"Release failed", "namespace":"default", "name":"example-product", "apiVersion":"domain.com/v1alpha1", "kind":"Product", "release":"example-product", "error":"failed to install release: no objects visited", "stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/helm/controller.HelmOperatorReconciler.Reconcile\n\tsrc/github.com/operator-framework/operator-sdk/pkg/helm/controller/reconcile.go:194\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"} ``` Case 2: ``` [Truncated] go version go1.13.8 darwin/amd64 ```` * Kubernetes version information: ``` $ kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe<KEY>2967d0<PASSWORD>1e6c0c4<PASSWORD>", GitTreeState:"clean", BuildDate:"2019-10-15T23:43:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.9", GitCommit:"<KEY> <PASSWORD>e2ea08d2<PASSWORD>33ca61<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-02-07T22:35:02Z", 
GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"} ``` * Are you writing your operator in ansible, helm, or go? ``` $ helm version version.BuildInfo{Version:"v3.0.1", GitCommit:"<PASSWORD>", GitTreeState:"clean", GoVersion:"go1.13.4"} ``` **Possible Solution** * If a manifest is empty then do not attempt to reconcile it in the cluster **Additional context** Answers: username_1: Hi @username_0, Could you please provide and `-helm-chart <local_helm_chart_path>` which we can use to reproduce your scenario in order to make it more clear? Have you any example from the [stable](https://github.com/helm/charts/tree/master/stable/memcached) repo which would allow us to check/reproduce it? Also, what do you mean with `https://github.com/helm/charts/tree/master/stable/memcached`? Could you please elaborate more? username_2: @username_0 Case 1 is actually pretty common in Helm charts. In fact, we coincidentally test this case in the SDK's helm-operator e2e test (check out the ingress template generated from [this command](https://github.com/operator-framework/operator-sdk/blob/5104f530ea7bbfb50f4d2d5888cbdc2f26786906/hack/tests/e2e-helm.sh#L129-L133) in the test). So I'm fairly certain the Helm operator supports disabling entire resources. Your error seems to indicate that the release contained no resources at all. And in that case, it would make sense to fail an installation. For case 2, this looks like a legitimate bug. My guess is that it is originating from [here](https://github.com/operator-framework/operator-sdk/blob/ad1d4d1953abf22c336f67198fc23a489c0f4fdb/pkg/helm/controller/controller.go#L127). We parse each YAML document in the release manifest and assume that we'll find the object meta. In the case of your chart it looks like there are some empty documents, which parse "successfully", but result in an empty object. 
Any chance you would be interested in submitting a PR to add some extra sanity checks in that function to make sure that we have a non-empty GVK? username_0: Hi @username_1, Sorry for the late response. These errors were not from the stable repo but from a custom helm chart. Here are examples of the Charts: **Case 1:** Chart.yaml: ``` apiVersion: v1 name: my-helm-chart description: A helm chart for me type: application version: 1.0.0 appVersion: 2.3.0 ``` Values.yaml: ``` resourceProvided: true ``` templates/resource.yaml: ``` {{ if .Values.resourceProvided }} apiVersion: v1 kind: Service metadata: labels: name: {{ .Release.Name }}-service namespace: {{ .Release.Namespace }} spec: ports: - name: port-8443 port: 8443 protocol: TCP targetPort: 8443 selector: name: {{ .Release.Name }} type: ClusterIP {{ end }} ``` **Case 2:** Chart.yaml: ``` apiVersion: v1 name: my-helm-chart description: A helm chart for me type: application version: 1.0.0 appVersion: 2.3.0 ``` Values.yaml: ``` exposeService: true ``` templates/resource.yaml: ``` apiVersion: v1 kind: Service metadata: labels: name: {{ .Release.Name }}-label name: {{ .Release.Name }}-service [Truncated] {{ if .Values.exposeService }} apiVersion: v1 kind: Service metadata: labels: name: {{ .Release.Name }}-exposed-label name: {{ .Release.Name }}-service-exposed namespace: {{ .Release.Namespace }} spec: ports: - name: port-8443 port: 8443 protocol: TCP targetPort: 8443 selector: name: {{ .Release.Name }}-exposed-label type: LoadBalancer {{ end }} --- ``` username_3: /assign username_0: Hi @username_2, thanks for the information and the "kind/bug" label. I don't currently have the cycles to work on this issue. Thanks @username_3 for taking this up! If you need more information from me I will do my best to assist you. username_3: @username_0 When trying to reproduce CASE 1, with below environment, I see that operator shows error in the creation step itself, as shown below. 
``` $ operator-sdk new my-operator --type=helm --helm-chart=my-helm-chart/ INFO[0000] Creating new Helm operator 'my-operator'. INFO[0000] Created helm-charts/my-helm-chart INFO[0000] Generating RBAC rules WARN[0000] Using default RBAC rules: failed to generate RBAC rules: failed to get default manifest: failed to render chart templates: template: my-helm-chart/templates/tests/test-connection.yaml:14:65: executing "my-helm-chart/templates/tests/test-connection.yaml" at <.Values.service.port>: nil pointer evaluating interface {}.port INFO[0000] Created build/Dockerfile INFO[0000] Created watches.yaml INFO[0000] Created deploy/service_account.yaml INFO[0000] Created deploy/role.yaml INFO[0000] Created deploy/role_binding.yaml INFO[0000] Created deploy/operator.yaml INFO[0000] Created deploy/crds/charts.helm.k8s.io_v1alpha1_myhelmchart_cr.yaml INFO[0000] Generated CustomResourceDefinition manifests. INFO[0000] Project creation complete. ``` Was curious, whether you have seen similar error while creation. 
Later, I encountered the below error after deployment of the `cr`:
```
{"level":"info","ts":1586186301.702219,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"myhelmchart-controller"}
{"level":"info","ts":1586186301.7022479,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"myhelmchart-controller","worker count":1}
{"level":"error","ts":1586186373.333587,"logger":"helm.controller","msg":"Release failed","namespace":"default","name":"example-myhelmchart","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"MyHelmChart","release":"example-myhelmchart","error":"failed to install release: template: my-helm-chart/templates/tests/test-connection.yaml:14:65: executing \"my-helm-chart/templates/tests/test-connection.yaml\" at <.Values.service.port>: nil pointer evaluating interface {}.port","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/helm/controller.HelmOperatorReconciler.Reconcile\n\tsrc/github.com/operator-framework/operator-sdk/pkg/helm/controller/reconcile.go:196\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}
```
P.S.
```
$ operator-sdk version
operator-sdk version: "v0.16.0", commit: "<PASSWORD>", go version: "go1.14 darwin/amd64"
```
```
$ helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"<PASSWORD>", GitTreeState:"clean", GoVersion:"go1.13.8"}
```
username_2: @username_0 @username_3 Just want to reiterate [my previous comment](https://github.com/operator-framework/operator-sdk/issues/2622#issuecomment-604225508) that I think something slightly different is happening with case 1 than what is described in the issue.

The error message you got (`failed to install release: no objects visited`) is actually coming from the Helm libraries [here](https://github.com/helm/helm/blob/c12a9aee02ec07b78dce07274e4816d9863d765e/pkg/kube/client.go#L313-L315). That error means that there were 0 resources in the release manifest, so there must be something wrong with your templates or CR values that is causing nothing to be rendered.

I think we can fix case 2 by checking to make sure the GVK is not empty.
username_4: @username_2 we have a partner seeking operator certification that is also seeing case #2. In their particular case, the chart installs and also renders fine using helm3. Putting the same chart into an operator yields case #2, which we believe is when the deployment templates get rendered.

The following conditional code appears to be tripping up the helm-operator:
```
{{/* Generates deployment specs for CR controller services. These are stateless. */}}
{{- range include "k10.crServices" . | splitList " " }}
{{ $service := print "crc-" . }}
{{ dict "main" $ "k10_service" $service "persistence.enabled" false "replicas" 1 | include "k10-default" }}
{{- end }}
```
The chart name is `k10` and can be installed from the repo at https://charts.kasten.io.
username_3: @username_4 Can you please recheck the charts link above? I will try to reproduce.
username_4: @username_3 Sorry about that. That link is a helm repo, and not browser accessible.
The following helm commands will add the repo and then fetch the chart in tar.gz format:
```
$ helm repo add kasten https://charts.kasten.io
$ helm fetch kasten/k10
```
username_3: @username_4 What is your helm version and sdk version?
username_5: The helm chart is helm2; the sdk is 0.15.2 (0.16.0 requires a helm3 chart). This chart works fine with helm3 and helm2.
username_2: Really? If so, that was an unexpected breakage. Do you have details?
username_5: Unfortunately yes, it looks for Charts.lock, which is available only in helm 3 since requirements were moved to Chart.yaml. I can create the issue tomorrow with details.
username_5: Filed https://github.com/operator-framework/operator-sdk/issues/2806
username_3: @username_5 Can you please try this repo for SDK, and let us know if it fixes Case 2 for you? https://github.com/username_3/operator-sdk/tree/issue%232622
username_5: @username_3 I hope I did everything right to build and package operator-sdk, but I am still getting:
```
2020-04-09T00:10:33.996Z DEBUG predicates Reconciling due to dependent resource update {"name": "prometheus-server", "namespace": "kasten", "apiVersion": "v1", "kind": "PersistentVolumeClaim"}
2020-04-09T00:10:38.214Z ERROR helm.controller Failed to run release hook {"namespace": "kasten", "name": "k10", "apiVersion": "apik10.kasten.io/v1alpha1", "kind": "K10", "release": "k10", "error": "no matches for kind \"\" in version \"\""}
github.com/go-logr/zapr.(*zapLogger).Error
	pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/operator-framework/operator-sdk/pkg/helm/controller.HelmOperatorReconciler.Reconcile
	src/github.com/operator-framework/operator-sdk/pkg/helm/controller/reconcile.go:210
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
2020-04-09T00:10:38.215Z ERROR controller-runtime.controller Reconciler error {"controller": "k10-controller", "request": "kasten/k10", "error": "no matches for kind \"\" in version \"\""}
github.com/go-logr/zapr.(*zapLogger).Error
	pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
2020-04-09T00:10:39.215Z DEBUG helm.controller Reconciling {"namespace": "kasten", "name": "k10", "apiVersion": "apik10.kasten.io/v1alpha1", "kind": "K10"}
E0409 00:10:40.351418 1 memcache.go:199] couldn't get resource list for actions.kio.kasten.io/v1alpha1: the server is currently unable to handle the request
```
username_5: @username_3 I'm getting this error right after fully
rendered chart:
```
2020-04-09T00:28:34.440Z ERROR controller-runtime.controller Reconciler error {"controller": "k10-controller", "request": "kasten/k10", "error": "Operation cannot be fulfilled on k10s.apik10.kasten.io \"k10\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
	/repo/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/repo/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/repo/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/repo/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/repo/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/repo/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/repo/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
```
username_5: Good news: I can see Deployments in the logs now.
username_3: @username_5 Great, so did I understand correctly that your issue is getting fixed when you use the SDK repo from this branch? https://github.com/username_3/operator-sdk/tree/issue%232622
username_5: Yes, one of them :) there is still another error :(
username_2: @username_5 That seems like a completely separate issue. Can you file a separate GitHub issue for that one?
username_5: @username_2 @username_3 https://github.com/operator-framework/operator-sdk/issues/2815
username_4: OK I've found the bug in the chart that @username_5 is using. It's the exact case #2.
There was a template file, v0services.yaml, inside the chart that had the following lines:
```
---
{{ end -}}
{{/*aggregatedapis-svc can be exposed via gateway later is required.*/}}
---
```
I've removed the comment and the latter `---` from the file, and the operator no longer throws the error, and it renders/prints the chart to stdout as normal.

@username_2 As you mentioned, I think the fix for helm-operator is to ignore input files without a valid GVK. This is definitely an issue with the chart that the standalone helm binary seems to ignore.
Status: Issue closed
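For illustration, the "ignore documents without a valid GVK" idea that closes this thread can be sketched outside of operator-sdk. This is not the SDK's actual code, just a stdlib-only Python sketch of the filtering logic; a real implementation would parse the YAML properly and inspect the decoded object's group/version/kind.

```python
def has_gvk(doc: str) -> bool:
    """Heuristic check: does this YAML document declare apiVersion and kind?

    Illustration only; a real implementation would decode the YAML and
    look at the resulting object's group/version/kind.
    """
    keys = set()
    for line in doc.splitlines():
        stripped = line.strip()
        # Ignore blank lines and comment-only lines.
        if not stripped or stripped.startswith("#"):
            continue
        if stripped.startswith("apiVersion:") or stripped.startswith("kind:"):
            keys.add(stripped.split(":", 1)[0])
    return {"apiVersion", "kind"} <= keys


def filter_manifest(manifest: str) -> list:
    """Split a multi-document manifest on '---' and drop documents that
    render to nothing but comments/whitespace (i.e. have no GVK)."""
    return [d for d in manifest.split("\n---") if has_gvk(d)]
```

With a manifest like the v0services.yaml fragment above, the middle "document" is only a template comment, so it has no GVK and is dropped instead of producing the `no matches for kind "" in version ""` error.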
Bulisor/BabylonjsDemo
514750449
Title: General Questions
Question:
username_0: Hello, I got to your repo [from](https://forum.babylonjs.com/t/babylonjs-in-reactnative/5420/12) and found it very useful. I'm currently migrating a [mobile-web game](https://designwithfriends.com/) to Expo and started using your example.

1. I've seen that the pinch gesture works a bit off on iOS devices, but when I change `(this._scale * 60)` to `(this._scale * 0.1)` it gets better. This is just a "magic number" I found that seems to work better than 60; not 100% sure what it stands for (I just started with React Native a day ago). I wonder what your thoughts are about that. Have you tested it on iOS devices?
2. I refactored SceneTemplate from JS to TS; wondered if that's something you need as well. If so, I'll be happy to create a PR.
Status: Issue closed
quorrajs/Ouch
346889883
Title: npm audit reports a Prototype Pollution
Question:
username_0: Hello, 'npm audit' reports a 'Prototype Pollution' issue due to your 'lodash' < 4.17.5 dependency. Would you mind checking?
```
$ npm audit

=== npm audit security report ===

┌──────────────────────────────────────────────────────────────────────────────┐
│ Manual Review │
│ Some vulnerabilities require your attention to resolve │
│ │
│ Visit https://go.npm.me/audit-guide for additional guidance │
└──────────────────────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Low │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=4.17.5 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ quasar-cli [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ quasar-cli > ouch > lodash │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://nodesecurity.io/advisories/577 │
└───────────────┴──────────────────────────────────────────────────────────────┘

found 1 low severity vulnerability in 13536 scanned packages
1 vulnerability requires manual review. See the full report for details.
```
Many thanks
Cheers, Francesco
Answers:
username_0:
```
[snip]
31 passing (105ms)
```
I've only run the unit tests; should I do a pull request anyway?
Cheers, Francesco
username_1: So...is this going to be fixed? I have had the same problem for over a month...
username_0: Dear username_1, I did some testing on a real-world app and my pull request doesn't break anything. As for the other question, I think it won't be fixed anytime soon.
I'm using my own repo for this reason.
username_2: @username_0 @username_1 Sorry, I didn't get time to look into this. I will work on it this weekend.
username_0: @username_2 Thank you, sir. Please let me know if you need any further information or help.
Status: Issue closed
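For reference, the fix on Ouch's side amounts to raising the lodash floor to the patched version from the advisory above. A sketch of the relevant `package.json` fragment (the caret range is illustrative):

```json
{
  "dependencies": {
    "lodash": "^4.17.5"
  }
}
```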
SwiftGen/SwiftGen
894077453
Title: Make output file scoped to Public? Question: username_0: I am using SwiftGen to manage Colors in an Asset catalog in a Framework. I'd like to expose those colors in the public scope. In other words, I'd like the swiftgen output file to say ``` public enum Asset { public enum Color { public enum Font { public static let body = ColorAsset(name: "Color/Font/body") ... } } } ``` instead of ``` internal enum Asset { internal enum Color { internal enum Font { internal static let body = ColorAsset(name: "Color/Font/body") ... } } } ``` I didn't notice a way to do this in the config file documentation. It's possible that I have overlooked it. Status: Issue closed Answers: username_1: Check this discussion about exactly the same thing: https://github.com/SwiftGen/SwiftGen/discussions/833 Essentially: update your configuration to set a template variable for public access. username_0: Great. I appreciate the reply.
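For future readers, the template-variable approach from the linked discussion looks roughly like this in `swiftgen.yml`. The paths are illustrative; `publicAccess` is the parameter supported by SwiftGen's bundled templates (verify against your SwiftGen version):

```yaml
xcassets:
  inputs:
    - Sources/MyFramework/Assets.xcassets   # illustrative path
  outputs:
    - templateName: swift5
      params:
        publicAccess: true   # emits `public enum Asset` instead of `internal`
      output: Generated/Assets.swift
```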
jdb78/pytorch-forecasting
976285769
Title: FileNotFoundError at Colab
Question:
username_0: - PyTorch-Forecasting version: 0.9.0
- PyTorch version: 1.9.0+cu102
- Python version: 3.7.11
- Operating System: Colab

### Expected behavior
I executed the code from the documentation [Demand forecasting with the Temporal Fusion Transformer](https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html) and **it works well on my local computer**.

### Actual behavior
However, when I run the code on Colab I encounter `FileNotFoundError: [Errno 2] No such file or directory: ''`

![image](https://user-images.githubusercontent.com/1643432/130342658-386a03df-7728-4645-b1b9-a516df2a058c.png)

### Code to reproduce the problem
[Link to a colab notebook](https://colab.research.google.com/drive/1ncVw1_oQJ_SsiWaZ83TSnWgWDYb9rfdP?usp=sharing)
Answers:
username_1: Hi, I had the same problem. My environment was as below:
- colab
- pytorch_lightning : 1.5.0dev (latest in colab)
- pytorch forecast : 0.9.0
- torchmetrics : 0.5

I tried several times with `runtime restart`, and only one time did it work alright under the same conditions. I also tried it in my local environment, and it worked alright every time.

local env:
- gpu : nvidia 1660ti
- pytorch_lightning : 1.4.5
- pytorch forecast : 0.9.0
- torchmetrics : 0.5
- pandas : 1.2.5

I don't know what the problem is, but I guess it is associated with an incompatible package or the GPU 😥 I hope my case can help you, or if someone knows what the problem is, please mention me. Thanks.
username_2: Same trouble, have you solved this?
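A defensive pattern that sidesteps the crash, assuming (as the empty-string path `''` suggests) that `trainer.checkpoint_callback.best_model_path` comes back empty on Colab because no checkpoint was written yet: guard the load and fall back to the in-memory model. The attribute and class names in the comments follow the Stallion tutorial but are assumptions, not verified against it.

```python
import os

def resolve_checkpoint(best_model_path, fallback):
    """Return a loadable checkpoint path, or the fallback object when the
    trainer never saved one. Passing '' straight to load_from_checkpoint
    is what raises FileNotFoundError: [Errno 2] ... : ''"""
    if best_model_path and os.path.exists(best_model_path):
        return best_model_path
    return fallback

# Hypothetical usage in the tutorial flow (not verified):
# best_model_path = trainer.checkpoint_callback.best_model_path
# source = resolve_checkpoint(best_model_path, tft)
# best_tft = (TemporalFusionTransformer.load_from_checkpoint(source)
#             if isinstance(source, str) else source)
print(resolve_checkpoint("", "in-memory-model"))  # → in-memory-model
```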
mozilla/readability
63553256
Title: Add support for querySelectorAll() to JSDOMParser?
Question:
username_0: That would ease performing batch operations quite a bit, e.g.
```js
[].forEach.call(doc.querySelectorAll("object,embed,iframe"), function(x) {x.remove()})
```
I suspect this is a bunch of work :(
Answers:
username_1: On the upside... I have a patch for this lying around. Will resurrect it at some point today or tomorrow. Not happy with how slow it is, though. :-)
username_1: https://github.com/username_1/readability/tree/implement-queryselectorall fwiw, need to rebase and clean up / improve perf.
username_0: Ha, great!
username_0: Any news on this front?
username_1: I'm working on the live DOM stuff performance, and am almost done there. I can look at this next week, but be aware that my patch only implements selecting of an element or a set of elements using comma-separated id/class/tag selectors. It doesn't support child or descendant selectors, and adding those would complicate matters a lot. If you are doing something that requires this, I would look at / discuss alternative ways of doing this.
username_0: Ah, that's indeed something I was after, unfortunately. I get this is a lot of work… I've stumbled upon [this project](https://github.com/termi/CSS_selector_engine), I wonder if we could use it along with JSDOMParser? I suspect that won't work magically, though I'll check and play around with the idea.
Status: Issue closed
username_1: It's been nearly 4 years and we still haven't bothered doing this, so I'm going to assume we don't need/want it. I'd sooner get rid of the hack that is JSDOMParser in Firefox.
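The subset username_1 describes (comma-separated id/class/tag selectors, no child or descendant combinators) is small enough to sketch. The function below is only an illustration of that matching logic, written in Python for brevity; the actual patch is JavaScript against JSDOMParser's node objects.

```python
def matches_simple_selector_list(tag, node_id, class_list, selector):
    """True if a node (tag name, id, classes) matches any alternative in a
    comma-separated list of *simple* selectors: 'div', '#id', '.class'.
    Child/descendant combinators are deliberately unsupported, mirroring
    the limited querySelectorAll discussed above."""
    for alt in selector.split(","):
        alt = alt.strip()
        if not alt:
            continue
        if alt.startswith("#"):
            if node_id == alt[1:]:
                return True
        elif alt.startswith("."):
            if alt[1:] in class_list:
                return True
        elif tag.lower() == alt.lower():
            return True
    return False

# e.g. the removal batch from the question, "object,embed,iframe":
print(matches_simple_selector_list("IFRAME", None, [], "object,embed,iframe"))  # True
```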
sequelize/cli
611289147
Title: Sequelize CLI appears to silently fail when PG < 8
Question:
username_0: sequelize-cli appears to be silently failing (not doing anything) when the PG version is below 8.

Output with "pg": "^7.12.1":
```
Sequelize CLI [Node: 14.1.0, CLI: 5.5.1, ORM: 5.21.7]
Loaded configuration file "packages/backend/config/config.json".
Using environment "development".
(node:49105) [SEQUELIZE0004] DeprecationWarning: A boolean value was passed to options.operatorsAliases. This is a no-op with v5 and should be removed. (Use `node --trace-deprecation ...` to show where the warning was created)
```

Output with "pg": "^8.0.3":
```
Sequelize CLI [Node: 14.1.0, CLI: 5.5.1, ORM: 5.21.7]
Loaded configuration file "packages/backend/config/config.json".
Using environment "development".
(node:49105) [SEQUELIZE0004] DeprecationWarning: A boolean value was passed to options.operatorsAliases. This is a no-op with v5 and should be removed. (Use `node --trace-deprecation ...` to show where the warning was created)
Database testdb created.
```
Answers:
username_1: Closing my issue #896 as it appears to be identical to this one.

Expanding upon this -> sequelize.authenticate() hung indefinitely - may be related.
username_2: Duplicate of https://github.com/sequelize/sequelize/issues/12158
Status: Issue closed
elastic/apm
679036242
Title: Adding more config options to central config for backend agents Question: username_0: Let's add another batch of options, following up on #213. The criteria is: important options you'd regularly want to change on-the-fly. | Option | Alignment status | |----------------------------|------------------| | sanitize_field_names | Not available for Node.js. Semantics in Python and Ruby are slightly different | | span_min_duration | Only available for Java. Spec PR: #314 | | log_level | Not available for python, values not aligned | | transaction_ignore_urls | Aligned via #144 (7.10) | | transaction_max_spans | Aligned via #148 | For reference, here's the table of config options for all agents: https://docs.google.com/spreadsheets/d/1JJjZotapacA3FkHc2sv_0wiChILi3uKnkwLTjtBmxwU I'll create sub-issues for the options that are not aligned yet. Open questions - Does Kibana indicate which agent version is required to change a setting at runtime? - Milestone/prioritization. This is not entirely clear yet but let's look at it as stretch goals for 7.10 for the time being<issue_closed> Status: Issue closed
AnthonyKNorman/Xiaomi_LYWSD03MMC_for_HA
657477482
Title: I need to restart ESP 32
Question:
username_0: I have been testing the board for a few days and I have realized that at least once a day I have to restart the board because it is not sending a signal… If you find a solution to that, tell me. Regards
Answers:
username_1: Hi, first of all thanks for your work. It's amazing. I have the same problem: within max 24 hours it doesn't receive data, and I have to manually reset the board. Is it possible to implement a board reset every 12 hours (for example)? Thanks in advance
username_2: Same here, after setup, following the super clear instructions, it lasted about 18 hours before I needed to restart the board. ESP32 DEVKIT. Other than that, it works great, awesome job!
username_3: Please can you all try the versions now uploaded and let me know how you get on. Thanks
username_0: Hi Anthony, I congratulate you on the work you have done. Yesterday I reloaded the files to the ESP32, and it was working correctly for 14 hours; then it stopped sending information. While it was working I found that the information sending was more constant than with the previous configuration, but something happens that makes the system crash after a few hours. If you need more proof, here we are. Thank you
username_3: Are you able to copy information from the REPL that shows any errors after the crash?
username_0: I think that I will not be able to do it, since it is connected only to a 5V connector. Is there a way to see your device in ESPHome in HA? It would be great, so you could see the log.
username_0: Anthony, I left the esp32 running from the raspberry; now I'll wait for it to show an error and I'll send the report. Regards
username_2: I had it working for 48 hours before the need to reset; it is back now. Before the update, it never lasted more than 24 hours.
username_0: Hi Anthony, it worked perfectly for more than 48 hours connected directly to the Raspberry, but connected to a normal cell phone charging socket it only reached 14 hours.
What would be the ideal power source for an ESP32? In some parts of the script I see errors like this. What can it be? Thanks for your work.
```
Trying to connect to a4:c1:38:50:6e:17
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:50:6e:17 scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:50:6e:17
Error: Connect [Errno 114] EALREADY
self.connected False
connected
peripheral has disconnected.
65535 255 00:00:00:00:00:00
Trying to connect to a4:c1:38:50:6e:17
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:50:6e:17 scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:50:6e:17
Error: Connect [Errno 114] EALREADY
self.connected False
connected
peripheral has disconnected.
65535 255 00:00:00:00:00:00
Trying to connect to a4:c1:38:50:6e:17
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:50:6e:17 scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:50:6e:17
Error: Connect [Errno 114] EALREADY
self.connected False
connected
peripheral has disconnected.
65535 255 00:00:00:00:00:00
```
username_0: Hi Anthony, I have been testing your programming for several days with the integration of the Xiaomi thermometers on an ESP32. As a conclusion, the device behaves correctly connected directly to the Raspberry, but if the ESP32 is connected directly to the same 2 amp power source as the Raspberry, for some reason it loses connection and stops sending data to the mqtt after 24 hours of functioning. It's very strange. I wonder if an automatic restart can be integrated into the programming after 20 hours of operation, for example. Is this possible to do? Does the same thing happen to the rest of the chat?
Stay tuned. Thank you very much for your work.
![WhatsApp Image 2020-08-01 at 19 47 27](https://user-images.githubusercontent.com/13158743/89484134-d009d800-d76b-11ea-9a7e-28894f416f46.jpeg)
username_2: I can say that since the last update it is working better. I had it working for 2 weeks with no need to restart, but then it stops publishing and I need to power cycle to get it going again. I see in the code there is a machine.reset() if it fails to connect to MQTT. While it stops publishing the information (I have 4 Xiaomi) it doesn't seem to restart, so I presume it is still connected to MQTT.

My question is: without any modification of the existing code on the ESP32, is it possible to launch a command to mimic a lost connection from Home Assistant? This way I could force a restart when it stops publishing. My other solution would be to set up a D1 Mini with a relay that will power cycle the ESP32 when needed, but I think this is overkill.
username_3: If you can show me what comes up in the REPL then maybe I can understand what is causing the disconnect.
username_2: I have a small problem: my esp32 is connected to AC power and my only computer is a laptop that I need to take to work every day. At this time it stays connected for more than 24-48 hours. Is there a way to connect to the ESP without disconnecting it from the AC, or is there a saved log that is available?
username_2: From my previous posts, I was under the impression that I had to restart the ESP32 after a day, and then I found it worked for over a week or two. But now I realized the ESP stops publishing the Xiaomi values and then after a while it starts again without power cycling the ESP. However, out of the four devices I have, one stopped publishing and it's been a few days now. The Xiaomi is about 10 feet away from the ESP32 in the next room. The other 3 are in the same room, one floor down and two floors down.
Connecting to the ESP via PuTTY, this is what comes out for the one that I am not getting data from:
```
Trying to connect to a4:c1:38:6e:19:6b
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:6e:19:6b scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:6e:19:6b
Error: Connect [Errno 114] EALREADY
self.connected False
connected
peripheral has disconnected.
65535 255 00:00:00:00:00:00
Trying to connect to a4:c1:38:6e:19:6b
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:6e:19:6b scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:6e:19:6b
Error: Connect [Errno 114] EALREADY
self.connected False
connected
peripheral has disconnected.
65535 255 00:00:00:00:00:00
Trying to connect to a4:c1:38:6e:19:6b
GAP procedure initiated: connect; peer_addr_type=0 peer_addr=a4:c1:38:6e:19:6b scan_itvl=16 scan_window=16 itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16 max_ce_len=768 own_addr_type=0
self.connected False
Trying to connect to a4:c1:38:6e:19:6b
Error: Connect [Errno 114] EALREADY
self.connected False
IRQ peripheral connect
A peripheral has sent a notify request.
0 54 b'\x86\t3\xbc\x0b'
GAP procedure initiated: connection parameter update; conn_handle=0 itvl_min=12 itvl_max=24 latency=0 supervision_timeout=90 min_ce_len=16 max_ce_len=768
Reading Data
GATT procedure initiated: read; att_handle=3
.False
A gattc_read() has completed.
0 3 b'LYWSD03MMC\x00'
self.name LYWSD03MMC length 10
Got LYWSD03MMC
GAP procedure initiated: terminate connection; conn_handle=0 hci_reason=19
.connected
peripheral has disconnected.
0 0 a4:c1:38:6e:19:6b
Name: LYWSD03MMC
--------------------------------------------
```
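For anyone who wants the periodic self-reset discussed in this thread without waiting for an upstream fix, a minimal MicroPython sketch is below. This is not part of the project's code: the 20-hour interval is arbitrary, and `machine` is imported defensively so the decision logic can also run (and be tested) off-board.

```python
import time

try:
    import machine  # present on the ESP32, absent on a PC
except ImportError:
    machine = None

RESET_AFTER_MS = 20 * 60 * 60 * 1000  # reset every 20 hours

def reset_due(start_ms, now_ms, interval_ms=RESET_AFTER_MS):
    """Pure decision helper so the logic is testable off-board."""
    return (now_ms - start_ms) >= interval_ms

def maybe_reset(start_ms):
    # On the board, time.ticks_ms()/ticks_diff() should really be used so
    # the comparison survives the tick counter wrapping around.
    if machine is not None and reset_due(start_ms, time.ticks_ms()):
        machine.reset()  # hard reboot, same effect as the manual power cycle
```

Called from the main loop (e.g. `maybe_reset(boot_ms)` each iteration, with `boot_ms` recorded at startup), this reproduces the "restart every N hours" workaround several posters asked for.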
urbit/bridge
1121319839
Title: Various L2 ux, copy, design issues, feedback Question: username_0: Feel like none of these quite warrant their own issue at this time. Certainly making separate issues for each feels potentially overkill. Still, we should document them. ux - [ ] The big blue button on login screen is still so enticing, even though I know I just want to log in. I believe we had discussed toning this down. Did we end up deciding against that? imo we should consider coloring the "master ticket" option bright blue instead, what with it being the blessed, canonical flow and all. - [ ] Wonder if we should explain, on first visit (or anywhere really), why we ask to enable browser notifications. Might be strange to new/paranoid users. - [ ] We now have a loading spinner. This is nice, except that it's "full screen", dimming the whole page when it appears, however briefly. Whenever you navigate back to the point overview page, it spins for just a fraction of a second, causing an annoying screen-flashing effect. It doesn't require an epilepsy warning, but it's not pleasant either. - [ ] The "insufficient ETH" button currently doesn't help me resolve the situation. Previously, the insufficient eth warning (though displayed late, only _after_ trying to send the tx) would include the address that needed more funds (the one you were logged in as). Now, I have to exit the screen, potentially discarding my inputs, go back to the homepage/point screen, and copy my address from there. Including the address somewhere near the "insufficient ETH" button may prove more convenient. - [ ] We accept a gas price of `0` as a valid input. Maybe this is useful for dev environments? But on any real network it will just make the tx sending take forever, guaranteed never confirm. - [ ] We let the user navigate away from a pending tx without warning. Being able to move away is good, but we should consider a little `alert()` that explains that the tx can still happen in the background, etc. 
copy - [ ] The galaxy ops screen [says "planets" in place of "stars"](https://user-images.githubusercontent.com/3829764/152069679-c4048c92-f2db-4c68-b17d-bd31d2fc7ae3.png), for the residents/requests. - [ ] The transaction completion screens might need some additional copy. Currently, [they](https://user-images.githubusercontent.com/3829764/152068974-e55c4ea3-1002-498e-a764-1a58e9a990b5.png) feel [very bare](https://user-images.githubusercontent.com/3829764/152069001-2215540f-7893-49b3-80e4-0d19da497a47.png), leaving the naive user wondering what happened/if they're fine to leave. In some cases, [the input field remains](https://user-images.githubusercontent.com/3829764/152068990-cbe5e4a4-ad37-4dae-868c-456becc18934.png), which is also confusing. - [ ] Additionally, the value displayed on the completion screen may display the _old_ value for a little bit, before updating to reflect the changes. We should consider optimistically displaying the new value. After all, the tx _did_ confirm. behavior - [ ] On the master ticket login screen, when toggling "shards" mode, the shards fields should _replace_ the master ticket field, rather than [display alongside it](https://user-images.githubusercontent.com/3829764/152068245-d130e84a-bbf0-4565-8c98-47bb300a5c6e.png). Either you have the full master ticket, or the shards, but not both. - [ ] On the ID page, we can see and copy the ownership address and the proxies, [except the voting proxy](https://user-images.githubusercontent.com/3829764/152069384-72509f9d-2cab-40de-b735-1fc7d56e0634.png), which only gets an "edit" option at the bottom of the list, and requires clicking into that to view the current value. - [ ] The "OS" screen [only shows the current sponsor](https://user-images.githubusercontent.com/3829764/152069533-bfbb3fae-7cf9-463f-b59e-1351bed51485.png). If we have requested escape, we will only see this after clicking "change". 
- [ ] Additionally, the "change" screen there should give you the option of sending a new request, instead of only cancelling the current one. The way it works right now forces you to make two transactions in cases where one would suffice. design - [ ] The welcome message on the point overview page [causes a rather wide spacing gap](https://user-images.githubusercontent.com/3829764/152068364-2a5e115c-cd2c-47cd-b4c1-2e33e6120941.png). This could probably be tightened up a bit. - [ ] On the star lockup screen, the section selection/tab and other elements on the screen [are put together a little too closely](https://user-images.githubusercontent.com/3829764/152070009-2ab4c45c-8115-4dd6-a89b-2b7811414e3b.png) compared to the rest of Bridge. - [ ] On the star lockup transfer acceptance screen, the header margin [appears to be missing](https://user-images.githubusercontent.com/3829764/152070096-1db9bab0-7393-4118-91b4-d6351fad9c7c.png).
monarch-initiative/mondo
1184977475
Title: [Merge]
Question:
username_0: **Mondo term (ID and Label)**
MONDO:0006661 ascorbic acid deficiency
MONDO:0009412 scurvy

**Reason for deprecation**
MONDO has different terms for Ascorbic Acid Deficiency and Scurvy, with no relation between them that I can see. I am out of my depth here, but I notice many sources treat these as the same. Does anybody understand why they're considered distinct in MONDO?

**Your nano-attribution (ORCID)**
0000-0002-9491-7674 - <NAME>
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
Answers:
username_1: MESH has 2 different entries, D001206 (Ascorbic acid deficiency) and D012614 (scurvy). According to MESH:
- Ascorbic acid deficiency frequently develops into SCURVY in young children fed unsupplemented cow's milk exclusively during their first year. It also commonly develops in chronic alcoholism.
- Scurvy: severe deficiency of vitamin C

From what I can tell, the difference between these 2 terms is the severity of the deficiency. If we decide to keep these terms separate, we should make 'scurvy' a child of 'ascorbic acid deficiency' (though we would have a single-child issue).
raiden-network/raiden
515476905
Title: Add synchronization for setting the room listener and the fetching of new messages.
Question:
username_0: ## Overview

This is a rough outline of how the synchronization is performed in the matrix transport:

1. During start up, if it is not the first run, the previously created user account is reused. Because there may be new invites, a [call to `sync` is performed](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/utils.py#L445); this is, however, limited so that [no messages are fetched](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/utils.py#L439-L447). The rationale here is that the room callback is not installed, and if a message were fetched it would be discarded.
1. The matrix events are fetched by a `listener_thread`, [requested here](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L404) and spawned [here](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L288).
1. This `listen_forever` thread is a [hot long-polling loop](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L236-L276) that uses the matrix [`sync` api](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L405).
1. When the matrix server returns a response, the client [spawns a thread to handle the messages](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L414-L425).
At any point in time there is only [one thread processing requests](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L409-L411).
5. The `sync` logic is completely overwritten, and the handling is customized in the Raiden `GMatrixClient` implementation.

## Problems

Point `4` seems buggy. As far as I understand from the explanation I got from @andrevmatos, the new thread to process the messages is necessary because calls to `sync` are used by Matrix to determine if a node is online; if a node takes too long to call `sync` again, its presence status will change to `offline`. This can happen when:

- `sync` returns
- the process thread is spawned; the goal is to make the next sync as fast as possible
- `sync` returns a second time
- the process thread is not finished, therefore the next sync will be delayed and the node may go offline.

Points `1` and `5` are at odds. Either the code for `5` has the same problem as `1` and needs additional synchronization, or the problem for `1` does not exist. From what I see there are additional races when new rooms are created. New rooms are created by [the `_get_room_for_address` function](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L986-L989); this function is currently only used when a message has to be [sent to the partner](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L934), and a room is created if none exists yet. Once the room is created and the partner has joined, the [handler is installed](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L1031-L1032).
This is a race condition just like the one in the initialization: because there is no synchronization between `_sync` and the installation of the room handler, it is possible for a message to be fetched before the handler is installed. Restarts don't have this problem because during `1` messages are simply not fetched; messages are fetched only after listeners are installed by [`_inventory_rooms`](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L637-L656), called [during the start](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L398).

The code handling invites doesn't have a problem. The [invite](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L447-L449) is processed [before the messages](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/client.py#L483-L492). [The invite listener](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L334) is installed during the transport construction, so there is no race condition there. This listener will join the room and [install the message listener](https://github.com/raiden-network/raiden/blob/1ff6e6e22017b16a22fcef0e23535d5c41c617cc/raiden/network/transport/matrix/transport.py#L745-L746), so after processing the invite it is perfectly fine for the room messages to be processed. (This is okay because there is only one thread to process the matrix events, therefore it is known that the processing of the invite and the room messages are ordered.)
Status: Issue closed
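The ordering problem described in this issue can be reduced to a small sketch (hypothetical code, none of these names exist in Raiden): if a message can be delivered before the room listener is installed, it is silently dropped, so the fix amounts to a happens-before edge between installing the handler and the first fetch.

```python
# Hypothetical reduction of the race described above. A message
# delivered before add_listener() runs is simply lost, which is
# exactly the dropped-message bug.
class Room:
    def __init__(self):
        self.listeners = []
        self.received = []

    def add_listener(self, callback):
        self.listeners.append(callback)

    def deliver(self, message):
        # Models the sync thread handing a fetched message to the room.
        # With no listeners installed, the message is discarded.
        for callback in self.listeners:
            callback(message)

def start_transport(room):
    # The fix: install the handler *before* any message can be fetched.
    room.add_listener(room.received.append)
    room.deliver("first-message")  # models the first sync response
    return room.received

print(start_transport(Room()))  # ['first-message']
```

Reversing the two calls in `start_transport` reproduces the bug: the first message vanishes without an error.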
qiuxiang/react-native-amap3d
368590336
Title: How can I change the location icon style? The default is a triangle with a pale purple circle around it. Can it be modified? Question: username_0:
<!--
Before reporting an issue:
- Provide the version you are using, the RN version, and your device information
- Describe the problem in as much detail as possible, ideally with a screenshot and reproducible code
- Before reporting a build problem, check the latest build tests first

An issue will be closed if:
- It is clearly an environment configuration problem and not enough information is provided
- The problem is not general and lacks discussion

Discussion is still welcome after an issue is closed.
-->
Status: Issue closed
Answers: username_1: Did you solve it, OP?
juliakorea/talks
723916670
Title: Array transpose Question: username_0: Hello,
I am trying to wrap MATLAB code in Julia. To compare the two one function at a time, I call the MATLAB functions from Julia (using MATLAB.jl) and compare them against pure Julia code.
One thing I noticed is that the mxcall function does not seem to recognize transposed arrays, e.g.:

```julia
1×10000 LinearAlgebra.Adjoint{Float64,Array{Float64,1}}:
```

If I write a simple custom function that just returns the length, it comes back as 1.
So after transposing I tried to convert the result back into a plain array instead of that wrapper type, following the examples at https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/:

```julia
julia> A = [1 2im; -3im 4]
2×2 Array{Complex{Int64},2}:
 1+0im  0+2im
 0-3im  4+0im

julia> T = transpose(A)
2×2 Transpose{Complex{Int64},Array{Complex{Int64},2}}:
 1+0im  0-3im
 0+2im  4+0im

julia> copy(T)
2×2 Array{Complex{Int64},2}:
 1+0im  0-3im
 0+2im  4+0im
```

But with a vector this is all I get:

```julia
a = Array(1:10)
10-element Array{Int64,1}:
 1
 2
 3
 4
 5
 6
 7
 8
 9
 10

b = transpose(a)
1×10 LinearAlgebra.Transpose{Int64,Array{Int64,1}}:
 1  2  3  4  5  6  7  8  9  10

c = copy(b)
1×10 LinearAlgebra.Transpose{Int64,Array{Int64,1}}:
 1  2  3  4  5  6  7  8  9  10
```

Is there a way to keep the Array{Float64} type even after transposing?
Answers: username_1: It may be a little slower, but try collect(T).
username_0: Ah, thank you, that works. But the type comes out with {,2}; should I lower the dimension with something like a[1,:]? Actually I only need to verify the results, and once they match I won't use the MATLAB functions anymore, so speed doesn't really matter.
username_1: That is because it is 2-dimensional. Wouldn't `length(collect(T))` work?
Status: Issue closed
username_0: Thank you!
kxgames/glooey
843756958
Title: main icon and title change Question: username_0: Hi, I am new to this library. Can we change the main logo/icon? Right now it's the Python kernel logo. And can we change the title of the GUI? Right now it shows the path to where it is on my PC.
![Capture](https://user-images.githubusercontent.com/50308147/112893260-706c8e00-90a8-11eb-93f5-174507a10aea.PNG)
Thank you, Akshay Answers: username_1: Sorry for the slow response. If I'm understanding you correctly, you want to change the title/icon of the GUI window? If so, you can do that using pyglet. See the [windowing](https://pyglet.readthedocs.io/en/latest/programming_guide/windowing.html#appearance) documentation. Status: Issue closed username_0: Thank you @username_1 .
auth0/auth0-spa-js
750671974
Title: Can't create auth0 client when refresh token is invalid (e.g. outdated) Question: username_0: ### Describe the problem When using `localStorage` and refresh tokens, invalid (e.g. outdated) refresh tokens cause `createAuth0Client` to throw an error. This leaves users of the library in an awkward state that is difficult to handle, requiring ugly workarounds, as users don't have a client instance to call logout and messing with the auth0 `localStorage` entry is discouraged. The current behaviour has been an issue for multiple users (#449, #499). Given the browser's tightening 3rd party cookie policies, it can be expected that more users will use the combination of `localStorage` and persisted refresh tokens for good UX. The library whose purpose is to authenticate users who are not authenticated should not explode during initialisation when the user is not authenticated. Expired refresh tokens should be considered one specific case of the user not being authenticated (anymore) instead of being an error preventing the library to initialise. ### What was the expected behavior? - `createAuth0Client` successfully builds an auth0 client and cleans up invalid tokens from `localStorage` during the process, so that the user is logged out. - `isAuthenticated` returns `false`, so that the application can call `loginWithPopup` or `loginWithRedirect`. - `getTokenSilently` throws `login_required` error, so that the application can handle the error and call `loginWithPopup` or `loginWithRedirect`. ### Reproduction - Configure library with options `{ cacheLocation: 'localstorage', useRefreshTokens: true, ...otherOptions }` - Log in with any account. - Wait until access token and refresh token are expired. To speed up testing, take the shortcut of editing the `localStorage` `auth0spajs` entry: Set the `expiresAt` attribute to the current unix timestamp, and replace at least one character of `refresh_token`. 
This procedure of course is not a regular use-case which the library should have to deal with, but it yields the same error as can be observed from regular user behaviour and is good for reproducing the issue quickly. The "organic" occurrence of this happens when the user logs into an app and then doesn't interact with it for the duration that the application's `Refresh Token Expiration - Absolute Lifetime` is set to. Without user interaction, the access token is not used and therefore the refresh token is not refreshed, even if using rotating refresh tokens.
- Reload the page (or, more frequently in SPAs: open a new tab), so that `createAuth0Client` is called.

At that point it can be observed that `createAuth0Client` throws an error and does not clean up `localStorage`. Reloading keeps yielding the same non-recoverable error. Without an auth0 client instance it's not possible to call `logout`, which would clean up `localStorage`. It is possible to create a client with `new Auth0Client()` and call `logout` on that, but at that point the consumer of the library has to know the errors which the library could throw, and wrap client initialisation in a couple of try-catches. That the library does not handle this is against user expectation and can be considered a bug or a feature request.

### Environment

Using `"@auth0/auth0-spa-js": "^1.12.0"` in a vue application. Tested in Chrome 85-87. As mentioned above, the relevant environment elements are that the library is used with `localStorage` and refresh tokens. The API must have `Allow Offline Access` enabled so that the refresh token is stored in `localStorage`.

Answers: username_1: Thanks for raising this. We are aware of this problem and are actively tracking it on our board. I expect us to look into this pretty soon and will get back to you. Leaving this issue open in the meantime.
Status: Issue closed
username_2: Hi @username_1, since the PR was merged, does this mean `getTokenSilently()` isn't going to throw `login_required` and `invalid_grant` anymore? I have work in progress handling those errors manually in the UI and offering the user `loginWithRedirect`.
username_1: Hi, `login_required` should still be thrown, as you can see here: https://github.com/auth0/auth0-spa-js/blob/master/src/Auth0Client.ts#L883
`invalid_grant` should now be caught and fall back to using the iframe (https://github.com/auth0/auth0-spa-js/blob/master/src/Auth0Client.ts#L958), which will either succeed or throw the above `login_required` error.
username_2: @username_1 alright, thank you for the clarification 👍
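The try/catch workaround described in this thread can be sketched as follows. This is hypothetical code: the real `createAuth0Client` is async, and the stub below merely simulates it failing on a stale refresh token, so only the error-handling shape is meaningful.

```javascript
// Stand-ins for the auth0-spa-js API; the real createAuth0Client
// returns a Promise and Auth0Client takes more options.
function createAuth0Client(options) {
  // Simulate the failure mode from this issue: a stale refresh token.
  throw new Error("invalid_grant: unknown or invalid refresh token");
}

class Auth0Client {
  constructor(options) {
    this.options = options;
  }
  logout() {
    // The real logout() clears the cached localStorage entry.
    return "cache cleared";
  }
}

// Workaround pattern: if client creation blows up, build a bare client
// just to call logout(), then send the user back through login.
function initAuthWithFallback(options) {
  try {
    return { client: createAuth0Client(options), needsLogin: false };
  } catch (err) {
    const bare = new Auth0Client(options);
    bare.logout();
    return { client: bare, needsLogin: true };
  }
}

const result = initAuthWithFallback({ cacheLocation: "localstorage" });
console.log(result.needsLogin); // true
```

When `needsLogin` is true, the application would call `loginWithPopup` or `loginWithRedirect`, which is the behaviour the issue asks the library to make unnecessary.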
renweizhukov/AllJoyn
116910370
Title: Add yes/no options for AboutService to accept the AboutClient join request. Question: username_0: When AboutClient receives the announce signal broadcast by AboutService, AboutClient will send a join request to AboutService. At this time, AboutService should offer the user yes/no options to accept or reject the request.
flutter/flutter
537344633
Title: Webview Plugin error while building Question: username_0: After upgrading flutter to latest version v1.12.13+hotfix.6, webview_flutter is throwing the below error:
open ...Software/Flutter/flutter/.pub-cache/hosted/pub.dartlang.org/webview_flutter-0.3.18+1/ios/Classes/JavaScriptChannelHandler.h: Operation not permitted
Answers: username_1: Hi @username_0, could you please try to run
`flutter clean`
`flutter pub cache repair`
If the issue persists, can you please provide your updated `flutter doctor -v` and your `flutter run --verbose`? Thank you
username_0: It didn't work completely. The below error is remaining:
#import <GoogleUtilities/GULAppEnvironmentUtil.h>
Thanks
username_1: Hi again @username_0, can you please provide your updated `flutter doctor -v` and your `flutter run --verbose`? Thank you
username_0: This is the error log
#0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:509:7)
<asynchronous suspension>
#2 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:615:18)
#3 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:73:64)
#4 _rootRunUnary (dart:async/zone.dart:1134:38)
#5 _CustomZone.runUnary (dart:async/zone.dart:1031:19)
#6 _FutureListener.handleValue (dart:async/future_impl.dart:139:18)
#7 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:680:45)
#8 Future._propagateToListeners (dart:async/future_impl.dart:709:32)
#9 Future._completeWithValue (dart:async/future_impl.dart:524:5)
#10 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:32:15)
#11 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:290:13)
#12 RunCommand.usageValues (package:flutter_tools/src/commands/run.dart)
#13 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:73:64)
#14 _rootRunUnary (dart:async/zone.dart:1134:38)
#15 _CustomZone.runUnary
(dart:async/zone.dart:1031:19) #16 _FutureListener.handleValue (dart:async/future_impl.dart:139:18) #17 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:680:45) #18 Future._propagateToListeners (dart:async/future_impl.dart:709:32) #19 Future._completeWithValue (dart:async/future_impl.dart:524:5) #20 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:32:15) #21 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:290:13) #22 IosProject.isSwift (package:flutter_tools/src/project.dart) #23 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:73:64) #24 _rootRunUnary (dart:async/zone.dart:1134:38) #25 _CustomZone.runUnary (dart:async/zone.dart:1031:19) #26 _FutureListener.handleValue (dart:async/future_impl.dart:139:18) #27 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:680:45) #28 Future._propagateToListeners (dart:async/future_impl.dart:709:32) #29 Future._completeWithValue (dart:async/future_impl.dart:524:5) #30 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:32:15) #31 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:290:13) #32 IosProject.buildSettings (package:flutter_tools/src/project.dart) #33 _asyncThenWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:73:64) #34 _rootRunUnary (dart:async/zone.dart:1134:38) #35 _CustomZone.runUnary (dart:async/zone.dart:1031:19) #36 _FutureListener.handleValue (dart:async/future_impl.dart:139:18) #37 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:680:45) #38 Future._propagateToListeners (dart:async/future_impl.dart:709:32) #39 Future._completeWithValue (dart:async/future_impl.dart:524:5) #40 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:32:15) #41 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:290:13) #42 XcodeProjectInterpreter.getBuildSettings [Truncated] #60 Future.wait.<anonymous closure> 
(dart:async/future.dart:400:22)
#61 _rootRunUnary (dart:async/zone.dart:1134:38)
#62 _CustomZone.runUnary (dart:async/zone.dart:1031:19)
#63 _FutureListener.handleValue (dart:async/future_impl.dart:139:18)
#64 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:680:45)
#65 Future._propagateToListeners (dart:async/future_impl.dart:709:32)
#66 Future._addListener.<anonymous closure> (dart:async/future_impl.dart:389:9)
#67 _rootRun (dart:async/zone.dart:1126:13)
#68 _CustomZone.run (dart:async/zone.dart:1023:19)
#69 _CustomZone.runGuarded (dart:async/zone.dart:925:7)
#70 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:965:23)
#71 _microtaskLoop (dart:async/schedule_microtask.dart:43:21)
#72 _startMicrotaskLoop (dart:async/schedule_microtask.dart:52:5)
#73 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:118:13)
#74 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:175:5)
username_0: Any update on this?
username_1: Hi @username_0, can you double check that these logs are your full `flutter run --verbose`? Thank you. Issue possibly related to https://github.com/flutter/flutter/issues/46898
username_2: Without additional information, we are unfortunately not sure how to resolve this issue. We are therefore reluctantly going to close this bug for now. Please don't hesitate to comment on the bug if you have any more information for us; we will reopen it right away! Thanks for your contribution. Could everyone who still has this problem please file a new issue with an exact description of what happens, logs, and the output of 'flutter doctor -v'. All system setups can be slightly different, so it's always better to open new issues and reference related issues.
Status: Issue closed
MicrosoftDocs/powerapps-docs
1125602481
Title: Update Note section for OnStart property Question: username_0: Hi team Note section could be updated for the OnStart property section since the timeline has passed, especially point 2. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e4bbd1e3-f15f-e855-e4ee-2e10fd22920d * Version Independent ID: 48aa97f4-bceb-7ac9-b2ce-49cf721519cf * Content: [App object in Power Apps - Power Apps](https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/object-app#onstart-property) * Content Source: [powerapps-docs/maker/canvas-apps/functions/object-app.md](https://github.com/MicrosoftDocs/powerapps-docs/blob/main/powerapps-docs/maker/canvas-apps/functions/object-app.md) * Service: **powerapps** * GitHub Login: @gregli-msft * Microsoft Alias: **gregli**
nicejade/markdown-online-editor
465866928
Title: Improve the prerender-spa-plugin based SPA build configuration Question: username_0: This project uses [prerender-spa-plugin](https://github.com/chrisvfritz/prerender-spa-plugin) to solve the SPA SEO problem. That library is built on Puppeteer, so installing it downloads Chromium. Because the download often fails, downloading Chromium manually is recommended; see [基于 Puppeteer 构建简易机器人](https://www.jeffjade.com/2019/06/14/156-puppeteer-robot/#%E4%B8%8B%E8%BD%BD%E5%AE%89%E8%A3%85). You then need to set the executablePath option manually to point at the **Chromium** binary so that `prerender-spa-plugin` can work correctly. Status: Issue closed
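For reference, the webpack wiring this describes looks roughly like the sketch below. It assumes prerender-spa-plugin v3's `PuppeteerRenderer` forwards renderer options such as `executablePath` to `puppeteer.launch`; the paths are placeholders.

```javascript
// webpack config fragment (sketch; paths are placeholders)
const path = require('path')
const PrerenderSPAPlugin = require('prerender-spa-plugin')
const Renderer = PrerenderSPAPlugin.PuppeteerRenderer

module.exports = {
  plugins: [
    new PrerenderSPAPlugin({
      staticDir: path.join(__dirname, 'dist'),
      routes: ['/'],
      renderer: new Renderer({
        // Point at the manually downloaded Chromium binary.
        executablePath: '/path/to/chromium/chrome',
      }),
    }),
  ],
}
```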
AmauriC/tarteaucitron.js
586135088
Title: Customize needConsent on init Question: username_0: Hi, how would it be possible to modify the needConsent of an added service at initialization? For example, to force Google Analytics when it is configured to be anonymous (anonymizeIp = true)? Without modifying the services.js file.
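One commonly suggested approach (not verified against tarteaucitron's documentation, so treat it as an assumption) is to flip the flag on the service definition object at runtime, before init, instead of editing services.js. The sketch below stubs the global so it is self-contained; the real object would be the `tarteaucitron.services.<name>` entry loaded from services.js.

```javascript
// Stubbed shape of the global; real entries come from services.js.
const tarteaucitron = {
  services: {
    analytics: { key: "analytics", needConsent: true, js: () => {} },
  },
};

// Before calling init, override the flag for the anonymized
// Google Analytics service so it loads without consent.
tarteaucitron.services.analytics.needConsent = false;

console.log(tarteaucitron.services.analytics.needConsent); // false
```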
firelab/windninja
494751390
Title: DEM no data exception not properly handled Question: username_0: A user just reported a case where the simulation fails with `Exception caught: std::exception` during DEM reading. We detect the no data values, but the error message "The DEM has no data values." is not passed through. This was for a WRF wxModelInitialization run.
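One plausible cause (a sketch, not WindNinja's actual code): a path that throws a bare `std::exception` loses the message, because the base class's `what()` returns an implementation-defined string, so a generic handler can only print `Exception caught: std::exception`. Throwing `std::runtime_error` with the message keeps it available through the same catch site.

```cpp
#include <stdexcept>
#include <string>

// Sketch: simulate the DEM read path. Returning the caught what()
// shows which message actually reaches a generic std::exception handler.
std::string readDemMessage(bool demHasData) {
    try {
        if (!demHasData) {
            // runtime_error stores the string, so what() passes it through.
            throw std::runtime_error("The DEM has no data values.");
        }
        return "ok";
    } catch (const std::exception& e) {
        return e.what();
    }
}
```

In this framing, the fix would be to throw the derived type (or catch and rethrow with context) wherever the no-data check fires, so the "The DEM has no data values." message survives to the top-level handler.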
microsoft/setup-msbuild
771978208
Title: Clang support Question: username_0: I'm not sure if this is the right place to ask. I'm seeking to use clang, now shipped with VS2019, in my GitHub action. Can this Action be used for this? How? Answers: username_1: This action cannot be used to do this as it isn't a general purpose action to find any tool. We're keeping this specific to MSBuild tools. Sorry about that. I'm not sure if clang could also be found with vswhere tools as well, perhaps @username_2 may know. Status: Issue closed
username_2: vswhere is not designed to find specific tools, nor is that information stored in any way it could use. It finds the root. You must combine the subdirectory yourself, or use a new enough version that supports the `-find` parameter.
afdaniele/x-docker
1022998115
Title: use NVIDIA_DRIVER_CAPABILITIES=all instead of NVIDIA_DRIVER_CAPABILITIES=graphics? Question: username_0: I can't run pytorch code on a gpu unless more driver capabilities are loaded. Is there a reason you are only using graphics? If not, can you change this flag to "all" instead? Answers: username_1: Fixed in https://github.com/username_1/x-docker/commit/00e33066476915a909f7f3ebf7c1b1a805869df9
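For context, this environment variable selects which driver capability sets the NVIDIA container runtime injects into the container; a sketch of the difference (the image name is a placeholder):

```shell
# Before the fix: only display-related libraries are injected.
docker run --gpus all -e NVIDIA_DRIVER_CAPABILITIES=graphics my-image

# After the fix: compute, utility, video, etc. are injected too,
# which is what CUDA-backed frameworks such as PyTorch need.
docker run --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all my-image
```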
logstash-plugins/logstash-filter-useragent
282020376
Title: useragent filter plugin in logstash Question: username_0: I have 2 indexes: one for the nginx access log and the second for the haproxy log. I want to get the device name, user agent, and OS type of the requester as individual fields in Kibana. Please help. Answers: username_1: I apologize for the inconvenience, but this is a usage question, and should be asked at https://discuss.elastic.co. GitHub is for coding issues and error reporting. Looking at your configuration, you aren't using this useragent filter at all; you must add `useragent { source => "[nginx][access][remote_ip]" }` at the bottom of your filter section. Status: Issue closed
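For completeness, a hedged sketch of such a filter block. The field names are assumptions; point `source` at whatever field in your events holds the raw User-Agent string (note that a `remote_ip` field would hold an IP address, not a user agent).

```conf
filter {
  useragent {
    # field assumed to contain the raw User-Agent header
    source => "[nginx][access][agent]"
    target => "[nginx][access][user_agent]"
  }
}
```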
ivarptr/yu-writer.site
252165826
Title: It would be even better if images could be pasted with Ctrl+V Question: username_0: Hahaha! Great. Answers: username_1: @username_0 Thanks for the suggestion! I'll let you know once this feature is done.
username_0: Hahaha! Great.
username_2: @username_1 I'd like to report an issue I found in the image-insertion flow. The normal usage pattern is to paste an image with Ctrl+V (especially when using a screenshot tool). To insert a screenshot into a document, we currently have to save the image to a file first and then insert it. The problem is that Yu Writer seems to automatically add another copy of the inserted image to its own document library, which gets messy (see screenshot).
![yuwriter自动复制文件导致冗余](https://user-images.githubusercontent.com/15974476/30485427-f3f89c5c-9a5f-11e7-8a08-85aa54075257.png)
username_3: Eagerly hoping for Ctrl+V image pasting (especially with screenshot tools) +1. Thanks, author.
username_4: Has Ctrl+V image pasting been implemented yet? It is the key to usability; nothing else matters without it.
username_1: OK, version 0.6 will start to thoroughly solve the various image-insertion problems.
mekanism/Mekanism
594537438
Title: Osmium isn't generating but in the Spawn-Chunk Question: username_0: *Please use the search functionality before reporting an issue. Also take a look at the closed issues!* #### Issue description: I have a Modpack Server. There are many Mods including BoP. This could also cause issues. I really don't know where to post this so sorry if I'm wrong. #### Steps to reproduce: 1. 2. 3. #### Version (make sure you are on the latest version before reporting): **Forge:** forge-1.7.10-10.13.4.1566-1.7.10-universal **Mekanism:** Mekanism-1.7.10-9.1.1.1031 **Other relevant version:** #### If a (crash)log is relevant for this issue, link it here: (_It's almost always relevant_) [[gist](https://gist.github.com/) / [pastebin](https://pastebin.com/) / etc link here. Please make sure that it isn't set to expire.] Answers: username_1: I haven't used 1.7.10 in a very long time, and I also wasn't on the team back then so I don't know if there were any known bugs, but I am going ahead and closing this as we are currently only working on developing the 1.15 version of Mekanism. Status: Issue closed
auth0/Auth0.Android
1156942577
Title: Does anybody know how to perform a native logout action? Answers: username_1: Hi @username_0,
1. For clearing the credentials stored in the browser cookies, you can use
```java
WebAuthProvider.logout(account)
    .start(this, logoutCallback)
```
For clearing the credentials stored in the `CredentialManager` you can call
```java
manager.clearCredentials()
```
If you are looking for something else, please provide more information using our Template to report issues and it will help us provide more details for you. Status: Issue closed
codotype/codotype-vuejs-vuex-bootstrap-generator
364535022
Title: Page title should be set to Blueprint.label Question: username_0: See attached - currently hard-coded as `Codotype - { pageTitle }` ![screenshot from 2018-09-27 11-34-10](https://user-images.githubusercontent.com/4616233/46157273-56d4fc00-c249-11e8-83c6-12a779ec8428.png) Answers: username_0: Resolved Status: Issue closed
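A minimal sketch of the requested fix (names assumed; this is not Codotype's actual store shape): compute the title from the active blueprint's label and fall back when none is loaded.

```javascript
// Hypothetical helper: derive document.title from the blueprint.
function pageTitle(blueprint) {
  return blueprint && blueprint.label
    ? "Codotype - " + blueprint.label
    : "Codotype";
}

console.log(pageTitle({ label: "Movie Database" })); // "Codotype - Movie Database"
console.log(pageTitle(null)); // "Codotype"
```

In the app this would run wherever the blueprint changes, e.g. assigning the result to `document.title`.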
jean-philippe-p/callipolis
326125010
Title: Subtitle "address search" Question: username_0: Remove the sub-service and its page; I am not going to offer this service for now. Answers: username_1: As said in another issue, you will be able to do it yourself before long.
username_1: Done. You can now delete services/sub-services in the admin: on the edit screen there is a "delete" button at the bottom right. (It is not a real deletion; I just hide the service/sub-service in question, so you can recover it if you deleted it by mistake.)
Status: Issue closed
kata-containers/tests
588065638
Title: enhance install script to build containerd when version is a commit Question: username_0: **Which feature do you think can be improved?** `.ci/install_cri_containerd.sh` does not successfully work when the containerd version passed is a commit. **How can it be improved?** Enhance the script so that it can build and install from source when the version is a commit SHA.<issue_closed> Status: Issue closed
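The enhancement amounts to a branch on the shape of the version string; a sketch (function names and messages are made up, not the script's actual interface; purely numeric tags would be ambiguous with short SHAs):

```shell
#!/bin/sh
# Hypothetical sketch: treat a 7-40 character hex string as a commit
# SHA (build containerd from source) and anything else as a release tag.
is_commit_sha() {
    echo "$1" | grep -qE '^[0-9a-f]{7,40}$'
}

install_containerd() {
    version="$1"
    if is_commit_sha "$version"; then
        echo "build from source at commit $version"
    else
        echo "install release $version"
    fi
}

install_containerd "v1.3.0"
install_containerd "0123456789abcdef0123456789abcdef01234567"
```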
YiiGuxing/TranslationPlugin
428592340
Title: Why did the translate-and-replace rules change? Question: username_0: In the latest version, 2.3.2, why are the translate-and-replace rules different from before? Previously, if I typed double quotes, entered Chinese inside them, and then translated and replaced directly, only the Chinese inside the quotes was replaced and the quotes were kept. But now the latest version replaces the double quotes as well. Also, previously when English was followed by Chinese, the word-capture translation shortcut would pick up only the Chinese part and ignore the English part, and replacement likewise ignored the English part. Why doesn't that work anymore? Answers: username_1: Because previously you could only translate-and-replace from a non-English language into English, whereas now any language can be translated and replaced.
username_0: Then what should I do? This is really inconvenient. If nothing else, could you give me an older version? I really can't get used to this; everything feels clunky.
username_1: You can download older versions from the releases page.
username_0: Found the old version. Still, I'd suggest checking on replacement and only capturing the part in the same language, leaving the parts in other languages alone. I think that would be better.
username_1: It is hard to detect locally which languages are present.
Status: Issue closed
scallop/scallop
23531882
Title: Parameter relationship problem Question: username_0: Great library! This code allows me to correctly default startDate/endDate as intended, but --help throws an exception.

```scala
import org.rogach.scallop._

object Hello extends App {
  val opts = new ScallopConf(args) {
    val startDate = opt[String]("startDate", default = Some("2013-12-01"))
    val endDate = opt[String]("endDate", default = Some(startDate()))
  }
}
```

```
Exception in thread "main" org.rogach.scallop.exceptions.UnknownOption: Unknown option 'help'
at org.rogach.scallop.Scallop$$anonfun$5.apply(Scallop.scala:125)
at org.rogach.scallop.Scallop$$anonfun$5.apply(Scallop.scala:125)
```

Perhaps there is a better way to do this?

Best regards, Nick Answers: username_1: Hi! I apologize for such a long delay, and hope that this issue didn't trouble you for a long time. I think that the best way to handle such cases will be to handle it in a separate method:

```scala
val opts = new ScallopConf(args) {
  val startDate = opt[String]("startDate", default = Some("2013-12-01"))
  val endDate = opt[String]("endDate", default = None)
  def getEndDate = endDate.get.getOrElse(startDate())
}
```

The problem is that Scallop first assembles information about options, then runs verification on those options, their dependencies and such; in this case the help printout is triggered before the internal builder is fully verified, and the non-verified builder tries to parse the input string to determine the value of startDate, and obviously fails. It will be really hard to fix that. Status: Issue closed username_0: Thank you!
Agile-Organization/recommendations
717800518
Title: Query all related products by product id Question: username_0: **As a** Developer **I need** to query all the related products, including the ones with an inactive relationship, by a given product id. **So that** after being provided a product id, I can return all the products that have a relationship with it as recommendations.

**Assumptions:**
* Each product has a unique id.
* Any two products may have a one-way relationship defined by an integer (0 - up-sell, 1 - cross-sell, 2 - accessory)
* Given an order of product A and product B, there can be at most one relationship.
* A relationship can be toggled between active and inactive.

**Acceptance Criteria:**
```
Given the id of product A, returns all the records in the database that have a relationship with A.

GET on /recommendations/{product-a-id}

Then returns an array that contains 3 objects
[
  { relationship-id: 1, ids: [id1, id2, id3, ...], inactive-ids: [id10, id20, id30, ...] },
  { relationship-id: 2, ids: [id4, id5, id6, ...], inactive-ids: [id40, id50, id60, ...] },
  { relationship-id: 3, ids: [id7, id8, id9, ...], inactive-ids: [id70, id80, id90, ...] }
]

For each category, if no related product exists, then the ids field will be an empty array.
```
Status: Issue closed
Answers: username_1: This is broken, the endpoint does not work. How did you test it?
Status: Issue closed
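The grouping the acceptance criteria describe can be sketched as below (hypothetical record shape, not the service's actual model): bucket every record involving product A by relationship id, splitting active from inactive.

```python
RELATIONSHIP_IDS = (1, 2, 3)  # ids as used in the acceptance criteria

def related_products(records, product_a_id):
    """records: dicts with product-id, related-product-id,
    relationship-id and active fields (an assumed shape)."""
    groups = {
        rel: {"relationship-id": rel, "ids": [], "inactive-ids": []}
        for rel in RELATIONSHIP_IDS
    }
    for rec in records:
        if rec["product-id"] != product_a_id:
            continue
        bucket = groups[rec["relationship-id"]]
        key = "ids" if rec["active"] else "inactive-ids"
        bucket[key].append(rec["related-product-id"])
    # Categories with no related product keep empty arrays, as specified.
    return [groups[rel] for rel in RELATIONSHIP_IDS]
```

A GET handler would call this with all records stored for the given product id and serialize the returned list as JSON.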
temporalio/temporal
969711761
Title: In TestWorkflowEnvironment, getResult() hangs after terminate() Question: username_0: ## Expected Behavior
Against the real temporal server, calling WorkflowStub.getResult() for a workflow that has been terminated immediately throws a WorkflowFailedException. The test environment should behave the same way.
## Actual Behavior
getResult() hangs indefinitely, or until the user-supplied timeout.
## Steps to Reproduce the Problem
1. git clone <EMAIL>:username_0/samples-java.git
2. Run GreetingWorkflow.java. As checked in, it demonstrates the hanging behavior.
3. To see the expected behavior, `docker-compose up`, change useReal to true on line 154, and re-run.
## Specifications
- Version: 1.2.0
- Platform: Cassandra, but probably doesn't matter.
bcgov/name-examination
356289775
Title: Solr: convert admin app solr package to module Question: username_0: #### **Task** (Use for Work not Directly related to a Story but supports the Sprint Goals) ##### _**Detailed Description**_ The solr package in solr-admin-app is unnecessary, convert it to a module instead. ##### _**Sprint Goal**_ ##### _**Acceptance Criteria**_ ##### _**Definition of Done**_ (:one:-Mandatory to add to the Backlog, :two:-Mandatory to add to the Sprint Backlog) - [ ] Acceptance Criteria Defined :one: - [ ] Estimate :two: - [ ] Priority Label :one: - [ ] Task Label :one: - [ ] Assignee :two: - [ ] Sprint Goal (in line with the goal of the sprint) :two: ---- ---- Answers: username_0: Deployed to dev and test. Status: Issue closed
naser44/1
129004198
Title: Photos: Here is everything Apple will reveal in 2016 Question: username_0: <a href="http://ift.tt/23rYojd">Photos: Here is everything Apple will reveal in 2016</a>
jspm/project
660906158
Title: handlebars browser build Question: username_0: The main handlebars build at https://jspm.dev/npm:[email protected]!cjs should be the browser build, not the Node build. This seems to be due to the `"browser": { "." }` mapping interacting with the CJS resolution process somehow, and needs further investigation.
google/gvisor
327547838
Title: Support ARM64 for gvisor Question: username_0: Currently just support amd64, could we support for ARM64? What do we need to do for this support? I can help to work on it. Answers: username_1: We don't have any immediate plans to port to additional architectures. It is certainly feasible (as shown below), but certainly requires a lot of work and is something we want to do very carefully to avoid adding unnecessary complexity and technical debt. Porting to a new architecture requires several steps: 1. Porting/creating a platform compatible with the arch. The [ptrace](https://github.com/google/gvisor/tree/master/pkg/sentry/platform/ptrace) platform would be fairly simple to port. The [kvm](https://github.com/google/gvisor/tree/master/pkg/sentry/platform/kvm) platform would be much more complex. 2. Porting AMD64-specific assembly/host syscalls. There are several AMD64 assembly files (mostly with names `_amd64.s`) that obviously need ports. There may also be places where we make direct syscalls with `syscall.Syscall` where the syscall semantics are slightly different on a new arch. 3. Adding a new [syscall table](https://github.com/google/gvisor/blob/master/pkg/sentry/syscalls/linux/linux64.go). 4. Adding support for the new arch throughout sentry internal system call, signal, etc handling. A lot of this is in the [arch](https://github.com/google/gvisor/tree/master/pkg/sentry/arch) package, though that package is in need of refactoring. There are also several kernel structures that differ between arches (`struct pt_regs` is an obvious example). Many of these are in [abi/linux](https://github.com/google/gvisor/tree/master/pkg/abi/linux), but others we still use directly from the `syscall` package (they should be moved to `abi/linux`). This is the hardest part, as there are still many unanswered questions around how to do all of this cleanly. username_0: @username_1 Really Thanks for clarifying. 
In that case I think it is not an easy task; I will talk with my workmates inside Arm to discuss the development work.
username_2: @username_1 We have enabled the ptrace platform on Arm64. Later, we will deliver the patches. Please see the following as a reference:
root@entos1:/go/src/github.com/google/gvisor# uname -p
aarch64
root@entos1:/go/src/github.com/google/gvisor# docker run --runtime=runsc hello-world
W1010 17:46:16.510561 30359 x:0] Could not parse /proc/cpuinfo, it is empty or does not contain cpu MHz
W1010 17:46:16.537007 30370 x:0] Could not parse /proc/cpuinfo, it is empty or does not contain cpu MHz
W1010 17:46:16.556919 30370 x:0] Could not parse /proc/cpuinfo, it is empty or does not contain cpu MHz
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (arm64v8)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID: https://hub.docker.com/
For more examples and ideas, visit: https://docs.docker.com/engine/userguide/
username_1: Wow, that's great! I didn't realize anyone was working on this. I look forward to seeing the changes.
username_3: Amazing! Is there a public branch somewhere on GitHub? We'd love to help the patches land, and it might be useful to have some early high-level guidance.
username_4: Any chance to try this out already?
username_5: I'm willing to test this out if/when it's ready.
username_6: Ok, so I don't really care much about KVM support on arm64 at this point, but I would love to be able to test ptrace on arm64. I would appreciate it if we could bring the gvisor runsc cross-compile to the stage where it produces a binary, which I can run, and then complain about :)
username_7: I kind of think we can close this issue in lieu of using the ARM64 milestone. Issues regarding support for ARM64 should be tracked there. https://github.com/google/gvisor/milestone/2
username_8: Arm64 kvm: run ffmpeg on an Ampere Altra server (arm64 Neoverse-N1): https://github.com/google/gvisor/issues/4056
username_9: @username_2ARM hello, could you tell me how to build gvisor on arm64? I hit this:
root@cloud:~/gvisor# make -j $(nproc)
warning: /var/cache/dnf/docker-ce-stable-5216070ebe39d4d5/packages/docker-ce-cli-20.10.1-3.fc31.aarch64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Docker CE Stable - aarch64 0.0 B/s | 0 B 00:00
Curl error (35): SSL connect error for https://download.docker.com/linux/fedora/gpg [OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to download.docker.com:443 ]
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
The command '/bin/sh -c dnf install -y docker-ce-cli' returned a non-zero code: 1
--- BUILD -c opt //runsc
Error: No such container: gvisor-bazel-3328c4e9-aarch64
Environment:
root@cloud:/gvisor# uname -a
Linux cloud 5.5.19-050519-generic #202004210831 SMP Tue Apr 21 08:49:56 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
root@cloud:/gvisor#
**the problem may be a failure to install google-cloud-sdk**
root@cloud:~/gvisor# uname -a
Linux cloud 5.5.19-050519-generic #202004210831 SMP Tue Apr 21 08:49:56 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
root@cloud:~/gvisor#
username_8: Hi @username_9 If you use the root user, you should use the bazel command to build runsc.
Such as: `bazel build runsc`
If you want to use the make command to build or test, there are 2 things you should do:
1. Log in as a normal user. Notice: please do not switch to a normal user from within the root account.
2. Enable non-root users to run Docker commands: https://www.google.com.hk/search?newwindow=1&safe=strict&rlz=1C5GCEA_enCN927CN927&sxsrf=ALeKk02XcV_twmwUv_RoQkZfxzIeNwkD4w%3A1609147764289&ei=dKXpX-2REfGGr7wPtYaiiAo&q=Enabling+Non-root+Users+to+Run+Docker+Commandsdocs&oq=Enabling+Non-root+Users+to+Run+Docker+Commandsdocs&gs_lcp=CgZwc3ktYWIQAzIHCCEQChCgAToECAAQR1Dz7hBY8-4QYJ7xEGgAcAV4AIABmAGIAZgBkgEDMC4xmAEAoAECoAEBqgEHZ3dzLXdpesgBCMABAQ&sclient=psy-ab&ved=0ahUKEwjtm4bHrvDtAhVxw4sBHTWDCKEQ4dUDCA0&uact=5
username_8: ![image](https://user-images.githubusercontent.com/34124929/103205051-a5473880-4933-11eb-967d-b718ec7bf4c5.png)
![image](https://user-images.githubusercontent.com/34124929/103205102-c6a82480-4933-11eb-84fa-b956d1f6d22a.png)
![image](https://user-images.githubusercontent.com/34124929/103205333-47672080-4934-11eb-9d1e-f70077ffcb96.png)
![image](https://user-images.githubusercontent.com/34124929/103205321-41713f80-4934-11eb-97b1-7705ae1de93e.png)
username_9: @username_2ARM I have tried your way and other ways, such as :
username_9: @username_2ARM, thank you. Because of the Great Firewall, another problem happens. Do you have another way to build gvisor?
Analyzing: target //runsc:runsc (47 packages loaded, 6941 targets configured)
ERROR: An error occurred during the fetch of repository 'com_github_google_subcommands': Traceback (most recent call last):
username_8: @username_9 Sorry. I have no idea about it.