Columns: repo_name (stringlengths 4 to 136), issue_id (stringlengths 5 to 10), text (stringlengths 37 to 4.84M)
mintproject/dame_cli
602345477
Title: validate urls Question: username_0: ```bash To run this model configuration, a daily-weather file (.tar.gz file) is required. Please enter a url for it: Please enter a url for it: sad To run this model configuration, a monthly-weather file (.tar.gz file) is required. Please enter a url for it: afdasf ```<issue_closed> Status: Issue closed
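A minimal sketch of the kind of check the prompt above could run before accepting an answer. Python is assumed because dame_cli is a Python CLI, and `is_valid_url` is a made-up helper name, not something taken from the project:
```python
from urllib.parse import urlparse

def is_valid_url(answer: str) -> bool:
    """Accept only http(s) URLs that have a host, so inputs like 'sad' are rejected."""
    parsed = urlparse(answer.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

url = input("Please enter a url for it: ")
while not is_valid_url(url):
    print("That does not look like a valid url, please try again.")
    url = input("Please enter a url for it: ")
```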
MicrosoftDocs/dynamics365smb-devitpro-pb
821353118
Title: Guide does not address environments with the Office Store disabled Question: username_0: This guide does not address 365 environments which have the Office Store disabled, thus requiring all apps to be approved and available in the Admin Managed section. My organization is configured like this and the Dynamics Excel add-in is not working when we choose the "Edit in Excel" option from a screen in Business Central. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a5c53847-a111-f746-cc85-af3dcc702f84 * Version Independent ID: 06c03c31-a8a9-40e7-2847-bc22cea91344 * Content: [Setting up the Excel Add-In for Editing Data - Business Central](https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/configuring-excel-addin) * Content Source: [dev-itpro/administration/configuring-excel-addin.md](https://github.com/MicrosoftDocs/dynamics365smb-devitpro-pb/blob/live/dev-itpro/administration/configuring-excel-addin.md) * Service: **dynamics365-business-central** * GitHub Login: @jswymer * Microsoft Alias: **jswymer**
GoogleCloudPlatform/gcsfuse
242218927
Title: gcsfuse missing for Ubuntu 16.04 Xenial? Question: username_0: I'm trying to install gcsfuse on Ubuntu 16.04 Xenial, but even after adding the repo and apt-get update it still doesn't find the package. I've barely slept this week, but I'm pretty sure I'm not doing this wrong. I even tried to install the .deb package, but it's for amd and I'm on i386. This is what I did (about 20 times): ``` :~$ export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s` :~$ echo $GCSFUSE_REPO gcsfuse-xenial :~$ echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list deb http://packages.cloud.google.com/apt gcsfuse-xenial main :~$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 663 100 663 0 0 2848 0 --:--:-- --:--:-- --:--:-- 2857 OK :~$ sudo apt-get update Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] Get:3 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB] Get:4 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB] Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu xenial InRelease Hit:6 http://ppa.launchpad.net/git-core/ppa/ubuntu xenial InRelease Hit:7 http://packages.cloud.google.com/apt gcsfuse-xenial InRelease Hit:8 http://ppa.launchpad.net/ondrej/php/ubuntu xenial InRelease Hit:9 http://packages.cloud.google.com/apt cloud-sdk-xenial InRelease Fetched 306 kB in 0s (384 kB/s) Reading package lists... Done :~$ sudo apt-get install gcsfuse Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package gcsfuse ``` Any idea what I'm missing? Thank you very much for your help! Answers: username_1: Hi, sorry for the slow response. There was a complaint about this in #202, which it seems turned out to be user error. And I verified that it worked for me on GCE. But I see you are being careful. It seems likely the issue is that you're on i386, for which I don't prepare prebuilt packages. You can probably compile gcsfuse yourself, but I make no guarantees. username_0: @username_1 Ok, thank you. I might just see about spinning up a x64 version of the server. That would probably be a better idea than trying to duplicate this mess on all of the load balance nodes that will be getting spun up. Thank you. username_0: I got it up and running on a x64 system no problem. I know I read that multiple file transfers is slower, but I'm getting 1.3KiB using gsutil rsync. Is that normal? Status: Issue closed username_1: Glad to hear it's working now. Sorry, I'll need more detail on the performance issue you're seeing; could you please post a separate issue?
lisen87/image_pickers
544167255
Title: Glide reports an error. Question: username_0: ` /Users/renzhayi/SDK/FlutterSDK/flutter/.pub-cache/hosted/mirrors.tuna.tsinghua.edu.cn%47dart-pub%47/image_pickers-1.0.5+3/android/src/main/java/com/leeson/image_pickers/utils/GlideEngine.java:26: error: cannot access Fragment Glide.with(context).load(url).into(imageView); ^ class file for android.support.v4.app.Fragment not found /Users/renzhayi/SDK/FlutterSDK/flutter/.pub-cache/hosted/mirrors.tuna.tsinghua.edu.cn%47dart-pub%47/image_pickers-1.0.5+3/android/src/main/java/com/leeson/image_pickers/utils/GlideEngine.java:32: error: cannot access FragmentActivity Glide.with(context) ^ class file for android.support.v4.app.FragmentActivity not found ` I tried the solution from https://github.com/username_1/image_pickers/issues/5, but it did not work. Answers: username_0: This is the error when the android directory is compiled on its own: `error: cannot access FragmentActivity class file for android.support.v4.app.FragmentActivity not found ` username_0: Tried it... still cannot access Fragment
OGRECave/ogre
775596171
Title: overlay fails to build during swig on debian Question: username_0: Hi Pavel, I'm trying to update the debian packages once more but it fails to build: https://salsa.debian.org/games-team/ogre/-/jobs/1290043/raw (search for "error:") the relevant error is: ``` /builds/games-team/ogre/debian/output/source_dir/obj-x86_64-linux-gnu/Components/Python/CMakeFiles/_Overlay.dir/OgreOverlayPYTHON_wrap.cxx: In function 'PyObject* _wrap_ImGuiIO_RenderDrawListsFnUnused_set(PyObject*, PyObject*)': /builds/games-team/ogre/debian/output/source_dir/obj-x86_64-linux-gnu/Components/Python/CMakeFiles/_Overlay.dir/OgreOverlayPYTHON_wrap.cxx:92131:21: error: 'struct ImGuiIO' has no member named 'RenderDrawListsFnUnused'; did you mean 'RenderDrawListsFn'? 92131 | if (arg1) (arg1)->RenderDrawListsFnUnused = arg2; | ^~~~~~~~~~~~~~~~~~~~~~~ | RenderDrawListsFn /builds/games-team/ogre/debian/output/source_dir/obj-x86_64-linux-gnu/Components/Python/CMakeFiles/_Overlay.dir/OgreOverlayPYTHON_wrap.cxx: In function 'PyObject* _wrap_ImGuiIO_RenderDrawListsFnUnused_get(PyObject*, PyObject*)': /builds/games-team/ogre/debian/output/source_dir/obj-x86_64-linux-gnu/Components/Python/CMakeFiles/_Overlay.dir/OgreOverlayPYTHON_wrap.cxx:92153:30: error: 'struct ImGuiIO' has no member named 'RenderDrawListsFnUnused'; did you mean 'RenderDrawListsFn'? 92153 | result = (void *) ((arg1)->RenderDrawListsFnUnused); | ^~~~~~~~~~~~~~~~~~~~~~~ | RenderDrawListsFn ``` The relevant branch is https://salsa.debian.org/games-team/ogre/-/commits/ogre-1.12.9-updates please note that we try to use a packaged version of imgui, so there are some small changes to the CMakeLists.txt in this patch: https://salsa.debian.org/games-team/ogre/-/blob/ogre-1.12.9-updates/debian/patches/use-system-imgui.diff Any idea? Thanks! Answers: username_1: yeah, its this: https://github.com/ocornut/imgui/issues/3403#issuecomment-672934624 see https://github.com/OGRECave/ogre/blob/7d2aa8a4e7e03cc5a61080849439fa17076d055a/Components/Python/CMakeLists.txt#L22 I would suggest porting to the system imgui package independently of upgrading ogre (which is now at 1.12.10 BTW). However, note that you should be very cautious about using the system imgui, as this effectively means that you have the code in one package and the Python bindings for that code in an rather unrelated one (ogre). Therefore each imgui upgrade **must** trigger an ogre rebuild (also for compile flag changes) as it might affect the API & ABI. (like at hand) username_0: I think/hope it's not as bad as you paint it since imgui is used as a static library here. (Which of course also limits the benefits of splitting it out in the first place ...) Status: Issue closed username_1: yes, including imgui as a static lib alleviates the problem. 
while you are at it, might I bring this package issue to your attention: https://forums.ogre3d.org/viewtopic.php?f=2&t=96117 you might want to cherry-pick these two commits: - https://github.com/OGRECave/ogre/commit/19cade5cd8d1447bee61905b5b3a1cd81ea67b0e - https://github.com/OGRECave/ogre/commit/7624a5692348257b9d9345fc78184335456fc239 you can verify this, by running cmake inside: https://github.com/OGRECave/ogre/tree/master/Samples/Tutorials username_0: you could also make a 1.12.11 release with those fixes and I can try to include that tutorial as a package test I've had a hard time getting a package with even bigger issues fixed in Ubuntu LTS so I'm not sure I'll be able to help that guy but of course it should be fixed in Debian before the next release (and will then end up automatically in future Ubuntu releases) username_0: Let's hope stuff now works: https://salsa.debian.org/games-team/ogre/-/commits/ogre-1.12.10-updates username_1: there seems to be another issue with the debian package: SDL2 is apparently not picked up (at least in 1.12.4): https://forums.ogre3d.org/viewtopic.php?p=549964#p549964 would you check that? Also, where are the issues for ogre in debian tracked? So we dont have to abuse this issue any more.. username_0: https://www.debian.org/Bugs/Reporting or https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=ogre-1.12 I'm not sure if Ubuntu bugs are propagated there in any way ``` [ 99%] Building CXX object Components/Bites/CMakeFiles/OgreBites.dir/src/OgreApplicationContextSDL.cpp.o ``` So the current packages don't have that issue. Unfortunately my request to import 1.12.5 back then was not reacted upon so Ubuntu has a very early stage of my packaging attempts. 20.10 includes 1.12.5 which should have this fixed already. (for 1.12.4 salsa ci (the ci on that gitlab instance) was not yet activated, that's why there's no build log)
share/sharedb
705244385
Title: Rename destroy method? Question: username_0: I don't know if something is wrong with my setup but calling `doc.destroy()` will **not** prevent the document from being sync'ed to other users right? I just found out that in order to stop syncing a document I have to `doc.del()` which is odd to me as "destroy" sounds more "destructive" than just "del" 😆 . So I was wondering if it would be a good idea to rename it to "stop" or "clear". On the other hand, after `doc.destroy()` I still see the document (now with `subscribed: false`). So the document is not destroyed right? What is the difference between `doc.destroy()` and `doc.unsubscribe()` after all? Answers: username_1: Each `Connection` object only ever has a single `Doc` for a given `collection` and `id`. That is: ```js const doc1 = connection.get('collection', '123'); const doc2 = connection.get('collection', '123'); doc1 === doc2 // true ``` The docs are stored in memory on the `Connection` object. `destroy` simply unsubscribes the `Doc`, and then removes the memoized `Doc` from the `Connection` to free up memory. Or, in code, assuming our `Doc` has some data: ```js const doc = connection.get('collection', '123'); doc.subscribe(() => { // doc.data is truthy doc.unsubscribe(() => { // doc.data is still truthy, but we're no longer subscribed doc === connection.get('collection', '123'); // true doc.destroy(() => { doc === connection.get('collection', '123'); // false // The data on the Connection instance of Doc is unset, because we've re-initialised a Doc }); }); }); ``` Note that `unsubscribe` and `destroy` have nothing to do with the remote state of the `Doc`, only local state. As you correctly said, `del` is the only one that actually removes the remote document.
ikedaosushi/tech-news
469283537
Title: Is KPI outdated? What is the "OKR" that Google has adopted? Question: username_0: Is KPI already outdated? What is the "OKR" that Google has adopted? OKR is attracting attention as a goal-management method for improving performance and strategy execution. Adopted by well-known companies such as Google, Facebook and Amazon, it is the latest goal-management framework, essential for company executives. Within an organization... https://ift.tt/2XFRHA2
RauliL/ostoslista
357606620
Title: Adding new item fails when focus is on last item Question: username_0: While text input focus is still on a previous item the "Add new item" button does nothing. Status: Issue closed Answers: username_1: This doesn't seem to be an issue anymore with the redesigned UI introduced in #5. username_2: Actually clicking on the "+" does nothing right now. I am at 6991bc7
DeveloperLiberationFront/Program-Navigation-Plugin
161281034
Title: Prettify the tool's landing page Question: username_0: Probably something as simple as MD -> html [(tool)](dillinger.io). Anything would look better than the boring list of links currently online. Answers: username_0: http://www4.ncsu.edu/~jssmit11/projects/flower/flowerMaterials.html Status: Issue closed username_0: I did the thing!
derailed/k9s
545945307
Title: Move k9s to homebrew core Question: username_0: Despite the breaking changes this project underwent recently I think it is time to move k9s to the official homebrew core repository. What's preventing it right now is that all releases of k9s have the Pre-release tag. How do you feel about it? Answers: username_1: @username_0 Thanks Alexander! Don't think we are there yet. An official release would mean no more backward compatibility breakage which I don't think I can commit to at present. Trying a bunch of newer stuff and features at the moment, that challenges this status-quo, so hopefully once we get thru this instability period, we can start thinking about an official K9s v1.0. Which frankly I would really love to see!! However, I am manning the fort solo at the moment and as you can probably tell with the # issues, totally feeling the burn ;( So One-O feels more like Five-O for me right now ;) Status: Issue closed username_0: @username_1 I can feel your pain. I have also a quite successful open source project under my belt and feel worn out by all the expectations that come with it. Please make sure to not push yourself too much. `k9s` is a great project but if it becomes a burden for you you won't have any fun anymore working on it. What helped one of my projects getting more and higher quality contributions was using GitHub projects to organise the roadmap of the project. People started working on issues for particular releases. With some time and maturity in the project you may be able to assemble a team around it and do supervising tasks as well as actual development work. Regarding the homebrew core question thank you for taking your time to give a detailed answer. I will hold my breath for now and close this issue :) username_1: @username_0 Thank you so much for your support, kindness and wise advise!! Congratulations on your OSS successes! It's true OSS as a sole IC is quite demanding. At times I feel, we're a dying breed as outfits with big teams and deep pockets are flooding the scene. I do feel really blessed to have great followers and folks that are patient/understanding and genuinely care about these projects. This fuels, lots of long nights and weekends. I had a bit of cycles during the holidays so figured I'll push hard before getting back in the saddle here at the ranch. Might have been a mistake ;( I feel, I've moved the needle a bit toward 1.0 but time will tell in this major refactoring aftermath. I truly appreciate your gesture and time on this ticket. Thank you for your kind words and for stopping by Alexander!! username_1: @username_0 Thank you so much for your support, kindness and wise advise!! Congratulations on your OSS successes! It's true OSS as a sole IC is quite demanding. I do feel really blessed to have great followers and folks that are patient/understanding and genuinely care about these projects. This fuels, lots of long nights and weekends. I had a bit of cycles during the holidays so figured I'll push hard before getting back in the saddle here at the ranch. Might have been a mistake ;( I feel, I've moved the needle a bit toward 1.0 but time will tell in this major refactoring aftermath. I truly appreciate your gesture and time on this ticket. Thank you for your kind words and for stopping by Alexander!!
racehub/om-bootstrap
65143158
Title: Wrong namespace in pagination example on the components documentation page Question: username_0: In the current version of documentation the namespace is written as "om-bootstrap.pager", but it should be "om-bootstrap.pagination". http://om-bootstrap.herokuapp.com/components ```clojure (:require [om-bootstrap.pager :as pg]) (pg/pagination {} (pg/page {} "1") (pg/page {} "2") (pg/page {} "3")) ``` Answers: username_1: Thanks! Fixed in https://github.com/racehub/om-bootstrap/commit/7ef47a0ceabd069a3e9d69f3d0dc4cdc2b067475. username_1: This should hit the doc site in a few minutes. Status: Issue closed username_0: Awesome! Thank you for the great library, by the way! username_1: Absolutely, and thanks for the feedback! Let me know if you see anything else we need to tidy up.
sybila/biodivine-lib-bdd
906494538
Title: Python bindings Question: username_0: We should provide python bindings that would fit the style of `https://github.com/tulip-control/dd` and primarily benchmark the library using these bindings instead of the built-in benchmarks (i.e. there can be microbenchmarks in rust, but comparison with other libraries should be done via Python).
ACLSystems/alumno
345020167
Title: Migrate winston to 3.0.0 Question: username_0: **Is this improvement related to a problem? Please describe.** Migrate winston to 3.0.0 **Describe the solution you propose** Migrate winston to 3.0.0 **Additional context** Review the migration notes at [https://github.com/winstonjs/winston/blob/master/UPGRADE-3.0.md] Answers: username_0: All of the logging functionality has been migrated to Winston 3.0. The configuration for logging to different files, if needed, is also ready. Status: Issue closed
ash-project/ash
628060484
Title: Get all `ash-project` repositories cleaned up and ready to be worked on Question: username_0: - [ ] Licenses, with a badge - [ ] Contributor guidelines - [ ] Pull request template with commit name requirements - [ ] Readme with at minimum a short summary and an example of usage. - [ ] Any long-form writing moved to in-code documentation. - [ ] CI - with all of the steps runnable by mix check https://github.com/karolsluszniak/ex_check, running on a matrix of different elixir/downstream dependency versions (e.g Ecto, Phoenix) - [ ] Ensuring only the maintaining team can push to master - [ ] Requiring PR approvers - [ ] Requiring PRs pass a Continuous Integration build - [ ] guidelines on using issues - [ ] Uniform GitHub labels - [ ] All public interfaces at minimum specced, but ideally with function/module docstrings - [ ] Logo?<issue_closed> Status: Issue closed
erinfox/bey-pi
343871790
Title: Add/Choose a YONCE favicon Question: username_0: I think updating the favicon to be something fun would be a great addition. If we can find a small photo of a queen bee or something cool, then maybe use https://www.npmjs.com/package/serve-favicon to serve it up. Answers: username_1: Sounds cool to me! username_0: This is tough! I don't know what favicon is worthy of Yonce
conjurdemos/kubernetes-conjur-demo
414125514
Title: Demo is updated to run automated tests against OC cluster Question: username_0: The demo currently runs automated tests against GKE. It should be updated to run the same set of tests against our OC 3.9 cluster. AC: - [ ] Demo runs the same tests that are run in GKE against an OC 3.9 cluster<issue_closed> Status: Issue closed
hwangnk1004/Algorithm
569727206
Title: 명품자바 (Premium Java), Chapter 6 practice problem 11 Question: username_0: * Problem - Using the Math.random() random-number generator, write a rock-paper-scissors game played between the user and the computer. Scissors, rock and paper are the keys 1, 2 and 3 respectively. When the user enters one of the keys 1, 2 or 3, the program simultaneously generates one of the numbers 1, 2 or 3 with the random-number generator. Then determine which of the user and the computer won and print the winner. Write the game so that it repeats. * Answer - ```java package chapter6; import java.util.Scanner; public class chapter6_11 { public static void main(String[] args){ Scanner scanner = new Scanner(System.in); while (true) { int num = scanner.nextInt(); int com = (int) Math.round(1 + Math.random() * 2); System.out.println(com); if(num == com) { System.out.println("Draw"); } else if (num ==1 && com ==2) { System.out.println("Computer wins"); } else if (num ==1 && com ==3) { System.out.println("User wins"); } else if (num ==2 && com ==1) { System.out.println("User wins"); } else if (num ==2 && com ==3) { System.out.println("Computer wins"); } else if (num ==3 && com ==1) { System.out.println("Computer wins"); } else if (num ==3 && com ==2) { System.out.println("User wins"); } } } } ```
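For comparison, the same game can be decided without the if/else ladder by using modular arithmetic. This is a sketch in Python and is not part of the original answer:
```python
import random

# 1 = scissors, 2 = rock, 3 = paper; (user - com) % 3 == 1 means the user's choice beats the computer's
while True:
    user = int(input("Enter 1 (scissors), 2 (rock) or 3 (paper): "))
    com = random.randint(1, 3)
    print("computer:", com)
    if user == com:
        print("Draw")
    elif (user - com) % 3 == 1:
        print("User wins")
    else:
        print("Computer wins")
```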
realfagstermer/realfagstermer
232030758
Title: Proposisjonslogikk Question: username_0: Proposing Proposisjonslogikk EN Propositional logic EN Propositional calculus EN Zeroth-order logic (BF? NN?) Evidence: http://bora.uib.no/handle/1956/12644 Answers: username_1: Added. I think it becomes "Proposisjonslogikk" in NN as well. username_1: Wait, we already had `Utsagnslogikk`. Merging them. Status: Issue closed username_0: Then we have ``` Utsagnslogikk BF Setningslogikk BF Junktorlogikk BF Proposisjonslogikk EN ... NN .. ```
microsoft/DeepSpeed
835589598
Title: [zero3] apex was installed without --cpp_ext. Falling back to Python flatten and unflatten. Question: username_0: Running the current master on one setup, I get: ``` [2021-03-18 22:12:42,472] [WARNING] [stage3.py:34:<module>] apex was installed without --cpp_ext. Falling back to Python flatten and unflatten. ``` Is it using a previously installed apex, or an apex that comes with DeepSpeed, and what needs to be done to make it fast? I don't see apex already installed on that setup, and I'm not using it in the ds config file. Thank you. Answers: username_0: It's because `try` doesn't check whether apex is even installed before checking for `apex_C`, so it needs to be fixed. However, we are discussing removing this code altogether in https://github.com/microsoft/DeepSpeed/issues/877, which would close this issue. Status: Issue closed
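A sketch of the guard described in the comment above: check that apex exists before probing for its `apex_C` extension. The helper name and structure are illustrative only, not DeepSpeed's actual code:
```python
import importlib.util

def cpp_flatten_available() -> bool:
    """Return True only if apex is installed *and* was built with --cpp_ext."""
    if importlib.util.find_spec("apex") is None:
        return False  # apex not installed at all: fall back silently, no warning needed
    try:
        import apex_C  # noqa: F401  # the extension produced by the --cpp_ext build
        return True
    except ImportError:
        return False  # apex is installed, but without --cpp_ext: the warning is justified
```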
ponylang/rfcs
184636854
Title: Type language for changing generated code based on type parameter constraints. Question: username_0: We're looking someone to make a proposal for a type language for changing generated code based on type parameter constraints, based on discussion here: https://github.com/ponylang/ponyc/issues/683 Answers: username_1: I'll write up something on that one. username_1: Done. See #62. Status: Issue closed username_0: Thanks @username_1! I'm going to close this ticket, since the RFC now exists as a PR and future discussion can be directed there.
github/docs
781580549
Title: Missing permissions for deleting a discussion Question: username_0: ### What article on docs.github.com is affected? [Repository permission levels for an organization](https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-organizations-and-teams/repository-permission-levels-for-an-organization) ### What part(s) of the article would you like to see updated? We need to add a new row in the table, near the other GitHub Discussions content, for deleting a discussion. ### Additional information People with Maintain and Admin permissions can delete a discussion ⚡ /cc @evi-liu Answers: username_1: PR https://github.com/github/docs/pull/2680 merged. Status: Issue closed
auth0/docs
95226027
Title: AWS API Setup does not match with AWS Portal Question: username_0: This doc is not updated since there is a new step asking for the role type. https://auth0.com/docs/aws-api-setup it should be based on http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-creatingrole-identityprovider.html also, the steps are wrong numbered Answers: username_0: @username_1 I believe you already fixed this, if so please close this bug. username_1: @username_0 No, I haven't worked on this doc before. I will make these changes. username_1: @username_0 What is the correct Role Type for this example? Im guessing Role for Identity Provider Access >> Grant API access to SAML providers. username_1: ![aws-api-setup-6a](https://cloud.githubusercontent.com/assets/12396567/11108256/55196130-88b3-11e5-9f49-8e6ac3df32ef.png) username_1: @username_0 Looks like you now have to create the custom policy **before** creating the role. I will add these steps. username_1: New PR#712 Status: Issue closed
vaadin/designer
444408093
Title: Easier way to add missing dependencies for pattern Question: username_0: Patterns are very nice in Designer. But currently, if there are missing dependencies, a user has to install the dependencies manually, which is not very convenient. Would be great if Designer could help to add the missing dependencies, e.g., by clicking a button in the popup info window. Designer Version: 4.3.0.beta1 Answers: username_1: We removed patterns section with the release yesterday, and moved one of them to the starting point list when you create a new design. None are broken anymore.
gatheringhallstudios/MHGenDatabase
422600741
Title: Weapon Names in Database? Question: username_0: Hey. Great work on this. I am exploring the database a bit to see what sort of info I can gleam from it but am not finding any Weapon Names/Weapon Families. Am I just overlooking an obvious table? Armor has an `armor` table and `armor_families` for names but I can't find an equivalent for weapons. Any guidance much appreciated! Status: Issue closed Answers: username_0: Nevermind I figured it out! Weapons and Items share an ID, so name and description and stuff just requires a join on the `items` table.
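For anyone else exploring the database, a sketch of that join in Python/sqlite3. The database filename and the `weapons` table name are assumptions for illustration; only the shared id and the `items` name/description columns come from this thread:
```python
import sqlite3

conn = sqlite3.connect("mhgen.db")  # path to the app's SQLite database (name assumed)
rows = conn.execute(
    "SELECT items._id, items.name, items.description "
    "FROM weapons JOIN items ON items._id = weapons._id"  # weapons and items share the same id
).fetchall()
for weapon_id, name, description in rows:
    print(weapon_id, name, description)
```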
jlippold/tweakCompatible
339645083
Title: `DockColor` not working on iOS 11.3.1 Question: username_0: ``` { "packageId": "org.thebigboss.dockcolor", "action": "notworking", "userInfo": { "arch32": false, "packageId": "org.thebigboss.dockcolor", "deviceId": "iPad6,11", "url": "http://cydia.saurik.com/package/org.thebigboss.dockcolor/", "iOSVersion": "11.3.1", "packageVersionIndexed": false, "packageName": "DockColor", "category": "Tweaks", "repository": "BigBoss", "name": "DockColor", "packageIndexed": false, "packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.", "id": "org.thebigboss.dockcolor", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.0.7", "shortDescription": "change color of Dock on Home Screen", "latest": "1.1-4", "author": "iNasser", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "not working", "notes": "" } ```
elixir-lang/elixir
316794026
Title: match? doesn't work with map pinned variable Question: username_0: root@phoenix:/var/app# elixir -v Erlang/OTP 20 [erts-9.1] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [kernel-poll:false] Elixir 1.5.2 root@phoenix:/var/app# iex iex(1)> match?(%{a: 1}, %{a: 1, b: 2}) true iex(2)> a = %{a: 1} %{a: 1} iex(3)> match?(^a, %{a: 1, b: 2}) false Is this a bug? How to match against whole map pattern in Elixir? Status: Issue closed Answers: username_1: @username_0 a pattern is not the same as a value. For example, I can have a pattern such as `[_ | _]` but there is no such list as a value. With that in mind, note that ^a compares values, and not patterns. So your last example would only match if both maps were fully equal.
cedmax/youmightnotneed
397798334
Title: Moment - format - format simple test fails Question: username_0: Was looking into adding an object values example for lodash but found simple format test for moment is currently failing. ``` var date = new Date(Date.UTC(2012, 11, 20, 3, 0, 0)); new Intl.DateTimeFormat('it-IT', { year: 'numeric', month: '2-digit', day: '2-digit', }).format(date) ``` returns `"20/12/2012"` rather than `2012-12-20` Answers: username_1: interesting, not sure why this wasn't caught in the build or on my local. I'll look into it, thanks for the heads up username_1: @username_0 I don't get the same error on local or building on netlify, may I ask you your version of node and whether you installed the dependencies with yarn or npm? username_0: I'm using node 9.11.2 with yarn. If I run the same code in the Safari console I get the same result. username_1: sooo, I didn't manage to replicate the issue, could you do me a favour and try to run this test on your machine? https://github.com/srl295/btest402 it's basically running [this file](https://raw.githubusercontent.com/srl295/btest402/master/btest402.js) in node to test the Intl support. I'm sorry to put you through this but I can't think of anything else :( username_0: with 9.11.2 I get `SUMMARY:Have Intl, Date:no 'tlh', Date:bad 'mt', Date:no 'ja',` username_1: exactly my result. I'm completely lost here, sorry. Please feel free to open the PR with the test failing: as I said the build is going through and alas I don't have enough understanding of the problem (and time now, tbh) to investigate the issue properly :( What I can tell is that in the browser (any browser) I get the same result you get, but in node it works. I take away that probably the `Intl` is not the best option to achieve deterministic results. Status: Issue closed username_1: I decided to remove momentjs from the website in favour of an external resource. Hence this is not relevant anymore
ThePotatoKing55/FavoritesWidgetizer
708506804
Title: Major bug Question: username_0: Great app has potential but there’s a super annoying bug. I set 4 contacts for call and added the widget to my Home Screen. I click one of the names to call and it go’s to favorites page settings and that’s it. Doesn’t give me the option to call. I have to exit out of the favorites page and retap on the person 5-6 times just to get the call option again. It does that with every contact. It does it for 4 contacts and just one. For me it’s unreliable. Anyone else have this problem? Answers: username_1: That's really really strange. What phone are you running on? username_0: iPhone 11 username_0: Okay so what I do is click on a contact it pops up as call then hit cancel. Then I just swipe up to go to home and then retry to call a contact and it happens. I have to force close the app and then it’ll work again. username_1: That's so strange, I can't reproduce it on my device. Could you try reinstalling the app? username_0: Yes I can. Is there a way I can send you a screen recording ? I’ll change my contacts to test just so I can show. I will uninstall and reinstall though. username_1: I think you can attach them to comments by dragging+dropping. Keep me posted! username_0: This is what I was talking about. https://share.icloud.com/photos/07urKgQ7Ma__8fOq9V3XsflKw username_0: This is what I was talking about. Sorry it took me so long to get back with you. https://share.icloud.com/photos/07urKgQ7Ma__8fOq9V3XsflKw
wpilibsuite/allwpilib
1003605154
Title: Glass Freezing Issues Question: username_0: OS: MacOS Big Sur 11.3.1 Software: Glass Project: RomiReference Example project, Java After making a new RomiReference example project, I turn on my Romi (with the latest WPILibPi and firmware version installed). I connect my computer to the Romi network and then I run the "Simulate Robot Code On Desktop" Command. Once the glass GUI shows up, I click on the System Joysticks box and the whole GUI screen will no longer accept any input (can't move anything around, can't toggle state, etc.) except closing the glass screen. After terminating that simulation, I run it again. Now I click on the FMS box and the screen works normally. Everything but the FMS box and the other devices box causes this behavior. Answers: username_1: What version of wpilib? Does this only happen with the Romi, or any glass/simgui program?
LadioHadClip/lhc-native-page
507964767
Title: Coding Practice: LeetCode, WC158 (overslept~) Question: username_0: #### Preface I have never really had the habit of keeping notes, which doesn't feel great, and recently I've felt the need to practice coding a bit (because I'm honestly quite bad at it), so this is the start of the first series, Coding Practice. The main focus will be on LeetCode contests and Kick Start, mixed with some other things; hopefully there will be some improvement after a while. ## Weekly Contest 158, Oct.13 10.30 a.m -- 12.00 p.m. * [Split a String in Balanced Strings](#Split) ### 1. [(1221, Easy) Split a String in Balanced Strings](https://leetcode.com/contest/weekly-contest-158/problems/split-a-string-in-balanced-strings/) ***Problem***: Balanced strings are those who have equal quantity of 'L' and 'R' characters. Given a balanced string s split it in the maximum amount of balanced strings. Return the maximum amount of splitted balanced strings. We are asked to return the maximum number of balanced substrings after splitting, where a balanced string is one containing equal numbers of the characters L and R. ***Examples***: ``` [1] Input: s = "RLRRLLRLRL" Output: 4 Explanation: "RL", "RRLL", "RL", "RL" [2] Input: s = "RLLLLRRRLR" Output: 3 Explanation: "RL", "LLLRRR", "LR" [3] Input: s = "LLLLRRRR" Output: 1 Explanation: "LLLLRRRR" ``` ***Solutions***: How do we maximize the number of strings? We just need every piece produced by the split to be indivisible. Concretely, split greedily from left to right: whenever the counts of R and L balance, make a cut; clearly each resulting piece cannot be split any further. (Although silly me first thought of using a stack, which amounts to much the same thing......) ```C++ int solution(string s){ int res = 0, cnt = 0; for (const auto& c : s) { cnt += c == 'L' ? 1 : -1; if (cnt == 0) ++res; } return res; } ``` ```Python def solution(s: str) -> int: res = cnt = 0 for c in s: cnt += 1 if c == 'L' else -1 if cnt == 0: res += 1 return res ``` ### 2. [(1222, Medium) Queens That Can Attack the King](https://leetcode.com/contest/weekly-contest-158/problems/queens-that-can-attack-the-king/) ***Problem***: On an 8x8 chessboard, there can be multiple Black Queens and one White King. Given an array of integer coordinates queens that represents the positions of the Black Queens, and a pair of coordinates king that represent the position of the White King, return the coordinates of all the queens (in any order) that can attack the King. [Truncated] [3] Input: n = 3, rollMax = [1,1,1,2,2,3] Output: 181 ``` ***Solutions***: This problem is also fairly simple. A straightforward idea is to expand outward from the King's position in the eight directions; the first Queen encountered in each direction is part of the answer. Since the input is a list of Queen coordinates rather than a board mask, instead of converting the coordinates into a board we can flip the approach: scan the list of Queens one by one, compute the direction of each Queen relative to the King, and keep only the nearest Queen per direction, which also saves the conversion step. (Although it is actually a bit more awkward to code......) ```C++ int solution(int n, vector<int>& rollMax) { } ``` ```Python def solution(n: int, rollMax: List[int]) -> int: ```
vitamin-caig/zxtune
511661095
Title: JniLibrary.java line 12 Question: username_0: #### in * Number of crashes: 1 * Impacted devices: 1 There's a lot more information about this crash on crashlytics.com: [https://fabric.io/vitamins-projects/android/apps/app.zxtune/issues/87fbb0cba79ff9f1551117add071f47f?utm_medium=service_hooks-github&utm_source=issue_impact](https://fabric.io/vitamins-projects/android/apps/app.zxtune/issues/87fbb0cba79ff9f1551117add071f47f?utm_medium=service_hooks-github&utm_source=issue_impact)<issue_closed> Status: Issue closed
pocketjoso/penthouse
275415977
Title: Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefined Question: username_0: For some URLs Iam checking Iam getting an strange error, which is puppeteer releated. Think of updating to a newer version. https://www.bountysource.com/issues/51332369-when-would-an-error-cannot-find-context-with-specified-id-undefined-happen Also I want to mention that we need to install a lot of dependeciens with puppeteer. So I will have 3 new users in my passwd file. Keeps me feeling bad. The problem with the failing id is much more important for me. Answers: username_1: Based on that discussion, it should be fixed in the next puppeteer release (0.14) - so let's retry then. username_1: I know puppeteer install can be quite painful, hopefully that will be improved in the library itself. If you really want to you could revert to `0.11.13`, the version that still used phantomjs. It's no longer maintained however. username_0: I hope so. There is a little difference in my error message and the one in the thread. Its about execution context was destroyed and not an id is missing. Returning to an older version is not an option. Iam not into using software without updates. :) username_0: @username_1 what do you expect as output from debuglog function? Code like this: ```javascript stdErr += debuglog('call generateCriticalCssWrapped'); ``` results in _stdErr = undefinedundefinedundefined_ username_0: Ok I solved the problem by simply adding a return ""; to the end of the debuglog function. PR incoming username_0: @username_1 I opend an issue at puppeteer and they had some interesting information. The result would be that you should include these handlings because it could happen to anyone. https://github.com/GoogleChrome/puppeteer/issues/1591#issuecomment-355245777 username_1: @username_0 thanks for the digging - indeed it would be a good feature to add to Penthouse to lock the page from navigating during the critical css generation, to avoid these errors. If you're interested feel free to work on a PR, otherwise as usual I'll get around to it eventually. Please let me know here if you are going to start such work so we can avoid duplicate work! username_0: I will let you know when I start working. username_0: So I will start now to check if we can fix this issue username_1: Great, thanks, I will leave it to you then and review your PR when it comes. username_0: After some digging it seems we have an race condition with pageLoadSkipTimeout and the successfull return of the response. Here is my setup: ```javascript timeout: 90000, pageLoadSkipTimeout: 10000, renderWaitTime: 200, blockJSRequests: true, ``` I tried some 1000 tests. with one url of our own servers. nginx access log was helping me. - every time penthouse finished successfully was when the page responded under 10 seconds - every time the response took more than 10s penthouse didn't wait any longer for the response and the client quit the connection with status 499, although we had a timeout set of 90s - in a small chance of having the response in 10 to 10.2 seconds there is a race condition where the client gets the status 200 and while the resonse from the server is incoming we decide to cut the connection. 
Here are the logs: ` // Response in < 10s | status: 200 | bytes sent: 501439 | time till response: 7.367 [31/Jan/2018:17:23:37 +0100] 10.111.26.101 - - - https 200 501439 7.367 // Response in > 10.2 | status: 499 (Client disconnected) | bytes sent: 0 | time till response: 10.960 ( due to abort) [31/Jan/2018:16:47:15 +0100] 10.111.26.101 - - - https 499 0 10.960 // Response in 10s - 10.2s | status: 200 | bytes sent: 490601 | time till response: 10.086 [31/Jan/2018:17:24:43 +0100] 10.111.26.101 - - - https 200 490601 10.086 ` The important thing is that we got a status 200 but not the whole response data was transfered because we aborted the connection before. Next step is to check why we are doing this. Thinking of the strange sleep function in purneNonCriticalSelectors.js. Maybe there is a race condition. username_1: Interesting. Check the [pageLoadSkipTimeout logic](https://github.com/username_1/penthouse/blob/master/src/core.js#L117-L142), if you didn't already. It doesn't reject the page loading (shouldn't quit the connection), but we do stop waiting for it after the time specified, and move on to start extracting the critical css. For your sake, what happens if you increase the `pageLoadSkipTimeout` to a really high value, does that make this error go away in your setup? username_0: Iam on it. 👍 If I increase the pageLoadSkipTimeout to 20000 it never happened in my tests. username_0: Here we go ``` penthouse:core page load waiting ABORTED after 10.1s. +10s penthouse:core page load DONE +1ms penthouse:core build selector profile +1ms [0201/104704.874645:INFO:CONSOLE(5)] "debug: pruneNonCriticalSelectors", source: (5) penthouse:core pruneNonCriticalSelectors +10ms penthouse:core cleanupAndExit +11ms penthouse remove browser page for generateCriticalCss after ERROR, now: 0 +10s penthouse closed browser +9ms Error: Protocol error (Runtime.callFunctionOn): Execution context was destroyed. undefined ``` username_1: Cannot reproduce. I lowered pageLoadSkipTimeout to `2500` and ran penthouse with two slow domains, works as expected. Please try your own setup with some other page urls, that you don't own. Do you still get the same problem? Are you on the latest penthouse version/master? username_0: This only happens in a very short time frame. I have an domain which runs on a cache but when I refresh the cache it could happen that it needs up to 10s for pageload. I need to run this workflow up to 50 times and then I get the hit. Problem as you can see above is that the pruneNonCriticalSelectors script, which is injected in the puppeteer page runs into an error because the page is not loaded completely. ''' await Promise.race([ loadPagePromise, new Promise(resolve => { // instead we manually _abort_ page load after X time, // in order to deal with spammy pages that keep sending non-critical requests // (tracking etc), which would otherwise never load. // With JS disabled it just shouldn't take that many seconds to load what's needed // for critical viewport. setTimeout(() => { if (waitingForPageLoad) { debuglog( 'page load waiting ABORTED after ' + pageLoadSkipTimeout / 1000 + 's. ' ) resolve() } }, pageLoadSkipTimeout) }) ]) ''' Here is where the problem seems to come from, but I can't say why. I replaced the Promise.race logic with Promise.all and used rejects to determine when one is finished. Also I injected the pageload INTO the page with evaluate. 
The reason is: - pageLoadSkipTimeout should start with the response of the document because otherwise it would do the same as timeout - when started after document (the first request/response) it is a real counter for "what comes after" and can be more precise - if evaluated in browser page it can trigger window.stop() which stops all further requests Sideeffects: injecting and evaluating the time consts some time (~100ms) but I guess this is ok due to the fact that it seems to be more natural. Im in testing now to see if that fixed my problem username_0: So Iam finished and the fix is comfirmed working. Iam getting ``` https 200 490601 7.970 ``` so it means the page is not loaded completely. This is due to my very short pageLoadSkipTimeout frame. In the previous version we got the problem, that it broke the process and there was no css generated. In my fix it worked as shown below: ´´´bash penthouse:core new page opened in browser +73ms penthouse:core viewport set +3ms penthouse:core blocking js requests +2ms penthouse:core page load start +0ms penthouse:core RESPONSE URL: xxx +8s penthouse:core pageLoadSkipTimeout injected on dom creation +0ms penthouse:core pageLoadSkipTimeout [10] +25ms penthouse:core pageLoadSkipTimeout - page load waiting ABORTED after 0.01s. +15ms penthouse:core RACE RESULT: pageLoadSkipTimeout +0ms penthouse:core page load DONE +1s penthouse:core build selector profile +66ms penthouse:core pruneNonCriticalSelectors +30ms penthouse:core waited for renderWaitTime: 500 +509ms penthouse:core filterSelectors BEFORE +0ms penthouse:core filterSelectors AFTER +92ms penthouse:core pruneNonCriticalSelectors done, now cleanup AST +5ms ´´´ so no breaking anymore. Will serve a pull request asap username_0: So after a real test with our companys setup it seems the problem is still there. The purpose of injecting the pageLoadTimeout script was to being able to do window.stop(), which was a try to prevent the page from doing wrong. Iam not sure of the `pruneNonCriticalSelectors `script leads to the page crash. But the error comes from page.evaluate. ```bash penthouse:core page.evaluate - ERROR Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefined ``` And you are right. I will try to find out why it is crashing. username_0: To get deeper into the code I have some questions: Iam using a child_process to launch penthouse multiple times at once. At the moment 1 per core. `let worker_process = child_process.fork(path.join(workerLib), options, processOptions);` I found the logic for browserPagesOpen and Iam not sure when there will be more than 1 page opened in one browser. How could this happen when you only allow one url in a penthouse call? To get deeper in our problem right now. When using the debug logs I found that the first browser crashes after the first cleanupAndExit script is running. ``` penthouse:core page load DONE +0ms penthouse:core build selector profile +41ms penthouse:core pruneNonCriticalSelectors +25ms penthouse:core cleanupAndExit start +381ms penthouse:core cleanupAndExit end +1ms penthouse remove browser page for generateCriticalCss after ERROR, now: 0 +8s Chromium unexpecedly not opened - crashed? _browserPagesOpen: 1 url: https://.... AST children: 5099 restarting chrome after crash penthouse no browser instance, launching new browser.. +6ms ``` This makes no sense to me because when the browser is a child process how could the cleanup script of child process 1 kill the browser of child process 2? 
To be clear: Iam not sure that this is really happening. Just discussing username_0: Ok after I adjusted my test directly in penthouse I can confirm that the reason for the error `page.evaluate - ERROR Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefined` is related to the child process logic and penthouse. To reproduce use this test: - create a folder "tmp" - create a file test.mjs with following content: ```javascript import childProcess from 'child_process' const workerToStart = 20 // this is the trick > 5 is ok in my case, 10 sometimes, but 20 everytime failes const url = 'https://jonassebastianohlsson.com/criticalpathcssgenerator/' for (let i = 0; i < workerToStart; i++) { console.log("WORKER STARTED %s", i) let workerProcess = childProcess.fork('./tmp/pmodule.mjs', [url, i]) workerProcess.on('error', err => console.error("WORKER ERROR", err)) workerProcess.on('close', (code) => { console.log("WORKER ENDED %s", i) }) } ``` - create a file called pmodule,mjs ```javascript import penthouse from '../lib' const url = process.argv[2] const i = process.argv[3] penthouse({ url: url, // can also use file:/// protocol for local files cssString: 'body { color; red }', // the original css to extract critcial css from // OPTIONAL params width: 1300, // viewport width height: 900, // viewport height keepLargerMediaQueries: true, // when true, will not filter out larger media queries timeout: 60000, // ms; abort critical CSS generation after this timeout pageLoadSkipTimeout: 5000, // ms; stop waiting for page load after this timeout (for sites with broken page load event timings) maxEmbeddedBase64Length: 1000, // characters; strip out inline base64 encoded resources larger than this userAgent: 'Penthouse Critical Path CSS Generator', // specify which user agent string when loading the page renderWaitTime: 500, // ms; render wait timeout before CSS processing starts (default: 100) blockJSRequests: true, // set to false to load (external) JS (default: true) strict: false, // set to true to throw on CSS errors puppeteer: { getBrowser: undefined // A function that resolves with a puppeteer browser to use instead of launching a new browser session } }).then(criticalCss => { console.log("FINISHED") process.exit() }).catch(err => { // handle the error console.log(err) process.exit() }) ``` - run `DEBUG="penthouse*,-penthouse:css-cleanup*,-penthouse:preformatting*" node --experimental-modules tmp/test.mjs` username_1: Thanks for looking closely into this. I will take a closer look later, but note in general (not document enough I guess) regarding parallel processing and penthouse: * by default, if you call penthouse again while another call is still in process, penthouse will re-use the same chromium (puppeteer) browser, and just open a new tab instead. This is a bit faster than open and closing browsers. * if you pass the experimental (and intentionally not documented) `unstableKeepBrowserAlive` param, the launched browser will always be re-used (new tabs opened) and the browser will never be closed by Penthouse. Useful only if you are running a long-running node process that you will later kill manually (which will close the browsers) * you can also launch a puppeteer browser yourself (see docs but basically `puppeteer.launch().then((browser)`), in which case behavior will be the same as above - new tabs, and `Penthouse` will never kill the browser - it is now your responsibility. 
This way you can get much more efficient critical css generation - however how many cores it will utilise is up to your node configuration. --- I have never looked into spawning child processes for Penthouse calls, so I can't say anything yet about how it interacts with above parallelization logic. I am however running penthouse effectively in large scale just relying on the parellization, without child processes, and it works well for me. username_0: Thanks for the details. Unfortunately I can not see that penthouse will use the other browser. Thats why Iam asking. You can reproduce this by using the test above. As for the debug logs it shows that there is always a new browser. That was the reason why I was asking. To the problem: ```javascript await page.waitFor(100) const criticalSelectors = await page.evaluate(pruneNonCriticalSelectors, { selectors, renderWaitTime }) ``` With this `await page.waitFor(100)`it runs flawlessly through all my files and urls. Only the browser was crashing sometimes but you recovery logic works as expected. Also my test above with 20 workers is now running without any error. So the conclusion is that this is indeed related to puppeteer and not a race condition in penthouse. Here is the task I was reading about a similar problem https://github.com/GoogleChrome/puppeteer/issues/1325#issuecomment-357645094 username_0: I fixed 2 missing error rejects and it now restarts on execution context losing like you wanted it to be. The execution context problem is nearly gone but this is a real puppeteer issue and also not fixed in 1.0. With the current status it is working as it should and I didn't lost a single page request because of the restarting after error. Having 8 parallel penthouse process running with each at least 5 urls to check and 24 different files to process. Had 3 times crashed browser (context lost) and this was NOT due to navigating away. The main impact was `page.waitFor(500)` which leads to smoother running. Check the pull request and tell me if you are not happy with anything. By the way: I really like your effort in writing penthouse but I need to say that in my opinion the code is ulgy. In its functional way it is very hard to read. What do you thing of refactoring to es6 classes and less event bouncing ( Promises )? username_1: @username_0 will have to take time and give you a response later, but just quickly - I don't see any pull request, did you miss to send it? username_0: #227 here you go. I was just writing the comment :smiley: username_1: The code can surely be improved as it has mostly stayed in the same format since the beginning of the repo, when I was quite new to JS - but I am not sold on switching to Class usage. Making the code clean is not the only goal I have in mind, I also try to make Penthouse as fast as possible, which sometimes does lead to more complicated code, see #224 f.e. where I'm trying some increased parallelisation. Regarding promises (and async/await), I don't see how you mean that would change with a move to classes (async code would remain async code)? username_1: Reflecting a bit on what's going on here too, I want to say that: - regardless of everything else - patching up the error handling in these _edge_ cases in Penthouse is a good thing, so thanks for working on this with me! - I wonder however if you would really come across these errors at all if you used penthouse in a more standard way. To clarify: in your test example you are starting up 20 chromium browsers (and page loads) at once. 
It is not unlikely that you run out of memory or even that your CPU becomes super busy, causing all the issues you see (that don't happen normally). Can you try running in a more recommended way, see below (I will put these examples in the README tomorrow): ```js import puppeteer from 'puppeteer' import penthouse from '../lib' const url = process.argv[2] // 20 here still means 20 tabs at the same time (but in the _same_ browser); // my recommendation would be to cap at 5 and create a simple queue instead const urlsToTestInParallel = 20 const url = 'https://jonassebastianohlsson.com/criticalpathcssgenerator/' puppeteer.launch().then(browser => { for (let i = 0; i < urlsToTestInParallel; i++) { penthouse({ url, ...otherOptions, puppeteer: { getBrowser: () => browser } }) .then(criticalcss => { // finished }) } }) ``` @username_0 can you please try this on your machine on `master` in penthouse and see if you get fewer errors? username_0: Iam back. Sorry for being late. I tried the browser code, but unfortunately puppeteer.launch() always run into an error "kill ERSCH". Dont know why that happen. Maybe there is something strange in my configuration. username_1: I see that the example I gave for you to run was not very smart (and it also didn't have the enough config for launching puppeteer safely). I recently added more documentation about how to run many urls at the same time - can you try that example instead? https://github.com/username_1/penthouse/blob/master/examples/many-urls.js username_1: The heaviest work in Penthouse is running Chromium (via Puppeteer) to load pages and prune the critical css (and optionally take screenshots). This is done in one tab per job, and Chromium does take advantage of all your cores. What happens outside of Chromium is less CPU intensive, and if you run enough jobs in parallel you should not see much idle CPU time, at least I don't. So it depends on your requirements, but it is most likely not very beneficial for you to spawn separate processes. Your different cores are already in use, and starting multiple processes yourself means you will, I think, waste resources having one browser instance per process, instead of sharing it. username_0: So testet my original code of my project with the 1.4.1 version of penthouse. It works as expected. Changed my code to NOT use child process as explained by you above. Problem: Still the error I fixed in my PR. `Error: Protocol error (Runtime.callFunctionOn): Cannot find context with specified id undefine` So I can't test the speed impact of my multi process version. Will try on penthouse master with the examples. username_0: Basic example is working. But has an error in it. The cssString is `color; red` and should be `color: red` But it is working. So puppeteer.launch is the problem username_0: So tested everything again and now its working. Used my windows 10 x64 machine and ubuntu 16.10 x64. But I would say in my use case Iam faster with child processes. Because there is about 10 urls per entry to check for one child process. And this in parallel. I can reach much faster execution with 10x10 and my cores are way more used. Anyway I could use a browser instance for each of the 10 childs. How to proceed now? The issue still exists and I could fix them with my PR. I need that thing and it is even without child processes a problem. username_1: As much as possible, I would like to avoid bigger changes in Penthouse in response to: puppeteer bugs and/or things that only effect a minority of users. 
As it stands your PR is quite big and I'm not sure how much of it is needed to fix your problems. Could you start splitting it so we can get more clarity? What is the minimal change required to fix your issue? After we talked a bit more I described why I want the page load logic to work the way it does. And some of your error handling I already merged into master. What remains that makes a difference for you? If you can identify that and open in a _new_ PR, it will be easier for me to merge. All the commits that had nothing to do with your problem (like fixing/updating deps), please open separate PR's or leave these changes for now. This will also make them easier for me to merge - a PR that just upgrades deps _without breaking tests_ is generally a no-brainer! Thanks for your patience @username_0! username_0: You just want the best for penthouse. The same as I do. I may be a poweruser of penthouse, but that don't mean we can't optimize it to be a fit for everyone. Anyway Iam not a fan of creating another option value for adding the waitFor topic. Because it is indeed a problem for ALL puppeteer users. It could happen every time. And this "workaround" is just in the penthouse code until they fixed their problems. My problem is that I can't wait them to fix it, because it seems this could take ages. So I implement the workaround for it till they fix it. :) To be honest, we are talking about 500ms. I don't think that anyone will complain about it while he knows that there won't be any broken page requests or crashes because of that. But I admit that it is just my opinion. Yours is the one that counts. username_0: PRs are supplied username_0: After creating my own library: https://www.npmjs.com/package/crittr I was able to get rid of all the issues penthouse had. Benefits are cleaner code, faster code, more features. Thanks for your inspiration. Status: Issue closed username_1: Good job @username_0 - long live open source! :) Just started checking it out; I created a PR for a benchmark fix: https://github.com/username_0/crittr/pull/3
cms-sw/cmssw
1080246461
Title: Is MallocOpts still useful?
Question:
username_0: Question raised in https://github.com/cms-sw/cmssw/pull/36467
Answers:
username_0: assign core
username_0: According to `git grep` it is used only in the `SimpleMemoryCheck` Service https://github.com/cms-sw/cmssw/blob/5e081489c9027255e45cd174b2ac14af1cb68a9d/FWCore/Services/plugins/SimpleMemoryCheck.cc#L384-L412
username_0: Seems to me that this piece in `SimpleMemoryCheck` has had no functional changes since its introduction in 2007 in dd0a0cbd60fd. I don't see any of the four configuration parameters being set anywhere except in a test configuration https://github.com/cms-sw/cmssw/blob/master/FWCore/Services/test/mallocopts_cfg.py . I'd be in favor of removing it (but I'm not sure whether someone somewhere is actually using them).
alphagov/smart-answers
79076505
Title: TypeError: can't convert nil into Integer Question: username_0: See the [Errbit report][] for more information. This exception is raised when the response to a `value_question :name, parse: Integer` is nil. I can't see that this is possible through the web interface but we should protect against it nonetheless. @username_1 mentioned that this might become a problem in commit 28785dd848d206e80e82c6c147b0db719258862a. You can see the error by visiting http://smartanswers.dev.gov.uk/calculate-your-child-maintenance/y/pay/1_child/no/400.0?next=1. The key is the lack of 'response=' in that URL. [Errbit report]: https://errbit.production.alphagov.co.uk/apps/533c35ae0da1159384044f5f/problems/555dc6746578635c07b60300<issue_closed> Status: Issue closed
TeamNULLDummy/T_NULL
188449913
Title: How to get the object id of the DB child when there is a nested ng-repeat
Question:
username_0: Since it is not possible to get the $id of a child when using "g in p.Team", we have to figure out other ways to do it.

Solution:
We should use "(key, g) in p.Team" instead of "g in p.Team"; then we can get the id by passing key into the controller's function.
p7g/patina
970970072
Title: Using take() in the arguments of another method call on the same option is broken Question: username_0: Since Python reads the method from the object before evaluating the arguments, the wrong method gets called (since `take()` changes `__class__`). For example: ```python opt = Some(123) opt.replace(opt.take().unwrap()) ``` Ends up calling `Some.replace` even though when the method actually gets called the object has become a `None_` :scream:<issue_closed> Status: Issue closed
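A possible workaround until this is fixed in the library (my own sketch, not from the issue; it assumes the package exposes `Some` at the top level, as the snippet above implies) is to bind the argument to a variable first, so the method is looked up only after `take()` has already switched the class:

```python
from patina import Some  # assumed import path, matching the snippet above

opt = Some(123)

# Evaluate take() first: by the time replace() is looked up,
# the object has already become None_, so the right method is called.
value = opt.take().unwrap()
opt.replace(value)
```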
Apollon77/ioBroker.tuya
803806276
Title: Indicator Reachable switching true/false Question: username_0: Hello, first of all, nice work! But I got a strange problem right now, my three Tuya Devices, two of them are switchable sockets, are constantly switching between beeing reachable and not reachable. Logs says following: ` tuya.0 | 2021-02-08 19:12:33.577 | debug | (26376) 6003468550029110afc2: Error from device (20): App still open on your mobile phone? Error: Error from socket -- | -- | -- | -- tuya.0 | 2021-02-08 19:12:28.698 | debug | (26376) stateChange tuya.0.6003468550029110afcd.7 {"val":false,"ack":true,"ts":1612807948667,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301709} tuya.0 | 2021-02-08 19:12:28.690 | debug | (26376) stateChange tuya.0.6003468550029110afcd.6 {"val":0,"ack":true,"ts":1612807948628,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301693} tuya.0 | 2021-02-08 19:12:28.663 | debug | (26376) stateChange tuya.0.6003468550029110afcd.5 {"val":0,"ack":true,"ts":1612807948628,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301636} tuya.0 | 2021-02-08 19:12:28.661 | debug | (26376) stateChange tuya.0.6003468550029110afcd.4 {"val":0,"ack":true,"ts":1612807948627,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301615} tuya.0 | 2021-02-08 19:12:28.658 | debug | (26376) stateChange tuya.0.6003468550029110afcd.2 {"val":0,"ack":true,"ts":1612807948626,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301593} tuya.0 | 2021-02-08 19:12:28.653 | debug | (26376) stateChange tuya.0.6003468550029110afcd.1 {"val":false,"ack":true,"ts":1612807948625,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807826107} tuya.0 | 2021-02-08 19:12:28.612 | debug | (26376) 6003468550029110afcd: Received data: {"1":false,"2":0,"4":0,"5":0,"6":0,"7":false} tuya.0 | 2021-02-08 19:12:23.582 | debug | (26376) 6003468550029110afc2: Error from device (19): App still open on your mobile phone? Error: Error from socket tuya.0 | 2021-02-08 19:12:13.576 | debug | (26376) 6003468550029110afc2: Error from device (18): App still open on your mobile phone? Error: Error from socket tuya.0 | 2021-02-08 19:12:03.578 | debug | (26376) 6003468550029110afc2: Error from device (17): App still open on your mobile phone? Error: Error from socket tuya.0 | 2021-02-08 19:11:53.573 | debug | (26376) 6003468550029110afc2: Error from device (16): App still open on your mobile phone? Error: Error from socket tuya.0 | 2021-02-08 19:11:52.172 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.online {"val":false,"ack":true,"ts":1612807912161,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807912161} tuya.0 | 2021-02-08 19:11:52.155 | debug | (26376) bf04470fb911f9c88ansmu: Disconnected from device tuya.0 | 2021-02-08 19:11:43.592 | debug | (26376) 6003468550029110afc2: Error from device (15): App still open on your mobile phone? 
Error: Error from socket tuya.0 | 2021-02-08 19:11:40.264 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.26 {"val":0,"ack":true,"ts":1612807900192,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301040} tuya.0 | 2021-02-08 19:11:40.262 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.25 {"val":"000d0d00000003e803e800000000","ack":true,"ts":1612807900190,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807301 tuya.0 | 2021-02-08 19:11:40.259 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.24 {"val":"001c03e803e8","ack":true,"ts":1612807900189,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807300938} tuya.0 | 2021-02-08 19:11:40.242 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.23 {"val":1000,"ack":true,"ts":1612807900183,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807300849} tuya.0 | 2021-02-08 19:11:40.218 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.22 {"val":1000,"ack":true,"ts":1612807900182,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807300684} tuya.0 | 2021-02-08 19:11:40.215 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.21 {"val":"1","ack":true,"ts":1612807900181,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807300641} tuya.0 | 2021-02-08 19:11:40.212 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.20 {"val":false,"ack":true,"ts":1612807900180,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807300573} tuya.0 | 2021-02-08 19:11:40.193 | debug | (26376) stateChange tuya.0.bf04470fb911f9c88ansmu.online {"val":true,"ack":true,"ts":1612807900159,"q":0,"from":"system.adapter.tuya.0","user":"system.user.admin","lc":1612807900159} tuya.0 | 2021-02-08 19:11:40.166 | debug | (26376) bf04470fb911f9c88ansmu: Received data: {"20":false,"21":"colour","22":1000,"23":1000,"24":"001c03e803e8","25":"000d0d00000003e803e800000000","26":0} ` Status: Issue closed Answers: username_1: seems that the device is ending the socket ... or there is a wlan reachability issue ... try to remove power from device and try again. or check wlan quality
haizlin/fe-interview
598356792
Title: [vue] When using elementUI's table component with multiple pages, how can checkbox selections be kept across pages?
Question:
username_0: When using elementUI's table component with multiple pages, how can the multi-select checkboxes keep their selections across pages?

[I want to submit questions too](http://web.haizlin.cn/interview/)
Answers:
username_1: element-ui provides reserve-selection. It only works on columns with type=selection, its type is Boolean, and when set to true it remembers the previously selected data after the data is updated. (A row-key must be specified.)

Found this answer on Baidu; I had never paid attention to it before.
algolmaster/DP
440299624
Title: Coin 2 (동전 2)
Question:
username_0: There are n kinds of coins. We want to use these coins so that the sum of their values is exactly k won, while using as few coins as possible. Each coin may be used any number of times.

Combinations that use the same set of coins in a different order count as the same case.

**Input**

The first line contains n and k. (1 ≤ n ≤ 100, 1 ≤ k ≤ 10,000) Each of the next n lines contains the value of one coin. A coin's value is a natural number less than or equal to 100,000. The same coin value may be given more than once.

**Output**

Print the minimum number of coins used on the first line. If it is impossible, print -1.

https://www.acmicpc.net/problem/2294
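The record above only states the problem. As an illustration (not part of the original issue), a minimal Python sketch of the standard unbounded-knapsack DP it calls for could look like this:

```python
import sys

def main():
    data = sys.stdin.read().split()
    n, k = int(data[0]), int(data[1])
    coins = [int(x) for x in data[2:2 + n]]

    INF = float("inf")
    dp = [0] + [INF] * k  # dp[a] = minimum number of coins that sum to a
    for coin in coins:
        for amount in range(coin, k + 1):
            if dp[amount - coin] + 1 < dp[amount]:
                dp[amount] = dp[amount - coin] + 1

    print(dp[k] if dp[k] != INF else -1)

if __name__ == "__main__":
    main()
```

This runs in O(n·k) time and O(k) memory, which is comfortably within the stated limits (at most 100 × 10,000 states).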
duffn/theouterrim
679748122
Title: Lannik species appears two times in species table Question: username_0: Lannik species appears two times in species table ![изображение](https://user-images.githubusercontent.com/19798113/90334520-807aa600-dfce-11ea-87dc-502c2b3e6b63.png) Status: Issue closed Answers: username_1: Thanks for reporting! Fixed here and deploying now. https://github.com/username_1/theouterrim/pull/311
cipchk/ngx-countdown
572280373
Title: Event notify is not emitted by any applicable directives nor by countdown element for angular 9 Question: username_0: <!-- ============ 请尽可能通过 https://stackblitz.com/edit/ngx-countdown-setup 重现问题 ============ --> ## Bug Report or Feature Request (mark with an `x`) <pre><code> [ ] Bug report -> please search issues before submitting [ ] Feature request [ ] Documentation issue or request </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> ## Expected behavior <!-- Describe what the desired behavior would be. --> ## Environment <pre><code> Angular version: X.Y.Z <!-- Check whether this is still an issue in the most recent Angular version --> ngx-countdown version: X.Y.Z <!-- Check whether this is still an issue in the most recent ng-zorro-antd version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] IE version XX Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre><issue_closed> Status: Issue closed
comunica/comunica
326990762
Title: Autogenerate command and API in packager Question: username_0: #### Issue type: - :heavy_plus_sign: Feature request ____ #### Description: `@comunica/packager` should autogenerate an export of the programmatic API and a command line tool (such as `comunica-sparql`). The generated code for this should be minimal, and should be abstracted in the comunica framework.
manistal/calmdownandgamble
611529233
Title: When using Curling and there is a tie, my add-on will have to be reset Question: username_0: Whenever we have a tie, the addon will just stop working and need to be reset. I can get some screen shots later if you'd like. Answers: username_1: Hey @username_0 - Just dropped an update today on curseforge, wowup and github: https://github.com/username_1/calmdownandgamble/releases/tag/v9.1.5 Can you let me know if the updated version works?
boostorg/config
319515380
Title: Fix boost (>= 1.66) on gcc 4.9.3 Question: username_0: When trying to use boost 1.66 on gcc 4.9.3 (not sure about 4.9.4; we use gcc 4.9.3 as it is the last "known good" version for `gccxml`) we encountered a strange bug related to gcc's handling of `__has_include`: The macro is defined but not implemented correctly. Our workaround was to conditionally undefine the macro when encountering gcc 4.9.x. ```diff diff -rpu boost_1_66_0_old/boost/config/stdlib/libstdcpp3.hpp boost_1_66_0/boost/config/stdlib/libstdcpp3.hpp --- boost_1_66_0_old/boost/config/stdlib/libstdcpp3.hpp 2017-12-14 00:56:42.000000000 +0100 +++ boost_1_66_0/boost/config/stdlib/libstdcpp3.hpp 2018-04-25 16:42:42.627694409 +0200 @@ -301,6 +301,11 @@ extern "C" char *gets (char *__s); # define BOOST_NO_CXX17_STD_APPLY #endif +#if defined(__GNUC__) && (__GNUC__ == 4) && (__GNUC_MINOR__ == 9) && defined(__has_include) +// gcc 4.9.x defines but does not implement it +#undef __has_include +#endif + #if defined(__has_include) #if !__has_include(<shared_mutex>) # define BOOST_NO_CXX14_HDR_SHARED_MUTEX ``` Answers: username_1: That's nasty, but the fix looks almost as bad.... not sure what to do about this one.
WeTransfer/WeScan
401453961
Title: Improve Quad Detection Question: username_0: The Dropbox apps comes with excellent document scanning capabilities. Its page/quad detection is very good -- better than `CIDetector` or `VNDetectRectanglesRequest`. It would great if WeScan can develop capabilities that approximates this. Dropbox did reveal some of the core ideas in their blog: [Fast and Accurate Document Detection for Scanning](https://blogs.dropbox.com/tech/2016/08/fast-and-accurate-document-detection-for-scanning/) The blog post doesn't contain any code. And one part of the pipeline requires human-created training data. Answers: username_1: @username_0 if you have anyone who has expertise in this that would be great :) I'm not an AI expert username_2: Not sure what other contributors think, but here are my two cents: I really don’t think a custom quad detection algorithm, even if very good, is a good idea. While a custom algorithm may be better, it would be hard to maintain, and if whoever contributed it stopped helping out, we’d be stuck with a unmaintainable detection algorithm. Furthermore, it would require constant training, and may add considerable weight to the framework. On the other hand, Vision (which we now use for iOS 11+) is getting trained by Apple and getting better each year, for example, I believe Apple spoke about how Vision has already improved in iOS 12. What do others think about this? Sent from my iPhone > username_3: At the very least it would helpful to allow options to be passed to your main controller to initialize the CIDetector with the CIDetectorAspectRatio. Currently, it appears that anything less than 0.5 cannot be scanned username_2: Hey @username_3, I don't see where we're currently forcing the aspect ratio? Just for additional information, which version of iOS do you happen to use? username_3: @username_2 I’m using iOS 12 on both devices. I’m not familiar with code, but have a hunch that the default value 0.5 of the vision framework is preventing me from scanning anything of interest (my documents are all rather tall; 2x6 or 2x8) https://developer.apple.com/documentation/vision/vndetectrectanglesrequest/2875378-minimumaspectratio username_2: iOS 11 and above uses Vision instead of CoreImage. I'm not sure if that's the issue but I'll check later. username_3: @username_2 Thanks for the reply. Unfortunately all 5 of my devices are above 12.x so I cannot confirm the behavior pre-11.x. As a test, with the WeScanSample app I modified VisionRectangleDetector::rectangle, line 50 with `rectDetectRequest.minimumAspectRatio = 0.3` I am able to scan my 2" x 6" and 3" x 10" documents now. Couple of questions, 1. I wasn't sure if I should move this over to issue: [#137](https://github.com/WeTransfer/WeScan/issues/137) 2. Would it be OK to make a pull request to add a minimumAspectRatio as a property of your main controller and pass it to CIDetector::CIDetectorAspectRatio and VNDetectRectangleRequest::minimumAspectRatio username_3: @username_2 Quick follow up; fwiw, I have confirmed that (my iOS devices > iOS 11.x) are able to scan the narrow documents using the CIRectangleDetector in CoreImage. username_3: I created a pull request that does the bare minimum of adding a default value 0.1 aspect ratio to allow many other documents to succeed rectangle detection in iOS 11.x and later. https://github.com/WeTransfer/WeScan/pull/139 username_4: Having a customizable value would of course be best :)
sporter399/kaggletest
506950978
Title: Export csv data to sql
Question:
username_0: I have acquired a data set that I'm fairly sure I want to work with, I've established some comfort level with Python pandas to acquire it, and I have run some experimental filters on it. All of this without any Vue, JS, or SQL.

I think the next step is to export that CSV data to SQL. I have found some leads online as to how to do so, but as of right now I do not see an SQL database established.

More specifically, lines 12 - 14 in kaggleRead.py attempt to do that export, but I'm fairly sure at this point it did not happen.
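The issue doesn't show the contents of kaggleRead.py, so purely as an illustration, a minimal pandas-to-SQL export sketch (the file, table, and database names here are made up) could look like this:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical names -- the issue does not show the real ones.
df = pd.read_csv("kaggle_data.csv")

# SQLite keeps the example self-contained; swap the URL for another backend.
engine = create_engine("sqlite:///kaggle_data.db")
df.to_sql("kaggle_data", engine, if_exists="replace", index=False)

# Quick sanity check that the rows actually arrived.
print(pd.read_sql("SELECT COUNT(*) AS n_rows FROM kaggle_data", engine))
```

If no database file shows up after a run like this, a common culprit is that the engine URL points somewhere other than the directory being inspected.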
ashleawalker29/ptcg_inventory
692042964
Title: [warning] Database disruption imminent, row limit exceeded for hobby-dev database on Heroku app ptcg-inventory Question: username_0: ``` The database DATABASE_URL on Heroku app ptcg-inventory has exceeded its allocated storage capacity. Immediate action is required. The database contains 11,183 rows, exceeding the Hobby-dev plan limit of 10,000. INSERT privileges to the database will be automatically revoked in 7 days. This will cause service failures in most applications dependent on this database. To avoid a disruption to your service, migrate the database to a Hobby Basic ($9/month) or higher database plan:https://hello.heroku.com/upgrade-postgres-c#upgrading-with-pg-copy If you are unable to upgrade the database, you should reduce the number of records stored in it. ``` Answers: username_0: There are a number of rows that have no values for `quantity_normal`, `quantity_reverse`, and `quantity_holo`. These can be deleted. Status: Issue closed username_0: Deleted all rows that don't have any numerical values, meaning they are not in my personal collection: ``` sql DELETE 9611 ``` There are now significantly less rows that fit within the 10,000 row threshold: ``` sql select count(*) from card_inventory_cards; count ------- 1684 (1 row) ```
kni-labs/rrssb
135172770
Title: Mobile Unreadable Question: username_0: When I used the js in the download ("js/rrssb.min.js") everything looked OK on desktop but mobile was too small to read with more than 3 buttons. Strangely enough the [demo ](http://kurtnoble.com/labs/rrssb/)switches to icons on mobile so not an issue. Using the [build.js](http://kurtnoble.com/labs/rrssb/js/build.js) from [demo ](http://kurtnoble.com/labs/rrssb/) works fine. Not sure what the difference is since it's minimized. Love the buttons. Answers: username_1: Can you post a stripped down example somewhere demonstrating what is going on? username_0: Sure - [example](http://downsconsultingservices.com/test/rr-tst.html) Looks like the attached on the phone.  Looking at it on a desktop is deceptive. There's no room on a small screen for all that text. ![too small for phone](https://cloud.githubusercontent.com/assets/3686121/13203206/5cb14fec-d878-11e5-84de-637c9993d18b.png) username_1: try adding this to your `head`: `<meta name="viewport" content="width=device-width, initial-scale=1">` username_0: I would have sworn I tried that last night. Added and it works perfectly. Status: Issue closed username_1: Whoo hoo! 🎉
communitybridge/easycla
492212013
Title: Support whitelisting of bot users Question: username_0: **Summary** Allow projects to whitelist bot / automation users for the CLA check. **Background** Many open source projects use bots for automation. [dependabot](https://dependabot.com/) for example can be used to keep dependencies up to date. It creates a pull request for each outdated dependency which contains a commit created by the bot to perform the update. It should be possible to configure the CLA check in a way that lets pull requests from (specific) bot users pass the check. **User Story** As a maintainer of a project I want to be able to allow bot users to contribute to the project without failing the CLA check for their pull requests. **Acceptance Criteria** 1. Possibility to whitelist bot users (or some other similar way to not get failed CLA checks for bot users). 1. Description in the docs on how to configure that as a project maintainer. **References** My fork of JanusGraph has dependabot activated which shows [how the PRs look](https://github.com/florianhockmann/janusgraph/pulls) like created from that bot. Answers: username_1: Hi @username_0, this is a very interesting use case. I can understand it's value. I believe that it is possible to whitelist bots if they have a GitHub Username or ID. We'll have to work with <NAME> to get approval. username_1: Hi @username_0, we can work together to whitelist bots that wish to. I just need to work with a company that wants to whitelist it, get their approval to affiliate the bot with their company. username_0: So a company that signed a CCLA basically needs to "vouch" for the bot? I hoped that we could find a way where bots are treated as individual contributors as they don't really belong to any company. But if that requires a bigger change and you don't want to / can't put in that effort right now, then we can of course also work around that by adding the bot to a company. I suggest that we just use the company I work for as I'm CLA manager there to keep things simple. So, should I just add the bot via _corporate.lfcla.com_? Do you have a good idea what I could use to identify Dependabot? You can find an example PR here: username_0/janusgraph-dotnet#18 username_1: Hi @username_0, my apologies for the workaround. I know it is an inconvenience. Thanks so much for your willingness to work this out. The GitHub Username for your Dependabot is **dependabot-preview[bot]**. If cloudfoundry decides to get the full version of Dependabot, then we'll want to add dependabot[bot] as well. If you're unable to add the bot because of the brackets, then please send me an email (<EMAIL>) with your explicit approval to add this bot to your company whitelist and I'll add the details on you behalf. username_1: Hello again @username_0. Thank you for your patience and assistance here. JanusGraph should now allow dependabot-preview[bot] & dependabot[bot] under the EasyCLA check. Would it be possible to get someone to validate? username_0: Thanks @username_1, I just activated Dependabot for our repository janusgraph-dotnet, but [the CLA check failed for the first PR](https://github.com/JanusGraph/janusgraph-dotnet/pull/23#issuecomment-539466872). Can you check what's going on there?
fetchai/agents-aea
650917786
Title: Remove contracts from skill context Question: username_0: **Is your feature request related to a problem? Please describe.** Contracts are still injected into skill context **Describe the solution you'd like** Remove them there; build from registry only<issue_closed> Status: Issue closed
neovide/neovide
1011844388
Title: Potential rendering issue with unicode characters -- Braille Pattern Dots Question: username_0: <!--- NOTE: PLEASE FILL OUT TEMPLATE RATHER THAN DELETING ---> **Describe the bug** I see a few open issues regarding font rendering and I'm here to add to the pot. I'm currently using this lua plugin [neovim-dashboard](https://github.com/glepnir/dashboard-nvim) and there is a discrepancy without how the text is rendered in both the dashboard output and the config file. I do not see the issue on other text art that are composed in ASCII and don't use unicode characters. **To Reproduce** Steps to reproduce the behavior: 1. Copy the text art image below 2. Paste it into your neovide 3. Do you get the same error? **Expected behavior** Render the text as expected **Screenshots** The dashboard: rendered in neovim under windows terminal ![WindowsTerminal_6049mNJw59](https://user-images.githubusercontent.com/36192863/135405876-0fae05fd-d2e6-4b9f-8ee9-efc192dba366.png) The lua list for the art: Also in neovim under windows terminal ![WindowsTerminal_b3V0ZLBgSG](https://user-images.githubusercontent.com/36192863/135406001-8f286508-e391-447b-b715-be223d242f1b.png) the lua list rendered in neovide: The same if ran with or without init file or plugins ![WindowsTerminal_jSSrXwKqRb](https://user-images.githubusercontent.com/36192863/135406113-fbf172f4-ffde-44cc-ba1b-2f123693e3df.png) And finally, the text for that image ```lua local ascii_theme_super_meatboy = { " ⣀⣀⣤⣤⣦⣶⢶⣶⣿⣿⣿⣿⣿⣿⣿⣷⣶⣶⡄ ", " ⣿⣿⣿⠿⣿⣿⣾⣿⣿⣿⣿⣿⣿⠟⠛⠛⢿⣿⡇ ", " ⣿⡟⠡⠂ ⢹⣿⣿⣿⣿⣿⣿⡇⠘⠁ ⣿⡇ ⢠⣄ ", " ⢸⣗⢴⣶⣷⣷⣿⣿⣿⣿⣿⣿⣷⣤⣤⣤⣴⣿⣗⣄⣼⣷⣶⡄ ", " ⢀⣾⣿⡅⠐⣶⣦⣶ ⢰⣶⣴⣦⣦⣶⠴ ⢠⣿⣿⣿⣿⣼⣿⡇ ", " ⢀⣾⣿⣿⣷⣬⡛⠷⣿⣿⣿⣿⣿⣿⣿⠿⠿⣠⣿⣿⣿⣿⣿⠿⠛⠃ ", " ⢸⣿⣿⣿⣿⣿⣿⣿⣶⣦⣭⣭⣥⣭⣵⣶⣿⣿⣿⣿⣟⠉ ", " ⠙⠇⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡟ ", " ⣿⣿⣿⣿⣿⣛⠛⠛⠛⠛⠛⢛⣿⣿⣿⣿⣿⡇ ", " ⠿⣿⣿⣿⠿⠿ ⠸⣿⣿⣿⣿⠿⠇ ", " ", " ", " И Ǝ 0 V I M ", } ``` Admittedly, it's a bit wack how it's rendered on github or in notepad++ But in neovim, windows terminal and vscode all the spacing is aligned. I'm using CaskaydiaCove Nerd Font and I assume it's monospace on those editors. But that doesn't explain how it doesn't render the close quotes in neovide. **Desktop (please complete the following information):** - OS: Windows 10 21H1 build 19043.1237 - Neovide Version 0.7.0 - Neovim Version 0.5.0 **Please run `neovide --log` and paste the contents of the `.log` file here:** [neovide_rCURRENT.log](https://github.com/neovide/neovide/files/7257642/neovide_rCURRENT.log) I took a minute to realize where it was, maybe state where it's located or echo the path in the log command Answers: username_1: Can you build on main? Closing as dup of #987 Status: Issue closed username_0: Oh wow, my bad. I searched for issues regarding font issues and overlooked that one! I attempted to compile main but ran into an issue so I am unable to verify if the issue still exists. 
``` error: failed to run custom build command for `skia-bindings v0.40.2` Caused by: process didn't exit successfully: `C:\Users\<USERNAME>\AppData\Local\Temp\neovide\target\release\build\skia-bindings-e1813e1054da2e9e\build-script-build` (exit code: 101) --- stdout cargo:rerun-if-env-changed=SKIA_DEBUG --- stderr thread 'main' panicked at 'unsupported target: Target { architecture: "x86_64", vendor: "pc", system: "windows", abi: Some("gnu") }', C:\Users\<USERNAME>\.cargo\registry\src\github.com-1ecc6299db9ec823\skia-bindings-0.40.2\build_support\binaries_config.rs:106:18 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace ``` username_1: @username_0 Grab an executable from here: https://github.com/neovide/neovide/actions/runs/1299683663 username_0: @username_1 Nice! thanks. I can confirm that the issue for #987 is no longer present on my end with this build!
stylelint/stylelint
158397849
Title: no-extra-semicolons in block-comments Question: username_0: ### Describe the issue. Is it a bug or a feature request (new rule, new option, etc.)? Bug ### Which rule, if any, is this issue related to? `no-extra-semicolons` ### What CSS is needed to reproduce this issue? ```css @import "../config.less"; /* ========================================================================== jQuery Smartbanner overwrite ========================================================================== */ #smartbanner { .box-sizing( content-box ); font-family: @regularFontStack; .FontLight; } ``` ### What stylelint configuration is needed to reproduce this issue? ```js module.exports = { "extends": "stylelint-config-standard", "rules": { "at-rule-name-case": null, "at-rule-empty-line-before": "always", "block-no-empty": true, "color-no-invalid-hex": true, "comment-empty-line-before": "always", "declaration-colon-space-after": "always", "declaration-block-trailing-semicolon": "always", "indentation": 4, "max-empty-lines": 2, "max-line-length": [120, { ignore: ["non-comments"] } ], "number-leading-zero": "always", "selector-pseudo-element-colon-notation":"double", "string-quotes": "double", "unit-whitelist": [ "em", "rem", "px", "%" ] } }; ``` ### Which version of stylelint are you using? ``` $ stylelint --version [Truncated] }); ``` ### Does your issue relate to non-standard syntax (e.g. SCSS, nesting, etc.)? LESS ### What did you expect to happen? "No warnings to be flagged." ### What actually happened (e.g. what warnings or errors you are getting)? ``` overwrite.less 3:73 ⚠ Unexpected extra semicolon (no-extra-semicolons) [stylelint] 3:75 ⚠ Unexpected extra semicolon (no-extra-semicolons) [stylelint] 7:33 ⚠ Unexpected extra semicolon (no-extra-semicolons) [stylelint] 8:27 ⚠ Unexpected extra semicolon (no-extra-semicolons) [stylelint] ``` Answers: username_1: @username_0 Failing tests or PR greatly appreciated. username_1: I think this is an upstream issue caused by https://github.com/webschik/postcss-less/issues/45 username_2: @username_0 Please consider helping upstream in [`postcss-less`](https://github.com/webschik/postcss-less/issues/45) if you'd like to see this issue resolved. username_3: @username_0 can you please try PostCSS-Less v`0.14.0`? Change it from `0.13.0` to `0.14.0` in your `package.json` file please Status: Issue closed
rapidsai/cudf
927338727
Title: [BUG] Bad interaction between cuDF, pandas MultiIndex and Timestamps Question: username_0: **Describe the bug** cuDF DataFrames indexed by a Timestamp range can be accessed using `.loc[]` without any problem. However, if the cuDF DataFrame is indexed with a MultiIndex with timestamps as the first key, `.loc[]` fails, when doing so causes no issue with pandas. **Steps/Code to reproduce bug** The [following gist](https://gist.github.com/username_0/689242cf5c79ce9185aa7fa3bb1f2e89) holds a self-contained example. The last line of the code fails with error: `TypeError: 'Timestamp' object is not iterable` **Expected behavior** I would expect the pandas and cuDF snippets to behave similarly. **Environment overview (please complete the following information)** - Environment location: Docker - Method of cuDF install: Docker - docker pull rapidsai/rapidsai:0.18-cuda10.1-runtime-ubuntu18.04-py3.7 - docker run -d -p 10000:8888 -p 10001:8787 -p 10002:8786 --privileged=true --gpus all --name test -t test **Environment details** <details><summary>Click here to see environment details</summary><pre> **git*** commit 2cda39b34197c60614186ec51106d8254e5f7b05 (grafted, HEAD, origin/branch-0.16) Author: <NAME> <<EMAIL>+<EMAIL>> Date: Wed Oct 21 10:31:49 2020 -0400 Update CHANGELOG.md **git submodules*** ***OS Information*** DISTRIB_ID=Ubuntu DISTRIB_RELEASE=18.04 DISTRIB_CODENAME=bionic DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS" NAME="Ubuntu" VERSION="18.04.5 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.5 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic Linux fe1b5c84b917 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux ***GPU Information*** Tue Jun 22 15:01:51 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 455.23.05 Driver Version: 455.23.05 CUDA Version: 11.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce GTX 1080 On | 00000000:05:00.0 Off | N/A | | 28% 43C P8 7W / 180W | 1504MiB / 8114MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ [Truncated] xorg-libice 1.0.10 h7f98852_0 conda-forge xorg-libsm 1.2.3 hd9c2040_1000 conda-forge xorg-libx11 1.7.0 h7f98852_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xorg-libxext 1.3.4 h7f98852_1 conda-forge xorg-libxrender 0.9.10 h7f98852_1003 conda-forge xorg-renderproto 0.11.1 h7f98852_1002 conda-forge xorg-xextproto 7.3.0 h7f98852_1002 conda-forge xorg-xproto 7.0.31 h7f98852_1007 conda-forge xz 5.2.5 h516909a_1 conda-forge yaml 0.2.5 h516909a_0 conda-forge yarl 1.6.3 py37h5e8e339_1 conda-forge zeromq 4.3.4 h9c3ff4c_0 conda-forge zict 2.0.0 py_0 conda-forge zipp 3.4.0 py_0 conda-forge zlib 1.2.11 h516909a_1010 conda-forge zstd 1.4.8 hdf46e1d_0 conda-forge </pre></details> Answers: username_1: Thanks for including a simple reproducer gist. 
I've included it below for ease of access. ```python import pandas as pd import cudf import numpy as np ​ start = pd.Timestamp(datetime.strptime('2021-03-12 00:00+0000', '%Y-%m-%d %H:%M%z')) end = pd.Timestamp(datetime.strptime('2021-03-12 03:00+0000', '%Y-%m-%d %H:%M%z')) timestamps = pd.date_range(start, end, freq='1H') labels = ['A', 'B', 'C'] index = pd.MultiIndex.from_product([timestamps, labels], names=["timestamp", "label"]) value = np.random.normal(size=12) df = pd.DataFrame(value, index=index, columns=['value']) df_gpu = cudf.from_pandas(df) ​ stamp = pd.Timestamp(datetime.strptime('2021-03-12 02:00+0000', '%Y-%m-%d %H:%M%z')) ​ print(df.loc[stamp]) # SUCCEEDS print(df_gpu.loc[stamp]) # FAILS value label A 1.184793 B -0.253166 C -0.790236 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/site-packages/cudf/core/indexing.py in __getitem__(self, arg) 234 try: --> 235 return self._getitem_tuple_arg(arg) 236 except (TypeError, KeyError, IndexError, ValueError): /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/contextlib.py in inner(*args, **kwds) 74 with self._recreate_cm(): ---> 75 return func(*args, **kwds) 76 return inner /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/site-packages/cudf/core/indexing.py in _getitem_tuple_arg(self, arg) 360 else: --> 361 return columns_df.index._get_row_major(columns_df, arg) 362 else: /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/site-packages/cudf/core/multiindex.py in _get_row_major(self, df, row_tuple) 926 row_tuple = slice(row_tuple.start, self[-1], row_tuple.step) --> 927 self._validate_indexer(row_tuple) 928 valid_indices = self._get_valid_indices_by_tuple( /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/site-packages/cudf/core/multiindex.py in _validate_indexer(self, indexer) 958 else: --> 959 for i in indexer: 960 self._validate_indexer(i) TypeError: 'Timestamp' object is not iterable During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) <ipython-input-87-fe779946243b> in <module> 15 16 print(df.loc[stamp]) # SUCCEEDS [Truncated] /raid/nicholasb/miniconda3/envs/rapids-21.08/lib/python3.8/site-packages/cudf/core/multiindex.py in _validate_indexer(self, indexer) 957 self._validate_indexer(indexer.stop) 958 else: --> 959 for i in indexer: 960 self._validate_indexer(i) 961 TypeError: 'Timestamp' object is not iterable ``` It looks like we go down a codepath that expects an iterable, which explains why wrapping with a tuple works (and may resolve your problem in the short term): ```python print(df_gpu.loc[(stamp,)]) # SUCCEEDS value label A 1.184793 B -0.253166 C -0.790236 ``` username_0: Hi @username_1, thanks for the answer! The "tuple trick" above seems to do the job for accessing a single value. However, I'm back into trouble if I want to fetch values for a timestamp range. 
Elaborating from my previous gist example, if I type: ``` start = pd.Timestamp(datetime.strptime('2021-03-12 01:00+0000', '%Y-%m-%d %H:%M%z')) end = pd.Timestamp(datetime.strptime('2021-03-12 02:00+0000', '%Y-%m-%d %H:%M%z')) print(df.loc[start:end]) ``` I get the expected result: ``` value timestamp label 2021-03-12 01:00:00+00:00 A -0.466112 B -0.781473 C -1.010174 2021-03-12 02:00:00+00:00 A 0.160179 B 1.007183 C -1.053772 ``` With cuDF, the following gets the usual `TypeError: 'Timestamp' object is not iterable`: ``` print(df_gpu.loc[start:end]) ``` Alternatively, trying: ``` print(df_gpu.loc[(start:end,)]) ``` gets a `SyntaxError: invalid syntax`. Using a _regular_ Timestamp range with: ``` start = pd.Timestamp(datetime.strptime('2021-03-12 01:00+0000', '%Y-%m-%d %H:%M%z')) end = pd.Timestamp(datetime.strptime('2021-03-12 02:00+0000', '%Y-%m-%d %H:%M%z')) timestamps = pd.date_range(start, end, freq='1H') print(df_gpu.loc[(timestamps,)]) ``` I get `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`. Any idea for circumventing this issue? username_0: Hi, I'm following up about the bug reported above, as reported in my last answer, using a tuple to access a Timestamp first level of a MultiIndex circumvents the issue pointed out initially, but the proposed solution fails if one wants to access a Timestamp range. I realize that the title is not accurately reflecting the actually remaining bug: should I create a new issue which singles out the Timestamp range bug, or rename this one? username_0: Hi @username_1, [Here is an updated minimal gist](https://gist.github.com/username_0/53312fe4bef649b7780f8f63bf09fbe1) which lists in details what works, does not work, and workarounds (as of version 21.08.02 installed on my side). In a nutshell (please refer to the gist for details): with a MultiIndex and timestamps as primary key, pandas allows to do this kind of operation: ``` df.loc[stamp] df.loc[timestamps] ``` with `stamp` and `timestamps` valid timestamp and timestamp range, respectively. I would like to do the same with cudf, but as of v21.08.02, it is impossible. username_0: The problems reported above and highlighted [in this minimal gist](https://gist.github.com/username_0/53312fe4bef649b7780f8f63bf09fbe1) still occur with v21.12, with exactly the same error messages.
DestinyItemManager/DIM
106496457
Title: The left emote equippables are able to be stored in the vault, but there is no section for them in DIM. Question: username_0: Maybe you guys are tracking this internally already, but this might be something to add just for completeness. I'm not sure what we'll be getting in the way of emotes besides TTK CE ones. http://i.imgur.com/xe9PLjT.png Answers: username_1: I should have bought TTK CE it seems :/ username_1: I'll be adding it soon, with the next update of DIM, v3.1.3. username_0: Cool. According to the destiny tracker database the TTK CE emotes are all that exist for now. http://db.destinytracker.com/items/emotes/emote Status: Issue closed
SeleniumHQ/selenium
71496526
Title: Send_keys Python 2.7 function does not handle long type variables Question: username_0: Code to reproduce: ```python from selenium import webdriver driver = webdriver.Firefox() driver.get('http://www.google.com') element = driver.find_element_by_id('lst-ib') longnumber = (65536*65536) print ('Type should be long: %s' % type(longnumber)) element.send_keys(longnumber) ```<issue_closed> Status: Issue closed
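A hedged workaround (not from the issue thread): send_keys ultimately types text, so converting the long to a string first sidesteps the integer-handling path altogether. The setup mirrors the reproduction above:

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://www.google.com')
element = driver.find_element_by_id('lst-ib')

longnumber = 65536 * 65536
# Convert to a string first; send_keys then types the digits as plain text.
element.send_keys(str(longnumber))
```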
lifo101/php-daemon
312641949
Title: Documentation Question: username_0: Do you have any links to good reads for understanding the logic behind your daemon? Maybe I don't understand the concept of a daemon, but I just don't get how to use things here. For example, you can define the max number of workers to fork via 'setMaxProcesses': ``` $this->addWorker('myWorker', 'example') ->setAutoRestart(true) # enable auto-restarts of workers, to show them exit and be re-created on-the-fly ->setMaxCalls(10) # how many calls a worker will process before exiting ->setMaxRuntime(10) # how long a worker will run before exiting ->setMaxProcesses(3); # how many workers to fork ``` But i'm not understanding what that actually does? Say for example, I have a bucket of 12 apples. I have a worker which eats apples, one at a time. I would assume, by defining that 3 workers should be created on each loop, then 3 apples would be eaten every loop. You also note in your documentation that you can call a task from a worker, but I have found no examples and cannot figure out how to make that happen. I could use some help in understanding all this and I'd very much appreciate any info or links you can provide. Thank you. Answers: username_1: Did you read any of the wiki? I go over [Workers ](https://github.com/username_1/php-daemon/wiki/Workers) and [Tasks](https://github.com/username_1/php-daemon/wiki/Tasks), etc. A worker is basically a wrapper around 1 or more sub-processes that are automatically forked for you in the background. Every time you call a method on your worker it'll fork that method call into a background process and eventually return the result to your main daemon process. So, if your worker is eating apples and you have a maximum of 3 processes allowed, that means you can have, at most, 3 apples being eaten at any given moment, it doesn't guarantee that you'll eat 3 apples each loop cycle. That depends on the OS, and how long it takes each background process to eat an apple. Those functions `setMaxProcesses`, `setMaxRuntime` and `setMaxCalls` are just ways to limit how long your worker sub-processes will hang around before exiting (and having its sub-process cleaned up). Having a worker sit idle takes up memory, but having to spawn a new worker over and over takes a lot of CPU. So, it's a balancing act based on your personal daemon's requirements. You don't really want to call a "task" from within a worker, since a task spawns another process it'll most likely cause problems with ProcessManager that is maintaining all of the sub-processes for your daemon. All spawning should be done within the main process of your daemon. But, honestly, I'm not sure what will happen if you do. Opening a task is the same as a worker, the only difference is a task does not return anything to the main process. The `examples.php' file is just there to help users run the included example scripts. You need to make your own 'entry' script that starts your personal daemon as you see fit. And finally, just to be clear, if you want to run a daemon and have it disconnect from your terminal, you need to set the `setDaemonize(true)` on your daemon. It'll run in the background until it dies or you 'kill -INT {pid}' on the process ID. username_0: I did, though admittedly I am having difficulty wrapping my head around it which is why I reached out. I have almost no experience with multiprocessing or daemons as a whole, so the concepts and terminology is new to me. 
I have a goal, I know I want to utilize your work and I really appreciate you taking the time to help me out, this definitely clarifies some things for me. My goal is to use your library to build a daemon that will monitor an Azure message queue and spawn various workers to handle those messages -- namely resulting in the management of a database. I opted for your library, over the standard cron job approach, as I wanted a means of actively monitoring the queue and reacting as things arrive in the queue. Using the examples you provided in your library, I managed to get a running daemon but it only processes 1 job every loop. I am now trying to figure out how to get things happening concurrently so as to process say 5 jobs on every loop. Everything you noted above has certainly helped to clarify that and i'll be plugging away more when I return home from work. Cheers :) username_1: By default, the daemon runs the loop at 1 second intervals. You could speed up the interval, or poll for more events within a single loop iteration. If you spend too much time polling in each iteration the rest of your daemon will suffer. So, try not spend more than 1 second at a time in the `execute` loop. for example, your `execute` loop could do something like this: ```php // collect messages for up to 0.8 seconds before exiting $start = microtime(true); while (microtime(true) - $start < 0.8 && null !== $m = $this->getNextMessage()) { // keep track what messages are being worked on, // so we know what to do when a worker returns a result $this->messages[$m->id] = $m; $this->worker('worker')->processMessage($m)->then(function($m) { unset($this->messages[$m->id]); $this->log('Message %d processed', $m->id); }); } ``` note, in this example I track what messages have been sent to workers so that if the deamon shuts down prematurely I can act on those messages in some way (in the Daemon::onShutdown method; which I don't show here). You could also just speed up the loop interval with `Daemon::setLoopInterval()` and have it run every 0.5 seconds, or something and only poll for 1 message at a time. Setting it too low will also make your daemon spike CPU too high. It's a balancing act, based on your application requirements. Either way, even if you're only polling a single message in each loop, remember that the actual work is being done in another process, in parallel to your main process. As soon as you call that `$this->worker('...')` method it returns instantly. One thing to be aware of. The daemon is not really meant for high-speed processing. The 'calling' mechanism in the back-end that communicates with sub-processes can sometimes break if you make too many calls too quickly. A few messages per second is fine, but hundreds per second will start to become unstable, most likely. I've never tested how much I could do within workers. Status: Issue closed username_1: Glad you got use out of the library. Thanks.
terraform-aws-modules/terraform-aws-eks
1185556290
Title: Ability to customize userdata template args Question: username_0: ## Is your request related to a problem? Please describe. After migrating to v18 it seems that the old `userdata_template_extra_args` variable doesn't have a replacement, so I can't provide extra variables to the userdata custom template ## Describe the solution you'd like. Be able to merge custom variables in the template, like I was doing in v17, e.g.: ``` userdata_template_extra_args = { enable_admin_container = false enable_control_container = true } ``` and in my template ``` # The admin host container provides SSH access and runs with "superpowers". # It is disabled by default, but can be disabled explicitly. [settings.host-containers.admin] enabled = ${enable_admin_container} # The control host container provides out-of-band access via SSM. # It is enabled by default, and can be disabled if you do not expect to use SSM. # This could leave you with no way to access the API and change settings on an existing node! [settings.host-containers.control] enabled = ${enable_control_container} ``` ## Describe alternatives you've considered. Hardcoding the variables in the template ## Additional context I'm not an expert in terraform, but I haven't found a replacement of such feature.
kadena-io/chainweb-node
537907092
Title: RPC API instruction
Question:
username_0: We would like to make a mining pool for Kadena. Is there any documentation about the RPC interface, such as getblocktemplate and submitblock, etc.? I went through the FAQ and Wiki, but couldn't find any. Thanks.
Answers:
username_1: The mining API is defined here: https://github.com/kadena-io/chainweb-node/blob/master/src/Chainweb/Miner/RestAPI.hs#L47

There is also swagger documentation for it if you hit `/swagger.json`. We have a standalone miner here https://github.com/kadena-io/chainweb-miner and the README has more in-depth documentation on mining with some info about the node's mining API.
iqlusioninc/tmkms
576009059
Title: failure_derive-0.1.6/src/lib.rs:107:70: could not find `__rt` in `quote` Question: username_0: Upgrading from v0.7.1 to v0.7.2, bumped into ![image](https://user-images.githubusercontent.com/87547/75951760-addcf880-5e61-11ea-92d5-7d22433debb4.png) tried uninstalling then reinstall. ``` $ cargo -V cargo 1.39.0 (1c6ec66d5 2019-09-30) $ rustc -V rustc 1.39.0 (4560ea788 2019-11-04) ``` Answers: username_1: If you clone the git repo and build from that, it will use the already checked-in and known working `Cargo.lock` file: https://github.com/iqlusioninc/tmkms#compiling-from-source-code-via-git This is the recommended way to ensure a build which has undergone dependency review and should always work, regardless of what other releases there have been in the meantime. Status: Issue closed
OpenAngelArena/oaa
217013268
Title: Locking Boss Cores
Question:
username_0: If for any reason a boss core becomes locked from combining, you can't unlock it. Happened mostly when disassembling another item that included an upgrade core.

@username_2 If you want a video, just ping me, but it seemed pretty self-explanatory.
Answers:
username_1: If cores had the sellable attribute, you could unlock them. Maybe that could solve the problem.
username_2: @username_0 I found this issue myself last night. Thanks for writing it up so I don't have to.
username_0: Them being sellable would be a design/balance team decision.
Status: Issue closed
lucifering/PathOfBuilding
415028473
Title: List of unhandled issues
Question:
username_0: · Vaal skills can be supported by helmet enchantments, but unlike their non-corrupted versions, they do not show up directly for the helmet enchantment after being added to the skill bar; you have to search through the long list of all skills yourself.

· Barrage + GMP: the projectile count is calculated incorrectly: https://github.com/Openarl/PathOfBuilding/issues/1427

· [Bonechill] only supports increasing the cold damage over time taken by the target: https://github.com/Openarl/PathOfBuilding/issues/1426
rails-girls-summer-of-code/rgsoc-teams
120660827
Title: Refactoring: Make Comment polymorphic
Question:
username_0: The Comment model has grown a set of commentable records' foreign keys. We should refactor it into a `belongs_to :commentable, polymorphic: true` instead of listing all types of records individually.

There's no deadline for this and it can be done whenever there's time. @marcgreenstock and @lucaspinto, is this something you could see yourself working on?
Answers:
username_1: Steps to make the `Comment` model polymorphic:

- Create a migration to make the appropriate changes in the `comments` table
- Perform modifications within the `Comment` model
- Perform modifications within the `Team`, `User`, `Application` and `Project` models
- Test the changes and open a PR

@username_0 : It would be great if you could verify the above steps and suggest changes if you find any, so that I can start working on it. :blush:
username_0: Hello @username_1, sorry for the slow response. It's great that you want to work on this, thank you! The steps you describe are :ok_hand: :smile:
username_1: @username_0 : No worries. :blush: I'll start working on it and open a PR for it.
username_1: @username_0 : It would be great if you could suggest which main forms/screens I need to verify/check while testing!
username_0: @username_1 I have checked the codebase only briefly, so this list might not be complete:

* app/views/projects/show.html.slim
* app/views/rating/applications/show.html.slim
* app/views/rating/todos/applications/show.html.slim
* app/views/supervisor/dashboard/index.html.slim
username_0: I've started working on this now (because I desperately want to [comment on status updates](https://github.com/rails-girls-summer-of-code/rgsoc-teams/issues/263)), but I'm struggling with the best way to migrate existing data – mainly because we recently had the issue that older migrations didn't apply anymore. I'm hesitant about adding data transformation loops referencing actual Rails models (which may no longer exist in one or two years).

@alicetragedy @username_3 @username_2 @ramonh do you have any advice / ideas / good experiences to share?
username_2: What I usually do in that case is just define the minimal required model class inside the migration. That way you can still use ActiveRecord in the migration without running the risk of the class not being there at some point down the line. It also does not add a lot of complexity, since usually it's just an empty subclass of ActiveRecord::Base with maybe one belongs_to statement (whatever you need to do the migration).

Does that make sense in our case?
username_3: Yes, @username_2's approach is a classic here *(see for instance [this blog post](http://railsguides.net/change-data-in-migrations-like-a-boss/) on how it can look)*. A shorter alternative is to just wrap the data transformation part in a `begin`-`rescue` block - that does the job as well.

A wholly different solution: use temporary rake tasks *(see for instance [this blog post](https://robots.thoughtbot.com/data-migrations-in-rails))*. I personally don't really like this approach, because it leaves two questions open: 1) who removes the task again, and when? 2) is the task run in every context where I intended it to?
username_0: @username_3 @username_2 ¡muchas gracias! Those are very valuable suggestions, and the blog post looks very intriguing! I've always gone for the temp. rake task approach in my projects (a `:single_run` namespace).
It's brittle when someone didn't get the memo to run task X after migration Y, but the impact isn't big since everyone is working with anonymized dev dumps anyway; no one really migrates from 4.years.ago to the present.
Status: Issue closed
react-hook-form/react-hook-form
822639680
Title: Feature request - add ability to modify default values Question: username_0: **Is your feature request related to a problem? Please describe.** Yes. If we use a method like `append`, existing default values that have not been manually registered or mounted will be dropped from form values. This means we must manually register all our field inputs ahead of time and continually reregister new fields if they are added (ex `append`). This is a pain and is also tricky for nested field array where we need nested loops to register everything. **Describe the solution you'd like** For a start, I would like to see methods like append, remove, etc. simply be able to modify default values. I think it created a clearer mental picture of what is happening and means that we don't need to constantly register our inputs and worry about them being removed. The main issue here is it causes issues with watch and useWatch. Therefore, we will need some thought on how to execute this. Additionally, I think it would be nice to add an option to setValue to allow the user to modify default values. This would provide more control and be helpful in certain situations. **Describe alternatives you've considered** Constantly registering my inputs. Status: Issue closed Answers: username_1: I think this issue FR should be resolved by register absent fields from default values right? username_0: Yes. No longer needed.
kieker-monitoring/instrumentation-languages
117798722
Title: Record with property type byte[] generates Java file with syntax error
Question:
username_0: Solution: Use Arrays.equals(a0,a1):

`if (!Arrays.equals(this.getBuffer(), castedRecord.getBuffer())) return false;`
Answers:
username_1: After some debugging, I can reproduce your error. It appears the code generator for the equals method is broken. Remember it is possible to have multi-dimensional arrays.

As you assigned the bug to yourself, I will not fiddle around with this ;-)
kubernetes/kubernetes
519920602
Title: all pod of one node: pod running but Conditions:Ready :False,so endpoints is none Question: username_0: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! If the matter is security related, please disclose it privately via https://kubernetes.io/security/ --> **What happened**: reboot one node **What you expected to happen**: all pods are ready **How to reproduce it (as minimally and precisely as possible)**: reboot vm **Anything else we need to know?**: k8s cluster: two master three node appearance: kubectl get pod -o wide|grep oasis-ui-admin oasis-ui-admin-7cd99ff45d-dt98g 1/1 Running 0 28h 172.16.31.10 192.168.56.54 <none> <none> [root@centos54 ~]# kubectl get ep|grep oasis-ui-admin oasis-ui-admin 28h [root@centos54 ~]# kubectl describe pod oasis-ui-admin-7cd99ff45d-dt98g Name: oasis-ui-admin-7cd99ff45d-dt98g Namespace: default Priority: 0 Node: 192.168.56.54/192.168.56.54 Start Time: Thu, 07 Nov 2019 13:07:54 +0800 Labels: feature=oasis_base name=oasis-ui-admin pod-template-hash=7cd99ff45d Annotations: <none> Status: Running IP: 172.16.31.10 Controlled By: ReplicaSet/oasis-ui-admin-7cd99ff45d Containers: oasis-ui-admin: Container ID: docker://320b3dffa5a4d0e5c69afe809bcf02a54da633bb69bfeb8e2d391e57ca088cc0 Image: h3crd-wlan1.chinacloudapp.cn:5000/buildonly/oasis-ui-admin:R10.0.0.10.0.0_20191021103414 Image ID: docker-pullable://h3crd-wlan1.chinacloudapp.cn:5000/buildonly/oasis-ui-admin@sha256:d2b919c04b2ffa98e3e426f4aac6e55f69f49844821f05264713c43f7823fe01 Port: 80/TCP Host Port: 0/TCP State: Running Started: Thu, 07 Nov 2019 15:44:38 +0800 Ready: True Restart Count: 0 Limits: cpu: 1200m memory: 1230Mi Requests: cpu: 30m memory: 200Mi Environment: O2O_PROFILE: release WEB_DOMAIN: 192.168.56.35:10443 DB_URL: jdbc:mysql://mariadb-ss:3306/o2o?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true DB_PORTAL_URL: jdbc:mysql://mariadb-ss:3306/portal?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true DB_WEIXIN_URL: jdbc:mysql://mariadb-ss:3306/weixin?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true DB_WEIXIN_WIFI_URL: jdbc:mysql://mariadb-ss:3306/weixinwifi?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true DB_USERNAME: maxscale DB_PASSWORD: <PASSWORD> [Truncated] Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: <none> **Environment**: - Kubernetes version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} - Cloud provider or hardware configuration: - OS (e.g: `cat /etc/os-release`): CentOS Linux release 7.4.1708 (Core) - Kernel (e.g. 
`uname -a`): Linux centos52 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux - Install tools: Binary - Network plugin and version (if this is a network-related bug): - Others: docker version 19.03.1 Answers: username_1: It seems same as https://github.com/kubernetes/kubernetes/issues/84931 username_0: Is this a bug? username_0: Is this a bug? username_2: closing as duplicate of https://github.com/kubernetes/kubernetes/issues/84931 please continue the discussion there. /close
Vestride/Shuffle
150196303
Title: Mixing layout and component styles Question: username_0: Nice plugin! However, I see the example is mixing layout styles and item styles. It's not a good practice to put the element class `.picture-element` directly on the column class `.col-sm-4` because it's then hard to maintain the component and use it in other places. As it should be be only a tag with a class spanning 100% width. I was having trouble with the plugin when writing my code this way. Is it possible to structure the code however you want and just replace the elements that way? Thanks! Answers: username_1: I disagree with you, but why not use your component at 100% width as the only child of a `shuffle-item`? Something like: ```html <div id="grid"> <div class="col-sm-4 item"> <div class="my-component-that-is-100-percent-width"></div> </div> <div class="col-sm-4 item"> <div class="my-component-that-is-100-percent-width"></div> </div> <div class="col-sm-4 item"> <div class="my-component-that-is-100-percent-width"></div> </div> </div> ``` username_0: Typically cols are related to a grid system that's being leveraged across the entire site (aka global). When putting cols on the item itself, you're mixing the item-specific styles which should not contain floats and col widths. The item should be decoupled with it's own styling. Technically the col is just a spanned width of the page that should then contain elements. username_1: I understood your opinion the first time and I still disagree with it. If I were to separate layout styles from visual ones, each item in the grid would need an inner item nested within it. People already struggle to implement shuffle and I don't want to make it any harder than it needs to be. You are completely free to style elements inside the "item" any way you see fit. Status: Issue closed username_0: Honestly, I would do without the grid completely for the example, but no worries either way.
cgeo/cgeo
347735297
Title: Undo support for delete waypoint Question: username_0: Recently I again managed to delete a waypoint (one with a prepared calculator) by accident. Therefore I tried to implement an Undo capability for the Delete action. This works, but since the Undo action triggers a re-insert of the waypoint (and waypoints are only ordered by their id), it will be moved to the end of the waypoint list, which is not an ideal solution. To make it better, waypoints would need support for free ordering. The question is whether there is interest in a semi-optimal Undo implementation where an unwanted waypoint deletion is restored but positioned at the end of the list. Then I would prepare a PR. Answers: username_1: Undo should do what it says: undo an action and restore the state as if nothing had happened. username_0: Thanks for the clarification. Without the possibility to freely order the waypoint list (which would also be a nice feature, BTW) this can't be done. Reordering would involve database changes, and with all its implications (like compatibility) this is much more complex than converting a few code lines into a command to realize an Undo. username_2: Sorting of waypoints looks like #2677 username_0: Yes, somehow. But #2677 talks about sorting based on existing waypoint attributes (in detail: waypoint name). To manually influence the order in which waypoints are displayed, a "position" attribute is necessary as a prerequisite. username_2: OK, but the discussion fits better there instead of here (discussing undo). username_3: Back to the original issue: We already have a confirmation dialog before deleting a waypoint. If someone prefers an Undo operation, it should replace the confirmation dialog. IMHO I would prefer the confirmation dialog instead of an Undo, because I seldom use the Undo and mostly recognize an accidental deletion only after the Undo bar has already vanished. username_0: My c:geo from last night deletes waypoints without confirmation. username_3: Oh, you are right. I mixed it up with the discarding of waypoint changes. The temperatures :( username_0: The point is that a lot of actions in cgeo got Undo support for actions that modify data instead of confirmation dialogs - also within the waypoint context menu. Shouldn't there be a general strategy for how we deal with this? I like the Undo toasts, but I agree it would be nice if there were more time to click on them. Based on the feedback in #2677 it is clear that the necessary infrastructure for Undo is not available and there is also no interest in having it implemented. Therefore the options "confirmation dialog" or "reject this issue" remain. username_3: I vote for the confirmation dialog. I rarely delete any waypoint, and I assume this might apply to many users, so it won't bother them too much.
square/workflow-kotlin
845293226
Title: DecorativeViewFactory signature is weird Question: username_0: You're forced to provide the `map` function, but if you provide your own `doShowRendering` it's not used.
Status: Issue closed
Answers: username_0: Huh. Actually I don't think I can simplify this. The `map` function is always used: first when we build the view, and then again inside the `doShowRendering` function.
username_0: Well, at least we can better align the initializeView method with the new one on `ViewRegistry.buildView`.
Status: Issue closed
pytorch/pytorch
736716884
Title: unbalanced gpu memory when using DistributedDataParallel Question: username_0: I was using DistributedDataParallel to train a model on a single machine with 8 GPUs. I thought that by using DistributedDataParallel, the memory usage on each GPU should be approximately the same; however, there is one GPU with significantly more memory usage. Does anyone know what might cause this? Thanks!
![image](https://user-images.githubusercontent.com/45524636/98215459-c4a09580-1f82-11eb-8916-eb99f8d9166f.png)
Answers: username_1: @username_0 Can you provide a self-contained repro that we can use locally on our GPU machines?
username_0: Thank you very much for the reply. You can find the code here https://github.com/username_0/unbalanced-demo. Thanks again.
username_0: Here is another weird situation I encountered today with DistributedDataParallel. I was using batch size 128, then I stopped the training to change the print frequency (so I can see the loss more often; I changed nothing else). After that, I can't use batch size 128 anymore, as it always reports CUDA out of memory, so I have to decrease the batch size. While I was using batch size 128, the GPU memory looked like this, as expected:
![image](https://user-images.githubusercontent.com/45524636/99956472-0c1b8400-2dc1-11eb-9ff1-c5f21b90b8a2.png)
However, after I changed to 64, the unbalanced memory issue happened again:
![image](https://user-images.githubusercontent.com/45524636/99956563-30776080-2dc1-11eb-8061-163a8b707837.png)
Besides the memory problem, do you have any insight into why batch size 128 no longer works? Is there any internal logic I should take care of? Thanks
username_2: It's very likely that torch.load caused the imbalanced usage. When I train a model from scratch, DDP never has the imbalanced memory usage issue. But it always happens when I continue training from a checkpoint.
username_3: I met the same problem, and as @username_2 said, when I train a model from a checkpoint, DDP has imbalanced memory usage.
- From scratch
![image](https://user-images.githubusercontent.com/28639377/140856246-83489760-0fdd-440f-a88b-27a4591dedf5.png)
- From a checkpoint
![image](https://user-images.githubusercontent.com/28639377/140856310-1c8aa27b-4f60-4ac5-828a-06a122d235fd.png)
username_3: When I set the following code, the problem was solved.
```python
state = torch.load('xxx.pth', map_location=torch.device('cpu'))
```
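The last comment points at the usual culprit: without `map_location`, a checkpoint saved from rank 0's GPU is deserialized onto `cuda:0` by every process, which skews that card's memory. A minimal sketch of rank-aware checkpoint loading for DDP follows; the path and the assumption that the file holds a plain `state_dict` are placeholders:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def load_for_ddp(model: torch.nn.Module, ckpt_path: str, local_rank: int) -> DDP:
    # Deserialize onto the CPU first so no rank materializes the tensors on cuda:0.
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state)
    # Then move the model to this rank's own device and wrap it.
    model = model.to(f"cuda:{local_rank}")
    return DDP(model, device_ids=[local_rank])
```

Loading straight to `map_location=f"cuda:{local_rank}"` works as well; the point is simply not to let every rank default to GPU 0.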
elixir-lang/elixir
209474976
Title: `Task.Supervisor.start_child` failed silently when `args` is not a list Question: username_0: ### Environment

* Elixir & Erlang versions (elixir --version):
Erlang/OTP 19 [erts-8.2] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
Elixir 1.4.2
* Operating system: macOS Sierra 10.12.4 Beta (16E154a)

### Current behavior

```elixir
{:ok, pid} = Task.Supervisor.start_child(Task.Supervisor, MyModule, :my_method, {:action, "text"})
```

The last parameter of `Task.Supervisor.start_child` should be a `list`, but a tuple is given here. The code should fail, but it runs: it returns `:ok` with a pid, but the method is never called.

### Expected behavior

Throw an error at runtime complaining that the argument should be a `list`.

Answers: username_0: I think it is possible to fix the issue by changing the `Task.Supervisor.start_child` signature from

```elixir
def start_child(supervisor, module, fun, args) do
```

to

```elixir
def start_child(supervisor, module, fun, args) when is_atom(module) and is_atom(fun) and is_list(args) do
```

But I am not sure whether it is a good solution. If it is acceptable, I can provide a PR to fix the issue and add related tests.
username_1: Yes, please do provide a PR. :)
username_0: @username_1 The PR is ready, tests added and passed on CI. Please have a look. Cheers.
Status: Issue closed
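For reference, this is roughly what the guarded signature means for callers; the module and function names are the ones from the report, and the error text is approximate:

```elixir
# With the is_list/1 guard in place, passing a tuple no longer "succeeds" silently;
# it raises something like:
#   ** (FunctionClauseError) no function clause matching in Task.Supervisor.start_child/4
Task.Supervisor.start_child(Task.Supervisor, MyModule, :my_method, {:action, "text"})

# The intended call passes the arguments as a list:
{:ok, pid} = Task.Supervisor.start_child(Task.Supervisor, MyModule, :my_method, [:action, "text"])
```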
irisnet/irishub
582738087
Title: Do not store historical service data Question: username_0: **do not store the historical data, we should do the following:** * refactor the context id and the request context id * remove completed context, related requests and response * use tags to store batch requests * add more tags for subscribing context/batch state changes<issue_closed> Status: Issue closed
BrowserSync/browser-sync
265613303
Title: BrowserSync With SASS and Pug Not Working Question: username_0: ### Issue details

Very new to BrowserSync. I'm trying to find out how to get it going haha. My main file that stores everything is called 'gulpwork'. Inside it I have 4 folders; two to convert Pug ('src') to HTML ('dist') and two to convert SASS ('sass') to CSS ('css'). I've managed to get BrowserSync to run, however I'm getting the 'Cannot GET /' message, so I know it probably has something to do with the file directory. I would like to have both Pug and SASS synced.

EDIT: It only works if I have both my Pug and HTML file outside their respective folders, directly inside my root, and it only works if the HTML file is named index.html. How can I get it to work with the files in their respective folders and without having to change the name to index?

### Please specify which version of Browsersync, node and npm you're running

- Browsersync [ 2.18.13 ]
- Node [ 6.11.4 ]
- Npm [ ]

### Affected platforms

- [ ] linux
- [x] windows
- [ ] OS X
- [ ] freebsd
- [ ] solaris
- [ ] other _(please specify which)_

### Browsersync use-case

- [ ] API
- [x] Gulp
- [ ] Grunt
- [ ] CLI

### for all other use-cases, (gulp, grunt etc), please show us exactly how you're using Browsersync

```javascript
var gulp = require('gulp');
var pug = require('gulp-pug');
var sass = require('gulp-sass');
var browserSync = require('browser-sync').create();

gulp.task('browserSync', ['sass', 'pug'], function() {
  browserSync.init({
    server: {
      baseDir: './'
[Truncated]
    }))
    .pipe(gulp.dest('./dist'))
});

gulp.task('sass', function() {
  return gulp.src('./sass/*.sass')
    .pipe(sass())
    .pipe(gulp.dest('./css'))
    .pipe(browserSync.reload({stream: true}))
});

gulp.task('watch', ['browserSync'], function() {
  gulp.watch('./src/*.pug', ['pug']);
  gulp.watch('./sass/*.sass', ['sass']);
  gulp.watch('./**/*.html').on('change', browserSync.reload);
});

gulp.task('default', ['sass', 'pug', 'watch']);
```
Answers: username_1: I can't get it working with Pug either. I have an Express server and am using the command line: `browser-sync start --proxy localhost:3000 --no-open --files "**/*.pug"`
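Since the compiled HTML ends up in `./dist` rather than the project root, one way to avoid the 'Cannot GET /' page is to point the Browsersync server at the output folders instead of `./`. A rough sketch, keeping the reporter's folder layout; `home.html` is just a placeholder for whichever compiled page should open by default:

```javascript
gulp.task('browserSync', ['sass', 'pug'], function () {
  browserSync.init({
    server: {
      // Serve the compiled Pug output plus the project root, so both
      // /whatever.html (from ./dist) and /css/style.css resolve.
      baseDir: ['./dist', './'],
      // Serve a named page instead of requiring an index.html at the root.
      index: 'home.html'
    }
  });
});
```

An alternative is keeping `baseDir: './'` and simply opening `/dist/page.html` directly; the array form just saves the extra path segment.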
microsoft/PowerToys
720799375
Title: ThinkPad Mute Button Causes Video Conference Mute Toolbar to Persist Question: username_0: ## ℹ Computer information - PowerToys version: 24.0 - PowerToy Utility: Video Conference Mute - Running PowerToys as Admin: Yes - Windows build number: 18363.1110 ### ✔️ Expected result I want to have the built in laptop mute hotkey toggled without the Video Conference Mute Toolbar being shown. ### ❌ Actual result The laptop mute turns on and the PowerToys Video Conference Mute Toolbar is persistently shown. ## 📷 Screenshots ![Xih4LMNFKh](https://user-images.githubusercontent.com/10644674/95912767-4250f700-0d71-11eb-9210-aff6447e1704.jpg) ![YWh92HZN55](https://user-images.githubusercontent.com/10644674/95913085-ba1f2180-0d71-11eb-924e-7227afd2ee29.jpg) For whatever reason the PowerToys Video Conference Mute Toolbar is also ridiculously small on both of the monitors that I currently have in use. I'd assume that this issue has already been reported though I might as well bring it up. Answers: username_0: For whatever reason this issue doesn't seem to be happening to me anymore. Regardless though, it was happening. I did update my drivers so it could have just been that though I don't know for sure.. username_0: I have an external microphone connected and it seems as though it is in a permanently off state according to the toolbar even if it's on though when I mute it and disable the laptop's mute, the toolbar disappears though the audio doesn't go through. username_1: Note to dev: the option show be on by default. username_0: I would like to bring attention to this issue once again as I am running into this issue again after Video Conference Mute was merged into the full release version of PowerToys and it is still happening.
mojocn/base64Captcha
437038378
Title: go get load model error Question: username_0: ➜ go get -u github.com/username_1/base64Captcha go: finding github.com/username_1/base64Captcha latest go: finding golang.org/x/image v0.0.0-00010101000000-000000000000 go: finding github.com/golang/freetype latest go: golang.org/x/[email protected]: unknown revision 000000000000 go get: error loading module requirements Answers: username_1: Thans for your awesome report. This issue has been fixed by [fix go mod](https://github.com/username_1/base64Captcha/commit/62bc889b6166f090863d16a407fa0354cae6fbb1) My home Raspberry Pi 3b Debian Server and Travis have tested the commit. ```bash pi@homePi:~ $ git clone https://github.com/username_1/base64Captcha 正克隆到 'base64Captcha'... remote: Enumerating objects: 18, done. remote: Counting objects: 100% (18/18), done. remote: Compressing objects: 100% (15/15), done. remote: Total 477 (delta 5), reused 10 (delta 2), pack-reused 459 接收对象中: 100% (477/477), 1.44 MiB | 446.00 KiB/s, 完成. 处理 delta 中: 100% (268/268), 完成. pi@homePi:~ $ cd base64Captcha/ pi@homePi:~/base64Captcha $ go get go: finding github.com/golang/image v0.0.0-20190424155947-59b11bec70c7 go: finding github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 go: finding github.com/golang/text v0.3.1 go: finding github.com/golang/tools v0.0.0-20190425001055-9e44c1c40307 go: finding github.com/golang/net v0.0.0-20190424112056-4829fb13d2c6 go: finding github.com/golang/sync v0.0.0-20190423024810-112230192c58 go: finding github.com/golang/crypto v0.0.0-20190424203555-c05e17bb3b2d go: finding github.com/golang/sys v0.0.0-20190425045458-9f0b1ff7b46a go: downloading github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 go: downloading github.com/golang/image v0.0.0-20190424155947-59b11bec70c7 go: extracting github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0 go: extracting github.com/golang/image v0.0.0-20190424155947-59b11bec70c7 pi@homePi:~/base64Captcha $ cd _examples pi@homePi:~/base64Captcha/_examples $ go run main.go Server is at localhost:7777 ``` Status: Issue closed
healthfinch/emacs-common-denominator
105669883
Title: (require 'package) fails Question: username_0: I get this error on Emacs 24.5.1 ```emacs-lisp Debugger entered--Lisp error: (void-function ido-vertical-mode) (ido-vertical-mode t) (progn (if (not (require (quote ido-vertical-mode) nil (quote noerror))) (ignore (message (format "Could not load %s" (quote ido-vertical-mode))))) (ido-mode t) (ido-everywhere t$ (condition-case err (progn (if (not (require (quote ido-vertical-mode) nil (quote noerror))) (ignore (message (format "Could not load %s" (quote ido-vertical-mode))))) (ido-mode $ (if (not (require (quote ido-ubiquitous) nil (quote noerror))) (ignore (message (format "Could not load %s" (quote ido-ubiquitous)))) (condition-case err (progn (if (not (require$ (progn (condition-case err (require (quote ido)) ((debug error) (ignore (display-warning (quote use-package) (format "%s %s: %s" "ido-ubiquitous" ":init" (error-message-string er$ eval-buffer(#<buffer *load*-294174> nil "/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-buffer-etc-switching.el" nil t) ; Reading at buffer position 2184 load-with-code-conversion("/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-buffer-etc-switching.el" "/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-buffe$ require(healthfinch-buffer-etc-switching) eval-buffer(#<buffer *load*-78775> nil "/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-init.el" nil t) ; Reading at buffer position 470 load-with-code-conversion("/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-init.el" "/Users/cjw/healthfinch/emacs-common-denominator/healthfinch-init.el" nil t) require(healthfinch-init) eval-buffer(#<buffer *load*> nil "/Users/cjw/.emacs" nil t) ; Reading at buffer position 86 load-with-code-conversion("/Users/cjw/.emacs" "/Users/cjw/.emacs" t t) load("~/.emacs" t t) #[0 "^H\205\262^@ \306=\203^Q^@\307^H\310Q\202;^@ \311=\204^^^@\307^H\312Q\202;^@\313\307\314\315#\203*^@\316\202;^@\313\307\314\317#\203:^@\320\nB^R\321\202;^@\316\322^S\323$ command-line() normal-top-level() ``` my `.emacs` ```emacs-lisp (push "~/healthfinch/emacs-common-denominator" load-path) (require 'healthfinch-init) ``` Status: Issue closed Answers: username_1: Fixed. Sorry for the delay.
BryanSWeber/CUNYAIModule
367353730
Title: Test build orders with sunken colonies Question: username_0: Do they cause freezing? There may not be an override forcing the creation of sunken colonies. #109 Answers: username_1: Sunken and spore colonies are currently freezing the build order. I believe the issue stems from the game first requiring a creep colony in order to then morph a sunken or spore colony. username_0: Tested the hatch + many sunkens build order. Seemed to work fine. Note: Occasionally a build order will complete without morphing a sunken into a creep colony. This suggests I may be marking them as complete twice instead of only once as intended, or the creep colony is not complete when the morph order is sent. Visible in the 2hatch muta build order. username_0: Resolved: https://github.com/username_0/CUNYAIModule/commit/8086466d1087250144e154bb90232d1175cac141 Status: Issue closed
CocoaPods/CocoaPods
1005712996
Title: Question / feature request: support Xcode 13 multi-platform frameworks Question: username_0: * [x] I've read and understood the [*CONTRIBUTING* guidelines and have done my best effort to follow](https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md). I was in the process of replacing CocoaPods with SPM so I can finally get rid of all my duplicated iOS/watchOS/tvOS frameworks. Unfortunately that's still [riddled with issues](https://twitter.com/nachosoto/status/1441077174962835457?s=21). Ideally I'd continue using CocoaPods, but it would be great if it could support [multi-platform frameworks](https://twitter.com/weichsel/status/1433449620882006016?s=21). Answers: username_1: nice! marked it for 1.12.0. username_1: @username_0 have you thought about it what it is that cocoapods should provide to enable this? I think the build settings can be changed by `pod_target_xcconfig` but I am uncertain about the remaining part, is it new DSL? username_0: AFAIK the targets need to have `Any` `supported platforms`, and `Allow Multi-Platform Builds`. I suppose I could do this manually by modifying build settings, but probably ideally we get a new option in the DSL for this? username_1: its interesting in terms of providing a DSL because a pod author might not know which case will be used by what. Could be a Podfile DSL option though to customize which platform to build a pod for. This will give you control to do it. username_1: Would this apply to only pre-built/vendored pods? username_2: +1 for the same feature. Looking to build a multi-platform framework that uses pods. username_3: +1 for this, I'd love to use multi-platform frameworks with CocoaPods. Is there any ETA on when will this be implemented?
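Until a dedicated DSL option exists, the build settings involved can probably be forced from the consuming Podfile. This is a speculative sketch, not an official CocoaPods feature: the two keys are the Xcode build settings associated with multi-platform targets, and blanket-applying them to every pod may not suit every dependency:

```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      # Opt pod targets into Xcode 13 style multi-platform builds.
      config.build_settings['ALLOW_TARGET_PLATFORM_SPECIALIZATION'] = 'YES'
      config.build_settings['SUPPORTED_PLATFORMS'] =
        'iphoneos iphonesimulator watchos watchsimulator appletvos appletvsimulator'
    end
  end
end
```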
remy/nodemon
282533672
Title: Nodemon 1.3.2 crashes, but 1.3.1 did not. Warning of memory leak. Question: username_0: - `nodemon -v`: 1.3.2 - `node -v`: 8.7.0 - Operating system/terminal environment: Mac OS X - Command you ran: nodemon demo.js The latest version 1.3.2 seems to introduce a bug that causes instant crashes with nodemon. I tried the older version 1.3.1 and it does not contain any problems. Once nodemon has crashed 22 times, I see an error message: ``` (node:50575) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 exit listeners added. Use emitter.setMaxListeners() to increase limit Server is up and running ``` ### Steps to reproduce Install version 1.3.2 and with `express` and try to run the most simple hello world: ``` const express = require('express') const app = express() app.get('*', (req, res) => { res.send('Hello World!') }) app.listen(3000) ``` Once you do any changes in the file, nodemon starts crashing. If you save the file 22 times, you see the warning about possible memory leak. The nodemon version 1.3.1 did not have this problem. Answers: username_0: Here is an even more easy version to reproduce the problem. Run a script like this with nodemon 1.3.2: ``` function hello() { console.log('...world\n') setTimeout(hello, 2000) } hello() ``` ... and nodemon crashes if you try to save any changes to the file. Here is information from `--dump`: ``` -------------- node: v8.7.0 nodemon: 1.13.2 command: /usr/local/Cellar/node/8.7.0/bin/node /Users/username_0/work/demo/node_modules/.bin/nodemon --dump demo.js cwd: /Users/username_0/work/demo OS: darwin x64 -------------- { run: false, system: { cwd: '/Users/username_0/work/demo' }, required: false, dirs: [ '/Users/username_0/work/demo' ], timeout: 1000, options: { dump: true, ignore: [ '.git', '.nyc_output', '.sass-cache', 'bower_components', 'coverage', 'node_modules', re: /\.git|\.nyc_output|\.sass\-cache|bower_components|coverage|node_modules/ ], watch: [ '*.*', re: /.*\..*/ ], ignoreRoot: [ '.git', '.nyc_output', '.sass-cache', 'bower_components', 'coverage', 'node_modules' ], restartable: 'rs', colours: true, execMap: { py: 'python', rb: 'ruby' }, stdin: true, runOnChangeOnly: false, verbose: false, signal: 'SIGUSR2', stdout: true, watchOptions: {}, execOptions: { script: 'demo.js', exec: 'node', args: [], scriptPosition: 0, nodeArgs: undefined, [Truncated] ext: 'js,json', env: {} }, monitor: [ '*.*', '!.git', '!.nyc_output', '!.sass-cache', '!bower_components', '!coverage', '!/Users/username_0/work/demo/node_modules/**/*' ] }, load: [Function], reset: [Function: reset], lastStarted: 0, loaded: [ '/Users/username_0/work/demo/package.json' ], watchInterval: null, signal: 'SIGUSR2', command: { raw: { executable: 'node', args: [ 'demo.js' ] }, string: 'node demo.js' } } ``` username_1: Also happening to me, but I think you mean version `1.13.2` not `1.3.2` username_0: @username_1 yeah good catch - I was trying to type `1.13.2` but it ended up as `1.3.2` multiple times :) username_2: Acknowledged - new bug in 1.13.2 - not present in 1.13.1. Suspect the new `pstree` logic. Status: Issue closed username_2: 1.13.3 should fix.
tensorflow/tensorflow
155763055
Title: Better support for breaking up too-large operations Question: username_0: These can be avoided by placing variables on CPUs, but in my implementation, this results in training epochs taking 10 times as long to compute. Clearly the ideal policy is to identify specific chunks of code that generate errors, and attempt to place only those on CPUs. But it is unclear to me how to do this, because those calculations can't be isolated from others that require GPU placement to achieve efficiencies. For example, simply testing predictions on a test set with something like evals = sess.run(tf.argmax(y, 1), feed_dict={x: use_x_all}) where `x` is a `tf.placeholder` of inputs to my model, and `y` are the output activations of my network, produces the above error when `use_x_all` is a large array (here with `28000` examples). Attempting to put this calculation alone on a CPU fails, presumably because the network evaluation producing `y` is on the GPU. Because of this I (seem to) need to resort to approaches like use_x_all, _ = data_loader.stack_data(use_data, as_cols=False) use_x_split = np.split(use_x_all, splits) for use_x in use_x_split: # ... evals_part = sess.run(tf.argmax(y, 1), feed_dict={x: use_x}) # accumulate evals which clearly doesn't scale. Is there a better way? Specifically: - Is there a way to place calculations like the one above on a CPU and still have those calculations for the same graph (e.g. training) run on a GPU? or, alternatively - Is there an idiom (like batching) that can be more easily applied in such situations to reduce the memory demands of such calculations? Answers: username_1: Why does your `np.split(...)` approach not scale? This is an easy way to proceed if your dataset fits in host memory. For larger datasets, you can use the standard reader pipeline to read batches of input at a time: e.g. see the `batch_inputs()` function in the [Inception model](https://github.com/tensorflow/models/blob/dc7791d01c9a6b1fcc40e9e2c1ca107cbd982027/inception/inception/image_processing.py#L407). You could also try [`tf.train.batch()` and related functions](https://www.tensorflow.org/versions/r0.8/api_docs/python/io_ops.html#batching-at-the-end-of-an-input-pipeline) to control how the inputs are batched. As to why TensorFlow doesn't do this automatically, there are clearly [lots of different ways to do batching](https://www.tensorflow.org/versions/r0.8/api_docs/python/io_ops.html#batching-at-the-end-of-an-input-pipeline), and TensorFlow can't reliably infer the user's intent. Therefore, we provide higher level libraries to allow users to build the appropriate input pipelines. username_0: @username_1 — At least in this context, it's not an issue of the many ways to do batching, but a simple matter of sequencing: doing something in parts rather than all at once (as in the hand-coded version). What doesn't scale is that I need to ferret out every place where this happens, and write code like I did above. But I see the problem: its not easily generalized, as you say. The data might need shuffling, there could be various ways to combine the results, and not all of it may fit in memory at once, etc. So I guess as long as I'm "doing it right", I'm not worried; and the question boils down to that: Is it idiomatic to be manually breaking up and reassembling data fed to TF operations as needed when they result in calculations that are too big for the hardware? 
username_0: @username_1 — If I've got that right, it would be extremely helpful to see how I might rewrite the full code example in [my SO question](http://stackoverflow.com/q/37327312/656912) to take advantage of some of the related TF API (e.g., `train.batch` and `batch_join`, perhaps).
username_2: @username_0 I've written a tool called [hypercube](https://github.com/ska-sa/hypercube) to reason about problem sizes and memory requirements that you may find useful.
username_3: This is mostly questions and requests for help rather than an issue, so I'm going to close to keep the issue tracker focused.

Status: Issue closed
username_0: @username_3: More of a feature request. What's the home for those?
username_3: @username_0: Unfortunately I think the feature request to have TensorFlow automatically shard ops is too broad to leave as a Github issue.
username_0: @username_3: Not saying it should be here; but wondering where the home for general discussion of feature requests is (where they can evolve to more specific issues).
username_3: <EMAIL> is one option, but on second thought leaving this as an issue is fine.
username_0: @username_3: It's certainly within the scope of what's spelled out in the issue submission template.
username_3: Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!

Status: Issue closed
username_4: Hey, I'd like to calculate a matrix multiplication with samples which do not fit into memory. Now I thought about batching the samples and summing up the results. In OpenMP this would be a simple omp parallel reduction. In TensorFlow, it is a rather big bunch of code to write a dedicated input pipeline, create a while loop, assign-adding the values, ... Also, the user has to decide how big the batches are allowed to be. Have there been any updates on simplifying this?
I'd love something like a map-reduce option which would automatically decide which batch size it can use.
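Until something automatic exists, the manual chunking at least only has to be written once. A rough helper in the same `sess.run`/`feed_dict` style used above; the batch size remains a knob the user has to pick for their hardware:

```python
import numpy as np

def run_in_batches(sess, fetch, placeholder, data, batch_size=1024):
    """Evaluate `fetch` over `data`, feeding `placeholder` one chunk at a time."""
    parts = []
    for start in range(0, len(data), batch_size):
        chunk = data[start:start + batch_size]
        parts.append(sess.run(fetch, feed_dict={placeholder: chunk}))
    return np.concatenate(parts, axis=0)

# e.g.: evals = run_in_batches(sess, tf.argmax(y, 1), x, use_x_all, batch_size=2048)
```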
unpoly/unpoly
930515427
Title: up.Selector.closest() doesn't consider selector on children targets Question: username_0: ### Bug description discovered this when my forms were set to target an inner element, but instead the first matching element on the page was selected (this is a page with a list of forms) ```jinja <div> <div class="auto-submit d-flex"></div> <form {% if id %}id="{{id}}"{% else %}id="form_{{name}}"{% endif %} up-transition="cross-fade" up-target=".auto-submit" up-method="{{up-method|default:PATCH}}" up-autosubmit up-delay="{{up-delay|default:1000}}" enctype="multipart/form-data" up-history="false" action="{{action}}"> {% block component %}{% endblock %} <div class="auto-submit d-flex"> </div> </form> </div> ``` response ```jinja <div class="auto-submit d-flex"> <img src="/assets/Font-Awesome/svgs/solid/check-circle.svg"> <p>Profile Upadted !</p> </div> ``` my workaround/fix ```javascript var selector_closest_fixed = false; (function fix_selector_closest(){ if(!selector_closest_fixed){ selector_closest_fixed = true up.Selector.prototype.closest = function(element) { var parentElement; let child_target = element.querySelector(this.unionSelector); if (child_target){ return child_target } else if (this.matches(element)) { return element; } else if ((parentElement = element.parentElement)) { return this.closest(parentElement); } }; } })(); ``` Answers: username_1: Can you confirm you're using Unpoly 2? username_0: i'm using `unpoly ~2.0.1` (whatever is latest on CDN) i tried many selectors for solutions, even `up-id` i spent a lot of time on this issue, trying to figure out why my form post was changing html in some random place on my page, nowhere near the origin. the way `selector closest` works either is not intuitive, incorrect, or maybe the function should be renamed. `closest` really sounds like it should try to do just querySelector(selector), because the `closest` selection will be that. but closest is doing a recursive search of parents, so it's name should probably be something similar to what it's actually doing, like `search_parents` or something since unpoly has the `up-content` att, i think i can do an example without needing a server. i'll post it soon. username_0: https://glitch.com/edit/#!/ember-mousy-canvas js has my fix, i can't get the `up-document/up-content` working (maybe another bug) i added some style to show what unpoly is changing, js has my fix, remove the fix and see the behaviour difference. username_1: I removed your Javascript changes and updated both `up-target` to this. ``` up-target="& .auto-submit" ``` I believe it achieves what you were looking for. Can you confirm? https://glitch.com/edit/#!/radial-attractive-equinox username_0: yes, maybe this should be emphasised in the documentation. i tried to use :origin in my target, but it didn't work, the `&` is very unfamiliar to me, and i didn't attempt to use it as a selector. seems that origin should be implied, though. would be common that the selector you used would break things for people? for me it seems like the intuitive behaviour and `&` is redundant. username_1: I haven't ran into any issues here, so I can't really agree. Maybe Henning can provide an opinion here. I'm going to close this for now since `up.fragment.closest` documents that it only looks at itself and then it's ancestors, never its children. If Henning deems this a bug we can re-open. Please feel free to submit a PR for the `:origin` documentation if you can find a way to improve it. 
Status: Issue closed
username_2: Both `:origin` and `&` are supported, as [documented here](https://unpoly.com/origin).

Even without `:origin` in the selector, Unpoly 2 tries to [take a given `{ origin }` into account](https://unpoly.com/fragment-placement#interaction-origin-is-considered) when matching fragments. Currently that mostly looks for a parent element around the origin.

I haven't had a use case where I needed to auto-match an origin child, but I guess we could support that. The fix wouldn't go in `up.Selector` though, but here:

https://github.com/unpoly/unpoly/blob/e555c48d93e636f8a9c6643c38688cc237ee6853/lib/assets/javascripts/unpoly/fragment.coffee#L1033-L1038

https://github.com/unpoly/unpoly/blob/e555c48d93e636f8a9c6643c38688cc237ee6853/lib/assets/javascripts/unpoly/classes/fragment_finder.coffee

https://github.com/unpoly/unpoly/blob/e555c48d93e636f8a9c6643c38688cc237ee6853/spec_app/spec/javascripts/up/fragment_spec.js.coffee#L63-L87
username_2: Re-opening this issue for a while in case someone wants to tackle that change.
kyrptonaught/quickshulker
798232040
Title: Consistency Ender Chest Question: username_0: This is more a design question, but could also be a normal issue. I like the feature to open the ender chest in inventory. In normal vanilla minecraft, you need a silk touch pickaxe to break the ender chest to get it back into your inventory after usage. This mod bypasses this necessity. My question is, if there could be a check added, if the user has a silk touch pick? Maybe as optional config.
IrinaSing/welcome-to-js
833909202
Title: reading-code exercises Question: username_0: - [ ] 1-remembery.js - [ ] 2-madlib.js - [ ] 3-getting-an-orange.js - [ ] 4-frogify.js - [ ] 5-repeat-or-remove-1.js - [ ] 5-repeat-or-remove-2.js - [ ] 5-repeat-or-remove-3.js - [ ] 6-filter-words.js - [ ] 7-search.js - [ ] 8-guessing-game.js<issue_closed> Status: Issue closed
CCMS-UCSD/GNPS_Workflows
575144219
Title: [MERGE_NETWORKS_POLARITY] Settings / how to troubleshoot failing workflow Question: username_0: Hey @username_1, Here are two failed jobs from two versions of the MERGE_NETWORKS_POLARITY workflow with the same POS and NEG files resulting in different errors: Release 18: https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=b5492f23506f4d0c8db8f6b6d2fc998b Release 19: https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=5cfd199ca36640eba0f3508db7af40fd Issues? bugs to fix? Let me know ;) Answers: username_1: Thanks @username_0 will look into it for release 20! Status: Issue closed username_1: These networks are not from the same set of data and don't make sense to merge polarities.
blackbaud/stache2
229756377
Title: Allow way to change route's name in sidebar, breadcrumbs Question: username_0: ### Expected behavior ### Actual behavior ### Steps to reproduce ### Plunker (see example SKY UX 2 plunker template at: https://plnkr.co/edit/GeP22YbirEzceF3NVu39?p=preview) Answers: username_0: https://github.com/blackbaud/stache2/pull/166 Status: Issue closed
red-data-tools/YouPlot
916760762
Title: Strange behavior with --xlim option Question: username_0: Strange behavior with --xlim option ![image](https://user-images.githubusercontent.com/5798442/121444152-e60aa980-c9c9-11eb-8f9b-1da03a432af9.png) Answers: username_0: This is caused by passing an array of strings to xlim. I need to fix YouPlot. This is not a UnicodePlot bug, but I have reported it to UnicodePlot as well. https://github.com/red-data-tools/unicode_plot.rb/issues/55 Status: Issue closed
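A guess at the shape of the fix on the YouPlot side: coerce the `--xlim` strings into numbers before handing them to UnicodePlot. The option and method names here are illustrative rather than the actual parser internals:

```ruby
# "--xlim 0,100" arrives from the CLI as ["0", "100"]; convert before plotting.
xlim = params[:xlim]&.map { |v| Float(v) }   # -> [0.0, 100.0]
plot = UnicodePlot.lineplot(xs, ys, xlim: xlim)
plot.render
```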
infor-design/enterprise-ng
730298878
Title: DataGrid: Click event is not triggered when using key search Question: username_0: **Describe the bug** We have columns marked as hyperlinks that relies on the click event to perform different actions. A SohoDataGridColumn: ``` { id: column.name, name: column.description, field: column.name, sortable: true, hidden: false, formatter: this.getOrderNumberFormatter(m3DialogParams.h5Url), click: this.orderNumberCallback(m3DialogParams), // The callback is not triggered when using key search }; ``` But when using key search and the field is marked as found by key search the attached callback function is never called. **Expected behavior** A click event should be triggered so the callback function is called when clicking on a field even when using key search. **Version** - ids-enterprise-ng: 7.6.0 **Screenshots** With no key search the hyperlinks works fine and the click event is triggered when clicking on the order no: ![image](https://user-images.githubusercontent.com/42343009/97285049-0b371700-1842-11eb-9682-e00970cb33c3.png) But when using key search and the field is marked as found by key search the callback which is attacked to the click event is not triggered when clicking on the order no: ![image](https://user-images.githubusercontent.com/42343009/97285478-8c8ea980-1842-11eb-9106-aac709c430ee.png) Answers: username_1: I tried to reproduce this but could not. I think i have the steps right but feel free to correct. 1) I updated https://master-enterprise.demo.design.infor.com/components/datagrid/example-keyword-search.html to show a console log on the click event of the hyperlink cell (may take a while to update) 2) go to the keyword search and type "Comp" and hit enter 3) this highlights 4) go click on the cells that are highlighted and check the console -> can see the click event is firing You might want to test a newer version? Or perhaps im missing something in my steps? username_0: I've checked your example. When the keyword search field is empty I can see that all rows works by looking at the console: ![image](https://user-images.githubusercontent.com/42343009/97312909-52370380-1866-11eb-9420-b917b1adc18a.png) But if I enter the keyword "Compressor" and hit enter some of the rows that worked previously doesn't work now (nothing is printed in the console when I click on them): ![image](https://user-images.githubusercontent.com/42343009/97313225-ab069c00-1866-11eb-8d18-d3ee19b8fa24.png) I've highlighted those that doesn't work yellow. But for some reason some of them works while others does not.
spinnaker/swabbie
512161165
Title: Fix swabbie email template termination date Question: username_0: <img width="878" alt="Screen Shot 2019-10-24 at 1 10 53 PM" src="https://user-images.githubusercontent.com/1572759/67522109-f937fc80-f660-11e9-9059-9a0d42e31495.png"> <img width="511" alt="Screen Shot 2019-10-24 at 1 12 41 PM" src="https://user-images.githubusercontent.com/1572759/67522116-ff2ddd80-f660-11e9-9ee6-437a30f3212b.png"> Echo template: ```<td align="left" style="padding: 4px 0; font-family: Helvetica, Arial, sans-serif; font-size: 12px;"> ${resourceData.resource.projectedDeletionStamp?number_to_date?string("EEE, d MMM yyyy")} </td>``` Done When: - Swabbie formats the deletion date in human readable format - Update echo to read the value without any treatment to the format<issue_closed> Status: Issue closed
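A minimal sketch of the first "done when" item on the Swabbie side, assuming `projectedDeletionStamp` is epoch milliseconds (which is what the template's `number_to_date` usage implies), so the template can drop the format gymnastics:

```kotlin
import java.time.Instant
import java.time.ZoneId
import java.time.format.DateTimeFormatter

// Render the projected deletion timestamp as e.g. "Thu, 24 Oct 2019".
fun formatDeletionDate(projectedDeletionStamp: Long): String =
    DateTimeFormatter.ofPattern("EEE, d MMM yyyy")
        .withZone(ZoneId.of("UTC"))
        .format(Instant.ofEpochMilli(projectedDeletionStamp))
```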
dapr/dapr
909184410
Title: before make e2e-build-deploy-run should delete mutatingwebhookconfigurations file Question: username_0: ## Expected Behavior before make e2e-build-deploy-run delete mutatingwebhookconfigurations file ## Actual Behavior not delete mutatingwebhookconfigurations cause caBundle content not match the secret key,the Injector will not work fine. ``` {"instance":"dapr-sidecar-injector-db4b69978-6cfn4","level":"info","msg":"log level set to: info","scope":"dapr.injector","time":"2021-06-01T15:37:17.24615903Z","type":"log","ver":"unknown"} {"instance":"dapr-sidecar-injector-db4b69978-6cfn4","level":"info","msg":"metrics server started on :9090/","scope":"dapr.metrics","time":"2021-06-01T15:37:17.246706046Z","type":"log","ver":"unknown"} {"instance":"dapr-sidecar-injector-db4b69978-6cfn4","level":"info","msg":"starting Dapr Sidecar Injector -- version edge -- commit v1.0.0-rc.4-195-g4db3a83-dirty","scope":"dapr.injector","time":"2021-06-01T15:37:17.246909768Z","type":"log","ver":"unknown"} {"instance":"dapr-sidecar-injector-db4b69978-6cfn4","level":"info","msg":"Healthz server is listening on :8080","scope":"dapr.injector","time":"2021-06-01T15:37:17.253899327Z","type":"log","ver":"unknown"} {"instance":"dapr-sidecar-injector-db4b69978-6cfn4","level":"info","msg":"Sidecar injector is listening on :4000, patching Dapr-enabled pods","scope":"dapr.injector","time":"2021-06-01T15:37:17.369876538Z","type":"log","ver":"unknown"} 2021/06/01 15:38:13 http: TLS handshake error from 10.244.0.0:3630: remote error: tls: bad certificate 2021/06/01 15:38:13 http: TLS handshake error from 10.244.0.0:60225: remote error: tls: bad certificate 2021/06/01 15:38:49 http: TLS handshake error from 10.244.0.0:35503: remote error: tls: bad certificate 2021/06/01 15:38:52 http: TLS handshake error from 10.244.0.0:44436: remote error: tls: bad certificate 2021/06/01 15:39:26 http: TLS handshake error from 10.244.0.0:36347: remote error: tls: bad certificate ``` ## Steps to Reproduce the Problem <!-- How can a maintainer reproduce this issue (be detailed) --> ## Release Note <!-- How should the fix for this issue be communicated in our release notes? It can be populated later. --> <!-- Keep it as a single line. Examples: --> <!-- RELEASE NOTE: **ADD** New feature in Dapr. --> <!-- RELEASE NOTE: **FIX** Bug in runtime. --> <!-- RELEASE NOTE: **UPDATE** Runtime dependency. --> RELEASE NOTE:<issue_closed> Status: Issue closed
dof-dss/architecture-catalogue
625653425
Title: Architecture Catalogue: ‘Last Updated’ field not being updated when changes are made Question: username_0: **‘Last Updated’** field not being updated when changes are made. In this example I removed a Tag from User Defined Tags, then selected Back to catalogue entry to return to the catalogue entry. See **Last Updated.png** attached.

**Steps to reproduce:**
**1.** Launch https://architecture-catalogue.staging.ea.digitalni.gov.uk/home
**2.** Open a catalogue entry and make changes
**3.** Go back to the entry and view the Last Updated entry

**Actual Result:** The Last Updated field remains the date on which the record was created

**Expected Result:** The Last Updated field should reflect the change made to the catalogue entry

<img width="785" alt="Last Updated" src="https://user-images.githubusercontent.com/64217737/83018929-044f7280-a01e-11ea-974a-027202ba1cca.png">
scikit-learn-contrib/imbalanced-learn
472570385
Title: SMOTENN Question: username_0: Hello i faced an imbalanced multiclass data which its chart as below : ![image](https://user-images.githubusercontent.com/41380025/61832412-54dfc780-ae71-11e9-85f7-569abda1faf6.png) i used smoteenn , it seems it solve the problem but at the same time it make the data huge and take alot of time in training , my question is there is a way to use smoteenn to reduce the majority class 50% for instance and increase the minority class to be equal to the rest of classes ? Thank you Answers: username_1: The following code should do the trick. ```python from collections import Counter from imblearn.over_sampling import SMOTE from imblearn.pipeline import make_pipeline from imblearn.under_sampling import RandomUnderSampler from sklearn.datasets import make_classification # Assume that you have a multiclass classification problem X, y = make_classification( n_samples=250, n_informative=20, n_features=30, n_classes=3, weights=[0.8, 0.1, 0.1], random_state=0, ) # Where the class distribution is {0: 200, 2: 24, 1: 26} print(Counter(y)) # So, since you know before hand that your majority class contains 200 samples # you can force the RandomOverSampler to undersample that class by a prespecified number. # In our example will be 100 since the majority class contains 200 samples. sampler = make_pipeline( RandomUnderSampler(random_state=0, sampling_strategy={0: 100}), SMOTE(random_state=0), ) X_resampled, y_resampled = sampler.fit_resample(X, y) # After the resampling the class distribution will be {0: 100, 1: 100, 2: 100} print(Counter(y_resampled)) ``` username_0: thank you Chkoar for replying but i faced an error below ![Error](https://user-images.githubusercontent.com/41380025/61915812-6e9f0e80-af46-11e9-8c0c-7f9747308035.JPG) could you please tell me how to solve it as i try to write the code which appeared in the error msg but it doesn't work . Thank u username_1: ![image](https://user-images.githubusercontent.com/11897937/61916335-c2aef080-af50-11e9-854f-36bbd9b17eec.png) username_0: Good morning username_1 i correct the line u mentioned and unfortunately there is another error appeared ![error2](https://user-images.githubusercontent.com/41380025/61934197-12f57500-af88-11e9-9581-8bd13d128417.JPG) marked in yellow username_1: @ username_0 Update please to the latest version of `scikit-learn` username_0: i have this version "The scikit-learn version is 0.20.3." username_0: ok i'll update to 0.21.2 username_0: it works , thank you but i want to ask is there is a way to use SMOTEENN to implement your code and make all samples 100? username_1: `SMOTEENN` actually is a pipeline that is consisted by `SMOTE` and `EEN`. So at first `SMOTE` generates examples for the minority class. Then `ENN` probably will try to clean both classes (in case of `SMOTEENN`), if it can. So in my example SMOTE will make 100 examples per class. Now you have a perfectly balanced set. What do you want to do from there? You can add `ENN` in the pipeline at the end but probably you'll want to change it's `sampling_strategy` to `all`. In that case it will try to clean all classes. So you may remain with a new imbalanced dataset. Status: Issue closed
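Taking the follow-up question literally, i.e. keeping `SMOTEENN` itself while still shrinking the majority class first, the earlier pipeline idea should carry over. A sketch assuming the installed imbalanced-learn exposes the `smote`/`enn` constructor arguments (and reusing `X, y` from the earlier snippet); note that the ENN cleaning step removes samples afterwards, so the classes will not end up at exactly 100 each:

```python
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import EditedNearestNeighbours, RandomUnderSampler

sampler = make_pipeline(
    # First cut the majority class down to roughly the target size.
    RandomUnderSampler(random_state=0, sampling_strategy={0: 100}),
    # Then oversample the minorities and clean noisy samples in one step.
    SMOTEENN(
        random_state=0,
        smote=SMOTE(random_state=0),
        enn=EditedNearestNeighbours(sampling_strategy="all"),
    ),
)
X_resampled, y_resampled = sampler.fit_resample(X, y)
```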
knative/test-infra
470493128
Title: Change the build log link in the alert message to spyglass link Question: username_0: Currently the build log link the alert message links to the GCS bucket. Update this to use the spyglass link to get summary of the error. /kind monitoring Answers: username_1: /assign username_0 username_2: According to the schema, we should be storing the link to the build-log file - https://github.com/knative/test-infra/blob/81861c7c2060af68e3dbdc72bcd3a2f0584566d2/tools/monitoring/mysql/schema.sql#L27 But, it looks like we are actually storing the path to the GCS dir instead. When sending the alert, we are just using this and hence, it points to the GCS dir. username_2: We either need to update the storage to start storing the URL instead of the GCS path, or update the schema to also store the url with the GCS path
bats-core/bats-core
602087417
Title: Fail on duplicate test names Question: username_0: I fell into the trap of accidentally having same-named tests. This caused one of the tests to be run twice and the other not at all. I consider this rather critical, as it can hide failures and you might wonder what is going on while all tests are green.

I've seen that there is already a detection for duplicate test names and a warning printed at the start of the run. But if you start the tests, then wait until they are finished and then look at the result, this warning can easily be missed (I did so).

I'd like to suggest changing this to a hard error, as it hides potential test failures by not executing code that should be executed.
Answers: username_1: I can't think of a scenario where a user would intentionally use the same name, so I'm inclined to agree that we should hard-error.
username_0: Actually I fell into that trap myself right away :-( svn-all-fast-export/svn2git#106. As I have a wrapper around bats with some options anyway, I added an output screening that fails the test run at the end if there were duplicate names.

For me it is OK, and maybe even better, if there is no instant fail, as the tests that are run are valid nonetheless; it's just that one test is run twice and one isn't run at all. So just like a failing test does not immediately abort, aborting after the tests are run would be desirable, as you could also see other failing tests in the same run. But anything is better than only having a warning imho :-)
username_2: I support having it fail fast, to avoid a false sense of safety. Good point @username_0 :+1:
username_1: @username_2 what do most other test runners do in this scenario? I'd prefer to behave similarly unless we have a strong reason not to.
username_2: @username_1 from browsing a few other repos, a lot of other test runners can't detect it as they define native functions, which are overwritten by the last definition (unless defining immutable variables). I've been stung by this plenty of times, so I err on the side of obviousness (and trying to reclaim hours back!).
username_1: I just quickly checked rspec and minitest (or, at least, minitest within rails). rspec happily duplicates the test, so both copies run as if they had different names; no issue. minitest (when using the `test "name"` DSL from ActiveSupport) exhibits the same restriction that bats has. That is, the second attempt to define a duplicate test name causes a collision. Minitest allows the entire suite to run, and treats the collision as an Error (which is different from a Failure). Errors are essentially mis-uses of the test framework, or exceptions thrown during the test that are not expected. They are distinct from Failures, which are assertions that do not pass.

So, TL;DR:
- one framework _supports_ duplicate test names (uniq-ifying them somehow automatically)
- another framework does not allow duplicate test names, but does _NOT_ hard-error the test run. It completes, and just treats the duplicate test as errored/failed/broken.

We could probably audit some JS testing frameworks as well, but I'm mostly convinced myself that we should just report the test as failed instead of aborting the entire suite.
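For anyone wanting the same stopgap before a hard error ships, a wrapper along these lines reproduces the "output screening" described above. The grep pattern is a guess at the warning's wording, so check it against the message your bats version actually prints:

```bash
#!/usr/bin/env bash
# Run bats, keep a copy of its output, and fail if the duplicate-name warning appeared.
set -o pipefail
out="$(mktemp)"
bats "$@" | tee "$out"
status=$?
if grep -qi "duplicate" "$out"; then
  echo "Failing the run: duplicate test names were reported" >&2
  status=1
fi
rm -f "$out"
exit "$status"
```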
YumaInaura/YumaInaura
609464070
Title: What is the difference between docker-sync's unison and rsync? unison is two-way synchronization, while rsync only sends from the host to the guest. Question: username_0: So it seems.

[Dealing with slow file mounts on Docker for Mac - Qiita](https://qiita.com/Jason/items/a76df7cc7e7db16d90f7)

To get a file back from the guest to the host when using rsync, it seems you can simply use docker cp.

```
# Bring schema.rb from inside the Docker container over to the PC side
$ docker cp <container>:/var/www/db/schema.rb ./db/schema.rb
```

[Solving the Docker for Mac slowness problem with docker-sync | Cluex Developers Blog](https://www.wantedly.com/companies/clueit/post_articles/43582?auto_login_flag=true#_=_)
rwoods3/ray-personal-website
433084174
Title: As site admin I want the ability to hide projects Question: username_0: I want the ability to choose which projects get displayed on Projects page of the website. For example, private projects obviously should not get displayed. But, there might also be projects that are not complete that i do not want to show yet or that were experimental that I just don't want to highlight on the website.
JCSDA/jedi-stack
716835600
Title: PNETCDF and PIO on AWS Question: username_0: I was updating the stacks on AWS today so I could test the new mpas cmake build and I ran into a puzzling little error. I'm documenting here so it doesn't get forgotten but it's not high priority so I'll icebox it. The intel build worked fine but the gnu build failed on PIO because it was complaining about not being able to find the PNETCDF fortran libraries. Further investigation revealed that they were not there - the PNETCDF installation happily succeeded but skipped the fortran. The easiest way to check this is just to look for the `pnetcdf.mod` file in the `$PNETCDF/include` directory. So then I tried to force the issue by adding the `--enable-fortran` flag to the `configure` statement in our build_pnetcdf.sh script. It came back with this: ```bash checking whether /optjedi/modules/gnu-9.3.0/openmpi/4.0.3/bin/mpifort is a valid MPI compiler... no configure: error: ----------------------------------------------------------------------- Invalid MPI Fortran 77 compiler: "/optjedi/modules/gnu-9.3.0/openmpi/4.0.3/bin/mpifort " A working MPI Fortran 77 compiler is required. Please specify the location of a valid MPI Fortran 77 compiler, either in the MPIF77 environment variable or through --with-mpi configure flag. Abort. ----------------------------------------------------------------------- ``` I verified that compiler is indeed gfortran 9.3, as I expect: ```bash ubuntu@ip-172-31-24-71:/optjedi/modules/intel-20.0.166/impi-20.0.166/pnetcdf/1.12.1/include$ /optjedi/modules/gnu-9.3.0/openmpi/4.0.3/bin/mpifort --version GNU Fortran (Ubuntu 9.3.0-11ubuntu0~18.04.1) 9.3.0 Copyright (C) 2019 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` So, for some reason, Pnetcdf doesn't think gfortran 9.3 is a valid F77 (!) compiler. But, I have the identical combination of gfortran 9.3.0 and openmpi 4.0.3 on my Mac and both PNETCDF and PIO install fine, with fortran. So, just a heads-up for @markjolah that I'm working on new AWS modules but the gnu stack for now won't have PIO or pnetcdf fortran. I'll test mpas with intel. And, after ecmwf tags their latest releases of ecbuild, eckit, fckit, and atlas, I'll try building with those and including the appropriate modules. So, I expect to update the AWS snapshot with new modules next week, accessible via the `jedinode.py` script (single-node) or ParallelCluster. Answers: username_0: Hey - awesome! It looks like https://github.com/JCSDA/jedi-stack/pull/168 solved this problem! I guess it was an issue with our `build_pnetcdf.sh` script. PNETCDF and PIO build with gnu on AWS. I'll close this Status: Issue closed
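For anyone who hits the same configure failure before that fix, two quick checks follow directly from the description above: confirm whether the Fortran interface was actually built, and point configure at the MPI Fortran wrapper the error message asks for (the path is the one from this report):

```bash
# Did the PnetCDF build produce the Fortran module at all?
ls "$PNETCDF/include/pnetcdf.mod" || echo "PnetCDF was built without its Fortran interface"

# Tell configure explicitly which MPI Fortran 77 compiler to use, as the error suggests,
# before re-running the build script with --enable-fortran.
export MPIF77=/optjedi/modules/gnu-9.3.0/openmpi/4.0.3/bin/mpifort
```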
solo-io/gloo
635477976
Title: Can I get the access log behind the ext auth Question: username_0: I built a [custom auth server](https://docs.solo.io/gloo/latest/guides/security/auth/custom_auth/) in Gloo and set some response headers in it, so that the upstream service receives some auth info once a request passes the auth server. Now I want to write a custom gRPC access-log server following the docs in [access logging](https://docs.solo.io/gloo/latest/guides/security/access_logging/). But I find the access logs are captured at the entry point of the gateway. What I actually want is to see the request headers set by the custom auth server in the access log. So is there any way to place the access logging behind the custom auth?
Answers: username_0: I'm sorry — I forgot to add the headers set by the auth server to `additional_request_headers_to_log`. Now it works. Thanks very much. By the way, can I get the request body in the access logger? I have seen [an envoy issue](https://github.com/envoyproxy/envoy/issues/9950), which says we can use traffic tapping. Is this supported by Gloo? Is there any way to get both the request (including the body) and the response info in Gloo? We want to audit user behavior at the gateway.
username_1: access logs do not support the body. one possible workaround in gloo today is to use the transformation filter to add the body to envoy's dynamic metadata, and log that. i would be careful with large bodies in this approach. consider using the buffer filter to limit max_request_bytes
username_2: @username_0 I'm trying to set some response headers in the custom auth server, so the upstream service gets some auth info after passing the auth server, as you did, but the headers are not forwarded to the upstream. Can you show me an example of how you edited the gloo-system default settings? Thanks for your help
username_0: You can use `allowedUpstreamHeaders`:
```bash
kubectl -n gloo-system patch settings default --type merge -p "spec:
  extauth:
    extauthzServerRef:
      name: your-custom-auth-upstream
      namespace: gloo-system
    httpService:
      request:
        allowedHeaders:
        - Request-Header-You-Want-To-Receive
      response:
        allowedUpstreamHeaders:
        - Header-You-Want-To-Add-To-Upstream
    requestTimeout: 0.5s"
```
username_2: Thanks @username_0. Works like a charm👌🏾
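For reference, a sketch of the gateway-level access-logging config this thread refers to. The field names follow the access-logging guide linked above but should be double-checked against your Gloo version, and the log name, cluster name, and header are placeholders:

```yaml
apiVersion: gateway.solo.io/v1
kind: Gateway
metadata:
  name: gateway-proxy
  namespace: gloo-system
spec:
  options:
    accessLoggingService:
      accessLog:
      - grpcService:
          logName: custom-access-log
          staticClusterName: access_log_cluster
          # headers added by the auth server (via allowedUpstreamHeaders)
          # appear on the upstream request, so list them here to log them
          additionalRequestHeadersToLog:
          - Header-You-Want-To-Add-To-Upstream
```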
awslabs/aws-api-gateway-developer-portal
1010388680
Title: Handling Authorization Question: username_0: How do you facilitate the end user getting access to an IdToken or AccessToken if the API is secured by a Cognito user pool authorizer? Is there a different mechanism for securing the API alongside the API key? I'd appreciate any ideas on how to secure the API while keeping it simple for the end user. Thanks
Answers: username_1: @username_0 What did you end up doing? I see that we can import a new OpenAPI 3/2 definition manually. I can manually extract the IdToken and make calls in Postman for testing. There is a dashboard area that shows the usage graph and API key. Maybe displaying the ID token there could be the way to go. It is located in Local Storage.
username_1: @username_0 I created an /auth POST endpoint that calls a Lambda function which accepts a username and password. You must enable `USER_PASSWORD_AUTH` in the Cognito app client for this to work. Here is an excerpt of the Lambda function. Install the npm package `amazon-cognito-identity-js` first.

```js
const AmazonCognitoIdentity = require('amazon-cognito-identity-js');

const poolData = {
  UserPoolId: 'POOL_ID',
  ClientId: 'CLIENT ID of APP'
};
const userPool = new AmazonCognitoIdentity.CognitoUserPool(poolData);

// Authenticate against the user pool and return the id token via the Lambda callback.
const login = function(user, pass, callback) {
  const authenticationDetails = new AmazonCognitoIdentity.AuthenticationDetails({
    Username: user,
    Password: pass,
  });
  const cognitoUser = new AmazonCognitoIdentity.CognitoUser({
    Username: user,
    Pool: userPool
  });

  const response = {
    statusCode: 200,
    body: null
  };

  cognitoUser.authenticateUser(authenticationDetails, {
    onSuccess: function(result) {
      const idToken = result.getIdToken().getJwtToken();
      // For a Lambda proxy integration the body must be a string.
      response.body = JSON.stringify({ token: idToken });
      callback(null, response);
    },
    onFailure: function(err) {
      console.log(err);
      response.statusCode = 401;
      response.body = JSON.stringify(err);
      callback(null, response);
    },
  });
};

exports.handler = function(event, context, callback) {
  let username = '';
  let password = '';

  if (event.body) {
    const body = JSON.parse(event.body);
    if (body.username) username = body.username;
    if (body.password) password = body.password;
  }

  login(username, password, callback);
};
```

Developers consuming the API can use Amplify in their apps to do this. But to at least be able to test it (in a sandbox environment, for example) they could use /auth. Also, if they have their own API, this makes it easier.
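A hypothetical call to such an /auth endpoint from the consumer side — the URL, stage, and credentials are placeholders, and the response shape matches the Lambda excerpt above:

```bash
curl -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/auth" \
  -H "Content-Type: application/json" \
  -d '{"username": "dev-user", "password": "example-password"}'

# Expected 200 response body:
# {"token": "<JWT id token>"}
```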
robertamezquita/OSCABioc2019
456893088
Title: vignette does not build Question: username_0: ``` --- re-building ‘OSCABioc2019.Rmd’ using rmarkdown Quitting from lines 122-201 (OSCABioc2019.Rmd) Error: processing vignette 'OSCABioc2019.Rmd' failed with diagnostics: there is no package called 'TabulaMurisData' --- failed re-building ‘OSCABioc2019.Rmd’ SUMMARY: processing the following file failed: ‘OSCABioc2019.Rmd’ Error: Vignette re-building failed. Execution halted ``` Looks like #1 might fix this problem. cc: @LiNk-NY @lwaldron Answers: username_1: Fixed! Checking the build on travis now Status: Issue closed
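For anyone reproducing this locally, the missing dependency is an ExperimentHub data package on Bioconductor and can be installed before rebuilding the vignette — a sketch, assuming a working Bioconductor setup:

```r
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("TabulaMurisData")
```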
kalabox/kalabox
177287428
Title: Stucked at - Waiting for site to be ready Question: username_0: Does this bug prevent you from using Kalabox? Yes Answers: username_1: @username_0 so Kalabox will wait for your site to return a site code of `200 OK` before it reports it as ready. However, i think you should still be able to hit the site in the browser while this is happening. Seems like that is not true for you? Bear in mind that the site might actually not be up and running yet if you do this eg you will get the `502` page. Additionally, is it possible that your site has an error or something that would actually cause a non-200 response code? Maybe we should change our check to be a little less strict so even if a site errors the user can get to the page. username_1: @username_0 actually going to open this up a bit before we do our stable username_0: @username_1 Thanks, i'm still trying to get this resolve also. i will keep updates whatever error i found/spot. i don't have any error on my console at my Live site that can cause non-200 response code. And yes i can't hit to open my site on my local while it saying "Waiting your site to be ready" this is what showing after i'm waiting ``` C:\Users\username_0\mscapelocal>kbox start Starting. Cleaning up. Running pre start tasks. Starting containers. Starting mscapelocal_data_1 Starting mscapelocal_redis_1 Starting mscapelocal_db_1 Starting mscapelocal_web_1 mscapelocal_unison_1 is up-to-date Starting mscapelocal_solr_1 Starting mscapelocal_cli_1 Starting mscapelocal_appserver_1 Starting mscapelocal_terminus_1 Starting mscapelocal_edge_1 Running post start tasks. kalabox_proxy_1 is up-to-date kalabox_dns_1 is up-to-date Waiting for site to be ready. Unhandled rejection VError: Failed after 50 retries. {"max":50,"backoff":500}: getaddrinfo ENOTFOUND at C:\Program Files\Kalabox\bin\lib\promise.js.jx:70:17 at process._tickCallback (node.js:917:13) ``` username_1: @username_0 you can always try running the command as `kbox restart -- -d` to get more output that might help you debug. Can you confirm that you got your sites code pulled down from pantheon? can you confirm the kalabox proxy server is up and running (visit `something.kbox` in your non-edge browser, should get a 400/502) username_0: @username_1 already do kbox restart -- -d showing ``` Starting mscapelocal_data_1 Starting mscapelocal_redis_1 Starting mscapelocal_db_1 Starting mscapelocal_web_1 Starting mscapelocal_unison_1 Starting mscapelocal_solr_1 Starting mscapelocal_edge_1 Starting mscapelocal_cli_1 Starting mscapelocal_appserver_1 Starting mscapelocal_terminus_1 debug: SPAWN DOCKER-COMPOSE.EXE ==> Spawn exited with code: 0 debug: EVENTS ==> Emitting event [post-engine-start]. unknown debug: EVENTS ==> No listeners [post-engine-start]. debug: EVENTS ==> Finished dispatching event listeners [post-engine-start]. info: Running post start tasks. Running post start tasks. debug: EVENTS ==> Emitting event [pre-engine-start]. unknown debug: EVENTS ==> No listeners [pre-engine-start]. debug: EVENTS ==> Finished dispatching event listeners [pre-engine-start]. 
verbose: DOCKER COMPOSE ==> Running: ["--project-name","kalabox","--file","C:\\Users\\username_0\\.kalabox\\downloads\\kalabox\\kalabox-3.yml","up","-d","dns","proxy"] debug: SPAWN DOCKER-COMPOSE.EXE ==> Using C:\Program Files\Kalabox\bin\docker-compose.exe to run --project-name,kalabox,--file,C:\Users\username_0\.kalabox\downloads\kalabox\kalabox-3.yml,up,-d,dns,proxy in collect mode with environment {"ALLUSERSPROFILE":"C:\\ProgramData","APPDATA":"C:\\Users\\username_0\\AppData\\Roaming","CommonProgramFiles":"C:\\Program Files\\Common Files","CommonProgramFiles(x86)":"C:\\Program Files (x86)\\Common Files","CommonProgramW6432":"C:\\Program Files\\Common Files","COMPUTERNAME":"WELYANTO","ComSpec":"C:\\WINDOWS\\system32\\cmd.exe","DOCKER_CERT_PATH":"C:\\Users\\username_0\\.docker\\machine\\certs","DOCKER_HOST":"tcp://10.13.37.100:2376","DOCKER_MACHINE_NAME":"Kalabox2","DOCKER_TLS_VERIFY":"1","FPS_BROWSER_APP_PROFILE_STRING":"Internet Explorer","FPS_BROWSER_USER_PROFILE_STRING":"Default","FP_NO_HOST_CHECK":"NO","HOMEDRIVE":"C:","HOMEPATH":"\\Users\\username_0","INIT_CWD":"C:\\Users\\username_0\\mscapelocal","KALABOX_APPS_ROOT":"C:\\Users\\username_0\\.kalabox/apps","KALABOX_APP_REGISTRY":"C:\\Users\\username_0\\.kalabox/appRegistry.json","KALABOX_CONFIG_SOURCES":"[\"C:\\\\Program Files\\\\Kalabox\\\\bin\\\\kalabox.yml\",\"C:\\\\Program Files\\\\Kalabox\\\\kalabox.yml\",\"DEFAULT_GLOBAL_CONFIG\",\"ENV_CONFIG\"]","KALABOX_DEV_MODE":"false","KALABOX_DOMAIN":"kbox","KALABOX_DOWNLOADS_ROOT":"C:\\Users\\username_0\\.kalabox/downloads","KALABOX_ENGINE":"kalabox-engine-docker","KALABOX_ENGINE_GID":"50","KALABOX_ENGINE_HOME":"/c/Users/username_0","KALABOX_ENGINE_ID":"1000","KALABOX_ENGINE_IP":"10.13.37.100","KALABOX_ENGINE_REMOTE_IP":"10.13.37.1","KALABOX_ENGINE_REPO":"kalabox","KALABOX_GLOBAL_PLUGINS":"[\"kalabox-core\",\"kalabox-cmd\",\"kalabox-services-kalabox\",\"kalabox-sharing\",\"kalabox-ui\",\"kalabox-app-pantheon\",\"kalabox-app-php\"]","KALABOX_HOME":"C:\\Users\\username_0","KALABOX_IMG_VERSION":"latest","KALABOX_INSTALL_PATH":"C:\\Program Files\\Kalabox","KALABOX_IS_BINARY":"true","KALABOX_IS_NW":"false","KALABOX_LOG_LEVEL":"debug","KALABOX_LOG_LEVEL_CONSOLE":"none","KALABOX_LOG_ROOT":"C:\\Users\\username_0\\.kalabox/logs","KALABOX_OS":"{\"type\":\"Windows_NT\",\"platform\":\"win32\",\"release\":\"6.3.9600\",\"arch\":\"x64\"}","KALABOX_SRC_ROOT":"C:\\Program Files\\Kalabox\\bin","KALABOX_STATS":"{\"report\":true,\"url\":\"http://stats-v2.kalabox.io\"}","KALABOX_SYS_CONF_ROOT":"C:\\Program Files\\Kalabox","KALABOX_SYS_PLUGIN_ROOT":"C:\\Program Files\\Kalabox","KALABOX_USER_CONF_ROOT":"C:\\Users\\username_0\\.kalabox","KALABOX_USER_PLUGIN_ROOT":"C:\\Users\\username_0\\.kalabox","KALABOX_VERSION":"0.13.0-rc.1","LOCALAPPDATA":"C:\\Users\\username_0\\AppData\\Local","LOGONSERVER":"\\\\WELYANTO","NUMBER_OF_PROCESSORS":"8","OS":"Windows_NT","Path":"C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Kalabox\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files (x86)\\Intel\\iCLS Client\\;C:\\Program Files\\Intel\\iCLS 
Client\\;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Skype\\Phone\\;C:\\Program Files (x86)\\GitExtensions\\;C:\\Program Files\\nodejs\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\cURL\\bin;C:\\Users\\username_0\\AppData\\Roaming\\npm;C:\\Program Files\\Kalabox\\bin;%USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps;","PATHEXT":".COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC","PROCESSOR_ARCHITECTURE":"AMD64","PROCESSOR_IDENTIFIER":"Intel64 Family 6 Model 60 Stepping 3, GenuineIntel","PROCESSOR_LEVEL":"6","PROCESSOR_REVISION":"3c03","ProgramData":"C:\\ProgramData","ProgramFiles":"C:\\Program Files","ProgramFiles(x86)":"C:\\Program Files (x86)","ProgramW6432":"C:\\Program Files","PROMPT":"$P$G","PSModulePath":"C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules\\","PUBLIC":"C:\\Users\\Public","SESSIONNAME":"Console","SystemDrive":"C:","SystemRoot":"C:\\WINDOWS","TEMP":"C:\\Users\\username_0\\AppData\\Local\\Temp","TMP":"C:\\Users\\username_0\\AppData\\Local\\Temp","USERDOMAIN":"Welyanto","USERDOMAIN_ROAMINGPROFILE":"Welyanto","USERNAME":"username_0","USERPROFILE":"C:\\Users\\username_0","VBOX_MSI_INSTALL_PATH":"C:\\Program Files\\Oracle\\VirtualBox\\","windir":"C:\\WINDOWS"} kalabox_proxy_1 is up-to-date kalabox_dns_1 is up-to-date debug: SPAWN DOCKER-COMPOSE.EXE ==> Spawn exited with code: 0 debug: EVENTS ==> Emitting event [post-engine-start]. unknown debug: EVENTS ==> No listeners [post-engine-start]. debug: EVENTS ==> Finished dispatching event listeners [post-engine-start]. debug: SERVICES PLUGIN ==> Configuring DNS for web services. verbose: DOCKER COMPOSE ==> Running: ["--project-name","mscapelocal","--file","C:\\Users\\username_0\\mscapelocal\\kalabox-compose.yml","--file","C:\\Users\\username_0\\mscapelocal\\kalabox-cli.yml","--file","C:\\Users\\username_0\\.kalabox\\downloads\\mscapelocal\\mscapelocal-2.yml","ps","-q","web"] debug: SHELL EXEC ==> Running command C:\Program Files\Kalabox\bin\docker-compose.exe,--project-name,mscapelocal,--file,C:\Users\username_0\mscapelocal\kalabox-compose.yml,--file,C:\Users\username_0\mscapelocal\kalabox-cli.yml,--file,C:\Users\username_0\.kalabox\downloads\mscapelocal\mscapelocal-2.yml,ps,-q,web debug: SHELL EXEC ==> Running command "2f93c5a06887f25ccfbfd4cad4024d0f8c86d2ce21a4297c9cf183c4abbfa61f\r\n" debug: SERVICES PLUGIN ==> Connecting to redis on 10.13.37.100:8160 debug: SERVICES PLUGIN ==> Connecting to redis on 10.13.37.100:8160 debug: SERVICES PLUGIN ==> Setting DNS. frontend:http://mscapelocal.kbox => http://10.13.37.100:32779 debug: SERVICES PLUGIN ==> Setting DNS. frontend:https://mscapelocal.kbox => https://10.13.37.100:32780 debug: SERVICES PLUGIN ==> Configuring DNS for edge services. 
verbose: DOCKER COMPOSE ==> Running: ["--project-name","mscapelocal","--file","C:\\Users\\username_0\\mscapelocal\\kalabox-compose.yml","--file","C:\\Users\\username_0\\mscapelocal\\kalabox-cli.yml","--file","C:\\Users\\username_0\\.kalabox\\downloads\\mscapelocal\\mscapelocal-2.yml","ps","-q","edge"] debug: SHELL EXEC ==> Running command C:\Program Files\Kalabox\bin\docker-compose.exe,--project-name,mscapelocal,--file,C:\Users\username_0\mscapelocal\kalabox-compose.yml,--file,C:\Users\username_0\mscapelocal\kalabox-cli.yml,--file,C:\Users\username_0\.kalabox\downloads\mscapelocal\mscapelocal-2.yml,ps,-q,edge debug: SHELL EXEC ==> Running command "d50c111de6546b422fe04c4599868b29f16e885f88c65b585505ac2d831bd14b\r\n" debug: SERVICES PLUGIN ==> Connecting to redis on 10.13.37.100:8160 debug: SERVICES PLUGIN ==> Connecting to redis on 10.13.37.100:8160 debug: SERVICES PLUGIN ==> Setting DNS. frontend:http://edge.mscapelocal.kbox => http://10.13.37.100:32782 debug: SERVICES PLUGIN ==> Setting DNS. frontend:https://edge.mscapelocal.kbox => https://10.13.37.100:32781 debug: CORE PLUGIN ==> Checking to see if http://mscapelocal.kbox is ready. info: Waiting for site to be ready. Waiting for site to be ready. debug: CORE PLUGIN ==> Checking to see if http://mscapelocal.kbox is ready. debug: CORE PLUGIN ==> Checking to see if http://mscapelocal.kbox is ready. debug: CORE PLUGIN ==> http://mscapelocal.kbox is now ready. debug: EVENTS ==> Emitting event [app-started]. unknown debug: EVENTS ==> No listeners [app-started]. debug: EVENTS ==> Finished dispatching event listeners [app-started]. ``` - trying something.kbox it showing "Server Not Found - 404" - Also yes i'm already trying over & over to repull the files & database from server, it successed but still can't fixed the "stucked waiting site to be ready" (http://prntscr.com/cjxmfi , http://prntscr.com/cjxmlr) username_2: I've seen the same issue with our Pantheon site. In our case I think it's because we're using the new nested docroot feature. Perhaps support for that should be a separate issue, but I figured I'd at least mention it here in case it's a common denominator in your case. username_1: yeah id def open up another ticket for that. would be good to ping some pantheon people to get their eyes on this as well. That said, we do actually support nested docroot but you need to change some things around in the config http://docs.kalabox.io/en/stable/users/config/#sharing Status: Issue closed username_1: @username_2 opened up a ticket #1622 that should cover the nested docroot func which i think is the main thing causing 404's. Going to close this. username_0: Hi @username_1 i do download the latest version of kalabox after it released. uninstall the old ones and install the new ones.. delete old apps, create new apps, pull database & files new apps, kbox restart, kbox down, kbox up, kbox rebuild, kbox start, kbox pull again, restart again, and i'm still stucked at Waiting for site to be ready. is there any related internet connection, or something? the error i got still same ![0000](https://cloud.githubusercontent.com/assets/4454872/19396707/ef9de2d0-9276-11e6-814f-2089b0f35d81.jpg) i do check all files are there, i believe current site isn't on nested it's should on root. username_1: @username_0 what happens when you try to visit your site in the browser? Are you experiencing #1657 ? 
username_0: showing server not found: ![server-not-found](https://cloud.githubusercontent.com/assets/4454872/19397660/545b1c8e-927b-11e6-82a5-8c750e5dd61c.jpg) I checked testing.kbox; also, I don't see any moving clouds.
username_3: I was getting the same as username_0 today. It's strange because it was working fine for days and then today it wouldn't connect anymore, with the same ENOTFOUND. I had to uninstall/reinstall kalabox and virtualbox to get the networking functioning again. I'll try and get better notes if it happens again; from what I can tell it was the same as Welyanto said happened to him. I'm on Windows 10 Pro
username_3: Ok, it happened again in the app, and separately when doing **kbox start**. What fixed it was going to the CLI and running **kbox restart -- -d**. I waited a while and it eventually passed after 24 checks of **debug: CORE PLUGIN ==> Checking to see if http://thesite.kbox is ready.** I was able to access the site from the given edge IP on IP:32777 before the checks cleared; it seems the "Setting DNS. frontend" step was not refreshing at that point. In the app it never cleared, eventually giving the ENOTFOUND error.
username_1: @username_3 are you getting this just for a particular site or on every site?
username_3: It only seemed to happen on one d8 pantheon site. It was still barebones plus commerce mods, no additions to the settings.php. I see you've recently added a new release though, so I'm going to dump this build and install kalabox-v2.1.0-rc.2 :)
username_1: Must be site specific then. Are you enforcing HTTPS?
username_3: No, not on that site, though I did have that issue with another site that was https. My error, the same as the OP's, perhaps resulted from errors increasing the load time. Maybe the OP has a similar issue? Mine was errors with composer dependencies; it borked on Pantheon's end too and I had to restore from a backup lol :D oops
username_4: Similar issues: #1487 #1327 #651 #111 DNS/resolver issues repeat randomly. I guess this requires a more robust solution, a good check that warns users when the resolver is not working, and good troubleshooting docs. Just my 2c.
username_1: @username_4 if you've got a better suggestion on how to reliably do this on Windows I am all ears!!! That said, we added some DNS changes in the 2.1 version that might help this.
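For anyone else debugging this, a short checklist based on the commands and addresses that appear earlier in this thread (the IP and port are examples taken from the debug output above and will differ per machine):

```bash
# Re-run the start with debug output to see exactly where it stalls
kbox restart -- -d

# Any *.kbox hostname should at least reach the proxy (a 404/502 page), not ENOTFOUND
curl -I http://something.kbox

# Bypass the .kbox DNS entirely and hit the container port listed in the debug output
curl -I http://10.13.37.100:32779
```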
mobxjs/mobx-state-tree
768920974
Title: Union dispatcher snapshot undefined Question: username_0: **_Question_**
* [ ] I've checked documentation and searched for existing issues
* [ ] I tried the [spectrum channel](https://spectrum.chat/?t=dad48299-3dfc-4e10-b6da-9af1e39498a3)

In our MST/TypeScript project we like to use `Model.is(...)` quite a bit; this sometimes results in errors like these popping up:

```
Cannot read property 'exampleProperty' of undefined
```

```js
1 | dispatcher: (snapshot) => snapshot.exampleProperty || types.null,
  |                           ^
```

This is _usually_ caused by not enforcing that the passed model is non-nullable via an extra check like:

```js
exampleObject && Model.is(exampleObject)
```

It's easily fixed, but not always caught, because of the rather "mysterious" error thrown and the absence of a typing error. Is there a way to make `Model.is` report a type error for undefined values? Would this be something that MST would have to change in its typings, or is the inference already there and I am just not using it correctly? (I'd be happy to create a PR.)
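Until the typings handle this, a workaround sketch in TypeScript — the model names and properties below are made up purely for illustration, and the options object follows `types.union`'s documented dispatcher signature:

```ts
import { types, Instance } from "mobx-state-tree"

// Hypothetical models standing in for whatever the union dispatches over
const ModelA = types.model("ModelA", { exampleProperty: types.string })
const ModelB = types.model("ModelB", { other: types.number })

const Union = types.union(
  {
    // Guard against null/undefined snapshots before touching their properties
    dispatcher: (snapshot: any) =>
      snapshot && "exampleProperty" in snapshot ? ModelA : ModelB,
  },
  ModelA,
  ModelB
)

// A small wrapper that narrows out null/undefined before calling .is()
const isModelA = (value: unknown): value is Instance<typeof ModelA> =>
  value != null && ModelA.is(value)
```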