danielkrupinski/Osiris
830908381
Title: How can I hook my function? Question: username_0:
```cpp
void Misc::jumpbugindndicators() noexcept
{
    if (!config->misc.jumpbugdetect.enabled || !localPlayer || !localPlayer->isAlive())
        return;
    if (!config->misc.jumpbug || !config->misc.jumpbugkey.isDown())
        return;
    {
        constexpr unsigned font{ 0x0c };
        interfaces->surface->setTextFont(font);
        if (config->misc.keyStorkes.rainbow)
            interfaces->surface->setTextColor(rainbowColor(config->misc.keyStorkes.rainbowSpeed));
        else
            interfaces->surface->setTextColor(config->misc.keyStorkes.color);
        const auto [jwidth, jheight] = interfaces->surface->getScreenSize();
        config->misc.jumpbugdetectResX = jwidth;
        config->misc.jumpbugdetectResY = jheight;
        if (config->misc.jumpbugdetectM)
            interfaces->surface->setTextPosition(config->misc.jumpbugdetectPosX, config->misc.jumpbugdetectPosY);
        else
            interfaces->surface->setTextPosition(jwidth / 2 - 6, jheight - 200);
        interfaces->surface->printText(L"JB");
    }
}
```
My function doesn't work. Where do I need to hook it?<issue_closed> Status: Issue closed
coderaiser/cloudcmd
246066971
Title: Config flag to disable contact button/feature Question: username_0:
* **Version** (`cloudcmd -v`): username_1/cloudcmd:7.1.0-alpine
* **Node Version** (`node -v`): n/a
* **OS** (`uname -a` on Linux): n/a
* **Browser name/version**: Firefox 54
* **Used Command Line Parameters**: `--no-console --no-terminal --no-config-dialog`
* **Changed Config**: n/a

There are a few menu bar items I'd like to remove for our deployment. Setting `--no-console --no-terminal --no-config-dialog` gets me 75% of the way, but I couldn't find a toggle like that for the contact button. Would it be possible to get that added? If yes, is there a way for me to help get that implemented?
Answers: username_1: It is a good idea for a pull request :). You can add this option in the same way [--no-console](https://github.com/username_1/cloudcmd/commit/c3c008ff720f90281aea91bbd1539403bbca4fa1) was added.
username_0: Looks like there have been a lot of structural changes since that commit. I'll try to find the new places.
username_1: Not as much as it looks like: the directories 'lib' and 'lib/server' were renamed to 'server', and there is no need to change 'client'.
username_1: Landed in [v7.2.0](https://github.com/username_1/cloudcmd/releases/tag/v7.2.0).
Status: Issue closed
cblomart/vsphere-graphite
349819289
Title: Custom fields support? Question: username_0: Hello, very nice project :-) Is it possible to add custom fields, please? For example: vsphere.virtualmachine.custom_fields.Owner (the owner of the VM). Thanks for your help.
Answers: username_1: I don't quite see how that fits into vsphere-graphite. Maybe you could describe a full "story" of what you want? (i.e. in your example, where do you define the owner?) Would vSphere tags be an answer to the requirement? See issue #29.
username_0: Sorry, I'm not the virtual environment admin :-) Metricbeat (beta, and very limited) does that by default: https://www.elastic.co/guide/en/beats/metricbeat/current/exported-fields-vsphere.html (vsphere.virtualmachine.custom_fields, type: object, "Custom fields"). See also https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-73606C4C-763C-4E27-A1DA-032E4C46219D.html Maybe tags can help. Thanks for your help (Merci beaucoup :-) )
username_1: I propose we follow up on this in issue #29.
Status: Issue closed
gost-engine/engine
735255472
Title: centos 8 / openssl_1_1_1 issue Question: username_0: Good afternoon. Following the instructions, I did the following:
`dnf install openssl-devel cmake make
git clone --branch openssl_1_1_1 https://github.com/gost-engine/engine.git
cd engine && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
cmake --build . --target install --config Release`
For the config I took example.conf and placed it at /etc/pki/tls/openssl.cnf in place of the original one. I can see that openssl picked up gost, but I don't see any GOST cipher suites.
`$ openssl engine
(rdrand) Intel RDRAND engine
(dynamic) Dynamic engine loading support
(gost) Reference implementation of GOST engine
$ openssl ciphers -v | grep GOST
$ `
It looks like I missed a step, but I can't figure out which one, since the instructions end here. Please advise.
===
I was doing this at the request of the developers; they say their script, which needed GOST, now works. So I'm somewhat puzzled.
Answers: username_1: How are you checking for the presence of GOST crypto algorithms? What does ```openssl dgst -md_gost12_256 somefile``` output?
username_0: I was checking with this command: `$ openssl ciphers -v`. Here is the result of the one you suggested:
```
$ openssl dgst -md_gost12_256 task
md_gost12_256(task)= 3f539a213e97c802cc229d474c6aa32a825a360b2a933a949fd925208d9ce1bb
```
username_1: That means the GOST algorithms are present. If you are on a RedHat-based distribution, there are additional measures there that restrict the available cipher suites: https://access.redhat.com/articles/3666211
username_0: Yes, I'm on CentOS 8. OK, thanks for the explanation.
Status: Issue closed
airdcpp-web/airdcpp-webclient
421180091
Title: Error when killing process using ctrl+c Question: username_0: **Application version:** AirDC++w 2.6.0b-50-gb248 x86_64 **Web UI version:** 2.6.0-beta.9 **Web UI build date:** March 10, 2019 5:40 PM **OS:** Fedora 29 I had not opened the UI at all while the client was loading, and it should have been downloading bundles. I have no further clues; it may already be fixed in a newer build.
```
$ ../Compil/airdcpp-webclient/airdcppd
Starting.
.Loading Hash database
Loading Download queue
Loading Shared files
Loading Country information
Starting web server.
AirDC++w 2.6.0b-50-gb248 running, press ctrl-c to exit...
HTTP port: 8080, HTTPS port: 0
^CShutdown requested...
[info] Error getting remote endpoint: system:9 (Bad file descriptor)
[fail] WebSocket Connection Unknown - "" - 0 websocketpp:26 Operation canceled
[info] asio async_shutdown error: system:9 (Bad file descriptor)
[info] handle_accept error: Operation canceled
[info] Stopping acceptance of new connections because the underlying transport is no longer listening.
Saving hash data
Saving the share cache
Closing connections
Saving settings
Shutting down.
```
Answers: username_1: Did you notice any other issues besides that error message? I don't think this is anything to be worried about.
username_0: Nothing. I was reporting it mostly just to be sure.
Status: Issue closed
sakshamj74/IPL-Analysis
760896849
Title: Strike rate of <NAME> in 16-20 overs Question: username_0: Might include other top players with the highest strike rates. Plot each player's name against their strike rate. Answers: username_1: @username_0 Also, do an analysis of the top 5 strikers: what percentage of their runs they score in 4's and 6's, their averages, etc. username_0: Ohh yeah, sure. But then wouldn't it become an Intermediate Problem? username_0: Regarding the 4's and 6's, I would have to make another file. username_0: Thanks for updating my score. Status: Issue closed
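A minimal pandas sketch of the requested metric, assuming ball-by-ball data with the usual IPL dataset columns (`batsman`, `batsman_runs`, `over`); strike rate here is runs scored per 100 balls faced:
```python
import pandas as pd

# Hypothetical ball-by-ball sample; real IPL deliveries data uses similar columns.
df = pd.DataFrame({
    "batsman": ["A", "A", "B", "B", "B"],
    "batsman_runs": [4, 6, 1, 0, 6],
    "over": [16, 18, 17, 19, 20],
})

# Restrict to the death overs (16-20) and compute strike rate per batsman.
death = df[df["over"].between(16, 20)]
stats = death.groupby("batsman")["batsman_runs"].agg(runs="sum", balls="count")
stats["strike_rate"] = stats["runs"] / stats["balls"] * 100
print(stats.sort_values("strike_rate", ascending=False))
```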
GoogleContainerTools/kaniko
658770277
Title: COPY not creating relative workdir directories Question: username_0: **Actual behavior** Somewhere between 0.14.0 and latest, behaviour has changed when copying files into relative directories which need to be created within the WORKDIR. For example, with the following Dockerfile, I expect that `bin/` would be created within the WORKDIR and the file placed into this path (other tools successfully create the relative directory and place the file inside, and did so before 0.14 of Kaniko):
```
FROM ubuntu:latest
WORKDIR /build
COPY file.yaml bin/
RUN ls -la /build/bin
```
What happens now is that file.yaml gets created in /build as a file named `bin`. Output from Kaniko on the above Dockerfile:
```
INFO[0000] Retrieving image manifest ubuntu:latest
INFO[0003] Retrieving image manifest ubuntu:latest
INFO[0007] Built cross stage deps: map[]
INFO[0007] Retrieving image manifest ubuntu:latest
INFO[0011] Retrieving image manifest ubuntu:latest
INFO[0015] Executing 0 build triggers
INFO[0015] Unpacking rootfs as cmd COPY podinfo.yaml bin/ requires it.
INFO[0023] WORKDIR /build
INFO[0023] cmd: workdir
INFO[0023] Changed working directory to /build
INFO[0023] Creating directory /build
INFO[0023] Taking snapshot of files...
INFO[0023] COPY podinfo.yaml bin/
INFO[0023] Taking snapshot of files...
INFO[0023] RUN ls -la /build/bin
INFO[0023] Taking snapshot of full filesystem...
INFO[0023] cmd: /bin/sh
INFO[0023] args: [-c ls -la /build/bin]
INFO[0023] Running: [/bin/sh -c ls -la /build/bin]
-rw-r--r-- 1 root root 986 Jul 17 02:21 /build/bin
INFO[0023] Taking snapshot of full filesystem...
INFO[0023] No files were changed, appending empty layer to config. No layer added to image.
INFO[0023] Skipping push to container registry due to --no-push flag
```
**Expected behavior** Expect that /build/bin/podinfo.yaml gets created, not /build/bin <-- as a file. Output from Docker:
```
docker build -f Dockerfile /tmp
Sending build context to Docker daemon 3.119kB
Step 1/4 : FROM ubuntu:latest
 ---> adafef2e596e
Step 2/4 : WORKDIR /build
 ---> Using cache
 ---> 4d06377069b1
Step 3/4 : COPY podinfo.yaml bin/
 ---> e81b9ce0f562
Step 4/4 : RUN ls -l /build/bin
 ---> Running in 36534a5d62c9
total 4
-rw-r--r-- 1 root root 986 Jul 15 04:47 podinfo.yaml
Removing intermediate container 36534a5d62c9
 ---> 808ca51a1ba4
Successfully built 808ca51a1ba4
Successfully tagged test:test
```
Can someone please let me know if this is the expected behaviour these days, and whether a full path is now required as the COPY dst path? Answers: username_1: We see something similar. Our dockerfile has something like
```
WORKDIR somewhere
COPY --from=BUILD_IMAGE foo/shell-main/target/lib ./lib
```
Attempting to pull the resulting image results in
```
failed to register layer: Error processing tar file(exit status 1): link /somewhere/lib /somewhere/lib: no such file or directory
```
Doing something like
```
COPY --from=BUILD_IMAGE foo/shell-main/target/lib /somewhere/lib
```
creates a working image. username_2: Looks like a problem in WORKDIR handling; I have a similar problem, but with RUN after WORKDIR. username_3: Looks like the bug is here https://github.com/GoogleContainerTools/kaniko/blob/c480a063475e2b0af66214af4c065fc6000f36a3/pkg/util/command_util.go#L185 Instead of using `filepath.Join` we can replace this with
```
strings.Join([]string{cwd, newDest}, pathSeparator)
```
username_3: Would you be up for submitting a PR?
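For illustration, a minimal standalone sketch of why that swap matters (not kaniko's actual code; the values mirror the report above): Go's `filepath.Join` cleans its result, silently dropping the trailing `/` that marks `bin/` as a directory destination, while joining with the separator directly preserves it.
```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	cwd := "/build"
	dest := "bin/" // trailing slash means "treat as a directory"

	// filepath.Join cleans the path, dropping the trailing slash,
	// so later copy logic can no longer tell "bin/" was a directory.
	fmt.Println(filepath.Join(cwd, dest)) // /build/bin

	// Joining on the separator keeps the trailing slash intact.
	fmt.Println(strings.Join([]string{cwd, dest}, string(filepath.Separator))) // /build/bin/
}
```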
threefoldfoundation/www_threefold_farming
786072812
Title: Add Opt-out dialog for Analytics Tracking Question: username_0:
```
```
Answers: username_0: We need to add the code above for people to opt out of us tracking them.
username_0: so the iframe does not disappear, whatever you do?
username_1: yes
username_0: @username_1 and @samaradel could you look into this: https://developer.matomo.org/guides/tracking-javascript-guide#optional-creating-a-custom-opt-out-form
username_0: Any update here @username_1 / @samaradel?
username_1: Nothing yet. This link https://developer.matomo.org/guides/tracking-javascript-guide#optional-creating-a-custom-opt-out-form shows how to use the opt-out script the same as the script in the body; there is no button to hide the message box.
username_0: Can't we create a box to hide it ourselves?
username_2: Added this to a new issue for adding analytics tracking for all websites: https://github.com/threefoldfoundation/home/issues/111
Status: Issue closed
magefree/mage
464933847
Title: Hapatra / Nest of Scarabs + Tokens with Wither / Infect Question: username_0: Hapatra and Nest of Scarabs do not trigger when a token with wither or infect deals combat damage to a creature and dies. I have this output from when I cast Triumph of the Hordes and attacked once with a token creature, and once with a nontoken creature. The attackers are blocked and die. Hapatra triggers for the nontoken creature, but not for the token creature. (Flourishing Defenses triggers for both.)
```
11:20: Turn 16 wurst (34 - 35)
11:20: wurst draws a card
11:20: wurst puts Llanowar Wastes [23c] from hand onto the Battlefield
11:20: wurst plays Llanowar Wastes [23c]
11:20: Ability triggers: Evolution Sage [aff] - Whenever a land enters the battlefield under your control, proliferate. (You choose any number of permanents and/or players with counters on them, then give each another counter of each kind already there.)
11:21: wurst loses 1 life
11:21: wurst casts Triumph of the Hordes [121]
11:21: wurst puts Triumph of the Hordes [121] from stack into their graveyard
11:21: wurst attacks with 1 creature
11:21: Attacker: Elf Warrior [a3d] (2/2) blocked by Terastodon [f4e] (9/9)
11:21: Elf Warrior [a3d] deals 2 damage to Terastodon [f4e]
11:21: Terastodon [f4e] deals 9 damage to Elf Warrior [a3d]
11:21: Elf Warrior [a3d] died
11:21: Ability triggers: Flourishing Defenses [1a6] - Whenever a -1/-1 counter is put on a creature, you may create a 1/1 green Elf Warrior creature token.
11:21: Ability triggers: Flourishing Defenses [1a6] - Whenever a -1/-1 counter is put on a creature, you may create a 1/1 green Elf Warrior creature token.
11:22: wurst creates a Elf Warrior [cff] token
11:22: wurst creates a Elf Warrior [95a] token
11:23: Player request: Rolling back to start of turn 16
11:23: Turn 16 wurst (34 - 35)
11:23: wurst draws a card
11:23: wurst puts Llanowar Wastes [23c] from hand onto the Battlefield
11:23: wurst plays Llanowar Wastes [23c]
11:23: Ability triggers: Evolution Sage [aff] - Whenever a land enters the battlefield under your control, proliferate. (You choose any number of permanents and/or players with counters on them, then give each another counter of each kind already there.)
11:23: wurst loses 1 life
11:23: wurst casts Triumph of the Hordes [121]
11:23: wurst puts Triumph of the Hordes [121] from stack into their graveyard
11:23: wurst attacks with 1 creature
11:23: Attacker: Evolution Sage [aff] (3/2) blocked by Terastodon [f4e] (9/9)
11:23: Evolution Sage [aff] deals 3 damage to Terastodon [f4e]
11:23: Terastodon [f4e] deals 9 damage to Evolution Sage [aff]
11:23: Evolution Sage [aff] died
11:23: Ability triggers: Hapatra, Vizier of Poisons [2a6] - Whenever you put one or more -1/-1 counters on a creature, create a 1/1 green Snake creature token with deathtouch.
11:23: Ability triggers: Flourishing Defenses [1a6] - Whenever a -1/-1 counter is put on a creature, you may create a 1/1 green Elf Warrior creature token.
11:23: Ability triggers: Flourishing Defenses [1a6] - Whenever a -1/-1 counter is put on a creature, you may create a 1/1 green Elf Warrior creature token.
11:23: Ability triggers: Flourishing Defenses [1a6] - Whenever a -1/-1 counter is put on a creature, you may create a 1/1 green Elf Warrior creature token.
11:23: wurst creates a Elf Warrior [a48] token
11:23: wurst creates a Elf Warrior [e0d] token
11:23: wurst creates a Elf Warrior [fc8] token
11:23: wurst creates a Snake [50b] token
```
The same happens with Nest of Scarabs and wither.
Hapatra, Nest of Scarabs, Corrosive Mentor (gives all your black creatures wither) and a black Insect token are on the battlefield. When Corrosive Mentor attacks, is blocked, and dies, Hapatra and Nest of Scarabs trigger. They don't trigger for the black Insect token. ![Wither - Hapatra + Nest of Scarabs](https://user-images.githubusercontent.com/38747561/60766530-70925200-a0ab-11e9-82bd-d1cab449a8cc.png) The -1/-1 counters are placed each time, but Hapatra and Nest of Scarabs do not trigger if the damage is dealt by a token. Answers: username_1: From the linked duplicate, verified this is still bugged.
Tyriar/vscode-shell-launcher
258204505
Title: Scrollbar of terminal disappears Question: username_0: - VSCode Version: Code 1.16.1 (27492b6bf3acb0775d82d2f87b25a93490673c6d, 2017-09-14T16:38:23.027Z) - OS Version: Windows_NT x64 10.0.15063 - Extensions: the list length exceeds browsers' URL character limit --- When I create a terminal using this extension, the scroll bar and scrolling ability disappear on the second and subsequent terminals. ![gif](https://user-images.githubusercontent.com/8474118/30508754-3ca24f24-9ad9-11e7-9b43-9a9e27b7ccb3.gif) Config
```
"shellLauncher.shells.windows": [{
    "shell": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "label": "PowerShell"
}
...
]
```
Answers: username_1: Duplicate https://github.com/Microsoft/vscode/issues/34483 Status: Issue closed username_0: @username_1 This issue is not a duplicate. When creating a terminal using this extension in v1.16, the size of the `xterm-scroll-area` class (html) is not computed correctly and the scrolling ability seems to be lost. I thought it was an issue with the extension, but it seems not to be. In v1.17 (4571d387c9fe2d19e833ff96aa99aac6f73c88d4), the terminal is not created from the extension at all. Maybe the vscode api is silently swallowing some kind of error. ![gif](https://user-images.githubusercontent.com/8474118/30572652-ffa8b9e8-9d29-11e7-8afe-5a8b0462a84c.gif) I had to create this issue on the vscode repo. username_1: @username_0 are you pointing to sysnative instead of system32? username_0: @username_1 no, I'm using the following shell config, which was copied from the default `terminal.integrated.shell.windows`
```
"shellLauncher.shells.windows": [{
    "shell": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "label": "PowerShell"
}
...
]
```
`"terminal.integrated.shell.windows"` has not changed in settings.json. username_0: resolved by https://github.com/Microsoft/vscode/issues/34554
UVicFH/PowerDistElectrical
268547025
Title: Rear Power Distribution Question: username_0: Need to learn about the Arduino/CANduino interface and ensure that the rear power distribution PCB is correct. Need to add more debugging LEDs, or just a better layout with labels on the PCB (i.e. engine fan, wheel sensors, etc.). Need to correct the PCB footprints for a few components. Answers: username_0: Need to add more debugging LEDs, or just a better layout with labels on the PCB (i.e. engine fan, wheel sensors, etc.). username_0: Need to correct the PCB footprints for a few components. username_1: (1) Correct the footprint on the transistors q-16 and q-29. (2) Look up the d2 diode to find an appropriate replacement that has a larger footprint. (3) Verify that all outputs are giving the correct voltage to components and that fuses are correctly rated. (Same task as front power distribution.) username_0: For the 2018/2019 board: - Replace LEDs with a more robust option; also maybe look into how to ensure that if they fail, they fail the way we want them to (whether we want them to fail short or open probably depends on the circuit) - Fix the fuel pump flyback diode to a larger diode (1206 packaging) - Directional capacitors need directional footprints (on the 12-5 DC-to-DC circuit) - Check the packaging for the ceramic resonators; it wasn't quite the right size - Maintain a similar layout with the circuits divided into labelled boxes - The spare circuit needs its own box - Capacitors need to be closer to their respective parts; some of them were too far away (i.e. from the 5V linear regulator on the controller) - One of the fuses wasn't connected to an output pin - The AMS status and IMD status LEDs didn't have output pins - Increase all power trace widths - Move signal traces further away from all power traces - Add a header for Rx/Tx - Remove all outputs from Rx/Tx and move them to different IO pins username_0: Let's make it one board, because I think cost is less important than reliability and easy test accessibility. Unless you can think of a better way of running down to the bottom board that can handle more current than the headers we have (maybe a larger pitch?), but that still doesn't really solve the issue of possible board fracture if we are pulling them apart frequently. username_0: Oh! Also, the ground plane needs to connect in more spots so that we don't have any possible islands or ground loops. username_0: Make sure the switch in the reset circuit is connected and that the series resistances were removed; not sure how up to date the files here are. username_2: high noise input output protection
argoproj/argo-cd
889230298
Title: Support for `git describe` strings as git targetRevision values. Question: username_0: # Summary ArgoCD currently supports git commit hashes and tags for specifying application revisions. Of those, only the commit hash is immutable. However, the output of `git describe` also works as an argument to `git checkout` and has the benefit of being both human-meaningful (which version is before/after which others and by how much they are separated) and effectively immutable (due to the commit fingerprint in the string -- the commit count, of course, is not as reliable). E.g., the form I use is: git describe --long --tags --match '[0-9]*.[0-9]*' --dirty Which produces strings like: 0.1-94-g5293aef Where "0.1" is the most recent tag matching the regex, "94" is the number of commits since that tag, and "g5293aef" is the fingerprint of the indicated commit (and is what is actually used by `git checkout`). # Motivation Support more meaningful and still authoritative git targetRevision values. # Proposal Allow the `git describe` outputs with commit fingerprints to be used as targetRevisions.
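For illustration, the round trip the proposal relies on, using the values from the example above; git accepts describe output directly because of the embedded g-prefixed fingerprint:
```sh
# Produce a human-meaningful, effectively immutable revision string:
git describe --long --tags --match '[0-9]*.[0-9]*' --dirty
# -> 0.1-94-g5293aef

# The same string is accepted by checkout, which resolves it via the
# embedded commit fingerprint (g5293aef -> commit 5293aef):
git checkout 0.1-94-g5293aef
```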
tableau/query-graphs
284011939
Title: edge labels misplaced for all orientations except top-to-bottom Question: username_0: See pull request #21 for details. Three examples of misplaced edge labels below. The left-to-right orientation places edge labels as if the layout were top-to-bottom, while in bottom-to-top and right-to-left the labels end up somewhere not visible on the display. http://localhost:3000/d3/query-graphs.html?file=hyper-query1.json&orientation=left-to-right http://localhost:3000/d3/query-graphs.html?file=hyper-query1.json&orientation=right-to-left http://localhost:3000/d3/query-graphs.html?file=hyper-query1.json&orientation=bottom-to-top Status: Issue closed Answers: username_0: Fixed along with crosslinks support.
friajs/events
446568622
Title: Implementing .NET Collection interfaces Question: username_0: Description and usage of some of the .NET interfaces that are useful for working with collections. Among these interfaces we have:
- ICollection<T> vs IList<T>
- IEnumerable<T>
- IComparable<T>
- IComparer<T>
- IEquatable<T>
- IEqualityComparer<T>
matrix-org/synapse
925196765
Title: MSC1711 Upgrade FAQ should be removed from the docs sidebar Question: username_0: This document isn't very relevant going forward. We probably don't need to keep it top-level. The 1.0 UPGRADE notes already link to it. We should change that link to a github permalink and just remove it from the repo. Answers: username_1: honestly most of that doc can probably go away altogether.
BrynMattPHP/starter-todo4
275123999
Title: Create XML model with load() Question: username_0: Create application/core/XML_Model.php, based on CSV_Model.php from the previous labs. It will need to populate its internal array of record objects from the appropriate XML document. See the PHP manual for examples, and ask us if you have questions. You might also find the SimpleXML example I have used previously to be helpful. Remember to cast any SimpleXMLElement objects to appropriate PHP data types (e.g. string) to build your entity objects. Test this by having your task collection class extend XML_Model instead of CSV_Model, and make sure that everything displays as expected. Did you remember to "include_once" your XML_Model at the end of application/core/MY_Model?<issue_closed> Status: Issue closed
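A minimal sketch of the SimpleXML loading pattern the lab asks for; the file name and element names (`<task>`, `<id>`, `<name>`) are placeholders, not the lab's actual schema:
```php
<?php
// Read tasks.xml into an array of plain record objects.
// Note the explicit casts: SimpleXML returns SimpleXMLElement
// objects, not native PHP scalars.
function loadTasks(string $file): array
{
    $records = [];
    $xml = simplexml_load_file($file);
    foreach ($xml->task as $node) {
        $record = new stdClass();
        $record->id   = (int) $node->id;      // cast to int
        $record->name = (string) $node->name; // cast to string
        $records[$record->id] = $record;
    }
    return $records;
}
```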
lucifiel0121/blog
411707739
Title: Functional Programming in Javascript Exercise #12 Question: username_0: The syntax is restricted here because these exercises are preparation for Rx.js later on, which shares the characteristics of asynchronous programming.

Let's try solving this problem first:
- Collect the {id, title, boxart} of every video in movieLists.
![](https://i.imgur.com/SjOvdgR.png)
We are still missing the boxart condition, and every level wraps another array, `[ [somethings], [somethings], [somethings] ]`, which needs concatAll() to flatten.
- Good, it's back to `[ {Objs}, {Objs}, {Objs} ]`
![](https://i.imgur.com/KxzSQSN.png)
- Handle the filter condition
![](https://i.imgur.com/vT08IvO.png)
Done!

Done... or is it? Notice a small bug: `boxarts: [ url ]` is yet another two-dimensional structure that needs flattening. This is when you really miss `var itemInArray = movieLists[0];`.... It would solve this easily; too bad it's off-limits.
![](https://i.imgur.com/nmFDw2r.png)

Recall that `var itemInArray = movieLists[0];` does variable binding: it takes the first object of the movieLists collection, `movieLists[0]`, and binds it to `itemInArray`.
- bind a variable to every item in the collection
That resembles a certain function: [1,2,3].map( x => x+1 )
- `x` is every item in that collection

Let's try implementing it:
[Truncated]
```
).concatAll()
```
So we filter the condition first:
![](https://i.imgur.com/jZq4gcQ.png)
Handle the pile of array nesting: three levels -> one level
![](https://i.imgur.com/rCtJIgk.png)
Put the JSON shape back (if it were added from the start, the contents wouldn't print, so I deliberately add it only now; this doesn't affect the explanation)
![](https://i.imgur.com/gzER67X.png)
Fill in id and title.
![](https://i.imgur.com/uCId3Ph.png)
Mind the scope; nested scope chains like this are easy to mix up:
- `video.id` and `video.title` are read from the `video` of the outer .map( `video` => somethings )
- `boxart.url` is read from the `boxart` of .filter.map( `boxart` => somethings )
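For reference, a small self-contained sketch of the pattern this walkthrough builds toward; the sample data and the width-based filter condition are stand-ins for the exercise's real dataset:
```javascript
// concatAll: flatten exactly one level of nesting, e.g. [[1],[2,3]] -> [1,2,3].
Array.prototype.concatAll = function () {
  const results = [];
  this.forEach(subArray => subArray.forEach(item => results.push(item)));
  return results;
};

const movieLists = [
  { name: "Instant Queue", videos: [
    { id: 70111470, title: "Die Hard", boxarts: [
      { width: 150, height: 200, url: "http://example.com/DieHard150.jpg" },
      { width: 200, height: 200, url: "http://example.com/DieHard200.jpg" }
    ]}
  ]}
];

// Collect {id, title, boxart} for every video, flattening each nested level.
const result = movieLists.map(movieList =>
  movieList.videos.map(video =>
    video.boxarts
      .filter(boxart => boxart.width === 150) // assumed filter condition
      .map(boxart => ({ id: video.id, title: video.title, boxart: boxart.url }))
  ).concatAll()
).concatAll();

console.log(result);
// [ { id: 70111470, title: 'Die Hard', boxart: 'http://example.com/DieHard150.jpg' } ]
```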
neos/flow-development-collection
331085477
Title: Wrong documentation of FlowQuery-slice-operation Question: username_0: ### Description According to the [documentation](https://github.com/neos/neos-development-collection/blob/4.0/Neos.Neos/Documentation/References/FlowQueryOperationReference.rst) the slice-operation expects an offset and a length, similar to _array_slice_ in PHP. In the [implementation](https://github.com/neos/flow-development-collection/blob/5.0/Neos.Eel/Classes/FlowQuery/Operations/SliceOperation.php#L46), though, the second argument is expected to be the end position within the array rather than the length of results. This is similar to the _Array.slice_ Eel helper. In order to keep compatibility I would suggest updating the documentation. ### Affected Versions I did not find the earliest version of the documentation, but the implementation has been like this since the very first release of Flow. Answers: username_1: Good catch, and I agree that we should fix the docs rather than the implementation for backwards compatibility reasons. Would you be willing to create a corresponding Pull Request? username_1: @username_0 _ping_ ;) username_0: @username_1 I'm so sorry, I'm currently in the final stage of a project and really short on time. But I will send a MR asap. I think next week at the latest, when I'm finally on vacation :) username_2: No worries <3 username_1: No worries at _all_ - just a friendly reminder. And let us know if we can help with the PR! username_0: So, it took me another week to update just one sentence. I'm really sorry for the delay, but I finally created a PR: https://github.com/neos/neos-development-collection/pull/2113 I wasn't sure about the target branch though. According to the roadmap it should get updated in 2.3, too, but the file structure is completely different there. How do you handle this? I'll update the PR of course if necessary. Thank you in advance for helping me out! :) username_3: Hey @username_0, thanks for the PR! Well, as you offer it, it would indeed be great if you fixed that in the 2.3 branch. The upmerge from 2.3 to 3.0 is indeed painful sometimes, but doable ;) username_0: Hey @username_3, all right. As changing the base branch really messes up everything, I created a new PR for Neos 2.3: https://github.com/neos/neos-development-collection/pull/2117 Can I somehow be of help with the upmerge? Status: Issue closed username_4: I guess this can be closed since the corresponding PR has been merged in the meantime. Thanks again! username_4: Well, somehow it looks like it fell prey to the EOL of the 2.3 branch and didn't get upmerged. Do an upmerge once, or manually port this change to 3.3?
appium/appium
46806842
Title: Cocos2d view not detected Question: username_0: I have an app. In the view controller's view I add the cocos2d view:
```objc
CCDirector *director = [CCDirector sharedDirector];
if ([director isViewLoaded] == NO) {
    CCGLView *glView = [CCGLView viewWithFrame:self.view.bounds
                                   pixelFormat:kEAGLColorFormatRGB565
                                   depthFormat:0
                            preserveBackbuffer:NO
                                    sharegroup:nil
                                 multiSampling:NO
                               numberOfSamples:0];
    glView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [director setView:glView];
}
director.delegate = (id)self;
[self addChildViewController:director];
[self.view addSubview:director.view];
[self.view sendSubviewToBack:director.view];
[director didMoveToParentViewController:self];
if ([director runningScene]) {
    IntroScene *scene = [IntroScene scene];
    scene.rootViewController = self;
    [director replaceScene:scene];
} else {
    IntroScene *scene = [IntroScene scene];
    scene.rootViewController = self;
    [[CCDirector sharedDirector] runWithScene:scene];
}
```
When I start Appium, all the elements in the view are detected except the cocos2d view. I am not able to automate testing because of this. Any workaround?
zulip/zulip
559939609
Title: "Full Name" doesn't change immediately after editing in settings Question: username_0: The "full name" of the user doesn't change immediately after editing the name, the changes are reflected only after refreshing the page. Screenshots of the issue: 1. Initially the name was "Name", I was changing it to "New Name" ![Screenshot 2020-02-05 at 1 09 15 AM](https://user-images.githubusercontent.com/39935516/73780604-c8fe0f00-47b4-11ea-8079-5109c3bf6266.png) 2. After clicking "Change" the "full name" doesn't change only a "saved" message comes up. The changes are reflected after refreshing the page. ![Screenshot 2020-02-05 at 1 09 30 AM](https://user-images.githubusercontent.com/39935516/73780573-b84d9900-47b4-11ea-8833-17baf96339fa.png) Answers: username_1: I haven't tried reproducing but this sounds like a potential regression in our real-time sync code for that page. @username_2 do you have time to investigate? username_2: @zulipbot claim Status: Issue closed
MicrosoftDocs/azure-docs
520234391
Title: Plans to support programmatic publisher configuration? Question: username_0: Hi folks, The page mentions that setting the publisher domain programmatically isn't supported. Our use-case is Virtual Machines shared across tenants (which require Shared Image Galleries and App Registrations). Each of our customers gets their own Shared Image Gallery (and hence their own App Registration). We can programmatically create the VMs, the SIGs, set up the App Registrations, and assign the Service Principals to enable cross-tenant access. The only sticking point is the publisher domain, which remains a manual step. There's already a "Custom domain names" setup in AAD, so it's not a question of verifying new domains. Ideally this would work: $ az ad app update --id $objectId --set publisherDomain=example.com However currently it returns: Property 'publisherDomain' is read-only and cannot be set. Are there plans to make this property accessible programmatically? Thanks --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 61edea45-b5d6-05b3-33c3-2265ef8cb309 * Version Independent ID: c7d5813c-5cf1-44c3-88ff-3c194d3e44ad * Content: [Configure an application's publisher domain - Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-configure-publisher-domain) * Content Source: [articles/active-directory/develop/howto-configure-publisher-domain.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/howto-configure-publisher-domain.md) * Service: **active-directory** * Sub-service: **develop** * GitHub Login: @username_3 * Microsoft Alias: **ryanwi** Answers: username_1: @username_0 Thanks for the comment. We are actively investigating and will get back to you soon. username_2: Sorry, there is no plan to allow setting the publisherDomain property programmatically. username_3: @username_0 Closing this issue; it sounds like there are no plans to make this programmatically accessible. #please-close Status: Issue closed
bootstrap-vue/bootstrap-vue
228538469
Title: <b-card> add class only for header or footer Question: username_0: I want to add the class `bg-info` only to the header or footer. I need to create this block:
```
<div class="card">
    <div class="card-header bg-info text-white">text</div>
    hello world
</div>
```
I tried
```
<b-card show-header>
    <div class="bg-info" slot="header" variant="info">test</div>
    hello world
</b-card>
```
But this code transforms into
```
<div class="card">
    <div class="card-header">
        <div class="bg-info text-white">text</div>
    </div>
    hello world
</div>
```
How can I do it? Answers: username_1: Try this:
```html
<b-card show-header>
    <template slot="header">
        <div class="bg-info">test</div>
    </template>
    hello world
</b-card>
```
It might not be exactly the result you want. username_2: I have the same problem as you. Have you found a solution for this issue? @username_0 username_3: Can anyone confirm they tried @username_1's solution and it didn't work? @username_2 @username_0 username_2: @username_3, yes, I tried and it didn't work. @username_1 I want to add the class bg-info onto div.card-header itself, but your solution still affects only the children of .card-header. username_1: We may need to add in a few extra props for adding classes to various portions of the card. username_3: +1 on this enhancement. I've wanted a simpler way to set header and footer backgrounds. I did it with CSS before, but BS4 saves a lot of unnecessary CSS and handles inverting the text color for you. username_2: Hi guys, I have created a new prop called headerClass on the <b-card> component and now it works. You can check out my latest commit on my forked repository. Code:
```js
headerClass: {
    type: String,
    default: null
},
```
```js
computed: {
    headerClasses() {
        let header = { 'card-header': true };
        if (this.headerClass != null && this.headerClass.length > 0) {
            let formattedHeaderClass = this.headerClass.replace(/\s+/g, ' ').trim();
            let classes = formattedHeaderClass.split(' ');
            classes.forEach((clazz) => {
                header[clazz] = true;
            });
        }
        return header;
    }
}
```
username_3: @username_2, I think you might want to consider making the `headerClass` prop accept a string or an array. Then you won't need to set every object value to true or deal with any value sanitation (the regex replace and trim shouldn't be necessary). Also, I think we should extend this to the footer. Maybe you can even leave the `class="card-header"`, and just add the plain `:class="headerClass"`, since Vue can handle interpolating all the string, object, or array values to classes. username_1: Something like this in the vue component:
```html
<template>
    <div class="card">
        <div :class="['card-header', headerClass]">
            <div class="bg-info text-white">text</div>
        </div>
        ....
    </div>
</template>
<script>
props: {
    headerClass: {
        type: [String, Array],
        default: ''
    }
}
</script>
```
username_2: @username_3 thank you, I'm new to Vue so I didn't know that 👍 @username_1 Your code looks good. It's simple but does the same thing. 👍 username_1: Yep, the bound classes can be a string, array, or object, or any combination of each :) username_1: Should this be headerVariant (to control the background only), or should we allow the user to place any classes they like onto the header/footer/etc? Or maybe both? username_2: I think both would be better: headerVariant for basic usage and headerClass for someone who really needs to re-style the card component. username_1: PR #463 should address this issue.
username_1: And yet another option would be to apply the card variant type, then use the default slot (with `no-block` set) and place a `div` in the default slot with a white background:
```html
<b-card show-header variant="info" no-block>
    <template slot="header">test</template>
    <div style="background-color:#fff;color:initial;" class="p-3">
        hello world
    </div>
</b-card>
```
Which would look like this: ![image](https://cloud.githubusercontent.com/assets/2781561/26525777/8139253e-4338-11e7-8eee-618b90f939a2.png) username_1: This feature should be available in the next release. Status: Issue closed username_1: v0.17.0 has now been [released](https://github.com/bootstrap-vue/bootstrap-vue/releases/tag/0.17.0) and this issue _should_ be addressed. Try out the latest, and if you run into issues/bugs, please [create an issue](https://github.com/bootstrap-vue/bootstrap-vue/issues)
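For reference, a usage sketch of the props discussed in this thread (prop names as proposed here; check the released bootstrap-vue docs for the final API):
```html
<!-- headerClass passes arbitrary classes straight to the card-header div -->
<b-card show-header header-class="bg-info text-white">
    <template slot="header">test</template>
    hello world
</b-card>
```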
1995eaton/chromium-vim
476372851
Title: can't set mapleader in site-specific setting Question: username_0: As the title says, setting `mapleader` in site-specific settings doesn't work. My cvimrc looks like this:
```
let mapleader = "<C-/>"

site '*://*.reddit.com/*' {
    let mapleader = ","
    unmap j
    map <leader>j scrollDown
}
```
`<C-/>` works as the `mapleader` on `*.reddit.com`, but `,` does not.
philiprbrenan/Dita
269407978
Title: App: philiprbrenan/Dita generation failed with ERRORS Question: username_0: 2017-10-29 at 15:29:12 Generate AppaAppsPhotoApp version 20171021-222 2017-10-29 at 15:29:16 Step: parseSource 2017-10-29 at 15:29:16 Good source file! 2017-10-29 at 15:29:16 Step: loadAssets 2017-10-29 at 15:29:16 Step: multiplyPhotosByFacts 2017-10-29 at 15:29:16 Step: genJavaManifest 2017-10-29 at 15:29:16 Step: genJsonManifest 2017-10-29 at 15:29:16 Step: genGameHtml 2017-10-29 at 15:29:16 Step: genHtmlAssets 2017-10-29 at 15:29:16 Step: zipAssets 2017-10-29 at 15:29:16 Unable to create zip file %zipFile<issue_closed> Status: Issue closed
jaelpark/chamferwm
477037325
Title: Immediate crash with xf86-intel-video and xorg-server-git Question: username_0: If I launch chamferwm (with the instructions from the README), for a few seconds I see a black screen and a mouse pointer that I can move around, and then it crashes me back to the tty. It doesn't seem like it will let me start any programs. I can't see any errors in `~/.local/share/xorg/Xorg.0.log`. Is there another place where errors are logged? I installed chamferwm from the AUR. Other specifics are that I'm running xf86-intel-video-git, xorg-server-git, and a vanilla, fully updated Arch. It's probably a misconfiguration on my side, so my main question is where to look for errors. Answers: username_1: Thanks for the report. Chamferwm does some logging to stdout; try this in your .xinitrc to have the log written:
```sh
exec stdbuf -oL chamfer --shader-path=/usr/share/chamfer/shaders/ > /tmp/log
```
Please upload the log here and let's see 👍 username_0: Nothing too interesting in the log, but another possibly relevant error caught my eye. When X crashes, it spews out the error `Xlib: extension "NV-GLX" missing`, indicating that it tries to load something nvidia related. I do have a dual-GPU laptop, but the nvidia gpu is off, and the log below indicates that chamfer tries to use the intel HD GPU. Trying to start chamfer with `--device-index=1` says `[chamferwm]: invalid GPU index`. It is supposed to work on Intel GPUs, right? FYI, chamfer starts fine without the compositor. Output of `exec stdbuf -oL chamfer --shader-path=/usr/share/chamfer/shaders/ > /tmp/log`:
```
[chamferwm 2019-08-06 20:09:08] Found config /usr/share/chamfer/config/config.py
No pulsectl module.
[chamferwm 2019-08-06 20:09:08] Screen size: 3000x2000
[chamferwm 2019-08-06 20:09:08] Root id: 169
[chamferwm 2019-08-06 20:09:08] Backend initialized.
[chamferwm 2019-08-06 20:09:08] XComposite 0.4
[chamferwm 2019-08-06 20:09:08] overlay xid: 141
[chamferwm 2019-08-06 20:09:08] XFixes 5.0
[chamferwm 2019-08-06 20:09:08] Damage 1.1
[chamferwm 2019-08-06 20:09:08] SHM 1.2
[chamferwm 2019-08-06 20:09:08] Enumerating required layers
VK_LAYER_LUNARG_standard_validation
[chamferwm 2019-08-06 20:09:08] Enumerating required extensions
VK_KHR_surface
VK_KHR_xcb_surface
VK_EXT_debug_report
[chamferwm 2019-08-06 20:09:10] Enumerating physical devices
* 0: Intel(R) UHD Graphics 620 (Kabylake GT2)
.deviceID: 22807
.vendorID: 32902
.deviceType: 1
max push constant size: 128
max bound desc sets: 8
max viewports: 16
multi viewport: 1
[chamferwm 2019-08-06 20:09:10] Available surface formats: 2
Surface format ok.
[chamferwm 2019-08-06 20:09:10] Enumerating required device extensions
VK_KHR_swapchain
[chamferwm 2019-08-06 20:09:10] Swap chain image extent 3000x2000
```
username_1: Yes, it should definitely work with Intel GPUs. I found that the problem is most likely related to how makepkg builds the binary. I disabled custom build flags in PKGBUILD for now; can you try updating from the AUR and rebuilding? username_0: Works great, thanks! Status: Issue closed username_0: I still get the `Xlib: extension "NV-GLX" missing` "error" btw, although it seems harmless for now.
kubernetes/kubernetes
179161205
Title: PetSet pets pods should resolve PTR records Question: username_0: We are trying to run a GlusterFS cluster in a 1.4 Kubernetes cluster using PetSet. We found that even though pod names create DNS records, no PTR record is created for pet pods. As a result, we can't really use pod names as GlusterFS peer names. Can PTR records be created for pet pods? Answers: username_1: We encountered the same problem when we tried to set up an hbase cluster. Hbase regionServers register themselves with a ZooKeeper master through their hostnames registered in ZooKeeper, so reverse DNS from pod IP to pod hostname is needed. username_0: cc: @username_2 @username_3 username_2: You are expecting the pod IP to resolve back to the pet name as described on the endpoints struct? username_0: Yes. If I ran `dig -x <pod-ip>`, I would like to see the pod name in the answer section. username_3: Why is reverse lookup needed? Just because gluster feels the need to do that, or for some actual reason? username_0: @username_3, this is a great question. I am not familiar with the GlusterFS codebase, so I am going to try to answer based on my research. First, reverse DNS is mentioned as essential for prod setups in the GlusterFS docs. See "Other Notes" https://gluster.readthedocs.io/en/latest/Install-Guide/Common_criteria/#getting-started In case you are unfamiliar with the GlusterFS setup process, the way it works is: first you run `glusterd` on a host. This creates a single-node GlusterFS. Then you run `glusterd` on a second host. Now, you call `gluster peer probe <fqdn | ip>` from the first host. This joins both nodes into a single cluster. In GlusterFS there is no master server, so each host maintains its own list of peers. If no reverse DNS is set up for the first host in the above example, then the second host uses the IP of the first host as the identifier in its peer list. But if reverse DNS is set up for the first host, then the second host uses the fqdn of the first host as the identifier in its peer list. _It seems that GlusterFS uses reverse DNS to verify the names of its peers._ @username_2 PTR record for ip-of-host --> fqdn-of-host (gluster-0.example.org) I have tested both these cases in a test cluster on DigitalOcean. Without a stable identifier (fqdn), every time a pod restarts, GlusterFS thinks it has completely lost a peer. So we have to remove that record, add a new pod as a peer, and then run a full data recovery. If the fqdn is used, then GlusterFS just auto-heals the restarted pod by syncing any missing data. Given the total volume of data, the benefit of having a stable identifier (the fqdn in this case) can be huge for GlusterFS. username_2: @wattsteve what's the heketi configuration going to do here w.r.t. gluster? Is it still depending on host network and host name being stable? username_2: I'll bring the gluster guys in to give us an answer username_0: @username_2, just an fyi, I have not tried heketi. I very recently found out that heketi is needed for dynamic provisioning. username_3: I don't have a problem with reverse DNS for named headless (this specific case). We just can't do reverse lookups for pods in general, unless we watch all pods...
Particularly, Hbase relies on reverse DNS, see https://hbase.apache.org/book.html#trouble.rs.runtime.double_listed_regions. We did the following: 1. All components are deployed via Pod, whose identities are managed manually. All pods have a corresponding headless service with the same name, so that an A record can be created for it. hbase-region-a.${NAMESPACE}.svc.cluster.local -> 192.168.65.5 ``` ubuntu@sysinfra-1:~$ nslookup hbase-region-a.shaolei.svc.cluster.local 10.254.0.100 Server: 10.254.0.100 Address: 10.254.0.100#53 Name: hbase-region-a.shaolei.svc.cluster.local Address: 192.168.65.2 ``` 2. Hack kubeDNS to have it add a PTR record for each headless service, (eg. 172.16.17.32.in-addr.arpa -> hbase-region-a.${NAMESPACE}.svc.cluster.local.) ``` ubuntu@sysinfra-1:~$ nslookup 192.168.65.2 10.254.0.100 Server: 10.254.0.100 Address: 10.254.0.100#53 Non-authoritative answer: 192.168.127.12.in-addr.arpa name = hbase-region-a.shaolei.svc.cluster.local. Authoritative answers can be found from: ``` 3. Use script to set FQDN in container, so so running `hostname -f` in container will return FQDN ``` # set fqdn while true do if grep --quiet $POD_NAME /etc/hosts; then cat /etc/hosts | sed "s/$POD_NAME/${POD_NAME}.${POD_NAMESPACE}.svc.cluster.local $POD_NAME/g" > /etc/hosts.bak cat /etc/hosts.bak > /etc/hosts break else echo "waiting for /etc/hosts ready" sleep 1 fi done ``` ``` root@hbase-region-a:/opt/hbase# hostname -f hbase-region-a.shaolei.svc.cluster.local root@hbase-region-a:/opt/hbase# cat /etc/hosts # Kubernetes-managed hosts file. 1192.168.127.12 localhost ::1 localhost ip6-localhost ip6-loopback fc00:e968:6179::de52:7100 ip6-localnet fc00:e968:6179::de52:7100 ip6-mcastprefix fdf8:f53e:61e4::18 ip6-allnodes fd00:a516:7c1b:17cd:6d81:2137:bd2a:2c5b ip6-allrouters 192.168.65.2 hbase-region-a.shaolei.svc.cluster.local hbase-region-a ``` In a word, we manually fixed the DNS PTR record and container's FQDN, then the cluster is eventually ok. So if we want to deploy the cluster with PetSet, these two problems should be resolved: DNS and FQDN. username_5: Changing kubedns to handle reverse lookup for just named headless service isn't too hard, two major points running on top of my head: - maintaining headless service pods, especially when deleting pods can introduce some performance penalty if not handled properly - at lease for the hbase case, FQDN should be available for pod while enabling PTR; so changing kubedns along won't solve the problem username_6: I can confirm kubernetes/dns#25 fixes the issue with glusterfs. `dig -x <ip>` resolves the fqdn and `gluster peer probe <fqdn>` resolves to the PTR records and resulting `<hostname>.<headlessServiceName>.<ns>.svc.cluster.local`. In cause someone want to check things replacing the kubd-dns, here's the image: [appscode/k8s-dns-kube-dns-amd64:1.10.0-106-g0bb9f17](https://hub.docker.com/r/appscode/k8s-dns-kube-dns-amd64/tags/) Status: Issue closed username_0: PR is merged now. Closing. Thanks everyone!
svgstore/svgstore-cli
172991414
Title: Ability to remove header tags Question: username_0: Can we either remove them by default, or get an `--option` to have them removed? They're unnecessary for inline SVGs:
```
<?xml version="1.0" encoding="UTF-8"?>
```
```
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
```
Answers: username_1: This is supported in v1.2.0 with the `--inline` option :) Status: Issue closed username_0: Awesome, thanks!
redis/redis-rb
777547344
Title: overwrite default `redis.new` config? Question: username_0: Is there a way to overwrite the default behavior of `Redis.new` in an initializer? I'd like it to pass some default `ssl_params` in, which would allow our devs to continue to simply use `Redis.new`, and I could change the params based on the current Rails env.
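No answer was posted, but one possible approach, sketched here under the assumption that redis-rb's constructor takes an options hash (the ca_file path is hypothetical), is to prepend a module in a Rails initializer:
```ruby
# config/initializers/redis_defaults.rb (hypothetical location)
require "redis"

module RedisSSLDefaults
  DEFAULTS = {
    ssl_params: { ca_file: "/etc/ssl/certs/ca-bundle.crt" } # hypothetical path
  }.freeze

  def initialize(options = {})
    # Explicit options still win over the injected defaults.
    super(DEFAULTS.merge(options))
  end
end

Redis.prepend(RedisSSLDefaults)

# Devs keep calling Redis.new with no arguments:
redis = Redis.new
```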
fountainment/cherrysoda-engine
1174235476
Title: Layered Drawing 2D Question: username_0: I've encountered a problem trying to render sprites over each other. I render two sprites as follows:
```
first->Position(Math::Vec3(0.0f, 0.0f, 10.0f));
second->Position2D(Math::Vec2(0.0f, 0.0f));
```
and I expect the first to be drawn above the second (changing the position of the first to `Math::Vec3(0.0f, 0.0f, -10.0f);` does not change anything). However, they are drawn in the order they were instantiated, so creating the second sprite first and the first sprite second yields the desired result. The renderer is copy-pasted from the example code:
```
cherrysoda::Graphics::SetPointTextureSampling();
auto renderer = new cherrysoda::EverythingRenderer();
auto camera = renderer->GetCamera();
camera->Position(cherrysoda::Math::Vec3(0.0f, 0.0f, 200.0f));
renderer->SetEffect(cherrysoda::Graphics::GetEmbeddedEffect("sprite"));
renderer->KeepCameraCenterOrigin(false);
camera->UseOrthoProjection(true);
camera->CenterOrigin();
Add(renderer);
```
So my question is: how would you go about rendering in layers, or is it something that needs to be implemented first? Thank you for your time! Answers: username_1: Sprites are something special: there is no corresponding drawcall for every single sprite; instead, each one adds data to a SpriteBatch in order to get better rendering performance. The z-axis data is actually ignored for sprites. **Render order in this engine is mainly handled by Entity::Depth**; entities are sorted every frame based on their depth. Components in an entity are rendered in the order they were added and are not sorted. It's **not true** that things need to be set up at the beginning of a scene; of course a sprite is OK to update in the scene update. Something like `sprite->Scale2D(Math::Vec2(scaleFloat));` should work. When you encounter an issue that might be a bug, it will be better if you can provide a minimal project for reproduction; otherwise I can't help much. username_0: I understand now. Layered rendering with Entity::Depth works. Scaling in update works too; it was apparently an error on my side. Status: Issue closed
ThiagohrMartins/trabalhoDm107php
282733043
Title: Trabalho PHP Question: username_0: Authentication is being done with credentials hard-coded in the source; the correct approach would be to use the database.
```php
$app->add(new Tuupola\Middleware\HttpBasicAuthentication([
    "users" => [
        "admin" => "admin"
    ]
]));
```
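A sketch of the database-backed alternative the review asks for; tuupola's middleware accepts an `authenticator` callable, while `$pdo` and the users table schema here are assumptions:
```php
<?php
// Look the user up in the database instead of hard-coding credentials.
$app->add(new Tuupola\Middleware\HttpBasicAuthentication([
    "authenticator" => function ($arguments) use ($pdo) {
        // $arguments carries the "user" and "password" supplied by the client.
        $stmt = $pdo->prepare("SELECT password_hash FROM users WHERE username = ?");
        $stmt->execute([$arguments["user"]]);
        $hash = $stmt->fetchColumn();
        // Store hashes with password_hash(), verify them here.
        return $hash !== false && password_verify($arguments["password"], $hash);
    }
]));
```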
date-fns/date-fns
741765953
Title: startOf* Bug: returning wrong date Question: username_0: I started migrating from moment to date-fns in one of our projects, and I faced this bug:
```
const date = new Date('2016') // 2016-01-01T00:00:00.000Z
const day = startOfDay(date) // 2015-12-31T03:00:00.000Z -> it should be the initial date = 2016-01-01T00:00:00.000Z
const month = startOfMonth(date) // 2015-12-01T03:00:00.000Z -> it should be the initial date = 2016-01-01T00:00:00.000Z
const nextMonth = addMonths(month, 1) // 2016-01-01T03:00:00.000Z -> it should be 2016-02-01T00:00:00.000Z
return differenceInDays(nextMonth, day) // 1 -> result should be 22
```
Apparently, the bug happens when the time is 00:00. Thank you,
Answers: username_0: This is not a bug, actually. If the time is not provided, UTC time will be assumed, so I don't think the library should know about this. I am closing this issue.
Status: Issue closed
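To illustrate the resolution, assuming a UTC-3 local zone (matching the +03:00 offsets in the report): date-only ISO strings are parsed as UTC midnight, while date-fns operates in local time.
```javascript
import { startOfDay } from 'date-fns'

// A date-only ISO string is parsed as UTC midnight...
const utcParsed = new Date('2016-01-01') // 2016-01-01T00:00:00.000Z
// ...which in a UTC-3 zone is 21:00 on Dec 31 local time,
// so the local-time startOfDay lands on Dec 31:
console.log(startOfDay(utcParsed))       // 2015-12-31T03:00:00.000Z

// Constructing the date from local components keeps it on Jan 1:
const localParsed = new Date(2016, 0, 1) // local midnight, 2016-01-01
console.log(startOfDay(localParsed))     // 2016-01-01T03:00:00.000Z
```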
EventStore/EventStore
940732985
Title: Evaluate implementation of TPC Benchmark™ C Question: username_0: Evaluate the feasibility of implementing the TPC Benchmark™ C for the ESDB cloud benchmark: https://drive.google.com/file/d/145-LZ1bMPeEy25clNl7FKNNySD8OuIOX/view?usp=sharing Issues to be defined once TPC Benchmark™ C is understood and a decision is made to move forward with implementation. Answers: username_0: Although this benchmark could be useful, the work required to implement it exceeds our current capabilities. There is a great deal of work required to build the application infrastructure to support the benchmark, and the resources needed are not presently available. We should consider implementing this benchmark when the resources become available. username_1: IIRC there are only 5 transaction types in TPC-C. How can this be a great amount of work? username_2: @username_0, do we still want to do that?
symfony/symfony
392510105
Title: Doctrine extensions can not be loaded on Symfony 3.4.20 / 4.1.9 Question: username_0: **Symfony version(s) affected**: 3.4.20 / 4.1.9 After updating from 3.4.19->3.4.20 (or 4.1.8->4.1.9) it is not possible to enable doctrine filters (https://packagist.org/packages/gedmo/doctrine-extensions + https://packagist.org/packages/stof/doctrine-extensions-bundle) in config.yml. Config which we use:
```yaml
filters:
    soft_deleteable:
        class: 'Gedmo\SoftDeleteable\Filter\SoftDeleteableFilter'
        enabled: true
```
While the app is loading, this error is thrown:
```
(1/1) InvalidArgumentException
Filter 'soft_deleteable' does not exist.
in FilterCollection.php line 107
at FilterCollection->enable('soft_deleteable')in ManagerConfigurator.php line 48
at ManagerConfigurator->enableFilters(object(EntityManager))in ManagerConfigurator.php line 34
at ManagerConfigurator->configure(object(EntityManager))in appDevDebugProjectContainer.php line 2714
at appDevDebugProjectContainer->getDoctrine_Orm_DefaultEntityManagerService(false)in appDevDebugProjectContainer.php line 2703
at appDevDebugProjectContainer->ContainerT6xfqgt\{closure}(null, object(EntityManager_9a5be93), 'getMetadataFactory', array(), object(Closure))in EntityManager_9a5be93.php line 38
at Closure->__invoke(null, object(EntityManager_9a5be93), 'getMetadataFactory', array(), object(Closure))in EntityManager_9a5be93.php line 38
at EntityManager_9a5be93->getMetadataFactory()in AbstractManagerRegistry.php line 181
at AbstractManagerRegistry->getManagerForClass('Path\\To\\Entity')in ServiceEntityRepository.php line 30
at ServiceEntityRepository->__construct(object(Registry), 'Path\\To\\Entity')in ServiceEntityRepository.php line 19
at SomeEntityRepository->__construct(object(Registry))in appDevDebugProjectContainer.php line 3313
at appDevDebugProjectContainer->getServiceEntityRepositoryService()in appDevDebugProjectContainer.php line 3243
at appDevDebugProjectContainer->getServiceEntityChasingPaymentServiceService()in appDevDebugProjectContainer.php line 2683
at appDevDebugProjectContainer->getDoctrine_Orm_DefaultEntityListenerResolverService()in appDevDebugProjectContainer.php line 4686
at appDevDebugProjectContainer->getDoctrine_Orm_DefaultConfigurationService()in appDevDebugProjectContainer.php line 2712
at appDevDebugProjectContainer->getDoctrine_Orm_DefaultEntityManagerService(false)in appDevDebugProjectContainer.php line 2703
at appDevDebugProjectContainer->ContainerT6xfqgt\{closure}(null, object(EntityManager_9a5be93), 'getConfiguration', array(), object(Closure))in EntityManager_9a5be93.php line 328
at Closure->__invoke(null, object(EntityManager_9a5be93), 'getConfiguration', array(), object(Closure))in EntityManager_9a5be93.php line 328
at EntityManager_9a5be93->getConfiguration()in ProxyCacheWarmer.php line 51
at ProxyCacheWarmer->warmUp('/var/www/app/default/symfony-standard/app_api/var/cache/dev')in CacheWarmerAggregate.php line 52
at CacheWarmerAggregate->warmUp('/var/www/app/default/symfony-standard/app_api/var/cache/dev')in Kernel.php line 680
at Kernel->initializeContainer()in Kernel.php line 135
at Kernel->boot()in Kernel.php line 195
at Kernel->handle(object(Request))in app_dev.php line 42
```
Looks like it is related to changes introduced in: https://github.com/symfony/symfony/pull/29369/files Note: If `enabled: true` is removed from the config, the app loads correctly, but it should be possible to have this filter enabled by default.
Answers: username_1: It looks like this issue is related to https://github.com/symfony/symfony/issues/29810 username_1: Also related or even a duplicate: https://github.com/symfony/symfony/issues/29772 username_2: A reproducible example application would make it easier to help here. username_2: I am going to close this for now due to the lack of feedback. Please let us know when you have more information and we can consider reopening. Status: Issue closed
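As a hypothetical stop-gap until the regression is fixed, the filter could be enabled imperatively instead of via the bundle config; `FilterCollection::enable` is the same call visible in the stack trace above, while the listener wiring here is an assumption:
```php
<?php
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

// Enable the soft_deleteable filter at the start of each request
// rather than through the `enabled: true` config that triggers the bug.
class EnableSoftDeleteableListener implements EventSubscriberInterface
{
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public static function getSubscribedEvents(): array
    {
        return [KernelEvents::REQUEST => 'onKernelRequest'];
    }

    public function onKernelRequest(): void
    {
        $this->em->getFilters()->enable('soft_deleteable');
    }
}
```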
nedwill/fasthax
198356354
Title: Working on O3DS but freezes when closing Question: username_0: Tried the new build and got the "we won" message, but when I press start it presents some messages too fast to read, and when it's supposed to close and come back to the homebrew menu, both screens stay red. (I know it's in alpha, just reporting what I experienced) Answers: username_1: o3ds 11.2.0-35U ofw here. Here is what I'm getting with what I just built less than 20 minutes ago (no dumps because ofw). As for the whole thing with pressing start, I think username_2 is just detecting whether start is being held, not separate presses. You have to tap start pretty fast because of that in order to see what I got. A simple oversight; it could probably be remedied by waiting for two events (a start button up, then a start button down). I myself am getting:
`[+] UAF succeeded and backdoor installed.
[-] Couldn't finalize global backdoor.
[-] We won't be able to run kernel code in other processes
k11_exploit failed!
waiting for user...
press <start> to continue`
When I press start as prompted, I get a red screen freeze. It has been consistent: each time it succeeds, I get the red screen freeze. username_2: That error is expected. I didn't implement `finalize_global_backdoor` yet. username_0: Oh, good to know! username_3: Does anyone have the build of the most recent file for o3ds? username_2: @username_3 It's not usable by anyone yet. I'll publish a build when it's usable :) username_3: @username_2 Oh okay. Thanks Ned, I've been making your updates known on the reddit and people are pretty happy; keep up the good work :) username_4: @username_2 That means it will work on 2DS too? Will we be able to install legit CIAs and/or non-legit CIAs through FBI CIA Installer? Status: Issue closed username_2: This is now fixed in the beta. When the exploit succeeds, it no longer freezes when exiting. Thanks for the report!
c29reid/Dolphin-Matchmaking-Server
154070220
Title: CMAKE issue Question: username_0: We need to build SFML before CMake links everything.
Options:
1. brew install sfml (for macOS)
2. Find a method to get CMake to build and then link it???? (kill me pls, but probably the best)
For option 1: the line below would have to reference a file in the user's filesystem;
see: https://github.com/username_1/Dolphin-Matchmaking-Server/blob/master/CMakeLists.txt#L19 Answers: username_1: #2 gl username_0: make looks fine. Need to get make to build with C++14. username_1: CMake + SFML works fine now, right? Status: Issue closed username_0: 👍
hasura/graphql-engine
844418381
Title: Rollback mechanism Question: username_0: How can we couple the Hasura rollback mechanism with the Kubernetes one? Example: I use the cli-image to have automated migrations. I bound the SQL migration files and the metadata YAML in ConfigMaps. Now what if a migration should be rolled back? The documentation talks about down.sql files, but to do the rollback we have to use the hasura-cli tool. So the Kubernetes rollback won't solve the problem here. Any suggestions? Answers: username_1: D
tdviet/fedcloudclient
1159907576
Title: Possible improvements Question: username_0: This is just a suggestion, to register some possible improvements for the code that I have seen while I worked on #127 :
- Review the code to eliminate additional possible code duplication.
- It could be beneficial to provide some progress feedback to the user (e.g. during the execution of time-consuming commands such as `fedcloud endpoint projects -a`), even if it is just a printed symbol that changes or a printed dot "." each time a new site is accessed. This should not interfere with the standard output, or it could be a log-level option.
- More solid exception management, as referred to in #128 . Generic exceptions are captured in several places, which makes it difficult to treat them separately in an adequate manner.
- Error messages should be unified. The work I did on PR #129 in this respect should be considered temporary.
- It would be beneficial to use a proper logging framework, and fix the lack of logs.
- Some parts of the code could benefit from concurrency. I am aware that Python does not excel at multithreaded processing but, for example, the performance of `fedcloud endpoint projects -a` can greatly improve if all sites are contacted asynchronously.
Answers: username_1: Concurrency should not be a problem for Python; it has been implemented, for example, for the `fedcloud openstack -a` command. https://github.com/username_1/fedcloudclient/blob/master/fedcloudclient/openstack.py#L260-303 username_0: Great! Also, I saw some time ago that `fedcloud token check` does not seem to work as I expected with OIDC. So I ended up using:
```
printf '%(%F %T)T\n' `oidc-token --env $OIDC_AGENT_ACCOUNT | grep -oP '(?<=OIDC_EXP=).*?(?=;)'`
```
instead username_1: I expect the access token from oidc-agent to always be valid, as oidc-agent will check and renew it automatically when it expires, so `fedcloud token check` only checks the token provided via `--oidc-access-token` username_0: Oh, I see. The error message was a bit misleading, as it mentioned that an OIDC access token is required, and I have one:
```
i@tckr:~$ fedcloud token check
OIDC access token or refresh token required
```
Although I might not be getting the correct answer
```
i@tckr:~$ fedcloud token check --oidc-access-token egi
Error: Invalid access token.
```
username_1: The option `--oidc-access-token` should be accompanied by the token in free text, not an oidc-agent account name. Try
```
fedcloud token check --oidc-access-token `oidc-token egi`
```
username_0: Thanks, so I called it wrongly then. Perhaps the error message of `token check` could point to this, something like: username_1: It is an awkward question, but can I ask who you are from EGI? I tried to decode ILM but it does not match any name or position from https://www.egi.eu/about/egi-foundation/team/. username_0: Following this list of suggestions, I would like to add some other ideas: - Implement a "_**-f --format**_" argument to output the results in a specific format (e.g. the default table, JSON, YAML, CSV, grep-able text, Terraform, etc.). This can be very useful for users. For example, Fedcloudclient can absorb the functionality I have created in my Terraform script so it can provide out-of-the-box Terraform files to manage infrastructure. - Unified management of **IGTF certificates**. I am following the recommendations for production use and I install the certificates via [a script](https://github.com/username_1/python-requests-bundle-certs/blob/main/scripts/install_egi_core_trust_anchor.sh). 
This has some problems: (i) It is only valid for apt-based systems. (ii) It appends the certificates to the keystore every time it is executed. (iii) In my case, they are installed in my general keystore, so it makes the system trust IGTF certificates for tasks unrelated to Fedcloud. Perhaps it would be better to "cache" the certificates in an alternative keystore and set this keystore during Fedcloud execution. This way, it can be integrated within Fedcloudclient with no additional installation, be system agnostic, and avoid messing with the default system keystore. I guess it can be a simplified version of what is already done [in another script](https://github.com/username_1/python-requests-bundle-certs/blob/main/scripts/install_certs.sh), but transparently integrated within the program.
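To make the concurrency suggestion above concrete, here is a minimal sketch of fanning per-site requests out with `concurrent.futures`. The `list_projects` helper and the site names are hypothetical stand-ins; this is not fedcloudclient's actual implementation (see the linked `openstack.py` for that):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def list_projects(site):
    # Hypothetical stand-in for querying one site's Keystone endpoint.
    return []

sites = ["SITE-A", "SITE-B", "SITE-C"]  # placeholder site names

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {pool.submit(list_projects, site): site for site in sites}
    for future in as_completed(futures):
        site = futures[future]
        try:
            print(site, future.result())
        except Exception as exc:
            # One slow or broken site should not abort the whole listing.
            print(site, "failed:", exc)
```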
Azure/azure-quickstart-templates
240923900
Title: tree/master/visual-studio-dev-vm: ImageNotFound in Development VMs Question: username_0: I tried to deploy the Windows10:N and W2012R2 with VS2015 community ed. + Azure SDK 2.9 I get an imageNotFound error:
{
"error": {
"code": "ImageNotFound",
"target": "imageReference",
"message": "The platform image 'MicrosoftVisualStudio:VisualStudio:VS-2015-Ent-VSU3-AzureSDK-291-WS2012R2:latest' is not available. Verify that all fields in the storage profile are correct."
}
}
rleiva/nescience
946879200
Title: Constrained kmeans Question: username_0: If we want to discretize a pair of variables to compute K(x, y), we divide the bidimensional space x, y into a grid of n squares of equal size, and count the number of samples in each square. It is not clear if this is the optimal way of doing things. Perhaps a method based on clustering, like k-means, would provide a better result, with fewer intervals. The problem is that methods like k-means change the entropy of the dataset; that is, the entropy of the original dataset is different from the entropy of the encoded dataset, and that alters the value of K(x, y). We could develop a constrained k-means algorithm, based on Voronoi polygons, that does not modify the entropy. Also, instead of using a square to contain all the points, computing the convex hull of the dataset could provide a better discretization.
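A minimal sketch of the grid discretization described above, assuming NumPy and SciPy are available; the grid size `n`, the sample data, and the entropy computation are illustrative, not the package's actual implementation:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)

# Divide the bounding square of (x, y) into an n-by-n grid and count
# the samples that fall in each cell.
n = 16
counts, _, _ = np.histogram2d(x, y, bins=n)

# Empirical entropy of the discretized joint distribution; a clustering
# based discretization would have to (approximately) preserve this value.
p = counts.ravel() / counts.sum()
print(entropy(p[p > 0], base=2))
```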
astropy/astropy
653777681
Title: Performance of Table creation from Python lists Question: username_0: This is mostly a question to understand what is causing the following performance hit in initialization of a table from Python lists: ```python In [238]: import numpy as np ...: from astropy import table ...: time = list(np.random.randint(0, 1000000, 100000)) ...: rate = list(np.random.randint(0, 1000000, 100000)) ...: error = list(np.random.randint(0, 1000000, 100000)) In [239]: %timeit table.Table([time, rate, error], names=["time", "rate", "error"]) 673 ms ± 14.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [240]: %timeit data = table.Table([np.array(time), np.array(rate), np.array(error)], names=["time", "rate", "error"]) 17 ms ± 224 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ``` Answers: username_2: Wow, you're a trooper, @username_1 . Thanks! That means #9048 username_3: Thanks for the investigation thus far! This is pretty bad, I'll have a look. The root cause is `np.ma.array` (this is with 1.18.5): ``` In [1]: time = list(np.random.randint(0, 1000000, 100000)) In [2]: %timeit np_data = np.ma.array(time, dtype=None) 268 ms ± 11.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [3]: %timeit np_data = np.array(time, dtype=None) 5.32 ms ± 113 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ``` Any thoughts @username_5 on what to do here? username_4: This comes from : https://github.com/numpy/numpy/blob/18a6e3e505ee416ddfc617f3e9afdff5a031c2c2/numpy/ma/core.py#L2861-L2863 trying to get a mask from each element of the list. ``` In [17]: %timeit np_data = np.ma.array(time, dtype=None) 251 ms ± 1.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [18]: %timeit [np.ma.getmaskarray(m) for m in data] 188 ms ± 1.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [19]: %timeit [np.ma.getmaskarray(np.asanyarray(m)) for m in data] 229 ms ± 2.41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` username_5: That's not pretty, but I'm not sure what to do if one wants to be general and allow masked entries - one would either have to tell people who use plain lists to pre-pack with `np.array` or people who may have `np.ma.masked` or other masked arrays in their lists to pre-pack with `np.ma.MaskedArray`. username_2: Hmm... Maybe add this to https://docs.astropy.org/en/latest/table/index.html#performance-tips ? username_3: I think the use case of a plain list is far more important and common than the corner case of a list of masked arrays. The only thing I can think of for partly handling that corner case is a not-robust solution of checking for a masked array in the first element of a list and going down the masked array conversion only in that case. username_3: Or document that case and tell people they need to explicitly create a `MaskedColumn`? After all, the same is true of plain numpy arrays that they don't magically auto-convert. username_0: I agree with https://github.com/astropy/astropy/issues/10548#issuecomment-656398073: username_5: @username_3 - agreed that the plain list is far more common - I just thought you might fear backwards incompatibility more than you seem to (are we switching roles?? ;-), especially as the commit that introduced the slowdown was specifically to allow list of `MaskedArray`. Anyway, I'm fine with changing it, or, as you suggest, just check the first element of the list. username_2: If you put the patch in, say, 4.2, then maybe not so much an issue. But it would be more controversial to backport to LTS. Hmm... 
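For illustration, a minimal sketch of the not-robust "check the first element" idea suggested above - a hypothetical helper, not astropy's actual conversion path, and deliberately fragile (a masked element later in the list would be missed):

```python
import numpy as np

def coerce_data(data):
    """Hypothetical helper, not astropy's real conversion code."""
    if isinstance(data, (list, tuple)) and len(data) > 0:
        head = data[0]
        if head is np.ma.masked or isinstance(head, np.ma.MaskedArray):
            # Slow path: only taken when masking is actually in play.
            return np.ma.array(data)
    # Fast path: plain lists skip the per-element mask inspection.
    return np.asarray(data)
```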
username_3: tl;dr - initializing `Table` with a list of things that might include masked arrays or `np.ma.masked` is a bit of a minefield with inconsistent behaviors. I did some digging in an effort to fix this. It turns out to be a little bit trickier than I had imagined. The changes in #9048 (basically converting a list to array via MaskedArray) enabled a new behavior which previously didn't work, namely initializing from a list that contains `np.ma.masked`. One commit before #9048: ``` (astropy-temp) ➜ astropy-temp git:(5a791cffd) ✗ ipython Python 3.7.7 (default, May 6 2020, 04:59:01) Type 'copyright', 'credits' or 'license' for more information IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: astro astropy=4.0.dev25421 In [2]: Table([[1.0, np.ma.masked], ['a', np.ma.masked]]) /Users/aldcroft/git/astropy-temp/astropy/table/column.py:221: UserWarning: Warning: converting a masked element to nan. self_data = np.array(data, dtype=dtype, copy=copy) Out[2]: <Table length=2> col0 col1 float64 str32 ------- ----- 1.0 a nan 0.0 ``` After #9048 ``` (astropy-temp) ➜ astropy-temp git:(5a791cffd) ✗ git checkout 2529d00c487869c0ee5f M astropy/table/tests/test_masked.py Previous HEAD position was 5a791cffd Merge pull request #9035 from adrn/coordinates/organize-api-docs HEAD is now at 2529d00c4 Fix bug adding a col as list of masked arrays (astropy-temp) ➜ astropy-temp git:(2529d00c4) ✗ ipython Python 3.7.7 (default, May 6 2020, 04:59:01) Type 'copyright', 'credits' or 'license' for more information IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help. In [1]: astro astropy=4.0.dev25422 In [2]: Table([[1.0, np.ma.masked], ['a', np.ma.masked]]) /Users/aldcroft/miniconda3/envs/astropy-temp/lib/python3.7/site-packages/numpy/ma/core.py:2788: UserWarning: Warning: converting a masked element to nan. order=order, subok=True, ndmin=ndmin) Out[2]: <Table length=2> col0 col1 float64 str32 ------- ----- 1.0 a -- -- ``` This behavior was not tested at that point, but became part of regression tests later on with #9651, specifically: ``` def test_initialization_with_all_columns(self): t1 = Table([self.a, self.b, self.c, self.d, self.ca, self.sc]) assert t1.colnames == ['a', 'b', 'c', 'd', 'ca', 'sc'] # Check we get the same result by passing in as list of dict. # (Regression test for error uncovered by scintillometry package.) lofd = [{k: row[k] for k in t1.colnames} for row in t1] t2 = Table(lofd) [Truncated] assert np.all(getattr(t1[k], 'mask', False) == getattr(t2[k], 'mask', False)) ``` However, "working" is a little bit of an overstatement. The underlying conversion with `MaskedArray` fails for `int` type: ``` In [11]: np.ma.MaskedArray([1, np.ma.masked], dtype=int) MaskError: Cannot convert masked element to a Python int. # Automatically coerces to float if no dtype is provided In [12]: np.ma.MaskedArray([1, np.ma.masked]) /Users/aldcroft/miniconda3/envs/astropy/lib/python3.7/site-packages/numpy/ma/core.py:2795: UserWarning: Warning: converting a masked element to nan. order=order, subok=True, ndmin=ndmin) Out[12]: masked_array(data=[1.0, --], mask=[False, True], fill_value=1e+20) ``` In fact if you add a test `assert t1[k].dtype == t2[k].dtype` to the above test it fails. For instance the `dtype` of `t2['b']` is object not int64. username_3: FWIW, pandas has a similar behavior: ``` In [11]: Series([1,np.ma.masked]) Out[11]: 0 1 1 -- dtype: object ``` username_2: p.s. 
I don't see the new benchmark result being reported yet at http://www.astropy.org/astropy-benchmarks/ Status: Issue closed
dalibo/temboard-agent
1147346953
Title: Statements Menu Item Does Not Load Current History from pg_stat_statements Question: username_0: We had some database troubles last week, and after a database bounce (or around that time) the Statements menu item doesn't show any data since that bounce... this is for the last 7 days: ![image](https://user-images.githubusercontent.com/30243584/155213444-8b28a88f-9d35-46d0-8c3f-cd59f8360391.png) This is for the last 24 hours: ![image](https://user-images.githubusercontent.com/30243584/155215494-e8268bee-e477-46cd-8097-afd78208cd76.png) Restarting the temboard agent does not help. The logs only show this error when heading to the Statements screen:
```
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: {'error': 'Invalid session.'}
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: Traceback (most recent call last):
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: File "/usr/lib/python3.6/site-packages/temboardagent/api.py", line 36, in check_sessionid
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: session = sessions.get_by_sessionid(xsession)
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: File "/usr/lib/python3.6/site-packages/temboardagent/sharedmemory.py", line 48, in get_by_sessionid
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: raise SharedItem_not_found()
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: temboardagent.errors.SharedItem_not_found
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR:
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: During handling of the above exception, another exception occurred:
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR:
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: Traceback (most recent call last):
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: File "/usr/lib/python3.6/site-packages/temboardagent/httpd.py", line 131, in response
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: (code, message) = self.route_request()
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: File "/usr/lib/python3.6/site-packages/temboardagent/httpd.py", line 253, in route_request
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: username = check_sessionid(self.headers, self.sessions)
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: File "/usr/lib/python3.6/site-packages/temboardagent/api.py", line 42, in check_sessionid
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: raise HTTPError(401, "Invalid session.")
2022-02-22 14:37:30,681 temboardagent[14567]: [httpd] ERROR: temboardagent.errors.HTTPError: Invalid session.
2022-02-22 14:37:30,682 temboardagent[14567]: [httpd] ERROR: {'error': 'Invalid session.'}
2022-02-22 14:37:30,682 temboardagent[14567]: [httpd] INFO: client: xxx.xxx.xxx.xxx request: "GET /profile?key=<KEY> HTTP/1.1" 401 - 3.76ms
```
Answers: username_1: Hi @username_0, can you share the full DEBUG logs? Did you upgrade to 7.10? username_0: DEBUG logs emailed. We are planning on upgrading today, actually. After the upgrade we can let you know the outcome. Let me know if the logs provide anything useful. Status: Issue closed username_0: This seems to be resolved after upgrading to 7.10. Thanks.
cortago/cortago
552637925
Title: Add support for basic layout Question: username_0: **What features are you proposing in Cortago**
Add support for a basic layout, similar to `this->layout` in Laravel.

**Describe the way you'd like to solve this problem(Optional)**
Add basic templating support.
PicoJr/tbl
655406511
Title: support overlapping activities Question: username_0: I realize that if you use rtw to track freelance work you are doing for a client, you can't really work on multiple tasks at the same time, so this may not be a huge issue. But if you also use rtw to track anything else, such as what food you are eating, or what your child is doing while you futilely attempt to do work and childcare at the same time during the pandemic ("look, see, I do plenty of chores, here's my log in rtw"), you will have overlapping activities. Currently, if you attempt to start a new task while another task is in progress, rtw will end the previous task: ``` <EMAIL>@<EMAIL> ~> rtw start 1 h ago write issue Tracking write issue Started 2020-07-12T07:17:47 <EMAIL>@<EMAIL> ~> rtw start 30 min ago sit in chair Tracking sit in chair Started 2020-07-12T07:47:56 <EMAIL>@<EMAIL> ~> rtw stop write issue Error: --> 1:1 | 1 | write issue | ^--- | = expected time_clue <EMAIL>@Valence ~ [1]> rtw stop Recorded sit in chair Started 2020-07-12T07:47:56 Ended 2020-07-12T08:18:20 Total 00:30:24 <EMAIL>@<EMAIL> ~> rtw summary write issue 2020-07-12T07:17:47 2020-07-12T07:47:56 00:30:08 sit in chair 2020-07-12T07:47:56 2020-07-12T08:18:20 00:30:24 ``` If multiple activities are running at once, perhaps instead rtw should ask which one you want to stop (if you don't specify), or allow you to include tags that indicate which activity you want to end. I imagine this would also require re-engineering the timeline view. For people who don't want overlapping activities to ever happen (seriously, they are only using this for business), maybe a preference should allow them to disable (or enable) overlapping activities. Then perhaps they would receive errors if they mistakenly attempt to start an overlapping activity, rather than rtw silently interpreting their activities so that they do not overlap.
qvacua/vimr
226072294
Title: feature request: option to set a dark theme for tool panes Question: username_0: Obviously a low-priority request here, but I thought it would be nice to be able to set the tools to have a dark background. I normally use a dark color scheme in neovim and the contrast is a bit distracting. Answers: username_1: +1 from me. I use Solarised Light for the editor window and [given the choice] would use Solarised Dark for the file browser pane. The white BG seems glaring to me, even next to the Solarised Light BG colour. So I can 'feel @username_0's pain', if he's trying to work with a dark theme in the editor. While you're sorting that one out, the ability to set a separate colourscheme for the markdown preview pane would be a nice touch too. username_2: Yes, this would indeed be a very nice feature. However, there are more pressing matters... Let's see... username_3: +1, would absolutely love a way to theme the sidebars darker. username_2: I'm currently thinking about completely using the selected color theme for the file browser. Is there any Vim color scheme expert who can tell me which color I should use for which part? 😉 username_0: @username_2 don't know if this would help, but you might consider looking at [NERDTree](https://github.com/scrooloose/nerdtree) as a guide. username_4: Don't forget to eliminate the thin 1px border too ;) username_2: Making some progress:
<img width="794" alt="screen shot 2017-06-25 at 23 34 13" src="https://user-images.githubusercontent.com/460034/27519944-3e3f8b6e-59ff-11e7-8356-47d7bb2a0888.png">
<img width="794" alt="screen shot 2017-06-25 at 23 36 23" src="https://user-images.githubusercontent.com/460034/27519943-3e3e7be8-59ff-11e7-9a3f-731b2ac20747.png">
username_0: looks great. username_4: Looks great! How can you customize the colors? Or, which colors are reused? It would also be awesome to be able to customize the folder/file icons, or until then possibly disable them. I like the direction! username_2: @username_4 I'm trying to use the colors defined in the color scheme, e.g. the foreground and background (and a 15% darker version of the background 😀). I didn't quite get your previous comment about eliminating the 1px border; do you mean the border between the Vim view and the file browser? username_2: Please try https://github.com/username_2/vimr/releases/tag/snapshot%2F209 and report the bugs here! 😬 username_4: Yes, I meant that a lot of editors like `Idea` for the longest time wouldn't let you customize the border between the two and it ended up looking pretty bad on many color schemes. I think your default here (15% darker) is a good starting point. username_2: The latest snapshot https://github.com/username_2/vimr/releases/tag/snapshot%2F211 uses the `directory` color for folders in the file browser: what do you guys think? I'm not quite sure... username_3: Works great for me! username_5: I've tried your new version. It's very beautiful. Status: Issue closed
Flutterando/modular
982244193
Title: How to set up in an existing project? Question: username_0: Hi, we'd like to try your package, however it's not clear how to set it up when you have an existing application that sets up via MaterialApp(home: 'xx', navigationObservers: [..] etc..)? Could you please provide a simple example? Status: Issue closed Answers: username_1: We've created new documentation to help you get started. Look: modular.flutterando.com.br
fbu-bettertogether/BetterTogether
468271547
Title: Add Post Functionality to Groups Question: username_0: - [ ] members of group can create posts
- [ ] users can view all posts underneath group info (RecyclerView)
- [ ] can tag other users in posts
- [ ] members get notifications when things are posted
Status: Issue closed
rancher/rancher
141656065
Title: API v1.1 Question: username_0: Rename environment to stack and project to environment, then fix everything that breaks. Answers: username_0: Done, right @username_1 ? username_1: @username_0: v2-beta renames environment to stack. But renaming project to environment is not done yet. username_1: @username_2 , v2-beta renames environment to stack. But renaming project to environment is not done yet. Is this change also scoped for this release? username_2: Please note that the issue has been updated; test it for exactly what the issue says: environment -> stack @username_1 username_1: Tested with rancher-server version v1.2.0-pre3. Environment has been renamed to stack for the v2-beta APIs. Validation tests use stack-related APIs when testing with the v2-beta APIs. Status: Issue closed
rich-iannone/DiagrammeR
1170457965
Title: multiline input for mermaid.js graphs Question: username_0: Would it be possible to allow multiline input for mermaid.js graphs? I find it convenient to keep a `mermaid.js` graph as a character vector of lines:
```r
graph <- c(
  "graph LR",
  "  subgraph Legend",
  "    outdated([Outdated]):::outdated --- stem([Stem]):::none",
  "    stem([Stem]):::none --- function>Function]:::none",
  "  end",
  "  subgraph Graph",
  "    g>g]:::outdated --> f>f]:::outdated",
  "    y1([y1]):::outdated --> z([z]):::outdated",
  "    y2([y2]):::outdated --> z([z]):::outdated",
  "    f>f]:::outdated --> y1([y1]):::outdated",
  "  end",
  "  classDef outdated stroke:#000000,color:#000000,fill:#78B7C5;",
  "  classDef none stroke:#000000,color:#000000,fill:#94a4ac;",
  "  linkStyle 0 stroke-width:0px;",
  "  linkStyle 1 stroke-width:0px;"
)
```
After installing 91059fd054c757f9d3d6751a3f848cb4b6b56fde and then updating `mermaid.js` using https://github.com/rich-iannone/DiagrammeR/issues/421#issuecomment-1014160634, I can generate the desired graph if I paste the lines together.
```r
DiagrammeR::mermaid(paste0(graph, collapse = "\n"))
```
![Screen Shot 2022-03-15 at 10 16 22 PM](https://user-images.githubusercontent.com/1580860/158503436-308a84c9-59dc-499f-9742-d2eb7aa510b2.png)
But if I do not paste the lines, I see this:
![Screen Shot 2022-03-15 at 10 16 36 PM](https://user-images.githubusercontent.com/1580860/158503415-60740e50-1be1-4970-b6e5-a70b2c4ec7bc.png)
Mermaid graphs have been great for `targets`: https://github.com/ropensci/targets/pull/802
hyb1996-guest/AutoJsIssueReport
239219801
Title: QQ profile card like script Question: username_0: Description:
---
Only set up to like friends

Device info:
---
<table>
<tr><td>App version</td><td>2.0.10b Beta</td></tr>
<tr><td>App version code</td><td>127</td></tr>
<tr><td>Android build version</td><td>eng.compiler.20170511.122319</td></tr>
<tr><td>Android release version</td><td>5.1.1</td></tr>
<tr><td>Android SDK version</td><td>22</td></tr>
<tr><td>Android build ID</td><td>LMY47V release-keys</td></tr>
<tr><td>Device brand</td><td>vivo</td></tr>
<tr><td>Device manufacturer</td><td>vivo</td></tr>
<tr><td>Device name</td><td>PD1501BD</td></tr>
<tr><td>Device model</td><td>vivo X6SPlus D</td></tr>
<tr><td>Device product name</td><td>PD1501BD</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table>
openworm/OpenWorm
15461470
Title: Create sample NeuroML connectome output Question: username_0: - [x] install NEURON and NeuroConstruct - [ ] load the connectome up in NEURON - [ ] set NEURON to write out data files for the voltage of all 302 neurons - [ ] graph the dataset using matplotlib to see all traces - [ ] upload graph and data files to GitHub - [ ] repeat data output, graphing and uploading steps for all compartments (302 x ~10) Status: Issue closed Answers: username_0: This all happens now within c302. Closing this.
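For illustration, a minimal sketch of the matplotlib graphing step from the checklist above, with random numbers standing in for the NEURON voltage output files:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: rows are time samples, columns are the 302 neurons.
t = np.linspace(0, 1, 1000)
v = np.random.randn(1000, 302).cumsum(axis=0)  # fake voltage traces

plt.plot(t, v, linewidth=0.3, alpha=0.5)  # one thin line per neuron
plt.xlabel("time (s)")
plt.ylabel("membrane voltage (arbitrary units)")
plt.title("All 302 neuron traces")
plt.savefig("all_traces.png", dpi=150)
```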
samotracio/mkone
343868032
Title: TypeError Question: username_0: Hi, I tried to follow your example, but the result is:
TypeError: unsupported operand type(s) for +: 'method-wrapper' and 'int'
Maybe something is wrong with sampy. Answers: username_1: Hi, It would be great to know which line number in the source code is generating the error and what exactly you were executing. I have just tried the first example under Python 2.7 and it runs fine. Remember Python 3 is not fully supported because sampy only works on Python 2 (nevertheless, data point generation should work fine just by changing some print/assignment syntax from Python version 2 to version 3. In fact I have a preliminary Python 3 version of mkone.py) username_0: TypeError Traceback (most recent call last)
/work/xyz/work/simulation/mkone/mkone.py in <module>()
892 doplot=False,colorize=True,cosmo=cosmo,oformat='table')
893
--> 894 send(kone,'kone',cols=['ra','dec','z','comd','px','py','pz'])
895
896
/work/xyz/work/simulation/mkone/mkone.py in send(dat, fname, cols, disc)
185 ''' Send a 2D array or astropy table to topcat via SAMP '''
186 from sampc import Client
--> 187 c = Client()
188 c.send(dat,fname,cols=cols,disc=disc)
189
/work/xyz/work/simulation/mkone/sampc.pyc in __init__(self, addr, hub)
171 # even though it shouldn't
172 self.hub = sampy.SAMPHubServer(addr=addr)
--> 173 self.hub.start()
174 else:
175 self.hub = None
/work/share/software/miniconda2/lib/python2.7/site-packages/sampy.pyc in start(self, wait)
1580 self._updateLastActivityTime()
1581
-> 1582 if self._createLockFile() == False:
1583 self._is_running = False
1584 return
/work/share/software/miniconda2/lib/python2.7/site-packages/sampy.pyc in _createLockFile(self)
1655 self._log.debug("Lock-file: " + lockfilename)
1656
-> 1657 result = self._new_lockfile(lockfilename)
1658 if result:
1659 self._lockfilename = lockfilename
/work/share/software/miniconda2/lib/python2.7/site-packages/sampy.pyc in _new_lockfile(self, lockfilename)
1676 # Custom tokens
1677
-> 1678 lockfile.write("hub.id=%d-%s\n" % (os.getpid(), threading._counter + 1))
1679
1680 if self._label == "":
TypeError: unsupported operand type(s) for +: 'method-wrapper' and 'int'
docker/docker-py
894619620
Title: websocket-client dependency range isn't compatible with newly released 1.0.0 Question: username_0: When doing a `pip install` with a fresh venv, it installs websocket-client 1.0.0 due to the websocket-client>=0.32.0 requirement. I checked the git tag 5.0.0 and the requirements.txt pins a static version, so I don't understand why it would try to grab 0.32.0 - maybe someone could enlighten me on that.
requirements.txt
```
docker==5.0.0
```
test.py
```
import docker
```
Running `python3 test.py` gives the following error
```
Traceback (most recent call last):
File "test_failing_docker_sdk.py", line 1, in <module>
import docker
File "/projects/gwe/gdes/jenkins/src/main/python/env/lib/python3.6/site-packages/docker/__init__.py", line 2, in <module>
from .api import APIClient
File "/projects/gwe/gdes/jenkins/src/main/python/env/lib/python3.6/site-packages/docker/api/__init__.py", line 2, in <module>
from .client import APIClient
File "/projects/gwe/gdes/jenkins/src/main/python/env/lib/python3.6/site-packages/docker/api/client.py", line 10, in <module>
from .. import auth
File "/projects/gwe/gdes/jenkins/src/main/python/env/lib/python3.6/site-packages/docker/auth.py", line 5, in <module>
import six
ModuleNotFoundError: No module named 'six'
```
logs when installing with pip
```
Collecting websocket-client>=0.32.0 (from docker==5.0.0->-r requirements.txt (line 2))
Using cached https://files.pythonhosted.org/packages/ba/d1/501076b54481412df1bc4cdd1fe479f66e17857c63ec5981bedcdc2ca793/websocket_client-1.0.0-py2.py3-none-any.whl
```
```
$ pip freeze && python --version && docker version
certifi==2020.12.5
chardet==4.0.0
docker==5.0.0
idna==2.10
requests==2.25.1
urllib3==1.26.4
websocket-client==1.0.0
Python 3.6.8
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7<PASSWORD>0ea95e65ba581<PASSWORD>
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: <PASSWORD>
```
Answers: username_1: This isn't really related to the websocket-client changes, but due to docker not properly tracking its dependencies and relying on another package to pull them in. The change to `setup.py` should be to explicitly include the `six` dependency. username_2: Ran into a similar problem yesterday when trying to install on a CentOS7 box where I'm forced to use `python 2.7.5`, therefore restricted to version `4.4.4` of the docker library. The only work-around I've found is to explicitly install the `websocket-client` dependency with a `<1` argument prior to installing `docker`. Is there any chance of getting a `4.4.5` release of the docker library to support those of us stuck in the last decade? I know it's a long shot, but it would be incredibly helpful; there are lots of places, e.g. the ansible docker community collection, where we're told to explicitly use version 4.4.4 of the docker library on systems running python 2.7, and currently all of that documentation is incorrect due to this dependency issue.
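An untested sketch of username_2's workaround, expressed as requirements.txt pins so pip never resolves the incompatible 1.x line (the exact docker version is whatever your Python supports):

```
docker==4.4.4
websocket-client<1
```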
keras-team/keras-cv
1099548281
Title: CutOut augmentation Question: username_0: Thanks @chjort Status: Issue closed Answers: username_1: https://github.com/tensorflow/addons/blob/master/tensorflow_addons/image/cutout_ops.py https://github.com/tensorflow/models/blob/master/official/vision/image_classification/augment.py#L267 username_2: @username_0 There is another augmentation similar to this one called [CoarseDropout](https://albumentations.ai/docs/api_reference/augmentations/dropout/coarse_dropout/#albumentations.augmentations.dropout.coarse_dropout.CoarseDropout). I think it's somewhat better to have than simple CutOut. [TF Code. ](https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout) ![output_13_0](https://user-images.githubusercontent.com/17668390/150628277-fde78218-99d5-41a1-a060-7c9d98e04c1c.png) Here is another variant on the cutout, called [progressive sprinkles](https://twitter.com/jeremyphoward/status/1150927513666953216) username_0: Thanks @chjort Status: Issue closed
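For reference, a minimal sketch of the TensorFlow Addons op linked above; the batch here is dummy data, and `mask_size` must be even:

```python
import tensorflow as tf
import tensorflow_addons as tfa

images = tf.random.uniform((4, 224, 224, 3))  # dummy batch of images
# Zero out one randomly placed 50x50 patch per image.
augmented = tfa.image.random_cutout(images, mask_size=(50, 50), constant_values=0)
```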
Zulko/moviepy
1145114108
Title: is there a way to remove black bars when concatenating? Question: username_0: Is there a way to remove black bars when concatenating different-sized videos without messing up their aspect ratio? For example, if I try to concatenate two videos, one with a 500px width and the other with a 1000px width, the output video will have the same width as the 1000px video, and that will cause black bars when the first video plays because its width is 500px. I tried resizing the first and second video, or the output video, but that doesn't seem to work. Answers: username_1: It was already answered [here](https://github.com/Zulko/moviepy/issues/663#issuecomment-395322381) . username_0: Yeah, I did something similar: I used the average width/height ratio across the videos and resized all of them to that resolution. Although I wish there was a way to just remove the black bars.
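A minimal sketch of the average-ratio approach the poster describes, with hypothetical file names; stretching every clip to one shared size removes the bars at the cost of slight distortion:

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

clips = [VideoFileClip("a.mp4"), VideoFileClip("b.mp4")]  # hypothetical paths

# One shared output size derived from the mean aspect ratio of the inputs.
target_h = 720
mean_ratio = sum(c.w / c.h for c in clips) / len(clips)
target_w = int(target_h * mean_ratio)

# Every clip now fills the whole frame, so no letterboxing/pillarboxing.
resized = [c.resize((target_w, target_h)) for c in clips]
concatenate_videoclips(resized).write_videofile("out.mp4")
```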
joeyklee/bc-climate-explorer
210900405
Title: Feasibility of a Climate Explorer for Europe Question: username_0: @joeyklee @username_1 During our last Skype meeting, Kevin raised an interest in doing a similar tool for Germany. I looked into whether the essential data are out there to do this. The answer is YES! In my opinion, it would be great to do the tool at the European scale, to capture a broad diversity of climates and climate change trajectories, and a bigger user community. There are two components we need: climate data and climatic zones.
1. Climate data: available at [ClimateEU](https://sites.ualberta.ca/~ahamann/data/climateeu.html). It's in the same format as our data for BC. The only thing missing is time series data for the future projections, so our projections would have to be generalized, unless I generated our own downscaled time series (lots of work but not impossible).
2. Climate zonation: the [European Environmental Stratification](http://www.wur.nl/en/Expertise-Services/Research-Institutes/Environmental-Research/Projects/EBONE-2/Products/European-Environmental-Stratification.htm) has 84 climatic zones that are further divided based on elevation. This sounds like exactly what we need. I have contacted the authors to try to get the spatial data, and also asked some other folks in Europe for confirmation that this is the best zonation system for our purposes. Great success! Answers: username_1: That's awesome news. Thanks for researching that! username_0: I received the European stratification spatial data and it looks decent. Still waiting to get an independent opinion on its utility.
alanxz/SimpleAmqpClient
357043358
Title: can I use it in multithreading? Question: username_0: Hi, I want to use the SimpleAmqpClient lib, but my project is multithreaded. I want to create a new AMQP client object in each thread - can I use it directly? Answers: username_1: SimpleAmqpClient `AmqpClient::Channel` objects can be used in a multi-threaded environment provided that concurrent access from multiple threads is synchronized. Status: Issue closed
brightsparklabs/appcli
734154636
Title: Rename 'stop' command as 'shutdown' Question: username_0: It would be clearer if the 'stop' command were renamed to 'shutdown'. 'Shutdown' is clearer in suggesting to the end user that the application will be halted. Answers: username_1: To ease the transition, make `stop` a hidden command which aliases the new `shutdown` command. Status: Issue closed
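A minimal sketch of the hidden-alias idea, assuming a click-style CLI (which may not match appcli's internals); `stop` stays invocable but disappears from `--help`:

```python
import click

@click.group()
def cli():
    """Stand-in for the real appcli command group."""

@cli.command()
def shutdown():
    """Halts the application."""
    click.echo("shutting down...")

@cli.command(hidden=True)  # kept only for backwards compatibility
@click.pass_context
def stop(ctx):
    ctx.invoke(shutdown)

if __name__ == "__main__":
    cli()
```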
Anaconda-Platform/anaconda-client
343242084
Title: Fix flake8 errors Question: username_0: ``` ./binstar_client/_version.py:134:1: C901 'git_versions_from_keywords' is too complex (11) ./binstar_client/_version.py:179:1: C901 'git_pieces_from_vcs' is too complex (11) ./binstar_client/_version.py:419:1: C901 'get_versions' is too complex (11) ./binstar_client/requests_ext.py:3:1: F401 'codecs' imported but unused ./binstar_client/requests_ext.py:16:1: C901 'encode_multipart_formdata_stream' is too complex (11) ./binstar_client/requests_ext.py:44:5: E731 do not assign a lambda expression, use a def ./binstar_client/requests_ext.py:68:32: E127 continuation line over-indented for visual indent ./binstar_client/requests_ext.py:95:22: E251 unexpected spaces around keyword / parameter equals ./binstar_client/requests_ext.py:146:30: E231 missing whitespace after ':' ./binstar_client/__init__.py:6:1: F401 'warnings' imported but unused ./binstar_client/__init__.py:14:1: F403 'from .errors import *' used; unable to detect undefined names ./binstar_client/__init__.py:28:1: E402 module level import not at top of file ./binstar_client/__init__.py:53:27: E231 missing whitespace after ':' ./binstar_client/__init__.py:94:16: F405 'BinstarError' may be undefined, or defined from star imports: .errors ./binstar_client/__init__.py:102:19: F405 'BinstarError' may be undefined, or defined from star imports: .errors ./binstar_client/__init__.py:114:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:115:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:116:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:117:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:118:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:119:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:120:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:121:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:122:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:123:22: E128 continuation line under-indented for visual indent ./binstar_client/__init__.py:126:101: E501 line too long (101 > 100 characters) ./binstar_client/__init__.py:133:101: E501 line too long (119 > 100 characters) ./binstar_client/__init__.py:193:5: C901 'Binstar._check_response' is too complex (12) ./binstar_client/__init__.py:196:101: E501 line too long (120 > 100 characters) ./binstar_client/__init__.py:197:101: E501 line too long (117 > 100 characters) ./binstar_client/__init__.py:209:12: E713 test for membership should be 'not in' ./binstar_client/__init__.py:211:101: E501 line too long (112 > 100 characters) ./binstar_client/__init__.py:215:13: E722 do not use bare except' ./binstar_client/__init__.py:306:101: E501 line too long (101 > 100 characters) ./binstar_client/__init__.py:312:101: E501 line too long (101 > 100 characters) ./binstar_client/__init__.py:328:33: E231 missing whitespace after ':' ./binstar_client/__init__.py:334:5: E303 too many blank lines (2) ./binstar_client/__init__.py:459:5: E303 too many blank lines (2) ./binstar_client/__init__.py:475:30: E231 missing whitespace after ':' ./binstar_client/__init__.py:497:5: E303 too many blank lines (2) ./binstar_client/__init__.py:498:101: E501 line too long (118 > 100 characters) ./binstar_client/__init__.py:540:70: 
E231 missing whitespace after ':' ./binstar_client/__init__.py:582:1: E402 module level import not at top of file ./binstar_client/mixins/package.py:8:1: E302 expected 2 blank lines, found 1 ./binstar_client/mixins/package.py:11:22: E127 continuation line over-indented for visual indent ./binstar_client/mixins/package.py:21:1: W391 blank line at end of file ./binstar_client/mixins/__init__.py:1:1: W391 blank line at end of file ./binstar_client/mixins/organizations.py:3:1: E302 expected 2 blank lines, found 1 ./binstar_client/mixins/organizations.py:87:1: W391 blank line at end of file ./binstar_client/mixins/channels.py:8:1: F401 'binstar_client.errors.BinstarError' imported but unused ./binstar_client/mixins/channels.py:10:1: E302 expected 2 blank lines, found 1 ./binstar_client/mixins/channels.py:35:1: W293 blank line contains whitespace ./binstar_client/mixins/channels.py:41:1: W293 blank line contains whitespace ./binstar_client/mixins/channels.py:52:1: W293 blank line contains whitespace ./binstar_client/mixins/channels.py:56:101: E501 line too long (103 > 100 characters) ./binstar_client/mixins/channels.py:58:1: W293 blank line contains whitespace ./binstar_client/mixins/channels.py:68:72: W291 trailing whitespace ./binstar_client/mixins/channels.py:69:1: W293 blank line contains whitespace ./binstar_client/mixins/channels.py:73:1: W293 blank line contains whitespace [Truncated] ./binstar_client/inspect_package/tests/test_pypi.py:18:39: E231 missing whitespace after ':' ./binstar_client/inspect_package/tests/test_pypi.py:20:38: E201 whitespace after '[' ./binstar_client/inspect_package/tests/test_pypi.py:23:101: E501 line too long (108 > 100 characters) ./binstar_client/inspect_package/tests/test_pypi.py:24:27: E127 continuation line over-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:40:46: E127 continuation line over-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:42:32: E127 continuation line over-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:60:1: E302 expected 2 blank lines, found 1 ./binstar_client/inspect_package/tests/test_pypi.py:63:5: E301 expected 1 blank line, found 0 ./binstar_client/inspect_package/tests/test_pypi.py:69:9: E303 too many blank lines (2) ./binstar_client/inspect_package/tests/test_pypi.py:89:30: E128 continuation line under-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:90:30: E128 continuation line under-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:91:30: E128 continuation line under-indented for visual indent ./binstar_client/inspect_package/tests/test_pypi.py:97:9: E303 too many blank lines (2) ./binstar_client/inspect_package/tests/test_pypi.py:102:5: E303 too many blank lines (2) ./binstar_client/inspect_package/tests/test_pypi.py:109:9: E303 too many blank lines (2) ./binstar_client/inspect_package/tests/test_pypi.py:159:9: E303 too many blank lines (2) ./binstar_client/inspect_package/tests/test_pypi.py:162:101: E501 line too long (101 > 100 characters) ./binstar_client/inspect_package/tests/test_pypi.py:163:101: E501 line too long (104 > 100 characters) ./binstar_client/inspect_package/tests/test_pypi.py:179:1: E303 too many blank lines (3) ```
moo-ai/moo-ai.github.io
437037635
Title: [FATAL][2019-04-25 06:58:22] The online openlab deployment <master> has gone down, please recover ASAP! Question: username_0: To recover the ENV, you need to do the following things manually. The target node otc-openlab-zuul in the master deployment failed to be accessed at IP 192.168.211.244. Have a try: ssh [email protected] And try to log in to the cloud to check whether the resource exists.
Status: Issue closed
FTP-YCAB-Fullstack/GP3-dicafein
1007802022
Title: user can delete order data Question: username_0: - [ ] the user can delete data using an id
- [ ] the user sends a token
- [ ] the user sends data to the '/orders/:id' endpoint with the DELETE method
- [ ] the user receives a JSON response containing the data the user sent (see the sketch below)
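An illustrative sketch of these acceptance criteria, written with Flask as an assumption (the real service may use a different stack; Flask routes use `<int:order_id>` where the story writes `:id`, and the token check here is a placeholder):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
orders = {1: {"item": "coffee", "qty": 2}}  # toy in-memory store

@app.route("/orders/<int:order_id>", methods=["DELETE"])
def delete_order(order_id):
    if request.headers.get("access_token") is None:  # placeholder token check
        abort(401)
    deleted = orders.pop(order_id, None)  # delete by id
    if deleted is None:
        abort(404)
    return jsonify(deleted)  # echo the deleted record back as JSON
```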
google/mediapipe
714738504
Title: How to get Iris coordinates using Iris desktop version Question: username_0: Hi, I'm using the Iris model (Desktop version) to get iris landmarks for an input video. However, on running the below command
bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_cpu_video_input \
--calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu_video_input.pbtxt \
--input_side_packets=input_video_path=<input video path>,output_video_path=<output video path>
I'm getting facial and in-depth iris landmarks as well (refer to the screenshot below)
![image](https://user-images.githubusercontent.com/31441117/95071646-a2b7b700-0727-11eb-8a64-d9a6b7f3adf0.png)
And I'm expecting an output like the screenshot below:
![image](https://user-images.githubusercontent.com/31441117/95071729-c2e77600-0727-11eb-933c-d09b09bfba55.png)
I have 3 queries here:
1. How can I get the above expected output video?
2. How exactly can I get the coordinates of the left and right iris using the Desktop version?
3. Can I get the iris coordinates for an image instead of a video? Answers: username_1: any updates on this? username_2: I found a solution based on https://github.com/google/mediapipe/issues/200 I modified the hand tracking file to match the iris tracking. See [here](https://gist.github.com/username_2/a4dc674e52f3e689b0e268851243f6b2). You need to add these lines to the BUILD file under mediapipe/examples/desktop
```
cc_library(
    name = "demo_run_graph_main_out",
    srcs = ["demo_run_graph_main_out.cc"],
    deps = [
        "//mediapipe/calculators/util:landmarks_to_render_data_calculator",
        "//mediapipe/framework:calculator_framework",
        "//mediapipe/framework/formats:image_frame",
        "//mediapipe/framework/formats:image_frame_opencv",
        "//mediapipe/framework/formats:landmark_cc_proto",
        "//mediapipe/framework/port:commandlineflags",
        "//mediapipe/framework/port:file_helpers",
        "//mediapipe/framework/port:opencv_highgui",
        "//mediapipe/framework/port:opencv_imgproc",
        "//mediapipe/framework/port:opencv_video",
        "//mediapipe/framework/port:parse_text_proto",
        "//mediapipe/framework/port:status",
    ],
)
```
Add these lines to the BUILD file under mediapipe/examples/desktop/iris_tracking
```
cc_binary(
    name = "iris_tracking_out_cpu",
    deps = [
        "//mediapipe/examples/desktop:demo_run_graph_main_out",
        "//mediapipe/graphs/iris_tracking:iris_tracking_cpu_deps",
    ],
)
```
Run: `bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu` Then: `bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_out_cpu --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu.pbtxt` To save the output into a text file, add `> filename.txt` when running the application. Note that this modification outputs only the iris markers and not the facial markers. username_0: Hi, I followed the steps you mentioned, I'm getting the below error: Linking of rule '//mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_out_cpu-2.params username_2: Hi, Can you share the bazel build and bazel bin commands? Also, can you share the full error message? 
username_0: I ran this bazel build command - bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu Below is the error I got:
ERROR: /mediapipe/mediapipe/examples/desktop/iris_tracking/BUILD:62:1: Linking of rule '//mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_out_cpu-2.params
Use --sandbox_debug to see verbose messages from the sandbox
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o:function _start: error: undefined reference to 'main'
bazel-out/k8-opt/bin/external/org_tensorflow/tensorflow/lite/nnapi/_objs/nnapi_implementation/nnapi_implementation.o:nnapi_implementation.cc:function (anonymous namespace)::ASharedMemory_create(char const*, unsigned long): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
collect2: error: ld returned 1 exit status
Target //mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu failed to build
username_0: Hi, I was able to complete the build part; where exactly do I need to add the > filename.txt while running the application? Right now I'm using the below command to run the application:
bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_cpu_video_input \
--calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu_video_input.pbtxt \
--input_side_packets=input_video_path=<input video path>,output_video_path=<output video path> username_2: Hi, sorry for not answering the previous comment, probably missed it. To output a text file just add the "> filename.txt" at the end of the run command: `bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_out_cpu --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu.pbtxt > filename.txt` The output file will be in the mediapipe folder. username_0: Hi, this build is not working in the case where I am using an input video from my local machine; for this I think they are using simple_run_graph_main instead of demo_run_graph_main. Is there any way to make the build command you mentioned run for an input video instead of the webcam? username_2: Hi, you can use demo_run_graph_main with local videos: instead of writing "--input_side_packets=input_video_path=,output_video_path=", just use "--input_video_path=PATH_TO_YOUR_VIDEO" for using a video from your drive and "--output_video_path=PATH_TO_YOUR_VIDEO" to write the video file. Say my input video is in the mediapipe folder and I want to output the video and the landmarks to this folder as well. The build command is as before: `bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu` To run the program: `bazel-bin/mediapipe/examples/desktop/iris_tracking/iris_tracking_out_cpu --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu.pbtxt --input_video_path=input.mp4 --output_video_path=output.mp4 > landmarks.txt` username_0: Thanks! It worked, but when I'm annotating the image with the coordinates, they are not in place. Is there any conversion factor? 
username_0: Hi, just one question: is there a way to input a static image instead of a video here? And what should be the ideal image size in that case? username_2: Hi, glad it works :) Regarding the conversion factor - I think there is some conversion, since the values are between 0 and 1. I would start with the size of the image or the size of your screen and multiply the coordinates based on it. For your image question, I don't have any experience with it, but on the mediapipe website there is an [example](https://google.github.io/mediapipe/solutions/iris.html#single-image-depth-estimation) of how to do it. username_3: @username_2 tried running `bazel build` but got an error:
```
bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu
INFO: Analyzed target //mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /Users/cttippur/plaground/mediapipe/mediapipe/examples/desktop/BUILD:54:11: //mediapipe/examples/desktop:demo_run_graph_main_out: missing input file '//mediapipe/examples/desktop:demo_run_graph_main_out.cc'
Target //mediapipe/examples/desktop/iris_tracking:iris_tracking_out_cpu failed to build
Use --verbose_failures to see the command lines of failed build steps.
ERROR: /Users/cttippur/plaground/mediapipe/mediapipe/examples/desktop/BUILD:54:11 1 input file(s) do not exist
INFO: Elapsed time: 0.661s, Critical Path: 0.38s
INFO: 1 process: 1 internal.
FAILED: Build did NOT complete successfully
```
username_3: Found the source code here: https://gist.github.com/mgyong/7353474eb3e57ba95621632af274911a next set of errors: `mediapipe/examples/desktop/demo_run_graph_main_out.cc:28:10: fatal error: 'mediapipe/calculators/util/detections_to_render_data_calculator.pb.h' file not found ` username_2: @username_3 you didn't create the "demo_run_graph_main_out.cc" file, or it is misplaced. Please see my first comment on this issue for how to create this file and where to save it. username_3: Apologies, and thank you for pointing me in the right direction. I do see the landmarks. Can you please help me understand the landmarks? I see output like this. I want to be able to get the pupil coordinates of both eyes.
```
landmark {
x: 0.611678839
y: 0.392118603
z: -0.0238272324
visibility: 0
presence: 0
}
```
username_3: @username_2 - any visibility on how to interpret the landmarks? username_4: @username_3 I believe the x and y coordinates are pixel coordinates normalized to the image resolution, so if, for example, the image is 640x480 and the landmark is at x = 0.5, y = 0.5, the corresponding pixel is x=320 / y=240. I arrived at this point too, but now every time the face is not clearly visible (like when, in the vanilla code, a green rectangle with very few landmarks appeared) the cycle just halts without errors and must be closed and restarted. username_3: @username_4 Thanks for getting back. That makes sense. I was looking to see if some metadata can be added to the landmarks. I seem to get 10 landmarks for each frame, and I can see that half of them are from the left part of the face and the other half from the right. I am struggling to understand which part of the face each coordinate represents. Appreciate any pointers. username_5: @username_3 Hi, may I ask: is there a way to input a static image instead of a video here? 
username_5: Thanks, I've already solved the static image input. username_6: I am sorry, but I cannot open the source code. username_7: I am in the same situation. I have pinpointed "if (!poller_landmark.Next(&landmark_packet)) break;" as the issue but don't know where to go from there. When there is no face to be detected, there are obviously no coordinates that can be printed, and this causes problems for the poller_landmarks stream. Not sure how to proceed and would appreciate any insight. username_8: Hi @username_0, could you please respond if you're still looking for a resolution to the above query. Thanks! username_7: I have gotten [#867](https://github.com/google/mediapipe/pull/867) to work; it is essentially the same issue but for the FaceMesh desktop version. However, I'm not sure if it will transfer due to differences in the iris and FaceMesh graphs. username_8: Hi @username_7, could you please raise a new issue with complete details of your query. Thanks! username_9: `if (!poller_landmark.Next(&landmark_packet)) break; auto& output_landmarks = landmark_packet.Get<mediapipe::NormalizedLandmarkList>();` - after adding these lines, no output screen is shown, whereas commenting them out gives the output screen. Something is not working - please help.
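To make username_4's normalized-coordinate explanation above concrete, a tiny sketch of converting the landmark values to pixels:

```python
def to_pixels(x_norm, y_norm, width, height):
    # Landmarks are fractions of the frame size, so (0.5, 0.5) in a
    # 640x480 frame maps to pixel (320, 240).
    return int(x_norm * width), int(y_norm * height)

# Using the landmark values from the output pasted above, on a 640x480 frame.
print(to_pixels(0.611678839, 0.392118603, 640, 480))
```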
rwth-afu/UniPagerNextion
298317884
Title: Not compatible with Unipager 1.0.0
Question:
username_0: There is a new JSON payload named ``StatusUpdate``. It is not handled yet. The currently connected node is also not displayed.
Answers:
username_0: Fixed in 52719cac4dbeff37116f0f41a2660f3120f75fdf
Status: Issue closed
ChingWenWen/BaseProject-Cpp
392695932
Title: HW4 - Drawing Sequence Diagrams
Question:
username_0: Basically, sequence diagrams must be drawn according to the class diagram. The top of the diagram shows the objects created from the classes needed to implement a given feature, and the interactions between those class objects in the middle of the diagram are the methods the classes themselves own; data is passed between objects by calling methods. For a concrete example, see Chap 3: System Modeling p.113.
The class objects and class methods that appear in a sequence diagram basically all have to be things that appear in the class diagram.
D0611000
1. The objects are acceptable, but the product page has no method for sending the type and quantity of the purchased products. (-5)
D0676627
1. As explained above, the top of the diagram should show class objects and the middle should show class methods. (-10)
D0611134
1. As explained above, the top of the diagram should show class objects and the middle should show class methods. Your class diagram also does not include the back-end database. (-10)
D0641621
1. As explained above, the top of the diagram should show class objects and the middle should show class methods. (-10)
D0642369
Answers:
username_1: The sequence diagram has been revised.
username_0: Actually, your class diagram has no class for anything like personal data. Since buying and selling is involved, yet there is no database for user data anywhere in your system, having just a registration feature is actually useless (there is nowhere to store the user data). So consider whether you want to revise the class diagram; looking at the class diagram alone there are no real errors, but to support your buying/selling features it is probably missing things.
D0676627
1. Because you have no seller (user data) class, your revision is actually of no use.
username_2: The product-description method in the class diagram has been revised.
username_0: D0676627
1. To draw it properly, you should add a "create order" method to the "purchase page" and connect it to the "order details". Finally, add a "user data" class which, besides basic attributes like "account" and "password", also needs an "accept order" method (all of the above concerns the class diagram). Only then will it match the functionality your use case specification wants to achieve. Then draw the sequence diagram accordingly.
So for the Introduction to Software Engineering assignments, if you don't think ahead to the later parts, even if the earlier parts are correct you may have to go back and revise them to match. So this revision still can't be given credit.
username_2: TA, sorry, the revision above was made by D0611000.
username_0: My bad, haha.
username_1: The class diagram and sequence diagram have been revised.
username_3: Sequence diagram revised. D0611134
username_4: Sequence diagram revised. D0641621
username_0: D0611134
1. Actually your problem is still the same. Without a back-end database in the class diagram, the sequence diagram you draw will not be correct. Your class diagram has a registration page, and treating it as a login page is barely acceptable, but your class diagram has no "HTML back-end" class and no "credential comparison" method, so this does not follow what was said at the top: "sequence diagrams must be drawn according to the class diagram." (-5)
P.S. Take student D0641621: although minimal, the "main page" does exist in the class diagram, and the "main page" does have the "group data" method, so even if it's not completely correct, it's at least close.
18F/frontend
154105604
Title: Front end architecture review for Identity project
Question:
username_0: In order to improve front end practices across 18F, the Identity project should go through a 1 hour tech review to come to conclusions on front end technology. Another front end team member and I will do the review. This will include ~1 hour of prep, where we decide the format of the meeting, and a ~1 hour meeting. I'd like people to volunteer to partner on this review. I think this time around we'll do first come, first served, but we can modify that process if required. Please volunteer by commenting. Anybody can volunteer, except people on the Identity project.
Answers:
username_1: I'd like to be involved, even if just overseeing how the review is done -- I'd like to be more familiar with the process because I'd like to push for other projects to be reviewed.
username_2: I would also like to be involved in the review and have adequate time to participate
username_3: re: Identity team, me and @hursey013 will be joining & showing you around the codebase!
username_0: The review was completed on 5/25/2016. The next step is to make a contributing document for the Identity team and a general research document on how the process went.
Status: Issue closed
username_0: Documentation put in identity repo: https://github.com/18F/identity-idp/blob/master/docs/frontend.md
ToulouseJug/call-for-paper
429242494
Title: Getting started with GraphQL with Spring
Question:
username_0: ## Getting started with GraphQL with Spring
### The speaker(s)
<NAME>
Software architect and independent Java expert
### Description of your talk
GraphQL is an alternative to REST for developing our web APIs. Initially designed by Facebook and now a standard, GraphQL has gained popularity thanks to its performance and maintainability advantages. We will present the characteristics and inner workings of GraphQL, then show how to put it to work in a Spring Boot application.
### Miscellaneous information
* Difficulty level: intermediate
* Duration: 40 minutes
* Format: mostly slides, a bit of live coding
* Availability: from May 2019; let me know at least one month in advance
Answers:
username_1: Hi Florian, your topic has been on standby for a little while now. Would you like to come back in January? I think we will still be remote by then, but if the lockdown eases we can discuss doing it in person together.
username_0: Hi Arnaud, thank you for thinking of me. I will have to decline; too much work at the moment to find the time to update the presentation. I am updating my availability in the description.
username_1: Yes, noted. Do you want us to plan it for February, then?
username_0: April instead, as I added above.
username_1: Ah, you wrote "April 2020"; I think you actually meant 2021 😉 So we'll get back in touch around late January/early February, OK?
kubernetes/kubernetes
141576880
Title: Unable to create Secrets as Environment Variables.
Question:
username_0: I first tried to create Secrets as environment variables in a Pod as instructed at the "http://kubernetes.io/docs/user-guide/secrets/" URL. But it gives an error like the one below:
<snip>
error validating "pod.yaml": error validating data: [field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource, field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource]; if you choose to ignore these errors, turn validation off with --validate=false
</snip>
I was trying to use it in a ReplicationController. I tried to create a ReplicationController using Secrets as environment variables but got the error message below:
<snip>
error validating "tomcatrc.yml": error validating data: [field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource, field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource]; if you choose to ignore these errors, turn validation off with --validate=false
</snip>
Kindly advise on this.
Answers:
username_1: @username_0 is this the thing you are talking about? https://github.com/username_1/downward/blob/master/pod.json
username_0: Kubectl binaries are up to date.
username_1: @username_0 I am able to run `kubectl create -f rc.yml`; the file is from the link you pasted above: https://github.com/pavlovml/match/blob/master/rc.yml
username_0: That's strange. Please share your current version of kubectl.
username_1: I do not use a downloaded version, I am using the built one, so there is no meaningful version :) Could you try to use the `kubectl` built when you run `./cluster/hack/build-go.sh`? Anyway, it would be better to give the kubectl version you are using, so other folks could help to reproduce it and fix it.
username_0:
```
root@# gcloud version
Google Cloud SDK 101.0.0
bq 2.0.24
bq-nix 2.0.18
core 2016.03.11
core-nix 2016.02.22
gcloud
gsutil 4.17
gsutil-nix 4.16
kubectl
kubectl-linux-x86_64 1.1.7
root@# kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"<PASSWORD>", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.8", GitCommit:"<PASSWORD>", GitTreeState:"clean"}
root@# kubectl create -f tomcatrc.yml
error validating "tomcatrc.yml": error validating data: [field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource, field fieldRef: is required, found invalid field secretKeyRef for v1.EnvVarSource]; if you choose to ignore these errors, turn validation off with --validate=false
root@# cat tomcatrc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: default
  name: testtomcat
spec:
  replicas: 2
  selector:
    app: testtomcat
  template:
    metadata:
      labels:
        app: testtomcat
    spec:
      containers:
        - name: testtomcat
          image: gcr.io/fluted-oasis-107921/tomcat:vcs6
          ports:
            - containerPort: 8080
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: vcs-db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vcs-db-secret
                  key: password
```
username_1: @username_0 it seems that you are using v1.1, and the feature you requested does not exist there; see the related code: https://github.com/kubernetes/kubernetes/blob/release-1.1/pkg/api/v1/types.go#L793-L796 There is only `fieldRef`.
username_2: @username_1 is right (thanks a lot for responding on this issue). This feature is available in 1.2.0, which is released today.
username_2: @username_0, is it possible for you to try it out on 1.2?
username_0: First of all, I am using Kubernetes on a Google Container cluster, not on a standalone instance or server. So I have to use kubectl to create any kind of resource using a yaml/json file. @username_2 I am not getting what version you are talking about.
username_3: You are using a feature which is only available in Kubernetes v1.2.0 and newer. Currently your GKE cluster has Kubernetes v1.1.x, so your GKE cluster needs an upgrade to the newest version of Kubernetes. But as you can see here, Google only supports versions 1.1.8 and 1.0.7 for now: https://cloud.google.com/container-engine/release-notes#supported_kubernetes_api_versions They will probably start rolling out master node updates in the coming weeks, after which you can issue a cluster upgrade command as described here: https://cloud.google.com/container-engine/docs/clusters/upgrade For now you have to wait and can't use that feature.
username_0: @username_3 Thanks for the input on this.
username_0: I have installed Kubernetes 1.2 on a GCS instance now and am trying to create a ReplicationController/Pod, but I am unable to create it using secrets exported as variables into the pods.
```
root@standalone-kubernetes:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"<PASSWORD>", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"<KEY>", GitTreeState:"clean"}
root@standalone-kubernetes:~# kubectl get secrets
NAME            TYPE     DATA   AGE
mysecret        Opaque   2      2h
vcs-db-secret   Opaque   2      2h
root@standalone-kubernetes:~# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: <PASSWORD>
  restartPolicy: Never
root@standalone-kubernetes:~# kubectl create -f pod.yaml
The Pod "secret-env-pod" is invalid.
* spec.containers[0].env[0].valueFrom: Invalid value: "": may not have more than one field specified at a time
* spec.containers[0].env[1].valueFrom: Invalid value: "": may not have more than one field specified at a time
root@standalone-kubernetes:~# cat tomcatrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: default
  name: testtomcat
spec:
  replicas: 2
  selector:
    app: testtomcat
  template:
    metadata:
      labels:
        app: testtomcat
    spec:
      containers:
        - name: testtomcat
          image: gcr.io/fluted-oasis-107921/tomcat:vcs6
          ports:
            - containerPort: 8080
[Truncated]
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: vcs-db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vcs-db-secret
                  key: <PASSWORD>
          securityContext:
            capabilities: {}
            privileged: true
          imagePullPolicy: IfNotPresent
root@standalone-kubernetes:~# kubectl create -f tomcatrc.yaml
The ReplicationController "testtomcat" is invalid.
* spec.template.spec.containers[0].env[0].valueFrom: Invalid value: "": may not have more than one field specified at a time
* spec.template.spec.containers[0].env[1].valueFrom: Invalid value: "": may not have more than one field specified at a time
```
username_0: I tried to create another manual yaml file and it's working fine for now.
```
root@standalone-kubernetes:~# /usr/local/kubernetes/cluster/kubectl.sh exec -it testtomcat-7qd12 /bin/bash
root@:# echo $DB_USERNAME
testdb
root@:# echo $DB_PASSWORD
<PASSWORD>
root@:#
```
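For reference, the same secret-as-env-var pattern in the YAML above can be expressed with the official Kubernetes Python client. This is a minimal sketch, assuming `pip install kubernetes`, a reachable cluster, and that the `mysecret` secret from the thread already exists:
```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

# Env var populated from a key of the "mysecret" Secret, mirroring the YAML.
env = [
    client.V1EnvVar(
        name="SECRET_USERNAME",
        value_from=client.V1EnvVarSource(
            secret_key_ref=client.V1SecretKeySelector(name="mysecret", key="username")
        ),
    )
]

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="secret-env-pod"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(name="mycontainer", image="redis", env=env)],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```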
kgress/scaffold
631913054
Title: Chrome browser versions older than 75 should set an experimental option for W3C compliance
Question:
username_0: # Summary
With the recent Sauce W3C compliance tickets being merged in, we should also account for compliance on Chrome browser versions older than 75. Per the W3C compliance documentation from Sauce:
```
For tests on Google Chrome version 74 or lower, the W3C capability must be set as an experimental option. ChromeDriver version 75 runs in W3C standard compliant mode by default, so setting this capability won't be necessary in the future.
```
The option for enabling this is:
```
chOpts.setExperimentalOption("w3c", true);
```
# A/C
* Scaffold should check the browser version, and if it contains a version older than 75, we should add the experimental chrome option w3c set to true
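The snippet in the issue is Java; as a hedged cross-language illustration, here is a minimal Python Selenium sketch of the same version check. The `browser_version` input is hypothetical, and `set_experimental_option` is the Python counterpart of `setExperimentalOption`:
```python
from selenium.webdriver.chrome.options import Options

def build_chrome_options(browser_version: str) -> Options:
    """Enable the w3c experimental option for Chrome older than 75."""
    options = Options()
    major = int(browser_version.split(".")[0])
    if major < 75:
        # ChromeDriver 75+ runs W3C-compliant by default, so the option is
        # only needed for older versions.
        options.set_experimental_option("w3c", True)
    return options

print(build_chrome_options("74.0.3729").to_capabilities())
```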
getsentry/sentry-python
446380412
Title: Celery integration captures retried exceptions that it shouldn't when one task directly calls another
Question:
username_0: I have one Celery task that directly calls another, but as a function, not as a task. The first task is auto-retried for a certain set of exceptions, and the second task is retried for a different set of exceptions. [Here is an example task in my actual codebase](https://git.ligo.org/leo-singer/gwcelery/commit/b3768d7158728248067bc73b96cd48faeda78f35), but here is an isolated illustration of the problem.
Consider the following two tasks: `bar`, which is auto-retried for exceptions `A` or `B`, and a wrapper task `bat`, which is auto-retried for exceptions `C` or `D`.
```python
def foo():
    ...  # do something that might raise exception A, B, C, or D

@app.task(autoretry_for=(A, B))
def bar():
    return foo()

@app.task(autoretry_for=(C, D))
def bat():
    return bar()
```
Now invoke `bat`:
```python
bat.delay()
```
Now suppose that during the execution of `bat`, `foo()` raises exception `C`. Even though Celery will retry the task, the exception `C` will be reported to Sentry.
I think that this is due to [this line in sentry_sdk.integrations.celery](https://github.com/getsentry/sentry-python/blob/0.8.0/sentry_sdk/integrations/celery.py#L39):
```python
task.run = _wrap_task_call(task, task.run)
```
Removing that line would probably fix this.
Answers:
username_1: We generally don't have support for `autoretry_for`, but we should fix that. If you raise `Retry` instead, it should work fine.
username_2: Hello :wave: Is it documented that `autoretry` for Celery tasks is not supported? I just stumbled upon an issue with autoretried errors that appeared in Sentry, and I'm not sure if this is a bug or expected behavior. Best regards
username_1: It's a bug, this bug :) I'll prioritize it now since you're the second person to have commented on this
username_2: Wow, that was a fast response! Thank you @username_1. That's not a big inconvenience for me personally, because it's not a big lift to rewrite the task to not use `autoretry_for`; I just wanted to make sure that I'm not missing something on my part. Thanks for all the work you guys are doing over here at Sentry :bowing_man:
username_3: Just commenting in that I found the same issue here, though in our case we have `bind=True`. So, stealing from the top example:
```python
@app.task(bind=True, autoretry_for=(A, B))
def foo(self):
    ...  # do other things that can raise exception A, B

@app.task(bind=True, autoretry_for=(A, B))
def bar(self):
    return foo.run()
```
When we call `bar.delay()`, if `foo` raises `A`, then the exception will still go to Sentry even though it will be retried. It's not a huge problem though, since I can rewrite this to use `self.retry()` instead of `autoretry_for`.
username_5: Quick update: this issue is still open. And because multiple people are reporting this, it will be kept in our minds for one of the next SDK updates.
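A minimal sketch of the `self.retry()` workaround username_3 mentions, assuming a standard Celery app object named `app`; the exception class is a placeholder:
```python
from celery import Celery

app = Celery("example")

class A(Exception):
    """Placeholder for an error worth retrying."""

@app.task(bind=True, max_retries=3)
def bar(self):
    try:
        ...  # work that may raise A
    except A as exc:
        # Explicit retry: self.retry raises celery.exceptions.Retry, which the
        # Sentry integration treats as a retry rather than a task failure.
        raise self.retry(exc=exc, countdown=5)
```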
rTonyCloud/music-world-ecommerce
994397397
Title: Build Heroku server
Question:
username_0: This ticket is for building the Heroku server with the information and details given for the team project. This is to deploy the final and presentation project. Start by using https://dashboard.heroku.com/login
mapstruct/mapstruct
1138426331
Title: How to map objects when source object has custom method for Getter?
Question:
username_0: I'm using a specific framework (it's called TeamCenter). I want to use this library to map from that framework's object to my custom DTO. However, this framework's model class is `ModelObject`, and to get values from the object, I have to call a method like this => `modelObject.getPropertyDisplayableValue("propertyName");`
How can I achieve this with `MapStruct`? I created a `Mapper` class like below.
```
@Mapper
public interface ProjectMapper {
    ProjectMapper INSTANCE = Mappers.getMapper(ProjectMapper.class);

    @Mapping(target = "id", source = "prg0PlanId")
    ProjectDto toDto(ModelObject projectModel) throws NotLoadedException;
}
```
The only working way is to use `expression`.
```
@Mapper
public interface ProjectMapper {
    ProjectMapper INSTANCE = Mappers.getMapper(ProjectMapper.class);

    @Mapping(target = "id", expression = "java(projectModel.getPropertyDisplayableValue(\"prg0PlanId\")")
    ProjectDto toDto(ModelObject projectModel) throws NotLoadedException;
}
```
But I think this loses the advantage of using `MapStruct`. I want to use something like this:
```
@Mapper
public interface ProjectMapper {
    ProjectMapper INSTANCE = Mappers.getMapper(ProjectMapper.class);

    @Mapping(target = "id", source = "prg0PlanId", qualifiedByName = "getPropertyDisplayableValue")
    ProjectDto toDto(ModelObject projectModel) throws NotLoadedException;

    @Named("getPropertyDisplayableValue")
    static String getPropertyDisplayableValue(ModelObject modelObject, String source) throws NotLoadedException {
        return modelObject.getPropertyDisplayableValue(source);
    }
}
```
But I noticed there is no workaround like this. I also don't want to manually implement the mapping method, like [this](https://mapstruct.org/documentation/stable/reference/html/#adding-custom-methods). What I need is just specifying a custom method to get values from the source object. Am I missing something, or is this not available with `MapStruct`? Thanks for the help.
jenkinsci/helm-charts
928080573
Title: Setting controller.runAsUser != 1000 leads to failing init container
Question:
username_0: **Describe the bug**
When setting `controller.runAsUser` to a value other than the built-in Jenkins user id 1000, the `init` container fails.
**Version of Helm and Kubernetes**:
Helm Version:
```console
$ helm version
version.BuildInfo{Version:"v3.6.1", GitCommit:"<PASSWORD>", GitTreeState:"clean", GoVersion:"go1.16.5"}
```
Kubernetes Version:
```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2021-05-12T14:11:29Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3+k3s3", GitCommit:"<PASSWORD>4fc89665cece4", GitTreeState:"clean", BuildDate:"2020-11-13T07:19:02Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
```
**Which version of the chart**: 3.4.0
**What happened**:
Starting the chart with `controller.runAsUser=1001` results in `jenkins-0 0/2 Init:CrashLoopBackOff`. The log of the `init` container shows
```
java.nio.file.NoSuchFileException: ?/.cache
```
Presumably the error is caused here: https://github.com/jenkinsci/helm-charts/blob/6e829d121c81cc935978a7a436cf0d93f53fa3e2/charts/jenkins/templates/config.yaml#L38
because the `jenkins-plugin-cli` uses the system variable `user.home` [as base dir](https://github.com/jenkinsci/plugin-installation-manager-tool/blob/13693591e4cb9415a4e87868045fb113a0b9307c/plugin-management-library/src/main/java/io/jenkins/tools/pluginmanager/config/Settings.java#L24). I presume that `user.home` is initialized from `/etc/passwd`. The default UID 1000 is contained in `/etc/passwd`, so it works. For other UIDs (e.g. 1001) there is no entry in `/etc/passwd`, so `?` is returned. Explicitly setting the `HOME` variable does not solve this issue. Adding `-Duser.home="{{ .Values.controller.jenkinsHome }}"` to the `jenkins-plugin-cli` call might do it, though.
**What you expected to happen**:
The pod starts successfully.
**How to reproduce it** (as minimally and precisely as possible):
[Truncated]
**Anything else we need to know**:
I'm a bit time-constrained right now, so I can't validate the fix proposed above. Still, I wanted to document it. I might create a PR at a later stage.
Answers:
username_1: Do you think there is something the helm chart could do to fix this? It sounds more like an issue with the docker image. Some people might use customized images where a different user id is present, so allowing this configuration makes sense.
username_0: If a custom image is the reason for this option, then a warning in the comments of `controller.runAsUser` in `values.yaml` about changing the UID when running the default image could prevent running into this issue.
A fix that would have worked for me would be to set `-Duser.home="{{ .Values.controller.jenkinsHome }}"` here: https://github.com/jenkinsci/helm-charts/blob/0afd66178a112cee7dd8579ea79ace51db42dbe5/charts/jenkins/templates/config.yaml#L37-L38
But I can't judge whether this would cause collateral damage. I also don't know if this issue would occur in the `else` branch https://github.com/jenkinsci/helm-charts/blob/0afd66178a112cee7dd8579ea79ace51db42dbe5/charts/jenkins/templates/config.yaml#L39-L40 and whether something like `HOME={{ .Values.controller.jenkinsHome }} /usr/local/bin/install-plugins.sh ...` would fix it.
username_1: We could set that by default, no matter which runAsUser is specified. Do you want to create a PR for that?
username_1: @username_2 do you know if setting home would be respected by the CLI?
username_2: I would walk back a bit: why are you doing this? You may have other issues if you continue down this route with the persistent volume as well. If you want to change the user, you will need to re-build the Jenkins upstream image yourself and set the argument for it: https://github.com/jenkinsci/docker/blob/master/11/debian/buster/hotspot/Dockerfile#L23
username_0: @username_2 We used to set the UID to the local user's UID in a project (cloudogu/gitops-playground@1<PASSWORD>c3ae1f<PASSWORD>d) that runs on a local cluster, in order to avoid file permission issues on bind-mounted folders. We had no other issues with files inside the images, but I can see that those might turn up for other users. We ended up removing the option and it works for us.
The behaviour was still somewhat confusing to me, so I thought it was worth the issue. So the only takeaway here would be a warning in the comments above the option stating that changing the UID might cause issues if the underlying image does not provide an entry in `/etc/passwd` and appropriate file permissions?
@username_1 I wanted to discuss this first, in order to see if a PR is relevant at all.
username_2: I think so. If you check the https://github.com/jenkinsi/docker repo, there are quite a few issues related to this, but it mostly comes down to docs.
dmjio/stripe
81623974
Title: Eliminate (really unsafe) record fields on data types with variants.
Question:
username_0: I just converted an application from the previous stripe library, and in general things have been good, but I have some pretty serious concerns about safety. In particular, I just saw an error in production due to this problem.
Essentially, record field accessors on data types with variants are really unsafe. The particular one that I hit was on `Customer` - I was converting code, and the other stripe library had only one `Customer` variant, and had a (perfectly safe and good) `customerId` record field. So the code compiled, and then at runtime it got passed a `DeletedCustomer` and exploded.
To be honest, I think having these be variants at all is pretty bizarre. I would much prefer if `DeletedCustomer` were its own type, and if an endpoint can return either, then make it an `Either`. Having `Customer` sometimes not really be a customer is odd.
Answers:
username_1: There is a nuance (specifically for customers): retrieving deleted records doesn't 404; they stick around. I agree that these partial functions need to be removed. @stepkut's changes still need to be merged, and I believe they remedy this situation. It's been hard to get a breather from work, and the merge is huge. I apologize for this, and plan to make the changes asap.
username_1: @username_0, thought of a solution to this. Then make all customer retrieval functions use this.
```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Applicative
import Data.Aeson

instance FromJSON (Either DeletedCustomer Customer) where
  parseJSON (Object o) = do
    result <- o .: "deleted"
    case result of
      True  -> Left  <$> (DeletedCustomer <$> parseJSON o)
      False -> Right <$> (Customer <$> parseJSON o)
```
username_0: @username_1 That sounds great.
username_1: This issue is a bit of a pickle. I've implemented:
```haskell
newtype CustomerResult = CustomerResult (Either DeletedCustomer Customer)
  deriving (LotsOfThings)
```
All the tests pass except for one. For some reason removing a discount on a customer actually deletes the customer. Still pondering...
Picolab/pico-engine
287980021
Title: Registering local rulesets with spaces in filepaths
Question:
username_0: I successfully registered `file://C:/Users/nikk29/Documents/Programming/picos/test.krl`, but changing the filename to `test space.krl` causes the engine to crash, both with the space and with the `%20` code:
`file://C:/Users/nikk29/Documents/Programming/picos/test space.krl`
`file://C:/Users/nikk29/Documents/Programming/picos/test%20space.krl`
The engine crash log is as follows:
```
http://localhost:8080
C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\extractRulesetID.js:4
var src_no_comments = src.replace(commentsRegExp(), " ");
                          ^
TypeError: Cannot read property 'replace' of undefined
    at module.exports (C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\extractRulesetID.js:4:30)
    at Object.storeRuleset (C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\DB.js:266:23)
    at Object.core.registerRuleset (C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\index.js:232:12)
    at C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\index.js:409:18
    at ReadFileContext.callback (C:\Users\nikk29\AppData\Roaming\npm\node_modules\pico-engine\node_modules\pico-engine-core\src\getKRLByURL.js:21:28)
    at FSReqWrap.readFileAfterOpen [as oncomplete] (fs.js:359:13)
```
Status: Issue closed
Answers:
username_1: @username_0 I just made a fix and released 0.44.1. Run `npm i -g pico-engine` to upgrade.
VATSIM-UK/UK-Sector-File
319253206
Title: EGPB Runway Headings Update
Question:
username_0: # Summary of issue/change
All runways at EGPB need headings updated:
06 - 057 to 056
09 - 087 to 086
15 - 147 to 146
24 - 237 to 236
27 - 267 to 266
33 - 327 to 326
# Reference (amendment doc/official source/forum) incl. page number(s)
AIP Section 2.12
# Affected areas of the sector file (if known)
Airports/EGPB/Runway.txt
Status: Issue closed
Answers:
username_1: Fixed in #1131
ChristophAnastasiades/Lingallery
1014067182
Title: Doesn't work with Nuxt
Question:
username_0: It would be appreciated to have this plugin working with Nuxt. I've tried the setup below and it doesn't work.
**plugins/test.js**
```
import Vue from 'vue'
import VueLazyLoad from 'vue-lazyload'
import LightBox from 'vue-image-lightbox'

Vue.use(VueLazyLoad)
Vue.component('lingallery', LightBox)
```
**nuxt.config.js**
```
plugins: [
  { src: '~/plugins/test.js', ssr: false }, // without ssr it also doesn't work.
],
```
**component.vue**
```
<template>
  <lingallery :width="width" :height="height" :items="items" :media="media" />
</template>

<script lang="ts">
import Vue from 'vue'

export default Vue.extend({
  data() {
    return {
      width: 600,
      height: 400,
      items: [
        {
          src: "https://picsum.photos/600/400/?image=0", // for a local file it also doesn't work
          thumbnail: "https://picsum.photos/600/400/?image=0", // for a local file it also doesn't work
          caption: 'Some Caption',
          id: 'someid1',
        },
        {
          src: "https://picsum.photos/600/400/?image=0", // for a local file it also doesn't work
          thumbnail: "https://picsum.photos/600/400/?image=0", // for a local file it also doesn't work
        },
      ],
      media: ['(min-width: 600px)'],
    }
  },
})
</script>
```
There's nothing in the console, and on the page I can simply see an empty page with `1/1`, as in the screenshot below.
![ss](https://user-images.githubusercontent.com/12736263/135720878-f40af1c7-4b3f-443a-8466-c0f7130c4a9c.png)
Answers:
username_1: I could be wrong, but you're trying to use `vue-image-lightbox`, which is not Lingallery. Try to rework your plugin file and replace its content with the following, which is working fine for me:
```javascript
import Vue from 'vue';
import Lingallery from 'lingallery';

Vue.component('lingallery', Lingallery);
```
GoogleCloudPlatform/cloudml-samples
385520627
Title: Verify scikit-learn sample works in Python 3.5
Question:
username_0: https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/sklearn
Internal reference: b/120086448
Answers:
username_1: @username_0 @alecglassford @username_2 Any updates on this? Please take a look.
Status: Issue closed
username_1: Ping
username_2: I tested it with 3.5.0 first. Jupyter does not run with that version of Python due to a TypeError. I then switched to 3.5.4, and Jupyter ran fine. Then I started with https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/notebooks/scikit-learn/Training%20with%20scikit-learn%20in%20CMLE.ipynb and I got an error:
```
ERROR 2019-06-20 13:23:39 -0700 service The replica master 0 exited with a non-zero status of 1.
ERROR 2019-06-20 13:23:39 -0700 service Traceback (most recent call last):
ERROR 2019-06-20 13:23:39 -0700 service File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
ERROR 2019-06-20 13:23:39 -0700 service "__main__", mod_spec)
ERROR 2019-06-20 13:23:39 -0700 service File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
ERROR 2019-06-20 13:23:39 -0700 service exec(code, run_globals)
ERROR 2019-06-20 13:23:39 -0700 service File "/root/.local/lib/python3.5/site-packages/census_training/train.py", line 154, in <module>
ERROR 2019-06-20 13:23:39 -0700 service bucket = storage.Client().bucket(BUCKET_NAME)
ERROR 2019-06-20 13:23:39 -0700 service File "/usr/local/lib/python3.5/dist-packages/google/cloud/storage/client.py", line 141, in bucket
ERROR 2019-06-20 13:23:39 -0700 service return Bucket(client=self, name=bucket_name, user_project=user_project)
ERROR 2019-06-20 13:23:39 -0700 service File "/usr/local/lib/python3.5/dist-packages/google/cloud/storage/bucket.py", line 139, in __init__
ERROR 2019-06-20 13:23:39 -0700 service name = _validate_name(name)
ERROR 2019-06-20 13:23:39 -0700 service File "/usr/local/lib/python3.5/dist-packages/google/cloud/storage/_helpers.py", line 39, in _validate_name
ERROR 2019-06-20 13:23:39 -0700 service 'Bucket names must start and end with a number or letter.')
ERROR 2019-06-20 13:23:39 -0700 service ValueError: Bucket names must start and end with a number or letter.
ERROR 2019-06-20 13:23:39 -0700 service
```
I verified that the bucket exists and is accessible to the code. I doubt that it is a Python 3.5.4 issue, but it needs to be investigated further.
Status: Issue closed
username_2: The sklearn notebooks run with Python 3.5.4. Because the code is in notebooks, I was unable to test it with Python 3.5.0.
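The "Bucket names must start and end with a number or letter" error in the traceback often means the bucket string carried a stray prefix or whitespace. A hedged sketch of the kind of sanitizing that helps (the raw `BUCKET_NAME` value here is hypothetical, not from the notebook):
```python
from google.cloud import storage

BUCKET_NAME = " gs://my-bucket/ "  # hypothetical raw value

def clean_bucket_name(name: str) -> str:
    # Strip whitespace, a gs:// scheme, and trailing slashes; any of these
    # trips the client's "must start and end with a number or letter" check.
    name = name.strip()
    if name.startswith("gs://"):
        name = name[len("gs://"):]
    return name.strip("/")

bucket = storage.Client().bucket(clean_bucket_name(BUCKET_NAME))
```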
JJCCGGRR24/Firulais-2.0
312534853
Title: When I try to publish a newspaper with articles that are not in final mode, the newspaper and its associated articles are published automatically. Should that be the case?
Question:
username_0: It is not specified anywhere that it should work that way.
Answers:
username_0: And in the DB the article is actually not set to final mode, yet the column shows it as being in final mode, and the newspaper does get published.
username_1: Fixed. An `if` has been added so that it does not update the dates and publish if not all the articles are in final mode.
![imagen](https://user-images.githubusercontent.com/22836642/38502376-7bd37cd6-3c0f-11e8-9c7c-37c4954e8d78.png)
Status: Issue closed
go-acme/lego
891432386
Title: Cannot use --run-hook: "flag provided but not defined"
Question:
username_0: ### What did you expect to see?
In reviewing https://go-acme.github.io/lego/usage/cli/examples/, I saw that I should be able to use `--run-hook="./myscript.sh"`; however, when I tried to do so, I got an error:
```
Incorrect Usage. flag provided but not defined: -run-hook
```
I note that the error says `-run-hook` when I supplied `--run-hook`. I am not sure why this is different. After removing this one flag, the command runs.
### Steps to reproduce
```
lego --email <redacted_email> --dns=<redacted_provider> --domains test.home.example.com --run-hook="./hook.sh"
```
### Details
<details><summary>Version of lego</summary>

```console
$ lego --version
lego version 4.3.1 linux/arm64
```
</details>
<details><summary>Logs</summary>

```console
Incorrect Usage. flag provided but not defined: -run-hook

NAME:
   lego - Let's Encrypt client written in Go

USAGE:
   lego [global options] command [command options] [arguments...]

VERSION:
   4.3.1

COMMANDS:
   run      Register an account, then create and install a certificate
   revoke   Revoke a certificate
   renew    Renew a certificate
   dnshelp  Shows additional help for the '--dns' global option
   list     Display certificates and accounts information.
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --domains value, -d value    Add a domain to the process. Can be specified multiple times.
   --server value, -s value     CA hostname (and optionally :port). The server certificate must be trusted in order to avoid further modifications to the client. (default: "https://acme-v02.api.letsencrypt.org/directory")
   --accept-tos, -a             By setting this flag to true you indicate that you accept the current Let's Encrypt terms of service.
   --email value, -m value      Email used for registration and recovery contact.
   --csr value, -c value        Certificate signing request filename, if an external CSR is to be used.
   --eab                        Use External Account Binding for account registration. Requires --kid and --hmac.
   --kid value                  Key identifier from External CA. Used for External Account Binding.
   --hmac value                 MAC key from External CA. Should be in Base64 URL Encoding without padding format. Used for External Account Binding.
   --key-type value, -k value   Key type to use for private keys. Supported: rsa2048, rsa4096, rsa8192, ec256, ec384. (default: "ec256")
   --filename value             (deprecated) Filename of the generated certificate.
   --path value                 Directory to use for storing the data. (default: "/root/.lego") [$LEGO_PATH]
   --http                       Use the HTTP challenge to solve challenges. Can be mixed with other types of challenges.
   --http.port value            Set the port and interface to use for HTTP based challenges to listen on. Supported: interface:port or :port. (default: ":80")
   --http.proxy-header value    Validate against this HTTP header when solving HTTP based challenges behind a reverse proxy. (default: "Host")
   --http.webroot value         Set the webroot folder to use for HTTP based challenges to write directly in a file in .well-known/acme-challenge. This disables the built-in server and expects the given directory to be publicly served with access to .well-known/acme-challenge
   --http.memcached-host value  Set the memcached host(s) to use for HTTP based challenges. Challenges will be written to all specified hosts.
   --tls                        Use the TLS challenge to solve challenges. Can be mixed with other types of challenges.
   --tls.port value             Set the port and interface to use for TLS based challenges to listen on. Supported: interface:port or :port. (default: ":443")
   --dns value                  Solve a DNS challenge using the specified provider. Can be mixed with other types of challenges. Run 'lego dnshelp' for help on usage.
   --dns.disable-cp             By setting this flag to true, disables the need to wait the propagation of the TXT record to all authoritative name servers.
   --dns.resolvers value        Set the resolvers to use for performing recursive DNS queries. Supported: host:port. The default is to use the system resolvers, or Google's DNS resolvers if the system's cannot be determined.
   --http-timeout value         Set the HTTP timeout value to a specific value in seconds. (default: 0)
   --dns-timeout value          Set the DNS timeout value to a specific value in seconds. Used only when performing authoritative name servers queries. (default: 10)
   --pem                        Generate a .pem file by concatenating the .key and .crt files together.
   --cert.timeout value         Set the certificate timeout value to a specific value in seconds. Only used when obtaining certificates. (default: 30)
   --help, -h                   show help
   --version, -v                print the version

2021/05/13 06:29:11 flag provided but not defined: -run-hook
```
</details>
Answers:
username_1: Hello, the flag `--run-hook` must be placed after the `run` command, like in the documentation:
```
lego --email="<EMAIL>" --domains="example.com" --http run --run-hook="./myscript.sh"
```
https://go-acme.github.io/lego/usage/cli/examples/#obtain-a-certificate-and-hook
Status: Issue closed
nmxiaowei/avue
727838598
Title: Many expand components appear in the new version; how can they be removed?
Question:
username_0:
![image](https://user-images.githubusercontent.com/50045583/96948573-9483dc00-1518-11eb-90b9-82ee612f4dc8.png)
Answers:
username_1: Check your styles; a global style is overriding things.
username_0: Yes. After removing that global style, some parts still differ from the old version. For example, there is an extra horizontal line under the newly added section, and the icon on the right is not aligned. Did the new version change these?
![image](https://user-images.githubusercontent.com/50045583/96951434-10812280-151f-11eb-8bcd-ebc831994eaf.png)
![image](https://user-images.githubusercontent.com/50045583/96951475-21ca2f00-151f-11eb-9e6e-67665ad78aa3.png)
username_1: Check your styles again; there may be other styles interfering.
Status: Issue closed
Coyeah/blog
464949336
Title: A Brief Look at ASTs: Writing a Babel Plugin by Hand
Question:
username_0: Confused?
In computer science, an abstract syntax tree (AST), or just syntax tree, is a tree representation of the abstract syntactic structure of source code, here specifically the source code of a programming language. Each node of the tree denotes a construct occurring in the source code.
An AST is an abstract representation of the source code's structure. It earns the name "abstract" because it does not record every detail of the real syntax.

# Use cases for ASTs

+ JavaScript decompilation
+ Babel compiling ES6 syntax
+ Code highlighting
+ Keyword matching
+ Scope analysis
+ Code minification

# How an AST is parsed

AST parsing is not the same job as a compiler's; it is comparatively simpler (your hairline is safe). A compiler has to turn a high-level programming language into binary, whereas AST parsing only needs to care about two key steps: **lexical analysis** and **syntactic analysis**.

## Lexical analysis

Lexical analysis splits a sentence into words: it must not lose the original meaning, yet it breaks the input down into the smallest lexical units.
The smallest lexical units JavaScript can recognize: whitespace, comments, strings, numbers, identifiers, operators, and brackets.

### An example

`Use pen and paper to record the beauty of life.` > JavaScript file

babel working now...

`Use`, `pen and paper`, `to record`, `the beauty`, `of life` > AST

### Explained with code

``` JavaScript
if (1 > 0) {
  alert("if 1 > 0");
};
```

To Babel, a string of code like this looks like: `if`, `(`, `1`, `>`, `0`, `)`, `{`, `alert`, `(`, `"if 1 > 0"`, `)`, `;`, `}`, `;`, omitting the whitespace.

That was a bit too simple, so here is a more professional version.

``` JavaScript
[
  { type: "whitespace", value: "\n" },
  { type: "identifier", value: "if" },
  { type: "whitespace", value: " " },
  { type: "parens", value: "(" },
  { type: "number", value: "1" },
  { type: "whitespace", value: " " },
  { type: "operator", value: ">" },
  { type: "whitespace", value: " " },
  { type: "number", value: "0" },
  { type: "parens", value: ")" },
  { type: "whitespace", value: " " },
  { type: "brace", value: "{" },
  { type: "whitespace", value: "\n " },
[Truncated]
    }
}

module.exports = {
  visitor,
}
```

### Key points

+ **visitor** - When Babel processes a node, it obtains node information and performs operations on it in visitor form, through a visitor object. The visitor object defines access functions for the various node types, so different nodes can be handled differently. The Babel plugins we write in fact also work by defining a visitor object that processes a series of AST nodes to carry out our code modifications.
+ **path** - Every time a node's visit method is called, a path parameter is passed in. It contains the node's information and its position, so that a specific node can be operated on. Concretely, a Path is an object representing the link between two nodes. It contains not only the current node's information but also the parent node's, along with many other methods for adding, updating, moving, and deleting nodes.
+ **state** - the second parameter passed on each node visit
+ **scope** - the scope

# Summary

All of this comes from studying many resources found online; I am still learning. The principles behind ASTs are not unfathomable, yet they are worth exploring. The above is my understanding of abstract syntax trees; if anything is incorrect, corrections are sincerely welcome!

References: [Babel Plugin Handbook](https://github.com/jamiebuilds/babel-handbook/blob/master/translations/zh-Hans/plugin-handbook.md#toc-introduction)
Project link: [demo-babel-plugin](https://github.com/username_0/demo-babel-plugin)
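To make the lexical-analysis step above concrete, here is a minimal toy tokenizer sketch, written in Python purely for illustration; the token categories mirror the ones the post lists, while a real JavaScript lexer handles far more cases:
```python
import re

# One regex alternative per lexical unit the post lists (order matters:
# the string pattern must come before identifier so quotes win).
TOKEN_SPEC = [
    ("whitespace", r"\s+"),
    ("number",     r"\d+"),
    ("string",     r'"[^"]*"'),
    ("identifier", r"[A-Za-z_]\w*"),
    ("operator",   r"[><=+\-*/]"),
    ("parens",     r"[()]"),
    ("brace",      r"[{}]"),
    ("sep",        r";"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    """Split source into {"type", "value"} tokens, like the array above."""
    return [{"type": m.lastgroup, "value": m.group()} for m in TOKEN_RE.finditer(source)]

for token in tokenize('if (1 > 0) { alert("if 1 > 0"); };'):
    print(token)
```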
riboseinc/rvc
355821745
Title: "Couldn't to connect to rvd(err:13)" Question: username_0: Am I missing anything? Answers: username_1: 1. It sounds like RVD isn’t running. Can you “ps aux” to check? 2. Check also if RVD actually can be run by itself because the configuration syntax has changed to Nereon format. 3. Remember RVD has to be run as root, so ensure that you’ve run the necessary scripts as displayed by brew install. (And do not manually install because it is quite complicated to get all the details right) username_0: peter_tam 15041 0.0 0.0 4267768 892 s000 S+ 11:06AM 0:00.00 grep rvd root 14832 0.0 0.0 4336384 1420 ?? Ss 10:50AM 0:00.73 /opt/rvc/bin/rvd 2. No idea of how to check 3. I have followed all the instructions displayed, no luck... username_0: RVC works as long as I run `sudo rvc status all` rather than `rvc status all`. `rvc status all`: ``` rvc status all Couldn't to connect to rvd(err:13) ``` `sudo rvc status all`: ``` ... normal output ... ``` @jjr840430 could you help fix this? username_0: Superseded by https://github.com/riboseinc/rvc/issues/252. Status: Issue closed
pytest-dev/pytest
558036144
Title: How can I repeat a test module in pytest?
Question:
username_0: I would like to repeat a test module N times. The order is very important.
```
import pytest

@pytest.mark.usefixtures("class_setup_teardown")
class TestStressRobot:
    def test_1(self):
        print "\nstressing part 1..."
        assert True

    def test_2(self):
        print "\nstressing part 2..."
        assert True

    def test_3(self):
        print "\nstressing part 3..."
        assert True
```
When I run py.test --repeat=2, the output is:
```
test_stress.pyTestStressRobot.test_1[0] ✓
test_stress.pyTestStressRobot.test_1[1] ✓
test_stress.pyTestStressRobot.test_2[0] ✓
test_stress.pyTestStressRobot.test_2[1] ✓
test_stress.pyTestStressRobot.test_3[0] ✓
test_stress.pyTestStressRobot.test_3[1] ✓
```
I don't want it to be repeated per test, but per test module. Is it possible to have something like this?
```
test_stress.pyTestStressRobot.test_1[0] ✓
test_stress.pyTestStressRobot.test_2[0] ✓
test_stress.pyTestStressRobot.test_3[0] ✓
test_stress.pyTestStressRobot.test_1[1] ✓
test_stress.pyTestStressRobot.test_2[1] ✓
test_stress.pyTestStressRobot.test_3[1] ✓
```
Could you please let me know how to do this? Awaiting your reply.
Answers:
username_1: See https://github.com/pytest-dev/pytest-repeat You can use --count=10 --repeat-scope=module I think you can also just put your test module X times on the command line.
username_2: Thanks @username_1, I will close this for now, but please don't hesitate to come back if you have further questions @username_0!
Status: Issue closed
username_0: @username_1 @username_2 Thanks for your reply. Sure, I will come back to you if I have any questions.
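A minimal sketch of the suggested `pytest-repeat` approach (assuming the plugin is installed; test names follow the question):
```python
# test_stress.py -- run with: pytest test_stress.py --count=2 --repeat-scope=module
# With module scope, pytest-repeat runs the whole module N times in order,
# giving test_1, test_2, test_3, then test_1, test_2, test_3 again.

def test_1():
    print("\nstressing part 1...")

def test_2():
    print("\nstressing part 2...")

def test_3():
    print("\nstressing part 3...")
```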
boutproject/xBOUT
388935185
Title: Integrated tests against real data
Question:
username_0: Currently the unit tests create fake BOUT data and check that xBOUT reads it as expected. There should also be integrated tests which read some examples of real BOUT data. These should be from a variety of BOUT modules and be of varying dimensions. (SD1D, Storm2D and Hermes would be good candidates.) The data should be as small as possible. Help wanted, because it would be better if other people gave me the data to perform the tests on.
Answers:
username_1: It's usually a good idea to avoid storing binary files in git repositories if possible, so if it's possible to generate the data on the fly then that's often the best approach. An alternative is to fetch archived data from a separate repository/location.
username_2: The way data is now generated during tests seems quite robust. Do you still want to go down the route of using real data, Tom?
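A hedged sketch of username_1's fetch-from-elsewhere suggestion; the URL and filename are hypothetical placeholders, not a real archive:
```python
import urllib.request
from pathlib import Path

import pytest

# Hypothetical location of a small archived BOUT++ dump file; not a real URL.
SAMPLE_DATA_URL = "https://example.org/xbout-test-data/BOUT.dmp.0.nc"

@pytest.fixture(scope="session")
def sample_dump(tmp_path_factory):
    """Download a small real dataset once per test session instead of
    committing binary files to the repository."""
    path = tmp_path_factory.mktemp("data") / "BOUT.dmp.0.nc"
    urllib.request.urlretrieve(SAMPLE_DATA_URL, str(path))
    return path

def test_open_real_data(sample_dump):
    xbout = pytest.importorskip("xbout")
    ds = xbout.open_boutdataset(str(sample_dump))
    assert ds is not None
```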
googlemaps/android-maps-utils
595164505
Title: Being able to customize strokePattern from GeoJsonPolygonStyle
Question:
username_0: Hi, I am working with `GeoJsonPolygon`, and when I tried to customize the stroke of the polygons I saw that neither `StrokePattern` nor `StrokeJointType` is accessible from the `GeoJsonPolygonStyle`. Because it is a small thing, I have sent a little MR with the necessary changes. I don't know if the issue was necessary, but just in case. Thanks
Status: Issue closed
backdrop/backdrop-issues
102511134
Title: [UX] Remove leftover 'Preview' status from vertical tabs on content type form
Question:
username_0: We removed the ability for nodes to have previews in Backdrop, but we didn't remove the vertical tab status for that old preview setting. See admin/structure/types/manage/article for an example.
![screen shot 2015-08-22 at 12 17 20 am](https://cloud.githubusercontent.com/assets/397895/9422986/83f5b76e-4863-11e5-8499-56e7950d446b.png)
This 'Preview' text, and any javascript that was supposed to change it, should be removed from Backdrop.
Answers:
username_1: Great, thanks @dboulet! Merged into 1.x and 1.2.x.
Status: Issue closed
WesJD/NoNameTags
175549072
Title: Update to 1.10
Question:
username_0: I want to take screenshots of Overcast maps, and I want them to be high-res, so I have to use https://github.com/Yiyoek/mineshot, which is 1.10-only. Could you update this mod? Thanks 😄
Answers:
username_1: Yeah, I'll see what I can do.
Status: Issue closed
burakoner/OKEx.Net
1066707177
Title: pong json read error
Question:
username_0: okex.net version 5.1.2, newtonsoft.json version 13.0.1.
An error occurs when receiving a websocket message:
```
2021-11-30 11:13:34:989 | Error | OKEx WS Api | Deserialize JsonReaderException: Unexpected character encountered while parsing value: p. Path '', line 0, position 0., Path: , LineNumber: 0, LinePosition: 0. Data: pong
2021-11-30 11:13:35:005 | Warning | OKEx WS Api | Socket 4 Message not handled: pong
2021-11-30 11:13:35:006 | Error | OKEx WS Api | Deserialize JsonReaderException: Unexpected character encountered while parsing value: p. Path '', line 0, position 0., Path: , LineNumber: 0, LinePosition: 0. Data: pong
Exception thrown: 'System.InvalidOperationException' (Newtonsoft.Json.dll)
2021-11-30 11:13:35:051 | Error | OKEx WS Api | Socket 5 unhandled exception during message processing: InvalidOperationException - Cannot access child value on Newtonsoft.Json.Linq.JValue.
   at Newtonsoft.Json.Linq.JToken.get_Item(Object key)
   at Okex.Net.OkexSocketClient.OkexHandleSubscriptionResponse(SocketConnection s, SocketSubscription subscription, Object request, JToken message, CallResult`1& callResult)
   at CryptoExchange.Net.SocketClient.<>c__DisplayClass62_0.<SubscribeAndWaitAsync>b__0(JToken data)
   at CryptoExchange.Net.Sockets.PendingRequest.CheckData(JToken data)
   at CryptoExchange.Net.Sockets.SocketConnection.ProcessMessage(String data)
   at CryptoExchange.Net.Sockets.CryptoExchangeWebSocketClient.Handle[T](List`1 handlers, T data)
   at CryptoExchange.Net.Sockets.CryptoExchangeWebSocketClient.HandleMessage(Byte[] data, Int32 offset, Int32 count, WebSocketMessageType messageType)
```
Answers:
username_1: I get the same error.
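The library here is C#, but the failure mode in the log (JSON-deserializing a plain-text `pong` keepalive frame) can be illustrated with a minimal Python sketch; the handler below is a hypothetical illustration, not OKEx.Net code:
```python
import json

def handle_message(raw: str):
    # Heartbeat replies arrive as the bare string "pong", which is not JSON;
    # guard for it before deserializing to avoid a parse error.
    if raw == "pong":
        return {"op": "pong"}
    return json.loads(raw)

print(handle_message("pong"))
print(handle_message('{"arg": {"channel": "tickers"}}'))
```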
CFLombardi/sethBot
291420187
Title: remove !dosh from the dosh command
Question:
username_0: We are currently splitting on `var content = msg.content.split("!dosh");`. Since we are trying to use the prefix, which is in config.json, and the commands as described in index.js, we shouldn't have a hardcoded split.
Status: Issue closed
Answers:
username_1: See PR https://github.com/username_1/sethBot/pull/29
elyra-ai/elyra
995195506
Title: Enable installing only specific runtimes when deploying Elyra
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently, we deploy all supported runtimes when installing Elyra, and that might not be the desired behavior when deploying in a production environment. We should enable installing only individual runtimes.
**Describe the solution you'd like**
- Deploying full Elyra with all runtimes:
```
pip install --upgrade elyra
pip install --upgrade elyra[all]
```
- Deploying Elyra with a specific runtime:
```
pip install --upgrade elyra[runtime]
```
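A minimal sketch of how such per-runtime installs are typically declared with setuptools extras; the runtime names and dependency lists below are illustrative, not Elyra's actual packaging:
```python
# setup.py sketch: per-runtime optional dependencies via extras_require,
# enabling `pip install elyra[kfp]` style installs.
from setuptools import setup

kfp_deps = ["kfp"]           # illustrative runtime-specific dependencies
airflow_deps = ["pygithub"]  # illustrative

setup(
    name="elyra",
    version="0.1",
    install_requires=["jupyter_server"],  # core dependencies only
    extras_require={
        "kfp": kfp_deps,
        "airflow": airflow_deps,
        "all": kfp_deps + airflow_deps,
    },
)
```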
prometheus/snmp_exporter
499464949
Title: Please fill in the DockerHub project
Question:
username_0: Hi. There is a project repository on DockerHub, but it does not have any README information or release tags: https://hub.docker.com/r/prom/snmp_exporter Apparently it's official (as it is in the prom repository), but without tags or a README it's hard to use in production environments. It would be amazing if we had a README and the tags available.
Status: Issue closed
Answers:
username_1: https://hub.docker.com/r/prom/snmp-exporter is the right one; I've deleted the incorrect one.
prove-rs/z3.rs
977233716
Title: Rust newbie: fatal error at installation: 'z3.h' file not found
Question:
username_0: After I've added a z3 dependency to Cargo.toml, `cargo run` attempts to install the z3 crate but fails when building z3-sys, with the error above. Obviously, it cannot locate the Z3 solver. How can that be fixed? Am I supposed to have the Z3 solver's source code in my Rust project repository, including z3.h? Or add a link to a shared library?
Answers:
username_0: Silly me. The fix is to [build Z3](https://github.com/username_0/z3#building-z3-using-make-and-gccclang) and install it using `sudo make install`. I now get the error `'stddef.h' file not found`. Not sure why.
username_0: Now fixed. I had to `sudo apt install clang-9` as suggested [here](https://github.com/include-what-you-use/include-what-you-use/issues/679#issuecomment-656882050), but for it to work, I first had to downgrade libc6-dev, as suggested [here](https://askubuntu.com/a/1338046) for Ubuntu 20.04. Phew! Feel free to update the docs or close this issue.
username_1: Hi username_0, z3-sys tries to find Z3 on your system and generate bindings for that. Unless I am mistaken, this is common behavior for Rust *-sys projects using bindgen/cmake. As called out in the [z3 installation instructions](https://github.com/prove-rs/z3.rs/tree/master/z3#installation), you can instead choose static linking to compile Z3 as part of the build process. I do see Z3 [available on Ubuntu 20.04](https://packages.ubuntu.com/focal/z3), but it is a bit outdated.
username_0: Thanks. I did try to install Z3 using apt, but it did not work for some reason. The documentation is fine, so I'll close the issue.
Status: Issue closed
JustinBeckwith/retry-axios
300872687
Title: Exception in function shouldRetryRequest() when checking HTTP methods to retry
Question:
username_0: In function `shouldRetryRequest` there's this block of code
```
// Only retry with configured HttpMethods.
if (!err.config.method ||
    config.httpMethodsToRetry.indexOf(err.config.method.toUpperCase()) < 0) {
  return false;
}
```
that fails evaluating the second condition, because `httpMethodsToRetry` is an object, not an array.
```
config.httpMethodsToRetry
{0: "GET", 1: "HEAD", 2: "OPTIONS", 3: "DELETE", 4: "PUT"}
```
Answers:
username_1: Weird. `httpMethodsToRetry` should be an array, not an object. It would be super helpful if you could share your code :)
username_2: I am getting this, too. Here's my relevant code.
```
import Axios from 'axios';
import { attach as raxAttach, getConfig as raxConfig } from 'retry-axios';

const createAxios = () => {
  const http = Axios.create();
  http.defaults.timeout = 60*1000;
  http.defaults.validateStatus = (status) => (status >= 200 && status < 300) || (status == 404);
  http.defaults.raxConfig = {
    instance: http,
    retry: 5,
    noResponseRetries: 100,
    retryDelay: 250,
    httpStatusCodesToRetry: [[100, 199], [420, 429], [500, 599]],
    onRetryAttempt: (err) => {
      try {
        console.info("Retrying request", err, raxConfig(err))
      } catch(e) {
        throw new Error("Error logging the retry of a request: " + e);
      }
    },
  };
  raxAttach(http);
  return http;
};
```
username_3: I am also getting the error. Here is my relevant code,
```
axios.defaults.raxConfig = {
  // Retry 3 times on requests that return a response (500, etc) before giving up. Defaults to 3.
  retry: 3,
  // Retry twice on errors that don't return a response (ENOTFOUND, ETIMEDOUT, etc).
  noResponseRetries: 3,
  // Milliseconds to delay at first. Defaults to 100.
  retryDelay: 0,
  // HTTP methods to automatically retry. Defaults to:
  // ['GET', 'HEAD', 'OPTIONS', 'DELETE', 'PUT']
  httpMethodsToRetry: ['GET', 'DELETE', 'PUT', 'POST'],
  // The response status codes to retry. Supports a double
  // array with a list of ranges. Defaults to:
  // [[100, 199], [429, 429], [500, 599]]
  httpStatusCodesToRetry: [[100, 199], [429, 429], [500, 599]],
  // If you are using a non static instance of Axios you need
  // to pass that instance here (const ax = axios.create())
  instance: axios,
  // You can detect when a retry is happening, and figure out how many
  // retry attempts have been made
  onRetryAttempt: (err) => {
    const cfg = rax.getConfig(err);
    console.log(`Retry attempt #${cfg.currentRetryAttempt}`);
  }
};
const id = rax.attach(axios);
```
And here is the error,
```
TypeError: config.httpMethodsToRetry.indexOf is not a function
    at shouldRetryRequest (/Users/chaityashah/FracTEL/skrum/node_modules/retry-axios/build/src/index.js:96:35)
    at onError (/Users/chaityashah/FracTEL/skrum/node_modules/retry-axios/build/src/index.js:58:10)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:188:7)
```
username_4: I debugged this issue and found the root of the problem in the Axios package. The issue only happens on retry, and the reason is a configuration merge function that doesn't have deep-copy capability. Offending line here: https://github.com/axios/axios/blob/ad1195f0702381a77b4f2863aad6ddb1002ffd51/lib/core/Axios.js#L35 It appears that this merge function has entirely changed (with deep-copy fixes) for the v0.18.x releases, which are yet to be released.
username_5: Also had this issue. An effective workaround for the moment is to pass a custom shouldRetry function, check the type of `config.httpMethodsToRetry`, and convert the values into something useful before checking. Not ideal, but relatively easy to apply until a new version of axios is released which resolves the issue.
username_6: https://github.com/axios/axios/blob/4f98acc57860721c639f94f5772138b2af273301/lib/core/Axios.js#L37 https://github.com/axios/axios/blob/4f98acc57860721c639f94f5772138b2af273301/lib/core/mergeConfig.js#L13-L51
username_7: @username_9 The same problem exists with the `httpMethodsToRetry` config property. Could you please patch that as well, since it is the same problem?
username_8: Having this problem as well
username_6: @username_8 I abandoned `retry-axios` and wrapped `axios` myself. Currently this library is completely broken.
username_6: I also propose that the peer dependency be correctly defined.
```js
"peerDependencies": {
  "axios": "*" // <- ??
},
```
username_1: Greetings! When you say the latest version - what specific version are you using?
username_6: @username_1 `v0.19.0-beta.1`. Please see [my comment](https://github.com/username_1/retry-axios/issues/1#issuecomment-461276410). Now options are completely whitelisted and it doesn't accept `raxConfig`.
username_9: @username_6 shall we create separate issues for the peer dependency and whitelisting stuff?
username_10: Do you guys know of a drop-in (or as close to drop-in as possible) replacement for axios? This library is on the critical path for a lot of our apps, and problems like these cause serious issues and slow down development.
username_1: I wrote https://github.com/googleapis/gaxios/ to be that drop-in replacement based on node-fetch 🙃 It's part of the reason I don't spend a ton of time in this particular module. It has retries baked in.
Status: Issue closed
username_1: :tada: This issue has been resolved in version 2.0.1 :tada:
The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/retry-axios/v/2.0.1)
- [GitHub release](https://github.com/username_1/retry-axios/releases/tag/v2.0.1)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
duskload/react-device-detect
1004342686
Title: Edge on Android mobile phone returns isLegacyEdge true
Question: username_0: Edge on an Android mobile phone returns `isLegacyEdge` true. If I'm not wrong, it should be false, and `isEdgeChromium` should be true instead.

Device detect:
```
{
  "isMobile": true,
  "vendor": "none",
  "model": "Mi A2 Lite",
  "os": "Android",
  "osVersion": "10",
  "ua": "Mozilla/5.0 (Linux; Android 10; Mi A2 Lite) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Mobile Safari/537.36 EdgA/93.0.961.53"
}
```
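For reference, the `EdgA/` token in that user agent is the one Chromium-based Edge sends on Android (desktop Chromium Edge uses `Edg/`, while legacy EdgeHTML Edge used `Edge/`), so a check along these lines would classify it as expected. This is only an illustrative sketch, not the library's actual detection code:
```
// Illustrative only: not react-device-detect's implementation.
// Chromium-based Edge UA tokens: "Edg/" (desktop), "EdgA/" (Android).
// Legacy (EdgeHTML) Edge used "Edge/".
const ua =
  "Mozilla/5.0 (Linux; Android 10; Mi A2 Lite) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/93.0.4577.82 Mobile Safari/537.36 EdgA/93.0.961.53";

const isEdgeChromium = /\bEdgA?\//.test(ua); // matches "Edg/" or "EdgA/"
const isLegacyEdge = /\bEdge\//.test(ua);    // matches only "Edge/"

console.log(isEdgeChromium); // true  ("EdgA/" is present)
console.log(isLegacyEdge);   // false (no "Edge/" token)
```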
forkcms/forkcms
155277603
Title: Bug - Variable defined in widget bleeds through to other widgets
Question: username_0: When assigning a variable in a widget, this variable still exists in the same and other widgets. So when WidgetA has the following code:

`$this->tpl->assign('bleed', true);`

this variable is available both in the same widget (from another call) and in other widgets. I can write this in WidgetB:
```
{option:bleed}
  I am shown, but bleed isn't my variable, it's a variable from WidgetA...
{/option:bleed}
```
Answers: username_1: This is done by design, but I also do not think this is a good design. In my opinion, every widget and action should be rendered in its own scope, without taking other modules into account. What do the other core devs and contributors think? @forkcms/core-contributors @forkcms/owner @forkcms/moderators
username_0: Oh, by 'design' you say ;). Like you said, it would be a lot better if each widget had its own scope. Sorry, I thought this was a bug. Is there any way to work around this, as I have the same widget multiple times? The option in my template should be tied to the scope of the widget and not be global. I've run into this before when using someone else's code and the same variable names; I never gave it any thought, and it was fixed by changing the variable name in my code, which I thought was odd.
username_1: The FormBuilder contains a widget working around this issue. You could try this technique to give your widget its own scope: https://github.com/forkcms/forkcms/blob/master/src/Frontend/Modules/FormBuilder/Widgets/Form.php#L298-L310
username_0: Will have a look at it! Thanks!
username_0: When using:
```
$pathTo = FrontendTheme::getPath(FRONTEND_MODULES_PATH . '/Blog/Layout/Widgets/FrontArticles.tpl');
$this->tpl = new FrontendTemplate($pathTo);
```
it throws an error on line 57 of Frontend\Core\Engine\Theme: $file is empty ("")... and I can't seem to find out where it gets an empty string...
```
if (!is_file(PATH_WWW . str_replace(PATH_WWW, '', $file))) {
    throw new Exception('The template (' . $file . ') does not exists.');
}
```
username_1: Have you tried a `var_dump` of your `$pathTo` variable? My guess is that `FrontendTheme::getPath` returns an empty string?
username_0: Have been debugging; $pathTo is never "". Will do some research and will let you know...
username_0: Found it: put this in loadData() instead of execute(). Silly me :)...
```
return $this->tpl->getContent(
    FRONTEND_MODULES_PATH . '/' . $this->getModule() . '/Layout/Widgets/FrontArticles.tpl');
```
username_1: Ok, glad to help! I'll keep this issue open for the discussion about local scope vs global scope in widgets and actions.
username_2: I am pro local scope
Status: Issue closed
reactiveui/Pharmacist
665578330
Title: [BUG] Visual Studio 2019.16.5 hangs with Pharmacist installed
Question: username_0: Describe The Bug
After installing Pharmacist.MsBuild and Pharmacist.Common from NuGet into a newly created Xamarin project (created from the Prism template), restart Visual Studio 2019 and open the project. Visual Studio 2019 Community hangs after a few minutes at most; however, it doesn't say "unresponsive" in Windows Task Manager. There is also a Visual Studio Delay Notification in the system tray. I have to kill Visual Studio when this happens.

Steps To Reproduce
1. Create a new Xamarin Prism project
1. Install Pharmacist.MsBuild and Pharmacist.Common into the main application lib
1. Restart Visual Studio
1. Visual Studio hangs, sometimes after a few minutes, sometimes very quickly
1. Remove Pharmacist.MsBuild and Pharmacist.Common; Visual Studio does not hang

Expected Behaviour
Visual Studio should not hang.

Environment
- OS: Windows 10, version 2004
- Visual Studio Version: 2019, 16.6.5 (newly installed, no extra extensions)

Answers: username_1: Same issue with Visual Studio Version 2019, 16.8.0 and Pharmacist.MsBuild 1.9.1
Status: Issue closed
username_2: See https://www.nuget.org/packages/ReactiveMarbles.ObservableEvents.SourceGenerator/ and https://github.com/reactivemarbles/ObservableEventsSourceGenerator

This doesn't use the MSBuild system anymore; it uses a source generator instead.
smltq/jPublic
452379210
Title: Array difference result is missing the array brackets
Question: username_0: ![image](https://user-images.githubusercontent.com/50507613/58942301-16eaef00-87b0-11e9-9330-0a61bf50b08c.png)
For example, `_.difference([1,2,3],[1,2])`: the actual result is 3, but the expected result should be [3].
Answers: username_1: The returned result is [3], but arrays cannot be compared with ==; it is recommended to use _.equals to check whether two arrays are equal.
Status: Issue closed
googleapis/python-api-core
638048807
Title: Updating the pubsub breaks because it needs a newer version of protobuf
Question: username_0:

#### Environment details

- OS type and version: Windows 10
- Python version: 3.6.9
- pip version: 20.0.2
- `google-api-core` version: 1.16.0

#### Steps to reproduce

1. Install google-cloud-pubsub==1.6.0 and protobuf==3.10.0
2. Import the PushConfig object to create a push subscription

#### Stack trace

```
Traceback (most recent call last):
  File "C:\Users\Sollum\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Sollum\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\main.py", line 19, in <module>
    from src.procedures import create_app
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\procedures.py", line 15, in <module>
    from src.utils.decorators import memoize
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\src\utils\decorators.py", line 14, in <module>
    from google.cloud.pubsub_v1.proto.pubsub_pb2 import PushConfig
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\__init__.py", line 17, in <module>
    from google.cloud.pubsub_v1 import types
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\types.py", line 32, in <module>
    from google.cloud.pubsub_v1.proto import pubsub_pb2
  File "C:\Users\Sollum\PycharmProjects\sollumcloudplatform\venv\lib\site-packages\google\cloud\pubsub_v1\proto\pubsub_pb2.py", line 30, in <module>
    create_key=_descriptor._internal_create_key,
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
```
Answers: username_0: I didn't file this issue in the pubsub repository because the protobuf dependency lies here.
username_1: Same issue. What I installed:
google-cloud-pubsub==1.6.0
googleapis-common-protos==1.5.2
_protobuf in sources has 3.8.0_
At the moment googleapis-common-protos is the latest version; I guess it also needs to be updated with the latest protobuf.
Status: Issue closed
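For readers hitting this traceback: the generated `pubsub_pb2.py` calls `descriptor._internal_create_key`, which only exists in newer protobuf runtimes, so moving the protobuf pin forward resolves the import error. The exact version floor below is an assumption, not taken from this thread:
```
# requirements.txt sketch; the >=3.12.0 floor is an assumption.
google-cloud-pubsub==1.6.0
protobuf>=3.12.0   # runtime that provides descriptor._internal_create_key
```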
getgauge/gauge
93489423
Title: Support for parameter transformers
Question: username_0: Look at https://github.com/cucumber/cucumber/wiki/Step-Argument-Transforms. In my case I need to replace some variables in a file parameter (among other things), e.g.:
```
{
    "javaHome": "${env.JAVA_HOME}"
}
```
should be replaced with
```
{
    "javaHome": "/usr/share/java ..."
}
```
based on environment variables. The cleanest way would be to use something similar to the transformers mentioned above.
Answers: username_1: Closing this. Prefer environment variables to be handled in code instead.
Status: Issue closed
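For anyone looking for the in-code approach the maintainer suggests, a minimal plain-JavaScript sketch is below. The `expandEnvVars` helper, the `${env.VAR}` placeholder pattern, and the `config.json` file name mirror the example above and are not part of Gauge itself:
```
// Hypothetical helper, not a Gauge API: expand "${env.VAR}" placeholders
// in a file parameter's contents using the process environment.
const fs = require('fs');

function expandEnvVars(text) {
  return text.replace(/\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}/g, (match, name) =>
    name in process.env ? process.env[name] : match // leave unknown vars untouched
  );
}

// Usage: read the raw file parameter, substitute, then parse.
const raw = fs.readFileSync('config.json', 'utf8');
const config = JSON.parse(expandEnvVars(raw));
console.log(config.javaHome); // the value of $JAVA_HOME when it is set
```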
raspberrypilearning/edu-image
345707473
Title: crumble install generates errors
Question: username_0: The following error can be seen in the install.
```
Preparing to unpack crumble_0.25.1_all.deb ...
Unpacking crumble (0.25.2) ...
dpkg: dependency problems prevent configuration of crumble:
 crumble depends on python-wxgtk3.0; however:
  Package python-wxgtk3.0 is not installed.
 crumble depends on python-pyparsing; however:
  Package python-pyparsing is not installed.
 crumble depends on libhidapi-libusb0; however:
  Package libhidapi-libusb0 is not installed.
dpkg: error processing package crumble (--install):
 dependency problems - leaving unconfigured
```
It looks like there are a few missing dependencies:
- `python-wxgtk3.0`
- `python-pyparsing`
- `libhidapi-libusb0`

They have the sniff of Python 2 libraries about them (not confirmed, though)!
Answers: username_1: Fixed - added dependencies to install script
Status: Issue closed
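For readers hitting the same dpkg state, the conventional fix is to install the missing packages and re-run the package install. The commands below are an assumption about the kind of change that was applied, not quoted from the actual install script:
```
sudo apt-get update
sudo apt-get install -y python-wxgtk3.0 python-pyparsing libhidapi-libusb0
sudo dpkg -i crumble_0.25.1_all.deb   # or: sudo apt-get -f install
```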
rob-balfre/svelte-select
661518439
Title: Support for <label for="">
Question: username_0: In an HTML `<select>`, we can do this:
```html
<label for="example">Example input</label>
<select id="example">...</select>
```
In this case, clicking on the label focuses the select element, an important accessibility feature. However, there is no way to do this with `svelte-select`:
```html
<script>
  import Select from "svelte-select";
</script>

<label for="example">Example input</label>
<Select id="example" items={[]} />
```
Maybe we can `export let id` within the selector, which is then added to the input element?
Answers: username_1: @username_0 already possible, see docs... `inputAttributes`
Status: Issue closed
username_0: Thanks!
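For reference, the `inputAttributes` approach username_1 points to looks roughly like this. It is a sketch, assuming the prop spreads the given attributes onto the component's internal `<input>` element as the docs describe:
```html
<script>
  import Select from "svelte-select";
</script>

<!-- The id lands on the internal <input>, so clicking the label
     focuses the select, restoring the native accessibility behaviour. -->
<label for="example">Example input</label>
<Select inputAttributes={{ id: "example" }} items={[]} />
```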