openshift/machine-config-operator
399112108
Title: Fast cycling of MCs (and GC issues)
Question: username_0: Splitting this out of https://github.com/openshift/machine-config-operator/pull/273#issuecomment-454111408 https://github.com/openshift/machine-config-operator/pull/273#issuecomment-454134115 Here are relevant mcc logs from a cluster spun up by that PR:
```
I0114 22:26:10.482441 1 render_controller.go:437] Generated machineconfig master-46c05bfb9cb3d4e05608277bb2cb0a5d from 1 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 }]
I0114 22:26:11.383457 1 node_controller.go:336] Error syncing machineconfigpool worker: Empty Current MachineConfig
I0114 22:26:11.383750 1 render_controller.go:361] Error syncing machineconfigpool worker: no MachineConfigs found matching selector machineconfiguration.openshift.io/role=worker
I0114 22:26:11.589272 1 render_controller.go:437] Generated machineconfig master-7734c782bad1ead0f8ef5b6affcaf35c from 2 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 00-master-osimageurl machineconfiguration.openshift.io/v1 }]
I0114 22:26:12.250607 1 render_controller.go:361] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I0114 22:26:12.484212 1 render_controller.go:437] Generated machineconfig master-383ca913310d861fee0be89e6f1d0127 from 3 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 00-master-osimageurl machineconfiguration.openshift.io/v1 } {MachineConfig 00-master-ssh machineconfiguration.openshift.io/v1 }]
I0114 22:26:14.083445 1 node_controller.go:336] Error syncing machineconfigpool worker: Empty Current MachineConfig
I0114 22:26:14.483810 1 render_controller.go:437] Generated machineconfig master-a206c9459a44d859587164a68bb484f2 from 4 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 00-master-osimageurl machineconfiguration.openshift.io/v1 } {MachineConfig 00-master-ssh machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 }]
```
You can see that this caused very fast churn in the machineconfigs, and the previous ones were GC'd. But there were secondary masters that were still booting and expecting to be able to find that MC. This is a tricky problem: we need a way to avoid pruning "in flight" MCs passed from the MCC to Ignition.
Answers:
username_0: This problem is actually fairly new (basically since Friday) because of the near-simultaneous addition of the `-ssh` and `-kubelet` MCs, which causes MC regeneration churn, and I'm currently going to add another.
username_1: /kind bug
username_2: As a data point, the `-ssh` mc was added last Monday, so any idea why you saw the issues begin on Friday?
username_0: Sorry, I was wrong. It's likely the `-kubelet` one on top of the `-ssh` - but both are pretty recent.
username_3: Hmm, wonder if it'd work to add another owner reference from the MCD to the MC before handling it. Let me look into that.
username_3: So, just to do a brain dump on this. I'm still trying to reproduce this locally. It doesn't trigger at least when I manually create MCs in rapid succession (this is with #303 reverted). One thing that caught my eye though is that there is no code today that actually *deletes* MachineConfigs, right?
The render controller does [try to remove the pool ownerReference from previously generated MachineConfigs](https://github.com/openshift/machine-config-operator/blob/a843ec02ba711fdf5577603f9cb36a41b9181d58/pkg/controller/render/render_controller.go#L473) (that actually doesn't work right now; working on a patch to fix that), but even so, deleting the ownerRef doesn't automatically delete the object, it just orphans them (which is actually an issue; we need to figure out garbage collection).
username_0: Doesn't Kube itself do the GC https://blog.openshift.com/garbage-collection-custom-resources-available-kubernetes-1-8/ ? We should indeed figure this issue out better but it's not in the blocking path right now I'd say.
username_1: Agreed
username_3: My worry is that not knowing exactly what's deleting MCs means we don't actually know how bad this is.
username_3: Yeah agreed. Will try to get more visibility on this today. BTW, was https://github.com/openshift/machine-config-operator/pull/273#issuecomment-454111408 in a CI cluster, or local? Wondering if there's somehow a higher likelihood of hitting this in CI for whatever reason.
username_0: That was in CI. However, I think https://github.com/openshift/machine-config-operator/pull/303 has been a strong mitigation. But, we don't have much data here - https://github.com/openshift/machine-config-operator/pull/319 is attempting to gather some.
username_0: Yeah, this is still a blocking issue for #273 In a recent CI run all my masters went degraded due to failing to fetch the MC. And for some reason I seem to have lost my workers entirely (`NotReady`) and the DS pods are `NodeLost`.
username_0: In current master (i.e.
after https://github.com/openshift/machine-config-operator/pull/321/commits/0c36d1e2e872979fc902b8b1d9307657032917e0 landed):
```
W0118 13:53:05.784041 1 render_controller.go:488] Failed to delete ownerReference from 00-worker-ssh: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:05.790057 1 render_controller.go:488] Failed to delete ownerReference from 01-master-kubelet: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:05.856325 1 render_controller.go:488] Failed to delete ownerReference from 00-master-ssh: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:05.948509 1 render_controller.go:488] Failed to delete ownerReference from 00-worker: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:05.956110 1 render_controller.go:488] Failed to delete ownerReference from 00-worker-ssh: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.159607 1 render_controller.go:488] Failed to delete ownerReference from 01-master-kubelet: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.254004 1 render_controller.go:488] Failed to delete ownerReference from worker-dc91f58f81073a943eda478f6e6c9c24: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.254523 1 render_controller.go:488] Failed to delete ownerReference from 00-master-ssh: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.562560 1 render_controller.go:488] Failed to delete ownerReference from 00-worker: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.648050 1 render_controller.go:488] Failed to delete ownerReference from 00-master: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.754139 1 render_controller.go:488] Failed to delete ownerReference from worker-dc91f58f81073a943eda478f6e6c9c24: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.756698 1 render_controller.go:488] Failed to delete ownerReference from master-305cb5e168750be92391b570a00bf078: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.854664 1 render_controller.go:488] Failed to delete ownerReference from master-e9dbd67aa91af61df98032b1cbc4c3e0: json: cannot unmarshal object into Go value of type jsonpatch.Patch
W0118 13:53:06.962235 1 render_controller.go:488] Failed to delete ownerReference from 00-master: json: cannot unmarshal object into Go value of type jsonpatch.Patch
```
So yeah, that code was clearly broken.
username_1: Should https://github.com/openshift/machine-config-operator/commit/0c36d1e2e872979fc902b8b1d9307657032917e0 be reverted?
username_0: Er sorry, when I said "that code was broken" I meant *before* 0c36d1e. The new code is revealing errors that existed before.
username_0: Offhand, here's what I'm thinking right now. Today AIUI, the render controller tries a policy of "keep only latest MC for a pool". I think a pool should have something like this:
```diff
diff --git a/pkg/apis/machineconfiguration.openshift.io/v1/types.go b/pkg/apis/machineconfiguration.openshift.io/v1/types.go
index aa13fdf..cbfba9d 100644
--- a/pkg/apis/machineconfiguration.openshift.io/v1/types.go
+++ b/pkg/apis/machineconfiguration.openshift.io/v1/types.go
@@ -280,6 +280,8 @@ type MachineConfigPoolStatus struct {
 	// The current MachineConfig object for the machine pool.
 	Configuration MachineConfigPoolStatusConfiguration `json:"configuration"`
 
+	ActiveConfigurations []MachineConfig `json:"activeConfigurations"`
+
 	// Total number of machines in the machine pool.
 	MachineCount int32 `json:"machineCount"`
```
Then the render controller only queues for GC any MC which is not in that set. OK, now how do we maintain that set?
I think a first clear cut at this would be "all MCs that are either currentConfig or desiredConfig" on a node (so the node controller writes this)? However...that still leaves open the special case of the MCS providing a config to boot a node, and having the config be GC'd while it's still booting. My short term vote is: let's do the really simple thing and only GC MCs that are older than 1 hour.
username_1: In other words, there would always be a desiredConfig and currentConfig available, but anything that didn't match those would be under a GC of 1 hour?
username_0: Notes so far from a meeting on this. We don't think (but this needs to be verified) that deleting an owner ref GCs the object. And yeah, just playing with this briefly, that seems to be the case. And clearly the patching code wasn't working. We think that the MCs going away may have something more to do with some sort of race condition on cluster bringup. Abhinav suggested the operator should pause rolling out any configs until the master pool has stabilized. I'm wondering if maybe the race is something like us rebooting the master before the other ones have come online, and etcd didn't get to finish committing the rendered MC?
username_0: OK I added a quick hack here: https://github.com/openshift/machine-config-operator/pull/324/commits/836f5e0af593fc4a36d3ccbf2aa6d3e6151b33cc Also I noticed that nothing is setting `Paused` - is it intended for humans? I [guessed it is](https://github.com/openshift/machine-config-operator/pull/324/commits/e7c10eca3c953a2568d5c28a26f973b36655ef10). So ideas for better waits in the operator: we could add a new `OperatorPaused` field that we own, have it be equivalent to `Paused`, set it initially to `true`, then change it to `false` after... Hmm...what would be a good signal for "cluster is ready for MCO to start running"? Maybe when all the workers specified in the install config are online?
username_0: And based on the discussion so far we shouldn't merge https://github.com/openshift/machine-config-operator/pull/318 right? Because it would make this worse (assuming the patch works, and I believe it does). I forget, from the meeting did we have any ideas on what a good GC policy for the MCs would be? Did we land anywhere different from https://github.com/openshift/machine-config-operator/issues/301#issuecomment-455601024 ?
username_1: That sounds reasonable. If we wanted to be extra explicit about flow, we could also keep MCDs from being scheduled on the workers until MCs are generated and available via the MCS. Just thinking out loud.
username_2: I thought that we had said that the 1 hour part didn't really work?
username_3: It definitely works. :) Though I'd agree we shouldn't merge it until we have a better understanding of what's going on.
username_0: Some work on "operator pause" here https://github.com/openshift/machine-config-operator/pull/329 - WIP, not really tested (since I'd have to make a custom release payload to do that sanely); going to hope CI runs it and see.
username_0: OK I think I'm finally understanding this. No MCs are being GC'd. The problem is that until the osimage work, we were relying on the fact that the MC generated by the bootstrap was exactly the same as the MC initially generated in the cluster. So we either need to render the osimageurl during bootstrap (probably best) or also render the base template MC without the osimageurl (would seem fine). If I'm correct about this and one of those fixes works, this issue would then turn to the other problem of when we GC MCs.
username_1: :+1: yes! The original architecture passed to us was that the MC from bootstrap would end up being the same. This can't ALWAYS be counted on, since it's possible someone may use an old MC when installing a new cluster, but in those cases the extra reboot and pivot to the updated MC would be expected and fine. I'm good with either or both of the above.
username_0: Closing in favor of https://github.com/openshift/machine-config-operator/issues/354 since there wasn't actually any GC https://github.com/openshift/machine-config-operator/issues/301#issuecomment-455804265
Status: Issue closed
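The short-term policy discussed in this thread — keep anything referenced by a node's currentConfig/desiredConfig (the proposed ActiveConfigurations set) and only prune rendered configs older than one hour — can be sketched as follows. This is a minimal illustration, not MCO code; the helper name `configs_to_gc` and the data shapes are assumptions.

```python
from datetime import datetime, timedelta, timezone

# One-hour grace period, per the "only GC MCs that are older than 1 hour" idea,
# so configs still "in flight" between the MCC and Ignition survive.
GC_GRACE = timedelta(hours=1)

def configs_to_gc(configs, active, now):
    """Names of rendered configs that are safe to prune: not referenced by any
    node's currentConfig/desiredConfig and older than the grace period.

    configs: dict mapping config name -> creation timestamp
    active:  set of names in the (hypothetical) ActiveConfigurations set
    """
    return sorted(
        name
        for name, created in configs.items()
        if name not in active and now - created > GC_GRACE
    )

now = datetime(2019, 1, 14, 23, 30, tzinfo=timezone.utc)
configs = {
    "master-46c05bfb": now - timedelta(hours=2),     # old and unreferenced
    "master-7734c782": now - timedelta(minutes=10),  # too young: may be in flight
    "master-a206c945": now - timedelta(hours=3),     # old but still active
}
print(configs_to_gc(configs, {"master-a206c945"}, now))  # ['master-46c05bfb']
```

The grace period papers over exactly the race described above: a secondary master booting against a config that the controller has already superseded.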
ga-wdi-exercises/to_oz
238912815
Title: CLI HW (<NAME>)
Question: username_0:
```
$ cd homework/
Tue Jun 27 12:24:38 ~/Documents/wdi/homework $ mkdir House
Tue Jun 27 12:24:43 ~/Documents/wdi/homework $ cd House/
Tue Jun 27 12:24:50 ~/Documents/wdi/homework/House $ touch Dorothy Toto
Tue Jun 27 12:24:58 ~/Documents/wdi/homework/House $ ls
Dorothy Toto
Tue Jun 27 12:24:59 ~/Documents/wdi/homework/House $ cd ../
Tue Jun 27 12:25:30 ~/Documents/wdi/homework $ mkdir Oz
Tue Jun 27 12:25:35 ~/Documents/wdi/homework $ cd Oz
Tue Jun 27 12:25:39 ~/Documents/wdi/homework/Oz $ touch "Good Witch of the North"
Tue Jun 27 12:25:48 ~/Documents/wdi/homework/Oz $ ls
Good Witch of the North
Tue Jun 27 12:25:49 ~/Documents/wdi/homework/Oz $ touch "Wicked Witch of the East" "Good Witch of the South" "Wicked Witch of the West"
Tue Jun 27 12:26:08 ~/Documents/wdi/homework/Oz $ ls
Good Witch of the North Good Witch of the South Wicked Witch of the East Wicked Witch of the West
Tue Jun 27 12:26:09 ~/Documents/wdi/homework/Oz $ rm Wicked\ Witch\ of\ the\ East
Tue Jun 27 12:26:39 ~/Documents/wdi/homework/Oz $ cd ../House/
Tue Jun 27 12:26:48 ~/Documents/wdi/homework/House $ mv Dorothy ../Oz/
Tue Jun 27 12:27:03 ~/Documents/wdi/homework/House $ ls
Toto
Tue Jun 27 12:27:07 ~/Documents/wdi/homework/House $ cd ../Oz/
Tue Jun 27 12:27:25 ~/Documents/wdi/homework/Oz $ ls
Dorothy Good Witch of the North Good Witch of the South Wicked Witch of the West
Tue Jun 27 12:27:27 ~/Documents/wdi/homework/Oz $ touch Scarecrow "Tin Man" "Cowardly Lion"
Tue Jun 27 12:27:48 ~/Documents/wdi/homework/Oz $ mkdir "Emerald City"
Tue Jun 27 12:27:55 ~/Documents/wdi/homework/Oz $ mv Scarecrow "Tin Man" "Cowardly Lion" Emerald\ City/
Tue Jun 27 12:28:13 ~/Documents/wdi/homework/Oz $ ls
Dorothy Good Witch of the North Wicked Witch of the West Emerald City Good Witch of the South
Tue Jun 27 12:28:14 ~/Documents/wdi/homework/Oz $ cd Emerald\ City/
Tue Jun 27 12:28:19 ~/Documents/wdi/homework/Oz/Emerald City $ ls
Cowardly Lion Scarecrow Tin Man
Tue Jun 27 12:28:20 ~/Documents/wdi/homework/Oz/Emerald City $ cd ../
Tue Jun 27 12:28:41 ~/Documents/wdi/homework/Oz $ touch "Flying Monkeys"
Tue Jun 27 12:28:48 ~/Documents/wdi/homework/Oz $ rm Wicked\ Witch\ of\ the\ West
Tue Jun 27 12:28:53 ~/Documents/wdi/homework/Oz $ cd Emerald\ City/
Tue Jun 27 12:37:59 ~/Documents/wdi/homework/Oz/Emerald City $ echo "diploma" Scarecrow
diploma Scarecrow
Tue Jun 27 12:38:08 ~/Documents/wdi/homework/Oz/Emerald City $ echo "diploma" > Scarecrow
Tue Jun 27 12:38:17 ~/Documents/wdi/homework/Oz/Emerald City $ echo "heart shaped watch" > Tin\ Man
Tue Jun 27 12:38:27 ~/Documents/wdi/homework/Oz/Emerald City $ echo "medal" > Lion
Tue Jun 27 12:38:34 ~/Documents/wdi/homework/Oz/Emerald City $ ls
Cowardly Lion Lion Scarecrow Tin Man
Tue Jun 27 12:38:35 ~/Documents/wdi/homework/Oz/Emerald City $ echo "medal" > Cowardly\ Lion
Tue Jun 27 12:38:49 ~/Documents/wdi/homework/Oz/Emerald City $
```
Answers:
username_1: Good Job!
Status: Issue closed
IF11I/notes
344583373
Title: Frontend angular routes error Question: username_0: Built frontend will get some routes by angular eg.: /componenttypes for the page with componenttypes. This is fine as long as nobody refreshes the page, due to the fact that there is no actual reload, if angular serves the pages. BUT if the built frontend is deployed on nginx server these routes cannot be assigned anymore, keep in mind to add location parts to every of these! Answers: username_1: Have a look at the [documentation](https://angular.io/guide/deployment#server-configuration). This should fix the problem.
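For nginx specifically, the usual fix is a `try_files` fallback rather than one `location` block per route: any path that does not match a real file is handed to `index.html`, so the Angular router can resolve routes like `/componenttypes` even on a hard refresh. A sketch — the `root` path and server block below are assumptions about the deployment, not taken from this repo:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;   # wherever the built frontend is copied

    location / {
        # Serve the requested file if it exists; otherwise fall back to
        # index.html so the client-side router handles the route.
        try_files $uri $uri/ /index.html;
    }
}
```

This is also what the Angular deployment guide linked above recommends for "server configuration".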
home-assistant/core
1081917100
Title: AirPurifier is disabled after HA upgrade to 2021.12.1 (2021.12.2 does not help either)
Question: username_0: ### The problem
Hi! I've upgraded my HA (in docker) to 2021.12.1 and I saw my Air Purifier Pro (zhimi.airpurifier.v6) became disabled by Config entry. I tried to upgrade python-miio to 0.5.9.2. No success. Just now I upgraded HA to 2021.12.2; the Air Purifier is still disabled. Need help.
### What version of Home Assistant Core has the issue?
2021.12.2
### What was the last working version of Home Assistant Core?
2021.11.5
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
xiaomi-miio
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/xiaomi_miio/#air-purifier-pro-zhimiairpurifierv6
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_
Answers:
username_1: Please [enable debug logging](https://www.home-assistant.io/integrations/logger/) for the integration and `miio` and look if there are any hints why this is happening.
username_0: -- <NAME>
username_1: From that log it looks like it's working just fine (getting updates), so I'm not really sure why it would be disabled :o
username_0: -- <NAME>
username_2:
```
Logger: homeassistant.components.number
Source: components/xiaomi_miio/number.py:254
Integration: Number (documentation, issues)
First occurred: 15:36:04 (3 occurrences)
Last logged: 15:38:38

Error while setting up xiaomi_miio platform for number
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 249, in _async_setup_platform
    await asyncio.shield(task)
  File "/usr/src/homeassistant/homeassistant/components/xiaomi_miio/number.py", line 254, in async_setup_entry
    entity_reg = hass.helpers.entity_registry.async_get()
TypeError: async_get() missing 1 required positional argument: 'hass'
```
username_3: Closed via https://github.com/home-assistant/core/pull/63446
Status: Issue closed
home-assistant/core
621942873
Title: prezzibenzina integration Detected I/O inside the event loop
Question: username_0: ## The problem
The prezzibenzina integration is giving these lines in the logs:
```txt
2020-05-20 19:03:16 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue for prezzibenzina doing I/O at homeassistant/components/prezzibenzina/sensor.py, line 54: info = client.get_by_id(station)
2020-05-20 19:03:18 WARNING (MainThread) [homeassistant.util.async_] Detected I/O inside the event loop. This is causing stability issues. Please report issue for prezzibenzina doing I/O at homeassistant/components/prezzibenzina/sensor.py, line 119: self._data = self._client.get_by_id(self._station)[self._index]
```
## Environment
- Home Assistant Core release with the issue: 0.110.0
- Last working Home Assistant Core release (if known):
- Operating environment (Home Assistant/Supervised/Docker/venv): Supervised
- Integration causing this issue: prezzibenzina
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/prezzibenzina/
Status: Issue closed
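The warning means a synchronous, blocking call (`client.get_by_id`) is being invoked directly on the event loop. The standard remedy is to run the blocking call in an executor thread. A minimal self-contained sketch — the `get_by_id` stub below is a stand-in for the real PrezziBenzina client, not its actual API:

```python
import asyncio
import time

def get_by_id(station):
    # Stand-in for the blocking client call that performs network I/O.
    time.sleep(0.05)
    return [{"station": station, "fuel": "gasoline", "price": 1.52}]

async def async_update(station):
    loop = asyncio.get_running_loop()
    # Push the blocking call onto a worker thread so the event loop stays
    # responsive; inside Home Assistant, hass.async_add_executor_job wraps
    # this same mechanism for integration code.
    data = await loop.run_in_executor(None, get_by_id, station)
    return data[0]

result = asyncio.run(async_update("12345"))
print(result["price"])  # 1.52
```

The event loop is never blocked during `time.sleep` here, which is exactly the property the `Detected I/O inside the event loop` check enforces.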
cssconf/2015.cssconf.eu
104454371
Title: Finalize content on info page
Question: username_0: @retrospekt @username_1 Can you review the info page and finalize it? http://2015.cssconf.eu/info/ Feel free to move everything around, remove old stuff, add links to other pages, blog posts etc etc… let's make this page as useful as possible :)
New information that should be added:
- Closing Party (or Closing event?) will start right after the end of the schedule, takes place at Radialsystem
- There will be breakfast, lunch, dinner. All with veggie options.
- Maybe mention the hashtag #cssconfeu, add a link to lanyrd
Answers:
username_1: @username_0 @retrospekt Hi! I have added some info to the infopage in a new branch. Maybe you could have a look before I open a PR? https://github.com/cssconf/2015.cssconf.eu/tree/info-page
Status: Issue closed
vespa-engine/vespa
503287933
Title: Feed performance for array of map/struct Question: username_0: To feed updates at maximum speed, Vespa must not look up the document in the summary store before updating This is not possible if using array of struct/map, and feed performance is hence lower https://docs.vespa.ai/documentation/writing-to-vespa.html should be updated to reflect this, too
rust-bitcoin/rust-bitcoin
454423838
Title: ASM encoding for OP_CLTV / OP_CSV Question: username_0: `OP_CLTV`/`OP_CSV` are currently encoded to asm string as `OP_NOP2`/`OP_NOP3`. This was brought up previously by #150 and now again in the context of esplora at https://github.com/Blockstream/esplora/issues/98. Is keeping it like that an intentional decision? Should I do the conversion on esplora's side, or could this be added here? Answers: username_1: I agree. I fixed it here: https://github.com/rust-bitcoin/rust-bitcoin/pull/282 Status: Issue closed
theme-next/theme-next.org
441420999
Title: Autoinstall script issues
Question: username_0: Getting the following errors after a fresh auto install.
![image](https://user-images.githubusercontent.com/37770/57329251-0273f200-7131-11e9-8c38-2c9b3e612b63.png)
```
127.0.0.1/:1 Refused to apply style from 'http://127.0.0.1:4000/lib/fancybox/source/jquery.fancybox.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
2(index):1482 GET http://1172.16.17.32:4000/lib/fancybox/source/jquery.fancybox.pack.js net::ERR_ABORTED 404 (Not Found)
utils.js?v=7.1.1:38 Uncaught TypeError: $(...).fancybox is not a function
    at Object.wrapImageWithFancyBox (utils.js?v=7.1.1:38)
    at HTMLDocument.<anonymous> (next-boot.js?v=7.1.1:35)
    at j (index.js?v=2.1.3:2)
    at Object.fireWith [as resolveWith] (index.js?v=2.1.3:2)
    at Function.ready (index.js?v=2.1.3:2)
    at HTMLDocument.I (index.js?v=2.1.3:2)
wrapImageWithFancyBox @ utils.js?v=7.1.1:38
(anonymous) @ next-boot.js?v=7.1.1:35
j @ index.js?v=2.1.3:2
fireWith @ index.js?v=2.1.3:2
ready @ index.js?v=2.1.3:2
I @ index.js?v=2.1.3:2
index.js?v=2.1.3:4
```
Answers:
username_1: Run `hexo-theme-next-autodeploy.sh`, and `fancybox` will be installed automatically
Status: Issue closed
username_1: Script updated https://github.com/theme-next/theme-next.org/commit/39410dcff4c9f963a9f2d2712b8fdc8bfd217572
username_2: @username_1 In fact, I'd prefer the site to be a complete project that can be run right after cloning, rather than needing a shell script to finish the setup. 😂
username_1: The current process is the same as the Hexo website: https://github.com/hexojs/site and vuejs.org https://github.com/vuejs/vuejs.org/ Maybe we can use a command like `npm start` instead of `sh hexo-theme-next-autoinstall.sh`
username_1: The new process is
```
git clone https://github.com/next-theme/theme-next-docs
cd theme-next-docs
npm install
npx hexo server
```
https://github.com/next-theme/theme-next-docs#getting-started See also https://github.com/theme-next/theme-next.org/commit/39410dcff4c9f963a9f2d2712b8fdc8bfd217572
lh3/seqtk
43655441
Title: Tag a recent stable release Question: username_0: Release 1.0 corresponds with r31, and the current commit of seqtk is r68. Please tag a recent stable release. Answers: username_1: Pretty please? :) username_2: Triple please :) username_0: With :cherries: on top? Status: Issue closed username_3: Done. `v1.1` tag applied. username_0: Pull request to update Homebrew-science over here: https://github.com/Homebrew/homebrew-science/pull/3506
kalexmills/github-vet-tests-dec2020
759080590
Title: stephenafamo/ci-bot: processors.go; 41 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/stephenafamo/ci-bot/blob/4a44b0bc3cf2d6701eb158d1ca5443b0314c58fa/processors.go#L25-L65) <details> <summary>Click here to show the 41 line(s) of Go which triggered the analyzer.</summary> ```go for build := range s.Builds { go func() { ts, attemptErr := sendAttemptDeployMessage(build) if attemptErr != nil { log.Println(attemptErr) return } url, deployErr := deploy(build) if deployErr != nil { log.Println(deployErr) failErr := sendFailedDeployMessage(build, ts, deployErr) if failErr != nil { log.Println(failErr) } return } err := sendDeploySuccessMessage(build, ts, url) if err != nil { log.Println(err) return } payload, errs := sendOwnerMessages(build, url) if len(errs) > 0 { log.Println(errs) return } errs = sendQaMessages(build, url, payload) if len(errs) > 0 { log.Println(errs) return } }() } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 4a44b0bc3cf2d6701eb158d1ca5443b0314c58fa
RickStrahl/Westwind.Globalization
921480925
Title: Net45 LocalizationAdmin/index.html getting no 'resources'
Question: username_0: Hi, the problem was found in "DbResXConverter.cs" on line 942, in `public Dictionary<string, object> GetResourcesNormalizedForLocale(ResourceManager resourceManager, string localeId)`. The string `localeId` is filled with "auto,de", but I cannot deduce the origin. I have temporarily fixed it with `localeId = localeId.Replace("auto,", "");`
![Westwind1](https://user-images.githubusercontent.com/85940221/122074200-4e59f080-cdf9-11eb-84b1-1ff08bc9eca5.JPG)
![Westwind2](https://user-images.githubusercontent.com/85940221/122074223-531ea480-cdf9-11eb-9454-fd5a98485fed.JPG)
Thx Thomas
palantir/gradle-docker
420046712
Title: dockerRun doesn't allow dependsOn Question: username_0: the dockerRun task should support the dependsOn flag so that you can ensure that you build the image and then run it. Answers: username_1: However, I was able to add dependencies to the underlying task like this: ```groovy tasks.dockerRun.dependsOn 'anotherTask' ``` username_2: Awesome hint. Using this schema, it does work the other way, too. ```groovy anotherTask.dependsOn tasks.dockerRun ``` username_3: I ran into this as well, but in my case I want the image version to be a variable that is set by 'anotherTask'. No matter what I try, it appears that the dockerRun `image` field is always initialized before the task that sets the variable is executed. Any ideas?
apache/tvm
925331594
Title: [ONNX] ssd-mobilenetv1 fail to build Question: username_0: The discussion I saw is at https://discuss.tvm.apache.org/t/failures-using-many-of-onnx-model-zoo-models/10268 I used a script like https://gist.github.com/username_1/9348db919edb105912b94b84792dd7d3 to build ssd-mobilenetv1, but some errors appeared. tvm branch (commit 1fac10b3) llvm version; 12.0.1 OS info: Ubuntu 20.10 (Groovy Gorilla) error message: ``` ==> https://github.com/onnx/models/raw/master/vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_10.tar.gz <== Loading ssd_mobilenet_v1/ssd_mobilenet_v1.onnx ... Input shapes: {'image_tensor:0': (1, 383, 640, 3)} Importing graph from ONNX to TVM Relay IR ... /home/chlu/tvm/python/tvm/relay/frontend/onnx.py:2572: UserWarning: Using scan outputs in a loop with strided slice currently may cause errors during compilation. warnings.warn( [14:48:48] ../src/runtime/threading_backend.cc:217: Warning: more than two frequencies detected! Compiling graph from Relay IR to llvm ... 
Caught an exception Traceback (most recent call last):
37: TVMFuncCall
36: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::vm::VMCompiler::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
35: tvm::relay::vm::VMCompiler::Lower(tvm::IRModule, tvm::runtime::Map<tvm::Integer, tvm::Target, void, void> const&, tvm::Target const&)
34: tvm::relay::vm::VMCompiler::OptimizeModule(tvm::IRModule, tvm::runtime::Map<tvm::Integer, tvm::Target, void, void> const&, tvm::Target const&)
33: tvm::transform::Pass::operator()(tvm::IRModule) const
32: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
31: tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
30: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
29: tvm::relay::transform::FunctionPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
28: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Function (tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::AlterOpLayout()::{lambda(tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)#1}>(tvm::relay::transform::AlterOpLayout()::{lambda(tvm::relay::Function, tvm::IRModule, tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
27: tvm::relay::alter_op_layout::AlterOpLayout(tvm::RelayExpr const&)
26: tvm::relay::ForwardRewrite(tvm::RelayExpr const&, tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)> const&, std::function<tvm::runtime::ObjectRef (tvm::relay::Call const&)>, std::function<tvm::RelayExpr (tvm::RelayExpr const&)>)
25: tvm::relay::MixedModeMutator::VisitExpr(tvm::RelayExpr const&)
24: tvm::relay::MixedModeMutator::VisitLeaf(tvm::RelayExpr const&)
23: _ZN3tvm5relay16MixedModeMutator17DispatchVisitExprERKNS_9RelayExp
22: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
21: tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
20: _ZZN3tvm5relay11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_
19: tvm::relay::ExprMutator::VisitExpr_(tvm::relay::FunctionNode const*)
18: tvm::relay::MixedModeMutator::VisitExpr(tvm::RelayExpr const&)
17: tvm::relay::MixedModeMutator::VisitLeaf(tvm::RelayExpr const&)
16: _ZN3tvm5relay16MixedModeMutator17DispatchVisitExprERKNS_9RelayExp
15: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
14: tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
13: _ZZN3tvm5relay11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_
12: tvm::relay::ExprMutator::VisitExpr_(tvm::relay::LetNode const*)
11: tvm::relay::MixedModeMutator::VisitExpr(tvm::RelayExpr const&)
10: tvm::relay::MixedModeMutator::VisitLeaf(tvm::RelayExpr const&)
9: _ZN3tvm5relay16MixedModeMutator17DispatchVisitExprERKNS_9RelayExp
8: tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)
7: tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)
6: _ZZN3tvm5relay11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_
5: tvm::relay::MixedModeMutator::VisitExpr_(tvm::relay::CallNode const*)
4: tvm::relay::ForwardRewriter::Rewrite_(tvm::relay::CallNode const*, tvm::RelayExpr const&)
3: tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*) const
2: tvm::RelayExpr tvm::relay::LayoutRewriter<tvm::relay::alter_op_layout::AlterTransformMemorizer>(tvm::relay::Call const&, tvm::runtime::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)
1: tvm::relay::alter_op_layout::AlterTransformMemorizer::CallWithNewLayouts(tvm::relay::Call const&, std::vector<tvm::RelayExpr, std::allocator<tvm::RelayExpr> > const&)
0: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) [clone .cold]
[Truncated]
    vm = VirtualMachine(vm_exec, dev)
    vm.set_input("main", **inputs)
    print(f"Running inference...")
    vm.run()
except KeyboardInterrupt:
    raise
except Exception as ex:
    print(f'Caught an exception {ex}')
    result = 'not ok'
else:
    print(f'Succeeded!')
    result = 'ok'
summary.append((result, url))

print()
print('Summary:')
for result, url in summary:
    print(f'{result}\t- {url}')
```
Answers: username_0: ssd-mobilenetv1 model: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/ssd-mobilenetv1
username_1: This is a very strange model in that there are multiple ONNX `Loop` nodes for no good reason. In particular, there is a loop at the beginning that does input image preprocessing, and for some reason the output of the loop is already dynamic in all dimensions.
So the input to the first convolution op is already dynamic in the H and W dimensions, which results in the error above.
```
...
%37 = subtract(%36, meta[relay.Constant][5] /* ty=Tensor[(1, 1, 1, 1), float32] */) /* ty=Tensor[(?, ?, ?, ?), float32] */;
%38 = nn.conv2d(%37, meta[relay.Constant][6] /* ty=Tensor[(32, 3, 3, 3), float32] */, strides=[2, 2], padding=[0, 0, 1, 1], kernel_size=[3, 3]) /* ty=Tensor[(?, 32, ?, ?), float32] */;
...
```
I have a feeling that our ONNX `Loop` support does not preserve static shape information precisely, since it does not make sense to have a dynamic input at the first conv2d op after the preprocessing loop. Also this could be one of the reasons MaskRCNN import does not work well with ONNX, since it has a loop and compilation fails at the dynamic H and W dimensions, which should not exist. @jwfromm @mbrookhart
username_0: Thank you for your reply. I have also observed this, but I am not sure whether it is because the ssd-mobilenet model inherently needs dynamic shapes (e.g., predicting bounding boxes). If you can, I'll help as much as possible, though my ability is probably limited because I am a beginner :)
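The "shape poisoning" described in this thread generalizes: once a loop output is dynamic, static shape inference cannot recover anything downstream of it. A stdlib-only toy sketch of that propagation (a hand-rolled model for illustration, not TVM's actual shape inference; kernel and stride match the conv2d above, with total padding of 1 per spatial dim assumed):

```python
# Toy static-shape inference: None marks a dynamic dimension.
# Once a dimension is dynamic, everything computed from it is too.

def conv2d_out(size, kernel, stride, pad_total):
    """Spatial output size of a conv; a dynamic input dim stays dynamic."""
    if size is None:
        return None
    return (size + pad_total - kernel) // stride + 1

def conv_shape(nchw, out_channels=32, kernel=3, stride=2, pad_total=1):
    n, c, h, w = nchw
    return (n, out_channels,
            conv2d_out(h, kernel, stride, pad_total),
            conv2d_out(w, kernel, stride, pad_total))

static_in = (1, 3, 383, 640)     # the declared model input
dynamic_in = (1, 3, None, None)  # what the ONNX Loop hands downstream

print(conv_shape(static_in))   # every dim known
print(conv_shape(dynamic_in))  # H and W stay unknown forever after
```

This is why the Relay dump above shows `Tensor[(?, 32, ?, ?)]` at the first conv2d: the preprocessing loop erased H and W, and nothing after it can bring them back.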
devinit/DIwebsite-redesign
664335915
Title: 'Audio and visual media pages' can't display in featured content on homepage Question: username_0: ## I'm submitting a ... Check one of the following options with "x" and add the appropriate label to the issue as well <pre><code> [x ] Bug report <!-- Please search this repo for a similar issue or PR before submitting --> [ ] Regression (behaviour that used to work and stopped working in a new release) </code></pre> **Describe the Issue** A clear and concise description of what the issue is. Not quite a bug, but the new 'Audio and visual media pages' format doesn't seem to be one we can choose as featured content on the homepage - it would be great if this was an option. **To Reproduce** Replace the content below with the steps to reproduce the behaviour. 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error Go to the homepage in the CMS, and try to add our new GHA animation as featured content, it doesn't appear in the menu **Expected Behaviour** A clear and concise description of what you expected to happen Would be good if audio/visual content could be chosen as featured content **Screenshots/GIF** If applicable, add screenshots or a GIF to help explain your problem. **Desktop (please complete the following information):** - OS: [e.g. iOS] - Browser [e.g. chrome, safari] - Version [e.g. 22] **Smartphone (please complete the following information):** - Device: [e.g. iPhone6] - OS: [e.g. iOS8.1] - Browser [e.g. stock browser, safari] - Version [e.g. 22] **Environment/Server:** <pre><code> - [ ] Production - [ ] Staging - [ ] Development </code></pre> <!-- If possible, check whether this is still an issue on the test server first --> **Additional Context** Add any other context about the problem here. 
Requested Delivery Date: dd/MMM/YYYY <!-- This gives us a better sense of the urgency of the issue compared to labels If possible, We'll respond with the expected delivery date --> Ideally asap, but we can use the legacy format if needed to share the GHA event :)
Answers: username_1: @username_0 what exactly would you like to appear on the home page for this type of page? @username_2 I don't think the AudioVisualMedia page went through FF for its design... makes such enhancements tricky.
username_2: Yeah - we just adapted a general page. Ideally we don't want to have to go to Ben's team for every design tweak; it is helpful to be able to do simple stuff with our own in-house team.
username_1: @username_0 this is now possible on staging ![image](https://user-images.githubusercontent.com/5672438/88694481-46be2a00-d109-11ea-9389-8636c82e377b.png)
username_3: @username_1 Weirdly, this didn't seem to work when I tried it: ![image](https://user-images.githubusercontent.com/55877063/88819847-7415e100-d1b8-11ea-9ace-ca7bd8b1edee.png)
username_1: Tried it just now on staging and it's working as expected ... @username_2 could you please give it a go? Edit Home Page -> Hero Section -> Featured page -> Choose another page; `Audio and Visual Media Page` should be listed in the alert at the top of the popup
username_2: @username_1 it doesn't show in the alert list for me either, I see the same thing as Alice posted.
username_1: hmm ... tested on two other browsers and incognito, I see it's fine ... so, probably a cache issue.
username_3: I cleared my cache and got the same behaviour @username_1 Any luck @username_2?
username_1: I have now added an Audio & Visual Media page as the featured page ... do you see it on the home page? http://staging.devinit.org/ @username_3
username_3: Yes!
Status: Issue closed
flutter/website
972149627
Title: [PAGE ISSUE]: 'Export fonts from a package'
Question: username_0:
### Page URL
https://flutter.dev/docs/cookbook/design/package-fonts.html
### Page source
https://github.com/flutter/website/tree/master/src/docs/cookbook/design/package-fonts.md
### Describe the problem
When running the code sample, lint info messages are reported.
### Expected fix
Update the sample.
### Additional context
_No response_
Status: Issue closed
Answers: username_0: closing in favor of https://github.com/flutter/website/issues/6155
cityofaustin/atd-data-tech
649167795
Title: VZD | Data Inconsistency between Crash and Unit fatality counts
Question: username_0: With the creation of the `atd_fatality_count` field on the crash table, which populates from `death_cnt` but is editable, AND with the use of the `death_cnt` field from the units table in some VZV widgets, there are now scenarios where counts on crashes and units do not add up in our reporting.
As a quick fix for this problem we propose to make the death count & Sus serious injury fields editable on all tables (Primary Person, Person, Unit, & Crash) and rely on VZE users to make them consistent if they get out of sync. In the future, we can decide on a proper data control flow to ensure counts are consistent and recalculated.
Status: Issue closed
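The consistency rule being proposed can be expressed as a simple validation pass. A sketch using the field names mentioned in this issue (the real records live in the Vision Zero database, not in Python dicts, so this is purely illustrative):

```python
# Sketch: flag crashes whose unit-level death counts no longer
# roll up to the crash-level editable atd_fatality_count.
def find_mismatches(crashes, units):
    """Return crash_ids where sum(unit death_cnt) != crash atd_fatality_count."""
    sums = {}
    for u in units:
        sums[u["crash_id"]] = sums.get(u["crash_id"], 0) + u["death_cnt"]
    return [c["crash_id"] for c in crashes
            if sums.get(c["crash_id"], 0) != c["atd_fatality_count"]]

crashes = [{"crash_id": 1, "atd_fatality_count": 2},
           {"crash_id": 2, "atd_fatality_count": 1}]
units = [{"crash_id": 1, "death_cnt": 1},
         {"crash_id": 1, "death_cnt": 1},
         {"crash_id": 2, "death_cnt": 0}]

print(find_mismatches(crashes, units))  # crash 2 is out of sync
```

A periodic check like this would let VZE users find and reconcile out-of-sync records until a proper recalculation flow exists.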
CraigBryan/csi4107-tweet-search
56960339
Title: Interpreting Hashtags
Question: username_0: We should probably create a Tokenizer/Analyzer that interprets hashtags specifically and adds weighting to those terms. We could find a way to add weights to these terms while we index the documents (extend the Analyzer/Tokenizer), or we can inject this function into our Similarity Computation somehow (we would have to implement our own Similarity class)
Answers: username_1: An interesting step would be to see what Lucene does now with hashtags. I assume it just strips the '#' and parses the word normally.
username_0: Hey, so I've been putting in some work this evening and I found that extending the analyzer/tokenizer is a crazy task. What I did instead for hashtags was to create a separate createHashtagIndex() method, parse the tweet for words that start with a hashtag, and then use a whitespace analyzer to create an index with hashtag terms. What are your thoughts on a system that would supplement the existing scoring with this hashtag index?
username_0: I'm going to first try to implement a scoring that minimizes rank over both fields.
username_1: Hmm. I think it'd be really cool, does it remove the hashtag words from the normal index? Also, I don't think I have time to work on it, but I'm happy to let you work on it as I tidy up the code, comment, and write a bunch of the report. Does that sound amenable?
username_0: The hashtag words are still in the normal index as well at this point. Yeah, this works fine, but could you also test the implementation after I commit it, i.e. just toggle the functionality on/off and compare the results? I still can't run trec_eval, not even a fresh download from the command line. I should be able to push the changes tonight, or tomorrow morning.
username_1: Haha, I can do that. I've got command line arguments for everything else now.
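The separate hashtag pass described above boils down to pulling `#`-prefixed tokens out of each tweet before handing them to a simple analyzer. A minimal, library-free sketch of that extraction step (the project itself uses Lucene and a whitespace analyzer; this just illustrates the idea):

```python
import re

def extract_hashtags(tweet):
    """Return hashtag terms (lowercased, '#' stripped) for a separate index."""
    return [m.group(1).lower() for m in re.finditer(r"#(\w+)", tweet)]

tweet = "Big win tonight #Raptors #NBA playoffs"
print(extract_hashtags(tweet))  # ['raptors', 'nba']
```

Keeping these terms in their own index (while also leaving them in the normal index, as discussed) lets the scorer boost documents whose hashtags match query terms without rewriting the main analyzer.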
username_0: I just committed the hashtag indexing methods (in Query Processor). They seem to be working and generating new information/scoring, but I'm not sure of the quality of the results
Status: Issue closed
username_1: Hey, question about part of your code. Why are you inverting your comparison here?:
```
@Override
public int compare(final IDandScore a, final IDandScore b) {
  return (-1) * a.score.compareTo(b.score);
}
```
It's in getResults of the QueryProcessor.
username_0: The regular float comparator returns a negative value if float a is less than float b; in other words it would sort the floats in ascending order (lowest first, highest last). By reversing this comparator we end up sorting in the opposite order (highest first, lowest last)
username_1: Ok great, makes sense. I've reimplemented that in a nicer way, and I'll make sure it's reversed.
Status: Issue closed
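The comparator inversion discussed above — negate an ascending comparison to get highest-score-first ordering — looks the same in any language. A small Python sketch of the same trick (illustrative field names, not the project's actual `IDandScore` class):

```python
from functools import cmp_to_key

def by_score_desc(a, b):
    # Negating the natural (ascending) comparison sorts descending.
    return -((a["score"] > b["score"]) - (a["score"] < b["score"]))

results = [{"id": "t1", "score": 0.3},
           {"id": "t2", "score": 0.9},
           {"id": "t3", "score": 0.5}]
ranked = sorted(results, key=cmp_to_key(by_score_desc))
print([r["id"] for r in ranked])  # ['t2', 't3', 't1']
```

The key contract is only the sign of the return value: negative puts `a` first, so flipping the sign flips the whole ordering.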
FHIR/sushi
553854534
Title: Examples not showing up in tab on profile pages Question: username_0: Examples are not showing up on the pages of the profiles they are examples of. This is because we're generating the ImplementationGuide resource descriptions of examples with `exampleBoolean` instead of `exampleCanonical`. We need to update this to get the desired functionality. See: http://hl7.org/fhir/R4/implementationguide-definitions.html#ImplementationGuide.definition.resource.example_x_ Answers: username_0: Related to standardhealth/fsh-mcode#28 Status: Issue closed username_0: Re-opening until it is released. username_0: Fixed in 0.6.2 Status: Issue closed
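Per the linked FHIR R4 definition, the fix amounts to swapping the choice element in the generated ImplementationGuide resource entry. A hand-written illustrative fragment (resource names and canonical URL are made up):

```json
{
  "reference": { "reference": "Observation/example-a" },
  "exampleBoolean": true
}
```

marks the resource as a generic example, whereas

```json
{
  "reference": { "reference": "Observation/example-a" },
  "exampleCanonical": "http://example.org/StructureDefinition/my-profile"
}
```

ties the example to a specific profile, which is what lets the IG publisher list it on that profile's page.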
postmanlabs/postman-app-support
322182643
Title: Run monitor API throws parentMissingError
Question: username_0: I followed the [API document](https://docs.api.getpostman.com/#5b277ca0-7114-e04e-f1f5-246fbbd6d973) to run a monitor, and it threw a weird error saying parentMissingError. Googling `parentMissingError` turned up nothing related ☹️.
## Screenshots
![image](https://user-images.githubusercontent.com/4319332/39909773-cd2f0f20-5526-11e8-912a-26dbb21b93c9.png)
Answers: username_1: Hey @username_0, Can you please email us at <EMAIL> with the `monitor_uid`?
username_0: @username_1 Sent already. Please check.
Status: Issue closed
username_2: Closed due to inactivity on the support ticket
Jaspero/jms
455969371
Title: Adjust field styles
Question: username_0: The fields have different paddings/margins and overall styles. Genos glyco has a few great example schemas if you need them. Could you please go over them and adjust? The image and gallery fields especially need some love.
Answers: username_1: I'll leave this open for now; it would be best to make one page with all possible fields so I can check them.
username_0: I think this one needs a bit of cleanup ![image](https://user-images.githubusercontent.com/4489834/59549851-ee0eea80-8f63-11e9-827d-7bf833b3064c.png) It should just be one field, not a field within a field.
Status: Issue closed
robolectric/robolectric
1181911438
Title: Add setNetworkSpecifier in ShadowNetworkCapabilities Question: username_0: Some apps use NetworkCapabilities.getNetworkSpecifier to verify if a network is a particular type. To unit test related methods, there should be the ability to set NetworkSpecifier with ShadowNetworkCapabilities.
carrot/recycler-core
178306973
Title: Multiple models in a RecyclerView
Question: username_0: Hey, I was requested to create a layout of a vertical RecyclerView with rows of nested horizontal RecyclerViews. Each nested row has a ContactModel in the first position. This class has several attributes, one of them is "type" which can be "user" or "group". A group has a different layout than a user. The rest of the list are ActivityModel items. I've created the models and controllers for each item, but I'm stuck on 2 fronts.
1) How to combine both items into 1 list - I could create a BaseItem and have both ContactModel and ActivityModel inherit from it. Would the controller know that a current item is an instance of one of the classes?
2) If 1) works, how will I be able to change the layout of ContactModel depending on its type?
Would appreciate any assistance here. Thanks, I've attached class files [Archive.zip](https://github.com/carrot/recycler-core/files/484709/Archive.zip)
Answers: username_1: @username_0 There are a couple of things that need to be fixed. This does not look like a problem in Recycler Core. I will change your code and post working code as soon as I have some free time.
1. The models and controllers are mapped using `@InjectController(controller = ContactController.class, layout = R.layout.row_nested_floatfeed_user)`, so the models are free to inherit from any class. `Model <-> Controller` is a 1-1 mapping. I see that you have two models that use the same controller; we should not do that. Every model should have its own controller. Since we use annotations to bind the models with controllers, we don't need to know the type of the model from the Adapter/RecyclerView perspective.
2. You cannot and should not change the layout of one model, or ContactModel. If your view is different, you should have a different model and a controller.
As a workaround, you can create another model that extends the same model, so it will retain all the same methods and fields, and you can create the respective controller and layout for it.
Status: Issue closed
username_1: @username_0 Fixed your code. This should get you started. [Archive.zip](https://github.com/carrot/recycler-core/files/486065/Archive.zip) So closing this issue. If this does not help you, feel free to open an issue again.
wandenberg/puppet-module-nexus3_rest
388206325
Title: Release request
Question: username_0: Hi, Any chance of a tagged release please?
Answers: username_1: Hi, just waiting for a user to validate the `nexus3_privilege` type. If you can help validate, that would be great: https://github.com/username_1/puppet-module-nexus3_rest/tree/privileges
username_1: After a long time ... just released version 0.3.0
Status: Issue closed
tensorflow/models
479361639
Title: Traceback (most recent call last): Question: username_0: Please go to Stack Overflow for help and support: http://stackoverflow.com/questions/tagged/tensorflow Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy: 1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead). 2. The form below must be filled out. **Here's why we have that policy**: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow. ------------------------ ### System information - **What is the top-level directory of the model you are using**: - **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: - **TensorFlow installed from (source or binary)**: - **TensorFlow version (use command below)**: - **Bazel version (if compiling from source)**: - **CUDA/cuDNN version**: - **GPU model and memory**: - **Exact command to reproduce**: You can collect some of this information using our environment capture script: https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh You can obtain the TensorFlow version with `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` ### Describe the problem Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request. ### Source code / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. 
Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. Answers: username_1: Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks. What is the top-level directory of the model you are using Have I written custom code OS Platform and Distribution TensorFlow installed from TensorFlow version Bazel version CUDA/cuDNN version GPU model and memory Exact command to reproduce Status: Issue closed
Dojo-Bahia/dojos-em-ssa
233052904
Title: Create a logo for Dojo-Bahia
Question: username_0: We need a simple logo for Dojo-Bahia. One idea could be the flag of Bahia, with the Japanese symbol used to publicize the dojo@raul_hc held in May 2017
Answers: username_0: @username_1 would you like to take part?
username_1: I'm in! I'll take the night to think about it
username_0: @username_1 please upload the images to https://github.com/Dojo-Bahia/dojo-bahia.github.io/ in a new directory called `images/`
Status: Issue closed
username_0: closed by https://github.com/Dojo-Bahia/dojo-bahia.github.io/pull/1
xws-bench/battles
133720286
Title: Human:150 Human:50 Question: username_0: Mandalorian_Mercenary*Lone_Wolf*Tactician*Glitterstim*Slave_I.Spice_Runner*Twin_Laser_Turret.Syndicate_Thug*Twin_Laser_Turret*Unhinged_Astromech.Binayre_Pirate.VSOmicron_Group_Pilot.Soontir_Fel*Push_the_Limit*Autothrusters*Royal_Guard_TIE*Stealth_Device.Darth_Vader*Veteran_Instincts*Engine_Upgrade*TIE/x1*Advanced_Targeting_Computer.
http://bit.ly/1U3cymM
shayhatsor/zookeeper
192337753
Title: ZK client floods the memory with Task objects on connection loss Question: username_0: When I add a FW rule to disable a ZK connection, then memory usage grows to 1GB in a few seconds. When I ran a profiler I saw that memory is flooded with Task objects, that were created in AsyncManualResetEvent class. This happens in production when CPU usage in the VM is high as well. First a ZooKeeper ConnectionLoss exception is logged, and soon the process memory starts to grow very fast. It seems like the ThreadPool doesn't give a chance for the connection ping Tasks to fire because the CPU usage is high. Then a ConnectionLoss exception is logged. And then the Client starts to create the Tasks objects without any Delay. I tested this behaviour with 3.4.8.3 and 3.4.9.1. Here's the NUnit test fixture: [TestFixture] public class ZookeeperMemoryLeakTest_Manual { // Zookeeper must run on Linux machine, this test adds firewall rule private static readonly string zkSrvIp = "192.168.60.10"; private static readonly string zkSrvUser = "root"; private static readonly string zkSrvPassw = "<PASSWORD>"; private static readonly string zkPort = "2181"; private static readonly string fwRule = $"INPUT -p tcp --dport {zkPort} -j DROP;"; private static readonly string zkConnStr = $"{zkSrvIp}:{zkPort}"; [TearDown] public virtual void TestFixtureTearDown() { Console.WriteLine("Removing Firewall rule"); SshUtils.ExecuteSshCommand(zkSrvIp, zkSrvUser, zkSrvPassw, $"sudo iptables -D {fwRule}"); } [Test] [Explicit] [Description("See Ram and Processor usage. 
Long running.")] public void MemoryLeakTest() { var zookeeper = new ZooKeeper(zkConnStr, 15000, new ClientWatch()); while (zookeeper.getState() != ZooKeeper.States.CONNECTED) { Thread.Sleep(10); } // Add FW rule SshUtils.ExecuteSshCommand(zkSrvIp, zkSrvUser, zkSrvPassw, $"sudo iptables -I {fwRule}"); var totalBytesOfMemoryUsedBefore = Process.GetCurrentProcess().WorkingSet64; Console.WriteLine("Waiting.."); Thread.Sleep(20000); Console.WriteLine("Done Waiting.."); long totalBytesOfMemoryUsedAfter = Process.GetCurrentProcess().WorkingSet64; var memoryUsageDiffMB = (totalBytesOfMemoryUsedAfter - totalBytesOfMemoryUsedBefore) / 1024 / 1024; Console.WriteLine($"memoryUsageDiffMB: {memoryUsageDiffMB} MB"); Assert.Less(memoryUsageDiffMB, 100); } #region Watcher for zookeeper private class ClientWatch : Watcher { public override Task process(WatchedEvent we) { var eventState = we.getState(); Console.WriteLine($"State from watcher: {eventState}"); return Task.FromResult<object>(null); } } #endregion } Answers: username_1: Thanks for reporting the bug. I usually don't use Linux, would you mind running the test with debug log level and post it here? I think I understand where the problem comes from but would like to make sure. username_0: Thank you, Shay, for a quick response. I got the same situation with the simplified test: [Test] public void MemoryLeakTest() { ZooKeeper.LogLevel = TraceLevel.Verbose; //var zookeeper = new ZooKeeper("192.168.3.11:2181", 15000, new ClientWatch()); var zookeeper = new ZooKeeper("127.0.0.1:2181", 15000, new ClientWatch()); var timeOut = TimeSpan.FromSeconds(30); var startTime = DateTime.Now; while (zookeeper.getState() != ZooKeeper.States.CONNECTED) { Thread.Sleep(10); if (DateTime.Now > startTime.Add(timeOut)) { Assert.Fail("Test timeout."); } } } I ran this test on windows without a ZK server at all. 1) When I use a connStr "192.168.3.11:2181" the memory starts to grow after 15s timeout. 
See the ZK log attached as well as an image with memory and CPU consumption graphs.
[111.111.111.111_ZK.2016-11-30-10.25.07.926Z.txt](https://github.com/username_1/zookeeper/files/621576/111.111.111.111_ZK.2016-11-30-10.25.07.926Z.txt)
![111 111 111 111_zkclientissue](https://cloud.githubusercontent.com/assets/24253081/20749626/c014e2b2-b6fb-11e6-8f02-5d605630adb2.png)
2) When I use a connStr "127.0.0.1:2181" (there's no ZK server on my localhost) the situation is a bit different, but the memory and CPU hops are still visible. ZK log attached as well as an image with memory and CPU consumption graphs.
[127.0.0.1_ZK.2016-11-30-10.32.53.686Z.txt](https://github.com/username_1/zookeeper/files/621577/12172.16.17.32_ZK.2016-11-30-10.32.53.686Z.txt)
![127 0 0 1_zkclientissue](https://cloud.githubusercontent.com/assets/24253081/20749648/d05fe9c8-b6fb-11e6-9008-50a3c2749b67.png)
username_1: @username_0, thanks a lot for the complete analysis of the issue! I love the screen grabs :+1: Now that I have an easy repro on Windows, it'll be much easier to identify the bug.
username_1: @username_0, this doesn't reproduce on my machine. But looking closely at your screen grabs I do see something weird happening. The yellow marks represent GC activity, which in your case correlates with the memory/CPU consumption. I suspect that you're using a VM with only one core. If that's the case, check your GC settings and try to add another core. Tell me if that helps or at least changes the results.
username_2: I also observed some excessive CPU consumption when testing ZooKeeper server failover. When the ZK client cannot connect to any server, it seems to be stuck in some tight reconnection loop. My machine has 1 socket, 4 cores totaling 8 logical processors. However I did not observe an increase in memory allocation.
username_0: Thanks, @username_2, for confirming the issue.
https://ark.intel.com/products/88970/Intel-Core-i7-6820HQ-Processor-8M-Cache-up-to-3_60-GHz. I believe it's a combination of CPU power/no-of-threads and GC that triggers the memory growth. My OS: ![image](https://cloud.githubusercontent.com/assets/24253081/20828221/1c9a630c-b87f-11e6-8a2d-c845f612f396.png) .NET runtimes: ![image](https://cloud.githubusercontent.com/assets/24253081/20829367/6e89520e-b884-11e6-928b-52de99b9377a.png) There are differences in GC in different .NET versions: http://stackoverflow.com/questions/5643147/determining-which-garbage-collector-is-running/8416915#8416915 I updated the test to print GCSetting and .NET version: [Test] public void MemoryLeakTest() { Get45or451FromRegistry(); Console.WriteLine($"IsServerGC: {GCSettings.IsServerGC}, LatencyMode: {GCSettings.LatencyMode}."); ZooKeeper.LogLevel = TraceLevel.Verbose; var zookeeper = new ZooKeeper("192.168.3.11:2181", 15000, new ClientWatch()); //var zookeeper = new ZooKeeper("127.0.0.1:2181", 15000, new ClientWatch()); var timeOut = TimeSpan.FromSeconds(30); var startTime = DateTime.Now; while (zookeeper.getState() != ZooKeeper.States.CONNECTED) { Thread.Sleep(10); if (DateTime.Now > startTime.Add(timeOut)) { Assert.Fail("Test timeout."); } } } private static void Get45or451FromRegistry() { using (var ndpKey = Microsoft.Win32.RegistryKey.OpenBaseKey(Microsoft.Win32.RegistryHive.LocalMachine, Microsoft.Win32.RegistryView.Registry32).OpenSubKey("SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\\")) { int releaseKey = Convert.ToInt32(ndpKey.GetValue("Release")); if (true) { Console.WriteLine("Version: " + CheckFor45DotVersion(releaseKey)); } } } // Checking the version using >= will enable forward compatibility, // however you should always compile your code on newer versions of // the framework to ensure your app works the same. 
private static string CheckFor45DotVersion(int releaseKey) { if (releaseKey >= 393273) { return "4.6 RC or later"; } if ((releaseKey >= 379893)) { return "4.5.2 or later"; } if ((releaseKey >= 378675)) { return "4.5.1 or later"; } if ((releaseKey >= 378389)) { return "4.5 or later"; } // This line should never execute. A non-null release key should mean // that 4.5 or later is installed. return "No 4.5 or later version detected"; } Here's what it prints on my machine: Version: 4.6 RC or later IsServerGC: False, LatencyMode: Interactive. Some screenshots from MemoryProfiler: ![image](https://cloud.githubusercontent.com/assets/24253081/20829026/f9642388-b882-11e6-92d5-2b73f7a85ffb.png) ![image](https://cloud.githubusercontent.com/assets/24253081/20829074/1ffdf73a-b883-11e6-92b7-fadfb6477002.png) username_2: Just wanted to note that our app is configured to use server GC (`<gcServer enabled="true" />`). username_0: I enabled gcServer in ReSharper and got a bit different situation, the CPU is still very high, less garbage collections and memory is growing faster: ![image](https://cloud.githubusercontent.com/assets/24253081/20833961/04becacc-b89b-11e6-87bb-32b89436e328.png) I wander if the Network Response time affects this. Theoretical situation: Lets say there's some uncontrolled loop in ZK client that puts a lot of "Ping ZK Server" tasks and each "Ping ZK Server" task sends a request and registers a WaitHandle with Task Continuation in the ThreadPool to fire a continuation Task when the Network IO responds. If there's a bigger network response time, then more WiatHandles with Task Continuations gets into the ThreadPool stack. If CPU is better then situation gets worse, because it puts Continuations into the ThreadPool stack faster. The bigger the stack, the more work for a ThreadPool to track WaitHandles... username_1: Guys, I have just successfully reproduced the bug. The profilers didn't work correctly with the .net core dlls. 
So I compiled the code in a normal csproj targeting .NET 4.5.2 and ran the profilers. Just as @username_0 reported, the bugs are there. I guess that the GC is just masking the memory leak problem.
username_0: Great news, @username_1. Thank you for your effort.
username_1: I believe I solved the bug. It's a one-liner that was hard to find. I'll roll out a new version soon. Thanks again for providing me with all the information needed for the repro.
username_1: @username_0 and @username_2, did you have time to test the latest release? I'd like to make sure the bug is considered solved on your end before I close the issue, thanks.
username_0: @username_1, this test passed! I started to run more tests with parallel connections and ZK restarts, and I'm seeing similar behavior where memory and CPU increase suddenly. Unfortunately I'm on a tight release schedule now, but once it finishes I'll post more tests that reproduce such behavior. Thanks again for your help!
username_0: Hi, @username_1, attached is a test case and test output that cause the memory growth.
[ZooKeeperNetExTests.zip](https://github.com/username_1/zookeeper/files/740009/ZooKeeperNetExTests.zip)
[testOutput.zip](https://github.com/username_1/zookeeper/files/740011/testOutput.zip)
There are 20 threads with a ZooKeeper client per thread. Each thread creates nodes for 60s. If I restart the ZooKeeper in the middle of the test, then memory grows very fast.
username_1: @username_0, I've been busy with another project and just now got the time to go over the bug. You've provided once again a great repro, thanks for that. I'm working on a fix and will roll it out soon. I'm counting on you to give me the thumbs up after you run your rigorous tests 😉
username_1: @username_0, please test the latest version 3.4.9.3; it's on NuGet, thanks
username_0: @username_1, I tested 3.4.8.3 vs 3.4.9.2 vs 3.4.9.3. The same test as above, but for ~5 min.
In one test run I simply ran 20 threads and each thread creates nodes synchronously. In another test run I ran the same test but restarted the ZK in the middle.
When ZK is not restarted, memory grows similarly in all versions. When I restart the ZK:
- v3.4.8.3 doesn't show significant change - connections resume, minimal memory increase.
- v3.4.9.2 memory grows very fast on restart, but connections resume later and most of the memory is cleaned.
- v3.4.9.3 memory grows very fast on restart; connections time out.

RESULTS:
NOTE: the timescale differs between the diagnostic tools, but all tests ran for ~5min, except the last one, because it throws a timeout exception. I guess the difference comes from the high CPU usage.

v3.4.8.3, 5min, no restart:
```
2017-04-03T16:25:03.1363600+03:00: Created nodes count 258213
2017-04-03T16:25:03.1378647+03:00: connectionLossExceptionCnt = 0
2017-04-03T16:25:03.1378647+03:00: nodeExistsExceptionCnt = 0
2017-04-03T16:25:03.1671898+03:00: Stopping tasks
2017-04-03T16:25:03.1671898+03:00: 20 threads node creations took: 250186ms
```
![image](https://cloud.githubusercontent.com/assets/24253081/24613257/7678857c-1890-11e7-9404-c1b8c08cf584.png)

v3.4.9.2, 5min, no restart:
```
2017-04-03T13:12:00.2656108+03:00: Created nodes count 191722
2017-04-03T13:12:00.2671150+03:00: connectionLossExceptionCnt = 0
2017-04-03T13:12:00.2671150+03:00: nodeExistsExceptionCnt = 0
2017-04-03T13:12:00.2795140+03:00: Stopping tasks
2017-04-03T13:12:00.2795140+03:00: 20 threads node creations took: 250408ms
```
![image](https://cloud.githubusercontent.com/assets/24253081/24613283/84ae6b98-1890-11e7-921b-5fcb75f1e896.png)

v3.4.9.3, 5min, no restart:
```
2017-04-03T13:52:32.2595529+03:00: Created nodes count 196086
2017-04-03T13:52:32.2605553+03:00: connectionLossExceptionCnt = 0
2017-04-03T13:52:32.2605553+03:00: nodeExistsExceptionCnt = 0
2017-04-03T13:52:32.2736087+03:00: Stopping tasks
2017-04-03T13:52:32.2736087+03:00: 20 threads node creations took: 250655ms
```
![image](https://cloud.githubusercontent.com/assets/24253081/24613292/8c750756-1890-11e7-9520-7ea8abca99ce.png)

v3.4.8.3, 5min, zk server restart:
```
2017-04-03T16:39:59.2733432+03:00: Created nodes count 239159
2017-04-03T16:39:59.2762872+03:00: connectionLossExceptionCnt = 20
2017-04-03T16:39:59.2762872+03:00: nodeExistsExceptionCnt = 58
```
![image](https://cloud.githubusercontent.com/assets/24253081/24613302/9457704e-1890-11e7-9819-3f2b528cde3e.png)

v3.4.9.2, 5min, zk server restart:
![image](https://cloud.githubusercontent.com/assets/24253081/24613307/99b3436a-1890-11e7-833c-b34a721ffbfd.png)

v3.4.9.3, 5min, zk server restart:
```
2017-04-03T13:22:29.5448282+03:00: Created nodes count 66484
2017-04-03T13:22:29.5458398+03:00: connectionLossExceptionCnt = 64
2017-04-03T13:22:29.5458398+03:00: nodeExistsExceptionCnt = 0
```
![image](https://cloud.githubusercontent.com/assets/24253081/24613329/a708a410-1890-11e7-9484-b5e6f5be6f30.png)

Thanks again for your help.
username_1: @username_0, thanks again for your continued help. I'm currently trying to, once again, figure out how to solve this issue. The most interesting discovery you've made is that version 3.4.8.3 doesn't show this behavior. I might be able to use it. I just have one minor request, if you have time: I'm trying to build the simplest test possible to reproduce these errors. For example, as you've noted, it can be reproduced without multiple threads.
username_1: @username_0, I've just run your test again and I think that there's a problem with the test itself. Correct me if I'm wrong: each thread of the test runs a tight loop that creates new ZK nodes. The growth in memory and CPU in version 3.4.9.3 seems logical, since the tight loop fails with connection loss (which is the expected behavior in this case). It'd be great to have your input on this, maybe I'm missing something.
Status: Issue closed
username_1: @username_0, I'm currently closing this issue as I consider it solved.
If you happen to have time to review my previous comments and think it should still be open, please do so.
username_0: Hey, @username_1, sorry for the delay. I agree with your comment. My last test case isn't valid. The other tests passed. Thanks a lot for your help.
MicrosoftDocs/azure-docs
917552385
Title: Is there a direct way to read to pandas dataframe as read_csv for blob
Question:
username_0: Is there a direct way to read into a pandas DataFrame, like read_csv, for blob storage? In AWS S3 this can be done directly, and to_csv and read_csv run perfectly fine.

---
#### Document Details

⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: 2f45a6b5-0fea-7fbb-5d4d-37e0e2583fd7
* Version Independent ID: 7be6f792-09f8-c22f-86b6-d0f690e9b3a4
* Content: [Explore data in Azure Blob Storage with pandas - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob)
* Content Source: [articles/machine-learning/team-data-science-process/explore-data-blob.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/team-data-science-process/explore-data-blob.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @username_2
* Microsoft Alias: **tdsp**

Answers:
username_1: Thanks for the feedback! We are currently investigating and will update you shortly.
username_1: @username_2 checking here to see if there is any planned optimized solution besides the document. Thanks.
username_2: @username_1 and @username_0 This page is a simple example -- see https://docs.microsoft.com/en-us/python/api/overview/azure/storage-blob-readme?view=azure-python for more options. The options are entirely driven by what is available in the Azure API.
username_1: Thanks for the information. @username_0 We will now proceed to close this thread. If there are further questions regarding this matter, please respond here, tag @username_1, and we will gladly continue the discussion.
Status: Issue closed
NVIDIA/DALI
1023917276
Title: How to select a single parameter for the batch of images?
Question:
username_0: Hello! Thanks for providing such an excellent library. While experimenting with it I miss one feature - the ability to select the final image shape for the whole batch. In PyTorch there is a `collate_fn` which can handle such use cases. For example, the first batch of 32 images is resized to shape [256, 256, 3] while the next batch is resized to [128, 128, 3]. Is it possible to do so?
Answers:
username_1: Hi @username_2 , I think maybe the following simple code will describe the problem:
```python
import numpy as np
from nvidia.dali.pipeline import Pipeline
from nvidia.dali import fn

class A(Pipeline):
    def __init__(self):
        super(A, self).__init__(batch_size=2, num_threads=1, device_id=0)

    def define_graph(self):
        imgs, labels = fn.readers.file(file_root='/dataset/dog/')
        imgs = fn.decoders.image(imgs)
        size = fn.random.uniform(values=[200,300,400,500,600,700,800])
        imgs = fn.resize(imgs, size=size)
        return imgs, size

a = A()
a.build()
o = a.run()
print('img', o[0].at(0).shape, o[0].at(1).shape)
print('size', np.array(o[1].as_tensor()))
```
The output is:
```
img (800, 800, 3) (600, 600, 3)
size [800. 600.]
```
The pipeline batch size is 2, but you can see the output image sizes in the same batch are different. How could I make the resize operator use the same size for all images in the same batch, and use another random size for the next batch?
(I also tried feeding a random size using ExternalSource to the resize operator, so that every 2 consecutive numbers are the same, e.g. [200,200,300,300,...], but it complained that a DataNode cannot be used for the size parameter of the resize operator...)
Thanks!
username_2: Hi @username_1, If you want to use external source you can try something like this: ``` import os import numpy as np import nvidia.dali.fn as fn import nvidia.dali.types as types from nvidia.dali import pipeline_def batch_size = 4 def get_data(): size = (np.random.ranf(size=[2]).astype(dtype=np.float32)*60 + 30) out = [size for _ in range(batch_size)] return out @pipeline_def def simple_pipeline(): jpegs, _ = fn.readers.file(files=["DALI_extra/db/single/jpeg/100/swan-3584559_640.jpg"]) images = fn.decoders.image(jpegs) size = fn.external_source(source=get_data) images = fn.resize(images, size=size) return images pipe = simple_pipeline(batch_size=batch_size, num_threads=4, prefetch_queue_depth=2, device_id=0) pipe.build() pipe.run() out = pipe.run()[0] print(np.array(out[0]).shape) print(np.array(out.as_tensor()).shape) out = pipe.run()[0] print(np.array(out[0]).shape) print(np.array(out.as_tensor()).shape) ``` or you can still use random generator and [the permute batch operator](https://docs.nvidia.com/deeplearning/dali/user-guide/docs/supported_ops.html#nvidia.dali.fn.permute_batch) to duplicate only one value for all samples in the batch: ``` import os import numpy as np import nvidia.dali.fn as fn import nvidia.dali.types as types from nvidia.dali import pipeline_def batch_size = 4 @pipeline_def def simple_pipeline(): jpegs, _ = fn.readers.file(files=["DALI_extra/db/single/jpeg/100/swan-3584559_640.jpg"]) images = fn.decoders.image(jpegs) size = fn.random.uniform(values=[200,300,400,500,600,700,800]) size = fn.permute_batch(size, indices=[0]*batch_size) images = fn.resize(images, size=size) return images pipe = simple_pipeline(batch_size=batch_size, num_threads=4, prefetch_queue_depth=2, device_id=0) pipe.build() pipe.run() out = pipe.run()[0] print(np.array(out[0]).shape) print(np.array(out.as_tensor()).shape) out = pipe.run()[0] print(np.array(out[0]).shape) print(np.array(out.as_tensor()).shape) out = pipe.run()[0] print(np.array(out[0]).shape) 
``` username_1: @username_2 Thanks, your solution works!
phpstan/phpstan
473533006
Title: SimpleXMLElement->asXML() isn't aware of parameter rules Question: username_0: It looks more like `SimpleXMLElement::asXML() => string | false`, `SimpleXMLElement::asXML(string) => bool` Answers: username_1: Hi, this needs a simple dynamic return type extension. Check out for example the ones for `microtime()` function or `var_export()`, they are very close: * https://github.com/phpstan/phpstan/blob/master/src/Type/Php/MicrotimeFunctionReturnTypeExtension.php *ย https://github.com/phpstan/phpstan/blob/master/src/Type/Php/VarExportFunctionDynamicReturnTypeExtension.php You need to implement `DynamicMethodReturnTypeExtension` for this use case. Please submit a PR, thank you. Status: Issue closed
Jinjiang/vue-a11y-utils
417263288
Title: [request] Update to latest Vue version
Question:
username_0: Are there plans to update this to Vue 2.6?
Answers:
username_1: Theoretically, it is compatible with Vue 2.6. I will make a quick check soon. Thanks.
username_1: https://jinjiang.github.io/vue-a11y-examples
All examples work in Vue 2.6. Seems OK. Feel free to recreate the issue if any compatibility problem is found. Thanks.
Status: Issue closed
geezorg/Xliterator
496372930
Title: Add a Syntax Highlighting Editor
Question:
username_0: Implement a popup dialog that displays the list of CSS classes alongside color picker buttons that would be used to select and set colors. The selections would be saved in a `user-icu-highlighting.css`, saved in the resource folder alongside the `icu-highlighting.css` file. It will be used when available and would be reloaded by the dialog for updating.
Answers:
username_0: Add a special "background" element to set the editor background color. Save it as a preference, since the background does not appear to be editable via CSS.
username_0: "Reset" and "Load Default" to be implemented
Status: Issue closed
HIIT/hybra-core
224129076
Title: Add domain extraction method with urlparse Question: username_0: <a href="https://github.com/username_0"><img src="https://avatars1.githubusercontent.com/u/16080355?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [username_0](https://github.com/username_0)** _Fri Feb 24 09:45:34 2017_ _Originally opened as https://github.com/HIIT/hybra-core-2/issues/20_ ----<issue_closed> Status: Issue closed
CM2Walki/CSGO
1117557745
Title: Duplicate server listing in server browser LAN-tab
Question:
username_0: Hi, I've used the latest Docker image and everything works fine, except if I try to join the game from the CS:GO client. I select the server browser and then the LAN tab. The server shows up 2 times with exactly the same values. One of the servers is not responding; the other is responding and I'm able to join the game. Is there a way to remove the duplicate entry? Thanks.
Answers:
username_1: I currently have a similar problem, but the ports are different: 27016 and 27020 (I've set my server port to 27016)
jasonrohrer/OneLife
770455490
Title: Connection Lost vs Old Age Question: username_0: I've noticed while watching streamers, that instead of the YOU DIED AGE: 60 YEARS CAUSE: OLD AGE you get the "Connection Lost" page. Then they quickly click trying to connect again. But they still died of old age in actuality. This has sporadically been happening the last few weeks/months. (IDK what the cause is.) For me and I suspect many longtime players, this is not a problem/issue. We just accept that it happens sometimes. It's not a big deal. I assume this is only an issue for newer players or people who think it is satisfying to see the you died of old age page. Just posting to report it. Feel free to ignore if hard to fix/find the root of the problem. Answers: username_1: Hmm... yeah, I've seen this myself sometimes. I'll look into it. The server should send the YOU DIED message, and then keep the connection open for a while. That's how it's supposed to work, anyway. username_1: I'm going to double the time that the server keeps the connection open after telling them that they died (from 5 to 10 seconds) to give even more time for the message to make it through before the connection is closed. Please keep an eye out for this issue this week. If it's still happening, there must be another bug. Status: Issue closed username_2: Just saw this happen on a stream. https://www.twitch.tv/videos/847915585?t=3h9m37s
HTTPArchive/almanac.httparchive.org
665582096
Title: Figure 7 in 2019 Media Chapter image is not a Google Sheets image
Question:
username_0: https://almanac.httparchive.org/en/2019/media#fig-7 is just a PNG and without the `data-` attributes necessary to convert it to an interactive iFrame. Looks like the nicely formatted image in the PNG is no longer there in the [Google Sheets version](https://docs.google.com/spreadsheets/d/1hj9bY6JJZfV9yrXHsoCRYuG8t8bR-CHuuD98zXV7BBQ/edit#gid=1419236152). @username_1 / @username_2 any ideas how to format that the same as the PNG? If so, and you can give me the Sheets published URL, then I can update the markup.
Answers:
username_1: IIRC @username_2 generated this chart outside of Sheets, like [Figure 10](https://almanac.httparchive.org/en/2019/media#fig-10), so I'm not sure that we'd be able to generate the interactive iframe.
username_2: Yes, I used ggplot to generate the tree. Google Sheets doesn't do tree diagrams very well (at all).
username_0: Fair enough, closing then. Though I honestly can't see much difference (other than the colour) myself! 😀
Status: Issue closed
username_2: I haven't looked at the trees from Google Sheets lately. Maybe it has been rev'd? I know I struggled with the labels and formatting and just gave up and used a tool that was better for formatting ;)
MichaelRFairhurst/wUnit
93589634
Title: Support BDD style tests
Question:
username_0: Support BDD style testing. Should probably follow this API or something close to it/better:

```
@Test
every MainTest (a BddStyleTest):

	needs Asserts then {
		describes("some part of Main", { ->
			// nesting describes should be allowed
			it("should work", { ->
				// the test code
			});
		});
	}
```

We'll need to refactor Asserts to throw exceptions to fulfill this API
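For reference, the nested `describe`/`it` registration pattern this API mirrors (mocha/jasmine style) can be sketched in a few lines of JavaScript — names and details here are illustrative, not wUnit's actual implementation:

```javascript
// Minimal BDD-style registry: describe() pushes a suite name onto the
// current path, runs its body synchronously so nested describes register
// in order, then pops; it() records a test under the current path.
const tests = [];
const path = [];

function describe(name, body) {
  path.push(name);
  body();
  path.pop();
}

function it(name, fn) {
  tests.push({ name: [...path, name].join(' > '), fn });
}

describe('some part of Main', () => {
  describe('nested', () => {
    it('should work', () => { /* the test code */ });
  });
});

// tests[0].name === 'some part of Main > nested > should work'
```

A runner would then iterate `tests`, invoke each `fn`, and report which fully-qualified names threw — which is why the proposal notes Asserts must throw exceptions.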
yiisoft/yii2
188923674
Title: testBindParamValue test is failing with PostgreSQL and HHVM 3.15.2 Question: username_0: Extracted from #12936 and #12971. For example see this [build job](https://travis-ci.org/yiisoft/yii2/builds/175302218). **PostgreSQL version:**:9.3.15 **HHVM version:** 3.15.2 ``` yii\tests\unit\framework\db\pgsql\CommandTest::testBindParamValue Failed asserting that false is true. /home/travis/build/yiisoft/yii2/tests/framework/db/CommandTest.php:199 /home/travis/build/yiisoft/yii2/vendor/phpunit/phpunit/phpunit:47 ``` Related code: ```php $command = $db->createCommand('SELECT [[int_col]], [[char_col]], [[float_col]], [[blob_col]], [[numeric_col]], [[bool_col]] FROM {{type}}'); // $command->prepare(); // $command->pdoStatement->bindColumn('blob_col', $bc, \PDO::PARAM_LOB); $row = $command->queryOne(); $this->assertEquals($intCol, $row['int_col']); $this->assertEquals($charCol, $row['char_col']); $this->assertEquals($floatCol, $row['float_col']); if ($this->driverName === 'mysql' || $this->driverName === 'sqlite' || $this->driverName === 'oci') { $this->assertEquals($blobCol, $row['blob_col']); } else { $this->assertTrue(is_resource($row['blob_col'])); // It fails here $this->assertEquals($blobCol, stream_get_contents($row['blob_col'])); } ``` Answers: username_1: Thank you for the report. Do you have any suggestions? username_0: Not yet. Unfortunately I have not much experience with HHVM, but will try to see what can cause this error. username_2: Likely need to look at your pgsql driver methods for `createCommand()` and `queryOne()` The failure here seems to be indicating the query failed since $row['blob_col'] is not a resource username_1: HHVM support is dropped. #14178 Status: Issue closed username_3: For 2.1 it is. For 2.0 it is not. username_4: I think we should not actively drop support on 2.0 and still handle issues if they get reported, but unless someone reports it actively (not we see it in failing tests), I see no need to create a workaround to make it work. 
username_3: OK. Makes sense.
bioconda/bioconda-recipes
941815474
Title: No SSL connection for download data for checkM.
Question:
username_0: @dparks1134 Multiple users had problems installing the bioconda recipe *checkm-genome*: https://github.com/metagenome-atlas/atlas/issues/410
`Unable to establish SSL connection.`
The problem seems to be an error creating an SSL connection in the post-link.sh. The script uses `wget --no-check-certificate -O $TARBALL $URL` to download the data.
Now, is the problem with the server hosting the data? In this case, should we maybe find a better place to host it, or handle the download on the client side?
Help very much appreciated.
Status: Issue closed
rollup/rollup
192713995
Title: Treeshaking is defeated by Object.defineProperty Question: username_0: main.js: ``` import './core.js'; ``` core.js: ``` export var ViewContainer = (function () { function ViewContainer() { } Object.defineProperty(ViewContainer.prototype); return ViewContainer; }()); ``` One would expect that the resulting file would be empty as `ViewContainer` is not used anywhere. Turns out that if `Object.defineProperty(ViewContainer.prototype)` is removed then the resulting code file is empty. This means that the call to `Object.defineProperty` defeats the tree shaker. Answers: username_1: You have to consider side effects from any use of Object.defineProperty() that could happen with pathological code. I guess Rollup would have to treat ``` Object.defineProperty(o, 'prop', { value: some_object_reference, ... }); ``` as ``` o.prop = some_object_reference; ``` And it could only make that inference if the prop name argument was a literal, among other assumptions. Things can get really complicated quickly in simulating the program flow. username_0: This use case is extremely common. Consider [this example](https://www.typescriptlang.org/play/#src=class%20ViewContainer%7B%0D%0A%20%20get%20elementRef()%20%7B%20return%20null%3B%20%7D%0D%0A%7D) ``` class ViewContainer{ get elementRef() { return null; } } ``` produces ``` var ViewContainer = (function () { function ViewContainer() { } Object.defineProperty(ViewContainer.prototype, "elementRef", { get: function () { return null; }, enumerable: true, configurable: true }); return ViewContainer; }()); ``` This is very common, and it causes retention of a lot of code in a project. Could Rollup be smart and realize that the `Object.defineProperty` got called on local variable, hence there will be no side effects? username_1: Whether it's common or not doesn't make it simple to solve. :-) I was just trying to demonstrate some potential pitfalls to consider. 
What if Object.defineProperty() is called via indirection as is common for Babel generated code? ``` $ echo 'class Foo { bar(x){return x * x;} }' | babel "use strict"; var _createClass = (function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; })(); function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } } var Foo = (function () { function Foo() { _classCallCheck(this, Foo); } _createClass(Foo, [{ key: "bar", value: function bar(x) { return x * x; } }]); return Foo; })(); ``` username_1: Related issue: #349 username_0: Hi @username_1 I have put together my thoughts on how this can be achieved in [this design doc](https://docs.google.com/document/d/16eAFx2ZzZcU9VthcbrpxAELe-vwXEsQowxsaZlSx4WA). Please have a look and give me back some comments. I am willing to do the work, if I can have some support from you. username_1: @username_0 If you take a step back and look at a similar problem: ``` var Person = (function(){ var Person = function(){}; Person.prototype.run = function run() { console.log("run"); } Person.prototype.jump = function jump() { console.log("jump"); } return Person; }()); var p = new Person(); p.jump(); ``` and run this code through rollup version 0.36.4 you'll see that it does not drop the unused function `Person.prototype.run`. 
Although the simple program above can be statically analyzed, in a more complex program any object property can be accessed in a dynamic fashion that cannot always be statically deduced: ``` var Person = (function(){ var Person = function(){}; Person.prototype.run = function run() { console.log("run"); } Person.prototype.jump = function jump() { console.log("jump"); } return Person; }()); var p = new Person(); p.jump(); p[ Math.random() > 0.5 ? "run" : "jump" ](); ``` which is why rollup [errs on the side of caution](https://github.com/rollup/rollup/issues/349#issuecomment-162573654) and retains all methods. Of course one could make some assumptions that methods will not be called in such a dynamic fashion, but I personally wouldn't want it to be the default behavior. It is not uncommon in javascript to add or monkey patch functions at runtime. Perhaps specific classes or IIFEs could be annotated with a comment directive to assume it will be statically accessed. username_0: I understand what you are saying, but I am not proposing that we drop methods from the class. I am proposing that we drop a whole class, when we know that no one is referring to the class. This operation is safe even in face of reflection. username_1: Ah, okay, we weren't on the same page. Rollup already can drop unreferenced IIFEs - aside from some problems with parens: #1101 I assumed that it could also drop unused ES6 classes. Let's check: $ cat classes.js ``` class Base { base() { console.log("base"); } } class Derived extends Base { derived() { console.log("derived"); } } new Base().base(); ``` $ node bin/rollup classes.js ``` class Base { base() { console.log("base"); } } new Base().base(); ``` It appears to drop unused ES6 classes. Rather than teaching rollup about the idiosyncrasies ES5 code generated from Typescript including `__extends`, I would recommend unused classes be dropped at the higher level of Typescript. username_1: That's a non-trivial task. 
Barring transforming the code at the higher Typescript level, perhaps you could create some sort of comment directive or runtime flag to tell rollup which functions mutate which arguments. username_1: @username_0 Any reason why you're not using the Typescript compiler to target ES6 and then running Rollup on that? $ cat classes.ts ``` class Animal { constructor(public name: string) { } move(distanceInMeters: number = 0) { console.log(`${this.name} moved ${distanceInMeters}m.`); } } class Snake extends Animal { constructor(name: string) { super(name); } move(distanceInMeters = 5) { console.log("Slithering..."); super.move(distanceInMeters); } } class Horse extends Animal { constructor(name: string) { super(name); } move(distanceInMeters = 45) { console.log("Galloping..."); super.move(distanceInMeters); } } let sam = new Snake("Sammy the Python"); sam.move(); ``` $ tsc --version ``` Version 2.0.10 ``` $ tsc classes.ts --target ES6 --outFile classes.js $ cat classes.js ``` class Animal { constructor(name) { this.name = name; } move(distanceInMeters = 0) { console.log(`${this.name} moved ${distanceInMeters}m.`); } } class Snake extends Animal { constructor(name) { super(name); } move(distanceInMeters = 5) { console.log("Slithering..."); super.move(distanceInMeters); } } class Horse extends Animal { constructor(name) { super(name); } move(distanceInMeters = 45) { [Truncated] constructor(name) { this.name = name; } move(distanceInMeters = 0) { console.log(`${this.name} moved ${distanceInMeters}m.`); } } class Snake extends Animal { constructor(name) { super(name); } move(distanceInMeters = 5) { console.log("Slithering..."); super.move(distanceInMeters); } } let sam = new Snake("Sammy the Python"); sam.move(); ``` Notice that the unused class `Horse` has been dropped. username_0: That is an excellent point! @robwormald do you have an opinion? So Angular is distributed on NPM in two forms. 
1) As a umd bundle (which must be in ES5 so that browsers can execute it directly) and as 2) ES5 code with modules syntax. This form is needed for two reasons. 1) so that unit tests can run in browser with something like System.js (ie must be ES5) 2) and so that Angular AoT compiler can do deep imports. I see how shipping the code in both ES5 and ES2015 format would be beneficial for tree shaking for Rollup but it would complicate user deployment. Also it would complicate how third part components need to be packaged in NPM. username_2: Sigh. Yeah. This is so that our npm packages are compatible with webpack. According to @robwormald, Webpack 2 currently can't downlevel code from 3rd party npm packages from es6 to es5 - @seanlarkin can you please confirm? username_3: webpack only down levels import statements. For other syntactical features you need babel to transpile. In terms of tree shaking there are issues with UglifyJs not removing side effects from transpiled classes which may be what is being mentioned above. username_3: However if rollup is already being run against the es6 modules that should help with some of the opt before webpack bundles it. username_1: Similar discussion took place a few month back: https://github.com/mishoo/UglifyJS2/issues/1261 Same recommendations as I made above. Either perform these optimizations at a higher level in ES6 with rollup or equivalent or come up with some sort of annotation to mark the transpiled ES5 IIFE as being pure. username_4: I'm curious: What is the current status on this? username_5: @username_0 I think you "own" this ticket. Reading down the conversation it seems like this is somewhat taken care of by Rollup, at least as far as it seems it can be taken for the time being? username_0: While the issue still holds, Angular is no longer blocked on this as we have changed our pipeline to not run into this. So while it would be nice to fix, it is not needed by Angular. 
(Close at will) username_6: I am using three.js for visualizing agent based models .. [here is an example](http://username_6.github.io/as-app3d/models/?water). I used Three because of its popularity, and I assumed a rollup of the modeling software (57K minified) with three would be quite small, I use only their buffer geometries which are designed to be minimalistic. Amazingly, using rollup with a one-liner drags in ALL of three.js. Bummer. I'm wondering what I would have to do to use just the parts of three.js that I use? Would using the src/*.js files be better than using `import * as THREE from 'three.module.js'`? Or is this a deep problem that requires modifying all of three, a project like @Rich-Harris's initial modularization of three.js? username_1: [That PR](https://github.com/mrdoob/three.js/pull/9310) got them to use ES6 import/export but the modules are still comprised of [ES5 code](https://github.com/mrdoob/three.js/blob/dev/src/lights/SpotLight.js) whose side effects are difficult to determine for dead code elimination. username_7: Is there any reason to ever use `Object.defineProperty()`? I have never personally encountered a use case. If the code can work just fine without it, then perhaps it's better to recommend that such code be rewritten? username_8: How else would you define a getter and setter in ES5? username_1: http://2ality.com/2015/08/object-literals-es5.html#ecmascript_5_has_getters_and_setters username_9: You need `Object.defineProperty()` when creating getters and setters on function-style classes. Typescript also makes use of this. Consider this class in ES6: ```js class Example { get prop() { return true; } } ``` It will transpile to ```js function Example() { } Object.defineProperty(Example.prototype, "prop", { get: function () { return true; }, enumerable: true, configurable: true }); ``` username_10: Hey folks. This is a saved-form message, but rest assured we mean every word. 
The Rollup team is attempting to clean up the Issues backlog in the hopes that the active and still-needed, still-relevant issues bubble up to the surface. With that, we're closing issues that have been open for an eon or two, and have gone stale like pirate hard-tack without activity. We really appreciate the folks have taken the time to open and comment on this issue. Please don't confuse this closure with us not caring or dismissing your issue, feature request, discussion, or report. The issue will still be here, just in a closed state. If the issue pertains to a bug, please re-test for the bug on the latest version of Rollup and if present, please tag @username_10 and request a re-open, and we'll be happy to oblige. Status: Issue closed
JiangWeixian/cheatsheets
752911248
Title: before each class method
Question:
username_0:
```ts
function AttachToAllClassDecorator<T>(someParam: string) {
  return function(target: new (...params: any[]) => T) {
    for (const key of Object.getOwnPropertyNames(target.prototype)) {
      // maybe blacklist constructor here
      let descriptor = Object.getOwnPropertyDescriptor(target.prototype, key);

      if (descriptor) {
        descriptor = someDecorator(someParam)(key, descriptor);
        Object.defineProperty(target.prototype, key, descriptor);
      }
    }
  };
}

function ManualAttachToAllClassDecorator(
  target: new (...params: any[]) => any
) {
  for (const key of Object.getOwnPropertyNames(target.prototype)) {
    // maybe blacklist constructor here
    let descriptor = Object.getOwnPropertyDescriptor(target.prototype, key);

    if (descriptor) {
      descriptor = someDecorator("someParam")(key, descriptor);
      Object.defineProperty(target.prototype, key, descriptor);
    }
  }
}

function someDecorator(
  someParam: string
): (methodName: string, descriptor: PropertyDescriptor) => PropertyDescriptor {
  return (
    methodName: string,
    descriptor: PropertyDescriptor
  ): PropertyDescriptor => {
    let method = descriptor.value;

    descriptor.value = function(...args: any[]) {
      // console.warn(`Here for descriptor ${methodName} with param ${someParam}`);
      // console.log(args)
      return method.apply(this, args);
    };

    return descriptor;
  };
}
```

**Method 1**

```ts
@AttachToAllClassDecorator<Test>("something")
class Test {
  constructor() {
    this.test();
    this.test2();
  }

  public test() {}

  public test2() {}

  public test3 = false;
}
```

**Method 2**

```ts
class Walk {
  visitor: any;
  constructor() {
    ManualAttachToAllClassDecorator(Visitor);
    this.visitor = new Visitor();
  }
}
```

`Visitor` is another class
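The decorator above boils down to "enumerate the prototype's members, wrap each method descriptor, redefine it". A plain-JavaScript sketch of the same idea — helper and class names here are illustrative, not taken from the cheatsheet:

```javascript
// Wrap every prototype method of a class so that `before(name, args)`
// runs ahead of each call, preserving the original return value.
function wrapAllMethods(Ctor, before) {
  for (const key of Object.getOwnPropertyNames(Ctor.prototype)) {
    if (key === 'constructor') continue; // per the "maybe blacklist constructor" note
    const desc = Object.getOwnPropertyDescriptor(Ctor.prototype, key);
    if (!desc || typeof desc.value !== 'function') continue; // leave accessors alone
    const original = desc.value;
    desc.value = function (...args) {
      before(key, args);
      return original.apply(this, args);
    };
    Object.defineProperty(Ctor.prototype, key, desc);
  }
}

const calls = [];
class Visitor {
  test() { return 'test'; }
  test2() { return 'test2'; }
}
wrapAllMethods(Visitor, (name) => calls.push(name));

const v = new Visitor();
v.test();
v.test2();
// calls is now ['test', 'test2']
```

Note that class methods are non-enumerable, which is why `Object.getOwnPropertyNames` (not `Object.keys` or `for...in`) is the right enumeration tool here, in both the TypeScript original and this sketch.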
elixir-ecto/ecto
195020521
Title: Allowing for multiple selects in a query Question: username_0: Problem When building queries that compose you can append `where` clauses and `limits`, but you can only have one `select` clause defined per query. This causes duplication when building queries that need to append additional values that are not defined in the original schema. This is a problem we've experienced when building APIs that require complex query results. Below is a simplified example that, while a bit contrived, represents a real-life case. Example
```elixir
def summary(query, :average) do
  from v in query,
    select: %{
      total_count: fragment("count(?) as total_count", v.id),
      value: average(v.runtime)
    }
end

def summary(query, :sum) do
  from v in query,
    select: %{
      total_count: fragment("count(?) as total_count", v.id),
      value: sum(v.runtime)
    }
end

def summary(query, :median) do
  from v in query,
    select: %{
      total_count: fragment("count(?) as total_count", v.id),
      value: fragment("quantile(?, 0.5) as value", v.time)
    }
end
```
... continued for multiple summary-style stats, each with a similar structure but small differences. Now if we want to rename total_count we'll need to do it in 3 different places. Proposal An addition to the Ecto query API called `select_merge`: this will take the existing select map|dict|list and merge it with the one supplied to `select_merge`, with keys from the new map overwriting existing ones; if no select is currently defined it will use the value as-is. Much like the Dict.merge semantics. Example
```elixir
def summary_common(query) do
  from v in query,
    select_merge: %{
      total_count: fragment("count(?) as total_count", v.id)
    }
end

def summary(query, :average) do
  from(v in query,
    select: %{
      value: average(v.runtime)
    })
  |> summary_common()
end

...
```
In my ideal world `select` would always merge if it encounters multiple selects, but I think this is a safer way. I do not have a strong understanding of how this will change aggregates, groupings, subqueries or joins.
Further Discussion from the Mailing List: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elixir-ecto/ymyULSUhPnk/ruDdtk5nBQAJ Answers: username_1: @username_2 The map update syntax made me think of this. It could potentially be a good way to implement the basics of multiple select. A map update could represent a merge of the select statements? username_2: @username_1 the map update syntax assumes the key exists which is the opposite of what we want here (and a bit the opposite of merge). username_3: This is especially valuable when composing queries. @username_2 is there a place we can see priorities for these issues, especially this one? username_2: There is no priority list. The best way to guarantee this is in the next version of Ecto is by contributing it. Status: Issue closed username_0: ๐Ÿ˜ Omg thank you so much @username_2 โค๏ธ username_4: Right on time! Tnx @username_2 username_1: This feature just eliminated a massive amount of conditional complexity in many parts of our code. As always we are eternally grateful @username_2.
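The merge semantics proposed above (keys from the newly supplied map win; with no existing select, the new one is used as-is) are essentially plain map merging. A small JavaScript sketch of just that semantic, with illustrative field names, not Ecto itself:

```javascript
// Sketch of the proposed select_merge semantics on plain objects:
// if no select exists yet, take the new one as-is; otherwise merge,
// with keys from the newly supplied map overwriting existing ones
// (Dict.merge-style: the second argument wins on conflicts).
function selectMerge(currentSelect, newSelect) {
  if (currentSelect == null) return { ...newSelect };
  return { ...currentSelect, ...newSelect };
}

const base = { total_count: "count(id)", value: "avg(runtime)" };
const override = { value: "sum(runtime)" };

const merged = selectMerge(base, override);
console.log(merged);                  // value comes from the override map
console.log(selectMerge(null, override)); // no base select: used as-is
```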
refraction-networking/utls
653860736
Title: uTlsConn.Handshake() error: local error: tls: unexpected message Question: username_0: Env: Go 1.14, Windows amd64 Whenever I use the client hello of Chrome for version 70, 72, or 83, the program will run into this error `HttpGetByHelloID(HelloChrome_70) failed: uTlsConn.Handshake() error: local error: tls: unexpected message` How to reproduce: 1. Change the example to the Chrome client hello of the versions mentioned above 2. Run it Answers: username_0: the same issue persists in Ubuntu-18.04 amd64 using go version 1.8 username_1: Could be a server issue. Did you take a look at which `unexpected message` it is in Wireshark? username_0: @username_1 ![image](https://user-images.githubusercontent.com/23499232/87105279-b2d50d00-c284-11ea-8a93-22eddc70cc44.png) This is what I got in Wireshark username_0: Comparison when using CHROME_62 ![image](https://user-images.githubusercontent.com/23499232/87105512-65a56b00-c285-11ea-8b71-f33526ad25b6.png) username_1: When I visit https://192.168.3.11 in Chrome 83 I get this: ``` This site can't provide a secure connection 192.168.3.11 uses an unsupported protocol. ERR_SSL_VERSION_OR_CIPHER_MISMATCH Unsupported protocol The client and server don't support a common SSL protocol version or cipher suite. ``` Looks like that server needs to update its TLS config and/or implementation. username_0: I got the same exact problem when running on google.com as well, so I believe it's not a server-side issue username_2: I can confirm this issue with HelloChrome_83 and cloudflare.com:443. I believe this is due to this extension not being implemented: https://github.com/refraction-networking/utls/blob/ada0bb9b38a0975b15bb4591cd4a939fe74d1a1b/u_parrots.go#L280-L282 When you comment this extension out, the TLS handshake succeeds. Of course, that's no longer Chrome 83. Fwiw, a fix exists: https://github.com/refraction-networking/utls/issues/22. We cannot merge that here due to license issues.
Perhaps we can develop our own implementation of certificate compression, or check for another one. username_1: Perhaps we can. Or perhaps @username_3 can agree to dual-license his uTLS changes so we can pull them here. I am pretty busy right now, since I have a thesis defense next month, so I would really appreciate help with the library. username_3: I'll need to think about this. username_0: I've resolved some of the issues that I received. It turns out that for some websites (example: www.something.com) I need to put something.com as the ServerName in tls.Config and dial www.something.com. Putting the www. subdomain in tls.Config will result in a handshake error username_4: Confirmed, @username_3's fix works; UTLS fails with anything using `TLS 1.3`, with `unexpected message` on the handshake username_5: I have met the same problem as you. Do you have any solution? Any help will be appreciated!
withyuns/cogsmap
220761961
Title: Milestone 2 Feedback Answers: username_1: Team C5 Feedback: We really like the fact that your storyboards are detailed and specific. The use cases that you presented make it very obvious when each app idea would be useful. The two storyboards that stood out to us the most were the foot traffic visualizer and the pet-friendly location finder. We think these ideas are unique and could be quite useful for the target audiences of urban explorers and pet owners. Another idea we found interesting was the demographic map. However, we don't know if this application would be useful for the general user. Rather, it seems constrained to research uses. username_2: Feedback from Team: YellowCow (A5): For the traffic prediction idea, we really like the concept, but there are many technical challenges to overcome. First, there will need to be a back-end mechanism to collect past data. If the team is able to implement these, we think this will be a really good idea. Another idea is checking the foot traffic of a certain area. The problem with this idea is that we are not sure how viable the application will be in a given city. Some areas may not have enough information to be useful. Additionally, how will the application track the user's position? Is it going to be tracking the user all the time? The parking availability application sounds very generic and has possibly been done before. We see several problems with this idea. First, and most important: by what means will the application track the available spots? If this is all done by collecting the positions of all users who use the app, what about the users who don't?
username_3: Feedback from Big Happy Family (Team 26) 1- seems a lot like the functionality of google maps already, not sure how it expands on what already exists 2- where would the data come from, for how many people there are at an event 3- could be interesting to see how it is all visualized, data could be taken from some gov 4- seems novel, never seen weather overlayed with maps, but it could already exist. sounds simple, wonder how it can be expanded further so it is not just a visualization of weather on a map. where would this data even come from to be up to date. 5- where would the data come from, how to determine availability in the parking structures. no system in place at this point to make this. 6- how will this differ from existing apps that do this idea? username_4: Team K2 Storyboard - Checking Traffic - Google Maps already does this. - The application doesn't have a map as a core feature. Storyboard - Checking Foot Traffic - How are you going to check how many people are at a certain place? It would be difficult and expensive to develop this application. - It would be hard to account for factors like age for certain events. - Just because a place has a lot of people doesn't mean it's a place where an event is happening or a place a user might want to visit. Storyboard - Different Demographics in an Area - How would you get the data of the demographics of an area? - It seems like a lot of data to process for the application. Storyboard - Weather Patterns - Some weather applications already provide maps with weather patterns in real-time. Storyboard - UCSD Parking - How would you keep track of the parking spaces on campus without sensors? The application seems like it would be difficult to develop because of potential hardware dependencies. Storyboard - Many Pets - Certain establishments may prohibit pets from being there based on breed or even pets in general. It would be hard to account for that.
jaredhanson/passport-facebook
32500571
Title: Can't get user profile photos. Question: username_0: Hi all, I can't get the fb user photos as a profile field. Here is my code: config.js
```
passport.use(new FacebookStrategy({
    clientID: config.facebook.clientID,
    clientSecret: config.facebook.clientSecret,
    callbackURL: config.facebook.callbackURL
  },
  function(accessToken, refreshToken, profile, done) {
    console.log(profile)
    User.findOne({ 'facebook.id': profile.id }, function (err, user) {
      if (err) { return done(err) }
      if (!user) {
        user = new User({
          name: profile.displayName,
          email: profile.emails[0].value,
          username: profile.username,
          provider: 'facebook',
          facebook: profile._json,
          photos: profile.picture
        })
        user.save(function (err) {
          if (err) console.log(err)
          return done(err, user)
        })
      } else {
        return done(err, user)
      }
    })
  }
))
```
router.js
```
app.get('/auth/facebook', passport.authenticate('facebook', {
  display: 'popup',
  scope: [ 'email', 'basic_info', 'user_photos'],
  profileFields: ['id', 'displayName', 'photos', 'emails', 'birthday'],
  failureRedirect: '/login'
}), users.signin)
```
Answers: username_1: This works - however you cannot use 'picture.type(large)' and 'photos' together username_2: `FacebookGraphAPIError: (#12) username field is deprecated for versions v2.0` Just remove `username` username_3: how do you actually pass the photo into an ejs template?
username_4: ``` <%= img_tag(profile.photo) %> ``` username_3: ReferenceError: C:\Users\MyComp\Desktop\Project\views\profile.ejs:46 44| <strong>email</strong>: <%= user.facebook.email %><br> 45| <strong>name</strong>: <%= user.facebook.name %> 46| <%= img_tag(profile.photo) %> 47| </p> 48| 49| </div> img_tag is not defined Code: passport.use(new FacebookStrategy({ clientID: configAuth.facebookAuth.clientID, clientSecret: configAuth.facebookAuth.clientSecret, callbackURL: configAuth.facebookAuth.callbackURL, profileFields: ['email', 'displayName', 'photos'] }, function(accessToken, refreshToken, profile, done) { process.nextTick(function(){ User.findOne({'facebook.id': profile.id, picture: profile.photos ? profile.photos[0].value : '/img/faces/unknown-user-pic.jpg'}, function(err, user){ if(err) return done(err); if(user) return done(null, user); else { var newUser = new User(); newUser.facebook.id = profile.id; newUser.facebook.token = accessToken; newUser.facebook.name = profile.displayName newUser.facebook.email = profile.emails[0].value, newUser.facebook.picture = profile.photos[0].value; //is this correct? newUser.save(function(err){ if(err) throw err; return done(null, newUser); }) console.log(profile); } }); }); } )); username_5: Hi, Can you please tell me how to get the friends list... username_6: can you tell how to get the list of pages that the user is admin... username_7: @username_6 ````javascript scope: ['manage_pages'], ```` Status: Issue closed
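A recurring snag in this thread is reading `profile.photos[0].value` when `photos` may be missing from the profile. A small defensive helper, sketched in plain JavaScript (the function name and default image path are illustrative, not part of passport-facebook):

```javascript
// Defensive read of the first profile photo URL with a fallback,
// mirroring the `profile.photos ? profile.photos[0].value : default`
// pattern used in the comments above.
function pictureUrl(profile, fallback = "/img/faces/unknown-user-pic.jpg") {
  if (profile && Array.isArray(profile.photos) && profile.photos.length > 0) {
    return profile.photos[0].value;
  }
  return fallback;
}

console.log(pictureUrl({ photos: [{ value: "https://example.com/p.jpg" }] }));
console.log(pictureUrl({})); // falls back to the default image path
```

Calling this inside the strategy's verify callback keeps the user-creation code from throwing when the `photos` profile field was not requested or not returned.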
dominikg/svite
654305057
Title: Routify integration Question: username_0: hello, thank you for the amazing work. I need some help integrating [Routify](https://github.com/sveltech/routify). I have no idea where to start, since I don't understand how the different tools work together. Answers: username_1: please check the routify-mdsvex example. if you need more help you can ask in the routify or svite discord channels. closing. please follow the issue templates and do NOT post support requests here Status: Issue closed
adjust/ios_sdk
205645318
Title: Deeplink using traditional Adjust deeplink Question: username_0: Hi guys, I have a problem with Universal links on the app. I've configured both the app and the Adjust dashboard to use Universal links. I'll try to explain below the scenarios I've tested and the result of them (in the two ways described here https://docs.adjust.com/en/universal-links/#running-campaigns-through-universal-links ) : 1. Using deeplink: https://abcd.adj.st/?adjust_t=abc123 - App installed: The app is directly opened - App not installed: Open App Store. Scenario 1 is working as expected. 2. Using the traditional Adjust deeplink (https://docs.adjust.com/en/universal-links/#universal-links-and-deep-link-parameters-within-adjust-trackers) https://app.adjust.com/tr4ck1ng1d?deep_link=myapp://promotion: - App Not Installed: Open the app store page. Deferred deep link works fine. - App Installed: Open the app store page. (_**should open the app directly**_). I also tried encoding the deep_link parameter and clicking on the link from Mail, the Notes app, and Messages, and none of these worked when the app is installed. It always redirects to the app store. By looking at the table in the documentation I can see which the recommended approach is for every method. Note: If I paste the URL directly into Chrome it will act as a deep link to a URI (prior to iOS 9), asking the user if he/she wants to open the app (continueUserActivity is not called). By clicking Open, the app is opened and the deferred deep link works fine. Tested using iOS 9.3.1 and iOS 10.1. I have also checked some other issues: https://github.com/adjust/ios_sdk/issues/153 So, if I understood correctly, depending on the method we're using to target the users, the links have to be built one way or another, don't they? However, our app still supports iOS 8, so I guess we should always use the traditional deep link parameter.
Our main focus will be targeting via Facebook, so deep link parameters should be used. However, how can I test this? If I run a campaign through email, the users won't be able to open the offer unless they specifically click on the link in Safari or **Universal Links** is used to create the campaign (which again leads me to the question below). As specified here (https://docs.adjust.com/en/universal-links/#universal-links-and-deep-link-parameters-within-adjust-trackers), "Adjust will recognise if Universal Links should be used", but does it work the other way around? If using Universal Links, will the app still be opened on iOS 8 using the normal scheme approach? Is there a way of using only one of the methods above to make it work for all iOS versions? Thanks. Answers: username_1: Hi @username_0 And thanks for the detailed issue description. First, I would like to check your scenario 2. You have said that when using a traditional Adjust tracking URL, once your app is installed, hitting that link redirects you to the store and doesn't open the app (which should happen, of course). Where are you clicking on the URL? Is it hosted on some web page which you are opening from your mobile browser and clicking on it, or are you just pasting the URL into the URL bar? Which browser are you using? Which iOS version does your test device have? Cheers username_0: Hi @username_1 I've tested that with iOS 9.3.1 and iOS 10.1. I've tried clicking from Mail and Notes, and pasting directly into Safari. - All of these redirected to the App Store. I've also tried pasting the link into Chrome, which acts as a deep link to a URI. It works from Chrome after the user clicks on "Open". username_2: I also ran into this issue. The only way I could get it working was when the link was already embedded in a webpage. I set up a simple webpage that had a button that, when clicked, opened the https://app.adjust.com/tr4ck1ng1d link. At that point, when Adjust redirects to the app's universal link, iOS opens up the app.
But as you described, if the link was tapped on outside of Safari (like in Mail or Notes) it will not trigger the universal link to open. I'm not sure if this is a bug or just how Apple intended it to work. username_0: Thanks @username_2 for you comment. I'll try with that. username_1: Hi @username_0, Any update on this issue? username_0: @username_1 Sorry for the late reply. It is working as Erik commented, thanks username_1: Lovely. Thanks for the update! ๐Ÿบ Status: Issue closed
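On the encoding attempt mentioned earlier in the thread: since the `deep_link` value is itself a URL, it has to be percent-encoded when appended as a query-string parameter. A minimal sketch using the placeholder tracker token from the thread (illustrative helper name; this is just standard URL encoding, not Adjust SDK code):

```javascript
// Build a traditional Adjust-style tracker URL with an encoded
// deep_link parameter. The scheme URL must be percent-encoded so
// the "://" survives as a query-string value.
function withDeepLink(trackerUrl, schemeUrl) {
  return trackerUrl + "?deep_link=" + encodeURIComponent(schemeUrl);
}

const url = withDeepLink("https://app.adjust.com/tr4ck1ng1d", "myapp://promotion");
console.log(url);
// → https://app.adjust.com/tr4ck1ng1d?deep_link=myapp%3A%2F%2Fpromotion
```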
stellar/project-viewer
702921614
Title: [1] Add preliminary spec Question: username_0: **What** A preliminary spec is currently under review in this [Google doc](https://docs.google.com/document/d/1EKM1g0y8CGZv8y0BpT4ideXFCdVz1PweuGpaJFfV00g), so nontechnical users can easily comment on whether it hits their requirements. Once that round of revision is stamped, we should move the specification to this repo. **Why** Open-sourcing a project is meaningless if the spec is not also open-sourced.
hashgraph/hedera-services
701374642
Title: Fees - FieldSourcedFeeScreening can throw null exception in some scenarios Question: username_0: Created by [261](https://github.com/swirlds/hedera-fpcomplete-audit/pull/261) ## Situation We have a method named [canParticipantAfford](https://github.com/hashgraph/hedera-services/blob/e4f825ce23d5e4429192c8bf218ea220064d548e/hedera-node/src/main/java/com/hedera/services/fees/charging/FieldSourcedFeeScreening.java#L84 "canParticipantAfford") in the class `FieldSourcedFeeScreening` defined like this: ``` java @Override public boolean canParticipantAfford(AccountID participant, EnumSet<TxnFeeType> fees) { long exemptAmount = 0; if (fees.contains(THRESHOLD_RECORD)) { exemptAmount += exemptions.isExemptFromRecordFees(participant) ? feeAmounts.get(THRESHOLD_RECORD) : 0; } // what if feeAmounts.get doesn't have threshold_record long netAmount = totalAmountOf(fees) - exemptAmount; return check.canAfford(participant, netAmount); } ``` The problem here is that the above code can throw a null exception if the `feeAmounts` doesn't have the key `THRESHOLD_RECORD` in it. We have a small test to reproduce the above behavior. You can add the following method in the test file [FieldSourcedFeeScreeningTest.java](https://github.com/hashgraph/hedera-services/blob/e4f825ce23d5e4429192c8bf218ea220064d548e/hedera-node/src/test/java/com/hedera/services/fees/charging/FieldSourcedFeeScreeningTest.java#L103 "FieldSourcedFeeScreeningTest.java"): ``` java @Test public void feeTest() { // setup: EnumSet<TxnFeeType> thresholdRecordFee = EnumSet.of(THRESHOLD_RECORD); subject.setFor(NETWORK, network); subject.setFor(SERVICE, service); subject.setFor(NODE, node); subject.setFor(CACHE_RECORD, cacheRecord); // when: boolean viability = subject.canParticipantAfford(master, thresholdRecordFee); // then: assertFalse(viability); } ``` And when the test is executed, it fails with null exception. 
## Problem When the `fees` contains `THRESHOLD_RECORD` and the `feeAmounts` doesn't contain `THRESHOLD_RECORD` as its key, in some conditions it can lead to a null exception. Also, we could see the same pattern at various other locations where it might be an issue: * In [itemizedFess method](https://github.com/hashgraph/hedera-services/blob/e4f825ce23d5e4429192c8bf218ea220064d548e/hedera-node/src/main/java/com/hedera/services/fees/charging/ItemizableFeeCharging.java#L148 "itemizedFess method") * In [totalAmountOf method](https://github.com/hashgraph/hedera-services/blob/e4f825ce23d5e4429192c8bf218ea220064d548e/hedera-node/src/main/java/com/hedera/services/fees/charging/FieldSourcedFeeScreening.java#L93 "totalAmountOf method") ## Suggestions There are multiple suggestions based on the situation: * Document the pre-condition of the call explicitly so that the caller is aware of it. * If the pre-condition doesn't exist, check for null explicitly and throw a new custom exception or modify the business logic accordingly.<issue_closed> Status: Issue closed
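The defensive check suggested above applies to any map-backed fee table: treat a missing key as zero (or fail explicitly) instead of dereferencing the lookup result directly. A JavaScript sketch of the default-to-zero variant (illustrative only, not the Java codebase; `undefined` from `Map.get` plays the role of the `null` that Java auto-unboxing trips over):

```javascript
// Sketch: total a set of fees from a Map, defaulting missing entries
// to 0 instead of blowing up on an absent key (the analogue of
// feeAmounts.get(THRESHOLD_RECORD) returning null and being unboxed).
function totalAmountOf(feeAmounts, fees) {
  let total = 0;
  for (const fee of fees) {
    total += feeAmounts.get(fee) ?? 0; // a missing key contributes nothing
  }
  return total;
}

const feeAmounts = new Map([["NETWORK", 10], ["SERVICE", 5]]);
console.log(totalAmountOf(feeAmounts, ["NETWORK", "SERVICE"]));       // 15
console.log(totalAmountOf(feeAmounts, ["THRESHOLD_RECORD", "NODE"])); // 0
```

Whether defaulting or throwing a custom exception is right depends on the business logic, as the Suggestions section notes.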
PalmerAL/min
148093169
Title: Copy typo - "written" has an extra t Question: username_0: At the bottom of the tour page. ![screen shot 2016-04-13 at 11 09 45 am](https://cloud.githubusercontent.com/assets/391577/14498457/841bec72-0168-11e6-983a-67b5dda228ee.png) Status: Issue closed Answers: username_1: Fixed, thanks for noticing this!
raviraa/related-files
57837773
Title: Atom.Object.defineProperty.get is deprecated. Question: username_0: atom.workspaceView is no longer available. In most cases you will not need the view. See the Workspace docs for alternatives: https://atom.io/docs/api/latest/Workspace. If you do need the view, please use `atom.views.getView(atom.workspace)`, which returns an HTMLElement. ``` Atom.Object.defineProperty.get (/Applications/Atom.app/Contents/Resources/app/src/atom.js:54:11) Object.activate (/Users/erik/.atom/packages/related-files/lib/related-files.js:9:11) ```
ProgerXP/Notepad2e
787515692
Title: Function requirements Question: username_0: Add a "Remove Duplicate Lines" function. Add an "Always On Top" button to the toolbar. Thanks! Answers: username_1: You need it in the toolbar? Alt+T doesn't work for you? username_0: Yes, a button on the toolbar may be more convenient. username_1: @username_2 1. Rename "Sort Lines..." to "Sort/Deduplicate Lines..." 2. Add a toolbar button for the Always On Top command username_2: Done. ![image](https://user-images.githubusercontent.com/8019354/104959856-b69cd580-5a05-11eb-8630-b540ff6ba3f8.png) username_0: Thanks! :D @username_2 Status: Issue closed
sopaco/sia
284853220
Title: Fix java.lang.NullPointerException in FpsStatsSampler.java line 57 Question: username_0: ### Version 1.0(1) ### ### Stacktrace ### com.username_0.uap.support.modules.apm.sampler.FpsStatsSampler.onScheduleTicked (FpsStatsSampler.java:57); com.username_0.uap.support.modules.apm.sampler.ScheduleBasedSampler$1.run (ScheduleBasedSampler.java:44); ### Reason ### java.lang.NullPointerException ### Link to App Center ### * [https://appcenter.ms/users/dokhell/apps/51ATM/crashes/groups/32c5a4ffa2f90ecada9ec2d118502098ba66503b](https://appcenter.ms/users/dokhell/apps/51ATM/crashes/groups/32c5a4ffa2f90ecada9ec2d118502098ba66503b)
facebook/relay
637737400
Title: Relay - PaginationFragment does not refresh Question: username_0: Load more </button> ) : null} </> ); }; export default ProjectsList; ```` When I press the button the GraphQL query goes well and the response is good, but the view does not update. And when I press it again, I get `Warning: Relay: Unexpected after cursor..., edges must be fetched from the end of the list` because it tries to query the same page again. Any idea what I'm doing wrong? Answers: username_1: check if this helps https://github.com/username_1/relay-workshop/blob/master/workshop/04-usePaginationFragment/exercise.md Status: Issue closed
idoun/SimpleBanner
150222920
Title: Display partial of current wallpaper on the background of preview in the editor Question: username_0: Display partial of current wallpaper on the background of preview in the editor Answers: username_0: Technically possible. Can be applied by cropping:
```java
Drawable wallpaperDrawable = wallpaperManager.getDrawable();
previewTextView.setBackground(wallpaperDrawable);
```
thomasjo/atom-latex
203081538
Title: Latex compiles and opens xdv-file Question: username_0: Thanks for all your work on the excellent latex-package for Atom. Since a recent TeXLive update (I guess the update of latexmk is responsible), the compilation of my tex-file additionally creates a *.xdv file and automatically opens it with Skim. An alternative test with TextMate compiled, as desired, a PDF and no xdv. Is there any chance to react to that (supposed) change in latexmk? My specs: Engine: xelatex, Output format: pdf, PDF producer: dvipfmx (but dvipdf doesn't change anything either) Thanks a lot! Answers: username_1: Thanks for reporting the issue! The recent `latexmk` update has changed the way that XeLaTeX and LuaLaTeX are used, so we will probably have to adjust the code a bit. We'll take a look shortly. username_2: Same problem. It reports a "No opener can be found to open *.xdv file". My specs: Engine: xelatex, Output format: pdf, PDF producer: xdvipfmx Status: Issue closed username_1: Just merged the fix. Will push out a patch release shortly. username_1: Released. Let us know how it works. username_0: Thank you so much for your unbelievably fast response and handling! This is why I love the Atom community! I already updated, tried it, and it works perfectly. I also added *.xdv to the cleaning pattern, which perhaps you would like to add to the default pattern. Since we are talking, I just wanted to briefly spotlight this thread: https://discuss.atom.io/t/keybinding-for-italic-bold-text-in-latex/25701 I don't dare to open an issue for that, but if you would like to implement the useful shortcuts in your latex package, feel free. It works for me and is quite convenient. All the best!! username_1: @username_0 Glad it works! Right now we are trying to keep this package focused on compiling and viewing the resulting documents from LaTeX. There are several other packages to do editing, autocomplete, etc.
You probably want to look at [latex-autocomplete](https://atom.io/packages/latex-autocomplete) or [latexer](https://atom.io/packages/latexer). There are also other [packages](https://atom.io/packages/search?q=latex) to do hyperlinking, inline help, etc. Just be careful with `latex-plus` and `latextools` as they have overlapping functionality with this package and may cause problems. username_2: Also works, thank you for your help!
go-swagger/go-swagger
197088505
Title: Support array type validation for $ref definitions Question: username_0: ## Problem statement Go-swagger supports array type validation for models that use a direct definition, but if the model definition is set via $ref (aliased), the validation logic is not generated during Swagger template rendering, leaving the REST server unable to prevent invalid REST calls. ## Swagger specification
```
definitions:
  IdentifierType:
    type: string
    pattern: ^[A-Za-z][-A-Za-z0-9_]*$
  item:
    type: object
    properties:
      idlist:
        type: array
        items:
          $ref: '#/definitions/IdentifierType'
```
## Steps to reproduce The generated REST server has no validation code rendered for array element schema validation. Status: Issue closed Answers: username_1: @username_0 why is this closed? I am seeing something similar. Did you find a solution?
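What the generated server is expected to do for the aliased items can be written out by hand: apply the `IdentifierType` pattern from the spec above to every element of `idlist`. A JavaScript sketch of that check (illustrative, not go-swagger output):

```javascript
// Validate each element of an array against the IdentifierType pattern
// from the spec above: ^[A-Za-z][-A-Za-z0-9_]*$
const identifierPattern = /^[A-Za-z][-A-Za-z0-9_]*$/;

function validateIdList(idlist) {
  return Array.isArray(idlist) &&
    idlist.every((id) => typeof id === "string" && identifierPattern.test(id));
}

console.log(validateIdList(["abc", "a-b_1"])); // true
console.log(validateIdList(["1abc"]));         // false: must start with a letter
```

The point of the issue is that a per-element check like this should be emitted whether `items` inlines the schema or reaches it through `$ref`.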
paldepind/flyd
143532380
Title: Non-atomic update of dependent stream created within a dependent stream Question: username_0: Hi @username_2, Let me begin by saying nice work. I've really enjoyed using flyd so far. Thank you for sharing it with us all. I've come across an edge case that I'm hoping you can help with. I did my best to uncover the cause in the source but had limited success. Basically, when a dependent stream is created within the body of another stream, and that stream is triggered, the dependent stream's dependents are updated twice. I would expect it to be updated only once, since the top-level stream was only triggered once. It's hard to describe in words. Here is an example that will allow you to reproduce the unexpected behavior:
```js
const f = require('flyd');

const input = f.stream();
input.mark = 'input';

const oninput = f.on(v => {
  const s = f.stream(v);
  s.mark = 's';
  const ds = f.combine(s => s(), [s]);
  ds.mark = 'ds';
  const dds = f.on(v => console.log('dds:', v), ds);
  dds.mark = 'dds';
}, input);
oninput.mark = 'oninput';

input(0);

// Console output:
//
// dds: 0
// dds: 0
```
From what I was able to discern, when the oninput body runs, the following streams are added to the global `toUpdate` object: s, ds (not dds). Then each of those streams has its dependencies updated:

s => updateDeps: ds, dds
ds => updateDeps: dds

Which results in dds being updated twice. Thoughts? Thanks for your help! Best regards, Alex Answers: username_1: By using `flyd.on`, you're forcing anything inside to not be considered a dependency. A dependent stream can only be created through `combine`, which (under the hood) `map`, `merge`, and `scan` use. In your example, streams have the following dependencies:
```
input -> oninput === flyd.on end stream
s -> ds
ds -> dds === implicit flyd.on end stream
```
Where `a -> b` means "a is a parent to b" or, "b depends on a".
so, `dds` isn't actually a stream -- it is an "end" stream for the `flyd.on`, `ds` depends only on the created `s` stream. username_0: Hi @username_1, Thanks for the reply. I went ahead and refactored the code to use `flyd.map` instead of `flyd.on` and got identical results: ```js const f = require('flyd'); const input = f.stream(); const oninput = f.combine(input => { const s = f.stream(input()); const ds = f.map(v => {console.log('ds:', v); return s();}, s); const dds = f.map(v => console.log('dds:', v), ds); }, [input]); input(0); /** * Console output: * * ds: 0 * dds: 0 * dds: 0 */ ``` The example above is simplified in an attempt to highlight the underlying cause of the real issue we are experiencing with the flyd `flatmap` module. To give this thread more context, this is a more specific example of what we are really trying to resolve: ```js const f = require('flyd'); f.flatMap = require('flyd/module/flatmap'); const input = f.stream(); const output = (v) => { const s = f.stream(v); return f.map(v => v, s); }; const outputs = f.map(output, input); // a stream of output streams const flatOutputs = f.flatMap(s => s, outputs); // a stream of output values f.combine(output => console.log('output:', output()), [flatOutputs]); input(0); /** * Console output: * * output: 0 * output: 0 */ ``` In this case, since `input` receives a single value, we expect the flattened stream of output streams to emit a single value. However, we see duplicate values emitted. username_2: Hello @username_0. Thank you for reporting this bug and contributing a nice example that demonstrates it. This truly is rather peculiar. I've added it as a test and seen the same results that you describe. I hope to take a stab at this soon. username_0: Hello @username_2, Thank you. 
Best regards, Alex username_1: Without looking deeper (I'm also curious about what's causing this), I've got a feeling it has to do with the creation of the stream inside an `on` callback -- making the stream inside the update might be deferring a second update cycle, and since it's appended at the end of the graph, it's probably getting executed once the graph gets there, then again on the second cycle. Just a thought. username_3: Hello @username_2 - any updates on this issue? PS: I should have started by saying thank you for this great composable FRP lib, so here you go :-) I am evaluating it for a new frontend project where Rx would be overkill, and I'm getting hooked on it. I really like the fact that it plays well with Ramda, FP principles, fantasy-land etc., unlike most other established libs. With all due respect, I just wanted to know if the project is still alive and maintained, because it seems that lately there are serious issues like this one just hanging around... Kind Regards username_4: Hi @username_2! how are you? I started working with flyd and I found the same issue; do you know if there is any update on this? I tried to fix it myself, but to be honest I don't understand the code very well and the FRP world is new to me. I guess the issue may be related to the `updateStream` function and the `inStream` variable, but that was all I could get. Thanks and sorry to bother you. username_3: Any update on this? Since people want to volunteer for a fix, it would be great if we could have even some guidance (things to look for etc.) from the author @username_2 :-) Status: Issue closed username_5: Duuude. What a close. I think I just [fixed it](https://github.com/username_5/flyd/tree/backport_typescript_fixes). `npm install https://github.com/username_5/flyd/tarball/backport_typescript_fixes`
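The atomic-update property under discussion can be illustrated with a tiny standalone dependency graph (a sketch only, not flyd's actual implementation): even when a sink is reachable through two paths, one source event should update it exactly once.

```javascript
// Minimal stream sketch (not flyd): dependents are collected once per
// push and updated in rank order, so a diamond a -> (b, c) -> d fires
// d a single time per event on a.
function source(value) {
  return { value, deps: [], parents: [], rank: 0, fn: null };
}
function combine(fn, parents) {
  const s = { value: undefined, deps: [], parents, fn,
              rank: Math.max(...parents.map((p) => p.rank)) + 1 };
  parents.forEach((p) => p.deps.push(s));
  s.value = fn(...parents.map((p) => p.value)); // initial computation
  return s;
}
function push(s, v) {
  s.value = v;
  const seen = new Set();
  const queue = [...s.deps];
  while (queue.length) {              // collect transitive dependents once
    const d = queue.shift();
    if (seen.has(d)) continue;
    seen.add(d);
    queue.push(...d.deps);
  }
  [...seen].sort((a, b) => a.rank - b.rank) // then update in rank order
    .forEach((d) => { d.value = d.fn(...d.parents.map((p) => p.value)); });
}

let fires = 0;
const a = source(0);
const b = combine((x) => x + 1, [a]);
const c = combine((x) => x * 2, [a]);
const d = combine((x, y) => { fires += 1; return x + y; }, [b, c]);
fires = 0;      // ignore the initial computation
push(a, 10);
console.log(d.value, fires); // → 31 1
```

The `seen` set plus rank ordering is what makes the update atomic; the bug in this thread is, in effect, a node entering the update cycle through two bookkeeping paths.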
xamarin/Xamarin.Forms
361912659
Title: PanGesture interaction resets ScrollView to its initial position (0, 0) Question: username_0: Here is a small reproduction code sample. Try to pan this view: it sets the scroll view to (0, 0) immediately. I've investigated a little; HandleScrolled is called, but ContentOffset is (0, 0), and I have no idea why.
```csharp
public class App : Application
{
    public App()
    {
        var panGesture = new PanGestureRecognizer();
        var scroll = new ScrollView
        {
            GestureRecognizers = { panGesture },
            Content = new StackLayout
            {
                Children =
                {
                    new BoxView { Color = Color.Red, HeightRequest = 500 },
                    new BoxView { Color = Color.Gray, HeightRequest = 500 },
                    new BoxView { Color = Color.Yellow, HeightRequest = 500 }
                }
            }
        };

        panGesture.PanUpdated += (sender, e) =>
        {
            if (e.StatusType == GestureStatus.Running)
            {
                ((View)sender).TranslationX = e.TotalX;
            }
        };

        var mainLayout = new AbsoluteLayout();
        mainLayout.Children.Add(scroll, new Rectangle(0, 0, 1, 1), AbsoluteLayoutFlags.All);
        MainPage = new ContentPage { Content = mainLayout };

        scroll.Scrolled += (sender, e) => { };
        MainPage.Appearing += (sender, e) =>
        {
            scroll.ScrollToAsync(0, 300, true);
        };
    }
}
```
Status: Issue closed
Answers: username_0: Sorry, I used the wrong branch; https://github.com/xamarin/Xamarin.Forms/pull/3842 is the right PR.
username_0:
Tested on iOS simulator (iPhone X).
username_0: So, you just need to change TranslationX/TranslationY or any other property that causes OnNativeViewChanged.
Status: Issue closed
icsharpcode/CodeConverter
305948142
Title: VB -> C#: Select expressions not supported Question: username_0: Input: ``` Public Class TestClass Shared Function TimeAgo(daysAgo As Integer) As String Select Case daysAgo Case 1 Return "1 day ago" Case Is > 1 Return daysAgo & " days ago" Case Else Return "today" End Select End Function End Class ``` Error: ``` System.NotSupportedException: Specified method is not supported. at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.MethodBodyVisitor.VisitSelectBlock(SelectBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.SelectBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.Visit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingMethodBodyVisitor.DefaultVisit(SyntaxNode node) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.VisitSelectBlock(SelectBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.SelectBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.<>c__DisplayClass33_0.<VisitStatements>b__0(StatementSyntax s) at System.Linq.Enumerable.<SelectManyIterator>d__17`2.MoveNext() at Microsoft.CodeAnalysis.SyntaxList`1.CreateNode(IEnumerable`1 nodes) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.VisitStatements(SyntaxList`1 statements, Boolean isIterator) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.VisitMethodBlock(MethodBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.MethodBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.Visit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.DefaultVisit(SyntaxNode node) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.VisitMethodBlock(MethodBlockSyntax node) at 
Microsoft.CodeAnalysis.VisualBasic.Syntax.MethodBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.<ConvertMembers>d__19.MoveNext() at Microsoft.CodeAnalysis.SyntaxList`1.CreateNode(IEnumerable`1 nodes) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.VisitClassBlock(ClassBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.ClassBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.Visit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.WithPortedTrivia[TSource,TDest](SyntaxNode node, Func`3 portExtraTrivia) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.VisitClassBlock(ClassBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.ClassBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.<VisitNamespaceBlock>b__18_0(StatementSyntax m) at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext() at Microsoft.CodeAnalysis.SyntaxList`1.CreateNode(IEnumerable`1 nodes) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.VisitNamespaceBlock(NamespaceBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.NamespaceBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.Visit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.DefaultVisit(SyntaxNode node) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.VisitNamespaceBlock(NamespaceBlockSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.NamespaceBlockSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.<VisitCompilationUnit>b__16_3(StatementSyntax m) at 
System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext() at Microsoft.CodeAnalysis.SyntaxList`1.CreateNode(IEnumerable`1 nodes) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.NodesVisitor.VisitCompilationUnit(CompilationUnitSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.CompilationUnitSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxVisitor`1.Visit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.DefaultVisit(SyntaxNode node) at ICSharpCode.CodeConverter.CSharp.CommentConvertingNodesVisitor.VisitCompilationUnit(CompilationUnitSyntax node) at Microsoft.CodeAnalysis.VisualBasic.Syntax.CompilationUnitSyntax.Accept[TResult](VisualBasicSyntaxVisitor`1 visitor) at ICSharpCode.CodeConverter.CSharp.VisualBasicConverter.ConvertCompilationTree(VisualBasicCompilation compilation, VisualBasicSyntaxTree tree) at ICSharpCode.CodeConverter.CSharp.VBToCSConversion.SingleFirstPass(Compilation sourceCompilation, SyntaxTree tree) at ICSharpCode.CodeConverter.Shared.ProjectConversion`1.FirstPass() ``` Status: Issue closed Answers: username_1: The only way to reliably handle Select Case is with an if...else if...else if ladder because C#'s switch case can only handle constants for the cases.
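To make the answer concrete: the VB `Select Case` in the report maps to a plain if/else-if ladder, since (as the answer notes) a classic C# `switch` only takes constant case labels and cannot express `Case Is > 1`. Here is the same logic sketched in JavaScript, purely for illustration of the shape the converter would need to emit:

```javascript
// Sketch of the if/else-if ladder the VB Select Case above should become.
// Case 1      -> daysAgo === 1
// Case Is > 1 -> daysAgo > 1   (a relational case: not expressible as a
//                               constant switch label, hence the ladder)
// Case Else   -> final else
function timeAgo(daysAgo) {
  if (daysAgo === 1) {
    return "1 day ago";
  } else if (daysAgo > 1) {
    return daysAgo + " days ago";
  } else {
    return "today";
  }
}
```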
joseluisvf/SEBC
212380626
Title: Storage Labs
Question: username_0: Issue created to monitor the progress of the labs pertaining to HDFS storage:
- [ ] Replicate to another cluster
- [ ] Test HDFS performance
- [ ] Test HDFS Snapshots
- [ ] Enable HDFS HA

Answers: username_0: Here's the URL to access the CM instance:
```
http://172.16.58.3:7180/cmf/login
```
username_0: Notes concerning enabling HA for the NN:
- In the wizard, simply naming the journal nodes' folders will create them; there is no need to get on HDFS and do this
- After the wizard, follow this [guide](https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_hag_hdfs_ha_cdh_components_config.html) for enabling HA on other services (Hive, Hue, Oozie, ...), and take note that these all depend on Hive.
- To back up the Hive metastore, run the following command:
```
mysqldump -u root -p metastore -r /tmp/metastore.sql
```
Status: Issue closed
CrunchyData/postgres-operator
1065791327
Title: What about the pgo client?
Question: username_0: Hello, I couldn't find an answer... What about the pgo client pod or installation manuals for Release 5.X? Can we use the same pgo pod and/or client installation instructions as with Release 4.X?
Thanks & Regards
Answers: username_1: I'm also interested in using the pgo CLI with version 5. Is it not available?
username_2: Hello, according to https://access.crunchydata.com/documentation/postgres-operator/5.0.4/releases/5.0.0/ `pgo-client` was removed
Clinical-Genomics/MIP
584272754
Title: MIP analysis of small panels
Question: username_0: MIP in its current form isn't designed to run small panels and will often fail on such samples. This is somewhat due to a lack of variants in the panels, causing some files to be empty. The solution is to write a new MIP pipeline tailored towards panels.
Answers: username_0: # Output from rtg
## Case: deepalien (NA24143)
Panel: GMCKSolidv4.bed
Read pairs: 118,168,226
```
Threshold  True-pos-baseline  True-pos-call  False-pos  False-neg  Precision  Sensitivity  F-measure
----------------------------------------------------------------------------------------------------
   99.000               4557           4564        183         43     0.9614       0.9907     0.9758
     None               4557           4564        186         43     0.9608       0.9907     0.9755
```
## Case: deepalien (NA24143)
Panel: GMCKSolidv4.bed
Read pairs: 68,648,554
```
Threshold  True-pos-baseline  True-pos-call  False-pos  False-neg  Precision  Sensitivity  F-measure
----------------------------------------------------------------------------------------------------
   99.000               4552           4559        205         48     0.9570       0.9896     0.9730
     None               4553           4560        210         47     0.9560       0.9898     0.9726
```
## Case: sureemu (NA24143)
Panel: GMCKSolidv4.bed
Read pairs: 40,585,090
```
Threshold  True-pos-baseline  True-pos-call  False-pos  False-neg  Precision  Sensitivity  F-measure
----------------------------------------------------------------------------------------------------
   99.000               4554           4560        193         46     0.9594       0.9900     0.9745
     None               4554           4560        200         46     0.9580       0.9900     0.9737
```
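For readers of the rtg tables above, the three quality columns are derived from the count columns. A small sketch (plain JavaScript, with the values taken from the first row of the first table) shows the relationship:

```javascript
// Reproduce the quality columns of the first GMCKSolidv4 row from its counts.
const tpBaseline = 4557; // True-pos-baseline
const tpCall = 4564;     // True-pos-call
const fp = 183;          // False-pos
const fn = 43;           // False-neg

const precision = tpCall / (tpCall + fp);           // call-side precision
const sensitivity = tpBaseline / (tpBaseline + fn); // baseline-side recall
const fMeasure = (2 * precision * sensitivity) / (precision + sensitivity);

console.log(precision.toFixed(4), sensitivity.toFixed(4), fMeasure.toFixed(4));
// matches the reported 0.9614 0.9907 0.9758
```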
dart-lang/dart-pad
394930688
Title: list.add not running
Question: username_0: I have tried to use .add with my list object, but it is showing an uncaught exception.
Answers: username_1: Can you give an example of the code you are trying to run that gives you an exception?
Status: Issue closed
username_2: Just tested with this code:
```
void main() {
  final list = [1, 2, 3];
  list.add(4);
  print(list);
}
```
DartPad produced the expected [1, 2, 3, 4] as output. If you can still reproduce this, please reopen and include the code that's giving you trouble.
JoshMcguigan/betafpv-f3
345426871
Title: Why not choose a 10DOF flight controller?
Question: username_0: Hi. I'm working on a similar project. Why not choose a 10DOF flight controller like [this](https://www.banggood.com/Upgrade-NAZE32-F3-Flight-Controller-Acro-6-DOF-Deluxe-10-DOF-for-Multirotor-Racing-p-1010232.html?rmmds=myorder&cur_warehouse=USA)?
Answers: username_1: That does look like a nice board, especially because of all the exposed ports it has. That would make it much easier to troubleshoot. I originally chose the betafpv-f3 because it has onboard ESCs and you can buy pre-built drones that include the betafpv-f3 flight controller (the Beta75S is one of them). The benefit of that is that if other Rust developers want to try this out for themselves, buying a prebuilt drone to test on is easier than building up a drone from components. That said, I have run into some challenges related to the lack of ports available for troubleshooting use on the betafpv-f3. At the moment I am trying to figure out how I can use one of the motor ports to communicate troubleshooting information, but I've had some trouble getting my [bit banging serial](https://github.com/username_1/bit-bang-serial) implementation working, although it works on the STM32F3DISCOVERY board. The next step is checking the output with an oscilloscope to hopefully track down the issue.
username_0: Completely makes sense. While developing this project, are you using a prebuilt drone like the `Beta75s` or assembling one yourself?
username_1: I have not purchased the `Beta75s` yet, but I plan to soon. I did figure out the serial communication issue I was having, which will help move things along.
username_0: Nice 👍.
apache/incubator-ponymail
368375688
Title: Bug: no need to sort after scroll
Question: username_0: The scroll API is most efficient when sorting by _doc. However, if a different order is needed, it's generally more efficient to have ES do the sort rather than performing it in the client. It's certainly not efficient to do both, which is what currently happens in stats.lua and pminfo.lua (this is a hangover from when the code used scroll/scan).
Status: Issue closed
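As a rough sketch of the point being made (the request-body shapes below follow the standard Elasticsearch search API, and the `epoch` field name is illustrative): when order doesn't matter, scroll with `_doc`; when it does, put the sort in the query and let ES do it, instead of re-sorting in the client afterwards:

```javascript
// Two illustrative search bodies. The bug is doing both at once: scrolling
// by _doc and then re-sorting the whole result set client-side.

// Order doesn't matter (bulk export): _doc is the cheapest scroll sort.
const scrollBody = { size: 1000, sort: ["_doc"] };

// Order matters: have Elasticsearch sort, e.g. newest first by an
// (illustrative) "epoch" date field.
const sortedBody = { size: 1000, sort: [{ epoch: { order: "desc" } }] };

console.log(JSON.stringify(scrollBody));
console.log(JSON.stringify(sortedBody));
```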
videojs/video.js
91253664
Title: Seek problems
Question: username_0: Hello. I'm seeing some unexpected behavior while trying to sync two video layouts. The task is to sync two videos without a fade (out/in) effect, to within 0.000001, because the videos are 3D renders of small objects. I am using the code below to take the time of the first video and seek the second one to it.

```js
function changer(ob1, ob2, call) {
  ob1.pause();
  var a = ob1.currentTime();
  ob2.on('loadedalldata', function () {
    ob2.off('loadedalldata');
    if (ob2.readyState() == 4) {
      ob2.off('loadedalldata');
      // ob2.play();
      ob2.currentTime(a);
      console.log("old a " + a);
      ob2.on("seeked", function () {
        ob2.pause();
        ob2.off("seeked");
        console.log("new b " + ob2.currentTime());
        // ob2.play();
        setTimeout(function () {
          $(ob2.L).parent().parent().show();
          $(ob1.L).parent().parent().hide();
          console.log("video changed!");
        }, 200);
      });
    }
  });
}
```

In other words: pause ob1, take ob1's currentTime into `a`, seek ob2 to `a`, and when ob2 has seeked, pause ob2. Obviously `a` and the new ob2.currentTime() should be the same, but I get these outputs on the latest Chrome 43:
```
old a 7.870086
new b 7.871417

old a 5.868995
new b 5.870086
```
and on the latest Mozilla:
```
old a 2.868995
new b 2.972241
```
If you can tell me why the delta is so big, that would be very good; even better if you can give advice on how to improve it (because I think it's a bad idea to do something in the core "NEW").
Regards.
Answers: username_1: I honestly *seriously* doubt you're ever going to be able to get accuracy out to 6 decimal places. I don't think this is anything we can reasonably help with to that degree of accuracy, but if you can put a reduced test case online we can take a look. I'm going to close this issue, but feel free to keep the conversation going in the comments.
Status: Issue closed
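Given the deltas reported in this thread (roughly 1 ms on Chrome, roughly 100 ms on Firefox), a practical approach is to stop chasing exact equality and treat the two clocks as "in sync" within a tolerance, re-seeking only when the drift exceeds it. A minimal sketch of that check; the 50 ms tolerance is an arbitrary assumption:

```javascript
// Treat two playback clocks as synced if they differ by at most `tol` seconds.
function inSync(timeA, timeB, tol = 0.05) {
  return Math.abs(timeA - timeB) <= tol;
}

// The Chrome delta reported above is well inside 50 ms...
console.log(inSync(7.870086, 7.871417)); // true
// ...while the Firefox delta (~103 ms) is not, so that player would be re-seeked.
console.log(inSync(2.868995, 2.972241)); // false
```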
TEdit/Terraria-Map-Editor
664629446
Title: Line between two points
Question: username_0: What I mean by that title is basically a hotkey (or tool) related to any drawing tool. This new feature would have you place 2 markers (2 points, wherever you want your line of tiles to end) and would draw the shortest line from one to the other using your chosen tile or wall combo.
Status: Issue closed
Answers: username_0: Just realized this is already a thing (place a point, then shift-click another point with the pencil).
Azure/azure-functions-host
966629382
Title: Controlling scale out alternatives?
Question: username_0: I see that there are several options for controlling scale-out:

**1 - functionAppScaleLimit** (https://github.com/MicrosoftDocs/azure-docs/blob/4f0990d78d5ece72d35a83b2f8d52a78fe425765/articles/azure-functions/event-driven-scaling.md#limit-scale-out)

**2 - WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT** (https://github.com/MicrosoftDocs/azure-docs/blob/4f0990d78d5ece72d35a83b2f8d52a78fe425765/articles/azure-functions/functions-app-settings.md#website_max_dynamic_application_scale_out)

**3 - Setting it manually via the Azure portal**

I see that WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT is in preview and has been for a long time. Is it ever going to graduate from preview? For our needs I would prefer to use WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, but wanted to know whether it is reliable and we can use it.
Answers: username_1: functionAppScaleLimit is the way to go. functionAppScaleLimit replaces WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT.
username_0: Thanks for verifying that @username_1. Closing this issue.
Status: Issue closed
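For reference, `functionAppScaleLimit` is a site configuration property rather than a plain app setting, so it is typically set on the site resource itself (for example in an ARM/Bicep template). A sketch of the relevant fragment as a plain object; treat the exact property path as an assumption to verify against the current Azure docs:

```javascript
// Illustrative shape of the site resource fragment carrying
// functionAppScaleLimit (property path is an assumption; check the docs).
const siteResource = {
  type: "Microsoft.Web/sites",
  properties: {
    siteConfig: {
      functionAppScaleLimit: 5, // cap scale-out at 5 instances
    },
  },
};

console.log(siteResource.properties.siteConfig.functionAppScaleLimit); // 5
```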
smallnest/rpcx
275148346
Title: TLS Client Authentication
Question: username_0: How can I use *rpcx* with TLS client authentication? By that I mean: I have a server which provides some services, and I need to respond only to clients which have a **specific** certificate (not every certificate). Currently I used your example (tls) and provided a self-signed certificate (with the Insecure parameter set to true) to the client (the server and client used two different certificates), and I successfully managed to communicate with the server (via the client).
Thanks in advance
Answers: username_1: It is the same as TLS usage in any other Go TCP application. You can look at TLS examples to see how to set a TLS config with non-self-signed certificates, for example, https://gist.github.com/michaljemala/d6f4e01c4834bf47a9c4
username_1: If you mean that only some clients have the specific certificate, rpcx can't handle this case.
Status: Issue closed
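On the "only specific certificates" part of the question: the usual language-agnostic technique is to terminate TLS with client-certificate verification against your own CA, and, if "any cert signed by my CA" is still too broad, additionally pin an allowlist of certificate fingerprints in the connection handler. The allowlist check itself is trivial; here is a sketch (JavaScript is used only for illustration, and the fingerprint values are placeholders):

```javascript
// Allowlist check for mutual TLS: after the handshake has verified that the
// client cert chains to our CA, additionally require a known fingerprint.
// The fingerprints below are placeholders, not real values.
const allowedFingerprints = new Set([
  "AA:BB:CC:DD", // placeholder fingerprint of an authorized client cert
]);

function isAllowedPeer(peerCert) {
  return allowedFingerprints.has(peerCert.fingerprint256);
}

console.log(isAllowedPeer({ fingerprint256: "AA:BB:CC:DD" })); // true
console.log(isAllowedPeer({ fingerprint256: "11:22:33:44" })); // false
```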
openbmc/openbmc
379914305
Title: Usb devices not getting enumerated on USB port A of evb2500 when set as host Question: username_0: aspeed-ast2500-evb device tree by default enables USB port A as usb device via virtual hub. here is the comment from device tree. ``` /* - * Enable port A as device (via the virtual hub) and port B as - * host by default on the eval board. This can be easily changed - * by replacing the override below with &ehci0 { ... } to enable - * host on both ports. - */ ``` In our future ast2500 based application, we need to have both usb ports as host so we removed vhub phandle and added ehci0 as said in the comment in device tree as below: ``` &ehci0 { status = "okay"; }; &ehci1 { status = "okay"; }; &uhci { status = "okay"; }; ``` Inserting both a high speed device(480Mbps) or a full speed device (12M) will lead to enumeration errors as shown below: ``` root@evb-ast2500:~# dmesg | grep usb [ 0.136508] usbcore: registered new interface driver usbfs [ 0.136665] usbcore: registered new interface driver hub [ 0.136856] usbcore: registered new device driver usb [ 3.770279] ehci-platform 1e6a1000.usb: EHCI Host Controller [ 3.776040] ehci-platform 1e6a1000.usb: new USB bus registered, assigned bus number 1 [ 3.800719] ehci-platform 1e6a1000.usb: irq 21, io mem 0x1e6a1000 [ 3.843858] ehci-platform 1e6a1000.usb: USB 2.0 started, EHCI 1.00 [ 3.851171] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 4.17 [ 3.859592] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.866828] usb usb1: Product: EHCI Host Controller [ 3.871713] usb usb1: Manufacturer: Linux 4.17.14-c71662c749dc66a542e717dbd51fefab995c9455 ehci_hcd [ 3.880841] usb usb1: SerialNumber: 1e6a1000.usb [ 3.901234] ehci-platform 1e6a3000.usb: EHCI Host Controller [ 3.906983] ehci-platform 1e6a3000.usb: new USB bus registered, assigned bus number 2 [ 3.917930] ehci-platform 1e6a3000.usb: irq 22, io mem 0x1e6a3000 [ 3.947306] ehci-platform 1e6a3000.usb: USB 2.0 started, EHCI 1.00 
[ 3.954309] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 4.17 [ 3.962715] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.969951] usb usb2: Product: EHCI Host Controller [ 3.974836] usb usb2: Manufacturer: Linux 4.17.14-c71662c749dc66a542e717dbd51fefab995c9455 ehci_hcd [ 3.983966] usb usb2: SerialNumber: 1e6a3000.usb [ 4.006602] platform-uhci 1e6b0000.usb: Detected 2 ports from device-tree [ 4.013601] platform-uhci 1e6b0000.usb: Enabled Aspeed implementation workarounds [ 4.021155] platform-uhci 1e6b0000.usb: Generic UHCI Host Controller [ 4.027590] platform-uhci 1e6b0000.usb: new USB bus registered, assigned bus number 3 [ 4.035529] platform-uhci 1e6b0000.usb: irq 23, io mem 0x1e6b0000 [ 4.044485] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 4.17 [ 4.052908] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 4.060145] usb usb3: Product: Generic UHCI Host Controller [ 4.065723] usb usb3: Manufacturer: Linux 4.17.14-c71662c749dc66a542e717dbd51fefab995c9455 uhci_hcd [ 4.074760] usb usb3: SerialNumber: 1e6b0000.usb [ 4.270099] usbcore: registered new interface driver usbhid [ 4.275689] usbhid: USB HID core driver [Truncated] bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0101 power connect can't get debug descriptor: Resource temporarily unavailable Device Status: 0x0001 Self Powered root@evb-ast2500:~# lsusb -t /: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=platform-uhci/2p, 12M /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M ``` Any hint on where the problem could be is really appreciated. 
(Not really sure whether this is a driver issue or an evb-ast2500 hardware issue.)
Thanks in advance,
Answers: username_0: This appeared to be due to a zero-ohm resistor not being soldered on the EVB. Once soldered, Port A worked in host mode as expected.
Status: Issue closed
ionic-team/ionic-cli
590140802
Title: HTTP Error 503: POST https://res.ionic.io/api/v1/upload
Question: username_0: Uploading source images to prepare for transformations - failed!
HTTP Error 503: POST https://res.ionic.io/api/v1/upload

The response body is the Heroku application-error page:
```html
<title>Application Error</title>
<style media="screen">
  html,body,iframe { margin: 0; padding: 0; }
  html,body { height: 100%; overflow: hidden; }
  iframe { width: 100%; height: 100%; border: 0; }
</style>
<iframe src="//www.herokucdn.com/error-pages/application-error.html"></iframe>
```
I just want to use Ionic 3.
intenthq/gitkv
424235675
Title: If the target directory exists, fetch/pull instead of clone Answers: username_1: I think this issue is referring to [this old code](https://github.com/intenthq/gitkv/blob/a8c13b4b37b3b35dce0ddbde243ee06d452e9e00/docker/git-puller/bin/entrypoint.sh#L30), meaning it is no longer relevant. Status: Issue closed
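For anyone implementing the behavior the title asks for, the decision itself is simple: if the target directory already contains a `.git` directory, fetch/pull; otherwise clone. A sketch of that decision as a pure function (shell commands are only assembled here, not executed; paths and URL are illustrative):

```javascript
// Decide which git command to run for a target directory.
// `hasGitDir` would come from something like fs.existsSync(dir + "/.git").
function gitSyncCommand(hasGitDir, repoUrl, dir) {
  return hasGitDir
    ? `git -C ${dir} pull --ff-only`
    : `git clone ${repoUrl} ${dir}`;
}

console.log(gitSyncCommand(false, "https://example.org/repo.git", "/srv/repo"));
console.log(gitSyncCommand(true, "https://example.org/repo.git", "/srv/repo"));
```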
twin-te/twinte-front
606081169
Title: Add a link to the donor list to the menu
Question: username_0: <!-- Feature request template -->
## Overview
People have donated to us, so display the link (to the donor list).
## Purpose
Show gratitude by linking to the donor list page. Also, since the donor list page links to the donation page, this should make it easier for people to donate.
## Concerns
There is a chance the Apple review will stop passing (we'll deal with that if it happens...)
Answers: username_1: @username_2 please put together a design for the side menu :pray:
username_2: I thought simply adding it to the sidebar list would be fine, but I'm worried the menu content is getting too crowded... (This just punts the debt forward, but I'd like to sort it out somehow in the new version.)
sulu/sulu
443617061
Title: Unpublished page not available in teaser selection
Question: username_0:

| Q | A |
| --- | --- |
| Bug? | no |
| New Feature? | no |
| Sulu Version | 1.6.26 |
| Browser Version | - |

#### Actual Behavior
An unpublished page is not shown in the teaser selection after being selected.

#### Expected Behavior
An unpublished page should also be shown in the teaser selection after being selected.

#### Steps to Reproduce
1. Create a teaser selection.
2. Add an unpublished page to it.
3. The page will not be shown in the list.

#### Possible Solutions
In the case of the admin, the ContentTeaserProvider should not query the published index; it should query the unpublished index instead.
Answers: username_1: For me the page is listed, but the title in the teaser selection is missing.
username_2: Fixed by #5716
Status: Issue closed
mesqueeb/vuex-easy-firestore
386077127
Title: [Bug] modifiedHook doesn't execute when adding guard.
Question: username_0: https://github.com/username_0/vuex-easy-firestore/issues/83#issuecomment-442887026
Answers: username_0: @username_1 I have tried to understand this behaviour, and I found out why. A modification only triggers `serverChange.modifiedHook` when a `firebase.firestore.FieldValue.serverTimestamp()` is used, and this library uses that for `created_at` and `updated_at`. So if you add those to `guard`, then an insert won't execute `addedHook` and a modification won't execute `modifiedHook`. However, if you want a hook on a local change, you can still use `sync.insertHook` and `sync.patchHook` instead! I'm going to update the documentation now and close this thread when I've updated it.
username_1: @username_0 Thanks for the info! If just `updated_at` has to be used, that's OK for me :)
username_0: @username_1 Yeah, and I found out there was a bug with the addedHook, so I just pushed another update to npm. Please update to the latest version and check the new detailed documentation on [Execution timings of hooks](https://username_0.github.io/vuex-easy-firestore/extra-features.html#execution-timings-of-hooks).
Status: Issue closed
username_0: For anyone working with HOOKS, there has been a small (non-breaking) change to their behaviour. Please read about it in the [latest release](https://github.com/username_0/vuex-easy-firestore/releases/tag/v1.26.0).
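To make the `sync.insertHook` / `sync.patchHook` suggestion concrete, here is a sketch of a module config using those hooks. The `(updateStore, doc, store)` hook signature follows the library's documented convention, but treat the details as assumptions to verify against the vuex-easy-firestore docs:

```javascript
// Sketch of a vuex-easy-firestore module config that reacts to *local*
// inserts/patches via sync hooks (instead of serverChange hooks, which the
// thread explains only fire when serverTimestamp fields come back).
const moduleConfig = {
  firestorePath: "myDocs",
  moduleName: "myDocs",
  sync: {
    guard: ["created_at", "updated_at"],
    insertHook: (updateStore, doc, store) => {
      console.log("locally inserted:", doc.id);
      return updateStore(doc); // must be called to apply the change
    },
    patchHook: (updateStore, doc, store) => {
      console.log("locally patched:", doc.id);
      return updateStore(doc);
    },
  },
};

// Simulate the library invoking the hook:
let applied = null;
moduleConfig.sync.insertHook((doc) => { applied = doc; }, { id: "doc1" }, null);
console.log(applied.id); // "doc1"
```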
Kraymer/flinck
843239957
Title: Client Error: Unauthorized for url: http://www.omdbapi.com/?t=Kurenai+no+buta+AKA+Porco+Rosso&y=1992&page=1&plot=short&tomatoes=False Question: username_0: I get many errors when i try to use flinck. I've included those below. My intuition is that this is because the omdb api is not included in the url "[](http://www.omdbapi.com/?t=Kurenai+no+buta+AKA+Porco+Rosso&y=1992&page=1&plot=short&tomatoes=False)", because when I visit it, I get a "No api key provided". `Traceback (most recent call last): File "C:\Users\Suleman\AppData\Local\Programs\Python\Python38-32\Scripts\flinck-script.py", line 11, in <module> load_entry_point('flinck==0.3.2', 'console_scripts', 'flinck')() File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\flinck\__init__.py", line 87, in flinck_cli item = brain.search_filename(fpath, by) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\flinck\brain.py", line 141, in search_filename item = search_by(title, year, fields) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\flinck\brain.py", line 113, in search_by item = omdb.get(**query) File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\omdb\api.py", line 23, in get return _client.get(**params) File 
"c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\omdb\client.py", line 106, in get data = self.request(timeout=timeout, **params).json() File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\omdb\client.py", line 55, in request res.raise_for_status() File "c:\users\suleman\appdata\local\programs\python\python38-32\lib\site-packages\requests\models.py", line 940, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://www.omdbapi.com/?t=Kurenai+no+buta+AKA+Porco+Rosso&y=1992&page=1&plot=short&tomatoes=False`
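The quoted "No api key provided" response is the clue here: omdbapi.com requires an `apikey` query parameter, and the failing URL in the traceback has none. A sketch of building the request URL with the key included ("YOUR_KEY" is a placeholder; OMDb issues keys on its site):

```javascript
// Build an OMDb query URL including the required apikey parameter.
// "YOUR_KEY" is a placeholder, not a real key.
const params = new URLSearchParams({
  apikey: "YOUR_KEY",
  t: "Kurenai no buta AKA Porco Rosso",
  y: "1992",
  plot: "short",
});
const url = "http://www.omdbapi.com/?" + params.toString();
console.log(url.includes("apikey=")); // true
```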
mubaidr/bulma-pro
371002916
Title: Logo & Banners
Question: username_0: @username_1 please
Answers: username_1: Hello @username_0, what can I help with on this project?
username_0: This is a CSS library, same as `bulma-fluent`; demo here: https://username_0.js.org/bulma-pro/#/ If possible, can you create just one icon for this?
username_1: I'll try to make some proposals @username_0. Thanks for trusting me with this.
username_1: Good day @username_0, do you have any preferred color for the logo?
username_0: Looks very good! Awesome work! No preference for color; just try the color you think will suit it best.
username_1: ![image](https://user-images.githubusercontent.com/35353768/47476798-af9db180-d854-11e8-824f-53cf90c7d16f.png)
Here are some color choices:
username_0: Can you use this color? (This is one of the colors used in the theme.)
```
#009dcc
```
username_1: ![image](https://user-images.githubusercontent.com/35353768/47479365-7caceb00-d85f-11e8-91a0-369e5305fd72.png)
username_0: Awesome! Good to go! Please create a pull request.
username_1: Okay sir, I will. Thank you for your cooperation.
username_0: Thanks @username_1 https://github.com/username_0/bulma-pro/pull/2/files
Status: Issue closed
nomurakatsuya90/manyo
849603333
Title: step4 users + administrators
Question: username_0: Create the step4 assignment.

- Create a user model
- Create the first user via seed data
- Associate users and tasks
- Add a uniqueness constraint to the user's email
- Add an index to the association key (the id used for the association)
- Ensure that, when deployed to Heroku, already-registered tasks are associated with a user
- Implement everything without gems, except for bcrypt-ruby
- Implement a login feature
  - When someone tries to reach the task list page without logging in, redirect them to the login page
  - Display only the tasks the logged-in user created
- Implement a logout feature
- Create the user sign-up screen, the login screen, and the detail/my-page (show) screen
- When a user signs up (create), log them in at the same time
- While logged in, use the controller to block access to the user sign-up screen (new)
- When someone accesses the my page (user show screen) of a user other than themselves (current_user), redirect them to the task list
- Add an admin interface
  - Admin URLs must always be prefixed with /admin
  - The admin interface allows listing, creating, updating, and deleting users (the admin views don't need to be split into partials)
  - When a user is deleted, delete all tasks associated with that user
  - On the user list screen, display the number of tasks associated with each user
  - Include a mechanism to avoid the N+1 problem
  - The list of tasks a user created can be viewed on that user's my page (user show screen)
- Distinguish admin users from general users
  - Only admin users can access the user admin interface
  - When a general user accesses the admin interface, redirect them to tasks/index and show a flash message saying that only administrators can access it
  - Roles can be granted and removed from the user admin screen
  - Use a model callback to restrict updates and deletions so that there is never a state with no admin users left
- Write System Specs that cover the test items
  - User registration tests
    - A new user can be registered
    - When a user tries to reach the task list screen without logging in, they are redirected to the login screen
  - Session feature tests
    - Logging in works
    - The user can navigate to their own detail screen (my page)
    - When a general user navigates to another user's detail screen, they are redirected to the task list screen
    - Logging out works
  - Admin interface tests
    - An admin user can access the admin interface
    - A general user cannot access the admin interface
    - An admin user can register new users
    - An admin user can access a user's detail screen
    - An admin user can edit a user from the user edit screen
    - An admin user can delete users

Status: Issue closed
deltachat/deltachat-ios
837109732
Title: Issue building DeltaChat-ios on macOS 11.2.3: ld: library not found for -lSystem
Question: username_0:
- Operating System: iOS
- Build Environment: macOS Big Sur 11.2.3 (20D91)
- Xcode Version: 12.4 (12D4e)

Getting build error: ld: library not found for -lSystem

Looks like I need to add /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib to the search path but don't know where to do that. Thanks for your help - <NAME>

Log attached
[Build target DcCore_2021-03-21T10-43-33.txt](https://github.com/deltachat/deltachat-ios/files/6177764/Build.target.DcCore_2021-03-21T10-43-33.txt)
Answers: username_1: this has been fixed with #1125
Status: Issue closed
docker-library/docs
285536476
Title: [RabbitMQ] Instructions for modifying log level incorrect
Question: username_0: This does not work:

```
2018-01-02 20:58:50 application_controller: ~ts: ~ts~n
"unterminated string starting with \"-rabbit\"" "\"-rabbit"
{"could not start kernel pid",application_controller,"{bad_environment_value,\"\\"-rabbit\"}"}
could not start kernel pid (application_controller) ({bad_environment_value,"\"-rabbit"})
Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
```

Answers: username_1: I can't reproduce a failure, but there is a warning:

```console
$ docker pull rabbitmq
Using default tag: latest
latest: Pulling from library/rabbitmq
Digest: sha256:c3b0fbde40f2cf6847221bc156fa6c748eba6e6fbac200bcd52cf663f42eee16
Status: Image is up to date for rabbitmq:latest
$ docker run -it --rm -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS='-rabbit log_levels [{connection,error}]' rabbitmq
Using deprecated config parameter 'log_levels'. Please update your configuration file according to https://rabbitmq.com/logging.html
2018-01-02 21:05:03.584 [info] <0.33.0> Application lager started on node rabbit@55e523d925c3
2018-01-02 21:05:03.652 [info] <0.33.0> Application recon started on node rabbit@55e523d925c3
2018-01-02 21:05:03.683 [info] <0.33.0> Application inets started on node rabbit@55e523d925c3
2018-01-02 21:05:03.683 [info] <0.33.0> Application crypto started on node rabbit@55e523d925c3
2018-01-02 21:05:03.683 [info] <0.33.0> Application xmerl started on node rabbit@55e523d925c3
2018-01-02 21:05:03.738 [info] <0.33.0> Application mnesia started on node rabbit@5<PASSWORD>5c3
2018-01-02 21:05:03.741 [info] <0.33.0> Application os_mon started on node rabbit@5<PASSWORD>
2018-01-02 21:05:03.741 [info] <0.33.0> Application jsx started on node rabbit@5<PASSWORD>3
2018-01-02 21:05:03.741 [info] <0.33.0> Application asn1 started on node rabbit@55e523d925c3
2018-01-02 21:05:03.741 [info] <0.33.0> Application public_key started on node rabbit@55e523d925c3
2018-01-02 21:05:03.769 [info] <0.33.0> Application ssl started on node rabbit@55e523d925c3
2018-01-02 21:05:03.771 [info] <0.33.0> Application ranch started on node rabbit@55e523d925c3
2018-01-02 21:05:03.771 [info] <0.33.0> Application ranch_proxy_protocol started on node rabbit@55e523d925c3
2018-01-02 21:05:03.771 [info] <0.33.0> Application rabbit_common started on node rabbit@55e523d925c3
2018-01-02 21:05:03.775 [info] <0.183.0> Starting RabbitMQ 3.7.2 on Erlang 20.1.7
Copyright (C) 2007-2017 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/

  ##  ##
  ##  ##      RabbitMQ 3.7.2. Copyright (C) 2007-2017 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
  ######  ##
  ##########  Logs: <stdout>

              Starting broker...
2018-01-02 21:05:03.783 [info] <0.183.0>
 node           : rabbit@55e523d925c3
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : XxMKc9IdYs8/EEX1Vfuthg==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@55e523d925c3
2018-01-02 21:05:04.589 [info] <0.191.0> Memory high watermark set to 25722 MiB (26972318924 bytes) of 64307 MiB (67430797312 bytes) total
2018-01-02 21:05:04.590 [info] <0.193.0> Enabling free disk space monitoring
2018-01-02 21:05:04.590 [info] <0.193.0> Disk free limit set to 50MB
2018-01-02 21:05:04.592 [info] <0.195.0> Limiting to approx 1048476 file handles (943626 sockets)
2018-01-02 21:05:04.592 [info] <0.196.0> FHC read buffering: OFF
2018-01-02 21:05:04.592 [info] <0.196.0> FHC write buffering: ON
2018-01-02 21:05:04.592 [info] <0.183.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@55e523d925c3 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2018-01-02 21:05:04.592 [info] <0.183.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
2018-01-02 21:05:04.592 [info] <0.183.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
2018-01-02 21:05:04.592 [info] <0.183.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
2018-01-02 21:05:04.592 [info] <0.183.0> All discovered existing cluster peers:
2018-01-02 21:05:04.592 [info] <0.183.0> Discovered no peer nodes to cluster with
2018-01-02 21:05:04.593 [info] <0.33.0> Application mnesia exited with reason: stopped
2018-01-02 21:05:04.601 [info] <0.33.0> Application mnesia started on node rabbit@55e523d925c3
2018-01-02 21:05:04.641 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-01-02 21:05:04.653 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-01-02 21:05:04.665 [info] <0.183.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-01-02 21:05:04.665 [info] <0.183.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
[Truncated]
2018-01-02 21:05:04.679 [info] <0.183.0> message_store upgrades: Removing the old message store data
2018-01-02 21:05:04.679 [info] <0.183.0> message_store upgrades: All upgrades applied successfully
2018-01-02 21:05:04.691 [info] <0.183.0> Adding vhost '/'
2018-01-02 21:05:04.703 [info] <0.408.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@55e523d925c3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2018-01-02 21:05:04.705 [info] <0.408.0> Starting message stores for vhost '/'
2018-01-02 21:05:04.705 [info] <0.412.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2018-01-02 21:05:04.706 [info] <0.408.0> Started message store of type transient for vhost '/'
2018-01-02 21:05:04.706 [info] <0.415.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2018-01-02 21:05:04.706 [warning] <0.415.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2018-01-02 21:05:04.707 [info] <0.408.0> Started message store of type persistent for vhost '/'
2018-01-02 21:05:04.708 [info] <0.183.0> Creating user 'guest'
2018-01-02 21:05:04.709 [info] <0.183.0> Setting user tags for user 'guest' to [administrator]
2018-01-02 21:05:04.710 [info] <0.183.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
2018-01-02 21:05:04.715 [info] <0.455.0> started TCP Listener on [::]:5672
2018-01-02 21:05:04.717 [info] <0.183.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@55e523d925c3
2018-01-02 21:05:04.719 [info] <0.183.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@55e523d925c3
2018-01-02 21:05:04.719 [info] <0.33.0> Application rabbit started on node rabbit@55e523d925c3
 completed with 0 plugins.
2018-01-02 21:05:04.802 [info] <0.5.0> Server startup complete; 0 plugins started.
```

username_2: Hi, I've got the same error using an environment variable in docker-compose:

```
  queue:
    image: rabbitmq:3.7.3
    restart: always
    environment:
      - RABBITMQ_DEFAULT_USER=test
      - RABBITMQ_DEFAULT_PASS=<PASSWORD>
      - VIRTUAL_HOST=lovely.queue.local
      - VIRTUAL_PORT=15672
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-rabbit log_levels [{connection,error}]."
```

username_1: Looks like a YAML syntax issue; try putting the quotes around the entire list value instead: `- 'RABBITMQ_...=-rabbit ...'`
Status: Issue closed
username_3: Since this doesn't seem to be an issue in the image or docs I'm going to close. If you believe this to be in error then let me know and I'll re-open it
username_4: I ran into the same issue. Removing the quotes seemed to fix it when using a docker-compose YAML file: `RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbit log_levels [{connection,error}]`
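For quick reference, the unquoted form that worked for username_4 can be written as a compose fragment. Only the unquoted `RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS` value and the image tag come from this thread; the service name and surrounding layout are illustrative:

```yaml
services:
  queue:
    image: rabbitmq:3.7.3
    environment:
      # No quotes around the value, and no trailing dot.
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbit log_levels [{connection,error}]
```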
peterdemin/SublimePPrint
147295463
Title: Consider Using ast.literal_eval for Safer Evaluation
Question: username_0: Not too excited to have arbitrary Python code evaluated when you pretty print. Consider using [ast.literal_eval](https://docs.python.org/2/library/ast.html#ast.literal_eval) instead of plain `eval`.
Answers: username_1: It is vital for me to be able to parse collections.OrderedDict. AFAIK it is impossible using ast.literal_eval
Status: Issue closed
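To illustrate the trade-off discussed in this thread, here is a small self-contained sketch: `ast.literal_eval` handles plain literals, but the repr of a `collections.OrderedDict` is a constructor call rather than a literal, so the safe evaluator rejects it, which is the limitation username_1 describes.

```python
import ast
import collections

# Plain literals parse fine with the safe evaluator.
assert ast.literal_eval("{'a': [1, 2]}") == {'a': [1, 2]}

# But the repr of an OrderedDict is a constructor *call*, not a literal,
# so literal_eval refuses it with ValueError.
s = repr(collections.OrderedDict(a=1))
try:
    ast.literal_eval(s)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```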
concourse/concourse
333777540
Title: CI job to check whether package specs have been synced
Question: username_0: # Feature Request

## What challenge are you facing?

It is easy to forget to run `scripts/sync-package-specs` before committing. In cases when this causes a problem (changing go dependencies, e.g.), it is frustrating to wait for the `bosh-rc` job to fail, and even when it does, it may not be obvious that the problem was forgetting to run the aforementioned script.

## A Modest Proposal

Since there's already a script called `preflight-checklist`, maybe it could be adapted so that it runs equally well locally or on CI, and maybe before even building any of the submodules, we just run it to sanity check the top-level `concourse` repo?
Status: Issue closed
Answers: username_0: Pretty sure this is irrelevant now, either because of one big repo or go modules.
FPtje/DarkRP
193126837
Title: MySQLite drops an error for seemingly no reason
Question: username_0:
### Description of the bug

On a clean vanilla setup using tmysql4 for database shit, if a player joins and there is nothing in the darkrp_player table, mysqlite drops an error regarding a missing column/key. The query it tries to run is `DROP INDEX rpname ON darkrp_player`, which it reports back with `Can't DROP 'rpname'; check that column/key exists`

### How to make the bug happen

1. Set up a blank database for a DarkRP server. I named mine `darkrp` with a user by the same name, permitted on any host identified by a password, and given full access to its database and nothing else.
2. Set up a vanilla DarkRP server. For reference, the only changes I made were a handful of hand-written jobs, a weapon pack, and modifying the configuration in the darkrpmodification addon. Configure the server to save its data via tmysql4 to the database set up in step 1.
3. Start up the server and attempt to connect. Some time between the `PlayerConnected` hook executing and the `PlayerInitialSpawn` hook executing, the error will appear.

### Lua errors

```
[ERROR] lua/includes/modules/hook.lua:84: gamemodes/darkrp/gamemode/libraries/mysqlite/mysqlite.lua:280: Can't DROP 'rpname'; check that column/key exists (DROP INDEX rpname ON darkrp_player)
  1. v - [C]:-1
  2. unknown - lua/includes/modules/hook.lua:84
```

There are no earlier errors to report.

### Why the developer of DarkRP is responsible for this issue

This is a vanilla install, with all files fetched directly from GitHub using the `git clone` command. No changes to core files have been made, and the only present addons are a weapon pack, FProfiler (because I'm still setting shit up and I want to know what won't make the cut as I go), and the darkrpmodification addon. The only changes I made to the config are standard ones including replacement police, mayor, and hobo jobs. No additional addons have been installed yet. However, just to be sure, I did a `git diff` on the gamemode, and it returned nothing, implying there is no difference (iirc, I'm not the best at the git CLI). As such, the problem clearly doesn't lie in my hands, nor in the hands of a third-party addon developer. Thus, by process of elimination, the only responsible party remaining is the DarkRP development team.

And yes, the only reason why I'm being this detailed is because of the usual children that think pasting an error in the issue will make everything magically work for them without having to put any effort into it. And I refuse to set foot in the DarkRP forums because of those children.
Answers: username_1: Have you tried removing the table so DarkRP has to make it again? It might've just been a random glitch when DarkRP made the table.
username_0: No, because it functions normally otherwise. It's just an error that could be easily avoided by checking if there is an entry in the first place before attempting to remove any entry.
username_2: The check is there. It's running a migration from a previous version of the table to a newer version. The strange thing is that you say you've done a clean install. There should be no migration.
The line running the query is here: https://github.com/username_2/DarkRP/blob/master/gamemode/modules/base/sv_data.lua#L261
The version is loaded here: https://github.com/username_2/DarkRP/blob/master/gamemode/modules/base/sv_data.lua#L112
So the version is initially 0. That means a migration will be performed. The thing is, though, it immediately writes the database version to the database, even before doing the migration. That means this should be a "first time booting" thing. Does the problem still occur when you restart the server?
username_0: No, the issue never repeated itself. Would this migration occur in the event that, in a previous boot attempt, the DarkRP server attempted to connect to a MySQL server but was denied access? We had an issue with MySQL refusing access, which we resolved by removing some stupid default user accounts that allow anyone anywhere to access my data without a username or password.
username_2: The issue never repeating itself would be precisely what I'd expect. The fix should be easy.
Status: Issue closed
home-assistant/supervisor
820628840
Title: Snapshots appear to fail but new one eventually appears
Question: username_0: ### Describe the issue you are experiencing

When I take a full snapshot, it tells me it has failed and to see Supervisor. If I wait long enough, a new snapshot will still appear, though I do not know if it is restorable.

### What is the used version of the Supervisor?
supervisor-2021.02.11

### What type of installation are you running?
Home Assistant OS

### Which operating system are you running on?
Home Assistant Operating System

### What is the version of your installed operating system?
5.12

### What version of Home Assistant Core is installed?
core-2021.2.3

### Steps to reproduce the issue
1. take full snapshot
2. wait a while
3. view logs while waiting
4. look at list of snapshots

### Anything in the Supervisor logs that might be useful for us?

```txt
21-03-03 02:47:05 WARNING (MainThread) [supervisor.jobs] 'SnapshotManager.do_snapshot_full' blocked from execution, system is not running - CoreState.FREEZE
21-03-03 02:47:35 INFO (MainThread) [supervisor.snapshots] Found 4 snapshot files
21-03-03 02:47:49 INFO (MainThread) [supervisor.snapshots] Found 4 snapshot files
21-03-03 02:47:49 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon a0d7b954_influxdb
21-03-03 02:47:49 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on core_configurator
21-03-03 02:47:49 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon core_configurator
21-03-03 02:47:49 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on a0d7b954_esphome
21-03-03 02:51:32 INFO (MainThread) [supervisor.snapshots] Found 4 snapshot files
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon a0d7b954_esphome
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on core_dnsmasq
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon core_dnsmasq
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on core_mosquitto
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon core_mosquitto
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on a0d7b954_logviewer
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon a0d7b954_logviewer
21-03-03 02:52:01 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on a0d7b954_vscode
21-03-03 02:52:12 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon a0d7b954_vscode
21-03-03 02:52:12 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on a0d7b954_nodered
21-03-03 02:52:12 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon a0d7b954_nodered
21-03-03 02:52:12 INFO (MainThread) [supervisor.addons.addon] Building snapshot for add-on core_deconz
21-03-03 02:52:12 INFO (MainThread) [supervisor.addons.addon] Finish snapshot for addon core_deconz
21-03-03 02:52:12 INFO (MainThread) [supervisor.snapshots] Snapshotting 984bcca7 store folders
21-03-03 02:52:12 INFO (SyncWorker_7) [supervisor.snapshots.snapshot] Snapshot folder media
21-03-03 02:54:47 INFO (SyncWorker_7) [supervisor.snapshots.snapshot] Snapshot folder media done
21-03-03 02:54:47 INFO (SyncWorker_6) [supervisor.snapshots.snapshot] Snapshot folder share
21-03-03 02:54:47 INFO (SyncWorker_6) [supervisor.snapshots.snapshot] Snapshot folder share done
21-03-03 02:54:47 INFO (SyncWorker_1) [supervisor.snapshots.snapshot] Snapshot folder addons/local
21-03-03 02:54:47 INFO (SyncWorker_1) [supervisor.snapshots.snapshot] Snapshot folder addons/local done
21-03-03 02:54:47 INFO (SyncWorker_2) [supervisor.snapshots.snapshot] Snapshot folder ssl
21-03-03 02:54:47 INFO (SyncWorker_2) [supervisor.snapshots.snapshot] Snapshot folder ssl done
21-03-03 02:54:47 INFO (SyncWorker_0) [supervisor.snapshots.snapshot] Snapshot folder homeassistant
21-03-03 02:55:15 INFO (MainThread) [supervisor.snapshots] Found 4 snapshot files
21-03-03 02:56:07 INFO (SyncWorker_0) [supervisor.snapshots.snapshot] Snapshot folder homeassistant done
21-03-03 02:56:15 INFO (MainThread) [supervisor.snapshots] Creating full-snapshot with slug 984bcca7 completed
```

Answers: username_1: Fixed on next release
Status: Issue closed
username_0: Thanks @username_1
username_1: Sorry, solved with new Core 2021.3.0 and new Supervisor 2021.03.1 (Currently on the beta channel)
TobieSurette/gulf.graphics
733716963
Title: Add 'layout' option to gdevice
Question: username_0: Add a 'layout' argument to gdevice to pass on to the 'layout' function. In parallel, add a 'margin' argument with named arguments which sets the margins for layout in inches. Names are 'left', 'right', 'up' and 'down'.
brianegan/flutter_redux
341867968
Title: Password in Redux State.
Question: username_0: I have `AuthState` with a `firebaseAuth` field. I have a widget called `LoginPage` that has state and a viewmodel. But when I click the Login button, the login and password are flushed (the TextFormFields become empty). Should I store the password and login in the Redux state to restore them in the viewmodel? Is it safe?

```dart
@immutable
class LoginViewModel {
  final Function(String login, String password) loginCallback;
  final Function loginAnonymouslyCallback;
  final Function openRegisterCallback;

  LoginViewModel({
    this.openRegisterCallback,
    this.loginCallback,
    this.loginAnonymouslyCallback});
}

class Login extends StatefulWidget {
  Login();

  @override
  _LoginState createState() => new _LoginState();
}

class _LoginState extends State<Login> {
  void _signInOrSignUpWithEmail(LoginViewModel viewModel) {
    viewModel.loginCallback(_emailController.text, _passController.text);
  }

  void _signInAnonymously(LoginViewModel viewModel) {
    viewModel.loginAnonymouslyCallback();
  }

  final TextEditingController _emailController = new TextEditingController();
  final TextEditingController _passController = new TextEditingController();

  @override
  Widget build(BuildContext context) {
    final theme = Theme.of(context);
    final GlobalKey<FormState> formkey = new GlobalKey<FormState>();
    return StoreConnector<AppState, LoginViewModel>(
      converter: (store) => LoginViewModel(
        loginCallback: (login, password) =>
            store.dispatch(new LogInAction(login, password)),
        loginAnonymouslyCallback: () =>
            store.dispatch(new LogInAnonymouslyAction()),
        openRegisterCallback: () => store.dispatch(new OpenRegisterAction()),
      ),
      builder: (context, viewModel) {
        return Column(
          children: [
            Form(
              key: formkey,
              child: Column(
                children: [
                  TextFormField(
                    controller: _emailController,
                  ),
                  TextFormField(
                    controller: _passController,
                  ),
                ]
              )
            ),
[Truncated]
            FlatButton(
              child: Text("Next"),
              onPressed: () {
                if (formkey.currentState.validate())
                  _signInOrSignUpWithEmail(viewModel);
              }
            ),
            FlatButton(
              child: Text(
                "Continue without login",
              ),
              onPressed: () => _signInAnonymously(viewModel),
            )
          ]
        );
      }
    );
  }
}
```

Answers: username_0: Also, how to store the data in the state? I mean, if I want to update the password after every change, I should write something like this:

```dart
final fun = () => store.dispatch(new LoginOrPasswordChangedAction(
    _emailController.text, _passController.text));
_emailController.addListener(fun);
_passController.addListener(fun);
```

But in this case, after each change, I lose the TextField focus.
username_0: And one last question: how to invoke one action before another? Should I do it? For example:

```dart
Stream<dynamic> logoutEpic(Stream<LogOutAction> actions, EpicStore<AppState> store) async* {
  await for (dynamic action in actions) {
    if (action is LogOutAction) {
      yield new PrepareLogOutAction(); // wait while all middlewares handle it.
      yield new FirebaseLogOutAction();
    }
  }
}
```

I need to remove the device id from Firestore before the user logs out, to stop FCM.
username_1: I'd probably just dispatch one action in this case... not quite sure what splitting this one action into 2 actions gives you?
username_0: I want to split the middleware logic. What I mean:
`auth_middlewares.dart` — contains only middlewares for auth actions (`LogInAction`, `LogOutAction`, etc.)
`user_middlewares.dart` — contains only middlewares for manipulating the user in the db (`Get/Create/Update/DeleteUserAction` and `Add/DeleteTokenAction`)
So, if I add the delete-token functions to the middleware that catches `LogOutAction` in auth_middleware, it goes against that split.
username_1: Ah, I gotcha. Yah, in that case, you could definitely dispatch two actions to fulfill that requirement, or consider some Actions as more "Common Actions" that could affect multiple parts of your State tree, such as Logout actions. I don't mind having some "Common Actions" that are handled by different reducers / middleware, since it's generally a bit easier to work with.
That said, if you like the idea of dispatching two actions, no problem at all, you'll just need to do a bit more coordination. In this case, the `PrepareLogOutAction` would need to contain a [`Completer`](https://docs.flutter.io/flutter/dart-async/Completer-class.html). Then, in your `user_middleware.dart`, after the middleware / epic finishes its work, it could call `action.completer.complete()`. Something like this:

```dart
// Assumes you're using a TypedEpic for PrepareLogoutActions
Stream<dynamic> logoutEpic(Stream<PrepareLogoutAction> actions, EpicStore<AppState> store) async* {
  await for (PrepareLogoutAction action in actions) {
    await database.deleteUser(action.user);
    action.completer.complete();
  }
}
```

Then, inside your Epic function that coordinates all of this, you could do something like this:

```dart
// This assumes you're using a TypedEpic, which will narrow the actions down to only LogOutAction
Stream<dynamic> logoutEpic(Stream<LogOutAction> actions, EpicStore<AppState> store) async* {
  await for (LogoutAction action in actions) {
    final initialAction = PrepareLogoutAction();
    yield initialAction;

    // Await for the initialAction to be completed by the middleware
    await initialAction.completer.future;

    // Then dispatch the FirebaseLogoutAction
    yield new FirebaseLogOutAction();
  }
}
```

That said, I think this would all be a bit less entangled if you just assume some actions are "Common Actions," and each middleware can handle the action however it wants, without depending on the ordering of the actions. If you need to wait until the first cleanup action finishes before doing the second, the completer solution might be an option, or putting all of this logic into 1 epic might be another option.
Then you could just do:

```dart
Stream<dynamic> cleanupEpic(Stream<LogoutAction> actions, EpicStore<AppState> store) async* {
  await for (LogoutAction action in actions) {
    await database.deleteUser(action.user);
    await firebase.deleteUser(action.user);
  }
}
```

That might go against your design philosophy, but I think there are pros and cons to each approach. Hope that helps!
username_0: Thank you again for helping! I think that "Common Actions" is a good approach, despite some cons. Happy coding!
Status: Issue closed
AppMetrics/AppMetrics
609995311
Title: ASP.NET Core web sockets get reported as very long transactions using the built in middleware
Question: username_0: When using web sockets, the ASP.NET Core pipeline is invoked on the initial upgrade connection request and the invocation lasts until the socket disconnects. To the middleware this appears as a very long transaction. It also appears as an active request for the lifetime of the web socket.

I have a solution for this that filters web socket requests from the request, per-request, and active-request middleware. It also adds a separate "Active Web Sockets" metric in the active request middleware. Does that sound like a good change? If so I'll raise a PR.

One downside is that it depends on the AppMetrics middleware sitting after the WebSocketsMiddleware (UseWebSockets) in the pipeline. It's a very unobvious requirement that many people are going to miss.
Answers: username_1: Yes this sounds like a good update :+1:
Status: Issue closed
WebStandardsFuture/browser-engine-diversity
734908006
Title: Consider browser engine diversity impact on web standards in general, beyond W3C Question: username_0: As a place for bringing together interested and concerned parties about browser engine diversity and standards, this repo would be useful for considering web standards in general beyond W3C, and the impact upon them by the participation (or lack thereof) of one or more browser engine implementations. How different orgs (IETF, WHATWG, TC39, etc.) approach these challenges and questions may help provide common approaches worth considering. While the origin of this repo is from a W3C TPAC session, it was clear from the broad and diverse participation in that session that this is an area that goes beyond W3C, and thus we should consider expanding the README accordingly, noting browser engine diversity issues and opportunities across multiple standards organizations, and leave the W3C-specific parts as part of the origin (but not any restriction in scope) of this repo. If this general approach is non-controversial, I can make pull requests to update the README accordingly for specifics. (Originally published at: https://username_0.com/2020/307/b2/)
pedroslopez/whatsapp-web.js
1081706284
Title: Error while scanning barcode
Question: username_0:

```
E:\whatsapp-api-80000\node_modules\puppeteer\lib\cjs\puppeteer\common\ExecutionContext.js:221
        throw new Error('Evaluation failed: ' + helper_js_1.helper.getExceptionMessage(exceptionDetails));
              ^
Error: Evaluation failed: TypeError: Cannot read properties of undefined (reading 'default')
    at __puppeteer_evaluation_script__:19:54
    at ExecutionContext._evaluateInternal (E:\whatsapp-api-80000\node_modules\puppeteer\lib\cjs\puppeteer\common\ExecutionContext.js:221:19)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async ExecutionContext.evaluate (E:\whatsapp-api-80000\node_modules\puppeteer\lib\cjs\puppeteer\common\ExecutionContext.js:110:16)
    at async Client.initialize (E:\whatsapp-api-80000\node_modules\whatsapp-web.js\src\Client.js:172:9)
```

Answers: username_1: Unable to get qr code
username_1: it is working
username_2: How did you solve this? Please enlighten me as well, I'm having the same issue since yesterday
username_3: Are you guys using the new version https://github.com/pedroslopez/whatsapp-web.js/releases/tag/v1.15.3 ?
username_0: yes, the problem is solved: `npm i whatsapp-web.js@latest`
Status: Issue closed
sul-dlss/dlme-transform
459229065
Title: Use traject 3 pipeline in tei macro
Question: username_0: https://github.com/sul-dlss/dlme-transform/blob/master/lib/macros/tei.rb and https://github.com/sul-dlss/dlme-traject/blob/master/tei_config.rb result in deprecation warnings. The macro needs to be refactored and the config should be adjusted accordingly:
- [ ] Fix deprecation warnings
- [ ] Remove unused methods from tei.rb

Answers: username_1: Errors when xforming penn data:

```
[ERROR] Error loading configuration file config/tei_config.rb:46 ArgumentError:wrong number of arguments (given 2, expected 1)
/usr/local/bundle/gems/traject-3.2.0/lib/traject/indexer.rb:231:in `rescue in block in load_config_file': Error loading configuration file config/tei_config.rb:46 ArgumentError:wrong number of arguments (given 2, expected 1) (Traject::Indexer::ConfigLoadError)
```

username_1: Running stanford:

```
2019-09-26T22:24:01+00:00 INFO Traject::Indexer with 1 processing threads, reader: TrajectPlus::XmlReader and writer: DlmeJsonResourceWriter
DEPRECATION WARNING: passing options to extract_xml is deprecated and will be removed in the next major release. Use the Traject 3 pipeline instead. (called from block (2 levels) in first at /usr/local/bundle/gems/traject_plus-1.3.0/lib/traject_plus/macros.rb:28)
DEPRECATION WARNING: passing options to extract_xml is deprecated and will be removed in the next major release. Use the Traject 3 pipeline instead. (called from block in execute at /usr/local/bundle/gems/traject-3.2.0/lib/traject/indexer/step.rb:140)
```

username_2: @username_0 It doesn't look like https://github.com/sul-dlss/dlme-traject/blob/master/tei_config.rb exists anymore. Does this ticket need to be revised?
username_0: @username_2 renamed to openn_config.rb so I didn't attempt to use it on other tei data.
username_2: I'm not sure we can do anything on this other than update to traject_plus 2 since all these originate within traject_plus 1.3
username_3: Closing.
I think this is limited by traject_plus 2, and we can revisit then. Status: Issue closed
COVID19Tracking/issues
667851155
Title: [SD] PCL Historicals
Question: username_0:
**State:** SD
**Issue description:** We can't confirm that antibody testing isn't lumped in with RT-PCR testing results. SD has no annotation that explicitly states that cases are **_only confirmed_**, but we started reporting values in **positive cases (PCR)** on **7/17**. These values should be removed from positive cases (PCR).
**Source:** https://doh.sd.gov/news/Coronavirus.aspx
Answers: username_0: BEFORE (positive cases (PCR) column)
![Screen Shot 2020-07-29 at 9 40 45 AM](https://user-images.githubusercontent.com/57415941/88807403-ab1bd080-d17f-11ea-84ee-91034e3508ea.png)
username_0: AFTER (positive cases (PCR) column)
![Screen Shot 2020-07-29 at 9 41 02 AM](https://user-images.githubusercontent.com/57415941/88807432-b5d66580-d17f-11ea-81be-c51ed39acaf5.png)
username_0: POPUP BEFORE (positive cases (PCR) column)
![Screen Shot 2020-07-28 at 10 58 17 AM](https://user-images.githubusercontent.com/57415941/88807544-da324200-d17f-11ea-9367-c63348d1f350.png)
username_0: POPUP AFTER (positive cases (PCR) column)
![Screen Shot 2020-07-28 at 10 51 42 AM](https://user-images.githubusercontent.com/57415941/88807607-f0400280-d17f-11ea-8993-3b76c78e6272.png)
Status: Issue closed
Answers: username_1: (DZL) Doublechecked -- 7/31 7:34
micebot/pubsub
650926944
Title: create a module to consume the API Question: username_0: Obtain the _access token_ in order to consume the server's resources:
- https://micebot-server-dev.herokuapp.com/docs
Credentials:
- **username**: ps_user (_in the development environment; in production, read the value from the PS_USER environment variable_).
- **password**: <PASSWORD> (_in the development environment; in production, read the value from the PS_PASS environment variable_).
Status: Issue closed
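The issue above asks for a module that fetches an access token before calling the rest of the API. A minimal Python sketch of such a module might look like the following; note that the `/token` path and the `access_token` field in the JSON response are assumptions based on a standard OAuth2 password flow (the docs URL above), not confirmed by the issue itself:

```python
import json
import os
import urllib.parse
import urllib.request

API_BASE = "https://micebot-server-dev.herokuapp.com"


def build_token_form(username: str, password: str) -> bytes:
    """Encode the credentials as an application/x-www-form-urlencoded
    body, the format an OAuth2 password flow expects."""
    return urllib.parse.urlencode(
        {"username": username, "password": password}
    ).encode()


def fetch_access_token() -> str:
    """POST the credentials to the (assumed) /token endpoint and return
    the bearer token. In production the credentials come from the
    PS_USER and PS_PASS environment variables, as the issue describes."""
    body = build_token_form(os.environ["PS_USER"], os.environ["PS_PASS"])
    request = urllib.request.Request(API_BASE + "/token", data=body)
    with urllib.request.urlopen(request) as response:
        return json.load(response)["access_token"]
```

The returned token would then be sent as an `Authorization: Bearer <token>` header on subsequent requests.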
D2CAMPUS-PARTNER/9th-PARTNER-s-DAY
478885186
Title: NEXTERS Partners day participation application Question: username_0: <!--- Please write the title in the form "[Club name] - Study field"~ Please write the club introduction and activity details around development-related topics. (Please leave out social activities within the club, MT trips, homecoming day, and the like~) If there is anything else you would like to add, such as photos or files, feel free to upload it! !--->
#### Attendee information
<!-- 1-2 people per club may attend. Please leave your name/year/contact email address. !-->
- Wang So-jeong/graduated/<EMAIL>
- Kim Jun-hee/graduate student/<EMAIL>
#### Club introduction
<!-- Study field/activity details, homepage, etc. !-->
- NEXTERS is a union club for developers and designers, the protagonists of the IT ecosystem: talented developers and designers gather in a relaxed atmosphere to mingle and build new IT services. University students and working professionals from in and around the Seoul metropolitan area take part. It gives students experience building expertise through projects that resemble real-world work, and gives practitioners opportunities to grow their creativity through self-directed activities.
- Homepage: http://teamnexters.com/ (under construction)
- Facebook: https://www.facebook.com/Nexterspage/
#### First-half 2019 club activities and awards
<!-- If you have deliverables you can share publicly, please include them. !-->
- List of NEXTERS apps registered on the Play Store: https://play.google.com/store/apps/developer?id=Nexters
- URL shortening service: https://www.nexters.me/
#### Second-half 2019 activity plan
- RE:FAC, an event for maintaining past projects
- 16th-cohort new member recruitment
- 16th-cohort team building
- Networking events (MT trips, etc.)
- Guest speaker sessions offering advice on improving members' skills
- 16th-cohort final presentations (2020)
#### Other
<!-- Feel free to leave questions for other clubs, or anything you would like from D2 CAMPUS. !-->
- Last June we used the Gangnam D2 Startup Factory for the NEXTERS orientation. It was an important event, the 15th cohort's first gathering including team building, and being able to hold it in such a comfortable space left both the organizers and the members very satisfied. We would like to take this opportunity to thank you again.
open-telemetry/opentelemetry-js
491456587
Title: Exporter/Jaeger: use correct startTime and duration from ReadableSpan Question: username_0: Code pointer: https://github.com/open-telemetry/opentelemetry-js/blob/c020efe4c9ab29299dbb4f218687adbd8b82951a/packages/opentelemetry-exporter-jaeger/src/transform.ts#L83
The initial part of this is already being handled by #206; we need to rebase the Jaeger exporter on top of that.
Answers: username_0: Closed via #281
Status: Issue closed
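For context on what "correct startTime and duration" involves: the JS SDK records span times as HrTime tuples of `[seconds, nanoseconds]`, while Jaeger's Thrift span model expects microseconds. The conversion can be sketched as follows (a Python illustration only -- the real exporter is TypeScript, and the function names here are hypothetical):

```python
def hr_time_to_microseconds(hr_time):
    """Convert an HrTime tuple (seconds, nanoseconds) to integer
    microseconds, the unit Jaeger's Thrift span model uses."""
    seconds, nanoseconds = hr_time
    return seconds * 1_000_000 + nanoseconds // 1_000


def span_timing(start_hr_time, end_hr_time):
    """Derive (startTime, duration) in microseconds from a span's
    recorded start and end times."""
    start = hr_time_to_microseconds(start_hr_time)
    end = hr_time_to_microseconds(end_hr_time)
    return start, end - start
```

The key point is that both fields are derived from the span's own recorded times rather than from the moment of export.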
scikit-learn/scikit-learn
447400856
Title: Ubuntu install issues on the Linux py35_np_atlas build Question: username_0: Failures are occurring like at https://dev.azure.com/scikit-learn/scikit-learn/_build/results?buildId=3415&view=logs
```
Err:1 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libobjc4 amd64 8.1.0-5ubuntu1~16.04
  404 Not Found
Get:2 http://azure.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgc1c2 amd64 1:7.4.2-7.3ubuntu0.1 [82.1 kB]
Get:3 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libtcl8.6 amd64 8.6.5+dfsg-2 [875 kB]
Get:4 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libtk8.6 amd64 8.6.5-1 [693 kB]
Get:5 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 tk8.6-blt2.5 amd64 2.5.3+dfsg-3 [574 kB]
Get:6 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 blt amd64 2.5.3+dfsg-3 [4,852 B]
Get:7 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 fonts-lyx all 2.1.4-2 [161 kB]
Get:8 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 gfortran amd64 4:5.3.1-1ubuntu1 [1,288 B]
Get:9 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 javascript-common all 11 [6,066 B]
Get:10 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libblas-common amd64 3.6.0-2ubuntu2 [5,342 B]
Get:11 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 libatlas3-base amd64 3.10.2-9 [2,697 kB]
Get:12 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libblas3 amd64 3.6.0-2ubuntu2 [147 kB]
Get:13 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libblas-dev amd64 3.6.0-2ubuntu2 [153 kB]
Get:14 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 libatlas-dev amd64 3.10.2-9 [22.1 kB]
Get:15 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 libatlas-base-dev amd64 3.10.2-9 [3,596 kB]
Get:16 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 libjs-jquery-ui all 1.10.1+dfsg-1 [458 kB]
Get:17 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 ttf-bitstream-vera all 1.10-8 [352 kB]
Get:18 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python-matplotlib-data all 1.5.1-1ubuntu1 [2,414 kB]
Get:19 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python3-cycler all 0.9.0-1 [5,532 B]
Get:20 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python3-dateutil all 2.4.2-1 [39.1 kB]
Get:21 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python3-decorator all 4.0.6-1 [9,388 B]
Get:22 http://azure.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-pyparsing all 2.0.3+dfsg1-1ubuntu0.1 [35.5 kB]
Get:23 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 python3-tz all 2014.10~dfsg1-0ubuntu2 [24.6 kB]
Get:24 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 python3-numpy amd64 1:1.11.0-1ubuntu1 [1,762 kB]
Get:25 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python3-matplotlib amd64 1.5.1-1ubuntu1 [3,881 kB]
Get:26 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libobjc-5-dev amd64 5.5.0-12ubuntu1~16.04 [381 kB]
Get:27 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libwebp5 amd64 0.4.4-1 [165 kB]
Get:28 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 libwebpmux1 amd64 0.4.4-1 [14.2 kB]
Get:29 http://azure.archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-pil amd64 3.1.2-0ubuntu1.1 [313 kB]
Get:30 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 python3-tk amd64 3.5.1-1 [25.1 kB]
Get:31 http://azure.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python3-virtualenv all 15.0.1+ds-3ubuntu1 [43.2 kB]
Get:32 http://azure.archive.ubuntu.com/ubuntu xenial/universe amd64 python3-scipy amd64 0.17.0-1 [8,327 kB]
Get:33 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libasan2 amd64 5.5.0-12ubuntu1~16.04 [265 kB]
Get:34 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libmpx0 amd64 5.5.0-12ubuntu1~16.04 [9,830 B]
Get:35 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 g++-5 amd64 5.5.0-12ubuntu1~16.04 [8,446 kB]
Get:36 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 gcc-5 amd64 5.5.0-12ubuntu1~16.04 [8,620 kB]
Get:37 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libstdc++-5-dev amd64 5.5.0-12ubuntu1~16.04 [1,421 kB]
Get:38 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libgcc-5-dev amd64 5.5.0-12ubuntu1~16.04 [2,231 kB]
Get:39 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 cpp-5 amd64 5.5.0-12ubuntu1~16.04 [7,796 kB]
Err:40 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 lib32gcc1 amd64 1:8.1.0-5ubuntu1~16.04
  404 Not Found
Err:41 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 lib32stdc++6 amd64 8.1.0-5ubuntu1~16.04
  404 Not Found
Get:42 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 gcc-5-base amd64 5.5.0-12ubuntu1~16.04 [16.9 kB]
Get:43 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 gcc-6-base amd64 6.5.0-2ubuntu1~16.04 [16.6 kB]
Get:44 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libgfortran3 amd64 6.5.0-2ubuntu1~16.04 [270 kB]
Get:45 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 libgfortran-5-dev amd64 5.5.0-12ubuntu1~16.04 [291 kB]
Get:46 http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu xenial/main amd64 gfortran-5 amd64 5.5.0-12ubuntu1~16.04 [8,179 kB]
Fetched 64.8 MB in 1min 14s (872 kB/s)
E: Failed to fetch http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu/pool/main/g/gcc-8/libobjc4_8.1.0-5ubuntu1~16.04_amd64.deb  404 Not Found
E: Failed to fetch http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu/pool/main/g/gcc-8/lib32gcc1_8.1.0-5ubuntu1~16.04_amd64.deb  404 Not Found
E: Failed to fetch http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu/pool/main/g/gcc-8/lib32stdc++6_8.1.0-5ubuntu1~16.04_amd64.deb  404 Not Found
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
##[error]Bash exited with code '100'.
##[section]Finishing: Install
```
The amd64 files do not appear to be there at http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu/pool/main/g/gcc-8/ for those libraries.
Answers: username_1: @mickeygousset Do you think this could be fixed on Azure's end? Status: Issue closed
enaml-ops/haproxy-plugin
199090214
Title: Support for multiple haproxy instances? Question: username_0: I should be able to provide multiple haproxy IPs, e.g.
```
./omg deploy-product \
...
--gorouter-ip 10.10.4.20 \
--gorouter-ip 10.10.4.21 \
--gorouter-ip 10.10.4.22 \
--gorouter-ip 10.10.4.23 \
--haproxy-ip 10.10.4.100 \
--haproxy-ip 10.10.4.101 \
--haproxy-ip 10.10.4.102 \
...
```
However, the current implementation only supports a single HAProxy instance, and uses the last IP provided, not all 3 as expected.
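The fix amounts to collecting every occurrence of the repeated flag instead of letting a later value overwrite an earlier one. The plugin itself is written in Go, but the repeated-flag pattern the issue asks for can be sketched in a few lines of Python with `argparse`'s `append` action:

```python
import argparse


def parse_deploy_args(argv):
    """Collect every --gorouter-ip / --haproxy-ip occurrence into a
    list, rather than keeping only the last value supplied."""
    parser = argparse.ArgumentParser(prog="omg deploy-product")
    parser.add_argument("--gorouter-ip", action="append", default=[],
                        dest="gorouter_ips")
    parser.add_argument("--haproxy-ip", action="append", default=[],
                        dest="haproxy_ips")
    return parser.parse_args(argv)
```

With this pattern, three `--haproxy-ip` flags yield a three-element list that the deployment manifest can fan out into three HAProxy instances.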
BlinkReceipt/blinkereceipt-ios
602817076
Title: Deprecated API Usage (UIWebView) - Apple will stop accepting submissions Question: username_0: After uploading the app to App Store Connect, we receive this message:
ITMS-90809: Deprecated API Usage - Apple will stop accepting submissions of apps that use UIWebView APIs. See https://developer.apple.com/documentation/uikit/uiwebview for more information.
I am integrating AFNetworking via CocoaPods, and I wonder how I can remove the files that use UIWebView-related APIs. I believe that AFNetworking 4.0.0 has removed all UIWebView usage. Are you planning to update the current AFNetworking 3.0 dependency?
Status: Issue closed
penny-university/penny_university
562968301
Title: Draft Penny Chat Bug Question: username_0: # Steps to repeat
1) Create a draft penny chat in channel A, give it the title "BUG"
2) abandon it w/o sharing
3) Start a draft in channel B and see that it's the same draft (title = "BUG")
4) Submit the form
5) See that no pop-up occurs in channel B, but channel A gets a pop-up
# Solution
In `bot.processors.pennychat.PennyChatBotModule#create_penny_chat` we need to save the new `template_channel`. [`update_or_create`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#update-or-create) might be the way to get this done.
# While you're there
* we probably need to update `user_tz` as well
* we probably _don't_ need to update `date` because they might have already set that to something they want.
Answers: username_0: Not a bug as of 2020.06.13...
Status: Issue closed
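The suggested fix relies on Django's `update_or_create`, which looks a record up by its key fields and refreshes the `defaults` in one step. As a rough illustration of why that cures the stale draft, here is a plain-Python stand-in (not the actual Django model code; the `template_channel` and `user_tz` field names come from the issue, everything else is hypothetical):

```python
def update_or_create(store, key, defaults):
    """Plain-dict stand-in for QuerySet.update_or_create(): find the
    draft by key, refresh the given fields, and create the record if it
    does not exist yet. Returns (record, created) like the Django API."""
    record = store.get(key)
    created = record is None
    if created:
        record = {}
        store[key] = record
    record.update(defaults)
    return record, created


drafts = {}
# First draft started (and abandoned) in channel A:
update_or_create(drafts, "user-1",
                 {"template_channel": "A", "user_tz": "US/Central"})
# Starting a draft in channel B refreshes template_channel instead of
# reusing the stale value, so the share pop-up targets the right channel:
draft, created = update_or_create(
    drafts, "user-1", {"template_channel": "B", "user_tz": "US/Central"}
)
```

Note that `date` is deliberately left out of the refreshed fields, matching the "While you're there" note above.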