| repo_name (string, length 4-136) | issue_id (string, length 5-10) | text (string, length 37-4.84M) |
|---|---|---|
pytorch/pytorch | 723423276 | Title: Allow negative learning rates
Question:
username_0: ## 🚀 Feature
Currently, optimizers throw an assertion error when negative learning rates are supplied at construction. This proposal suggests removing this restriction.
## Motivation
Since maximization is equivalent to minimizing a negative loss function, negative learning rates are useful and make sense. For example, in GANs, the generator and discriminator can be trained adversarially by giving the discriminator a negative learning rate. This avoids having two backward passes, which improves both computational efficiency and conceptual clarity.
## Pitch
As learning rates are typically parameterized by constants, providing a negative rate by accident is highly unlikely. I argue that making this mistake is much less likely than wanting a negative learning rate, and so the defensive assertion is better removed.
## Alternatives
It is currently possible to set a negative LR through an ugly loop over the optimizer's `param_groups`. An alternative would be to provide a cleaner way to do so, e.g., a `maximize=True` flag on optimizer construction.
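For illustration, the workaround mentioned above looks roughly like this (a minimal sketch; the model and learning-rate values are placeholders):
```python
import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # the constructor rejects lr < 0

# Flip the sign after construction to sidestep the assertion.
for group in opt.param_groups:
    group["lr"] = -0.1
```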
Answers:
username_1: bumping priority based on internal request
username_2: I will do related literature survey and start working on this
username_1: (username_2 and I had private correspondence where I told him that literature survey probably not necessary for a patch like this :P)
Status: Issue closed
cc @username_1 @gchanan @zou3519 @bdhirsh @username_7 @username_3 @vincentqb @username_2
username_3: @username_4 verify if just using an alternate approach instead of neg LR would work for the internal use case too
username_4: From more discussion, I agree that we could add a "maximize=False" argument to all the optimizers and have special logic for each optimizer that should handle this case properly.
username_5: It would be better to simply remove the `lr < 0.0` check from `torch.optim.SGD`'s constructor.
username_6: Hey!
I am new to contributing to torch -- I will open a PR for this issue, if that is alright :)
username_4: I am not sure everyone agrees yet.
Let me double check with other core contributors on this and update back here.
username_7: I think a properly functioning `maximize` flag is better than allowing negative learning rates. For optimizers beyond simple SGD, it's likely that negative learning rates will cause weird, undesired behavior. If the goal is to allow maximization, IMO this should be provided on a per-optimizer basis to account for the subtleties.
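For plain SGD the two options coincide, which is a toy way to see why a negative rate "works" there while a dedicated flag is the clearer contract for more complex optimizers. A minimal sketch (not PyTorch's implementation):
```python
# Toy scalar SGD step, for illustration only.
def sgd_step(param, grad, lr, maximize=False):
    return param + lr * grad if maximize else param - lr * grad

# maximize=True with a positive lr gives the same update as a negated lr.
assert sgd_step(1.0, 0.5, 0.1, maximize=True) == sgd_step(1.0, 0.5, -0.1)
```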
username_0: I believe this should be fine with most optimizers? At the very least, this would be nice to have in Adam, as it is the most commonly used optimizer. Glancing through the optimizer list on https://pytorch.org/docs/stable/optim.html, I think only Rprop would pose a problem, but I am not an expert on optimization by any means (perhaps LBFGS would also fail, although I can't tell).
username_6: Great -- will do. Since I am also not an optimisation expert, I will start with SGD. Then Adam. Is it important to add support for all?
I am guessing it's better to do this iteratively. What do you think?
username_4: Iteratively is definitely better. And yes, it's OK not to have all of them.
username_4: Leaving this closed in favor of the tracking issue in https://github.com/pytorch/pytorch/issues/68052 |
uhop/stream-json | 547631612 | Title: How to use it in browser? I know it is for node.js now
Question:
username_0: I need it to run in the browser (client-side JS).
Can I run `npm run build` to generate a `./dist/` folder and use it in the browser?
Answers:
username_1: Unlikely. This is a library based on streams. While there are [browser-based streams](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API), they are not widely implemented and their API is still experimental. There are some stubs to bring Node streams to a browser, but I have no first-hand experience with them and I don't know how good they are.
Just out of curiosity: what is your use case? The library was built to process mounds of data and is used mostly in applications (e.g., packaged with Electron), (command line) utilities, and in some rare cases on a server, but I haven't heard of anybody wanting to use it on a client.
username_0: I recommend you continue building the browser version of it.
It will have a huge market in the future. Why? Because the trend is moving to REST APIs, and JSON is the standard data exchange format. For large JSON (hundreds of MB up to 1 GB), stream-json is a MUST.
My web site handles JSON files from 100 MB to 2 GB per page. The core engine on every page is about 90% built on [https://developer.mozilla.org/en-US/docs/Web/API/Streams_API](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API)
my site:
[https://transparentgov.net/cleargov1/](https://transparentgov.net/cleargov1/)
a small sample:
[http://transparentgov.net:3000/socrata/dataset/default?url=https://data.lacity.org/api/id&layer=Building%20Permits%20over%20100K%20Valuation&layer_id=y5ik-mwat](http://transparentgov.net:3000/socrata/dataset/default?url=https://data.lacity.org/api/id&layer=Building%20Permits%20over%20100K%20Valuation&layer_id=y5ik-mwat)
username_0: Currently, I use oboe.js for client-side JSON streaming, but I think your library is newer and may remove all the inconveniences of oboe.js.
I need to stream JSON files (100 MB - 1 GB) to the browser and store them in IndexedDB. A streaming JSON API is a must, to avoid caching the whole 1 GB of data in memory.
This is why a client-side streaming API is greatly needed.
username_1: `stream-json` will support the browser eventually, but there are some issues that should be resolved besides the API stability and its general support. One of them is the lack of default implementations that can be reused. For example, the document (referenced above) defines only interfaces but not the pressure-related algorithms, there are no helpers to create `Transform` streams (most streams in `stream-json` are `Transform` ones), and so on.
I am sure that we will get there, but not at the present moment.
username_0: Another reason why I need this:
Oboe.js does NOT have a function that lets me abort the stream before it reaches the end!
For example:
jQuery ajax has xhr.abort
fetch has
[fetch abort](https://developer.mozilla.org/en-US/docs/Web/API/AbortController/abort)
oboe uses xhr and should have an abort function, but it is not implemented, and I'm not sure why. This has a big use case like mine: 90% of my mapping web pages need to abort ajax, fetch, etc., because otherwise, when the user pans the map continuously, each pan/zoom fires a streaming request, so a chain of streaming requests piles up and the user has to wait a long time until all the other streams end, one by one, before the latest useful stream starts. It is a horrible user experience. We want to abort all the other streams and keep only the latest stream live.
Can you think about adding a way to abort a stream before it reaches the end?
username_1: This is outside of `stream-json`. Usually, when there is no more need for the data, a pipe is disconnected and discarded. Eventually, it will be garbage-collected. To be clean, it is advised to disconnect the event handlers. To process a new request, a new pipe can be constructed and hooked to the event machinery like the previous pipe was.
username_0: About aborting streams: thanks for the explanation, that makes sense to me.
Another issue I found with oboe, one that stopped me from using it, is performance: it is too slow.
I tested a 60K-record (350 MB) JSON file; oboe, compared to the browser's native Streams API,
is 100 times slower.
Only for small amounts of data is the difference trivial.
Oboe must have some overhead over xhr.
Now, for large data, I can't use oboe, it is way too slow. Instead, I just use the browser's native Streams API and wrote a simple parser on my own.
I am not sure about stream-json's performance; I hope you can prevent the slowness that oboe had.
username_1: The whole point of `stream-json` is performance. It targets huge streams and obviously every millisecond helps — IRL they add up to hours and even days. So obviously when I port it to a browser I'll keep it in mind.
username_0: Perfect, I will stay tuned and test it as a core engine to see how it performs.
username_0: I guess (not verified) that the slowness of oboe is caused by Blob/FileReader, blob builders, etc.
To make it fast, you need to use
`var string = new TextDecoder(encoding).decode(uint8array);`
There are several articles talking about this slowness:
[https://developers.google.com/web/updates/2012/06/How-to-convert-ArrayBuffer-to-and-from-String](https://developers.google.com/web/updates/2012/06/How-to-convert-ArrayBuffer-to-and-from-String)
[https://stackoverflow.com/questions/6965107/converting-between-strings-and-arraybuffers](https://stackoverflow.com/questions/6965107/converting-between-strings-and-arraybuffers)
Just FYI, when you do the browser version, this is something to be aware of.
username_1: Thank you for the info! Archiving for now.
Status: Issue closed
|
HotDogfinba11/portfolio | 1187999049 | Title: Wrong font subsetting
Question:
username_0: Fyi, this page is relying on how chromium treats undefined glyphs in fonts to render the GitHub logo, so it doesn't show correctly on either WebKit or Gecko.
See https://bugzilla.mozilla.org/show_bug.cgi?id=1761686 for details. |
Atlantiss/NetherwingBugtracker | 392360608 | Title: [NPC][Shattered Halls] Shattered Hand Legionnaire not summoning allies upon NPC death
Question:
username_0: http://wowwiki.wikia.com/wiki/Shattered_Halls
Shattered Hand Legionnaire | Pulled with a group of 4-6 other adds; spawns additional adds if adds are killed before the legionnaire. Should be killed first. Can not be sheeped or alike
**Current behaviour**: These emote and properly acknowledge that their allies have died, but currently do not summon more mobs into battle
**Expected behaviour**: Correctly summons adds when others die
**Server Revision**: 2462
Answers:
username_1: Each Legionnaire has a specific script. Which one exactly bugged out?
username_0: Not a single Legionnaire we killed summoned other mobs. They all enraged upon the death of another NPC and emoted that they had summoned other NPCs into battle ("Next warrior up" or things like that). We killed other mobs first in each Legionnaire pack and none of them summoned; per the wowwiki page, all of them should.
username_0: https://www.twitch.tv/videos/350948196?t=00h34m26s (VOD of the Legionnaire tunnels) |
google/re2 | 585130679 | Title: Do you think it is possible to support streams without a maximum length?
Question:
username_0: I mean, the usual solution for streams is setting a maximum size for the match, using a circular buffer of twice that size, and running the pattern matching on the buffer. While this works, I am curious if there is a better solution.
I think it is possible to solve pattern matching without storing the whole string, or even the matching parts, in memory. All we need is the candidate matches we haven't closed yet. At least this works on a simple example. Suppose we read the input in small chunks, the pattern is `a\d+b`, and the string is `"aa23436bx"` -> if we go through this:
```
begin stream
buffer = "aa234"
[0,"a"] -> candid[0] = {begin: 0, next:"\d"}
[1,"a"] -> candid[0] failed, candid[1] = {begin: 1, next:"\d+"}
[2,"2"] -> candid[1] = {begin: 1, next:"\d|b"}
[3,"3"] -> candid[1] = {begin: 1, next:"\d|b"}
[4,"4"] -> candid[1] = {begin: 1, next:"\d|b"}
buffer = "36bx"
[5,"3"] -> candid[1] = {begin: 1, next:"\d|b"}
[6,"6"] -> candid[1] = {begin: 1, next:"\d|b"}
[7,"b"] -> candid[1] = {begin: 1, end: 7}, candid[1] matched
[8,"x"] -> nothing
buffer = ""
end of stream
```
Here I start `candid[1]` in the first chunk and finish it in the second chunk without keeping the first chunk; all I kept is the beginning position. Of course, a real engine would need more than that, but it would still be better than keeping the whole string in memory, IMHO. Any opinions?
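For what it's worth, here is a rough Python sketch of that candidate-tracking idea for the fixed pattern `a\d+b` (hand-rolled states for this one pattern; nothing to do with RE2's machinery):
```python
def stream_match(chunks):
    candidates, matches, pos = [], [], 0
    for chunk in chunks:
        for ch in chunk:
            survivors = []
            for c in candidates:
                if c["state"] == "after_a":      # still needs at least one digit
                    if ch.isdigit():
                        survivors.append({"begin": c["begin"], "state": "digits"})
                else:                            # "digits": more \d, or close on b
                    if ch.isdigit():
                        survivors.append(c)
                    elif ch == "b":
                        matches.append((c["begin"], pos))
            if ch == "a":                        # possible new match start
                survivors.append({"begin": pos, "state": "after_a"})
            candidates = survivors
            pos += 1
    return matches

print(stream_match(["aa234", "36bx"]))  # [(1, 7)], without retaining the chunks
```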
Answers:
username_1: Sorry, RE2 does not support streaming. There was some discussion about this at the end of 2016 on #126 and #127.
Depending on your use case, you might want to look into using Hyperscan or lightgrep. You could also use RE2 for parsing and compiling only and then execute the RE2 bytecode however you like.
Status: Issue closed
username_2: See also #126, #127 and #213.
I've given a lot of thought to this and it is very hard. I've written more about it here: https://github.com/rust-lang/regex/issues/425 For example, a key thing you're missing is accounting for how the DFA works. It has to run backwards to find the starting location.
username_0: @username_2 Thanks!
username_1: +1 to what @username_2 wrote in https://github.com/rust-lang/regex/issues/425#issuecomment-348768742.
A further note about Hyperscan and tracking where matches begin:
```
* SoM: "Start of Match": we do accurate, streamable start of match
tracking through our system. This is an option and occasionally an expensive one, and some
patterns we support in non-SoM mode we don't support in SoM mode. Anyone who thinks
SoM is cheap or easy is either smarter than we are, not doing things in streaming mode,
trimming down the definition of SoM (possibly defensibly), or misunderstands the problem.
There is a certain amount of weird structure allowing SoM information to flow through our
system; this is an area that needs work.
```
([source](https://lists.01.org/hyperkitty/list/[email protected]/message/UFP3F4PMBWTBL3GSGKHX47KI72A42I7O/))
username_0: @username_1 Yes, I don't expect it to be easy to implement, just that it is possible at a certain level with certain patterns.
username_0: Well I don't think that is possible without match size restrictions.
username_0: rust-lang/regex#425 is the exact same thing I thought of, but this is just a naive approach. Usually one needs to think a lot more before coming up with a real solution or giving up. I'll continue there, thanks for the links! |
netbox-community/netbox | 1142834170 | Title: tzdata error when upgrading from 3.1 to 3.2-beta1
Question:
username_0: @tobiasge mentioned:
[<NAME>](https://app.slack.com/team/U01QEG4SFMW) [9 minutes ago](https://netdev-community.slack.com/archives/C01P0FRSXRV/p1645180997892189?thread_ts=1645180110.952929&cid=C01P0FRSXRV)
With Django 4 the default timezone implementation was changed.
You need to install tzdata in your virtual environment:
pip install tzdata
After adding tzdata to my local requirements, it did not error anymore.
Probably 'tzdata' has to be added to base_requirements.txt?
Answers:
username_1: Interesting; thanks for the confirmation. I wonder why `django-timezone-field` doesn't pull in `tzdata` automatically.
At any rate, depending on it directly seems like a reasonable solution, at least for now. I still need to muddle through the change from `pytz` to `zoneinfo` in Django 4.0.
Aren't timezones fun?
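The practical effect of the `pytz` to `zoneinfo` switch is easy to check (Python 3.9+): `zoneinfo` reads the IANA database from the operating system, and on systems without one it only works if the `tzdata` package is installed in the environment.
```python
from zoneinfo import ZoneInfo  # Python 3.9+

# Raises zoneinfo.ZoneInfoNotFoundError when neither a system tz database
# nor the first-party `tzdata` package is available.
print(ZoneInfo("America/New_York"))
```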
username_2: Just wanted to comment I've just had this same issue.
python 3.9.10
from django-timezone-field version 4.1.2
to django-timezone-field version 5.0
Guess I'll add tzdata to my dependencies meanwhile.
username_1: Here's a relevant [bug report](https://github.com/mfogel/django-timezone-field/issues/82) for `django-timezone-field`. It looks like `tzdata` may become a dependency of that project, but for now I'll just add `tzdata` as a direct dependency for NetBox.
Status: Issue closed
|
Ryujinx/Ryujinx-Games-List | 822925372 | Title: コープスパーティー ブラッドカバー リピーティッドフィアー (Corpse Party Blood Covered: Repeated Fear) - 0100965012E22000
Question:
username_0: ## コープスパーティー ブラッドカバー リピーティッドフィアー (Corpse Party Blood Covered: Repeated Fear)
#### Game Update Version : 1.0 & 1.1
#### Current on `master` : 1.0.6774
Tested it with and without the first update, and in both Japanese and English.
I played for a little more than an hour (finished two Chapter 1 endings) in total for this first session and never ran into any type of problem.
It plays at stable framerates with a pretty much consistent 60 FPS; it does dip at some points, but I never had it go below 40 FPS.
#### Hardware Specs :
##### CPU: Intel i7-7700K
##### GPU: NVIDIA GTX 1080
##### RAM: 16GB
#### Screenshots :








#### Log file :
[Ryujinx_1.0.6774_2021-03-05_03-37-05.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/6089520/Ryujinx_1.0.6774_2021-03-05_03-37-05.log)
The log only covers the first ~12 minutes, up until I shut down Ryujinx to take a break before continuing a little later. |
esy/pesy | 506389843 | Title: Probably should tell people to use the `esy` shortcut instead of running `esy build`.
Question:
username_0: ```
esy pesy
Created package-json-to-opam.opam
Ready for esy build
```
I don't think there's a reason to use esy build instead of just esy.
Answers:
username_1: True! I like being explicit about the fact that the next action is build. But always up for changing this 👍
Status: Issue closed
|
embeddedmz/socket-cpp | 462651128 | Title: Exception thrown (socket.cpp)
Question:
username_0: When I'm trying to compile my program I get
`Exception thrown at 0x0F8B3A44 (msvcp140d.dll) in networktest.exe: 0xC0000005: Access violation reading location 0x00000000.`
in line 31
`if (s_iSocketCount == 0)`
and in line 36
`WSAStartup(MAKEWORD(2, 2),&s_wsaData);`
Is there any solution for this?
Answers:
username_1: Can you post the code snippet showing the use of the class ?
0x00000000 => apparently you are trying to access a null pointer.
username_0: it isn't my code, it's code provided in socket.cpp
username_1: I can't help you if you don't show me your Visual Studio solution. Maybe your VS project isn't correctly configured: for example, did you forget to add the .cpp files to your VS project?
Otherwise, you can follow the "read me" file and use CMake to generate a VS solution to build the class in a static library.
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 613800844 | Title: Map VAR Appointment Statuses to FHIR
Question:
username_0: VAR/VistA appointments can currently have up to 25 different statuses as documented in [this spreadsheet](https://docs.google.com/spreadsheets/d/1BJ8FJTNDfiWjqhs-xIQ-nQaw6RXMxXdU2CV0EvCfJhI/edit#gid=1413660998). Currently we hide these statuses or hide appointments completely based on these statuses. FHIR only has 7 possible statuses:
* arrived
* booked
* cancelled
* fulfilled
* noshow
* pending
* proposed
We currently hide statuses we don't want shown by setting the `status` to `null`. However, the FHIR spec does not support a `null` status and the field is required, so in addition to mapping the 25 statuses to one of the FHIR statuses, we also need to decide how we will handle hidden statuses. We could possibly add an additional VAOS-only flag to indicate whether a status should be hidden.
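A rough sketch of what the mapping plus a VAOS-only hidden flag could look like (the VAR status names below are invented placeholders, not values from the spreadsheet, and the real transformer is not written in Python):
```python
FHIR_STATUSES = {"arrived", "booked", "cancelled", "fulfilled",
                 "noshow", "pending", "proposed"}

# Placeholder VAR statuses -> (FHIR status, hidden-in-VAOS flag)
VAR_TO_FHIR = {
    "CHECKED IN": ("arrived", False),
    "NO-SHOW": ("noshow", False),
    "DELETED": ("cancelled", True),  # mapped, but not shown in VAOS
}

def transform_status(var_status):
    fhir_status, hidden = VAR_TO_FHIR.get(var_status, ("proposed", True))
    assert fhir_status in FHIR_STATUSES
    return {"status": fhir_status, "vaosHidden": hidden}
```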
AC
- [ ] Update appointment transformer to map statuses to FHIR statuses
- [ ] Figure out how we will determine if status should be hidden
Status: Issue closed |
eyurtsev/fcsparser | 816147421 | Title: non-parsing FCS file (and a fix)
Question:
username_0: Hi, flow cytometry files from Apogee don't parse with fcsparser.
They (inexplicably) have a bunch of spaces before $P1N in the header; this can be remedied without too much trouble, though.
An example of an FCS file with the issue:
https://bitbucket.org/mwfcomp/incubation_experiment/raw/a6ef3685428f6e373442ab0e04b553b36bbcdd48/sample_data/2015-07-08/THP-1%20-%20235%20nm%20Capsule%2016%20hr.fcs
And a (slightly hacky) script to fix this per file:
https://bitbucket.org/mwfcomp/incubation_experiment/raw/a6ef3685428f6e373442ab0e04b553b36bbcdd48/fcs_file_fixer.py
This could be dealt with more elegantly in the parser; I would be happy to put together a solution for it if this is acceptable and the project is still being maintained.
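As a rough, hypothetical illustration of the per-file fix (not fcsparser's API, and the file name is a placeholder): drop the run of spaces that sits immediately before `$P1N` in the raw bytes before handing the file to the parser.
```python
import re

def strip_padding_before_p1n(raw: bytes) -> bytes:
    # Remove the stray spaces written just before the $P1N keyword.
    return re.sub(rb" +\$P1N", b"$P1N", raw, count=1)

with open("sample.fcs", "rb") as fh:  # placeholder path
    cleaned = strip_padding_before_p1n(fh.read())
```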
Answers:
username_1: PRs will be very welcome. I'm happy to merge as long as you include unit-tests.
For style, if possible, follow PEP-8 conventions and Google-style docstrings (I don't have any linters that check for that, so if that's difficult I can follow up with appropriate linting). |
grafana/simple-json-datasource | 207803228 | Title: Singlestat panel
Question:
username_0: Hi,
I am trying to use a "singlestat" panel.
I return a correct JSON time series, but the value shown in the 'singlestat' panel is always the last value of the time series, not the 'max' or 'min' of the series, and the 'spark line' option has no effect.
It looks like only the last value is used.
Exactly the same time series is displayed nicely in the graph panel ...
Any advice?
Thanks for your great work. Very useful.
<NAME>. |
shibajee/realtek-hda-creative-sbc-mod | 658606806 | Title: Not working anymore
Question:
username_0: Hey there, I have been using this mod for 4 months and had zero issues with it, but recently the app stopped working: it doesn't pop up on startup. My Windows got updated to version 2004, so I think that may be the issue, but I tried reinstalling before the Windows update and it still didn't work. This driver is very good and I really appreciate your work; it would be very helpful if you looked into the issue. One more thing: your other driver mods are super good, BUT there is one problem. When I unplug my headphones and re-plug them, there is no popup from the driver (Realtek driver); I have to launch the Realtek console and connect manually, which is super annoying, and sometimes it doesn't even work. It would be really helpful if you fixed that in the next updates of the other mods (don't worry, the SBC one doesn't have this issue). Please look into it. Love your work and modded drivers.
Thank You
Answers:
username_1: I got the same issue too. The application won't start.
username_2: Hi, sorry for the late reply. I updated the software to the latest 8967 version, which should be compatible with Windows 10 2004. Give it a try and tell me if it's working properly. Uninstall the previously installed Realtek driver and related software before you install this. @username_0 @username_1
username_1: Thank you so much.
username_2: Are you on the latest Windows 10 2004 build? (Windows Key+R, type winver, and then hit enter) @username_1
username_1: 
There are not any update on windows update.
username_2: Can you do a quick test for me? I will give you a link to a driver within the next 5-6 minutes. Download it and test it, then give me feedback @username_1
username_1: Of course.
username_2: https://mega.nz/folder/2UNhySpZ#qrmR9DmBeMocGbfSiCPzYw , just the driver; genkga and the Creative soft are the same as before. Test it @username_1
username_1: I installed the driver; there is no blue screen anymore, but the program still does not work.
username_2: The Creative program launches properly but you don't get any effect, right? @username_1
username_1: No, the program does not launch.
username_1: When I try to open the program it does not open, and the Windows Error Reporting service shows up in Task Manager.
username_2: Yeah, a Windows compatibility issue. This program probably won't work on Windows from the 2004 update onwards; 1909 is the last version @username_1. We have to move on to the UWP version @username_1
username_1: 
Can the Nahimic mirroring device and audio effect component cause this? I blocked these drivers from Windows Update, but whatever I did, they install themselves automatically after some weeks.
username_2: Right-click on This PC > Properties > Advanced system settings > Hardware > Device installation settings > "No, let me choose what to do" > "Never install driver from Windows Update" > OK/Apply. Uninstall those Nahimic components from Device Manager, both A-Volute and mirroring, and restart your PC @username_1
username_1: I did it before, but it still installs itself after restarting the PC.

username_2: Classic Windows 10, it surprises me every day. There is a Group Policy editor hack or registry hack, something like that, for driver updates; search on Google. I'm far away from my Windows 10 PC, which is why I can't help. @username_1
username_1: I tried it before; it doesn't work on Windows Home :D. The only way is disabling updates from Windows Update. Microsoft has a tool for it, but it still installs itself again after some weeks.
username_0: Let me know if you fixed it. Thanks.
username_2: The driver is working fine on my Windows 10 PC (OS Build 19041.508); I personally installed and tested it. You guys are probably doing something wrong. @username_0 @username_1
Status: Issue closed
username_0: I don't know, I followed every step and it didn't work, even though I have USB headphones now.
username_0: Another thing: will this work with USB headphones, though? Realtek doesn't detect USB inputs.
username_2: Nope, it won't; Realtek won't detect any USB input @username_0 |
brutella/hc | 261893769 | Title: Camera RPi
Question:
username_0: Hi, does anyone know how to add the camera module from the RPi as a "surveillance camera"?
It works on HAP-JS, but I like this Home Control better and would like to keep using it.
Thanks
Answers:
username_1: Just now I wrote my "question" :-)
https://github.com/username_5/hc/pull/93
username_2: I've spent some time looking at what it will take to support this. I think to do it right is going to require some large changes to `hc`.
1. I think this is going to depend on the `OnValueGet` hook from #100.
2. I think the `tlv8` API is going to need an overhaul. The HAP configuration for the camera descriptors require a lot of complicated `tlv8` structures. I think moving that to be based on `struct`s and with tags for the type code is the way to go, then implementing a proper marshaling system for `tlv8`. This also needs the ability to have multiple entries with the same tag for some features (like supporting multiple resolutions) that the current `tlv8` API can't do.
username_3: @username_2 Looking at how HAP-Node does it, it appears there can be a "get" and "set" type of callback?: https://github.com/KhaosT/HAP-NodeJS/blob/master/lib/StreamController.js#L265-L272
Does our OnValueRemoteUpdate handle "gets" and "sets"?
username_2: No, but see #100.
username_3: @username_2 I have found at least one endpoint that is not currently present in HC: resource[1][2]. It is only used to trigger an image snapshot from an IP camera. I have been trying to add some code to allow this to happen but so far I have been unsuccessful.
[1] Described on page 245 of the HAP
[2] https://github.com/KhaosT/HAP-NodeJS/blob/838c2845eecbd0478f82fdbf5e9e0a768cbcb069/lib/HAPServer.js#L997
username_4: Hello everyone, I'm a beginner developer and I'm interested in the camera on the Raspberry Pi. If anyone has made such an accessory, I would be very grateful if you could share the code and an example of its use. Thanks :)
Status: Issue closed
username_5: There is a now an implementation of a HomeKit camera in [hkcam](https://github.com/username_5/hkcam).
Sorry that it took so long. |
ASU-CLAS/asu-local-isearch-directory-module | 162202587 | Title: A-Z index does not appear after enabling
Question:
username_0: Enable the A-Z index on an `isearch directory` pane and notice that it will not render.
Related to Montana/Ismay webspark release:
Ismay Release Notes: https://brandguide.asu.edu/web-standards/webspark/update/webspark-1271-ismay-release-notes
Montana Release Notes: https://brandguide.asu.edu/node/941
Answers:
username_0: This is probably caused by an update to Views. The `title` exposed filter, which accepts a regular expression, can no longer be an empty `string` by default, so it returns zero results, thus hiding the A-Z index display.
Status: Issue closed
|
pale-imitations/Magic_Kingdoms | 614920390 | Title: Summoned mobs missing vehicle
Question:
username_0: I do not know if this error is related to my last issue, but for some reason mobs summoned by creatures from this mod crash the server with "Entity's Vehicle: ~~ERROR~~ NullPointerException: null". The full crash log is below.
https://pastebin.com/9NBPeZq0 |
hubmapconsortium/portal-ui | 601999101 | Title: unpkg is unresponsive: need to host our own packages
Question:
username_0: The request for portal-search returns "Rate exceeded."
Answers:
username_0: I'm nervous about relying on unpkg. Options:
- I hit it on a bad day. No action is the best action?
- We should have all our client dependencies on S3... but how do we manage pushing updates from each of our libraries?
- Or try to use github package hosting?
- Or pull down all the dependencies and include them in the docker container?
@username_1 : Can you think about these options, add others that you can think of, and chart a path forward? A good-enough solution at least through the summer is fine... It doesn't need to be (and I'd be surprised if it were) the solution forever.
Status: Issue closed
|
adobe/react-spectrum | 1037720969 | Title: CardView GridLayout updates and bugs
Question:
username_0: # 🐛 Bug Report
- Vertical quiet cards in GridLayout should only allow for a single line of text, the rest should be truncated
- update the stories to fix this
- Update/double check the gutters (gutter should be .75x the spacing of the margin between the CardView div and the Cards)
- horizontal cards images break upon resizing the CardView div
- can be fixed by switching to `aspect-ratio` css property but this isn't supported in Safari 14
From testing sessions, reinvestigate:
- There might be a focus issue. On iPad, if you press the action menu and press again to close it, the card gets focus, so it keeps the selection checkbox shown. On desktop, the selection checkbox shows when you hover, but if you interact with the action menu it becomes sticky, meaning you have to click on another card or outside of the CardView to deselect it. Clicking on the CardView (not a card) keeps the selection checkbox stuck. (KT)
- Resolution: This is expected because clicking on the action menu moves focus into the Card, making the Card focused. Focused Cards will display their checkbox.
- Space between rows is very large (DG)
- In the same vein as the gutters, double-check. Perhaps eyeball better values for GridView
## 🧢 Your Company/Team
RSP
Status: Issue closed |
cyclosm/cyclosm-cartocss-style | 409316999 | Title: highway=cycleway should be the same size as cycleway=track
Question:
username_0: `highway=cycleway` should tag the same thing as `cycleway=track`. Therefore, they should be rendered at the same size.
This is currently not the case, as can be seen [here](https://tiles.phyks.me/#17/48.87374/2.29670) (Avenue des Champs-Élysées vs Avenue de la Grande Armée).
Answers:
username_0: I started to have a look at this. Solving it would require having a lot of extra test cases in `roads.mss` file. I was thinking it might be useful to have a script generating all these tests and came up with https://gist.github.com/username_0/6cf2e030f5d7442f36ae8070e2207357 so far.
Not sure this is very much better actually. Maybe an option would be to have a templating system on top of CartoCSS for this.
I think this deserves a bit more reflection, so I'm waiting a bit before continuing to work on this.
username_0: I think this was solved in the latest refactor of `roads.mss`. Feel free to reopen if this is not the case.
Status: Issue closed
|
OWASP/threat-dragon | 891983071 | Title: PDF report diagram is split
Question:
username_0: The threat model PDF report generated by the tool splits the diagram across pages, for example when a 1-page diagram starts half-way down a page. To overcome this issue, the user could insert a page break just before the diagram or manually ensure the diagram does not span a page break. |
YMCACOLUMBUS/activescience-issues | 152902672 | Title: Add a countdown time for the game
Question:
username_0: @username_2 Could you mock up a design with a countdown included for both BuddyJump and Asteroids?
Answers:
username_1: Updated version - no countdown
username_0: @username_2 Could you mock up a design with a countdown included for both BuddyJump and Asteroids?
username_2: Here's the asteroids, also increased the font size on the points as well.

username_2: BuddyJump Counter.

Status: Issue closed
|
AyuntamientoMadrid/agendas | 956493589 | Title: ArgumentError: invalid date
Question:
username_0: View details in Rollbar: [https://rollbar.com/consul/Agendas/items/600/](https://rollbar.com/consul/Agendas/items/600/)
```
ArgumentError: invalid date
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/core_ext/date/calculations.rb", line 126, in new
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/core_ext/date/calculations.rb", line 126, in change
File "/aytomad/app/agendas/agendas/releases/20210730065825/app/controllers/visitors_controller.rb", line 223, in get_calendar
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 432, in block in make_lambda
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 145, in block in halting_and_conditional
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 504, in block in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 504, in each
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 504, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 92, in __run_callbacks__
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 778, in _run_process_action_callbacks
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/callbacks.rb", line 81, in run_callbacks
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/abstract_controller/callbacks.rb", line 19, in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal/rescue.rb", line 29, in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal/instrumentation.rb", line 32, in block in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/notifications.rb", line 164, in block in instrument
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/notifications/instrumenter.rb", line 20, in instrument
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.10/lib/active_support/notifications.rb", line 164, in instrument
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal/instrumentation.rb", line 30, in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal/params_wrapper.rb", line 250, in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.10/lib/active_record/railties/controller_runtime.rb", line 18, in process_action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/abstract_controller/base.rb", line 137, in process
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionview-4.2.10/lib/action_view/rendering.rb", line 30, in process
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal.rb", line 196, in dispatch
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal/rack_delegation.rb", line 13, in dispatch
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_controller/metal.rb", line 237, in block in action
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/routing/route_set.rb", line 74, in dispatch
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/routing/route_set.rb", line 43, in serve
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/journey/router.rb", line 43, in block in serve
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/journey/router.rb", line 30, in each
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/journey/router.rb", line 30, in serve
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.10/lib/action_dispatch/routing/route_set.rb", line 817, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/turnout-2.4.1/lib/rack/turnout.rb", line 25, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/rack/agent_hooks.rb", line 30, in traced_call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/rack/browser_monitoring.rb", line 32, in traced_call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/warden-1.2.7/lib/warden/manager.rb", line 36, in block in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/warden-1.2.7/lib/warden/manager.rb", line 35, in catch
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/warden-1.2.7/lib/warden/manager.rb", line 35, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/rack-1.6.8/lib/rack/etag.rb", line 24, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent/instrumentation/middleware_tracing.rb", line 67, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/rack-1.6.8/lib/rack/conditionalget.rb", line 25, in call
File "/aytomad/app/agendas/agendas/shared/bundle/ruby/2.3.0/gems/newrelic_rpm-3.13.2.302/lib/new_relic/agent |
dask/dask | 445532065 | Title: [FEA] Remove pandas dependencies from rolling functionality
Question:
username_0: Removing the pandas dependencies in the support for [rolling windows](https://github.com/dask/dask/blob/master/dask/dataframe/rolling.py) will help allow us to use other backend dataframe libraries, such as cuDF, for distributed rolling windows (cuDF rolling functionality is currently a work in progress).
As of May 17, 2019, the following lines of code have explicit pandas dependencies on what appear to be meaningful chunks of data based on the code:
- [concatenation](https://github.com/dask/dask/blob/master/dask/dataframe/rolling.py#L32) of the partitions (in `overlap_chunks`)
- [getting the deltas](https://github.com/dask/dask/blob/master/dask/dataframe/rolling.py#L108) of the dataframe divisions when the rolling window is operating on datetimes (in `map_overlap`) (two places due to branching logic). This may not be that meaningful of a bottleneck, as the number of divisions is `npartitions + 1`. It looks to me on a first pass through that there isn't concatenation of the results to any other objects so this might not block cuDF.
- [concatenation](https://github.com/dask/dask/blob/master/dask/dataframe/rolling.py#L197) again for rolling windows with timedeltas (in `tail_timedelta`).
- [converting the timedelta window string ('5S', for example) into an actual timedelta object](https://github.com/dask/dask/blob/master/dask/dataframe/rolling.py#L265) in the actual Rolling class implementation. This would not matter much if the cuDF implementation can accept pandas timedeltas as arguments, but will matter if it cannot.
Answers:
username_1: Nice analysis @username_0 ! My thoughts (which in many cases agree with yours)
https://github.com/dask/dask/blob/081a911148d472bb1a82a88a19f18e5c19d5f137/dask/dataframe/rolling.py#L108
This might be ok to keep we're only manipulating the divisions here, not the actual data. It's probably fine to keep this in Pandas regardless of how we store the data
https://github.com/dask/dask/blob/081a911148d472bb1a82a88a19f18e5c19d5f137/dask/dataframe/rolling.py#L197
https://github.com/dask/dask/blob/081a911148d472bb1a82a88a19f18e5c19d5f137/dask/dataframe/rolling.py#L32
We should use the general concat function that we use elsewhere throughout the dask dataframe codebase
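For illustration, the kind of backend-agnostic dispatch meant here looks roughly like this (a generic sketch, not dask's actual internal helper):
```python
import pandas as pd

_CONCAT_DISPATCH = {pd.DataFrame: pd.concat, pd.Series: pd.concat}

def concat(frames):
    # Pick a concat implementation based on the type of the first chunk.
    for frame_type, func in _CONCAT_DISPATCH.items():
        if isinstance(frames[0], frame_type):
            return func(frames)
    raise TypeError(f"no concat implementation registered for {type(frames[0])}")

# A GPU backend could register its own implementation, e.g.:
# _CONCAT_DISPATCH[cudf.DataFrame] = cudf.concat
```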
https://github.com/dask/dask/blob/081a911148d472bb1a82a88a19f18e5c19d5f137/dask/dataframe/rolling.py#L265
This is probably ok to keep as well. This is just a scalar value
username_0: @username_1 I generally agree with what you've suggested. The unknown for me on your last one is that `before` is ultimately used for some operations as a timedelta, which might pose some interoperability concerns if cuDF can't handle that conversion.
username_1: I think that ideally cudf would be able to reason about Pandas and Python timedelta objects. They're likely to come up in normal workflows. If this isn't the case today then I think it's a reasonable request and that we should raise an issue on the cudf issue tracker.
username_0: Will verify.
username_2: Seems like the concats mentioned above have been changed to not depend on pandas, so this issue might be resolved. I am closing, but please reopen if there is still work to be done.
Status: Issue closed
|
gbv/jskos | 107174533 | Title: JSKOS extensions for typical concept types
Question:
username_0: * people (foaf)
* places (geolocation)
* periods (periodO)
* ...
Answers:
username_0: Here is a list of possible additional properties compiled from the Europeana EDM contextual entities JSON-LD profile. Other properties have different names, e.g. `start` vs `startDate`.
### Agent
* hasMet
* name
* biographicalInformation
* gender
* professionOrOccupation
### Place
* hasPart
* lat, long, alt (wgs84)
Status: Issue closed
|
localstack/serverless-localstack | 776807780 | Title: Recoverable error occurred (socket hang up), sleeping for 5 seconds. Try 4 of 4
Question:
username_0: I am getting a socket hang-up error; is it because all AWS APIs are running on port 4566? I am running a localstack image created from [localstack](https://github.com/localstack/localstack), not from this repo's docker-compose file.
Answers:
username_1: @username_0
did you figure this out? |
NVIDIA/caffe | 487709565 | Title: ResNet-50 successful training, poor inference
Question:
username_0: Specifications:
ResNet-50 (with transfer learning)
DIGITS-6.1.1
NVCaffe-0.17
CUDA-10.1
cuDNN-7.6
Windows-10
By successful, I mean both accuracy and loss (for the train set and validation set) had converged at respectably high (above 90%) and low (around 0.005) ends respectively. However, this performance is not reflected in inference, via either DIGITS or NVCaffe. The result, most of the time, seems to be biased toward one particular class (out of 4 in total). This would be understandable if the class distribution were imbalanced, but it is in fact not. Furthermore, this peculiar behavior is observed even on the train set itself. Where did the high training accuracy come from if inference on the train set is exceptionally poor? I tried toggling the preprocessing steps (enable/disable scaling, mean subtraction, etc.) during inference, but to no avail...
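One common cause of this symptom is a mismatch between training-time and inference-time preprocessing. As a rough sketch (the file paths, mean file, and output blob name "prob" are placeholders), the standard pycaffe `Transformer` applied consistently at inference looks like:
```python
import numpy as np
import caffe

net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)  # placeholder paths
mu = np.load("mean.npy").mean(1).mean(1)            # per-channel mean used during training

transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))        # HWC -> CHW
transformer.set_mean("data", mu)                    # same mean subtraction as training
transformer.set_raw_scale("data", 255)              # load_image gives [0, 1]; net expects [0, 255]
transformer.set_channel_swap("data", (2, 1, 0))     # RGB -> BGR

image = caffe.io.load_image("test.jpg")             # placeholder image
net.blobs["data"].data[...] = transformer.preprocess("data", image)
probs = net.forward()["prob"][0]                    # "prob" is the usual softmax output blob
```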
Status: Issue closed |
kubeflow/pipelines | 1091837023 | Title: [backend] Pipeline first step stuck in running state even after completing
Question:
username_0: Hey, hope this is the right place to post this issue at. I'm new to Kubeflow and Kubernetes so please let me know what else would be useful to know.
### Environment
* How did you deploy Kubeflow Pipelines (KFP): Installed Kubeflow on Kubernetes 1.19 with manifests, see below
* KFP version: 1.7.0
* KFP SDK version: build version dev_local
* Server specs: 8 CPUs, 16GB RAM, 240GB SSD ([Hetzner Cloud CPX41](https://www.hetzner.com/cloud))
### Steps to reproduce
1. Install KubeFlow on Kubernetes 1.19 (used K3S) with manifests, full setup script in materials below
2. Go to Kubeflow dashboard
3. Start a Pipeline run for `[Tutorial] DSL - Control structures`
4. First step completes successfully (eg. logs "tails"), but stays stuck in running state
Terminating the run does nothing. I also tried running other pipelines and the result is the same.

### Expected result
The pipeline step should complete and run the rest of the pipeline.
### Materials and Reference
#### Setup on Ubuntu 20.04 server from scratch
```
sudo apt update -y && sudo apt upgrade -y
# Install docker
sudo apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
# Install k3s 1.19 (I tried 1.20 too which had the same issue, but 1.21 is too new for manifests)
export INSTALL_K3S_VERSION="v1.19.16%2Bk3s1"
curl -sfL https://get.k3s.io | sh -
# Get Kustomize 3.2.0
cd /opt/
wget https://github.com/kubernetes-sigs/kustomize/releases/download/v3.2.0/kustomize_3.2.0_linux_amd64
chmod +x kustomize_3.2.0_linux_amd64
ln -s /opt/kustomize_3.2.0_linux_amd64 /usr/bin/kustomize
# Install Kubeflow using manifests
git clone https://github.com/kubeflow/manifests.git
cd manifests
while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
# Portforward Kubeflow dashboard in new tmux session
tmux new -d -s kubeflow-dashboard-portforward "kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80"
```
#### kubectl get pods output
[Truncated]
apiVersion: v1
resourceVersion: '6984'
fieldPath: 'spec.containers{main}'
reason: Started
message: Started container main
source:
component: kubelet
host: ubuntu-2gb-fsn1-2
firstTimestamp: '2022-01-01T15:05:07Z'
lastTimestamp: '2022-01-01T15:05:07Z'
count: 1
type: Normal
eventTime: null
reportingComponent: ''
reportingInstance: ''
```
---
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
Answers:
username_0: Using the pns executor instead makes everything work as described [here](https://github.com/kubeflow/manifests#kubeflow-pipelines)
`kustomize build apps/pipeline/upstream/env/platform-agnostic-multi-user-pns | kubectl apply -f -`
So I assume I made a mistake in my Docker setup? Although not much about docker is mentioned in the manifests readme.
username_1: Hello @username_0 , you can switch over to emissary executor since that is going to be the default executor going forward. https://github.com/kubeflow/pipelines/issues/5714 |
open-telemetry/opentelemetry-java-contrib | 1006736602 | Title: [maven-extension] Disable Mojo spans
Question:
username_0: **Is your feature request related to a problem? Please describe.**
In playing with the extension today on a build of Quarkus, it generates 17400 spans as each Mojo/plugin in a module build has its own span.
Though this information is beneficial in diagnosing a problem it does add a lot more information than is probably necessary for regular project builds. It might also be interesting to provide a list of mojo names that should have spans and ignore the rest, so known plugins that are troublesome can always have spans created?
**Describe the solution you'd like**
Ability to turn off span creation at the mojo level. Which option should be the default is possibly another discussion.
**Describe alternatives you've considered**
It would be possible to provide a sampling strategy to exclude them, but I think it's better to prevent their creation in the first place if that's what is desired.
**Additional context**
Happy to work on a fix for this if there's interest and agreement
Answers:
username_1: @v1v @kuisathaverat it reminds me of the problem we faced with Jenkins observability. 17k spans is one order of magnitude higher than the problem we faced in CI pipeline traces :-)
I like the idea of being incremental and first allowing the Mojo spans to be bluntly disabled. Later, we will be able to refine the feature to support smarter filtering if it makes sense.
Note that this tells me that the challenge will come when we add unit test visibility. One span per unit test?
username_0: Ok, I will start playing around with this when I have a moment.
Whether we need a span per unit test would depend on the purpose of the spans.
If it's purely a view of how long a build is taking, does it need the really fine-grained level of a particular test? Or only a span for all the tests together, and then, if that set of tests takes too long, evaluate the logs for how long each test actually took to find the culprit?
Status: Issue closed
|
jordonbiondo/column-enforce-mode | 196301383 | Title: feature request: column-enforce-inhibit-major-modes
Question:
username_0: feature request: column-enforce-inhibit-major-modes
could we define a var `column-enforce-inhibit-major-modes` (or similar) to allow folks to enable column-enforce-mode globally, but exclude certain modes?
I'm an elisp newbie but would be willing to help if you're in favor of the idea. Thanks!
Answers:
username_1: I opted initially for a more open-ended approach that can be used to implement the behavior you want. See this PR: #9 for an example of what you requested. If that works for you, I'll push that change out.
Status: Issue closed
|
sopaco/sia | 288316319 | Title: Fix com.facebook.react.common.JavascriptException in ExceptionsManagerModule.java line 56
Question:
username_0: ### Version 1.0(1) ###
### Stacktrace ###
com.facebook.react.modules.core.ExceptionsManagerModule.showOrThrowError (ExceptionsManagerModule.java:56);
com.facebook.react.modules.core.ExceptionsManagerModule.reportFatalException (ExceptionsManagerModule.java:40);
com.facebook.react.bridge.JavaMethodWrapper.invoke (JavaMethodWrapper.java:374);
com.facebook.react.bridge.JavaModuleWrapper.invoke (JavaModuleWrapper.java:162);
com.facebook.react.bridge.queue.NativeRunnable.run (NativeRunnable.java);
com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage (MessageQueueThreadHandler.java:31);
com.facebook.react.bridge.queue.MessageQueueThreadImpl$3.run (MessageQueueThreadImpl.java:194);
### Reason ###
com.facebook.react.common.JavascriptException
### Link to App Center ###
* [https://appcenter.ms/users/dokhell/apps/51ATM/crashes/groups/c7868440252323025235c9fa30b98a28f0da719e](https://appcenter.ms/users/dokhell/apps/51ATM/crashes/groups/c7868440252323025235c9fa30b98a28f0da719e) |
cybersemics/em | 521219841 | Title: Disable gestures when Helper is open
Question:
username_0: When the Helper (modal dialog) is open ([`showHelper`](https://github.com/cybersemics/em/blob/dev/src/reducers/showHelper.js) is not `null`), gestures should be disabled. See [handleGesture](https://github.com/cybersemics/em/blob/b4fd8faa535eb423fdb620d4f22040ff8129634c/src/shortcuts.js#L573).
You can set `isMobile = true` in [browser.js](https://github.com/cybersemics/em/blob/dev/src/browser.js#L5) to simulate mobile. Click and drag the mouse to gesture. Click the Help link in the footer to open the Helper modal and see the available gestures.
Status: Issue closed |
oysterprotocol/webnode | 331268148 | Title: [Webnode] Analyze performance to ensure we do not stress clients CPU's
Question:
username_0: **Background**
*Web Workers*
As far as I know, web workers will not work for us. Web workers execute in their own scope and don't have access to the window object. We cannot use import statements, but must use importScripts, which is a function on self. I have this implemented and working, but the problem is that the methods we are trying to offload to the web workers use several libraries with import statements.
*Okay, but what does this mean?*
This means we can offload any function to the web workers as long as it doesn't have dependencies. Any function we can throw into the worker will execute on another thread. We can importScripts other files, but they themselves cannot import files.
*What do we gain from web workers?*
In my opinion, not much. I don't think web workers were designed to alleviate stress on the client, but to offload some tasks to another thread so you can continue to paint and render the UI instead of it hanging.
*Okay… so what now?*
I started learning how to do performance testing and am ready to profile the webnode to see which areas are slow, so we can find the best way to optimize. I think optimizing our build and controlling how much and how often we do PoW is the way to alleviate stress. This, in combination with test methods to see how much we are bogging down the client, should be enough.
Answers:
username_0: Start by analyzing the webnode with google's performance dev tools. I have done research on this.
username_0: I have created a baseline performance measurement and logged it here. https://app.tettra.co/teams/oysterprotocol/pages/webnode-optimizations
Status: Issue closed
|
gfx-rs/wgpu | 1083754795 | Title: Window temporarily hangs when resizing continuously, eventually panics
Question:
username_0: <!-- Thank you for filing this! Please read the [debugging tips](https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications).
That may let you investigate on your own, or provide additional information that helps us to assist.-->
**Description**
When I run a wgpu example program on Xfce 4 or Openbox (X11), then resize it continuously, I get validation errors, eventually followed by a panic at "Error in Surface::configure: parent device is lost".
**Repro steps**
- Clone wgpu at master (ec1d022a75898a3bc4e03589471d0ba7facc066c).
- Run an example (eg. `cargo run --example skybox`). This prints "Using NVIDIA GeForce GT 730 (Vulkan)".
- Continually resize the window, eg. drag the bottom-right corner of the window in a circle. This bug is much easier to trigger with a 1000 Hz gaming mouse than a 125 Hz office mouse.
**Expected vs observed behavior**
On Xfce and Openbox X11, while I resize, the window stops redrawing, and I get Vulkan validation errors in the terminal. On KDE X11, the window slows down drawing, but prints very few validation errors.
On Xfce and Openbox (not KWin), the window remains hung after I finish resizing, since resize events are queued up. With an office mouse, the extra time hung is around 20% of the time spent resizing. With a gaming mouse, it's around 200% of the time.
When I look in htop, while resizing, Xorg burns half a core of CPU and xfwm4/skybox burns less. When I stop resizing and leave the app hung, Xorg and skybox each burn half a CPU core.
After spending a long time continually resizing with a gaming mouse, the app panics. On Openbox, you can resize for 2 seconds, and wait >5 seconds while the app continually processes resize events before panicking. On Xfce, I got a crash after around 40 seconds of resizing (IIRC).
```
[2021-12-18T04:36:40Z ERROR wgpu_hal::vulkan::instance] VALIDATION [VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 (0x7cd0911d)]
Validation Error: [ VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 ] Object 0: handle = 0x559c38b3dd98, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x7cd0911d | vkCreateSwapchainKHR() called with imageExtent = (1507,751), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1230,481), minImageExtent = (1230,481), maxImageExtent = (1230,481). The Vulkan spec states: imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
[2021-12-18T04:36:40Z ERROR wgpu_hal::vulkan::instance] objects: (type: DEVICE, hndl: 0x559c38b3dd98, name: ?)
[2021-12-18T04:36:40Z ERROR wgpu_hal::vulkan::instance] VALIDATION [VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 (0x7cd0911d)]
Validation Error: [ VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 ] Object 0: handle = 0x559c38b3dd98, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x7cd0911d | vkCreateSwapchainKHR() called with imageExtent = (1471,772), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1198,641), minImageExtent = (1198,641), maxImageExtent = (1198,641). The Vulkan spec states: imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
[2021-12-18T04:36:40Z ERROR wgpu_hal::vulkan::instance] objects: (type: DEVICE, hndl: 0x559c38b3dd98, name: ?)
thread 'main' panicked at 'Error in Surface::configure: parent device is lost', wgpu/src/backend/direct.rs:214:9
stack backtrace:
0: rust_begin_unwind
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:517:5
1: std::panicking::begin_panic_fmt
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:460:5
2: wgpu::backend::direct::Context::handle_error_fatal
at ./wgpu/src/backend/direct.rs:214:9
3: <wgpu::backend::direct::Context as wgpu::Context>::surface_configure
at ./wgpu/src/backend/direct.rs:921:13
4: wgpu::Surface::configure
at ./wgpu/src/lib.rs:3267:9
5: skybox::framework::start::{{closure}}
at ./wgpu/examples/skybox/../framework.rs:276:17
6: winit::platform_impl::platform::x11::EventLoop<T>::drain_events::{{closure}}::{{closure}}
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:419:29
7: winit::platform_impl::platform::sticky_exit_callback
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/mod.rs:753:5
8: winit::platform_impl::platform::x11::EventLoop<T>::drain_events::{{closure}}
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:407:17
9: winit::platform_impl::platform::x11::event_processor::EventProcessor<T>::process_event
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/event_processor.rs:481:25
10: winit::platform_impl::platform::x11::EventLoop<T>::drain_events
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:406:13
11: winit::platform_impl::platform::x11::EventLoop<T>::run_return
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:278:13
12: winit::platform_impl::platform::x11::EventLoop<T>::run
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:392:9
13: winit::platform_impl::platform::EventLoop<T>::run
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/mod.rs:669:56
14: winit::event_loop::EventLoop<T>::run
[Truncated]
Is this a wgpu or winit bug? Should wgpu, winit, etc. discard resize events when they're backed up? What causes the "parent device is lost" error?
**Extra materials**
Screenshots to help explain your problem.
Validation logs can be attached in case there are warnings and errors.
Zip-compressed API traces and GPU captures can also land here.
Related: https://github.com/gfx-rs/wgpu/issues/1971, https://github.com/gfx-rs/wgpu/issues/2286, https://github.com/sotrh/learn-wgpu/issues/230
**Platform**
Information about your OS, version of `wgpu`, your tech stack, etc.
Operating System: Arch Linux
Xfce Version: 4.16
Kernel Version: 5.15.7-zen1-1-zen (64-bit)
Graphics Platform: X11
Processors: 12 × AMD Ryzen 5 5600X 6-Core Processor
Memory: 15.6 GiB of RAM
Graphics Processor: NVIDIA GeForce GT 730/PCIe/SSE2
Graphics Driver: Proprietary 470.74
Answers:
username_0: Why does KWin not cause the wgpu app to hang, and why does Xfce take longer to hang than Openbox? I think it's because KWin delivers at most one resize event per frame, not because it discards resize events while wgpu is backed up.
## Sleeping during resizing causes events to build up even on KDE
If I edit `wgpu/examples/skybox/main.rs` and add `std::thread::sleep(std::time::Duration::from_millis(100));` to `fn resize(...)`, then after I resize the window continuously, the window remains hung (stuck sleeping) for many seconds, meaning KDE doesn't (immediately) stop sending resize events. (I'm not sure if KDE avoids sending resize events entirely, or only sends resize events if the queue size is below a limit. Given that other WMs don't seem to have any backpressure at all, my guess is that KDE has none.)
## Different window managers produce different rates of resize() calls
I modified the example to count the number of resize() calls, then tested similar mouse motions on different X11 window managers.
When I resized the app continuously for 10 seconds on kwin_x11, it counted 263 calls (<30 resizes per second). Maybe KDE is just bad on my GPU, drops frames, and/or runs at a low framerate.
On xfce, 5 seconds produced 502 resizes with a wide circle (so 100 resizes per second, though the window was quite stuttery). If I drew a smaller circle so the window was smaller during resizing, I got >760 resizes (>150 resizes per second), since Xorg/xfwm4/the app spent less time redrawing a large window, and more time picking up mouse movements.
On openbox, 1 second of continuous resizing produced 872 resizes followed by a panic, and another attempt at 1 second of continuous resizing produced 985 resizes with no panic. This indicates that Openbox passes through my mouse's 1000 Hz refresh rate as up to 1000 resize events per second.
## xtruss
I wondered if the resize messages were seen by the app immediately, or buffered on the X server. When I ran `xtruss -e events=Expose -e requests=none ./target/debug/examples/skybox 2>&1 | rg Expose`, I saw that `Expose` messages with new sizes were sent during the resize, but not afterwards while the app processes these messages.
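For what it's worth, here is a rough sketch (not a proposed patch; `event_loop`, `window`, `surface`, `device` and `surface_config` are assumed to be set up as in the examples' framework) of coalescing resize events so the surface is only reconfigured once per redraw, whatever rate the WM delivers them at:

```rust
use winit::event::{Event, WindowEvent};
use winit::event_loop::ControlFlow;

let mut pending_size: Option<winit::dpi::PhysicalSize<u32>> = None;

event_loop.run(move |event, _, control_flow| {
    *control_flow = ControlFlow::Poll;
    match event {
        Event::WindowEvent { event: WindowEvent::Resized(size), .. } => {
            // Only remember the most recent size; don't touch the surface yet.
            pending_size = Some(size);
        }
        Event::MainEventsCleared => window.request_redraw(),
        Event::RedrawRequested(_) => {
            // Apply at most one surface reconfiguration per redrawn frame.
            if let Some(size) = pending_size.take() {
                surface_config.width = size.width.max(1);
                surface_config.height = size.height.max(1);
                surface.configure(&device, &surface_config);
            }
            // ... render the frame ...
        }
        _ => {}
    }
});
```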
username_1: @username_0 Are you getting the same issue on the latest master?
username_0: I still get this issue on master 39a0256bcb98876d9138fd01f25b48fc0d62a3c0. On Linux X11 Openbox, I get endless validation errors and a panic, as before:
```
[2022-01-25T00:21:41Z ERROR wgpu_hal::vulkan::instance] VALIDATION [VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 (0x7cd0911d)]
Validation Error: [ VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 ] Object 0: handle = 0x56213095a398, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x7cd0911d | vkCreateSwapchainKHR() called with imageExtent = (1455,847), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1076,615), minImageExtent = (1076,615), maxImageExtent = (1076,615). The Vulkan spec states: imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
[2022-01-25T00:21:41Z ERROR wgpu_hal::vulkan::instance] objects: (type: DEVICE, hndl: 0x56213095a398, name: ?)
[2022-01-25T00:21:41Z ERROR wgpu_hal::vulkan::instance] VALIDATION [VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 (0x7cd0911d)]
Validation Error: [ VUID-VkSwapchainCreateInfoKHR-imageExtent-01274 ] Object 0: handle = 0x56213095a398, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x7cd0911d | vkCreateSwapchainKHR() called with imageExtent = (1445,847), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1076,615), minImageExtent = (1076,615), maxImageExtent = (1076,615). The Vulkan spec states: imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
[2022-01-25T00:21:41Z ERROR wgpu_hal::vulkan::instance] objects: (type: DEVICE, hndl: 0x56213095a398, name: ?)
thread 'main' panicked at 'Error in Surface::configure: parent device is lost', wgpu/src/backend/direct.rs:214:9
stack backtrace:
0: rust_begin_unwind
at /rustc/02072b482a8b5357f7fb5e5637444ae30e423c40/library/std/src/panicking.rs:498:5
1: core::panicking::panic_fmt
at /rustc/02072b482a8b5357f7fb5e5637444ae30e423c40/library/core/src/panicking.rs:107:14
2: wgpu::backend::direct::Context::handle_error_fatal
at ./wgpu/src/backend/direct.rs:214:9
3: <wgpu::backend::direct::Context as wgpu::Context>::surface_configure
at ./wgpu/src/backend/direct.rs:921:13
4: wgpu::Surface::configure
at ./wgpu/src/lib.rs:3171:9
5: skybox::framework::start::{{closure}}
at ./wgpu/examples/skybox/../framework.rs:276:17
6: winit::platform_impl::platform::x11::EventLoop<T>::drain_events::{{closure}}::{{closure}}
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:419:29
7: winit::platform_impl::platform::sticky_exit_callback
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/mod.rs:753:5
8: winit::platform_impl::platform::x11::EventLoop<T>::drain_events::{{closure}}
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:407:17
9: winit::platform_impl::platform::x11::event_processor::EventProcessor<T>::process_event
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/event_processor.rs:481:25
10: winit::platform_impl::platform::x11::EventLoop<T>::drain_events
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:406:13
11: winit::platform_impl::platform::x11::EventLoop<T>::run_return
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:278:13
12: winit::platform_impl::platform::x11::EventLoop<T>::run
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/x11/mod.rs:392:9
13: winit::platform_impl::platform::EventLoop<T>::run
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/platform_impl/linux/mod.rs:669:56
14: winit::event_loop::EventLoop<T>::run
at /home/username_0/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.26.0/src/event_loop.rs:154:9
15: skybox::framework::start
at ./wgpu/examples/skybox/../framework.rs:229:5
16: skybox::framework::run
at ./wgpu/examples/skybox/../framework.rs:386:5
17: skybox::main
at ./wgpu/examples/skybox/main.rs:463:5
18: core::ops::function::FnOnce::call_once
at /rustc/02072b482a8b5357f7fb5e5637444ae30e423c40/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
On Windows 10, 39a0256bcb98876d9138fd01f25b48fc0d62a3c0, `cargo run --example skybox`, I noticed some oddities with a 1 second sleep added in `resize()`:
When resizing the window corner, the window size updates based on the mouse position once per second, but with an extra second of latency (so 1-2 seconds of latency). **Discarding all but one mouse cursor position each second avoids the unbounded buffering** causing this crash on Linux.
- The extra second (1 frame) of latency feels undesirable to me. Can this be resolved? Is it worth fixing or not? (Possibly related: https://raphlinus.github.io/ui/graphics/2020/09/13/compositor-is-evil.html)
Interestingly, when I drag the corner to the top/bottom of my screen, Windows 10's "maximize window vertically" updates position once per second with no extra latency.
The window size updates in lockstep with the window being rerendered at the new size. I don't think this happens on X11, and I suspect it's not possible. |
dComputeCore/golem-codepen | 777121200 | Title: Download Files
Question:
username_0: ### User story
As a requestor user I want to download files from my Slate so that I can use the output of my computation.
### What is missing from the app that's causing this pain point?
Download button.
### Do you have any proposed solutions?
Download button.
### Definition of Done
A way to download files.
### Additional context |
GetDotaStats/site | 57292738 | Title: Add new Lobby Explorer++ API
Question:
username_0: lobby_user_keep_alive
Get a better idea of who is on at any given time.
Answers:
username_0: We can jack the lobby_user_stats temporarily until the new API is rolled out in the client.
Status: Issue closed
username_0: No longer necessary with the new chat feature rolling out soon. |
cityofaustin/atd-vz-data | 541814706 | Title: VZV | Summary | Page Formatting
Question:
username_0: Implement a set of common rules for the spacing and positioning of quick views and charts.
Below is an example of how the Summary page could look based on these common rules:

Answers:
username_0: Make the VZV Summary page formatting print-friendly so that the Vision Zero team can continue to print and post in ATD offices.
username_0: Updating the requirements based on a conversation with @username_1. I forgot to specify in the requirements that we do not want to show navigation tabs that are not available. We need to remove the tabs for Engineering, Education, and Enforcement as those may never be built out. The user should only see tabs that they are able to access (i.e., the Summary and Map tabs). Also, the tab that is currently being viewed should show as active (reversed colors compared to inactive tabs).

Status: Issue closed
|
oh-my-fish/theme-bobthefish | 654216935 | Title: could not disable date
Question:
username_0: I do not want date/time shown on the right side of the terminal after each command.
I've tried: `set -g theme_display_date no` but it did not help.
How can I disable the date display?
Answers:
username_0: Any suggestions?
username_1: you can override `fish_right_prompt` (either to an empty function, to disable it completely, or to something more to your liking).
[see the "overrides" section of the README for more info](https://jira.corp.dropbox.com/browse/PAPER-3248?atlLinkOrigin=c2xhY2staW50ZWdyYXRpb258aXNzdWU%3D).
Status: Issue closed
username_0: Adding: `function fish_right_prompt; end` to my `config.fish` solved the problem. Thanks |
ethereumclassic/ethereumclassic.github.io | 580962222 | Title: Add Forks Diagram
Question:
username_0: The existing restructure lists the ETC forks.
It would be cool to have something like this https://consensus.corepaper.org/wiki/Ethereum_family
Answers:
username_1: Totally agree. If you can make it interactive with some js, that would be super fresh!
username_1: Name | Release date | Release block
-- | -- | --
Frontier | 2015-07-30 | 0
Frontier Thawing | 2015-09-08 | 200,000
Homestead | 2016-03-15 | 1,150,000
The DAO Bailout | 2016-07-20 | 1,920,000
Gas Reprice | 2016-10-24 | 2,500,000
Diehard | 2017-01-13 | 3,000,000
Gotham (5M20 Era 2) | 2017-12-11 | 5,000,000
Defuse Difficulty Bomb | 2018-05-29 | 5,900,000
Atlantis | 2019-07-29 | 8,500,000
Agharta | 2020-01-11 | 9,573,000
Gotham (5M20 Era 3) | 2020-03-17~ | 10,000,000
Status: Issue closed
|
helpshift/scyllabackup | 428643651 | Title: Support Google Storage
Question:
username_0: What should be done to support [Google Storage](https://cloud.google.com/storage/)?
TODO:
- [ ] add support of Google Storage in https://github.com/helpshift/spongeblob.py
P.S. I don't have time to implement it myself right now, I just want to show some interest and I hope to come back to it later.
Answers:
username_1: Sure @username_0 , Patches are welcome. I will do my level best to get them merged. |
shamblett/mqtt_client | 1163464472 | Title: How to decode a mqtt payload that contains utf8 characters?
Question:
username_0: With regard to [issue #46](https://github.com/shamblett/mqtt_client/issues/46):
The solution in the closed issue #46 was to use the UTF library.
"decode:
final MqttPublishMessage recMess = c[0].payload as MqttPublishMessage;
String msg = decodeUtf8(recMess.payload.message);"
But meanwhile the [utf library](https://pub.dev/packages/utf) has been discontinued. How can this be handled now?
Answers:
username_0: I found `utf8.decode(recMess.payload.message);` to be working, so I am closing the issue.
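For completeness, `utf8` here comes from `dart:convert`, so a minimal sketch of the handler looks roughly like this (callback wiring shortened):

```dart
import 'dart:convert';

import 'package:mqtt_client/mqtt_client.dart';

void onMessage(List<MqttReceivedMessage<MqttMessage>> c) {
  final recMess = c[0].payload as MqttPublishMessage;
  // Decode the raw payload bytes as UTF-8 so non-ASCII characters survive.
  final msg = utf8.decode(recMess.payload.message);
  print(msg);
}
```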
Status: Issue closed
|
nozzlegear/dotfiles | 271987031 | Title: `download.sh` needs to be replaced with `install-powershell.sh` in 'setup.sh'
Question:
username_0: I found the references to the PowerShell `download.sh` in [`setup.sh`](https://github.com/username_1/dotfiles/blob/23b73eba7ba1dc483e7055c6aad88b0418cd51b8/setup.sh#L49) via a search on GitHub.
The `download.sh` file under the `tools` folder of the [PowerShell repository](https://github.com/PowerShell/PowerShell) was deprecated and now is removed. Please use the `install-powershell.sh` file from the same folder to install powershell on Linux.
Status: Issue closed
Answers:
username_1: Thanks for the warning! I've replaced `download.sh` with `install-powershell.sh`. |
codesquad-member-2020/airbnb-01 | 622270443 | Title: 0521
Question:
username_0: - What I did yesterday
- What I'll do today
- Issues to share
Answers:
username_1: * What I did yesterday
  * Nothing
* What I'll do today
  * Handle different screen sizes
  * Menu list networking
username_2: - What I did yesterday
  - Pair programming
  - Pair review & DDD Start
- What I'll do today
  - Reach agreement
username_0: - What I did yesterday
  - Pair programming
  - Discussed the skeleton structure
- What I'll do today
  - Agree on the skeleton structure
  - Build detail Accommodation
Status: Issue closed
|
strophe/strophejs | 289185535 | Title: Authentication fail after #271
Question:
username_0: Hi,
Strophe can't establish a connection since #271 was merged. It was working with the same settings before.
I'm using WebSocket and here is the log:
send:
```
<open xmlns='urn:ietf:params:xml:ns:xmpp-framing' to='localhost' version='1.0'/>
```
receive:
```
<open xmlns='urn:ietf:params:xml:ns:xmpp-framing' id='5952572546526895809' version='1.0' xml:lang='en' from='localhost'/>
```
```
<stream:features xmlns:stream='http://etherx.jabber.org/streams'><mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><mechanism>PLAIN</mechanism><mechanism>X-OAUTH2</mechanism><mechanism>SCRAM-SHA-1</mechanism></mechanisms></stream:features>
```
send:
```
<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='X-OAUTH2'>ADkyMGNmYjhkNjQ5M2IxNzI1OTM5MWY5ODcwMjY0ZTc0QGxvY2FsaG9zdAAzNzg3ZTA3ZjNmY2Q2OTNhZTQwMGY3Mjg1ODdiMzdjYg==</auth>
```
receive:
```
<failure xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><not-authorized/><text xml:lang='en'>Invalid token</text></failure>
```
```
<failure xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><not-authorized/><text xml:lang='en'>Invalid token</text></failure>
<close xmlns='urn:ietf:params:xml:ns:xmpp-framing'/>
```
Please advise.
Answers:
username_0: @Egorikhin @username_1 Please advise.
username_1: Looks like your server is offering X-OAUTH2 and then strophe.js chooses it as auth mechanism instead of SHA1.
Strophe chooses it above SHA1 because (like OAUTHBEARER, which is a related auth mechanism) it's given higher priority than SHA1 in the code.
I think this is probably wrong, and that like the EXTERNAL auth mechanism, X-OAUTH2 and OAUTHBEARER should be given lower priority than SHA1. They should likely be the 3 lowest prioritized auth mechanisms.
@username_2 added the OAUTHBEARER auth mechanism, I'd like to hear from him whether there was a good reason why he gave it a higher priority than SHA1.
For now, as a workaround you can disable X-OAUTH2 as auth mechanism on your server.
username_2: I didn't give the priorities too much thought, to be honest. I assumed that, if enabled on the server, it'd be preferable at least over PLAIN and DIGEST-MD5, which is why I added it to the top of the list (at the time of the commit that introduced it).
Status: Issue closed
username_1: @username_0 Why did you close the ticket, I don't see a pull request for this?
username_0: Sorry for the delay.
I changed the priorities like this:
```
registerSASLMechanisms: function (mechanisms) {
this.mechanisms = {};
mechanisms = mechanisms || [
Strophe.SASLOAuthBearer,
Strophe.SASLXOAuth2,
Strophe.SASLExternal,
Strophe.SASLAnonymous,
Strophe.SASLMD5,
Strophe.SASLPlain,
Strophe.SASLSHA1
];
mechanisms.forEach(this.registerSASLMechanism.bind(this));
},
```
But it didn't work, I still get the not-authorized response. Please advise.
username_0: Sorry for the delay.
Do the following mechanisms have the ideal priorities?
```
70 SCRAM-SHA1
60 PLAIN
50 DIGEST-MD5
40 ANONYMOUS
30 EXTERNAL
20 OAUTH-2
10 OAUTH-BEARER
```
username_1: Currently DIGEST-MD5 has a higher priority than PLAIN, and it should stay like that, since using it is preferable to using PLAIN (if both are available).
I propose the following:
```
70 SCRAM-SHA1
60 DIGEST-MD5
50 PLAIN
40 OAUTH-BEARER
30 OAUTH-2
20 ANONYMOUS
10 EXTERNAL
```
username_1: This assumes that the client supports OAuth login. Strophe.js supports the mechanism, but that doesn't mean the client properly supports OAuth login, so logging in fails, as @username_0 found out.
If the client developer wants to make OAuth login a higher priority than PLAIN etc. then they can still change the default priorities encoded in Strophe.js.
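For illustration, a sketch of what such an override could look like on the client side (treat the `priority` property tweak as an assumption about the mechanism objects rather than documented API; `token`/`onConnect` are app-specific):

```javascript
// Raise OAUTHBEARER above SCRAM-SHA-1 (priority 70) before connecting.
Strophe.SASLOAuthBearer.prototype.priority = 80;

// Or only register the mechanisms this client actually implements.
const conn = new Strophe.Connection('wss://example.org/xmpp-websocket');
conn.registerSASLMechanisms([Strophe.SASLOAuthBearer, Strophe.SASLSHA1]);

conn.connect('user@example.org', token, onConnect); // token/onConnect are app-specific
```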
username_0: #290 here is the PR, please review.
username_0: @username_1 pr is ready. #290
Please review.
username_1: Merged, thanks @username_0
Status: Issue closed
|
getgrav/grav-plugin-admin | 152658225 | Title: Double tooltips in editor
Question:
username_0: Hover over a button in the editor to display a standard tooltip and a custom tooltip.
Tested in Firefox and Chrome on Windows.
Screenshot:
https://www.dropbox.com/s/nm9l4g9zckpp0v2/double-tooltips.jpg?dl=0
Answers:
username_1: One is an "alt" tag, and one is a tooltip.. Alt tags generally don't show up unless you leave the mouse on the item for a few seconds, where as the tooltip shows up instantly.
Status: Issue closed
username_1: Actually it's not an alt tag, but it's displaying like one.. reopening to investigate.
username_1: Should be sorted in develop branch
Status: Issue closed
|
WarEmu/WarBugs | 1144219002 | Title: Can't put talismans into armor or weapons.
Question:
username_0: **Expected behavior and actual behavior:** a single level 1 talisman unstacked, armor with open talisman socket, click talisman so it's on cursor, click armor, talisman is inserted into armor and character receives bonuses.
Actual behavior - click talisman, it's on cursor, click armor with talisman socket, receive error message "cannot equip an item that is stacked, please report to the bugtracker"
talisman will not socket
**Steps to reproduce the problem:** start new character, get armor with open talisman socket, get level 1 talisman, attempt to insert, fail, receive error message

Answers:
username_1: Can you try putting the talisman in your armor when it is unequipped (in your backpack)?
username_0: I just figured out I have to shift right click on the item when it's in the backpack. Successfully enhanced my items with talismans. Sorry for the post, I didn't know what to do.
username_1: Good to hear! You should probably close this ticket then.
username_2: okay! glad to hear!
Status: Issue closed
|
matrix-org/matrix-appservice-gitter | 192087788 | Title: @matrixbot needs a profile
Question:
username_0: Since @matrixbot is the front-facing endpoint for all Matrix bridging into Gitter, it might be nice if it actually had some profile information to tell Gitter-side users what it's about. Otherwise it might look like a random spambot, that e.g. may be responsible for it being `/ban`'ed from some rooms (see #35)
Answers:
username_0: Actually it might be better to move it to a new dedicated gitter account. |
micalevisk/me-telegram-bot | 306296678 | Title: autoreply fails when run on Heroku
Question:
username_0: 
Although the version is the same as the one tested locally (v6.0.3), the [handler_action_auto_reply](https://github.com/username_0/me-telegram-bot/blob/a3892f9bb937fae4a39b82af4e952bcf99462881/setup.py#L12) does not work as expected...
Status: Issue closed |
sta/websocket-sharp | 797702066 | Title: Node Socket.IO server not receiving pong reply/C# client not receiving ping
Question:
username_0: Hi, I've searched through open/closed issues for someone in the same scenario as me and unfortunately haven't been able to find a solution.
I can see in a capture that my client is sending pings to the server, which responds with pongs (as expected); however, the server seems to be dropping the client as it never receives a pong.
(I tried adapting some existing examples with ws.OnPing [event] or Ws.Pong() [method] but neither of these seem to be valid)
My Socket.IO server (Node) is sending pings and expecting a pong:
```
engine:socket writing ping packet - expecting pong within 5000ms +24s
engine:socket sending packet "ping" (undefined) +0ms
engine:socket flushing buffer to transport +0ms
```
And the timeout message when the client is dropped:
```
engine:ws closing +30s
socket.io:client client close with reason ping timeout +55s
socket.io:socket closing socket - reason ping timeout +30s
```
I have also got the following as my client:
```csharp
ws.EmitOnPing = true;
ws.OnMessage += (sender, e) => {
    if (e.IsPing)
    {
        // WriteLine to show this is being hit doesn't work
    } else {
        // WriteLine to show a message received does work
    }
};
```
I've also tried to manually implement an engine.io event from client to server using "ws.Send("3")" as the engine code for a pong, however this will only work when it's inside of my "else" statement so I'm thinking that there's something I'm missing - any suggestions?
I know I could do something hacky but doesn't seem like the solution I should be using when there's a proper way to handle this.
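For reference, a rough sketch of that manual approach (Socket.IO/engine.io pings arrive as ordinary text messages, "2", not as WebSocket protocol pings, so `OnMessage` is where they show up; `HandleMessage` is a made-up placeholder):

```csharp
ws.OnMessage += (sender, e) => {
    if (e.IsText && e.Data == "2")
    {
        ws.Send("3");            // reply to the engine.io ping with an engine.io pong
    }
    else
    {
        HandleMessage(e.Data);   // hypothetical application-level handler
    }
};
```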
Answers:
username_0: Note: I tried checking for "2" instead of "e.IsPing" and this does work, but it seems to make having "IsPing" redundant if it doesn't even fire here. Maybe I'm not using it correctly, but from what I've read in the documentation this is the suggested method.
Some further information/clarification from dev would be great.
Thanks
Status: Issue closed
|
factor/factor | 841101176 | Title: Profile unoptimized/optimized code/calls
Question:
username_0: Is it possible to have the sampling Profiler store information about whether the sample was taken in unoptimized or optimized code? Or alternatively, the respective invocation counts?
The motivation is that, when working on compiler/optimizer code, it can have non-obvious effects on bootstrap time. With that, it may be easier to figure out a better pre-compile order for bootstrapping. |
debrief/pepys-import | 586412133 | Title: Don't store `prev_location` in State
Question:
username_0: I'd like to request a change to how we keep track of the previous location for a set of State measurements.
We currently store `prev_location` in each state as we work through them. But this has to be done in every parser.
It would be more effective for the `common_db` `validate()` code to remember the previous state, and pass the new and previous states to `Basic` and `Enhanced` validators.
Once we have the previous state, we can pass it to `EnhancedValidator`, then on to `speed_loose_match_with_location` and it will have the current/previous locations _AND_ the current/previous timestamps, and it can calculate the speed to travel between them.
I've started to write the pseudo-code to perform the above in branch #hotfix/207_haversine_formula.
But, it won't work until we change how we handle `prev_location`.
Answers:
username_1: I think we can directly move the prev_location to the EnhancedValidator if we are going to pass all measurements to the validators (Issue #173). What do you think @username_0 ?
username_0: The issue is that we also need `prev_timestamp` in the enhanced validator.
Aaah, we could cache previous location _and_ time in the `validate()` loop, and pass them in.
It will only become clumsy if/when we require more previous attributes for quality testing.
Oh, and it means the API for all enhanced validators has to include `prev_location` and `prev_timestamp` parameters.
If they just contained `previous` and `current` state objects, it would be tidier, IMHO.
username_1: Okay, I think you're right. It sounds better to send only previous and current state objects to the validator. I'll cache and assign `prev_location` and `prev_timestamp` in the `validate()` method.
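A rough sketch of that shape (class and method names here are illustrative, not the actual pepys-import API):

```python
class EnhancedValidator:
    """Illustrative only: receives the current and previous state objects."""

    def __init__(self, current_state, prev_state, errors):
        self.current = current_state
        self.previous = prev_state
        self.errors = errors

    def validate(self):
        # Current/previous locations AND timestamps are both available here,
        # so the implied speed between the two fixes can be checked directly.
        ...


def validate(measurements, errors):
    prev_state = None
    for current_state in measurements:
        if prev_state is not None:
            EnhancedValidator(current_state, prev_state, errors).validate()
        prev_state = current_state
```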
Status: Issue closed
|
swordapp/swordv3 | 536246079 | Title: Guidance on multiple metadata documents in different formats
Question:
username_0: It's currently unclear from the spec what happens if a client uploads multiple metadata documents using different metadata formats. Options include:
* maintain them alongside each other
* replace the previous metadata document that has a different format
* return a 409 Conflict response, to say that only one format is supported at the time
If more than one is supported at once, there should also be guidance on whether a SWORD service should attempt to reconcile between them.
Answers:
username_1: Metadata formats are for deposit, but we should make it clear that there's no guarantee that the server will be able to expose the metadata in the same format in which it was deposited.
- [ ] add a section 19.3 to cover how alternative metadata formats are exposed in the Service Document
- [ ] Find a suitable place to explain that metadata in does not strictly have to equate to metadata out
username_1: @username_0 probably one for the implementers guide here, too, about choices in managing additional formats. |
getsentry/sentry-cli | 1084808297 | Title: Unable to upload source maps
Question:
username_0: error: API request failed
caused by: sentry reported an error: [Errno 13] Permission denied: '/app' (http status: 400)
```
3. But I have all the permissions
```
$ ./node_modules/.bin/sentry-cli info
Sentry Server: https://sentry.nugit.jco
Default Organization: nugit
Default Project: agg
Authentication Info:
Method: Auth Token
User:
Scopes:
- project:read
- project:write
- project:admin
- team:read
- team:write
- team:admin
- project:releases
- org:read
- org:write
- org:admin
```
### Expected Result
Successfully uploading sourcemaps
Status: Issue closed
Answers:
username_0: @username_1 the proposed workaround doesn't fix this issue. I'm still experiencing it 🤔
username_1: @username_0 create a new issue at https://github.com/getsentry/self-hosted with linking this and two issues above please |
shrimpy-dev/shrimpy-python | 749157942 | Title: Encryption
Question:
username_0: It would be nice to know what encryption is done to protect the transmission of the private key, if any. As the README.md currently stands, it is unclear if there is any protection or if the private key is being publicly broadcast. I didn't see any use of SSL. Is there another encryption method used? Are the public and private keys combined so that Shrimpy decodes the private key via the public one? Some clarity on this would be helpful. |
pysat/pysatSeasons | 1087190332 | Title: DOC: update demo codes
Question:
username_0: **Describe the bug**
- Demo codes currently use old `custom.attach` rather than `custom_attach`.
- Add docstrings to functions in demo codes to assist flow for users
- Scrub for modern style
**To Reproduce**
Try to run a demo with pysat 3.0+
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Version [e.g. 22]
- Other details about your setup that could be relevant
**Additional context**
Add any other context about the problem here. |
PowerShell/PowerShellEditorServices | 134894085 | Title: Add capabilities data to InitializeResponse body in debug adapter
Question:
username_0: The VS Code team has added a capabilities list that our debug adapter can opt into:
https://github.com/Microsoft/vscode/blob/master/src/vs/workbench/parts/debug/common/debugProtocol.d.ts#L495
In the following issue, Isidor recommended that we set `supportsEvaluateForHovers` to true so that data tips are evaluated correctly. We should also determine which other capabilities we'd like to support and file separate issues for enabling those.
Related: https://github.com/Microsoft/vscode/issues/1804
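For illustration, the relevant part of the response body could look something like this (capability names come from the protocol definition linked above; which ones we actually enable is what needs deciding):

```typescript
// Sketch only: capabilities advertised back to VS Code in the InitializeResponse body.
const capabilities = {
    supportsEvaluateForHovers: true,        // so data tips are evaluated correctly
    supportsConditionalBreakpoints: true    // also under discussion
};
```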
Answers:
username_1: We should add support for this one as well: `supportsConditionalBreakpoints`.
username_0: Yep, we'll take care of that in this issue: https://github.com/PowerShell/PowerShellEditorServices/issues/94
Status: Issue closed
|
AlphaWallet/alpha-wallet-android | 556011377 | Title: Use API key for accessing Etherscan
Question:
username_0: I thought this was done by @colourful-land recently, but I can't find it. So just to be safe.
Refer to: https://github.com/AlphaWallet/alpha-wallet-ios/issues/1677
Answers:
username_0: Ah, probably #1077
username_1: Confirmed this was done by @colourful-land in class TransactionsNetworkClient.
Status: Issue closed
|
openssl/openssl | 949467502 | Title: Question about SSL routines:SSL_renegotiate:wrong ssl version:ssl/ssl_lib.c:2127
Question:
username_0: In order to test renegotiation function, i use openssl s_client -connect 172.16.70.82:8200, and the middle output i don't list detail, only pick some part:
1)
···
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
···
Does this mean the service on port 8200 does not support renegotiation?
2)
···
HEAD / HTTP/1.0
R
RENEGOTIATING
140371330676480:error:1420410A:SSL routines:SSL_renegotiate:wrong ssl version:ssl/ssl_lib.c:2127:
···
What is the possible reason?
My system OpenSSL version is OpenSSL 1.1.1g, and the service on port 8200 uses OpenSSL 1.1.1k.
Answers:
username_1: With `openssl s_client -connect 172.16.70.82:8200`, TLS 1.3 will be negotiated. However, TLS 1.3 does not support renegotiation.
If renegotiation is required, use this command instead: `openssl s_client -connect 172.16.70.82:8200 -tls1_2`
username_2: The previous comment answered this part correctly.
Status: Issue closed
|
manan025/DS-Algo-Zone | 1012983302 | Title: Selection Sort
Question:
username_0: ## 🚀 Feature
Selection Sort - an algorithm used to sort an array.
### Have you read the Contribution Guidelines?
Yes
## Pitch
Basic Sorting Algorithm
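For reference, a minimal Python sketch of the algorithm being requested (implementations submitted in each language should of course follow the repo's own structure):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the smallest remaining element."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr


print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```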
## Assignees
(Do not make changes in this section until asked to do so)
C -
C# -
C++ -
Go -
Java -
Javascript -
Kotlin -
Python -
Answers:
username_0: @username_2 would like to contribute in java
username_1: i want to solve in python
username_2: @username_0 - Java
@username_1 - Python
assigned
username_3: I want to contribute in c++
username_3: pls assign me
username_2: @username_3 - C++ assigned
username_4: @username_2 I would like to contribute in C
username_2: @username_4 - C assigned
username_5: I would like to contribute in JavaScript
username_2: @username_5 - Javascript assigned
username_4: I am sorry unassigne me from C , I thought I was assigned in C++ , but by mistake I wrote C in place of C++
username_2: Oh okay.
username_6: I would like to contribute in C
username_2: C - @username_6
assigned
username_7: @username_2 I would like to this in Golang.
username_2: @username_7 - Go assigned
username_8: I would like to contribute in Selection Sort for Kotlin kindly assign me for it.
username_8: I can also do Selection Sort in C++, assign me to C++ as well |
floooh/sokol | 851891527 | Title: emscripten + fibers
Question:
username_0: This may not be an issue with sokol, but any help much appreciated;
I'm using emscripten fibers to simulate threads. I have the sokol frame callback swap to a fiber, then swap back and continue. This works until frame returns; then everything stops. No more frames are rendered and there are no errors, although the event callback is still happening.
If i hack `sokol_app.h/_app_emsc_run` as follows;
```
/* start the frame loop */
//emscripten_request_animation_frame_loop(_sapp_emsc_frame, 0);
emscripten_set_main_loop(_sapp_emsc_frame_void, 0, true);
```
where
```
_SOKOL_PRIVATE void _sapp_emsc_frame_void()
{
// XX HACK!
_sapp_emsc_frame(0, 0);
}
```
it works!!
Is there some reason the main loop via animation frame does not work with fibers but the standard main loop does? Or perhaps it has something to do with frame timing as calculated by the caller of the animation frame loop.
My hacks seem to work fine, but I'd rather like to know what's really going on and not have to hack `sokol`.
Thanks for any help.
Answers:
username_1: Hmm, no idea TBH. I haven't dabbled yet with Asyncify stuff (which emscripten fibers are based on). The documentation for ```emscripten_set_main_loop()``` reads like there's quite a bit of special handling for threads under the hood (and maybe this extends to asyncify):
https://emscripten.org/docs/api_reference/emscripten.h.html#c.emscripten_set_main_loop
...while ```emscripten_request_animation_frame_loop()``` is most likely just a very simple wrapper for JS ```window.requestAnimationFrame()```.
Maybe you could ask on the emscripten discussion group what the under-the-hood differences between the two functions are (e.g. whether the ```emscripten_request_animation_frame_loop()``` function is incompatible with asyncify.
For sokol_app.h we could add a boolean flag to ```sapp_desc``` (e.g. ```emsc_use_main_loop``` or similar), to select between the two approaches. But I agree, it would be better to first investigate what's actually going on under the hood ;)
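For illustration, usage of such an opt-in flag could look roughly like this (the `emsc_use_main_loop` field is only the suggestion above and does not exist in sokol_app.h yet; `init`/`frame`/`cleanup` are the app's own callbacks):

```c
sapp_desc sokol_main(int argc, char* argv[]) {
    (void)argc; (void)argv;
    return (sapp_desc){
        .init_cb = init,
        .frame_cb = frame,
        .cleanup_cb = cleanup,
        .width = 800,
        .height = 600,
        /* hypothetical flag: fall back to emscripten_set_main_loop() */
        .emsc_use_main_loop = true,
    };
}
```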
Status: Issue closed
|
square/spoon | 177293250 | Title: Convert sample to gradle
Question:
username_0: I want to use spoon-sample to test some things with a wonky emulator (hopefully to debug a screen capture problem on some emulators). I'm working in Android Studio and noticed that the sample is built with Maven. I thought about converting just spoon-sample to Gradle. Any thoughts on usefulness of this? I want to get others input before I take the time to do this.
Answers:
username_1: That would require converting the whole project and release process to Gradle which I'm not really keen to do. Why not just import the project into IDEA?
username_2: It's worth noting that right now we don't even include `spoon-sample` in the top-level pom.xml because it breaks `mvn package`. I'd be ok with just converting it to gradle since it's not really a viable part of the main project anymore.
username_1: Oh sweet. Go for it then. |
Vita3K/compatibility | 452455387 | Title: Damascus Gear: Operation Tokyo [PCSE00518]
Question:
username_0: # Game summary
- Game name: Damascus Gear: Operation Tokyo
- Game serial: PCSE00518
- Game version: 1.00
# Vita3K summary
- Version: v0.1
- Build number: 1063
- Commit hash: https://github.com/vita3k/vita3k/commit/050c22b3
# Test environment summary
- Tested by: username_0
- OS: Windows 10
- CPU: AMD Ryzen 2700X
- GPU: NVIDIA GTX 1080 Ti
- RAM: 16 GB DDR4 3600 mhz
# Issues
- Crash on Trophy create ?
# Screenshots


# Log
[vita3k.log](https://github.com/Vita3K/compatibility/files/3256957/vita3k.log)
Answers:
username_0: On golden pr by @pent0, green fixed
# Screenshots

username_0: # Issue :
- IME function missing when trying to set the username
# Screenshots:



# Log:
[vita3k.log](https://github.com/Vita3K/compatibility/files/3807249/vita3k.log) |
fjordllc/bootcamp | 1010584696 | Title: Practices from a different course are shown in the practice selector when creating Docs and Q&A
Question:
username_0: ## Summary and reproduction steps
The following is the Q&A creation screen when logged in as a user of the "Rails Programmer Course".
The "Automated Testing (JavaScript)" practice, which is not configured for the Rails Programmer Course, is displayed.

The same thing also happens on the Docs creation screen.
## Expected behavior
Issue #3209 fixed this for daily reports only.
If it is preferable to restrict the selection to "practices of the courses the user is enrolled in" for question and Docs creation as well, I think the same change would be desirable.
## Related issue
Ref: #3209
Status: Issue closed |
easylist/easylist | 906799708 | Title: [NSFW] hentai-foundry.com
Question:
username_0: <!--
Easyprivacy requests:
** If a site implements any tracking or monitoring, UA/IP/Geo checks, browser detection, analytics, telemetry, linking to third-partys, pixels, referrers, fingerprinting, event/perf logging etc. Regardless how helpful or needed the script(s) are, it will be blocked in Easyprivacy. Privacy comes first and the block on these scripts will remain in place.
Any additions, changes or removals is at the Authors discretion.
You're free to counterargue (to a certain point) if you disagree with the decision.
To avoid being banned, don't constantly re-open or create new (related) issue reports.
-->
<!-- Just include the website URL in the Title line of this issue report -->
### List the website(s) you're having issues:
`https://www.hentai-foundry.com/pictures/user/ghosthart/755428/Huevember-day-19`
### What happens?
Some new ads have been added to the random selection of the topmost banner, and those new ads are not blocked.
### List Subscriptions you're using:

### Your settings
<!-- Just to ensure there is no issues or conflicts with other webbrowser extensions.
Disable Noscript, Ghostery, Disconnect, HTTPS Everywhere, Privacy Badger before reporting (and re-test with them disabled).
Just ensure you're running just one Adblock extension only -->
- OS/version: Windows 10 21H1 x64
- Browser/version: Chrome 91.0.4472.77 x64
- Adblock Extension/version: uBO 1.35.3b7 + IDCAC 3.3.0
### Other details:
I propose `hentai-foundry.com##main > p[style]`
Screenshots:
* `https://images2.imgbox.com/94/30/Z3rNzXLU_o.png`
* `https://images2.imgbox.com/b8/8a/5fkIQBkK_o.png`
<!-- If you suspect certain filters (this helps spending time to debug it manually).
If you have a screen shot of the issue or advert, this will help to highlight it. --><issue_closed>
Status: Issue closed |
graphql-java/graphql-java | 177194012 | Title: Context argument to GraphQL#execute() should be explicit in next breaking version
Question:
username_0: The reference implementation now has an [explicit context value](https://github.com/graphql/graphql-js/commit/d7cc6f9aed462588291bc821238650c98ad53580). So in that implementation, [`GraphQL#execute()`](https://github.com/graphql/graphql-js/blob/v0.7.0/src/execution/execute.js#L116-L117) now looks like:
```
export function execute(
schema: GraphQLSchema,
documentAST: Document,
rootValue?: mixed,
contextValue?: mixed,
variableValues?: ?{[key: string]: mixed},
operationName?: ?string
): Promise<ExecutionResult> {
```
We should probably follow in our next breaking version.
Answers:
username_1: @username_0 - how keen are you to break this?
I think we should use a Parameter pattern so we can 1) reduce the number of arguments and 2) add new parameters without method overloading and breaking API
username_2: Currently our `context` argument is presented as `root` to `DataFetchers`. This is confusing and would be happy to clear it up.
username_2: This will not come in 3.0.0, added to 4.0.0.
username_2: This is the proposal:
* We will add a `root` object: This will be presented as `root` to `DataFetchers`
* `root` will be the `source` object for the first query
This is a breaking change.
username_2: PR: #456
Status: Issue closed
username_2: the PR is merged and will be in the next release |
department-of-veterans-affairs/va.gov-team | 695971770 | Title: Upgrade StatsD Exporter
Question:
username_0: ## The Problem
[We currently install version 0.3 of `statsd-exporter` for the vets-api AMI build](https://github.com/department-of-veterans-affairs/devops/blob/6bb7a512265cec2996b2bd0107605bc76572526b/ansible/build/roles/prometheus/defaults/main.yml#L25). The latest version of `statsd-exporter` as of writing this ticket is [0.18](https://github.com/prometheus/statsd_exporter/releases/tag/v0.18.0). We want to be on a more recent version.
## Extra Context
It became apparent how far behind we were when `statsd-exporter` began to repeatedly crash when some bad StatsD metrics were introduced in a PR to `vets-api` - [link](https://github.com/department-of-veterans-affairs/va.gov-team/issues/13265#issuecomment-687360859).
It's thought that if we upgrade to at least 0.10.2 where [this PR adds functionality to handle StatsD metrics with inconsistent labels](https://github.com/prometheus/statsd_exporter/pull/194), we can better defend against the `statsd-exporter` repeatedly crashing (which causes loss of metrics gathering).
## Work to be Done
- [ ] Update our ansible builds to use a more recent version of `statsd-exporter`
Answers:
username_0: Looks good to close for me. 👍
Status: Issue closed
|
GluuFederation/oxAuth | 241047935 | Title: oxAuth should initialize custom scripts in separate thread
Question:
username_0: oxAuth has AppInitializer, which initializes all components in the right order at startup. It's a single-threaded process. The problem happens with the custom script manager, which the application also initializes at startup.
A few custom scripts try to download metadata from the same oxAuth instance. As a result a script (example: u2f) can't download it, which leads to script initialization failure.
Status: Issue closed
Answers:
username_0: Implemented in oxcore-service. Now the custom script manager initializes in parallel with the main initialization process. |
svenvc/P3 | 385917440 | Title: Additional types
Question:
username_0: Using glorp, I am trying to specify `platform serial` or `platform bigint` and realising that neither exists. The serial type in my experience seems pretty standard for primary key use. Would it be hard to add support for these? I understand that array support may be a bit more involved but is also rather more rare in use than serial types.
Answers:
username_1: Hmm, that is indeed not good.
The list of supported/implemented types can be found in P3Converter class>>#typeMap and since int2, int4 and int8 are already supported, it can't be too hard to add support for more integer types. All that is probably needed is to find the corresponding type oid and add entries. You can also find that type oid from the error you are now seeing.
Could you try that yourself ?
username_0: I haven't really grokked how Glorp and P3 work yet to understand what's going on. But a serial is just an integer (int4) with an autoincrementing sequence attached. Likewise a bigserial is just an int8 with a sequence. I had a look in pg_catalog.pg_type but found no references to serial or bigserial there. For bigint it's really just a synonym, so one could use int8 instead.
But when describing a table column as being of type `platform serial` using Glorp and then creating the table, I end up with a column without any autoincrementing sequence.
username_1: You have to make a difference between P3 and Glorp.
P3 is a PostgreSQL client that allows you to execute textual SQL queries and process the results converting it into native types.
Glorp is an object-rdbm mapper on top of some DB client. The small class https://github.com/username_1/P3/blob/master/P3-Glorp/P3DatabaseDriver.class.st is all there is to Glorp from the perspective of P3.
I did the following in psql command line against PostgreSQL 9.6
```
create table test_1 (s smallint, i integer, b bigint, id serial);
insert into test_1 values (1, 2, 3);
insert into test_1 values (1, 2, 3);
select * from test_1;
 s | i | b | id
---+---+---+----
 1 | 2 | 3 |  1
 1 | 2 | 3 |  2
```
In Pharo with P3 loaded, I did
```
(P3Client new url: 'psql://sven@localhost') in: [ :client |
  [ client query: 'select * from test_1' ] ensure: [ client close ] ].
```
And got the correct results back. The data being
```
#(#(1 2 3 1) #(1 2 3 2))
```
So from the P3 standpoint all these types are supported and work correctly, AFAICT.
It seems you have some issues with Glorp. Those should be asked elsewhere, I would start with the Pharo mailing lists. Make sure to add some executable code.
Did you try running the Glorp unit tests on your setup ? When I did that a while ago, most of them were OK (like 80/90 %).
username_0: I do understand the general difference between Glorp and P3. I just assumed that glorp was somehow using the P3 driver for the sql statement generation when creating tables. I could have made that clearer from the get go but I saw no reference to the `big*` or `*serial` types in `P3Converter supportedTypes` and just assumed that the failure to create a database sequence along with the integer column was that of the driver.
I ran across this in Glorp-Platform: `PostgreSQLPlatform >> serial`:
```
^ self typeNamed: #serial ifAbsentPut: [GlorpSerialType new typeString: 'integer' ]
```
I interpreted that as a fallback and assumed that was just checking the capability of the driver. I was apparently mistaken. Sorry about the noise.
Entirely unrelated to this: are you planning on adding support for prepared statements? SQL injection is a nasty thing.
username_1: If you tell your Glorp session #logging: true and open a Transcript you can see all actual SQL statements being executed.
But Glorp is pretty complex.
I have used platform serial myself and it worked as expected (longer ago, for example https://medium.com/concerning-pharo/reddit-st-in-10-cool-pharo-classes-1b5327ca0740) - I would be quite surprised if it did no longer work or if it broke, but everything is possible.
With an OO-RDBMS mapper you do not have to be scared of SQL injection I think. With pure P3 or SQL in classic statements, you just have to handle your arguments carefully. I have no immediate plans to add prepared statements.
I am closing this issue now, you will probably get more/better help on the pharo mailing lists.
Status: Issue closed
|
rust-lang/rust | 22381373 | Title: fix to pass test/debug-info/* on android
Question:
username_0: To enable test on android bot #9120
Debugging with gdb on an Android target works differently from debugging with gdb on Linux.
There are two options; however, this is not high priority:
- we can modify android gdb to work like linux gdb
- we can modify debug-info to work with android gdb
Answers:
username_1: Fixed in #21774.
Status: Issue closed
username_2: (a few still ignored)
username_3: Triage: as far as I know, this is still true.
username_3: Triage: no change |
yunuselci/PHP-SESSION | 528814310 | Title: What happens if I don't enter a valid email?
Question:
username_0: https://github.com/username_1/PHP-SESSION/blob/be2725ed479f8f03b52943afdf9b98b484b88ace/sign_up.php#L20
Here you only check whether it is empty. What happens if I don't write a valid email address?
For this you can look at the FILTER_VARIABLE_* functions: https://www.php.net/manual/tr/filter.filters.validate.php
Answers:
username_1: I'll look at the difference between FILTER_VARIABLE_ and my usage, thank you. Since I'm using the HTML input type=email, you'll get an error like the one below for invalid email addresses:


username_0: The input's type can be changed from the console before submitting. You should not trust any validation done on the front-end side; always check on the backend.
username_1: 
The update has been made.
username_0: Nice work. If you push a commit for each fix and copy its hash here, we can follow along.
username_1: We now also check on the backend whether the user has entered a valid email.
commit $hash = c17092d6cb8eeff450bf3e411aa0271096f0a79f
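For illustration, such a backend check can be as simple as this sketch (variable names are just examples):

```php
<?php
$email = trim($_POST['email'] ?? '');

// Reject anything that is empty or not a syntactically valid email address.
if ($email === '' || filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    die('Please enter a valid email address.');
}
```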
Status: Issue closed
|
swoole/swoole-src | 224675356 | Title: error :Assertion `target_worker_id < serv->worker_num' failed.
Question:
username_0: COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
Answers:
username_0: The error occurs after sending UDP data with socat.
Status: Issue closed
username_1: fixed https://github.com/swoole/swoole-src/commit/2fc5889e48138683908c2084e5801ed71745b558 |
harrisiirak/cron-parser | 403297418 | Title: Day of week goes from 0-7 instead of 0-6
Question:
username_0: https://github.com/username_1/cron-parser/blob/d197e25b4ec5dce4c65dbe18dc31b954f56dddce/lib/expression.js#L67 Right there
Answers:
username_1: It's pretty standard to handle 0 and 7 both as "Sunday" here.
username_0: Yikes my bad, I thought this was a bug, but then I found this: https://unix.stackexchange.com/questions/106008/day-of-week-0-7-in-crontab-has-8-options-but-we-have-only-7-days-in-a-week
Closing.
Status: Issue closed
|
kp-marczynski/webowe | 367098039 | Title: HTML 1.2
Question:
username_0: Graphics:
- [x] placing images on the page at their natural size and at modified sizes
- [x] every image should have alternative text (alt)
Answers:
Status: Issue closed
|
SharePoint/sp-dev-docs | 384873278 | Title: 🐞SPFx v1.7.0 generated launch.json breaks VSCode+Chrome Debugging (with workaround)
Question:
username_0: - Close & quit all instances of Chrome
- Start the debugging process in VSCode
- Use the opened instance of Chrome, navigate to local workbench, add web part of the page
- Observe breakpoint not hit
#### THE FIX: Steps for Workaround
After step 1 above, after creating the project...
- open **config/launch.json**
- Add this line to the `sourceMapPathOverrides` array:
```json
"webpack:///.././src/*": "${webRoot}/src/*",
```
- Continue with step 2 above... except replace the last step: *"Observe breakpoint **is** hit"*
Consider removing the other options... not sure if they are still possibilities...
The problem is that a change to the project structure in the files generated by the SPFx Yeoman generator changed the path where webpack thinks the files are located.
#### Related issues:
These can be closed when this bug is fixed: #2062, #2248, #2267, #2831
Answers:
username_1: Thanks.
Status: Issue closed
username_3: I've the same issue, and even with the workarounds I can't debug... is there any update? it's quite critical, thanks in advance
username_0: Need a bit more info to understand what's not working & what you're doing. |
Azure/Industrial-IoT | 1157588487 | Title: Node publishing doesn't work for some OPC UA endpoints
Question:
username_0: **Describe the bug**
We are running an Edge device with IIoT modules in orchestrated mode in the version 2.8.
Discovery feature of the IIoT solution found 9 OPC UA endpoints in the customer network and we activated node publishing for 5 endpoints without any issues.
Unfortunately, the publishing doesn't work for the 6th configured OPC UA endpoint.
During the configuration everything looked fine but we are not receiving any messages in the IoT Hub from this 6th OPC UA endpoint.
There are no errors or warnings in the module logs.
We observed, that our Edge device in the IoT Hub is missing DataSetWriter device identity for this 6th OPC UA endpoint. However, we found the corresponding DataSetWriter device in the IoT Hub but somehow the parent-child relation to the Edge device is missing.
We tried to add the DataSetWriter device to the edge device manually by using Azure Portal, but this didn't work, since our edge device doesn't appear in the list of devices selectable as parent. We also tried in opposite direction, to add the DataSetWriter device as child to our Edge device but we see the same issue there, the DataSetWriter is not selectable as child device.
Multiple times we tried to turn off published nodes and deactivate the OPC endpoint and configured it again with no success.
We also tried to delete the job, clean up the device registrations in IoT Hub, and start the job again by using the Publisher REST API, but this also didn't work.
It looks like the solution has some kind of limitation to max 5 OPC UA endpoints. To check that, we removed one of the working 5 endpoints and then tried to add the endpoint which didn't work before and suddenly this endpoint properly published nodes.
After that we tried to add the removed endpoint again as 6th endpoint to the Edge device but we observed the same issue as described above.
**To Reproduce**
Steps to reproduce the behavior:
1. Add 5 OPC UA endpoints and configure publishing of at least one node per endpoint by using Engineering tool.
2. Add a 6th OPC UA endpoint and configure publishing of at least one node by using the Engineering tool.
3. Go to the "Published nodes" view of the 6th OPC UA endpoint and check if the value of the published node is changing.
You'll observe that the value is not changing. The published node values of the other 5 OPC UA endpoints are properly published. This can also be checked by connecting to the IoT Hub using VS, VS Code, IoT Explorer or other tools.
**Expected behavior**
Publishing of node values should work for all configured OPC UA endpoints, not only for first 5 configured endpoints.
Answers:
username_1: @username_2 can this be limited by the MaxWorker settings of PublisherConfigApiModel and then can be changed by the REST API call "Update publisher configuration" (documented here: https://github.com/Azure/Industrial-IoT/tree/main/docs/api/registry)?
@username_0 if you have time, you can give it a shot as well.
username_2: @username_0, the default MaxWorker setting is set to 5, so one publisher will handle up to 5 publishing jobs. You can increase this value to a more appropriate value in the engineering tool as in the following print-screen:

Alternatively, the registry service api can be called as @username_1 already mentioned
e.g.:

username_0: Thanks @username_1 & @username_2, this worked.
Status: Issue closed
|
nunit/nunit | 234558408 | Title: TestFixture attribute not being inherited from external library
Question:
username_0: I wrote a [bdd framework](https://github.com/username_0/given) a while back and one of the supported testing frameworks was nunit. I recently decided to update it for the first time in quite a while and make it compatible with nunit 3.x. What I'm noticing now is that when my (abstract) base class that has a `TestFixture` attribute is in a different assembly it seems to not get picked up on by the nunit runner.
A simple example is [here](https://github.com/username_0/given/blob/master/Given.NUnit.Example/when_building_a_toyota.cs). The class `Scenario` that is being inherited from is located in the [Given.NUnit project](https://github.com/username_0/given/blob/master/Given.NUnit/Scenario.cs)/assembly and is marked with the `TestFixture` attribute.
This definitely worked in 2.6.3, the last version I had used previously. Any ideas as to why this isn't working, or is this just not supported anymore? I can work around it by inheriting from the `Scenario` class in each test project and marking it with `TestFixture` and then have all my tests inherit from that base instead, but I'd prefer not to have an empty class sitting around in each test project.
Answers:
username_1: Rather than "supported" in the past, I would have to say it "just happened to work." 😞 Neither NUnit V2 nor 3 was designed with much thought for cases where NUnit itself was not being used directly.
The NUnit 3 engine, which runs tests, looks at the test assembly in order to decide what framework it uses. It does so by calling all the installed framework drivers, asking "Can you run this?" We provide drivers for NUnit 2 and NUnit 3 but anyone could write a driver for some other framework. In the case of the NUnit 3 driver, the rule that is used is "Has a reference to nunit.framework with version >= 3."
There is a hard way and an easy way to deal with this:
1. Write a driver for the engine, distribute it to your users and tell them to install it. This is probably rather inconvenient for the users as well as requiring some work on your part.
2. Tell your users to make at least one reference to NUnit in their test assemblies. Note that this means more than just adding an assembly reference. Something actually has to be used in order for the reference to appear in the test assembly metadata. Any assertion or attribute would work, so long as it actually appears in the __user__ assembly. You could also tell users to add some assembly level attribute.
Some time back, we discussed creating an attribute just for that purpose - something like `[UsesNUnit]`. We didn't do that because we realized that any existing attribute could serve the same purpose. For example: `[Description]`, `[LevelOfParallelism]`, `[Parallelizable]`, '[NonParallelizable]`.
To avoid confusing users, I would suggest just telling them what to add rather than explaining the entire problem to them.
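For example, any one of these in the user's own test project creates the needed reference (purely illustrative — the attribute values don't matter, and the fixture/class names here are just the ones from your example project):
```C#
// In the *user's* test assembly (not in the third-party framework assembly):
[assembly: NUnit.Framework.LevelOfParallelism(1)]

// ...or any NUnit attribute used directly on a type, e.g. on the derived fixture:
[NUnit.Framework.Description("forces a reference to nunit.framework")]
public class when_building_a_toyota : Scenario
{
}
```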
In a future release, we could add some special attribute for use with frameworks like yours. Or we could introduce some other way to recognize a framework based on NUnit, preferably without enumerating all the references your assembly contains. Any ideas?
username_0: I'm honestly not familiar enough with all of the cases you guys have to optimize for. I see you're using Mono.Cecil to inspect the IL and get the references, which explains why you don't see references without them being used. I'm fine with the `[Description]` attribute idea mostly, but it might be nice to have an attribute that doesn't need anything passed in. I personally prefer the idea of an `[NUnitAssembly]` attribute that inherits from Description and just passes in something like 'NUnit Testing Assembly'.
Just out of curiosity, what made you all decide to use mono.cecil rather than loading the assembly and scanning for types and references?
username_1: To load the assembly in the same process as the engine for inspection, we had to create and then unload an AppDomain. In addition, we could not load any assembly that was not compatible with the current runtime.
Of course, use of Mono.Cecil versus System.Reflection doesn't affect this issue one way or another. In either case, we would not see the reference.
Since you know the workaround already, I'm converting this issue into a feature request for `[NUnitAssembly]`.
username_1: I'm in favor of @username_0 's suggestion:
```C#
public class NUnitAssemblyAttribute : DescriptionAttribute
{
...
}
```
It will be easier for third-party framework developers in this position to tell their users to use this attribute rather than telling them to use "some" attribute.
@username_0 Please consider using `[NonTestAssembly]` on your own assembly.
username_2: @username_1 This may or may not be related like it was to https://github.com/nunit/nunit/issues/2163, but I think NUnit should discover test classes and methods which have attributes applied which inherit from TestFixture and Test, even though the test assembly itself does not directly reference NUnit.
If we agree and fix that, it may remove the need for an `NUnitAssemblyAttribute` for everyone or at least for some.
username_0: Well, yes, obviously checking assembly types for attributes would be ideal, but it seems like the point at which the engine makes the decision is before the assembly is loaded into an app domain, so that level of inspection isn't possible. I could be wrong, I formed that opinion after about 30 minutes of looking at the code. :)
username_2: Right, so to obtain the ideal outcome, we'd either check each test assembly's references recursively for NUnit before loading into an AppDomain and doing further checking to see if it is a test assembly, or we'd just always load it into an AppDomain and do further checking to see if it is a test assembly.
Actually checking references recursively is really simple and can be implemented quite efficiently, so that would be an optimization before doing more examination in an AppDomain to see if it is a test assembly.
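Roughly along these lines — an untested sketch using Cecil's default resolver; figuring out where each referenced assembly actually lives on disk is the fiddly part a real implementation would need to handle better:
```C#
using System.Collections.Generic;
using Mono.Cecil;

static bool ReferencesNUnit3(string assemblyPath)
{
    var resolver = new DefaultAssemblyResolver();
    resolver.AddSearchDirectory(System.IO.Path.GetDirectoryName(assemblyPath));

    var seen = new HashSet<string>();
    var queue = new Queue<AssemblyDefinition>();
    queue.Enqueue(AssemblyDefinition.ReadAssembly(assemblyPath));

    while (queue.Count > 0)
    {
        foreach (AssemblyNameReference reference in queue.Dequeue().MainModule.AssemblyReferences)
        {
            if (!seen.Add(reference.Name))
                continue;
            if (reference.Name == "nunit.framework" && reference.Version.Major >= 3)
                return true;
            try { queue.Enqueue(resolver.Resolve(reference)); }
            catch (AssemblyResolutionException) { /* reference not found on disk - skip it */ }
        }
    }
    return false;
}
```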
username_1: @username_2 By default (and I think preferred usage) we run tests in a separate Process. Before setting up the process, we have to inspect the assembly to see what kind of process we will set up. In the general case, we have to allow for that process having a different runtime target and a different platform from the one we are on. In the future, it will be worse, since we will want to run on devices.
That's why we need a very general way to examine assemblies, without loading them. Mono.Cecil was the simplest way to go. The alternative, which I had previously done, was to read the dll as a file and search for various flags, etc. inside it.
Doing a recursive check using Mono.Cecil is tricky but possible. We would have to figure out where the dependent assembly is and create a path for Mono.Cecil to be able to find it. But what is the payback? It merely avoids one attribute being added by the user. All users pay a performance penalty for our recursively searching the references. It's an interesting problem but that's not a reason to do it.
In fact, if we were going to do anything more than coach the users, I would suggest giving the third-party guys some attribute to use that told us to use the nunit 3 framework driver.
BTW, not that it's either here or there, but we have already refused to do anything about this multiple times in the past. The attribute for users to use seemed to me like a big concession! 😄
username_2: This just academic but I'd be interested to see which is actually less of a performance penalty, using Mono.Cecil to parse the IL or ReflectionOnlyLoad in a temporary AppDomain, and by how much.
The payback is that test frameworks like in https://github.com/nunit/nunit/issues/2163 that use NUnit underneath can stop being leaky abstractions. For what my personal instinct is worth, I'll always see the need for `NUnitAssemblyAttribute` or `DescriptionAttribute` as dirty and as a failure on our part.
username_1: I'm somehow not being clear. It is __not possible__ for us to use Reflection in all cases because the engine is expected to examine assemblies that could not be loaded and/or run in the current process.
I guess one has to also ask if building abstractions on top of NUnit is something we want to support. I don't know that. It was never thought about in the past and we have never discussed it.
Supporting abstractions built on top of NUnit is not a goal of ours. It has never been proposed as a goal or discussed as a goal. I'm not saying I'm against it, just that it has to become one of our goals and be prioritized against other goals before we just jump to implementing anything serious. Creating an attribute is about all I would be willing to commit to without that discussion. Anyone, user or dev, is able to propose such a thing. But just jumping to it adhoc seems equally "dirty" to me.
A serious discussion of the topic - not here please - would involve identifying the kinds of extensions or abstractions over NUnit that we want to support versus those we don't want to support.
That said, it's hard for me to imagine that helping people write better frameworks on top of NUnit would take precedence over the rather long list of high-priority bugs, enhancements and new features we are not getting done for our actual users right now. When resources are short, compromise is needed.
And anyway, I'm pretty sure that the next step up from a user attribute is an attribute for the package-writer. That would tell us directly "I'm standing in for NUnit." We can't really assume that every assembly referencing an assembly that references nunit is a test assembly.
username_1: @nunit/framework-team We're talking around this because nobody has actually added it to the backlog, much less self-assigned it. If we decide to fix it in the engine rather than as I proposed, then the issue should be moved there and the title changed.
username_3: This issue has come up a few times. If it's possibly to fix it via recursive inspection of references, I'd be in favour of that over a special attribute, as that's the closes path to 'just works' for the user. That of course relies on someone being willing to implement such a thing!
username_2: Me 😈
username_2: @username_1 I like to keep a clear delineation between "this is a direction we like in isolation" and "this is worth prioritizing over other work." They are separate conversations. If a design is agreed upon, it can sit on our backlog and be up for grabs if a passerby has a motivation to implement it. And if we determine we want it next year, we don't have to start the design conversation from square one.
username_4: If we go with the recursive inspection, the graph for .NET Core applications can be quite large, so we should probably optimize it a bit. We need to keep track of assemblies that have already been visited and not revisit (probably obvious, but putting it out there) and we might want to skip dependent assemblies that start with `System.` or `Microsoft.`.
Might we also only want to walk out one level deep? My thinking there is that the most common use case is a base test framework that users derive all their tests from. This might be more dangerous though.
I expect that we can make the discovery fairly quick, but we need to handle the worst case scenarios like solutions with hundreds of projects.
username_1: Remember also that Cecil has to read __every__ assembly you inspect into memory. This sounds to me like one of those things developers do because it's a cool challenge, without a strong payback. Projects that do things on that basis sometimes end up with fragile code - although I'm sure we could do it well if we did it.
The attribute is not elegant, but it's trivial.
Another approach is for the framework designer to create something in their framework that ends up forcing a reference to NUnit to be added. Could be something derived from NUnit or returning an nunit Type or taking one as an argument.
The third option is to simply create a framework driver and tell the users to install it.
A fourth option is for us to provide a way for such frameworks to register themselves with NUnit and be recognized.
This is not the first time the issue has been raised, but the last time was about a year ago.
username_2: This would be my preference second to the elegant solution. However I think we have what may genuinely be considered a flaw: If `FancyFixtureAttribute : TestFixtureAttribute` and I'm referencing the assembly containing `FancyFixtureAttribute ` but not the NUnit framework assembly containing `TestFixtureAttribute`, and I use `[FancyFixtureAttribute]` on classes in my test assembly, I'd expect the NUnit engine to notice that I was using a `TestFixtureAttribute` even though it is a more derived instance.
Currently, it won't. Fixing this mental model inconsistency would actually be the same as implementing the aforementioned elegant solution.
username_1: It's absolutely a flaw, but it's a fairly obscure flaw. You can use FancyFixtureAttribute from your separate assembly very happily and the NUnit framework will figure it out __unless__ you don't use any other NUnit attributes, assertions or other references.
This is not something that users generally do unless they use a third-party framework that makes them do it. Third party devs can fix it easily and I have no problem making them do something inelegant if it helps their users. There is also an equally inelegant workaround that the users can put in while waiting.
In many cases like this in the past, we have done a quick fix and then gone on to do the better, longer-term solution at a later time. That's what I tried to do here. There is a basic flaw in how we decide what driver to use. Knowing that fixing it would take a lot of time and effort from one of us, I created __this__ issue to describe the quick fix.
The fundamental issue is that drivers need to be responsible for deciding what they need to know in order to say that they can work with an assembly. That is only partially true in the current code. It's a long-standing problem that nobody has had time to fix and I can't see that changing soon.
I think we need the quick fix. I don't think it's a bad thing to do because it gives us a fallback whenever the algorithms in the drivers cause no framework or the wrong framework to be selected.
Are you arguing against a quick fix here? If you are simply trying to discuss the longer-term problems in the engine, let's figure out a way to do that without holding up fixes on bugs that we can get done in a matter of hours.
username_2: Implementing the 'elegant' solution took about twenty minutes:
https://github.com/nunit/nunit-console/compare/master...username_2:try_searching_deeper_references
username_1: @username_2 Knowing you, I'm not surprised. But when was it ever about whether you knew how to code it or how long it would take you?
Here's my two cents. If you have a strong opinion about how something should be done, then assign yourself the issue. In my book, that gives you extra credibility in how you implement something. I'm afraid I have shown my impatience too much in these abstract discussions we are having, for which I apologize.
I see two questions we have to answer to end the discussion... the same two we have had all along...
1. Do we want to implement the special attribute or a depth search? Or both?
2. Do we want to implement a solution that applies to all frameworks or just to nunit 3?
username_4: Personally, I think it is up to each driver to determine if it can run tests on an assembly in whatever way it sees fit. Technically, it may not even be related to references.
username_1: Exactly. The bit about the driver is what I was calling the fundamental flaw. I'd like to see that fixed rather than adding additional code into the DirectRunner, which is assuming it knows how frameworks work.
username_2: Ah... now that has sunk in, I'm with @username_1. This isn't an ideal fix unless we can push it to the driver. I don't want the engine to be responsible for this.
And that is certainly not a quick fix, so if someone is having pain, let's do the band-aid fix.
username_2: @username_1 complete aside. I'm catching myself rushing through and squeezing time to do this between other tasks, but not realizing that the effect is that I'm not spending the time to fully understand everything I read. I'm sorry. That can't be coming across well. I am making an effort to be more thorough.
username_1: @username_2 No problem, I'm just grumpy.
But seriously, I do have the impression you are going broad rather than deep. Which I count as too bad, because I think you could go very deep indeed, and we need depth.
username_4: @username_2 No worries there, I've been doing the same thing lately. Last night I sat down to over a hundred GitHub issues that I hadn't read, most with many comments. I misread or misinterpreted several as I was trying to catch up 😦
username_2: Now of course the band-aid fix *could* be to put this recursive reference search in, but only have it apply to the NUnit 3 framework driver and keep the other drivers as-is. Then at our convenience we can move the responsibility which will require an API change to do cleanly.
username_1: If we were sure that nobody else is using the driver API, we could just change it. We would need to modify our two drivers.
Otherwise, we have to maintain the old interface and create a new one beside it like you do in COM and we did in V2 for ISuiteBuilder.
My guess is that we would know if someone had actually published a driver, but someone could be working on one without our knowledge. I'd take the chance and change it.
username_1: @username_4 What do you think? Is the driver API still flexible at this point? Or do we need to maintain it?
username_1: FWIW I made the API work as it does because I didn't want to repeat the reflection in every driver that is tried out before one is found. No doubt that was premature optimization.
username_2: I don't like repeating reflection either. We could still calculate the transitive closure of references, assuming that any driver would likely find that interesting, and hand that to each driver along with the assembly file location in case the driver wants to do additional examination.
username_2: Even better, pass two enumerables, one for direct references and one for all transitive references. If no driver enumerates them, no work is done. If one driver enumerates, the contents (as far as it has gone) are cached in case the next driver enumerates so that the work is done no more than once. I have code sitting around that does this.
What we should probably do is pass an args object to prevent future breaking changes when we add things. The old EventArgs pattern. EF Core started using this for their dependency injection because signatures broke every time they added a dependency to a base class.
```c#
public sealed class DriverResolutionArgs
{
public string AssemblyPath { get; }
public IEnumerable<AssemblyName> DirectReferences { get; }
public IEnumerable<AssemblyName> AllTransitiveReferences { get; }
}
```
username_1: Good ideas. Let's wait for @username_4 's view on changing versus doubling up on the API.
username_2: Too bad we can't drop Mono.Cecil and reference https://www.nuget.org/packages/System.Reflection.Metadata/. I did a proof of concept for something unrelated at work and memory-mapping the assembly files and reading the assembly reference list is insanely fast. Problem is the NuGet package only works as far back as net45 (and netstandard11).
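For the curious, reading just the direct reference list with it only takes a few lines (a sketch; as noted it only works on net45+):
```C#
using System.Collections.Generic;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

static IEnumerable<string> DirectReferences(string assemblyPath)
{
    using (var stream = File.OpenRead(assemblyPath))
    using (var pe = new PEReader(stream, PEStreamOptions.PrefetchMetadata))
    {
        MetadataReader md = pe.GetMetadataReader();
        foreach (var handle in md.AssemblyReferences)
            yield return md.GetString(md.GetAssemblyReference(handle).Name);
    }
}
```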
username_4: I am okay with changing the driver interface at this point. Nobody has asked about or mentioned writing a driver in issues or in the mailing lists, so I doubt anyone has. It would also be a fairly easy update for people to change their driver to take in the extra arguments even though they don't use them. I think that the API we need to guard more religiously is the Engine API.
I actually wish we could remove the `AppDomain` from the driver because I think that should be the responsibility of the driver, possibly by passing in a settings collection. I say this because AppDomains aren't supported on several platforms or in .NET Standard/Core.
As for dropping Cecil in favor of a library supplied and maintained by Microsoft, I would love that too, but as you say, it is only good back to net45 ☹️
username_1: Note that we are not actually talking about the driver interface here but the factory interface. For practical purposes, it has the same effect. I always expected to change this... just much sooner!
username_1: The new interface should be as general as our runner, driver and framework interfaces. The driver should be the only place that knows the string representation of test container is actually an assembly with references etc. IMO, the factory interface should not depend on that.
The key issue is how to avoid each factory separately loading assemblies and finding references. That's why I preferred to avoid recursive references if possible. However, it makes sense to me to stop worrying about that performance issue and implement something of the desired shape first. For the moment, performance is hardly an issue, since there are only two existing framework drivers.
username_1: @username_2 You're correct about AppDomains but for the moment it's a necessary evil - one I'd like to make unnecessary and then remove. Drivers have no access to the services of the engine, so they can't create domains in the way we would prefer to see them created. Otherwise, they could just use a setting.
In fact, I think giving drivers a ServiceContext is the first step needed to move to a better interface. I did the same thing with Runners early on - that is, they originally didn't know about services but later needed them.
username_1: @nunit/framework-team Here's another one of those issues where we have discussed three or four alternative features. It seems clear we are not going to do the one that's suggested by the title. Perhaps it would be clearer to close this.
username_5: Since this issue hasn't been marked as closed (not sure if just unintentionally left open or not), I thought I'd ask if there is an update on this. Particularly, I'm wondering if there is an Attribute or other method to indicate I'd like to search recursively for assertions, as I generally like to abstract my assertions into extension methods. I've traversed a number of issues just finding this one and it seems closest to what I'm trying to accomplish, so I'm just checking in to see if there is an update.
username_1: This issue (which I advocated marking as closed in my prior comment) deals with visibility of Attributes. You are, of course, interested in assertions rather than attributes but both of those end up translating to your test assembly having reference to the NUnit framework.
So, here's the thing... in order to add an attribute to your actual tests, indicating that a recursive search should be used, you would need to reference the assembly in which that attribute is defined, i.e. the nunit framework assembly. But right now, that's all you actually have to do in order for your test assembly to be recognized. That's why we have never added such an attribute.
@nunit/framework-team I'm still in favor of closing this.
username_4: I am closing old Idea issues that have not had comments or made progress in several years. If anyone comes back with a compelling argument for these issues, we can reopen.
Status: Issue closed
|
hacs/integration | 660870225 | Title: Installation failing
Question:
username_0: <!-- Learn how to submit an issue here https://hacs.xyz/docs/issues -->
<!-- Before you open a new issue, search through the existing issues to see if others have had the same problem.-->
## Installation details
<!-- In the table below you are expected to add information under the "Value" part -->
| Description | Value |
| -------------------------- | ----- |
| HACS version |
| Home Assistant version |
| Installation method for HA |
## Checklist
<!-- You need to check ALL these boxes (tasks), if you do not do that, your issue is incomplete and may be closed -->
- [ ] I'm running the newest version of HACS <https://github.com/hacs/integration/releases/latest>
- [ ] I have enabled debug logging for my installation.
- [ ] I have filled out the issue template to the best of my ability.
- [ ] I have read <https://hacs.xyz/docs/issues>
- [ ] This issue is related to the backend of HACS.
- [ ] This issue only contain 1 issue (if you have multiple issues, open one issue for each issue).
## Describe the issue
After I create, chown, and chmod the custom_components folder and copy the hacs folder into it, my HA fails to start on reboot.
As per the log files, it reports a permission error for the custom_components folder.
After I remove the folder, HA loads fine.
### Steps to reproduce
<!-- Without steps to reproduce, it will be hard to fix, it is very important that you fill out this part, issues without it will be closed -->
1. 1. 1.
## Debug logs
<!-- To enable debug logs check this https://hacs.xyz/docs/basic/logs -->
pi@raspberrypi:/home/homeassistant/.homeassistant $ sudo chmod 755 custom_components/
pi@raspberrypi:/home/homeassistant/.homeassistant $ ls -lah
total 6.2M
drwxr-xr-x 7 homeassistant homeassistant 4.0K Jul 19 15:09 .
drwxr-xr-x 4 homeassistant homeassistant 4.0K Jul 19 05:22 ..
-rw-r--r-- 1 homeassistant homeassistant 2 Jul 19 05:22 automations.yaml
drwxr-xr-x 2 homeassistant homeassistant 4.0K Jul 19 05:23 .cloud
-rw-r--r-- 1 homeassistant homeassistant 263 Jul 19 05:22 configuration.yaml
drwxr-xr-x 3 homeassistant homeassistant 4.0K Jul 19 15:08 custom_components
drwxr-xr-x 2 homeassistant homeassistant 4.0K Jul 19 05:22 deps
-rw-r--r-- 1 homeassistant homeassistant 0 Jul 19 05:22 groups.yaml
-rw-r--r-- 1 homeassistant homeassistant 7 Jul 19 05:22 .HA_VERSION
-rw-r--r-- 1 homeassistant homeassistant 2.5K Jul 19 15:12 home-assistant.log
-rw-r--r-- 1 homeassistant homeassistant 6.1M Jul 19 15:09 home-assistant_v2.db
-rw-r--r-- 1 homeassistant homeassistant 0 Jul 19 05:22 scenes.yaml
-rw-r--r-- 1 homeassistant homeassistant 0 Jul 19 05:22 scripts.yaml
-rw-r--r-- 1 homeassistant homeassistant 161 Jul 19 05:22 secrets.yaml
drwxr-xr-x 2 homeassistant homeassistant 4.0K Jul 19 15:09 .storage
[Truncated]
Jul 19 15:20:42 raspberrypi hass[31911]: File "/usr/lib/python3.7/pathlib.py", line 1365, in is_file
Jul 19 15:20:42 raspberrypi hass[31911]: return S_ISREG(self.stat().st_mode)
Jul 19 15:20:42 raspberrypi hass[31911]: File "/usr/lib/python3.7/pathlib.py", line 1161, in stat
Jul 19 15:20:42 raspberrypi hass[31911]: return self._accessor.stat(self)
Jul 19 15:20:42 raspberrypi hass[31911]: PermissionError: [Errno 13] Permission denied: '/home/homeassistant/.homeassistant/custom_components/
Jul 19 15:20:42 raspberrypi systemd[1]: [email protected]: Main process exited, code=exited, status=1/FAILURE
Jul 19 15:20:42 raspberrypi systemd[1]: [email protected]: Failed with result 'exit-code'.
<details>
<summary>Logs</summary>
```text
PASTE YOUR DEBUG LOGS HERE
```
</details>
<!-- IssueTemplateID: issue_backend -->
Answers:
username_1: This is not an issue with HACS.
as the homeassistant user, run this https://hacs.xyz/docs/installation/manual_cli
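If the permission error persists after that, it usually means files under the config dir ended up owned by another user (root/pi); something along these lines (paths taken from the log above) fixes the ownership:
```bash
# make sure the whole tree is owned and readable by the homeassistant user
sudo chown -R homeassistant:homeassistant /home/homeassistant/.homeassistant/custom_components
sudo chmod -R u+rwX,go+rX /home/homeassistant/.homeassistant/custom_components
```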
Status: Issue closed
|
gridcoin-community/Gridcoin-Research | 393679426 | Title: Integrate Google Mock
Question:
username_0: With @jamescowens's NN/scraper rewrite we will convert the neuralnet functions to an interface with concrete implementations. This is something we should have as a general mindset as it allows us to mock implementations and add behavior tests. This is "trivial" with Google Test but since we're using boost::test we need to take some extra steps to integrate GMock.
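One common approach (roughly what the article linked below describes — an untested sketch, names are illustrative) is a small adapter that forwards Google Mock failures into Boost.Test:
```cpp
#define BOOST_TEST_MODULE gmock_integration
#include <boost/test/included/unit_test.hpp>
#include <gmock/gmock.h>

// Forward Google Mock/Test failures to Boost.Test so expectation violations fail the test
class BoostTestAdapter : public ::testing::EmptyTestEventListener {
    void OnTestPartResult(const ::testing::TestPartResult& result) override {
        if (result.failed())
            BOOST_ERROR(result.summary());
    }
};

struct GMockInit {
    GMockInit() {
        auto& suite = boost::unit_test::framework::master_test_suite();
        ::testing::InitGoogleMock(&suite.argc, suite.argv);
        auto& listeners = ::testing::UnitTest::GetInstance()->listeners();
        // replace gtest's default printer with the Boost.Test bridge
        delete listeners.Release(listeners.default_result_printer());
        listeners.Append(new BoostTestAdapter);
    }
};

BOOST_GLOBAL_FIXTURE(GMockInit);
```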
http://alexott.net/en/cpp/CppTestingIntro.html#sec7 |
dotnet/runtime | 663992919 | Title: In JsonConverter, .Skip() throws, but .TrySkip() never returns false during DeserializeAsync
Question:
username_0: Details here: https://stackoverflow.com/questions/63038334/how-do-i-handle-partial-json-in-a-jsonconverter-while-using-deserializeasync-on
Essentially, calling `.Skip()` in my custom `JsonConverter<T>` during a `DeserializeAsync<T>(stream, ...)` throws `InvalidOperationException: Cannot skip tokens on partial JSON. Either get the whole payload and create a Utf8JsonReader instance where isFinalBlock is true or call TrySkip.`
Calling `TrySkip` never returns false, though. So... why would `Skip` fail?
Anyway - this may be a case of me not knowing how to write a JsonConverter properly. However, apparently it doesn't occur in 3.1 and I was encouraged to open an issue. So here it is :D
Answers:
username_0: Looking through source, I see "Skip" is not even attempted if a non-final block is passed in. I was expecting the logic to be basically `if (!TrySkip()) { throw new Explosion(); }`. Looks like I was wrong.
Status: Issue closed
username_1: In the existing `JsonConverter` model, custom converters do not have to worry about handling partial data, as the serializer passes all the data for the current JSON scope. `Skip`/`TrySkip` logic is unnecessary. cc @tdykstra we should consider adding a section about this in the docs.
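To make that concrete, a converter can simply walk the tokens of its scope — a rough sketch (illustrative type, not tied to the OP's converter):
```C#
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class Point { public int X { get; set; } public int Y { get; set; } }

// By the time Read() runs, the serializer has already buffered the entire JSON scope
// for this value (including during DeserializeAsync), so the converter can just read
// token-by-token; TrySkip() on unknown properties always has enough data available.
public class PointConverter : JsonConverter<Point>
{
    public override Point Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        var point = new Point();
        while (reader.Read() && reader.TokenType != JsonTokenType.EndObject)
        {
            if (reader.TokenType != JsonTokenType.PropertyName) continue;
            string name = reader.GetString();
            reader.Read();
            if (name == "x") point.X = reader.GetInt32();
            else if (name == "y") point.Y = reader.GetInt32();
            else reader.TrySkip(); // unknown property: skip its whole subtree
        }
        return point;
    }

    public override void Write(Utf8JsonWriter writer, Point value, JsonSerializerOptions options)
    {
        writer.WriteStartObject();
        writer.WriteNumber("x", value.X);
        writer.WriteNumber("y", value.Y);
        writer.WriteEndObject();
    }
}
```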
The "read ahead" logic that the serializer performs to make this possible does have some perf implications. There has been some discussion around a new model to, among other benefits, enable more performant async-handling when custom converters are used - https://github.com/dotnet/runtime/issues/1562. Relevant parts of the new model have been implemented internally in serializer, but exposing this publicly is not currently on the roadmap for System.Text.Json. |
dask/dask-cloudprovider | 601084405 | Title: worker denied permission to S3 requester pays
Question:
username_0: I'm creating a `FargateCluster` and when I read public buckets, everything works fine, but when I try to read a requester pays bucket (where my credentials are obviously needed), the workers are failing.
Here's a worker log:
https://gist.github.com/username_0/d9171c73aaecef625aed88d9a8802aee
Is this just user error on my part, missing something from the Docker image or from the policy I added to the `AmazonSageMaker-ExecutionRole`?
Or is this an actual issue for this or some other package?
Answers:
username_1: I wonder if @username_2 knows enough to give a quick answer here?
username_2: I'm 90% certain this is an AWS permissions issue. Clued in by the below error from your logs
```
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
```
I'm not super familiar with Requester Pays Buckets but I'd troubleshoot by trying to read from that bucket using the awscli on the instance that your workers are running on. Removing layers of abstraction can be helpful for this kind of thing
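For a requester pays bucket that would be something like the following (bucket/key are placeholders; note the explicit `--request-payer` flag, and HeadObject is the call that is 403ing in your traceback):
```bash
aws s3api head-object --bucket my-requester-pays-bucket --key path/to/object --request-payer requester

# or simply try copying the object down
aws s3 cp s3://my-requester-pays-bucket/path/to/object . --request-payer requester
```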
username_0: I was able to solve this by providing the `AmazonS3FullAccess` policy to the workers:
```
cluster = FargateCluster(n_workers=1, image='rsignell/fargate-worker:2020-04-14f',
scheduler_timeout='20 minutes',
task_role_policies=['arn:aws:iam::aws:policy/AmazonS3FullAccess'])
```
Status: Issue closed
|
kubernetes/website | 338822414 | Title: Problem with katacoda bash terminal
Question:
username_0: <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [ ] Bug Report
**Problem:**
while starting the scenario on https://www.katacoda.com/courses/serverless/getting-started-with-kubeless
I get an issue on the console: "we encounter a problem with connection"
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
Status: Issue closed
Answers:
username_1: @username_0 Katacoda is a third party that is not affiliated with this repository. We suggest reporting issues with the Katacoda interface to them.
username_1: @username_0 Actually, it turns out that the Katacoda interface is technically under the same umbrella with the k8s docs. I'll re-open this and investigate.
username_1: <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [ ] Bug Report
**Problem:**
while starting scenanio on https://www.katacoda.com/courses/serverless/getting-started-with-kubeless
getting issue on the console : we encounter a problem with connection
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:--> |
gluster/project-infrastructure | 756870092 | Title: Update Community meeting link in Gluster-devel email signature
Question:
username_0: Please update the Gluster-devel email signature to the text below. We missed updating Gluster-devel when we did the same for the Gluster-users mailing list (https://github.com/gluster/project-infrastructure/issues/99).
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-devel mailing list
<EMAIL>
https://lists.gluster.org/mailman/listinfo/gluster-devel
Answers:
username_1: Ok, done
Status: Issue closed
|
ARM-software/lisa | 129418226 | Title: [RFC] Add support to save execution logs into a file
Question:
username_0: Currently we dump logging statements only on the console. It would be useful in general to also have a copy of the execution log statements in a file produced in the same output folder.
Logging to a file should not exclude console logging, which is still required, possibly with a different logging level. The log level on file should be DEBUG by default, since debug statements can easily be filtered out offline.
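A minimal sketch of the intended behaviour (function name, format, and output path are illustrative only, not the actual LISA code):
```python
import logging
import os

def setup_logging(output_dir, console_level=logging.INFO):
    """Log to the console at the requested level and mirror everything to a DEBUG-level file."""
    fmt = logging.Formatter('%(asctime)s %(levelname)-8s %(name)s: %(message)s')
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)

    console = logging.StreamHandler()
    console.setLevel(console_level)
    console.setFormatter(fmt)
    root.addHandler(console)

    logfile = logging.FileHandler(os.path.join(output_dir, 'lisa.log'))
    logfile.setLevel(logging.DEBUG)
    logfile.setFormatter(fmt)
    root.addHandler(logfile)
```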
Answers:
username_1: Implemented in https://github.com/ARM-software/lisa/pull/237 :)
Status: Issue closed
|
NVIDIA/NeMo | 998508662 | Title: Maximum LR in ASR YAML is not used correctly
Question:
username_0: Hello,
I am fine-tuning the conformer model, and I have noticed that the defined LR in the Yaml file is not used and another lower LR is actually used during training.
The defined LR is 0.005, but the actual reported LR on wandb is around 0.00000206.
Are you performing any kind of change to the LR based on the number of GPUs or accumulate_grad_batches?
```
# It contains the default values for training a Conformer-CTC ASR model, large size (~120M) with CTC loss and sub-word encoding.
# Architecture and training config:
# Default learning parameters in this config are set for effective batch size of 2K. To train it with smaller effective
# batch sizes, you may need to re-tune the learning parameters or use higher accumulate_grad_batches.
# Here are the recommended configs for different variants of Conformer-CTC, other parameters are the same as in this config file.
# One extra layer (compared to original paper) is added to the medium and large variants to compensate for replacing the LSTM decoder with a linear one.
#
# +-------------+---------+---------+----------+------------+-----+
# | Model | d_model | n_heads | n_layers | time_masks | lr |
# +=============+=========+========+===========+============+=====+
# | Small (13M)| 176 | 4 | 16 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Medium (30M)| 256 | 4 | 18 | 5 | 5.0 |
# +-------------+---------+--------+-----------+------------+-----+
# | Large (121M)| 512 | 8 | 18 | 10 | 2.0 |
# +---------------------------------------------------------------+
#
# If you do not want to train with AMP, you may use weight decay of 0.0 or reduce the number of time maskings to 2
# with time_width=100. It may help when you want to train for fewer epochs and need faster convergence.
# With weight_decay=0.0, learning rate may need to get reduced to 2.0.
# You may find more info about Conformer-CTC here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#conformer-ctc
# Pre-trained models of Conformer-CTC can be found here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/results.html
# The checkpoint of the large model trained on LibriSpeech with this recipe can be found here: https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_large_ls
name: "Conformer-CTC-BPE-Small"
model:
sample_rate: 16000
log_prediction: true # enables logging sample predictions in the output during training
ctc_reduction: 'mean_batch'
train_ds:
manifest_filepath: /mnt/local/extra/users/ael/dl/data/en/train_filtered_flac.json
sample_rate: ${model.sample_rate}
batch_size: 8 # you may increase batch_size if your memory allows
shuffle: true
num_workers: 4
pin_memory: true
use_start_end_token: false
trim_silence: false
max_duration: 17 # it is set for LibriSpeech, you may need to update it for your dataset
min_duration: 0.1
is_tarred: false
#use_dali: true
validation_ds:
manifest_filepath:
- /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_clean.json
- /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_other.json
[Truncated]
checkpoint_callback_params:
# in case of multiple validation sets, first one is used
monitor: "val_wer"
mode: "min"
save_top_k: 3
# you need to set these two to True to continue the training
resume_if_exists: true
resume_ignore_no_checkpoint: true
# You may use this section to create a W&B logger
create_wandb_logger: true
wandb_logger_kwargs:
name: conformer-small-bpe-en-balanced-16k
project: asr-en
hydra:
run:
dir: /mnt/local/extra/users/ael/dl/data/nemo/models/en/
```
Answers:
username_1: For the Noam scheduler, LR is a scalar multiplier to the LR determined by the Noam scheduler, NOT the actual LR itself.
That is why you will note that the config had high values such as 2 and 5 for the LR - it scales the Noam LR by 2x or 5x.
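For reference, the effective LR is roughly the standard Noam/transformer schedule scaled by that multiplier (a sketch, not the exact NeMo implementation; `d_model` and `warmup_steps` come from the config):
```python
def noam_lr(step, d_model, warmup_steps, lr_multiplier):
    # `lr` from the YAML is `lr_multiplier` here: a scalar on top of the schedule
    step = max(step, 1)
    schedule = (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)
    return lr_multiplier * schedule
```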
username_0: Aha, thanks for the explanation.
In this case, we should use a much higher learning rate for fine-tuning.
Thanks again.
Status: Issue closed
username_2: @username_1 so if the pretrained model has LR 2.0, what should be the optimal value for finetuning? |
renovatebot/config-help | 515852033 | Title: [Question] vulnerabilityAlerts for GitLab merge requests?
Question:
username_0: ### Which Renovate are you using? CLI, App, or Pro
App
### Which platform are you using? GitHub, GitLab, Bitbucket Azure DevOps
GitLab
### What is your question?
I've set `"automerge": true` and `"automergeType": "branch"` in my `renovate.json` config, and I just realized that security patches might be silently merged into master, so I probably wouldn't know about it until my next `git pull`. This might mean that there is a delay of a few days before I deploy the updated dependency to production.
Before Renovate Bot, I had set up a daily CI job that ran `yarn audit` and `bundle audit`, so I would get notified about any security vulnerabilities at least once a day (when the build started crashing.)
Is there any way to ensure that Renovate Bot will always open a Merge Request for vulnerability patches, so that I always see a notification for these?
Answers:
username_1: Currently Renovate doesn't have any support for vulnerability alert awareness on GitLab. While GitHub have a formal API to allow apps to query the list of alerts for a repository, GitLab does not. So for now, Renovate is unfortunately unaware if an update it is proposing fixes a vulnerability or not.
If you are interested enough, please raise issues in the main repo (one for JS, one for Ruby) about investigating "audit" integration for both npm and bundler ecosystems. We'd need to do some investigation about how to integrate into those APIs (e.g. via CLI scraping or direct API access) as well as then how to integrate the data into Renovate's logic flow.
Status: Issue closed
username_0: Thanks @username_1! I opened an issue on the main repo. GitLab CI has a [Dependency Scanning](https://docs.gitlab.com/ee/user/application_security/dependency_scanning/) feature, but I just realized that this might not be so helpful, because it only runs during the CI build. So it can only tell us that there are no vulnerabilities in the updated branch, but it doesn't provide any information about security vulnerabilities in the previous version of a package.
So I will open two issues for JS and Ruby. |
flutter/flutter | 874885650 | Title: Text alignment doesn't work if the text has formatting
Question:
username_0: Alignment doesn't seem to work when there is formatting within a string, like **bold** or **hyperlink**.
```dart
Markdown(
data:
'Aliqua ut et cillum velit non cillum nostrud occaecat quis ullamco **laboris eiusmod**.',
styleSheet:
MarkdownStyleSheet.fromTheme(Theme.of(context)).copyWith(
textAlign: WrapAlignment.center,
),
)
```
Flutter Channel stable, v1.12.13+hotfix.9
flutter_markdown v.0.3.4
Answers:
username_1: Verified this using [latest](https://pub.dev/packages/flutter_markdown/install) package version on latest master and stable and confirmed that the text alignment works properly, ie, the text aligns to center with long / multi-line text as below:
<img width="346" alt="Screenshot 2021-09-02 at 5 43 06 PM" src="https://user-images.githubusercontent.com/67046386/131841459-1e64e156-2f4d-4dd5-8375-c72d3c6f6302.png">
<details>
<summary> stable, master flutter doctor -v </summary>
```
[✓] Flutter (Channel stable, 2.2.3, on Mac OS X 10.15.4 19E2269 darwin-x64,
locale en-GB)
• Flutter version 2.2.3 at /Users/dhs/documents/fluttersdk/flutter
• Framework revision f4abaa0735 (4 days ago), 2021-07-01 12:46:11 -0700
• Engine revision 241c87ad80
• Dart version 2.13.4
[✓] Android toolchain - develop for Android devices (Android SDK version 30)
• Android SDK at /Users/dhs/Library/Android/sdk
• Platform android-30, build-tools 30.0.3
• ANDROID_HOME = /Users/dhs/Library/Android/sdk
• Java binary at: /Users/dhs/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/202.7486908/Android
Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 12.3, Build version 12C33
• CocoaPods version 1.10.1
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 4.1)
• Android Studio at /Users/dhs/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/202.7486908/Android
Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
1.8.0_242-release-1644-b3-6915495)
[✓] VS Code (version 1.57.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.21.0
[✓] Connected device (4 available)
• Darshan's iphone (mobile) • 21150b119064aecc249dfcfe05e259197461ce23 •
ios • iOS 14.4.1 18D61
• iPhone 12 Pro Max (mobile) • A5473606-0213-4FD8-BA16-553433949729 •
ios • com.apple.CoreSimulator.SimRuntime.iOS-14-3 (simulator)
• macOS (desktop) • macos •
darwin-x64 • Mac OS X 10.15.4 19E2269 darwin-x64
• Chrome (web) • chrome •
web-javascript • Google Chrome 92.0.4515.159
• No issues found!
[Truncated]
[✓] Connected device (4 available)
• Darshan's iphone (mobile) • 21150b119064aecc249dfcfe05e259197461ce23 •
ios • iOS 14.4.1 18D61
• iPhone 12 Pro Max (mobile) • A5473606-0213-4FD8-BA16-553433949729 •
ios • com.apple.CoreSimulator.SimRuntime.iOS-14-3 (simulator)
• macOS (desktop) • macos •
darwin-x64 • Mac OS X 10.15.4 19E2269 darwin-x64
• Chrome (web) • chrome •
web-javascript • Google Chrome 92.0.4515.159
• No issues found!
```
</details>
Closing this as fixed. If anybody disagrees, write in comments and I'll reopen it.
Status: Issue closed
|
y-yu/trpl-2nd-pdf | 318081088 | Title: Figure images are not embedded in the PDF, except for some
Question:
username_0: It seems that the images used in the original book are not embedded, except for the ones in "Final Project: Building a Multithreaded Web Server" [(Chapter 20 of the original)](https://doc.rust-lang.org/book/second-edition/ch20-00-final-project-a-web-server.html).
Answers:
username_1: Hmm, there may have been a problem in the part of `setup.sh` that fetches the images... I'll investigate!
(That gives away that the script was reused from somewhere else)
Status: Issue closed
username_1: @username_0
For now, I've patched it rather forcefully in #12. Let me open a separate issue for making the figures look nicer, and treat this one as resolved for the time being.
username_0: Thanks for the fix! Since the total number of figures isn't large, this is good enough for now.
I'll also look into whether it can be improved on my end once I have an environment set up to build the PDF.
cloud-custodian/cloud-custodian | 417495959 | Title: aws.asg - not-encrypted filter removed error handling for missing AMI's
Question:
username_0: in this commit:
https://github.com/cloud-custodian/cloud-custodian/commit/45d059bccc77df01f79256816038731d2756b12f#diff-8d7e1840f31e642d80e0fe0b85ef0324L266
the error handling for missing AMIs was removed. In the case that AMIs are not found, this results in the ASG being filtered in as not-encrypted rather than raising an error (per the previous behavior)
Answers:
username_1: @jtroberts was noting issues for invalid image ids as well.
username_1: Are there any logs or trace backs for when this exhibits?
username_1: the error handling for missing amis i found was outdated wrt current api behavior, ie. the api doesn't return errors/exceptions on those anymore the same way that it would on a snapshot. i tested fetching a thousand amis with a single api call (aka no pagination needed), and 5 that were bad ami ids, and it returned all the ones extant and omitted the ones not found. the snapshot not found case was explicitly handled, bad amis just don't get returned by the api. there's definitely an issue here though, but we need more data to correlate to cause and solution.
username_1: it looks like there is some region specific behavior to the api response.
in us-east-1 it will just omit the bad image id from the output
```
$ aws ec2 describe-images --owner=self --image-ids ami-123f000c8c1f9f654 --region us-east-1
{
"Images": []
}
```
in other regions it will report as an invalid ami id.
```
$ aws ec2 describe-images --owner=self --image-ids ami-123f000c8c1f9f654 --region us-east-2
An error occurred (InvalidAMIID.NotFound) when calling the DescribeImages operation: The image id '[ami-123f000c8c1f9f654]' does not exist
$ aws ec2 describe-images --owner=self --image-ids ami-123f000c8c1f9f654 --region us-west-2
An error occurred (InvalidAMIID.NotFound) when calling the DescribeImages operation: The image id '[ami-123f000c8c1f9f654]' does not exist
```
Status: Issue closed
|
StevenThuriot/SettingsManagement | 427822655 | Title: SettingsConverter
Question:
username_0: The Converter should work two-way.
Currently it only parses the string from the config file into a typed instance.
However, in some cases the reverse conversion is applicable as well (ref `JaNeeConverter`).
Instead of keeping a func in the settings class, it should keep a two-way converter instead.
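A rough sketch of the intended shape (names are illustrative only, not the actual SettingsManagement API, and the Ja/Nee mapping for `JaNeeConverter` is assumed):
```C#
using System;

// Two-way converter: parse on read, serialize on write
public interface ISettingsConverter<T>
{
    T Parse(string rawValue);    // config string -> typed instance (what exists today)
    string Serialize(T value);   // typed instance -> config string (the missing direction)
}

// e.g. a JaNee-style converter would then round-trip cleanly:
public class JaNeeConverter : ISettingsConverter<bool>
{
    public bool Parse(string rawValue) => string.Equals(rawValue, "Ja", StringComparison.OrdinalIgnoreCase);
    public string Serialize(bool value) => value ? "Ja" : "Nee";
}
```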
Status: Issue closed |
vuejs/vetur | 683548212 | Title: No suggestion on hover in TypeScript part only in Linux (not in macOS)
Question:
username_0: <!-- Check those before opening an issue -->
- [x] I have searched through existing issues
- [x] I have read through [docs](https://vuejs.github.io/vetur)
- [x] I have read [FAQ](https://github.com/vuejs/vetur/blob/master/docs/FAQ.md)
## Info
- Platform: Linux Mint 20 (also not working in Elementary Linux). Works OK in macOS.
- Vetur version: 0.26.1
- VS Code version: VSCodium 1.48.0
## Problem
- No suggestion for TypeScript
- No dialog on hover (No IntelliSense)
- Syntax highlighting is OK. Autofix is OK (Prettier + ESLint). No complain on variable typos.
- Nothing in output panel
## Reproducible Case
- Install VSCodium
- Install Vetur extension
- Open a `*.vue` file.
Answers:
username_1: Your issue is closed because you did not provide a repro case. Please read https://github.com/vuejs/vetur/blob/master/.github/NO_REPRO_CASE.md and open a new issue.
And BTW, vscodium modifies VS Code source and I don't have time to fix issues caused by it.
Status: Issue closed
username_0: It appears to work OK in VSCode, but not in VSCodium on the same platform (Linux). Thanks anyway. |
killbill/killbill | 187753104 | Title: Fix missing RECURRING item when item has been fully adjusted
Question:
username_0: Prior to the fix, the code would not (re)generate a recurring item for a given period after such item had been item adjusted. Example:
1. Invoice created item for period [2016-08-01 -> 2016-09-01)
2. Admin adjusted item (and probably created a matching refund for payment, and most probably cancelled the subscription).
With previous behavior: If subscription was not cancelled or cancelled at EOT, item would **not** be regenerated by invoice code, and we would **not** see a new item for [2016-08-01 -> 2016-09-01)
With previous behavior: If subscription was not cancelled or cancelled at EOT, item would **then** be regenerated by invoice code, and we would see a new item for [2016-08-01 -> 2016-09-01).
Note that if admin wants such item to **not** be regenerated, then subscription should be cancelled at SOT (start of term). The new behavior only matters when subscription was not cancelled or cancelled at EOT.
Answers:
username_1: Is there a typo under the second "With previous behavior"? Is this actually a description of the new behavior?
username_0: @username_1 Yes, sorry, fix the typo!
username_0: Fixed in 93b8b68b62ef3bec51326065feea999cd4e5baf4
Status: Issue closed
username_0: Reopening issue for more explanation on what product behavior should be.
username_0: Prior to the fix, the code would not (re)generate a recurring item for a given period after such item had been item adjusted. Example:
1. Invoice created item for period [2016-08-01 -> 2016-09-01)
2. Admin adjusted item (and probably created a matching refund for payment, and most probably cancelled the subscription).
* With previous behavior: If subscription was not cancelled or cancelled at EOT, item would **not** be regenerated by invoice code, and we would **not** see a new item for [2016-08-01 -> 2016-09-01)
* With new behavior: If subscription was not cancelled or cancelled at EOT, item would **then** be regenerated by invoice code, and we would see a new item for [2016-08-01 -> 2016-09-01).
Note that if admin wants such item to **not** be regenerated, then subscription should be cancelled at SOT (start of term). The new behavior only matters when subscription was not cancelled or cancelled at EOT.
username_0: Let's revisit the behavior of the invoicing system after a subscription was cancelled. For simplicity we will **only consider the billing cancellation date** and not the entitlement cancellation since we care about billing.
Before digging into specific scenarios, it is worth revisiting the basic principle of invoicing a subscription (let's assume a monthly):
* The subscription starts (`CREATE` billing event)
* Every month the subscription gets invoiced for the current period
* At some point the subscription gets cancelled (`CANCEL` billing event)
The goal of the invoicing system is to match the state of the subscription and in particular, when it comes to cancellation, the following can happen:
* SOT (start of term cancellation): Full credit generated
* IMM (immediate cancellation): Pro-ration credit generated (matching what was not used)
* EOT (end of term cancellation): No credit generated
When it comes to **invoice item adjustment**, we made the following **product choice**:
* Any recurring item that has been **fully item adjusted** will remain in that state and this is independent of the cancelation date of the subscription.
* Any recurring item that has been **partially item adjusted** will be invoiced normally, but the amount invoiced will be the original amount - adjustment.
Let's look at the following use cases below:

* Case 1: Normal EOT cancellation. The full period has been invoiced
* Case 2: EOT cancellation followed by full invoice item adjustment. The full period has been invoiced and then fully adjusted. Although the cancellation is EOT, the invoicing system will not re-attempt to invoice for that period.
* Case 3: EOT cancellation followed by partial invoice item adjustment. The full period has been invoiced and then partially adjusted. Because the cancellation is EOT, the period should remain fully charged for (but invoice balance will be lowered by taking into account the partial adjustment)
* Case 4: SOT cancellation. The full period has been invoiced and then `repaired` by the system by generating a full credit.
* Case 5: IMM cancellation. The full period has been invoiced and then partially `repaired` by the system by generating a pro-ration credit.
Summary:
1. The choice of the cancellation date will drive the generation of credit
2. Invoice item adjustments are useful in situations where we want to bring the invoice balance to 0 (or to lower its value) and are often useful when a customer payment failed or after a refund. Invoice adjustments are not intended to be used as a replacement for choosing the correct cancellation date (e.g. we cancel EOT, which does not generate credit, and then we adjust the item to make sure the customer does not owe money).
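To make the credit rules concrete, here is a small illustrative example (the numbers are invented for illustration, not taken from the issue): suppose a $31/month subscription was invoiced for [2016-08-01 -> 2016-09-01) and is cancelled on 2016-08-11. An IMM cancellation generates a pro-ration credit for the 21 unused days, 31 × 21/31 = $21; an SOT cancellation credits the full $31; an EOT cancellation credits nothing; and a partial item adjustment in any of these cases simply lowers the invoice balance by the adjusted amount.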
Status: Issue closed
|
rust-lang/rust-clippy | 730071487 | Title: `try_validation` unconditionally adds semicolons
Question:
username_0: <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code (https://github.com/rust-lang/rust/blob/0da6d42f297642a60f2640ec313b879b376b9ad8/compiler/rustc_mir/src/interpret/validity.rs#L78-L96):
```rust
macro_rules! try_validation {
($e:expr, $where:expr,
$( $( $p:pat )|+ => { $( $what_fmt:expr ),+ } $( expected { $( $expected_fmt:expr ),+ } )? ),+ $(,)?
) => {{
match $e {
Ok(x) => x,
// We catch the error and turn it into a validation failure. We are okay with
// allocation here as this can only slow down builds that fail anyway.
$( $( Err(InterpErrorInfo { kind: $p, .. }) )|+ =>
throw_validation_failure!(
$where,
{ $( $what_fmt ),+ } $( expected { $( $expected_fmt ),+ } )?
),
)+
#[allow(unreachable_patterns)]
Err(e) => Err::<!, _>(e)?,
}
}};
}
```
I expected to see this happen: The `Err(e)?` is replaced with `return Err(e)`.
Instead, this happened: It's replaced with `return Err(e);`, causing a syntax error since this is an expression context.
```
error: expected one of `)`, `,`, `.`, `?`, or an operator, found `;`
--> compiler/rustc_mir/src/interpret/validity.rs:97:18
|
78 | / macro_rules! try_validation {
79 | | ($e:expr, $where:expr,
80 | | $( $( $p:pat )|+ => { $( $what_fmt:expr ),+ } $( expected { $( $expected_fmt:expr ),+ } )? ),+ $(,)?
81 | | ) => {{
... |
93 | | Err(e) => return Err(try_validation!(
| |-
94 | || self.ecx.memory.read_bytes(mplace.ptr, Size::from_bytes(len)),
95 | || self.path,
96 | || err_ub!(InvalidUninitBytes(..)) => { "uninitialized data in `str`" },
97 | || );),
| || -^ expected one of `)`, `,`, `.`, `?`, or an operator
| ||_________________|
| | in this macro invocation (#2)
98 | | }
99 | | }};
100 | | }
| | -
| | |
| |_in this expansion of `try_validation!` (#1)
| in this expansion of `try_validation!` (#2)
...
341 | / try_validation!(
[Truncated]
325 | | { "too small vtable" },
326 | | );
| |__________________- in this macro invocation
|
= note: `#[warn(clippy::try_err)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#try_err
help: try this
|
93 | Err(e) => return Err(try_validation!(
94 | self.ecx.memory.check_ptr_access_align(
95 | vtable,
96 | 3 * self.ecx.tcx.data_layout.pointer_size, // drop, size, align
97 | Some(self.ecx.tcx.data_layout.pointer_align.abi),
98 | CheckInAllocMsg::InboundsTest,
...
```
### Meta
- `cargo clippy -V`: clippy 0.0.212 (ffa2e7a 2020-10-24)
Answers:
username_0: Actually I think this is the same issue as https://github.com/rust-lang/rust-clippy/issues/6234, clippy is expanding `e` to `$e`. But that seems really weird because e is a variable and not a macro parameter.
Status: Issue closed
|
teju85/teditor | 334808627 | Title: Check out the draw/render issue with S-insert
Question:
username_0: With shift-insert, only the first char appears; the remaining chars only appear after some other key-press!
Status: Issue closed
Answers:
username_0: `Terminal::waitAndFill` wasn't consuming an already populated `seq` string. Hence it needed another key-press to consume all those chars and display them. Fixed this with commit aa2ac84. Closing. |
Unidata/tomcat-docker | 617644778 | Title: Cleanup The Way This Project Deals With server.xml
Question:
username_0: The way this project deals with `server.xml` is confusing. There are programmatic manipulations in the `Dockerfile` which are kludgy and fragile. I think the original rationale for this was to avoid having this project maintain `server.xml` as the `server.xml` evolved in the parent container. In retrospect, I don't think this was a very good idea. There is additionally a `server.xml` in this project which contains `relaxedQueryChars` and `relaxedPathChars` attributes (to make DAP requests happy) to serve as hints, but it never gets copied into the container, leading to additional confusion. Moreover, this project already maintains a `web.xml` that is copied into the container.
Plan:
1. Abandon programmatic manipulation of `server.xml`.
2. Incorporate the changes that the programmatic manipulation intends into the `server.xml` contained in this project.
3. Copy that `server.xml` into the container via the `Dockerfile` (a minimal sketch follows below).
4. Maintain `server.xml` as the `server.xml` from the parent container evolves going forward. |
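To illustrate step 3, the programmatic edits would collapse into a single `COPY` instruction. A minimal sketch, assuming the parent is the official `tomcat` image where `CATALINA_HOME` is `/usr/local/tomcat` (adjust the path if this project's base image differs):

```dockerfile
# Copy this project's server.xml (with relaxedQueryChars/relaxedPathChars)
# over the one shipped in the parent image, mirroring how web.xml is handled.
# Path assumes the official tomcat image layout (CATALINA_HOME=/usr/local/tomcat).
COPY server.xml /usr/local/tomcat/conf/server.xml
```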
LIJI32/SameBoy | 438014115 | Title: MIDI over the link port
Question:
username_0: I've been building a libretro frontend that acts as a VST instrument, with SameBoy as my main focus for now. The bulk of the work is done, all that is remaining is sending MIDI over the link port, for software like LSDJ and mGB (there is similar chat going on in issue https://github.com/username_2/SameBoy/issues/18).
Now, I have the MIDI functionality working, but it's very laggy, and I think it's due to how I'm sending the bytes to SameBoy. I can seemingly send one byte to the link port for every call to `retro_run`, and since a MIDI message consists of 3 bytes, it takes 3 calls to `retro_run` before the full message is received, which is obviously creating massive delays between notes. Am I misunderstanding how the link port operates, or is there some way of sending multiple bytes per call to `retro_run`?
Answers:
username_0: OK maybe I asked that question prematurely.. I realise I should be using `GB_run` instead of `GB_run_frame`!
username_1: @username_0 wow that is a great idea!
And that could be transposed to other emulators too.
Do you already have some code ready?
username_0: I've got it working with mGB and LSDJ and it's AWESOME! However, due to the way VST plugins are loaded into the same process, it doesn't play too well with libretro (which only allows a single instance). As a result I've been building my own emulator wrapper that streams timed link port and controller data (so I can eventually support genMDM). Code will be up once that's done :)
username_1: cool!
What is the benefit of using libretro rather than the native SameBoy ?
username_2: Yes, you probably want to send bytes much faster than only once per frame. Also keep in mind that if the MIDI port is the serial master, you can send bytes in whatever rate you want, as long as the Game Boy software processes them quickly enough. Any more issues or can I close this issue?
Status: Issue closed
username_0: I found that sending 1 bit every ~488 ticks didn't work, but sending a whole byte every ~3907 ticks works great! |
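For context on those rates: assuming the standard ~4.194 MHz (4,194,304 Hz) Game Boy clock, one byte every ~3907 ticks works out to roughly 4,194,304 / 3907 ≈ 1,073 bytes per second, so a 3-byte MIDI message arrives in under 3 ms — versus roughly 50 ms when limited to one byte per ~60 Hz frame, which matches the lag described at the start of the thread.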
OpusCapita/styles | 323196102 | Title: Multiple backticks in a row visually collapsing into one char (Noto-Sans font)
Question:
username_0: Meta-Info | Value
-- | --
ExtProjectId | JCCMMN-01
Original Estimation | 8h
Remaining Estimation | 8h

**Related issue:** https://github.com/googlei18n/noto-fonts/issues/736 |
josdejong/mathjs | 937988425 | Title: CommonJS or AMD dependencies can cause optimization bailouts seededRNG.js
Question:
username_0: ```
"mathjs":` "^9.4.3",
```
```powershell
Angular CLI: 12.1.1
Node: 16.4.0 (Unsupported)
Package Manager: npm 7.19.1
OS: win32 x64
Angular: 12.1.1
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, localize, material, material-moment-adapter
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1201.1
@angular-devkit/build-angular 12.1.1
@angular-devkit/core 12.1.1
@angular-devkit/schematics 12.1.1
@angular/flex-layout 12.0.0-beta.34
@schematics/angular 12.1.1
rxjs 6.6.7
typescript 4.3.4
```
```powershell
ng build --configuration production
Warning: E:\Angular\VReport\node_modules\mathjs\lib\esm\function\probability\util\seededRNG.js depends on 'seedrandom'. CommonJS or AMD dependencies can cause optimization bailouts.
For more info see: https://angular.io/guide/build#configuring-commonjs-dependencies
Warning: initial exceeded maximum budget. Budget 500.00 kB was not met by 1.32 MB with a total of 1.81 MB.
```
Answers:
username_1: Hey Toso!
I have no experience with Angular, but I'm assuming that the warning is about [seedrandom](https://www.npmjs.com/package/seedrandom) being a CommonJS module. I'm afraid this is an upstream problem, see [#72 on seedrandom](https://github.com/davidbau/seedrandom/issues/72). Unless we want to maintain our own fork of seedrandom (and I don't think we do), there doesn't seem to be any simple way to fix this.
username_0: Hi all, any news on this?
I tried to optimise the imports and, as suggested, used:
```ts
import { create, detDependencies, evaluateDependencies, lusolveDependencies, sumDependencies } from 'mathjs';
const {det, evaluate, lusolve, sum } = create({
detDependencies, evaluateDependencies, lusolveDependencies, sumDependencies
})
```
But I still get this warning, even though I don't use seededRNG.js:
```powershell
Warning: E:\Angular\VReport\node_modules\mathjs\lib\esm\function\probability\util\seededRNG.js depends on 'seedrandom'. CommonJS or AMD dependencies can cause optimization bailouts.
For more info see: https://angular.io/guide/build#configuring-commonjs-dependencies
```
Latest version of mathjs and angular
```
"mathjs": "^10.1.0",
```
username_2: No news.
In https://github.com/davidbau/seedrandom/issues/72#issuecomment-750472919 I see there is a forked esm version. We could have a look at that.
Anyone interested in looking in to this?
username_0: @username_2 OK, thanks. Or, I don't know, is it possible to not use it or not include it as a dependency? Personally I don't use it. |
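For readers who mainly want to silence this particular build warning (it does not change the fact that `seedrandom` is bundled as CommonJS), the Angular CLI can allow-list CommonJS dependencies. A minimal, hedged `angular.json` fragment — `your-app` is a placeholder for your project name, and the option goes under the build target's options:

```json
{
  "projects": {
    "your-app": {
      "architect": {
        "build": {
          "options": {
            "allowedCommonJsDependencies": ["seedrandom"]
          }
        }
      }
    }
  }
}
```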
recharts/recharts | 383063139 | Title: Add className prop to <Cell />
Question:
username_0: ## Do you want to request a feature or report a bug?
Feature
## What is the current behavior?
Currently, classnames can't be appended to the outputted `<path>`s such as `.recharts-rectangle` or `.recharts-sector`. The only way to give color to a `<Cell />` is through directly adding its `fill`.
## What is the expected behavior?
We want to be able to append a custom className to any Cell, so a <path class="recharts-rectangle cell-something-green"> can be colored in green in any chart automatically through CSS, instead of passing it as a fill every time when initializing the component in JS. Imagine a scenario with 500 subjects that should always have the same color... Cheers!
Answers:
username_1: I'd also quite like a bit more control with className. I'd like to add a material style box-shadow to a pie chart / donut chart to help lift it from it's background.

username_1: I had a play and i've partly got something working see here 👉 https://jsfiddle.net/9yt3upr7/9/

This uses the `activeShape` prop http://recharts.org/en-US/examples/CustomActiveShapePieChart.
It's not quite what i wanted because
1. It could only apply a new style to the active shape, not all shapes (which is understandable as it's using the 'ACTIVE' shape method)
2. I was only able to add a stroke. I tried adding a filter / blur / drop shadow but it wasn't looking right.
I'd love to see if anyone else has any thoughts on this.
Ideally there would be a method to render a shadow svg under this segmented one that is just a full circle which i could either style using an svg filter or by adding a className.
Apologies if this isn't quite in line with your original query @username_0
Status: Issue closed
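Until a `className` prop (or similar) exists on `<Cell />`, the stopgap most recharts examples use is to drive each cell's `fill` from the data instead of CSS. A minimal sketch — `COLORS` and `data` are placeholder names, not from this thread:

```tsx
import React from 'react';
import { PieChart, Pie, Cell } from 'recharts';

// Placeholder palette and data; in the "500 subjects" scenario the palette
// lookup would be keyed by subject instead of by index.
const COLORS = ['#4caf50', '#2196f3', '#ff9800'];
const data = [
  { name: 'A', value: 400 },
  { name: 'B', value: 300 },
  { name: 'C', value: 300 },
];

export const ColoredPie = () => (
  <PieChart width={240} height={240}>
    <Pie data={data} dataKey="value" nameKey="name" outerRadius={100}>
      {data.map((entry, index) => (
        <Cell key={entry.name} fill={COLORS[index % COLORS.length]} />
      ))}
    </Pie>
  </PieChart>
);
```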
|
thebuilder/react-intersection-observer | 424767446 | Title: possibility to trigger change if window resized?
Question:
username_0: Hello,
I love the simplicity of this package! I am currently experimenting with the `useInView` hook and it works very neatly. I would like to know if it's somehow possible to watch changes on the `entry`. I can read `entry.boundingClientRect.width` and would love to listen to these changes on window resize as well. Is there any chance, or does the IntersectionObserver not trigger any change?
Cheers
Status: Issue closed
Answers:
username_1: The `IntersectionObserver` is only triggered when an element crosses the threshold.
So for this use case, you would add a `ResizeObserver` to the element, that notifies you of changes to the size. You could combine it with the `useInView` hook, so it's only active if the element is inside the viewport.
username_0: @username_1 thanks for the heads up - this is how I'm currently doing it. I thought there might be an option inside of IntersectionObserver, but good to hear that I am doing it the right way |
xlsdg/react-intl-tel-input-v2 | 470234131 | Title: SyntaxError: Unexpected identifier at Jest test
Question:
username_0: I've got an error while trying to check the component with unit tests using Jest:
```
({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest){import baseGet from './_baseGet.js';
^^^^^^^
SyntaxError: Unexpected identifier
at ScriptTransformer._transformAndBuildScript (node_modules/jest-runtime/build/script_transformer.js:403:17)
at Object.<anonymous> (node_modules/react-intl-tel-input-v2/dist/ReactIntlTelInput.cjs.js:8:28)
```
Status: Issue closed
Answers:
username_0: Fix:
change `import PhoneInput from 'react-intl-tel-input-v2';` import to
`import PhoneInput from 'react-intl-tel-input-v2/dist/ReactIntlTelInput';` |
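If you'd rather keep importing from the package root, another route that often works for this class of error — assuming your project already runs `babel-jest` — is telling Jest not to skip transforming the offending packages. A hedged `jest.config.js` fragment; the exact pattern depends on which untranspiled ES-module packages end up in your `node_modules`:

```js
// jest.config.js — adjust the package names to whatever actually ships
// untranspiled `import` statements in your dependency tree.
module.exports = {
  transformIgnorePatterns: [
    'node_modules/(?!(react-intl-tel-input-v2|lodash-es)/)',
  ],
};
```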
woocommerce/woocommerce | 792858687 | Title: Update products/deactivate plugins
Question:
username_0: I have a problem with WooCommerce after the last update (4.9.1).
I have had an issue with updating products and plugins. When I change the text or variations of a product and click update the little wheel spins round until the connection times out and I get the message: 504 Gateway Time-out. The server didn't respond in time.
The same happens when I try to deactivate or delete a plugin – nothing happens and then the connection times out.
Changing the theme and installing an older version of WooCommerce did not help.
Any ideas?
*Sorry for my bad english
Answers:
username_1: Hi there! I have the exact same problem. I even set up a new Wordpress site and installed WooCommerce. After installation I am not able to deactivate or delete any plugins. Seems to be a WooCommerce problem?
username_0: I found this https://www.wpbeginner.com/wp-tutorials/fix-wordpress-memory-exhausted-error-increase-php-memory/
username_1: Hi Petru! I have already changes the memory limit. Unfortunately it has no affect. I also checked all plugins and themes. Makes still no difference.
username_0: Add this to `functions.php`:
`public_html/wp-content/themes/storefront`
```php
// Disable the remote inbox notifications feature that triggers the timeouts.
add_filter( 'woocommerce_admin_features', 'pk_woocommerce_admin_features' );
function pk_woocommerce_admin_features( $features ) {
    if ( ( $key = array_search( 'remote-inbox-notifications', $features ) ) !== false ) {
        unset( $features[ $key ] );
    }
    // A filter callback must return the (possibly modified) value.
    return $features;
}
```
username_1: Hi Petru! I found a similar function as well. But thank you very much for sharing! Hope WC will fix this within the next few days.
username_2: @username_1 does setting `'remote-inbox-notifications' => false,` works for you? It's under `woocommerce/packages/woocommerce-admin/includes/feature-config.php`
username_3: This workaround published here seems to solve this:
https://github.com/woocommerce/woocommerce-admin/issues/6168
username_4: This is caused by the run method in the RemoteInboxNotificationsEngine class.
Version 4.9.1
This occurred when I was updating a separate plugin, as the code is called when any plugin is activated/deactivated. It is hooked to the `activated_plugin` and `deactivated_plugin` hooks and is called for all plugins, not just WooCommerce.
The code calls `DataSourcePoller::read_specs_from_data_sources()` and, if successful, it recursively calls the `run` method again. Internally each call adds to PHP's stack and eventually it runs out of stack space. Code execution stops at that point.
Looking at the code, the logic should be inverted so that it only calls the `run` method if `read_specs_from_data_sources()` fails. Adding a `!` before the call (`!DataSourcePoller::read_specs_from_data_sources()`) inverts the return state and solved the problem for my client.
I'm not sure that creating a recursive loop like this is the best idea as there is always the possibility that a terminal loop could be created. I'm sure the developers had a reason for the code but I would have put the logic in one method where I could exit gracefully if it had failed repeatedly X times.
If needs be, i'll create a pull request with my fix, but not sure when i'll have chance. In the meantime hopefully this will help someone else, till the developers can fix it properly.
username_5: Hi All,
Thanks for raising this issue, we are looking into it, stay tuned for updates.
This issue is primarily being discussed in the WooCommerce Admin repository as early signs are pointing towards usage of the Remote Inbox System which is developed there.
https://github.com/woocommerce/woocommerce-admin/issues/6168
It would be helpful for us to troubleshoot this issue if anyone experiencing it could share your current System Status Report. You can get it by navigating to the WooCommerce / Status section of your site. Once there, click on the Get system report button and then copy it by clicking on the Copy for support button. Then paste it here in a comment.
I'm leaving this issue open for feedback for now until we know more.
username_1: There was an update for WC 4.9.2. Unfortunately the update does not affect the problem. Anyone else?
username_5: Hi @username_1, this was an unrelated update, you can find more details on it here https://developer.woocommerce.com/2021/01/25/woocommerce-4-9-2-fix-release/
username_6: We've had the same issue on one of our sites and have had to roll back to 4.8.0 in order to make it functional which is not ideal. The first roll back to 4.9.0 saw the problem persist so it's clearly an issue with that update. I haven't checked any of the others yet, but will take a look at them today.
Client was unable to update/edit any products which then triggered a 504 timeout. The update command generated the following string:
```
[25-Jan-2021 16:45:26]
script_filename = /www//wp-admin/post.php
[0x00007fb3a0a23c20] curl_exec() /www/wp-includes/Requests/Transport/cURL.php:162
[0x00007fb3a0a237a0] request() /www/wp-includes/class-requests.php:381
[0x00007fb3a0a234c0] request() /www/wp-includes/class-http.php:394
[0x00007fb3a0a228d0] request() /www/wp-includes/class-http.php:626
[0x00007fb3a0a22800] get() /www/wp-includes/http.php:162
[0x00007fb3a0a22750] wp_remote_get() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/DataSourcePoller.php:71
[0x00007fb3a0a22530] read_data_source() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/DataSourcePoller.php:51
[0x00007fb3a0a22420] read_specs_from_data_sources() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:85
[0x00007fb3a0a222e0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a221a0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a22060] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21f20] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21de0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21ca0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21b60] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21a20] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a218e0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a217a0] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21660] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
[0x00007fb3a0a21520] run() /www/wp-content/plugins/woocommerce/packages/woocommerce-admin/src/RemoteInboxNotifications/RemoteInboxNotificationsEngine.php:86
```
Either way, we couldn't get it functional no matter what we tried so, while the roll back was our last resort as they'd lose some transaction data, it was necessary in the end.
Status: Issue closed
username_5: Thanks all for your patience and helpful information.
**This issue has now been resolved by a fix in woocommerce.com** this means you shouldn't need to take any further action and the issue should no longer occur. More info on this in https://github.com/woocommerce/woocommerce-admin/issues/6168
I am closing this issue now. Please feel free to comment on it in case we missed something. We’d be happy to take another look. |