Mcents/little_shop
246611631
Title: capybara login tests refactor Question: username_0: all the places we use ` page.all(:css, '.login-button-unique')[0].click ` can be refactored to ` login_button.click ` or ` logout_button.click ` with some classic ruby encapsulation: ` logout_button = page.all(:css, '.login-button-unique')[0] `<issue_closed> Status: Issue closed
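A minimal sketch of the suggested encapsulation, written as a hypothetical Capybara/RSpec support module (the module and file names are illustrative; the selector is the one from the snippet above):

```ruby
# spec/support/session_helpers.rb (illustrative location)
module SessionHelpers
  # Encapsulate the repeated selector once, so specs read as
  # login_button.click instead of page.all(...)[0].click.
  def login_button
    page.all(:css, '.login-button-unique')[0]
  end
end

RSpec.configure do |config|
  config.include SessionHelpers, type: :feature
end
```

A logout_button helper would be defined the same way with its own selector.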
DerPavlov/Cannons
611842480
Title: Gunpowder loading/block spawn duration addition. Question: username_0: First of all, thanks for updating the images on the wiki! Not sure if you are still up for making some additions, but here are some I'd really like if you are. Right, so I know there's a way of autoloading your cannon with redstone and a chest. I was wondering if you could add an option which automatically loads gunpowder into the cannon when you put a projectile in. It removes the need to use a lever and set up redstone etc., and you still get the novelty of adding the projectile. I was also wondering if you could add an option which gives the blocks that can be spawned via 'spawnOnExplosion' a duration. Many thanks! Answers: username_1: If you want to use a projectile which contains gunpowder and a shell, I recommend using "needsGunpowder: false". This way you can directly load the projectile, without loading the gunpowder. Or do you want to detect the projectile when you put it in the chest and automatically notify the cannon next to it to load? Do you mean the blocks for 'spawnOnExplosion' should vanish after a certain time? username_0: I saw the "needsGunpowder: false" setting but was hoping to still include the novelty of adding gunpowder, just without the pain of manually putting it in, and without needing the redstone contraption to trigger twice to load both gunpowder and the projectile. I'd rather cannons were neater, as they will be fitted on smaller ships. My thinking was that upon loading a projectile it would automatically take gunpowder out of your inventory and load it into the cannon. If you didn't have any, you'd then have to go grab some and manually add it to the cannon as normal. username_1: All of the points are possible to implement; I will have a look. username_1: I added an option to autoload the gunpowder when you load the projectile. It is a little bit strange because you don't get a message that the gunpowder was loaded. username_0: Ah, thanks a lot! I was wondering if the duration for blocks spawned on explosion is doable; that would be quite a neat addition. The server I'm helping out would greatly benefit from a specific use of this. username_1: It can definitely be done. There is already code in the program which removes the smoke, but those are only virtual. However, I am trying to limit myself to bug fixes at the moment. username_0: No worries, we have found a workaround. It's not a huge priority, just something I might use for a project down the line.
asriz7777/learning_testsuites
370964568
Title: ASIYATESTING : ApiV1ProjectsIdProjectChecksumsGetAnonymousInvalid Question: username_0: Project : ASIYATESTING Job : UAT Env : UAT Region : FXLabs/US_WEST_1 Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 17 Oct 2018 08:51:02 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/0LMmcSjJ/project-checksums Request : Response : { "timestamp" : "2018-10-17T08:51:02.333+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/0LMmcSjJ/project-checksums" } Logs : Assertion [@StatusCode == 401 OR @StatusCode == 403] resolved-to [404 == 401 OR 404 == 403] result [Failed] --- FX Bot ---
mqttjs/MQTT.js
922591562
Title: cleanStart flag for MQTT protocol version 5 Question: username_0: I am unable to find the "cleanStart" flag syntax to test a persistent session in MQTT protocol version 5. Could you please point out the source for the relevant syntax? Thanks. Answers: username_1: `clean: true, set to false to receive QoS 1 and 2 messages while offline` should do the trick. Status: Issue closed
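To make the accepted answer concrete, here is a hedged sketch of connecting with MQTT 5 and a persistent session; the option names are from the MQTT.js connect options, and the broker URL and client id are placeholders. In MQTT 5 the session lifetime is governed by the sessionExpiryInterval property rather than a cleanSession flag:

```js
const mqtt = require('mqtt')

const client = mqtt.connect('mqtt://broker.example.com', { // placeholder broker
  protocolVersion: 5,        // speak MQTT 5
  clean: false,              // maps to MQTT 5 "clean start = false"
  clientId: 'my-stable-id',  // must stay stable for the session to resume
  properties: {
    sessionExpiryInterval: 3600 // keep session state for an hour (seconds)
  }
})
```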
zaproxy/zaproxy
99236270
Title: Update to use a more recent user-agent Question: username_0: We are seeing some sites return errors with the current 'default' user agent: "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0;)" and I've also seen WAFs block access as well. Updating to a more recent one would help, but it would also be useful to make this easily configurable via the Options, e.g. in the Connection section Status: Issue closed Answers: username_1: I have a question about this: even though I've changed the user agent in the connection screen to one of the presets "Firefox 39.0 Win 8.1 64-bit" (Mozilla/5.0 Windows NT 6.3; WOW64; rv:39.0 Gecko/20100101 Firefox/39.0), when I inspect the headers of the requests that the spider makes I see: User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0;) How is that? How do I make it use the user agent I've chosen? Thank you. username_0: Very strange - all you _should_ need to do is change it via the Connection screen :/ How are you examining the request headers? username_1: I'm looking at them in the upper right window, under the request tab. I've tested a script to change the user agent on each request, and following that I can see, in the same screen above, that the user agents do change. But not if I make the change in the connection screen? Update: Looking at it now, after having started a new ZAP session, it seems that the change has gone into effect. Maybe it will not change for the current session (even though the session is idle and has no ongoing spidering)? username_2: Hello, I found a solution: Go to Tools > Replacer Options > Add > set "Match Type" to "Request Header String" > enter a description > enter your old user agent in "Match string" > enter your new user agent in the "Replacement String" textbox > click "Enable" > click "Save". Works like a charm
alphagov/asset-manager
297182434
Title: Add Smokey or End-to-End test for viewing draft attachments Question: username_0: We want to be confident that users have to be signed in to view draft assets in Asset Manager. We should add a Smokey or End to End test (whichever is most suitable) to ensure this functionality continues to work as expected. Answers: username_0: I've added a Smokey test to ensure we need to signin before accessing draft-assets in https://github.com/alphagov/smokey/pull/341.
sejinkim1904/robot_sports_league_api
726943972
Title: User can create a new team Question: username_0: ``` As a user When I send a POST request to '/api/v1/teams' with a valid email, password, and team name it returns the newly created team record with an authentication token and a status code of 201 If unsuccessful, it returns a status of 400 and an error message ```<issue_closed> Status: Issue closed
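A quick sketch of exercising the endpoint described in the story, assuming a local server and a JSON payload; the host, port, and field names beyond those in the story are hypothetical:

```sh
curl -i -X POST http://localhost:3000/api/v1/teams \
  -H 'Content-Type: application/json' \
  -d '{"email": "user@example.com", "password": "secret", "name": "Robo Rovers"}'
# expect HTTP 201 with the team record and an auth token,
# or 400 with an error message on bad input
```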
cybertec-postgresql/pgwatch2
564117186
Title: Installation of pgwatch2 - Non Docker Question: username_0: Hi Team, In the documentation of pgwatch2 installation of non-docker individual components, I see you have step2 to install influx db as shown below: ============================== Install InfluxDB (for Postgres as metrics storage DB see instructions here) INFLUX_LATEST=$(curl -so- https://api.github.com/repos/influxdata/influxdb/tags | grep -Eo '"v[0-9\.]+"' | grep -Eo '[0-9\.]+' | sort -nr | head -1) wget https://dl.influxdata.com/influxdb/releases/influxdb_${INFLUX_LATEST}_amd64.deb sudo dpkg -i influxdb_${INFLUX_LATEST}_amd64.deb ============================== Q 1. We are installing on CentOS 7 --- would you please provide the URL to download InfluxDB for CentOS 7. And would you also modify the above instructions as applicable for CentOS 7. Q 2. Why do we need the InfluxDB installation -- is it mandatory for pgwatch2 non-docker individual installation? Any help/clarifications will be greatly appreciated. Thanks and regards, Bikram Answers: username_1: Q1: Please look here for downloads, googling also helps: https://portal.influxdata.com/downloads/ Q2: No, it's not mandatory. Only when you want to store your metrics in InfluxDB Status: Issue closed
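For reference, a hedged CentOS 7 equivalent of the Debian steps quoted in the question, assuming InfluxData publishes matching `.x86_64.rpm` packages under the same release URL pattern (worth verifying on their downloads page):

```sh
INFLUX_LATEST=$(curl -so- https://api.github.com/repos/influxdata/influxdb/tags | grep -Eo '"v[0-9\.]+"' | grep -Eo '[0-9\.]+' | sort -nr | head -1)
wget https://dl.influxdata.com/influxdb/releases/influxdb-${INFLUX_LATEST}.x86_64.rpm
sudo yum localinstall influxdb-${INFLUX_LATEST}.x86_64.rpm
```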
Caphyon/clang-power-tools
498600719
Title: Missed conditional prop sheet Imports Question: username_0: In the attached files, you can see `hello_world.vcxproj` begins with these lines: ```xml <Import Project="$([MSBuild]::GetPathOfFileAbove('globals.props', '$(MSBuildThisFileDirectory)'))" /> <Import Project="$(MyProjectRoot)imported_second.props" /> <ImportGroup Label="PropertySheets"> <Import Condition="'$(IsWindows)'=='true'" Project="$(MyProjectRoot)some_lib.props" /> </ImportGroup> ``` The first, `globals.props`, defines `$(MyProjectRoot)`. The second, `imported_second.props` also seems to resolve correctly despite using a property defined in the prior prop sheet. I'm guessing this works correctly because the verbose output shows it being processed. Inside `imported_second.props`, `$(IsWindows)` is conditionally set for x64. However, the verbose output does not show `some_lib.props` being processed at all and that one sets the include path needed to compile. Therefore, if you build the attached files for x64 you will see CPT cannot find `some_lib.h` even though it compiles and runs correctly with MSBuild. [cpt_bug_3.zip](https://github.com/Caphyon/clang-power-tools/files/3655434/cpt_bug_3.zip) Answers: username_1: Hello, This behavior is actually a duplicate of #685 , caused by early evaluation of `Platform` and `Configuration` MSBuild properties. That fix will apply to this issue as well. It will be available in the next version of Clang Power Tools (v5.3). We really appreciate the feedback. Regards, Gabriel. Status: Issue closed
littleflute/blog
474781406
Title: blcd3_cd01: Imaginary roads [CD] by <NAME> Question: username_0: https://library.ci.corvallis.or.us/?id=26928003301282#section=resource&resourceid=2311603&currentIndex=0&view=fullDetailsDetailsTab Answers: username_1: https://username_0.github.io/blcd3/cd01 username_1: https://mp.weixin.qq.com/s?__biz=MzA5MzMwNTc0Ng==&mid=2247488065&idx=1&sn=ee3eb3ace3f80c66bc99f79f6097f506&chksm=905ebcbca72935aaa220ddc8ab025377b2af393c1e577f02123612ab190f1d567628f15230f0&scene=21#wechat_redirect Status: Issue closed
DavidFW1960/bom-weather-card
789558570
Title: Card Width Issue Using Companion App Question: username_0: I migrated to your wonderful card as part of moving from Darksky to OpenWeatherMap. On desktop, it works perfectly, however, in the Android companion app, the card causes the page to become wider than the screen as shown in the images below (the page scrolls side to side instead of having chevrons on the top bar). I have tried both with my layout tweaks and with the default configuration and it makes no difference. Removing the card returns the page width to normal. I have skimmed the code looking for a culprit but I am no expert... ![screenshot1](https://user-images.githubusercontent.com/19308297/105117264-d548ae00-5a91-11eb-998b-02399048cf2f.jpg) ![screenshot2](https://user-images.githubusercontent.com/19308297/105117266-d548ae00-5a91-11eb-8cf2-4642bdf811dc.jpg) Answers: username_1: no idea and I don't have/use android. It doesn't do this in iOS app. Have you tried maybe putting the card in a vertical stack? I use it in a stack. I can't replicate this in any case and seems maybe to be an android app issue.... username_0: No problem! I do have the card in a vertical stack. I'll keep an eye on the issues for the app as it does sound like an issue there although I am only seeing it with this card (so far...). username_1: Someone reported a similar issue with the bom radar card but of course you're not in Australia anyway.... Perhaps ask the android guys... they might be able to assist with advice as to what might be causing it.. username_0: Thanks! I'll keep an eye on the radar card issue even though the card doesn't apply to me and will ask the Android guys for advice. I just ran through the code again in WinMerge alongside the Darksky card's code and nothing jumps out as the culprit so I believe that is where the issue lies. Status: Issue closed
opensalt/opensalt
273626360
Title: Document possible issues importing from ASN Question: username_0: Now that we are expecting items to have markdown syntax and never HTML, we should document that in the user guide. Specifically, a manual pass will be required for things like importing from ASN, where anything with HTML will need to be fixed and there could also be accidental markdown syntax that gets interpreted. Answers: username_1: Could be solved with #253 username_0: This should be less of an issue now that we decided to allow some HTML (specifically ul, ol, li, b, i, u, br, p tags) (implemented with #256). The documentation should still be updated mentioning that most HTML tags will be removed. Status: Issue closed
GlenNicholls/solar_tracking_project
376651872
Title: [PISW] Figure out how to enable specific debug log level for specific sub-module Question: username_0: https://github.com/username_0/solar_tracking_project/blob/ff0ca15a51329c8713c68a152ccea38717c1dcd2/src/main.py#L19-L34 Answers: username_0: logger_name = 'main_app' sub_1_name = 'sub_module' inst = some_class(logger_name, sub_1_name) This will yield the logger name `main_app.sub_module` in the log output; we can then easily use the above link to have a higher-level interface selectively turn on debug-level logging for this module. The class should now look like this: class some_class(object): def __init__(self, logger='main_logger', module_name='module_name'): self.logger = logging.getLogger(logger + '.' + module_name) This example can be seen in the `power_measurement.py` package; I will also start adding this compatibility. Status: Issue closed username_0: closed per 29b9c7f
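A minimal self-contained sketch of the hierarchical-logger pattern described above, using the names from the example; the point is that one sub-module can be flipped to DEBUG without touching the rest:

```python
import logging

logging.basicConfig(level=logging.INFO)

# Flip just one sub-module to DEBUG by addressing its dotted logger name.
logging.getLogger('main_app.sub_module').setLevel(logging.DEBUG)

class SomeClass(object):
    def __init__(self, logger='main_app', module_name='sub_module'):
        # Child loggers inherit handlers from 'main_app' but keep their own level.
        self.logger = logging.getLogger(logger + '.' + module_name)

inst = SomeClass()
inst.logger.debug('shown: this sub-module is at DEBUG')
logging.getLogger('main_app.other').debug('hidden: still at INFO')
```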
catc/displace
487172865
Title: window.displacejs.destroy(); Question: username_0: I couldn't figure out how this works or removes the event listener. Can you please guide me on its usage? Thanks, tbh great library. Answers: username_1: To destroy, you have to call the `destroy` method on the displace instance you created. So you need a reference to the displace instance you created earlier. ```js // create displace instance const d = displace(document.querySelector('.some-div'), {}); // destroy the instance d.destroy() ``` Let me know if you have any other questions. Status: Issue closed
tdewolff/canvas
563307233
Title: Why splitting bezier for length calculation? Question: username_0: I have been reading your blog and part of your code to learn more about arc length computation for cubic bezier curves, and found that in [cubicBezierLength()](https://github.com/username_1/canvas/blob/master/path_util.go#L468) you are splitting the curve before applying the Gauss quadrature to calculate the length of each piece of curve. As I understand it this should not be necessary, because you can choose `t0` and `t1` (or `a` and `b` in your case) to calculate the length of each piece without splitting. Is this a possible optimization for your code? Answers: username_1: I think it gave slightly better results by splitting the Béziers, but not too much. It really depends on the curve, as some curves are very hard to calculate accurately. Your mileage may vary, but it should work fine without splitting. This needs better testing to see if it really improves accuracy. username_0: Thanks. I have also tried all kinds of things to improve accuracy for the worst-case curves. Someone recommended using Romberg's method and [I actually implemented](https://github.com/Pomax/BezierInfo-2/issues/77#issuecomment-584532772) it, but results were worse (by a factor of 2) than Gauss quadrature. One thing that actually improves accuracy quite a bit is splitting the curve (or setting intermediate t values) at "fake curvature extrema", where `x'*x'' + y'*y'' = 0`. These roots are not the real curvature extrema, but often very close, and if there is a cusp they include it. Even though calculating the roots of this is not exactly slow, it takes as long as the whole Gauss quadrature with n=16. This shows how fast and awesome Gauss quadrature is, but I wish there was a simple way to improve accuracy for edge cases. Anyway, thanks for your great blog post, it helped me a lot in understanding how to approach arc length approximation.
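For readers following along, a self-contained sketch of the no-splitting variant being discussed (not the library's actual code): arc length is the integral of |B'(t)| over [t0, t1], and the Gauss-Legendre nodes on [-1, 1] are mapped onto that interval. A 5-point rule is used here for brevity:

```go
package main

import (
	"fmt"
	"math"
)

type Point struct{ X, Y float64 }

// 5-point Gauss-Legendre nodes and weights on [-1, 1].
var nodes = []float64{-0.9061798459386640, -0.5384693101056831, 0, 0.5384693101056831, 0.9061798459386640}
var weights = []float64{0.2369268850561891, 0.4786286704993665, 0.5688888888888889, 0.4786286704993665, 0.2369268850561891}

// speed returns |B'(t)| for the cubic Bézier with control points p0..p3.
func speed(p0, p1, p2, p3 Point, t float64) float64 {
	u := 1 - t
	dx := 3*u*u*(p1.X-p0.X) + 6*u*t*(p2.X-p1.X) + 3*t*t*(p3.X-p2.X)
	dy := 3*u*u*(p1.Y-p0.Y) + 6*u*t*(p2.Y-p1.Y) + 3*t*t*(p3.Y-p2.Y)
	return math.Hypot(dx, dy)
}

// cubicBezierLength integrates |B'(t)| over [t0, t1] without splitting,
// by mapping the fixed nodes from [-1, 1] onto [t0, t1].
func cubicBezierLength(p0, p1, p2, p3 Point, t0, t1 float64) float64 {
	c, d := (t1-t0)/2, (t1+t0)/2
	sum := 0.0
	for i, x := range nodes {
		sum += weights[i] * speed(p0, p1, p2, p3, c*x+d)
	}
	return c * sum
}

func main() {
	// Control points on a straight line of length 3: exact arc length is 3.
	fmt.Println(cubicBezierLength(Point{0, 0}, Point{1, 0}, Point{2, 0}, Point{3, 0}, 0, 1))
}
```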
spacetelescope/cubeviz
303721864
Title: Testing v0.1.dev534 Question: username_0: (Apologies in advance if some of these features are not supposed to work yet.) v0.1.dev534 Region selection doesn't work with the MaNGA cube I've loaded. Can select a ROI in SpecViz, but cannot Apply to Cube. Data Processing -> Collapse Cube doesn't seem to do anything. "a" and "d" hot keys don't change the slice displayed. "f" for freezing doesn't seem to work. same for "p" for pull When I have a dialog box displayed and I click outside of glue, the box disappears. When I click back in glue, it appears again. View -> Wavelength Units: When I change the units the spectrum display shows a super large range, including wavelengths where there are no data. Also, this causes the IVAR and MASK cubes to rescale so that they are both all black. Same thing happens if I choose other options in this menu other than Angstroms. If I choose mm, the spectrum disappears entirely. View -> RA - Spectral and DEC - Spectral don't seem to do anything. Data Processing -> Arithmetic Operations TEST2 = FLUX + 100 However, the spectrum doesn't seem to reflect this change. The Save Dialog box makes it look like I will overwrite the currently loaded cube. Phew, I hit "OK" and see this is not the case now. The saved file has no suffix if I don't type one. Perhaps we should automatically append one? If I save a single smoothed component (as a single-component cube with no ivar or mask) and read it into a fresh glue session, it loads the component in all three viewers. It should only display it in one viewer. Unable to save session. The error is: "Failed to save session Don't know how to serialize <cubeviz.data_factories.DataConfiguration object at 0x18335e6748> of type <class 'cubeviz.data_factories.DataConfiguration'>" Smoothing -"Info: Smoothing previews are displayed on CubeViz's left and single image viewers." Since we will eventually allow the users to move the viewers around, and some users may be unsure about what the single image viewer is, how about stating something like this instead: "A preview of the smoothing operation is displayed in Viewer 1." -We need to state somewhere that the preview smooths the one channel displayed, and clicking on "okay" smooths the entire cube. Contours -Perhaps change "Custom Component" to "Other Component" because custom implies that the user can create this component somehow. Viewer Options -scaling (i.e. the slider bar next to "color/opacity" in the viewer options panel) shouldn't be linked across the 3 viewers as a default -I can't think of a use case where one would want to scale the 3 viewers at the same time, so perhaps remove the "sync" button. -For playing a movie to run through channels, can I change the speed? -The "limits" in "Plot Options - 2D Image" don't seem useful since they don't correspond to the limits shown on the image. -If I change the limits, I cannot undo it using the Edit -> Undo option. @username_1 @drdavella @astrofrog @robelgeda @javerbukh Answers: username_1: I'll go through this list and check things. username_2: Here are my testing notes on **v0.1.dev540**, following the items in #257 * Load your favorite datasets - KMOS cubes only have 2 extensions (data + noise) but the loader loads 3 views (1x data + 2x noise). It would make sense to leave the 3rd viewer blank if there is no extension * Spatial smoothing - All good (I like the preview!) * Spectral smoothing - The smoothing kernel needs units. I *think* it's currently pixels but I'm not sure...
- It would be great to have an option to plot a smoothed spectrum weighted by the inverse variance (from the noise cube). This is useful to smooth out bright skylines in near-IR observations * Cube collapsing over wavelength ranges - Worked fine, though it would be nice if the smoothed cube appeared automatically in one of the viewers * Basic image control (moving through slices, overlays etc) - Would it be possible to show 2D ROIs as a contour? I.e. have the option to show only the edges rather than the whole ROI - Under contour options show the default parameters (at the moment everything is = 0) - Hovering over the 2D viewer, pixel values (~1E-20) show as 0.000 --> need to display in scientific notation - Hot keys a, d, f and p work for me * Interaction with 1D spectra functionality - I still can't see a line showing which wavelength position the 2D viewer is at - When hovering can you show the wavelength pixel as well as physical units? - If you drag an ROI in specviz which was already linked to a cubeviz viewer component to a new wavelength position, I think that the 2D viewer should move automatically to the new position. * Save calculated data - At the moment unless you type the file extension it doesn't add one (might be a macOS High Sierra thing) - I don't understand what saving a subset does. I thought it might zero all the flux outside the ROI for example, but it didn't seem to do anything. - You can save the 2D viewers as images, is it also possible to save them as 2D fits files? - Can you save 1D spectra from specviz as fits/txt AND as images? username_3: @username_0 can this issue be closed? username_0: Not until Charlotte & I are able to run our use cases through CubeViz. username_1: From @username_0's comments above. These are all fixed, worked, or I created an issue. * Region selection doesn't work with the MaNGA cube I've loaded. * works for me * Can select a ROI in SpecViz, but cannot Apply to Cube. * The "Apply to Cube" is for things like smoothing operations * Data Processing -> Collapse Cube doesn't seem to do anything. * worked for me * "a" and "d" hot keys don't change the slice displayed. * "f" for freezing doesn't seem to work. same for "p" for pull * all worked for me * When I have a dialog box displayed and I click outside of glue, the box disappears. When I click back in glue, it appears again. * Issue #320 to track this. * View -> Wavelength Units: When I change the units the spectrum display shows a super large range, including wavelengths where there are no data. Also, this causes the IVAR and MASK cubes to rescale so that they are both all black. Same thing happens if I choose other options in this menu other than Angstroms. * This seems to be fine, but found another bug https://github.com/spacetelescope/specviz/issues/683 * If I choose mm, the spectrum disappears entirely. * The rescaling appears to be working, but see above bug. * View -> RA - Spectral and DEC - Spectral don't seem to do anything. * Those are now removed from the menu and will be implemented "later". * Data Processing -> Arithmetic Operation TEST2 = FLUX + 100 However, the spectrum doesn't seem to reflect this change. * this works, just a matter of intent in specviz on arithmetic operations. Currently a new spectrum is not displayed * The Save Dialog box makes it look like I will overwrite the currently loaded cube. Phew, I hit "OK" and see this is not the case now. The saved file has no suffix if I don't type one. Perhaps we should automatically append one?
* Fixed in #324 * If I save a single smoothed component (as a single-component cube with no ivar or mask) and read it into a fresh glue session, it loads the component in all three viewers. It should only display it in one viewer. * This is the current way it happens. When we have time to re-org the GUI it will only bring up one image viewer. * Unable to save session. The error is: "Failed to save session Don't know how to serialize <cubeviz.data_factories.DataConfiguration object at 0x18335e6748> of type <class 'cubeviz.data_factories.DataConfiguration'>" * This is still a current issue (created a ticket https://github.com/spacetelescope/cubeviz/issues/519) Smoothing * "Info: Smoothing previews are displayed on CubeViz's left and single image viewers." Since we will eventually allow the users to move the viewers around, and some users may be unsure about what the single image viewer is, how about stating something like this instead: "A preview of the smoothing operation is displayed in Viewer 1." * This now states "can be selected in the viewer drop-down menu" * We need to state somewhere that the preview smooths the one channel displayed, and clicking on "okay" smooths the entire cube. * This was fixed https://github.com/spacetelescope/cubeviz/pull/336 Contours * Perhaps change "Custom Component" to "Other Component" because custom implies that the user can create this component somehow. * Fixed by #307 Viewer Options * scaling (i.e. the slider bar next to "color/opacity" in the viewer options panel) shouldn't be linked across the 3 viewers as a default * Fixed in https://github.com/spacetelescope/cubeviz/pull/384 * I can't think of a use case where one would want to scale the 3 viewers at the same time, so perhaps remove the "sync" button. * Fixed in https://github.com/spacetelescope/cubeviz/pull/384 * For playing a movie to run through channels, can I change the speed? * That is a glue thing, not sure about speed there. Would have to open a ticket in glue * The "limits" in "Plot Options - 2D Image" don't seem useful since they don't correspond to the limits shown on the image. * Discussion on https://github.com/spacetelescope/cubeviz/issues/327 not a bug * If I change the limits, I cannot undo it using the Edit -> Undo option. * The Edit -> Undo does not seem to be selectable given changes for limits etc. So, I think this is a Glue thing. It is probably fine the way it is; we can think about how "Undo" might work. Or, maybe, we could just remove the Undo menu. username_1: From @username_2's comments above: * Load your favorite datasets KMOS cubes only have 2 extensions (data + noise) but the loader loads 3 views (1x data + 2x noise). It would make sense to leave the 3rd viewer blank if there is no extension * This would be part of a "free the viewers" set of work https://github.com/spacetelescope/cubeviz/issues/304 * The smoothing kernel needs units. I think it's currently pixels but I'm not sure... * Appears to be fixed in https://github.com/spacetelescope/specviz/pull/395 * It would be great to have an option to plot a smoothed spectrum weighted by the inverse variance (from the noise cube). This is useful to smooth out bright skylines in near-IR observations * This should now be possible: create a smoothed cube, then use the arithmetic operation to divide by the IVAR, though there is a bug report for the last step (displaying the calculated arithmetic result as a spectrum).
* Cube collapsing over wavelength ranges Worked fine, though it would be nice if the smoothed cube appeared automatically in one of the viewers * This works now * Basic image control (moving through slices, overlays etc) * Would it be possible to show 2D ROIs as a contour? I.e. have the option to show only the edges rather than the whole ROI * not currently an option * Under contour options show the default parameters (at the moment everything is = 0) * This does have proper default parameters * Hovering over the 2D viewer, pixel values (~1E-20) show as 0.000 --> need to display in scientific notation * they are now in scientific notation * Interaction with 1D spectra functionality * It isn't possible to change flux units * Works now... * I still can't see a line showing which wavelength position the 2D viewer is at * this works * When hovering can you show the wavelength pixel as well as physical units? * this works (if there are units in the input file) * If you drag an ROI in specviz which was already linked to a cubeviz viewer component to a new wavelength position, I think that the 2D viewer should move automatically to the new position. * Once a collapsed cube is created from a spectral region, by design, the cube is not updated. If this is desired, then we will have to work on it (@username_3 @username_0) * Save calculated data * At the moment unless you type the file extension it doesn't add one (might be a macOS High Sierra thing) * I believe that is a glue thing but will discuss with @astrofrog * I don't understand what saving a subset does. I thought it might zero all the flux outside the ROI for example, but it didn't seem to do anything. * It does seem to save a FITS file (if fits is selected), but the extension is missing, as noted above. * You can save the 2D viewers as images, is it also possible to save them as 2D fits files? * Saving appears to work (though you must specify the extension) * Can you save 1D spectra from specviz as fits/txt AND as images? * I don't see how to, and the export button doesn't seem to do anything; opened a SpecViz ticket https://github.com/spacetelescope/specviz/issues/685 username_1: Discussed with @username_3 and we are good to close this. Status: Issue closed
dask/dask
476625364
Title: Can't upload one specific file Question: username_0: Trying to upload a specific python file to the workers, through `upload_file`, throws an exception "**Exception: object Future can't be used in 'await' expression**". I'm running out of ideas of what the problem could be, which is why I need your help. These are my current hypotheses: - It somehow doesn't like that I'm uploading a script that contains potentially unwanted pandas usages, as it allows uploading the file if I remove certain `apply` methods (although, if I just remove some surrounding code and leave the supposedly conflicting pandas line there, it also starts working). - It has a certain memory limit as to the file size that I want to upload (which would be strange as the file is only 10KB and other dummy examples with several lines of code work just fine). Here's the full error message: ![image](https://user-images.githubusercontent.com/19359510/62433315-cf3f0000-b72b-11e9-82dd-88fcb08082cd.png) What I'm doing in the notebook before calling `upload_file`: ![image](https://user-images.githubusercontent.com/19359510/62433473-715ee800-b72c-11e9-8c1e-22c949fb0ee8.png) The file that I'm trying to upload: [utils.py.zip](https://github.com/dask/dask/files/3465661/utils.py.zip) Please help me as I'm really feeling clueless here! Answers: username_1: @username_0 generally, text tracebacks are better than screenshots. Can you post the output of `client.get_versions(check=True)`?
dasariramacharan/100-days-of-code
205428150
Title: Direct URLs don't work Question: username_0: Actual: e.g. http://localhost:49410/index.html/projects does not work, but the application loads this as the default page and URL. The error is 404 Expected: Direct URLs should work Answers: username_0: ref: https://github.com/angular/angular/issues/11046 Status: Issue closed
EnigmaDragons/ProjectNeon
502231261
Title: DeckBuilder: On Main Card pane fill paged with all Matching Class Cards Question: username_0: Main pane should display all cards in the Library that match the class of the Current Character. Next/Previous buttons should be hidden if there is no Next/Previous page. Create a bunch of test cards for at least one class for visualizing this feature. Estimate: 2/units<issue_closed> Status: Issue closed
cmdcolin/mafviewer
227978027
Title: MAF Track Not Showing Up Question: username_0: JBrowse version 1.12.1 After installing the plugin, go to [test data](http://localhost/JBrowse/?data=plugins/MAFViewer/test/hg38), while the augustusGene track is shown, MAF track is not shown. ![mafviewer](https://cloud.githubusercontent.com/assets/3411162/25949377/a23cc562-3657-11e7-95bf-6b56a1823502.png) Answers: username_1: You'll need to use 1.12.3, because 1.12.1 didn't have bedtabix support The MAFViewer relies on a derived bed tabix format If you are unable to upgrade let me know and I can recommend workarounds I'll update the readme to reflect (currently it mentions using jbrowse master but now 1.12.3 release is fine) Status: Issue closed
bitshares/bitshares-ui
302953930
Title: Desktop client (linux and mac) is not loading the content. Question: username_0: **Version:** 2.0.180306 (only on desktop client) **OS:** Xubuntu 16.04 / macOS Sierra **After the update, the Linux version does not load the content until you reload the application inside the Electron window.** ![linux desktop client 2 0 180306 not loading content](https://user-images.githubusercontent.com/9022734/37071983-2ab5da14-21b7-11e8-99ea-cdb4679a1821.png) **After the reload and clicking on Dashboard** (the only thing that works is the Send modal) ![linux desktop client 2 0 180306 - after reload and clicking on dashboard](https://user-images.githubusercontent.com/9022734/37072250-5ccfd5e4-21b8-11e8-92bf-f47295108688.png) --- It has been reported on Telegram that after the update, **the macOS Sierra version hangs when clicking on the Exchange button**, maybe the same problem. Another user said that it works on macOS High Sierra. _macOS Sierra user:_ - The exchange button doesn't work - Using Apple Mac, downloaded the new update - It is a desktop client... - No, nothing works, no buttons open - Constantly freezes _macOS High Sierra user:_ - I am using Mac High Sierra, having no issues Answers: username_0: @username_2, it's possible that this bug might be high priority because it seems to happen to multiple users cross-platform. I can provide more debug information (on the Linux version). Let me know what you need. username_1: We've had this bug before and fixed it, not sure why it's back. It's related to the initial opening of the database that stores the key, aka the wallet. It used to be caused by a redirect happening to the "#" on app initialisation, could be the same thing now. username_0: Added new information about the web client with Chrome and Firefox and a console error on Opera. username_2: @username_0 try opening with Chrome incognito to see if the problem is related to cached elements in your browser. I'm not having any issues on the Mac OS web wallet or desktop wallet, High Sierra. username_3: Same here, Debian user. Switched back to 180302 and now it works again username_0: @username_2, I cleaned all the offline data of the desktop client and I still get the same errors. username_4: Have this on High Sierra, Bitshares.180306 Buttons do not work Status: Issue closed username_1: I'm unable to fix the root cause but I've put in place some workarounds that have fixed the issue as far as I can tell. username_0: @username_1, Navigation seems slower than previous versions (100Mbps web connection) but **everything is working**. Thank you username_2: Strange. It's faster than ever for me. username_0: @username_2 It may be related to my node connection (I'm connected from Portugal to Germany, my closest node). I will inform you if this situation persists with less latency. username_2: Please do. We've solved so many extraneous calls. I still do have node disconnects but we have an active issue working towards holding onto the socket better. username_4: How to solve on Mac?
username_2: **Version:** 2.0.180306 (desktop and web client) **OS:** - Xubuntu 16.04 (desktop and web client using Chromium 64.0.3282.167) - Debian (desktop client) - macOS Sierra (desktop client) - Windows 10 (web client running on Opera browser 51.0.2830.40) - OS unknown (web client running on Chrome and Firefox) --- ## Linux **After the update, the Linux version does not load the content until you reload the application inside the Electron window.** It's happening too on the **web client using Chromium** ![linux desktop client 2 0 180306 not loading content](https://user-images.githubusercontent.com/9022734/37071983-2ab5da14-21b7-11e8-99ea-cdb4679a1821.png) **After the reload and clicking on Dashboard** (the only option that works is the Send modal) ![linux desktop client 2 0 180306 - after reload and clicking on dashboard](https://user-images.githubusercontent.com/9022734/37072250-5ccfd5e4-21b8-11e8-92bf-f47295108688.png) **The issue continues after cleaning all offline data** --- ## Mac It has been reported on Telegram that after the update, **the macOS Sierra version hangs when clicking on the Exchange button**, maybe the same problem. Another user said that it works on macOS High Sierra. _macOS Sierra user:_ - The exchange button doesn't work - Using Apple Mac, downloaded the new update - It is a desktop client... - It is the Mac Sierra - No, nothing works, no buttons open - Constantly freezes _macOS High Sierra user:_ - I am using Mac High Sierra, having no issues --- ## Windows It has been reported on Discord that **the same issue is happening on Windows 10 using wallet.bitshares.org with the Opera** browser 51.0.2830.40 _Windows 10 user_ - It seems to not be working properly. ![2018-03-07](https://user-images.githubusercontent.com/9022734/37108992-aac25b82-2230-11e8-8fb8-e50132574531.png) ![2018-03-07](https://user-images.githubusercontent.com/9022734/37122110-201573ee-2257-11e8-8a05-b3c72f64791d.png) --- ## OS unknown It has been reported on Telegram that **the same issue is happening on Chrome and Firefox** browsers. I didn't get an answer about the OS. _Chrome and Firefox user_ - ...Can't get to my assets - Chrome and firefox.. --- ##### **Edits**: ##### 2018 Mar 07: - Added new information about the web client with Opera running on Windows 10 - Added new information about the web client with Chrome and Firefox ##### 2018 Mar 08 - Added result after cleaning offline data on Linux - Added new information about the web client with Chromium on Xubuntu 16.04 - Added desktop client on Debian Status: Issue closed
hemanth/awesome-pwa
367578340
Title: Great job of collecting PWA resources!!! KUDOS!!! Question: username_0: My main question for the time being would be... Out of your experience... Which tool can you recommend to build PWAs from web URLs online that can work on both iOS and Android devices out of the box? Thanks! Answers: username_1: Hello there, I was just passing by to ask to add my app and saw this question. I created a framework to build PWAs with the Go programming language (Golang) named [go-app](https://github.com/hemanth/awesome-pwa). It is pretty uncommon to code frontends with Go, but this one makes it possible. I just released a PWA at the start of the week: [Lofimusic.app](https://lofimusic.app), an app that lists and plays Lo-Fi radio. Handling a music player with web tech on mobile is a little bit challenging, but this one does a good enough job. username_0: Do I need to learn Golang to use it though? username_1: Yes, it is a Go framework Status: Issue closed
apache/airflow
893687836
Title: Chart: Extra mounts with DAG persistence and gitsync Question: username_0: **What happened**: When you have `dag.persistence` enabled and a `dag.gitSync.sshKeySecret` set, the gitSync container isn't added to the pod_template_file for k8s workers, as expected. However, `volumes` for it still are and maybe worse, the ssh key is mounted into the Airflow worker. **What you expected to happen**: When using `dag.persistence` and a `dag.gitSync.sshKeySecret`, nothing gitsync related is added to the k8s workers. **How to reproduce it**: Deploy the helm chart with `dag.persistence` enabled and a `dag.gitSync.sshKeySecret`. e.g: ``` dags: persistence: enabled: true gitSync: enabled: true repo: {some_repo} sshKeySecret: my-gitsync-secret extraSecrets: 'my-gitsync-secret': data: | gitSshKey: {base_64_private_key} ``` **Anything else we need to know**: After a quick look at CeleryExecutor workers, I don't think they are impacted, but worth double checking.<issue_closed> Status: Issue closed
Pokecube-Development/Pokecube-Issues-and-Wiki
805106624
Title: Oricorio does not give EXP to Pokemobs when defeated Question: username_0: #### Issue Description: Defeating a wild Oricorio does not award your Pokemon with any EXP. *I found this issue while testing my modpack, but made a new profile with only Pokecube installed and replicated the bug. #### What happens: Once you defeat an Oricorio (any form), EXP is not granted to the Pokemon that killed it. #### What you expected to happen: Oricorio should award EXP like any other Pokemon. #### Steps to reproduce: 1.Check EXP of tamed Pokemob 2.Defeat Oricorio with that tamed Pokemob 3.Check EXP of tamed Pokemob again; it does not go up ____ #### Affected Versions (Do *not* use "latest"): Replace with a list of all mods you have in. - Pokecube AIO: 1.16.4-3.8.0 - Minecraft: 1.16.1 - Forge: 36.0.15 Answers: username_1: ahh, right, I think the auto-updater didn't apply the exp yield to them properly, I will look into this Status: Issue closed
ucdavis/ipa-client-angular
179633089
Title: Possible speed-up: auto-expire tokens Question: username_0: Sometimes when I log into IPA, it takes quite a while, even if I'm already on CAS. I see the main spinner and the page sits for 1-2 seconds, reloads, sits with the spinner again, waits 1-2 seconds, reloads, and I'm in. I haven't investigated, but I'm guessing my localStorage token is expired and I'm wasting a round trip finding that out. I say "wasting" because, if we already know the server has some set timeout like 30 minutes, can we timestamp our token in localStorage (add a new localStorage member that indicates when we received our token)? If that was more than 30 minutes ago, we'll know to expire our token and save one of those round trips. This might make logging in 2-3 seconds faster overall if my hunch is right.<issue_closed> Status: Issue closed
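A rough sketch of the proposed check, assuming a hypothetical 30-minute server timeout; the localStorage key names are invented here for illustration:

```js
const TOKEN_TTL_MS = 30 * 60 * 1000; // assumed server-side session timeout

function saveToken(token) {
  localStorage.setItem('ipa:token', token); // hypothetical key names
  localStorage.setItem('ipa:tokenReceivedAt', String(Date.now()));
}

function getFreshToken() {
  const token = localStorage.getItem('ipa:token');
  const receivedAt = Number(localStorage.getItem('ipa:tokenReceivedAt'));
  if (!token || !receivedAt || Date.now() - receivedAt > TOKEN_TTL_MS) {
    // Token is missing or presumed expired: skip the doomed round trip
    // and go straight to re-authentication.
    return null;
  }
  return token;
}
```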
2020PB/police-brutality
657505904
Title: Incident in New York City, NY Question: username_0: --- ## Location New York City, NY ## Date May 29th ## Description Police repeatedly push and shove protestor who is arguing with them ## Links https://twitter.com/JohnPhilpNY/status/1266595992427790338<issue_closed> Status: Issue closed
facebook/react-native
869749970
Title: ImageBackground display picture abnormal Question: username_0: </ImageBackground> <img width="499" alt="Screen Shot 2021-04-28 at 5 37 00 PM" src="https://user-images.githubusercontent.com/16317769/116382322-5d381400-a848-11eb-8bfe-efe262091a0e.png"> ## Expected Results when the uri of source is invalid,imageBackground show abnormal,the defaultSource is as follows: ![src_assets_frequencybgtwo](https://user-images.githubusercontent.com/16317769/116382754-d59ed500-a848-11eb-84a5-6166ef5508d9.jpg) why is it so?Can someone tell me,thanks! ## Snack, code example, screenshot, or link to a repository: Please provide a Snack (https://snack.expo.io/), a link to a repository on GitHub, or provide a minimal code example that reproduces the problem. You may provide a screenshot of the application if you think it is relevant to your bug report. Here are some tips for providing a minimal example: https://stackoverflow.com/help/mcve<issue_closed> Status: Issue closed
ansible/galaxy
954167136
Title: I am not sure if this feature exists or not. Question: username_0: ## Feature Request ### Use Case <!--- What problem does this feature solve? Please describe. A clear and concise description of what the problem is. Ex. I have an issue when [...] --> ### Proposed Solution <!--- Describe the solution you'd like A clear and concise description of what you want to happen. Add any considered drawbacks. --> ### Alternatives <!--- Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. --> ### Implementation <!--- Teachability, Documentation, Adoption, Migration Strategy If you can, explain the user story, and possibly provide a version of the docs. Maybe a screenshot or mockup of the design? --> Status: Issue closed Answers: username_0: Accident
ampproject/amphtml
136437494
Title: GA account ID Question: username_0: With regard to multiple account IDs, is it possible to do: `'vars': { 'account': [gaPropertyID, gaPropertyID2] },` instead of just: `'vars': { 'account': gaPropertyID },` Answers: username_1: Currently this is not possible. You'll have to create two triggers to support this (supply the account id at trigger level). The main reason behind lack of this support is that the `account` variable is no different from any other variable in amp-analytics and making an array of account ids mean something different from any other variable is hacky.
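A hedged illustration of the suggested workaround: amp-analytics allows a trigger to override vars, so each trigger can carry its own account id. The trigger names, request URL, and account values below are placeholders:

```json
{
  "requests": { "pageview": "https://example.com/collect?acct=${account}" },
  "triggers": {
    "pageviewAccountOne": {
      "on": "visible",
      "request": "pageview",
      "vars": { "account": "UA-XXXXX-1" }
    },
    "pageviewAccountTwo": {
      "on": "visible",
      "request": "pageview",
      "vars": { "account": "UA-XXXXX-2" }
    }
  }
}
```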
aleksandr-m/gitflow-maven-plugin
245042028
Title: Update versions on the develop branch when starting a release Question: username_0: When a new release is started, shouldn't all new development be done on the next version following the release? At the moment the new development version is set only after finishing the release. What do you think of adding an option to set the versions when starting a release? If enabled the release-start goal would first set and commit the release version, then create the release branch and finally set and commit the next development version on the develop branch. For example when the develop branch has version 0.1-SNAPSHOT the process of starting the release would be something like this: 1. On the develop branch change version to 0.1 2. Commit the change 3. Create a release branch from the previous commit 4. On the develop branch change version to 0.2-SNAPSHOT 5. Commit the change 6. Checkout the release branch This kind of workflow would make it clear that a version is about to be released and the develop branch already focuses on the next version while avoiding merge conflicts related to changing version numbers in pom.xml files. Answers: username_0: Sent a PR #61 username_0: The PR adds a new configuration option `commitDevelopmentVersionAtStart` which defaults to `false`. When set to `true` release-start and release-finish follow the steps outlined in the first comment. username_1: Merged. Thanks. Status: Issue closed username_0: Do you have ideas when the next version might be released? Would be great to see this feature in a release :-) username_1: @username_0 Next week, probably. username_1: @username_0 `1.7.0` is out. username_0: Awesome, thanks!
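If I read the released option correctly, enabling it should look roughly like this in the pom; the plugin coordinates are assumed from the project's Maven artifact, and 1.7.0 is the version mentioned in the thread:

```xml
<plugin>
  <groupId>com.amashchenko.maven.plugin</groupId>
  <artifactId>gitflow-maven-plugin</artifactId>
  <version>1.7.0</version>
  <configuration>
    <!-- Commit the next -SNAPSHOT to develop when the release starts,
         instead of waiting for release-finish. Defaults to false. -->
    <commitDevelopmentVersionAtStart>true</commitDevelopmentVersionAtStart>
  </configuration>
</plugin>
```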
fedora-infra/noggin
583565191
Title: As a User, I want to have to reauthenticate when enrolling an OTP token Question: username_0: As a User, I want to have to reauthenticate when enrolling a new OTP token, to ensure that only the user can add new tokens. If a token is already added, the authentication method would be password + token
wmenjoy/awesome-knowleges
617864382
Title: elasticsearch Question: username_0: # elasticsearch 可视化管理工具 ## 参考 1. [一文上手 Elasticsearch常用可视化管理工具](https://www.jianshu.com/p/54e04b5b5ce2) Answers: username_0: # Elasticsearch的灾备方案 ## 灾备的目的 ES 的灾备的目的是在双机房单活的架构下,主机房故障时,切换到备份机房,备份机房的 ES 存储的数据可以提供服务。 ## ES灾备方案 ### 前提 本灾备方案基于搜索场景, 实时性要求不高, 可以容忍分钟级别的延迟 ## 常见的方案 1. 双写。 在数据写入ES时,通过MQ或者其他方式实现数据双写或者多写,目前很多MQ都有数据持久化功能,可以保障数据不丢;再结合ES各种状态码来处理数据重复问题,即可实现多中心数据的最终一致。 1. 从原始数据处理。 比如 利用mysql的binlog的主从同步功能。在不同的数据中心,都从本地机房的mysql同步数据到ES,依赖于ES的数据一致性来保证数据的一致性。 1. 基于translog同步。 读取translog,同步并且重放,类似于mysql的binlog方式,但是需要修改es的底层源码,维护成本比较高 1. esm定期快照。 定期快照,实时性明显受影响 1. ## 参考 1. [Elasticsearch 跨集群同步: XDCR](https://developer.aliyun.com/article/599185) username_0: # 单节点 集群状态为yellow 的情况 ``` curl -X PUT "10.88.0.92:9200/_settings" -H 'Content-Type: application/json' -d' {"number_of_replicas":0}' ``` ## 检查 ``` curl -X GET "10.88.0.92:9200/_cluster/health?pretty" ``` username_0: # 性能优化 1. [百亿级实时计算系统性能优化–—Elasticsearch篇 - SegmentFault 思否](https://segmentfault.com/a/1190000038342833)
quasarframework/quasar
917589670
Title: QRouteTab [router-link] - event prop is deprecated and has been removed in Vue Router 4 Question: username_0: **Describe the bug** A clear and concise description of what the bug is. **Codepen/jsFiddle/Codesandbox (required)** https://codesandbox.io/s/practical-curran-emwke?file=/src/layouts/MainLayout.vue **To Reproduce** Steps to reproduce the behavior: 1. Create a new Quasar App 2. Add a QTabs with QRouteTab 3. The VueRouter warning appears in the console **Screenshots** <img width="676" alt="Screen Shot 2021-06-11 at 00 13 32" src="https://user-images.githubusercontent.com/7381858/121568399-ddf24e80-ca49-11eb-905b-53fd8ed1c2ad.png"> **Platform (please complete the following information):** Quasar Version: "^1.15.20", @quasar/app Version: "^2.2.10", Quasar mode: - [x] SPA Tested on: - [x] SPA Answers: username_1: Hello @username_0, You are right, the `event` prop is deprecated in `Vue Router 4`, as well as `tag` and `addRoutes()`, as stated in the official [changelog](https://github.com/vuejs/vue-router/blob/dev/CHANGELOG.md#351-2021-01-26) and in the [documentation](https://next.router.vuejs.org/guide/migration/#removal-of-event-and-tag-props-in-router-link) for `router-next`. But the thing is that Quasar `v1` currently uses `"vue-router": "3.5.1"`, so those props and methods aren't deprecated in it. Quasar `v2` will utilise `"vue-router": "4.0.8"` from the start, so you don't have to worry about that, and if the router version gets updated to 4+ in this version of Quasar, be sure it will be implemented in the proper way. username_1: @username_0 Could we close this issue if the above answers your questions? Status: Issue closed username_0: @username_1 Yes, I know that. One more thing: I think we should have a new label `Warning`, meaning no bug. I spent a few minutes to find that out :)
KnightMiner/Inspirations
605300664
Title: java.lang.NullPointerException: Initializing game [...] at knightminer.inspirations.building.BuildingClientProxy.registerItemColors(BuildingClientProxy.java:132) ~[?:1.14.4-1.0.2] {re:classloading} Question: username_0: <!--Do not type between these brackets: <>, that text will not be shown in the final issue--> **Describe the bug:** <!--Describe the expected and actual behavior. Give steps to reproduce if relevant--> Crashing upon launch, apparently during registration of item colors. **Versions:** * Minecraft: 1.14.4 * Forge: 28.2.4 * Mantle: 1.4.32 * Inspirations: 1.0.2 **Other mods required:** <!--List any other mods required to reproduce this issue. This list should only contain mods confirmed to cause this issue, not just the mods in your pack --> Unknown if related to mod compatibility - but mod pack is quite large (see crash report for mod list) **Attachments** <!--If applicable, add screenshots or crash reports related to your issue--> [crash-2020-04-23_03.10.33-client.txt](https://github.com/username_1/Inspirations/files/4520822/crash-2020-04-23_03.10.33-client.txt) Answers: username_1: Post your full game log. This crash is caused by another mod breaking something during initialization. Forge instead of showing the useful crash decides to continue trying to register stuff when it never registered blocks. username_1: Closing for lack of response. Duplicate of #152 for the record. Status: Issue closed
opencontainers/runc
156515053
Title: sysconfig_notcgo does not implement GetLongBit Question: username_0: When compiling with `CGO_ENABLED=0` this results in a build error. This occurs for us because this package is a dependency of a dependency, and we're building our package using `CGO_ENABLED=0`. The error is that `system.GetLongBit` is not defined. This is where `GetLongBit` is defined for most builds: https://github.com/opencontainers/runc/blob/master/libcontainer/system/sysconfig.go#L29 This file is used when CGO is not enabled: https://github.com/opencontainers/runc/blob/master/libcontainer/system/sysconfig_notcgo.go `GetLongBit` is only used in `libcontainer/cgroups/fs`, so you can reproduce it as follows: ``` $ cd $GOPATH/src/github.com/opencontainers/runc/libcontainer/cgroups/fs $ CGO_ENABLED=0 go build # github.com/opencontainers/runc/libcontainer/cgroups/fs ./memory.go:71: undefined: system.GetLongBit ``` Answers: username_1: Looks like we need to split `sysconfig.go` into `sysconfig.go` (with the truly agnostic stuff) and `sysconfig_cgo.go` for the cgo-specific things. username_2: We can add some better build tags around this. I think we already have some things that check for cgo or !cgo username_3: Just assume it's 64-bit for now, I guess? Do you see a better way to handle this part @username_1? I'm open to all suggestions :) username_1: `CHAR_BIT * sizeof(long int)` is okay, but there are issues with running a 32-bit program on a 64-bit host. So we'll probably have to do something more clever than that. username_4: @username_1 Assuming that the kmem setting logic [here](https://github.com/opencontainers/runc/blob/88bb59e35f71bffa94b39722c0faf61e411c83bf/libcontainer/cgroups/fs/memory.go#L66) is the only user of `GetLongBit()`, can we have the kmem logic compare current memcg kmem limits with that of the root `/` container? Is runc expected to work within a container where not all cgroups are visible? Status: Issue closed
username_4: I don't think the APIs are expected to handle cgroups that are created out of band. Setting `-1` will only enable accounting and will not affect containers in any way. So +1 for setting `-1` as `memory.kmem.limit_in_bytes` by default. if kmem is enabled. username_7: Re, above, In apply() just setting -1 won't limit the cgroup, so we will have to limit the kernel memory by writing some arbitrary limiting value(say 1) and then write -1 to the kmem.limit file. Both the writes would be done one after the other in Apply() username_5: The double write seems a doable solution, but that would loose the accurate error message on kernel memory setting failure. @username_4 Why didn't you compare current memcg kmem limits with that of the root `/` container as you suggested in https://github.com/opencontainers/runc/issues/841#issuecomment-230662085 ? I think that's a better idea. username_4: @username_5 Comparing limits of parent requires access to parent limits and that is a lot more tedious. Essentially what we are trying to accomplish is that of prompting users whenever they fail to set kmem limits prior to moving processed into cgroups. I suggested the applying dummy limits approach to @username_7 to make the API more deterministic. Can you expand a bit more on what error message would be lost by following this method? What this approach does is that of enabling kmem accounting as part of cgroups creation. It is upto the user to apply kmem limits. username_7: From what I understand @username_5 's concern is that we wont be able to prompt the user as to why exactly the kmem set() failed as he would just get a resource busy error. But as I and @username_4 have already pointed out this would be a part of the cgroup creation and it would only error out if a user tries applying kmem limits to a cgroup which was not created by libcontainer. username_3: @username_7 but that's a scenario that runc support, and it may make sense in some environment. Can we just have an error message on `EBUSY` akin to `can not override limit for cgroup %s, currently sets sets to %d: device or resource busy` on your PR? That way the user would know they either need to not join that cgroup or remove the limit from their conf and accept the one currently enforced. username_7: @username_3 I totally agree with you. I looked into how golang handles syscalls and I found out that error numbers are not floated back to the calls instead it returns the error string. So we can't find the EBUSY error from the returned error. The less elegant solution would be to compare the "Resource Busy" error string but again thats difficult as the error string is not uniformly defined across different architectures. username_3: @username_7 `WriteFile` returns an `os.PathError` in that case normal, which contains a `Err` field with the error number. If the type assertion on the returned error succeed on that type, that could do the trick. WDYT? username_7: I am not completely sure if understood you correctly, but the problem is that the type assertion check would pass everytime a os.PathError is returned which can happen due to multiple reasons and not necessarily EBUSY. Please correct me if I am wrong but we would need some way of accessing the EBUSY error number which as far as I know is not possible this way. Or maybe I din't get the trick . 
username_3: @username_7 It's not very pretty, but something like:
```
if pe, ok := err.(*os.PathError); ok {
    if ern, ok := pe.Err.(syscall.Errno); ok && ern == syscall.EBUSY {
        fmt.Println("Special error message goes here")
    }
}
```
should work.
username_7: @username_3 Oh, I should have been a bit more specific in the last comment. The problem with what you are suggesting is that the error returned by ioutil.WriteFile() has the static type `error`. When the resource-busy error is encountered we return a `*PathError`, but since it comes back as the `error` interface type, you cannot access the `Err` field of the underlying PathError struct directly.
username_3: @username_7 This works for me though:
```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("toto", os.O_CREATE|os.O_TRUNC|os.O_RDONLY, 0444)
	if err != nil {
		fmt.Println(err)
		return
	}
	f.Close()

	e := ioutil.WriteFile("toto", []byte("ha!"), 0444)
	if val, ok := e.(*os.PathError); ok {
		if v, ok := val.Err.(syscall.Errno); ok {
			if v == syscall.EACCES {
				fmt.Println("SUCCESS")
			}
		}
		return
	}
	fmt.Printf("FAILURE")
}
```
`WriteFile` will just return the error from `File.Write` when it occurs, which implements the `error` interface, in which case the assertion will succeed.
username_1: @username_7 You can use a type assertion with Go to essentially "cast" the interface to the type `PathError`. See @username_3's examples.
username_7: @username_3 @username_1 Thanks for pointing that out. I wasn't aware that Go supports "type casting" in this manner. I will update my patch with the suggested check. Again, thanks a lot @username_3 :)
username_1: AFAICS this is fixed by #962 and #968. Closing.
Status: Issue closed
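For illustration, here is a minimal Go sketch putting the two ideas from this thread together: the "dummy write then -1" trick to enable kernel memory accounting at cgroup creation, plus the EBUSY-aware error message. The helper name and direct file path are assumptions for the sketch, not runc's actual implementation.
```go
package cgroups

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"syscall"
)

// enableKmemAccounting is a hypothetical helper: writing any finite value to
// memory.kmem.limit_in_bytes turns kernel memory accounting on, and writing
// -1 immediately afterwards lifts the limit while accounting stays enabled.
func enableKmemAccounting(cgroupPath string) error {
	limitFile := filepath.Join(cgroupPath, "memory.kmem.limit_in_bytes")
	for _, value := range []string{"1", "-1"} {
		if err := ioutil.WriteFile(limitFile, []byte(value), 0700); err != nil {
			// Surface EBUSY with a friendlier message, per the discussion above.
			if pe, ok := err.(*os.PathError); ok {
				if errno, ok := pe.Err.(syscall.Errno); ok && errno == syscall.EBUSY {
					return fmt.Errorf("cannot override kmem limit in %s: tasks already attached", limitFile)
				}
			}
			return err
		}
	}
	return nil
}
```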
fluentmigrator/fluentmigrator
410010498
Title: Painful to use edge cases with Execute.WithConnection Question: username_0: Consider the following situation: 1. Database default collation was incorrectly set-up as `Latin1_General_CI_AI` - it should be `SQL_Latin1_General_CP1_CI_AS` 2. All columns in the database are therefore currently collated as `Latin1_General_CI_AI` 3. You have a table-valued function that returns a table that contains an nvarchar(50) with no collation specified (as is typical... I don't think I've ever seen anyone specify collation on a programmability object like functions and stored procedures). - This indirectly causes schema binding to occur. 4. To change the collation of columns, you have to: 1. Drop any objects that depend on schema binding for character data 2. Alter collation at the database-level 3. Re-create any objects that depended on schema binding for character data 4. For each column in `INFORMATION_SCHEMA.COLUMNS` that is of data type NVARCHAR, VARCHAR, NCHAR or CHAR: 1. Sort ascending whether the column belongs to an `INFORMATION_SCHEMA.TABLES` with `TABLE_TYPE` = 'BASE TABLE' or 'VIEW' (to ensure base tables are fixed before dependent objects like views are fixed). 2. If the column belongs to an `INFORMATION_SCHEMA.TABLES` with `TABLE_TYPE` = 'BASE TABLE', check whether the column belongs to any constraints, indexes, primary keys, or foreign keys. If it does, generate the drop statement for those dependent objects. 3. If the column belongs to an `INFORMATION_SCHEMA.TABLES` with `TABLE_TYPE` = 'BASE TABLE', then generate an ALTER TABLE ALTER COLUMN expression to change the collation. 4. Re-apply any constraints, indexes, primary keys, or foreign keys you dropped in step 2. 5. If the column belongs to an `INFORMATION_SCHEMA.TABLES` with `TABLE_TYPE` = 'VIEW', execute `sp_refreshview` for that view. In FM, the solution, today, is pretty much to use Execute.WithConnection, due to limitations in T-SQL's Transactional DDL. You might be able to write all of this in one giant dynamic sp_executesql statement, but I doubt it. Even if it's possible, you would need a lot of nested sp_executesql statements to delineate batches of CREATE code. The code gets even nastier to write and harder to read if the constraints use auto-generated names. When using Execute.WithConnection, it will work, but you have to properly delineate the statements into batches. I can, of course, propose a design, but I wonder if I'd be over-optimizing for SQL Server users. @username_1 Do you have any statistics on how many people use SQL Server vs. DB2 vs. Oracle vs. Postgres vs. Firebird vs. SQLite, etc? Do you use SQL Server at work or a different SQL DB technology? Answers: username_1: No, I don't have such statistics. What would your proposed solution be? username_0: I haven't fully fleshed it out, but I think some form of Promises as a way to model future computations to be run would be quite effective. In a sense, most of what FluentMigrator does can be described using Monad Comprehensions as well - and Execute.With is just a big hammer way to put a bunch of global state into a migration script. With Monad Comprehensions, you just have to "thread the state" through the migration. So, the downside is you have to know your initial state ahead of time. 
This links back to the discussion we had about a generic IState abstraction in #955

For example, one common thing I need to do in Execute.WithConnection migrations is something like the following template:
```csharp
[Migration(2019021104, TransactionBehavior.None, "happy valentine's day")]
public class _2019021401_happy_valentines_day : ForwardOnlyMigration
{
    public override void Up()
    {
        var databaseName = new SqlConnectionStringBuilder(ConnectionString).InitialCatalog;
        var cmdText1 = $@"SELECT '{databaseName}' AS BatchOne";
        var cmdText2 = $@"SELECT '{databaseName}' AS BatchTwo";

        Execute.WithConnection((conn, tran) =>
        {
            using (var conn2 = new SqlConnection(conn.ConnectionString))
            {
                using (var cmd = conn2.CreateCommand())
                {
                    Console.WriteLine($"{cmdText1}");
                    cmd.CommandText = cmdText1;
                    conn2.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            Console.WriteLine("Starting Batch 2...");

            using (var conn2 = new SqlConnection(conn.ConnectionString))
            {
                using (var cmd = conn2.CreateCommand())
                {
                    Console.WriteLine($"{cmdText2}");
                    cmd.CommandText = cmdText2;
                    conn2.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        });
    }
}
```
There are a couple of things here that strike me as odd:
1. Because I have TransactionBehavior.None, I have to make sure I create new connections via the conn lambda parameter, rather than the tran lambda parameter. tran will be null.
2. Because I have exited the graces of FM syntax, I have to know the names of any auto-generated objects, which, if I am rebuilding a development environment from scratch on every build, is a constantly moving target. If I could somehow thread the FM syntax into Execute.WithConnection or some similar "let me do some fancy SQL stuff" abstraction, that would save me from writing a bunch of home-brewed extension methods on an ad-hoc basis.
3. The tool can't help me in knowing that cmdText1 and cmdText2 need to be executed as separate batches, and so depending on what those commands are, I can get zombied SQL transaction errors and have to dip into SQL Profiler and turn on "User Error Messages" to figure out what is causing the zombied transaction. This makes both writing and debugging the code hard.
   - This is a subtle point that most engineers I explain this to do not grasp.
username_1: @username_0 Can you please create a new issue and - maybe - create a small prototype that shows what you'd like it to look like - covering API and use cases?
username_0: OK, I'll try, but it's probably at least 40 hours of work on my end. Note, I am not trying to push this on you, just sharing my feedback from using your great tool and what I feel the present limitations are.
username_1: Don't invest too much time. I just need to get a feel for what you meant/described.
username_0: Here is a contemporaneous note, which I will use to flesh out my idea further.

In functional programming, it's desirable to be able to describe side effects, like logging, but to do so in a way that doesn't break referential transparency and reasoning about program state. To do so, we "thread" global state from the start of the computation all the way through the end of the computation. Some people call this "Railway Oriented Programming".

Here is a subtle hint at the power of restructuring things as rails (monad comprehensions). It's hard to reason about the effects of Execute.WithConnection, because:

1. Typically, Execute.Sql and other migrations automatically log the SQL as it's executed.
FM automatically adds, as a side-effect "threaded through the computation", log messages which are largely annotated ahead of time with [+] and other useful information. ``` ------------------------------------------------------------------------------- 2018120302: DisableForeignKeys migrating ------------------------------------------------------------------------------- [+] Beginning Transaction [+] ExecuteSqlStatement ALTER TABLE dbo.OM NOCHECK CONSTRAINT FK_OM_OFId_F_FId ALTER TABLE dbo.OtherManager NOCHECK CONSTRAINT FK_OM_OFId_F_FId => 0.024154s [+] ExecuteSqlStatement ALTER TABLE dbo.FD NOCHECK CONSTRAINT FK_FD_SId_S_SId ALTER TABLE dbo.FD NOCHECK CONSTRAINT FK_FD_SId_S_SId => 0.0281397s INSERT INTO [Monitor].[VersionInfo] ([Version], [AppliedOn], [Description]) VALUES (2018120302, '2019-03-01T20:51:15', 'Disable FKs due to third party providing bad data.') [+] Committing Transaction [+] 2018120302: DisableForeignKeys migrated => 0.0775537s ``` To do the same thing with Execute.WithConnection, you have to manually write your own Console.WriteLine. And now, if FM changes how it writes output, similar to xunit 2.0 changing how it handles output, Console.WriteLine will be a breaking change at some point. 2. You cannot weave Execute.Sql and Execute.WithConnection in the same migration in a top-down, readable style - the order of SQL actually executed is different from the order in which they may appear, since one is a Lambda and the other is internally a Builder pattern that builds a set of commands to execute once the Runner is ready to execute it. 3. You cannot weave TransactionBehavior through a computation. It has to be declared ahead of time. The only workaround for this is Execute.WithConnection and nasty hacks like I post above. In case you did not see the hack above, note I am creating a new connection inside Execute.WithConnection.
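As a rough illustration of point 3 above, batch handling could be hand-rolled on top of Execute.WithConnection with a small extension method like the sketch below. The `GO`-splitting regex is simplistic (it ignores `GO` inside strings or comments) and the method is made up for this sketch; it is not part of FluentMigrator.
```csharp
using System;
using System.Data;
using System.Text.RegularExpressions;

public static class BatchExecutionExtensions
{
    // Hypothetical helper: split a script on client-side "GO" separators and
    // run each batch on its own command, logging it the way FM logs Execute.Sql.
    // Assumes conn is already open, as it is inside Execute.WithConnection.
    public static void ExecuteBatches(this IDbConnection conn, IDbTransaction tran, string script)
    {
        var batches = Regex.Split(script, @"^\s*GO\s*$",
            RegexOptions.Multiline | RegexOptions.IgnoreCase);
        foreach (var batch in batches)
        {
            if (string.IsNullOrWhiteSpace(batch)) continue;
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tran; // null under TransactionBehavior.None
                cmd.CommandText = batch;
                Console.WriteLine("[+] ExecuteSqlStatement " + batch.Trim());
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```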
cocos2d/cocos2d-x
529249489
Title: tolua/genbindings.py NDK-r20
Question: username_0:
- cocos2d-x version: 3.17.2
- devices test on: Windows PC
- developing environments
- NDK version: r20
- Xcode version:
- VS version: VS2017
- browser type and version:

`python genbindings.py`. It doesn't work. Errors in parsing headers:

```
1. <severity = Fatal, location = <SourceLocation file 'C:\\MyProject\\BuildRequire\\android-ndk-r20/sysroot/usr/include\\android/log.h', line 57, column 10>, details = "'stdarg.h' file not found">
```
nanodbc/nanodbc
289325899
Title: Problems with Unicode & iODBC Question: username_0: _From @lexicalunit on January 31, 2016 22:18_ Following #96 there are issues with iODBC when Unicode build is enabled on *nix platforms. We can easily detect this using `sizeof(SQLWCHAR) == 4`. To resolve we should do the following ONLY when `sizeof(SQLWCHAR) == 4` and Windows is not the build platform: 1. Change `u16string` to `u32string`. 2. Change `char16_t` to `char32_t`. 3. Change `std::codecvt_utf8_utf16` to `std::codecvt_utf8`. 4. Change `NANODBC_TEXT` from `u ## s` to `U ## s`. After doing this, **all tests** pass except **one**: ``` ------------------------------------------------------------------------------- string_test ------------------------------------------------------------------------------- /opt/nanodbc/test/sqlite_test.cpp:120 ............................................................................... /opt/nanodbc/test/base_test_fixture.h:875: FAILED: REQUIRE( results.get<nanodbc::string_type>(0) == U"Fred" ) with expansion: {?} == 0x5031f8 ``` For some reason the `string_test` is failing. I am testing using a docker image built using the provided Dockerfile. After building and then spinning up the docker container, I then uninstalled unixODBC and installed iODBC. The only ODBC driver that works with iODBC for that version of Ubuntu is SQLite's ODBC driver. Trying to install the MySQL or PostgreSQL ODBC driver will automatically uninstall iODBC, and suggest installing unixODBC instead. Testing code has been pushed up in branch `testing_iodbc_u32string` (commit as of this writing is: eae12b647f5a76aacd700d8f3b1b0dce789db050). Ideally I think the best solution would factor out all of this heinous Unicode preprocessor branching and string conversion (widen and narrow) into it's own utility header file, probably very similar to the utility header file written for the example code snippets. That way we have all that code in one place, and don't have to repeat bits of it in multiple files. Here's the run script I am using to test with: ``` shell #!/bin/bash -ue mv catch ../. || true find . -maxdepth 1 -not -name $(basename $0) -and -not -name "." -print0 | xargs -0 rm -rf mv ../catch . || true UNICODE=ON cmake \ -D NANODBC_ENABLE_LIBCXX=OFF \ -D NANODBC_USE_BOOST_CONVERT=OFF \ -D NANODBC_USE_UNICODE=${UNICODE} \ -D NANODBC_HANDLE_NODATA_BUG=${UNICODE} \ .. make sqlite_check ``` _Copied from original issue: lexicalunit/nanodbc#111_ Answers: username_0: _From @lexicalunit on January 31, 2016 22:21_ Since all the ODBC drivers I care about (aside from SQLite) seem to not even be compatible with iODBC, I'm not super motivated to resolve this issue. For now, I would suggest users of nanodbc avoid iODBC all together and use unixODBC. Status: Issue closed
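To make the proposed branching concrete, a sketch of the width-dependent typedefs is below. It assumes `SQLWCHAR` comes from the driver manager's `<sql.h>`; the `NANODBC_TEXT` switch itself would still need a build-time define, since the preprocessor cannot evaluate `sizeof`.
```cpp
#include <sql.h>          // SQLWCHAR from the ODBC driver manager headers
#include <string>
#include <type_traits>

namespace nanodbc_detail
{
    // Pick 32-bit string types when the driver manager (e.g. iODBC on *nix)
    // defines SQLWCHAR as a 4-byte wide character, 16-bit types otherwise.
    using string_type = std::conditional_t<sizeof(SQLWCHAR) == 4,
                                           std::u32string,
                                           std::u16string>;
    using char_type = string_type::value_type; // char32_t or char16_t
}
```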
aavanzyl/ngx-tiny
709593529
Title: 'ngx-date-picker' is not a known element depending on starting page.
Question: username_0: I have a multipage Ionic app. Let's just call the pages A and B. Page A imports a module that imports a few other packages, including ngx-date-picker. Page B doesn't load that module.

If I load the app starting on page A, everything works as expected. If I load the app starting on page B and then navigate to page A, it throws the following error: `'ngx-date-picker' is not a known element`.

It's funny because it doesn't complain about any of the other modules that page A loads and page B doesn't. Page A also imports ngx-time-picker, but there are no errors for that (although it might be halting execution after the date-picker error).

Any thoughts on what is going on? Any fixes beyond importing ngx-date-picker at the root?
Answers:
username_1: @username_0 Can you provide me with a code example?
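For anyone hitting the same thing: component selectors in Angular are resolved per NgModule, so a lazily loaded page module must import the picker's module itself (or via a shared module); imports in the root module don't propagate to lazy modules. A minimal sketch follows; the exact module name and import path exported by this library are assumptions, so check its docs.
```typescript
// page-a.module.ts (sketch; NgxDatePickerModule and its import path are
// hypothetical names, verify against the library's documentation)
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { NgxDatePickerModule } from 'ngx-tiny/date-picker';

import { PageAComponent } from './page-a.component';

@NgModule({
  imports: [CommonModule, NgxDatePickerModule],
  declarations: [PageAComponent],
})
export class PageAModule {}
```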
ShyykoSerhiy/gfm-plugin
392991397
Title: Loboevolution rendering engine does not properly work
Question: username_0: I tried to switch the rendering engine to "Loboevolution", but that yielded a pretty ugly result. Besides the font being somewhat skinny and strange-looking, there are encoding issues, no images shown, no code highlighting, ...
2DegreesInvesting/r2dii.data
917346982
Title: All classification bridges should include a `title` column, or equivalent, describing what each code refers to
Question: username_0: This exists for all but the `nace` and `isic` classification bridges.

``` r
library(r2dii.data)
library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union

sic_classification %>% select(description)
#> # A tibble: 256 x 1
#>    description
#>    <chr>
#>  1 private households, exterritorial organisations, representatives of foreign …
#>  2 private households, exterritorial organisations, representatives of foreign …
#>  3 growing of cereals and other crops n.e.c.
#>  4 growing of fruit, nuts, beverage and spice crops
#>  5 farming of cattle, sheep, goats, horses, asses, mules and hinnies;dairy far…
#>  6 growing of crops combined with farming of animals (mixed farming)
#>  7 forestry and related services
#>  8 logging and related services
#>  9 ocean and coastal fishing
#> 10 mining of coal and lignite
#> # … with 246 more rows

gics_classification %>% select(description)
#> # A tibble: 263 x 1
#>    description
#>    <chr>
#>  1 oil & gas drilling
#>  2 oil & gas equipment & services
#>  3 integrated oil & gas
#>  4 oil & gas exploration & production
#>  5 oil & gas refining & marketing
#>  6 oil & gas storage & transportation
#>  7 coal & consumable fuels
#>  8 commodity chemicals
#>  9 diversified chemicals
#> 10 fertilizers & agricultural chemicals
#> # … with 253 more rows

psic_classification %>% select(original_code)
#> # A tibble: 1,271 x 1
#>    original_code
#>    <chr>
#>  1 Growing of leguminous crops such as: mongo, string beans (sitao), pigeon pea…
#>  2 Growing of ground nuts
#>  3 Growing of oil seeds (except ground nuts) such as soya beans, sunflower and …
#>  4 Growing of sorghum, wheat
#>  5 Growing of other cereals (except rice and corn), leguminous crops and oil se…
#>  6 Growing of paddy rice, lowland, irrigated
#>  7 Growing of paddy rice, lowland, rainfed
[Truncated]
#> # … with 986 more rows

isic_classification
#> # A tibble: 768 x 4
#>    code  code_level sector       borderline
#>    <chr>      <dbl> <chr>        <lgl>
#>  1 A              1 not in scope FALSE
#>  2 1              2 not in scope FALSE
#>  3 11             3 not in scope FALSE
#>  4 111            4 not in scope FALSE
#>  5 112            4 not in scope FALSE
#>  6 113            4 not in scope FALSE
#>  7 114            4 not in scope FALSE
#>  8 115            4 not in scope FALSE
#>  9 116            4 not in scope FALSE
#> 10 119            4 not in scope FALSE
#> # … with 758 more rows
```

<sup>Created on 2021-06-10 by the [reprex package](https://reprex.tidyverse.org) (v2.0.0)</sup>
Answers:
username_1: Maybe a good time to try to unify the name of such a column? The name "description" seems most informative to me.
username_0: Agreed! I think we can also unify a couple of other things about the classification bridges, including how we handle different levels. This relates to #229
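A quick way to see the current inconsistency at a glance (a sketch, using only datasets referenced in this thread):
``` r
library(r2dii.data)

bridges <- list(
  sic = sic_classification,
  gics = gics_classification,
  nace = nace_classification,
  isic = isic_classification
)

# TRUE where a human-readable description column already exists
vapply(bridges, function(x) "description" %in% names(x), logical(1))
```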
pytest-dev/pytest-xdist
1027032724
Title: -n auto doesn't scale workers properly in AWS CodeBuild
Question: username_0: I'm running tests with `pytest -n auto` in AWS CodeBuild and my build instance size is `BUILD_GENERAL1_MEDIUM`, which has 4 vCPUs (https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html); however, xdist seems to spawn only two worker processes. I also tried upgrading the build instance size to `BUILD_GENERAL1_LARGE` (8 vCPUs), but xdist still spawns only 2 workers.

I am at a loss on how to debug this. Any ideas on how I can make `pytest -n auto` scale workers to the number of vCPUs in CodeBuild, and is this a bug in xdist or am I doing something wrong? Thanks!
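One way to start debugging this is to print the CPU counts the build container actually reports; a gap between the instance's advertised vCPUs and the scheduler affinity (e.g. from container CPU limits) would explain the worker count. A small diagnostic sketch:
```python
import multiprocessing
import os

print("os.cpu_count():", os.cpu_count())
print("multiprocessing.cpu_count():", multiprocessing.cpu_count())
if hasattr(os, "sched_getaffinity"):
    # CPUs this process is actually allowed to run on (affinity-aware)
    print("sched_getaffinity:", len(os.sched_getaffinity(0)))
```
As a workaround, passing an explicit count such as `pytest -n 4` sidesteps the auto-detection entirely.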
el-pths/telegraph-a2
177645985
Title: Morse transmitter (prototype)
Question: username_0: Program the controller so that when a character (A to Z) is received over the serial port, the corresponding Morse code is output on the connected LED.
Answers:
username_1: Should the program receive a whole line at once or character by character? And if character by character, what is the difference between this task and the next one?
username_0: The differences are:
- it has to run on the controller (so the program will have to be adapted to C)
- characters are received over the serial connection (i.e. `Serial.read()`) and as letters `A...Z`, not Baudot codes (the codes come later, once we build the keyboard)
- the result has to blink an LED (rather than printing dots and dashes to the console)
- the main difference is that this process takes non-zero time...
username_1: I programmed the controller so that when a character (A to Z) is received over the serial port, the corresponding Morse code is output on the connected LED. The file is in the repo root (ToMorseFirst). Now I want to make it accept sentences and flash them out.
username_0: I flashed this program and discovered that the Arduino buffers incoming characters while handling Serial. So apparently it will already flash sentences (sequences of letters) of up to 64 characters as-is. That means we'll run into this question a bit later, when we switch to the Baudot keyboard (which will have no internal buffer at all).
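For reference, a minimal Arduino sketch of the behavior described above: read A..Z from the serial port and blink the corresponding Morse code on an LED. The pin and timings are assumptions, not values taken from the ToMorseFirst file in the repo.
```cpp
const int LED_PIN = 13;   // assumed: the board's built-in LED
const int UNIT_MS = 200;  // assumed dot length

const char* MORSE[26] = {
  ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
  ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
  "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."
};

void blinkSymbol(char s) {
  digitalWrite(LED_PIN, HIGH);
  delay(s == '.' ? UNIT_MS : 3 * UNIT_MS); // dot = 1 unit, dash = 3 units
  digitalWrite(LED_PIN, LOW);
  delay(UNIT_MS);                          // gap between symbols
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char c = toupper(Serial.read());
    if (c >= 'A' && c <= 'Z') {
      for (const char* p = MORSE[c - 'A']; *p; ++p) blinkSymbol(*p);
      delay(2 * UNIT_MS); // complete the 3-unit gap between letters
    }
  }
}
```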
openwisp/ansible-openwisp2-imagegenerator
245524440
Title: Styling problems
Question: username_0: Hey guys,
Just a couple of styling/best-practice issues.

Normally you want to use apt only if the platform is Ubuntu and/or Debian, so I would recommend putting a `when` condition on that, as in `when ansible_distribution==Debian or ansible_distribution==Ubuntu`.

Besides that, you normally do not specify become_method and become_user on each task; that is normally done in the generic playbook. Normally you do not want to enforce sudo in case the user is running in a fakeroot environment or similar.

Just my two cents. Don't really have time to send you a pull request ATM.
Answers:
username_1: Hi @username_0! This playbook only supports Debian and Ubuntu systems, so `apt` is good enough.

Unfortunately, we can't run playbooks using this role as root with `become`, because compiling OpenWRT/LEDE as root is not allowed and would result in failure. Therefore we must run playbooks using this role with a normal user which has sudo privileges and become root when necessary, if at all (the parts of the playbook needing root privileges may be skipped if already performed in the past).

I hope it's clear! If you have more suggestions on how to improve things further let us know :-)

Thanks 👍
vitaly-t/pg-monitor
298127921
Title: Display tags that are numbers.
Question: username_0: In light of the soon-coming version 8.x of `pg-promise`, add support for displaying tags when they are numbers. Currently, only strings are displayed, but tags can sometimes be plain numbers as well.
Answers:
username_0: Implemented in version [0.9.0](https://github.com/username_0/pg-monitor/releases/tag/v.0.9.0).
Status: Issue closed
django/channels
240067668
Title: add attributes to route_class not working
Question: username_0: I tried to move the above statement into the `__init__` function and even into receive, but the attribute exists only until I return from handling the message. The next time I get a message (from the same websocket) the key no longer exists. Is there a way to fix it?
Answers:
username_1: Hi,

You definitely shouldn't try to store state inside consumer instances. There isn't any guarantee that messages will be processed by the same worker host. So it is good practice to keep your consumers stateless. You can try using session storage to share information between consumer runs.

Regards, Artem.
username_2: @username_1 I wholly agree with this - if you need to keep a small state between consumers across workers, use consumer sessions.
username_0: You're right; however, I was using this package to create a server-side proxy for a multi-step authentication process. On a new connection a Selenium webdriver would be opened to enter username and password; if that succeeded, the client would be prompted for the next credentials, which in turn would be given to the Selenium webdriver, and so on. The problem was that I couldn't access the webdriver in the next session, so only one step of authentication would happen (sorry for my bad English).
username_1: I don't understand your exact use case, but it definitely isn't a channels issue. The channels session is bound to the reply channel, not the HTTP user. Channels has some helpers to handle both kinds of sessions. But the main point here is that you can use the channels session to store shared data between consumer calls as long as they have the same reply channel. In other words, as long as they are triggered by the same connection.
Status: Issue closed
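For the record, a minimal sketch of the session approach suggested above, in channels 1.x style to match this issue's era (the handler name and keys are illustrative):
```python
from channels.sessions import channel_session

@channel_session
def ws_receive(message):
    # channel_session is persisted per reply channel, so state survives
    # between messages even if different workers handle them.
    step = message.channel_session.get("auth_step", 0)
    # ... run the next authentication step here ...
    message.channel_session["auth_step"] = step + 1
```
Note that only serializable data can live in the session; a live Selenium driver would still need to be kept outside the consumers (e.g. keyed by a session ID in a separate service).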
nrwl/nx
356658425
Title: polyfills.ts in generated tsconfig.spec.json with jest flags
Question: username_0: I generated a new lib module with the `--unit-test-runner=jest` flag:
`ng g lib test123 --publishable=true --unit-test-runner=jest`

I noticed the generated `tsconfig.spec.json` file has `src/polyfills.ts`, but I don't see a `polyfills.ts` file present in `src`. Is this added for future use?

```
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "outDir": "../../dist/out-tsc/libs/app-confirm",
    "module": "commonjs",
    "types": ["jest", "node"]
  },
  "files": ["src/test-setup.ts", "src/polyfills.ts"],
  "include": ["**/*.spec.ts", "**/*.d.ts"]
}
```
Answers:
username_1: It should be removed. It's a low-effort fix for someone looking to contribute ;)
Status: Issue closed
username_1: This was fixed with https://github.com/nrwl/nx/pull/767
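Until the schematic fix landed, the manual workaround was simply to drop the nonexistent entry from `files`; for reference, the corrected file looks like this:
```
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "outDir": "../../dist/out-tsc/libs/app-confirm",
    "module": "commonjs",
    "types": ["jest", "node"]
  },
  "files": ["src/test-setup.ts"],
  "include": ["**/*.spec.ts", "**/*.d.ts"]
}
```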
dotnet/runtime
1076001067
Title: File.SetLastWriteTime fails on readonly files on Windows
Question: username_0:
```c#
string path = Path.GetTempFileName();
File.SetLastWriteTime(path, new DateTime(2000, 1, 1)); // succeeds
File.SetAttributes(path, FileAttributes.ReadOnly);
File.SetLastWriteTime(path, new DateTime(2001, 1, 2)); // throws UnauthorizedAccessException
```
This is because SetLastWriteTime [tries to open a file handle with GENERIC_WRITE](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/IO/FileSystem.Windows.cs#L202) instead of only FILE_WRITE_ATTRIBUTES. This bug exists in .NET Framework as well and has been encountered by customers, e.g. [here](https://stackoverflow.com/questions/53980059/modify-access-time-of-windows-read-only-file-through-wsl), and is now reported in a DTS issue.
Answers:
username_0: I'll throw up a fix.
username_1: Just from a theoretical perspective, why is it incorrect to fail changing a read-only file?
username_0:
1. Windows allows it just fine. It's precisely what this flag is for - we were just passing the wrong one.
2. Customers have asked for it, including a DTS customer today.
3. Unix allows it. The customer pointed out that Windows is inconsistent due to the flag we're passing.
```
dan@danmoseL:~/tmp$ ll tmp
-r--r--r-- 1 dan dan 2 Jan 1 2020 tmp
dan@danmoseL:~/tmp$ chmod 444 tmp
dan@danmoseL:~/tmp$ touch -t 202101010101 tmp
dan@danmoseL:~/tmp$ ll tmp
-r--r--r-- 1 dan dan 2 Jan 1 2021 tmp
```
Note that my implementation of GNU touch on Windows does fail. I didn't look at the sources, but under the debugger it doesn't seem to call CreateFile (or SetFileTime) after reading the attributes. It looks like it is choosing to bail when it sees the r/o flag.
username_0: Also it occurs to me that copying a readonly file changes the timestamp on it. You can also rename or move it, and set other attributes on it. It seems that readonly really means "cannot change the content".
username_0: @username_1 thoughts on above?
username_1: Seems fine then.
Status: Issue closed
username_2: @username_0 Should we backport the fix? It was relatively low risk. Thank you for fixing it BTW.
username_3: Reopened since customer is asking for this in 6.0.
Status: Issue closed
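For anyone needing this before the fix ships, a hedged P/Invoke sketch of the same idea: open the handle with only `FILE_WRITE_ATTRIBUTES`, then call `SetFileTime`. The path is a placeholder and error handling is minimal.
```c#
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class TouchReadOnly
{
    const uint FILE_WRITE_ATTRIBUTES = 0x0100;
    const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(string name, uint access, uint share,
        IntPtr security, uint disposition, uint flags, IntPtr template);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetFileTime(SafeFileHandle file, IntPtr creation,
        IntPtr lastAccess, ref long lastWrite);

    static void Main()
    {
        string path = @"C:\temp\readonly.txt"; // placeholder path
        long lastWrite = new DateTime(2001, 1, 2, 0, 0, 0, DateTimeKind.Utc).ToFileTimeUtc();

        // Ask only for attribute-write access; this succeeds on read-only files.
        using var handle = CreateFile(path, FILE_WRITE_ATTRIBUTES,
            FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid || !SetFileTime(handle, IntPtr.Zero, IntPtr.Zero, ref lastWrite))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}
```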
laravel/octane
909835985
Title: Possible memory leak from Eloquent?
Question: username_0:
- Octane Version: latest
- Laravel Version: latest
- PHP Version: 8
- Server & Version: Swoole
- Database Driver & Version: MySQL

### Description:
When I use `ModelName::create(['something'=>'something'])` and there is a unique index in the database, so that an error is thrown, I notice that the memory usage builds up when I hit this particular request.
```
500 POST /something/install ....... 15.68 mb 3.33 ms
500 POST /something/install ....... 15.68 mb 3.22 ms
500 POST /something/install ....... 15.68 mb 3.25 ms
500 POST /something/install ....... 15.69 mb 3.07 ms
500 POST /something/install ....... 15.69 mb 3.35 ms
500 POST /something/install ....... 15.69 mb 3.42 ms
500 POST /something/install ....... 15.69 mb 3.25 ms
500 POST /something/install ....... 15.69 mb 3.38 ms
500 POST /something/install ....... 15.69 mb 3.34 ms
500 POST /something/install ....... 15.70 mb 3.17 ms
500 POST /something/install ....... 15.70 mb 3.01 ms
500 POST /something/install ....... 15.70 mb 3.40 ms
500 POST /something/install ....... 15.70 mb 4.17 ms
500 POST /something/install ....... 15.70 mb 3.10 ms
500 POST /something/install ....... 15.70 mb 3.52 ms
500 POST /something/install ....... 15.71 mb 3.29 ms
500 POST /something/install ....... 15.71 mb 3.02 ms
500 POST /something/install ....... 15.71 mb 3.00 ms
500 POST /something/install ....... 15.71 mb 3.04 ms
500 POST /something/install ....... 15.71 mb 3.16 ms
500 POST /something/install ....... 15.71 mb 3.06 ms
500 POST /something/install ....... 15.72 mb 3.11 ms
500 POST /something/install ....... 15.72 mb 3.11 ms
500 POST /something/install ....... 15.72 mb 3.15 ms
500 POST /something/install ....... 15.72 mb 3.02 ms
500 POST /something/install ....... 15.72 mb 3.91 ms
500 POST /something/install ....... 15.72 mb 3.66 ms
500 POST /something/install ....... 15.73 mb 3.36 ms
500 POST /something/install ....... 15.73 mb 3.37 ms
500 POST /something/install ....... 15.73 mb 3.21 ms
500 POST /something/install ....... 15.73 mb 3.37 ms
```
### Steps To Reproduce:
Create a table with a unique constraint, then try to create something with Eloquent: `ModelName::create(['something'=>'something'])`
Answers:
username_0: I feel like this memory-reporting thingy is not working as it's supposed to; I noticed the memory going down after some time. It also happens on other requests, so it might not be an Eloquent issue.
username_1: @username_0 Do you have Telescope enabled?
username_0: No, I don't use Telescope. These are my only packages:
```
"require": {
    "php": "8.*",
    "ext-pdo": "*",
    "fideloper/proxy": "^4.4",
    "fruitcake/laravel-cors": "^2.0",
    "guzzlehttp/guzzle": "^7.0.1",
    "laravel/framework": "^8.40",
    "laravel/octane": "^1.0"
},
"require-dev": {
    "facade/ignition": "^2.5",
    "nunomaduro/collision": "^5.0"
},
```
username_2: The leak is in facade/ignition. Fixed https://github.com/facade/ignition/pull/393
Status: Issue closed
username_0: It looks like the leak is still here even after I removed facade/ignition. I don't have any middlewares other than Laravel's defaults.
username_0: @username_2 @username_3 Sorry to ping you, but I think this issue needs to be reopened. I discovered the following: I installed fresh Laravel and added Octane with Swoole. When I run it without touching anything it's all fine, but when I add `return abort(404)` to the routes/web.php default route, the memory starts to increase again over time.
Status: Issue closed username_4: using fresh install still having memory leak issue when using `abort()` or `throw()` methods, (heavy usage inside private sub method), fresh laravel install with default packages + `laravel/octane` only besides, checked that is latest `facade/ignition` in composer.lock
lutraconsulting/MDAL
505167982
Title: Support for 12D TIN
Question: username_0: Some info here: http://12dwiki.com.au/retriangulate-tin/

Examples and format spec: UNKNOWN!
Answers:
username_1: There's a mix of possible formats. See:
- .12da: https://www.12d.com/downloads/pdf/12d_archive_file_format.pdf
- .12daz: a zipped 12da file
- .12dxml: http://downloads.12dmodel.com/v11/11.1.13/11.1.13.1/12d_XML_File_Format_18_Nov_16.pdf

At least the formats are well documented...
username_2: Yep, the 12da format has been used very widely by surveyors in Australia. Please add support for it, and add a feature to export all the supported formats by converting to DXF.
username_2: Just uploaded two examples. These are 12da files I received from a client, and the corresponding DXF files a friend helped convert; he has the 12d software. https://www.dropbox.com/sh/1q7o7d5pqtwbljt/AABaXtCaWvWhtfYXPUhB1Ncva?dl=0 Hope they help.
username_0: @username_2 Thanks for the example files. As this is a closed-source format and we don't have much info on the spec, it would be good if someone from 12d Solutions could provide us with the detailed spec and ideally fund the work!
umijs/umi
460728799
Title: The webpack compilation process fails; after manual fixes it starts normally (both dev and build)
Question: username_0: <!-- https://github.com/YOUR_REPOSITORY_URL -->
## How To Reproduce
**Steps to reproduce the behavior:**
1.
2.

**Expected behavior**
1.
2.

## Context
- **Umi Version**: 2.7.7 & 2.8.4
- **Node Version**: 10.12.0
- **Platform**: macOS
Answers:
username_1: Please share the reproduction steps. Are you using conventional routing, with a comment at the top of the page file?
username_0: A reproduction repo has been set up: https://github.com/username_0/error-feedback.git
username_1: Took a look; you're using conventional routing. Try renaming `error-feedback/src/pages/map/plugin` to `error-feedback/src/pages/map/_plugin`.
Status: Issue closed
username_0: @username_1 Problem solved, thanks!
robotframework/robotframework
368592609
Title: Unload Dynamic/Remote library?
Question: username_0: Hi! Is it possible to unload a Dynamic/Remote library?

Background: We have several hardware modules that are assembled into one system. Each hardware module has its own ROBOT file using an RPC server specific to that hardware module. If a single module test runs on its own, everything is fine. But if we run all module tests in a single run, we get an error message saying "Keyword not found". The ROBOT file looks basically like this:
```
Copy Files To Target And Start Rpc    ${IP}    MODULE1    Module1Rpc.py
Import Library    Remote    http://${IP}:${PORT}    WITH NAME    uut
uut.Do Something
uut.Stop Remote Server

Copy Files To Target And Start Rpc    ${IP}    MODULE7    Module7Rpc.py
Import Library    Remote    http://${IP}:${PORT}    WITH NAME    uut
uut.Do Something Completely Different
uut.Stop Remote Server
```
The call in the second block, `uut.Do Something Completely Different`, causes the "Keyword not found" error. Renaming `uut` to `uut1` and `uut7` doesn't solve the problem. I can see that the keywords of the second block are transferred over the network from target to host, according to Wireshark.

Versions:
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
robot-extensions 0.1
robotframework 3.0.4
PythonRemoteServer commit 8047b40

Any ideas? Thanks.
Answers:
username_1: Could you try using the `Reload Library` keyword?
Status: Issue closed
username_0: Yes, too easy... I must have overlooked the `Reload Library` keyword.
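For reference, the resolution in context looks roughly like this sketch; `Reload Library` is a BuiltIn keyword that re-fetches the keyword list from a dynamic or remote library:
```
Copy Files To Target And Start Rpc    ${IP}    MODULE7    Module7Rpc.py
Import Library    Remote    http://${IP}:${PORT}    WITH NAME    uut
Reload Library    uut
uut.Do Something Completely Different
uut.Stop Remote Server
```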
studiometa/webpack-config
927052087
Title: Generate manifest.json
Question: username_0: When we modify only SCSS, it recompiles the `manifest.json` file and the `.map`. This will be a problem later if the file is committed and we want to rebase our branch.
Answers:
username_1: This is not fixable: when the content of any bundled file changes, its hash in the `manifest.json` file will change.
Status: Issue closed
scriptBoris/DataGridSam
755318461
Title: Grouping
Question: username_0: Can DataGridSam **group data**? If it can, please give me an example! Thanks.
Answers:
username_1: Sorry, but DataGridSam can't group data. Maybe sometime I'll add data-grouping functionality.
username_0: Oh, okay. Any ideas for grouping the data? Thanks.
Status: Issue closed
username_0: Thanks
the-blue-alliance/the-blue-alliance-ios
316509546
Title: Add "Top Teams" to EventInfoTableViewController
Question: username_0: https://github.com/the-blue-alliance/the-blue-alliance-android/blob/master/android/src/main/java/com/thebluealliance/androidclient/binders/EventInfoBinder.java#L323
Status: Issue closed
Answers:
username_0: Closing this one out - after a chat in Slack we've determined this feature may not be as useful as we'd hoped. We'll watch user feedback to see if people are looking for this sort of overview information on the info tab.
yasminfrost-IT/ISDMTute12Group4
625220951
Title: Prototype Activity: RM Profile Creation System Flow
Question: username_0: Step One: Once the user has launched the system, the information system will display the login screen. This screen consists of a welcome message as well as input fields for the employee's login details. It also displays an option for users who are not yet registered. As the user is not yet completely onboarded as a relationship manager, they will click the 'new employee' option.

<img width="1084" alt="Screen Shot 2020-05-27 at 7 32 41 am" src="https://user-images.githubusercontent.com/62311268/82952417-4495fd00-9fec-11ea-8264-c7b0138ac9e3.png">

Step Two: From here, the information system will show the employee initialisation form. The questionnaire asks employees specific questions about their travel history, their language-speaking abilities and other questions that will help the information system match them to customers. It also asks for personal information such as their full name and contact details. The user must fill out all fields in order to move to the next screen. The user fills out the form and presses the submit button, which becomes illuminated and clickable upon completion of the form.

<img width="1082" alt="Screen Shot 2020-05-27 at 7 42 20 am" src="https://user-images.githubusercontent.com/62311268/82953157-9ee38d80-9fed-11ea-974b-e873b7f2bfdf.png">

Step Three: The information system indicates to the user that the form has been submitted, displaying messaging evidencing the successful submission. This page indicates that the information the user provided has been successfully sent and stored in the system's database. The user is not required to do anything else and can close the window/application at this point.

<img width="1086" alt="Screen Shot 2020-05-27 at 7 36 09 am" src="https://user-images.githubusercontent.com/62311268/82952663-c38b3580-9fec-11ea-9508-71145cb3a1ba.png">

Step Four: The information system will then automatically generate a profile for the Relationship Manager using the information the user provided; this includes the creation of an employee number. Each profile will detail the employee's name, employee number, competencies, customer satisfaction rating and sales history.

<img width="1085" alt="Screen Shot 2020-05-26 at 10 10 00 am" src="https://user-images.githubusercontent.com/62311268/82953562-52e51880-9fee-11ea-8890-c5ca8f4749e4.png">
bumptech/glide
382791495
Title: load fails: wrong rounding and division by zero with fitCenter()
Question: username_0: **Glide Version**: 4.8.0

**Issue details / Repro steps / Use case background**:
Very similar to issue #3420. Now `Downsampler.calculateScaling` computes target dimensions, can arrive at a width or height of zero, and immediately divides by it, resulting in a load error.

I suggest that the computations always produce at least 1, i.e. `Math.max(1, ...)`, so that rounding to zero never happens:
```
// Downsampler.java:
private static int round(double value) {
    return Math.max(1, (int) (value + 0.5d));
}
```

**Glide load line / `GlideModule` (if any) / list Adapter code (if any)**:
```java
GlideApp.with(act).load(file).fitCenter().into(v)
```

<!-- What is the error message that you got in the log? You can find some help on diagnosing issues here: https://github.com/bumptech/glide/wiki/Debugging-and-Error-Handling -->
**Stack trace / LogCat**:
```ruby
java.lang.ArithmeticException: divide by zero
  at com.bumptech.glide.load.resource.bitmap.Downsampler.calculateScaling(Downsampler.java:372)
  at com.bumptech.glide.load.resource.bitmap.Downsampler.decodeFromWrappedStreams(Downsampler.java:245)
  at com.bumptech.glide.load.resource.bitmap.Downsampler.decode(Downsampler.java:206)
  at com.bumptech.glide.load.resource.bitmap.StreamBitmapDecoder.decode(StreamBitmapDecoder.java:62)
  at com.bumptech.glide.load.resource.bitmap.StreamBitmapDecoder.decode(StreamBitmapDecoder.java:18)
  at com.bumptech.glide.load.resource.bitmap.BitmapDrawableDecoder.decode(BitmapDrawableDecoder.java:58)
  at com.bumptech.glide.load.engine.DecodePath.decodeResourceWithList(DecodePath.java:72)
  at com.bumptech.glide.load.engine.DecodePath.decodeResource(DecodePath.java:55)
  at com.bumptech.glide.load.engine.DecodePath.decode(DecodePath.java:45)
  at com.bumptech.glide.load.engine.LoadPath.loadWithExceptionList(LoadPath.java:58)
  at com.bumptech.glide.load.engine.LoadPath.load(LoadPath.java:43)
  at com.bumptech.glide.load.engine.DecodeJob.runLoadPath(DecodeJob.java:507)
  at com.bumptech.glide.load.engine.DecodeJob.decodeFromFetcher(DecodeJob.java:472)
  at com.bumptech.glide.load.engine.DecodeJob.decodeFromData(DecodeJob.java:458)
  at com.bumptech.glide.load.engine.DecodeJob.decodeFromRetrievedData(DecodeJob.java:410)
  at com.bumptech.glide.load.engine.DecodeJob.onDataFetcherReady(DecodeJob.java:379)
  at com.bumptech.glide.load.engine.SourceGenerator.onDataReady(SourceGenerator.java:112)
```

<!-- Bonus points if you attach a relevant screenshot, screen recording or a small demo project -->
Testing image 20000x1 pixels
![extreme wide and low](https://user-images.githubusercontent.com/3145438/48794088-37ae9400-ecf9-11e8-81a6-67b84b6201d5.jpg)
mycelium-com/wallet-android
282442836
Title: BTC not received using "Exchange other coins to BTC"
Question: username_0: I used the in-app functionality to exchange my OMG tokens for BTC. It generated an address to send 10 OMG tokens to, but I did not receive BTC in exchange!

The address generated: 0x2d2fe42f5ad159f37498a8024d96f7cb97967c66

As you can see, the tokens are there. I don't have any other receipt or link I could provide, as the Mycelium app doesn't generate any.
Answers:
username_1: I am wondering if your problem has been resolved? I am experiencing the same thing and am starting to get very upset.
username_0: I just checked. It eventually got through. Wait for at least 24 hours.
Status: Issue closed
Opentrons/opentrons
811407663
Title: bug: Multiple blow outs do not work as expected
Question: <!-- Thanks for taking the time to file an issue! Please make sure you've read the "Opening Issues" section of our Contributing Guide: https://github.com/Opentrons/opentrons/blob/edge/CONTRIBUTING.md#opening-issues To ensure your issue can be addressed quickly, please fill out the sections below to the best of your ability! -->
# Overview
<!-- Use this section to describe your bug at a high level. Please include any issues you can find that may be related. -->
For us, sometimes one `blow_out` isn't enough to get rid of all the liquid. When I try calling it twice, nothing happens for the second one. I also tried to put a `touch_tip` before the second `blow_out`, thinking that might make it reset. It just does the touch tip and moves on to trash.
# Steps to reproduce
<!-- If this is a bug report and there are specific steps we can take to reproduce the bug, please list them here. This is a good place to put things like software version, hardware version, and operating system. -->
In a protocol, write:
```
pipette.blow_out()
pipette.blow_out()
```
# Current behavior
<!-- Describe how the software currently behaves and how that differs from how you think the software should behave. -->
Tip is only blown out once.
# Expected behavior
<!-- Describe how you think the software should behave. -->
Tip is blown out, plunger goes up again, and blows out again.
Answers:
username_1: Examined the code in question, and I can see why this behavior is happening. The first blow out leaves the plunger at the bottom of the pipette, in the blowout position. When the second blow out comes along, the plunger is told to move to the location that it's already at, so it does nothing. I don't know why the plunger isn't reset as part of the blowout procedure (there might be a good reason not to do so), but that strikes me as a potential avenue to fix this.

In terms of a workaround, the only thing in the public API that is able to reset the plunger position is an `aspirate`. I wonder if something like this would work?
```py
pipette.blow_out()
pipette.aspirate(volume=0.001)  # some tiny volume rather than `0`, since `0` means "as much as possible"
pipette.blow_out()
```
username_0: Thanks for the answer. I tried that just now. The `aspirate` makes the tip move up a little bit before aspirating and then back down for the second blow out. I thought `aspirate` with `location=None` was supposed to use the current position? Anyway, thanks for the workaround. I am happy to have this closed now, but maybe you want to keep it open to see about fixing it so the plunger is brought up immediately?
username_0: Since you say there might be a reason not to reset the plunger immediately after the blow out, how about checking whether the plunger has to be brought up at the start of `blow_out`, like `aspirate` does?
username_1: This is definitely on the list of available fixes. Might even be the most likely one!
username_2: Hi, just wanted to comment on this a little bit.
I have a protocol that tries to do this:
```
# after dispensing some glycerol:
# 1 Residual liquid is removed with slower blow_out() at slower flowrate
p20multi.blow_out(well.bottom(mm))
# 2 move_to() from liquid to top of labware at 1-5 mm/s
p20multi.move_to(well.top(mmt), speed=vel_exit_liquid)
# 3 move_to() to bottom of labware at 10 mm/s speed to aspirate
p20multi.move_to(reservoir.wells_by_name()[reservoir_well_name].bottom(mm))
# 4 aspirate() (at slower flowrate)
p20multi.aspirate(20, reservoir.wells_by_name()[reservoir_well_name].bottom(mm))
```
What happens here in practice is that the pipette goes to the bottom of the reservoir well as per step 3, but after that, since the plunger is still down after the blowout, it moves back to the top to (I guess) return the plunger to its default position, then waits there 10 seconds and then does step 4. There's a bit of time wasted there, but the big problem is that when it comes out of the well between steps 3 and 4, the pipette is full of glycerol, and that glycerol ends up absorbed, so the pipette ends up dispensing 21 or so uL instead of 20. If I didn't explain myself clearly I'm happy to clarify.

The thing is, this can become a big problem and, if you don't figure it out early, a big headache. I'm surprised that this hasn't been fixed yet as of September 2021.
username_1: After my last comment in the thread, I shopped this around internally, and we're sort of in a difficult spot with blow-out. In general, protocol API v2 has some fairly fundamental architectural limitations that make changes like this a little fraught. Where that comes up with blow-outs specifically is: it's pretty difficult for the robot to know the answer to the question "is the pipette tip currently in liquid?" This is a _really_ important question when it comes to resetting after a blow-out, because if you answer the question incorrectly, you risk damaging the pipette hardware itself by sucking up liquid into the internals. The current behavior, while annoying a lot of the time, is a way of ensuring we pick a reliably not-risky spot to perform the plunger reset in the architecture we currently have.

We're in the middle of an overhaul of the protocol execution engine with the goal of making these sorts of changes much less risky. But as it turns out, re-writing the execution engine to address core architectural limitations while keeping everything else working takes... a lot more time than I thought 🙃. We are working on it, though! I recommend following along with the [protocol-engine issue label](https://github.com/Opentrons/opentrons/issues?q=is%3Aopen+is%3Aissue+label%3Aprotocol-engine) if you're curious about watching this work progress.
I don't have the OT2 here right now so I can't check, but I guess aspirate would move to the `.top(0)` (or whatever the default is) of the reservoir instead of `.top(mmt)`. So in order to keep the `.top(mmt)` height I might need to do a `aspirate(0.001,well.top(mmt))` to reset the plunger "manually" first. Thank you again for that workaround idea
automat-ed/portal
799838300
Title: UI Mockup
Question: username_0: Before we can start creating the UI for our Portal, we need some idea of what it should look like, to ensure a consistent and high-quality UI/UX.

# Task
For this issue, we should create a mockup of all the different pages to illustrate what they should look like. This can be done using [Figma](https://www.figma.com/). All designs should be uploaded to the [Portal Drive folder](https://drive.google.com/drive/folders/1UPzyKxq18V1Q8eMkiVhL-2CsuRoB0PpB?usp=sharing).
Answers:
username_0: A basic mock-up was created and can be found on our [Drive](https://drive.google.com/file/d/1Z0LOuUe6oXAljprnEqv65Fn02M4l1p6R/view?usp=sharing). Closing this.
Status: Issue closed
BEEmod/BEE2-items
230224343
Title: Let Rockets Break Glass & Vactubes
Question: username_0: Original issue by @ZFM2004 - Source: https://github.com/BEEmod/BEE2-items/issues/1983
"Let Rockets Break Glass & Vactubes"
Status: Issue closed
ct-js/ct-js
688608787
Title: Can't set maxWidth property on PIXI.BitmapText declaration
Question: username_0: **Describe the bug**
Creating a PIXI.BitmapText object does not allow you to set the `maxWidth` property on instantiation. You have to set it once the object has been created.

Example (does not work):
```js
this.dialog_text = new PIXI.BitmapText('WALK AROUND WITH ARROW KEYS, PRESS "E" TO INTERACT.', {
    font: {
        name: 'PressStart2P_400',
        size: 2
    },
    align: 'left',
    maxWidth: 72
})
```

Example (does work):
```js
this.dialog_text = new PIXI.BitmapText('WALK AROUND WITH ARROW KEYS, PRESS "E" TO INTERACT.', {
    font: {
        name: 'PressStart2P_400',
        size: 2
    },
    align: 'left'
})
this.dialog_text.maxWidth = 72;
```

**Screenshots (recommended)**
Attempting to set the maxWidth property when creating a PIXI.BitmapText object fails to set its property, as shown here: https://prnt.sc/u86m1b

**Versions:**
- OS: Windows 10
- ct.js version 1.4.2
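Until the constructor honors the option, a tiny factory keeps call sites clean. This is a sketch relying only on the post-construction assignment that the report above shows working:
```js
function makeBitmapText(text, style) {
    const {maxWidth, ...rest} = style;
    const bt = new PIXI.BitmapText(text, rest);
    if (maxWidth !== undefined) {
        bt.maxWidth = maxWidth; // assigning after creation does work
    }
    return bt;
}
```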
dcfemtech/hackforgood-waba-map
174659099
Title: Design addition: map description
Question: username_0: # WHAT IS THIS MAP THING?
Trusting your design eye and design decisions for a sleek, simple, consistent look. Please add, "The green lines on this map represent the existing bike lanes, protected bike lanes, and multi-use trails that make up our region's bike network. Toggle the different buffers to see which neighborhoods in our region are served by our current network and which are not. WABA's goal is to get everyone in our region within one mile of this network."

Font? Lato
Color: Your call
Size: You do you
Answers:
username_1: Done as well~
username_2: @username_1 - can you add a screenshot of the new description and link to your WIP branch with this? This will help the team review your work.
emergenzeHack/covid19italia_segnalazioni
610789643
Title: The first Social Market of Roma Capitale opens in Ostia, dedicated to families in need. A market
Question: username_0: <pre><yamldata>
Cosa: 'The first Social Market of #Roma Capitale opens in #Ostia, dedicated to families in need'
Descrizione: The first Social Market of Roma Capitale opens in Ostia, dedicated to families in need. A market where you don't need money to do your shopping, but a card that is topped up with the time people dedicate to socially useful work for the city
Link: https://www.comune.roma.it/web/it/notizia/al-via-primo-mercato-sociale-arrivano-alimenti-di-cittadinanza-per-famiglie-in-difficolta.page?fbclid=IwAR2rnMtFV4Z5TUbt8AXPwIKoDTklWI4QrkNrvzYBl0_13800dW-0IouDmjA
Posizione: 41.741625 12.263265 0 0
</yamldata></pre>
Answers:
username_1: @favoeva On reports from the Comuni we don't add the "Fonti Istituzionali" label, because there is no such category...
akre54/Backbone.D3View
102280538
Title: plans to publish on npm? Question: username_0: Would be quite useful, as in my experience, backbone and browserify make a great combo. Answers: username_1: Ah yep. Oversight on my part. Published. Status: Issue closed username_0: Thanks so much. Really excited about this plugin--kudos for the good work. username_2: `npm install --save backbone.d3view`
rust-lang/rustfmt
473764302
Title: Format empty dyn without trailing space Question: username_0: ```rust macro_rules! token { ($t:tt) => {}; } fn main() { token!(dyn); } ``` As of current master, in 2018 edition mode rustfmt applies the following diff: ```diff - token!(dyn); + token!(dyn ); ``` I would prefer for there not to be a trailing space inserted. @username_2 -- I believe this should fall under #3333. Affects the `Token!` macro in Syn. Answers: username_1: I would like to fix this. I've had a look at #3333 however can't find token!(dyn) in the diffs username_2: @username_1 Thanks! IIRC all you need to do is to flip the order of these two function calls: https://github.com/rust-lang/rustfmt/blob/3adfb08afe3cb6a8c055e581dea87808464d1381/src/macros.rs#L306-L309 When you create a PR, please do not forget to add a test :). Also, note that this issue is reproducible only when we are using edition 2018. username_1: Hey, just wanted to let you know that I won't be able to work on this as a result of my lack of time right now Status: Issue closed
moleculerjs/moleculer
677660042
Title: Invalid validator.type FastestValidator Question: username_0: ## Prerequisites Please answer the following questions for yourself before submitting an issue. - [x] I am running the latest version - [x] I checked the documentation and found no answer - [x] I checked to make sure that this issue has not already been filed - [x] I'm reporting the issue to the correct repository ## Current Behavior moleculer.config.js accept validator.type="Fastest" instead of "FastestValidator" as documentation said ## Expected Behavior Correct documentation about "FastestValidator" ## Failure Information ### Steps to Reproduce Please provide detailed steps for reproducing the issue. 1. Edit moleculer.config.js, set ``` validator: { type: "FastestValidator", } ``` 2. run "npm run dev" ### Context Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions. * Moleculer version: 0.14.9 * NodeJS version: v12.18.1 * Operating System: MacOS 10.15.5 ### Failure Logs ``` [2020-08-12T12:30:38.616Z] FATAL XXX.local-82113/BROKER: Unable to create ServiceBroker. BrokerOptionsError: Invalid Validator type 'FastestValidator'. at Object.resolve (/XXX/node_modules/moleculer/src/validators/index.js:49:10) at new ServiceBroker (/XXX/node_modules/moleculer/src/service-broker.js:233:33) at MoleculerRunner.startBroker (/XXX/node_modules/moleculer/src/runner.js:450:17) at /XXX/node_modules/moleculer/src/runner.js:475:21 { code: 500, type: 'BROKER_OPTIONS_ERROR', data: { type: 'FastestValidator' }, retryable: false } npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] dev: `moleculer-runner --repl --hot services/**/*.service.js --envfile .env` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] dev script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /XXX/.npm/_logs/2020-08-12T12_30_38_628Z-debug.log ```
snap-stanford/ogb
829850965
Title: ImportError: cannot import name 'PygGraphPropPredDataset' from 'ogb.graphproppred' Question: username_0: I just installed the package with `pip3 install ogb` and tried to run the small testscript: ``` from ogb.graphproppred import PygGraphPropPredDataset from torch_geometric.data import DataLoader # Download and process data at './dataset/ogbg_molhiv/' dataset = PygGraphPropPredDataset(name="ogbg-molhiv", root='dataset/') split_idx = dataset.get_idx_split() train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True) valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=32, shuffle=False) test_loader = DataLoader(dataset[split_idx["test"]], batch_size=32, shuffle=False) ``` However I receive the following: ``` Connected to pydev debugger (build 203.7148.72) Traceback (most recent call last): File "/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/pydevd.py", line 1477, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/tue/.config/JetBrains/PyCharmCE2020.3/scratches/scratch_2.py", line 1, in <module> from ogb.graphproppred import PygGraphPropPredDataset ImportError: cannot import name 'PygGraphPropPredDataset' from 'ogb.graphproppred' (/home/tue/.local/lib/python3.8/site-packages/ogb/graphproppred/__init__.py) python-BaseException Process finished with exit code 130 (interrupted by signal 2: SIGINT) ``` I'm using python 3.8 and should have all the required packages, as far as I can see. ``` pip3 list Package Version ------------------------ -------------------- apt-clone 0.2.1 apturl 0.5.2 ase 3.21.1 beautifulsoup4 4.8.2 biopython 1.78 blinker 1.4 Brlapi 0.7.0 certifi 2019.11.28 chardet 3.0.4 Click 7.0 colorama 0.4.3 command-not-found 0.3 configobj 5.0.6 cryptography 2.8 cupshelpers 1.0 cycler 0.10.0 dbus-python 1.2.16 decorator 4.4.2 defer 1.0.6 [Truncated] tinycss2 1.0.2 tldextract 2.2.1 torch 1.8.0 torch-geometric 1.6.3 torchaudio 0.8.0 torchvision 0.9.0 tqdm 4.59.0 typing-extensions 3.7.4.3 ubuntu-advantage-tools 20.3 ubuntu-drivers-common 0.0.0 ufw 0.36 Unidecode 1.1.1 urllib3 1.25.8 wadllib 1.3.3 webencodings 0.5.1 wheel 0.34.2 xkit 0.0.0 xopen 1.1.0 zeroconf 0.24.4 ``` Answers: username_1: It seems that you do not have all required packages installed in order to run PyG. Please follow the installation procedure described [here](https://github.com/username_1/pytorch_geometric#installation) for installing PyG. Alternatively, you can access OGB datasets via ``` from ogb.graphproppred import GraphPropPredDataset ``` without any additional library requirements. Status: Issue closed username_3: Hello, I can't find a file named "ogbn arXiv" in graphproppreddataset. Which file does this file correspond to in graphproppreddataset? username_2: ogbn-arxiv is a node classification dataset. ```python from ogb.nodeproppred import NodePropPredDataset ```
usgo/gocongress
925703316
Title: Sign Out Remains for Multiple Years
Question: username_0: ## Summary of the Bug
It seems unusual that we are checking the year of our user for Sign In, but not whether they are on the correct year when they visit other years.
## Steps to Reproduce the Behaviour
1. Sign in with any account.
2. Go to another year.
## The Expected Behaviour
The Sign Out button should only appear for the current year. Previous years should require a separate Sign In.
## Screenshots
### 2018
![Screenshot from 2021-06-20 16-13-00](https://user-images.githubusercontent.com/5054653/122691490-d3bc1700-d1e4-11eb-9dd6-35bb5f941457.png)
### 2020
![Screenshot from 2021-06-20 16-13-09](https://user-images.githubusercontent.com/5054653/122691491-d454ad80-d1e4-11eb-8300-abe309e32ecc.png)
### 2021
![Screenshot from 2021-06-20 16-31-57](https://user-images.githubusercontent.com/5054653/122691530-1251d180-d1e5-11eb-96a4-79fb7c84be84.png)
Answers: username_1: Yeah, this is definitely a slightly funny aspect of the way the site works. I'm not sure it's a _problem_, though, especially since previous years are mainly kept around for historical reference.
To fix it, we could...
1. Update the site so that one could be logged in to any number of years simultaneously -- that login status would be year-specific and not site-specific, and the site would really act like different years were fundamentally different sites. This is a bigger technical challenge than I know how to handle at the moment. :) (@jaredbeck?)
2. Log a person out when they visit another year, which seems undesirable.
3. ?
The mix of difficulty + low urgency from my perspective has always made me leave this on the pile.
soscripted/sox
331611752
Title: Why is there always a delay of around 3-4 seconds in loading SOX even after the webpage has fully loaded Question: username_0: ### Installed Version: 2.1.13DEV
Environment: Tampermonkey
I am on a slow internet connection, if that matters, but I've been using more than a dozen SE userscripts for the past several months, and SOX is the first to be delayed for so long even **after** the page has fully loaded. Why is it like this? What can be done to reduce the lag time?
Answers: username_1: Honestly, quite a few of the features haven't been optimised and are quite dirty. I tried a while ago to do stuff like common event listeners for the features so each feature didn't create its own listener to do stuff etc. but it didn't do *that* much. Hopefully, one day, I'll go through and try to clean some features up, but I just don't have enough time atm :/
username_0: Hmm, no, I wasn't talking about some of the individual features, I was talking about the overall SOX script. The gear icon in the topbar appears at least 3-4 seconds **after** the page load. I was just worried because, for example, if I've hidden the community bulletin, then I get to see it the first time the page loads, and then it suddenly disappears (after SOX loads), which is a kinda jarring experience. It's okay though, the rest of the SOX features are excellent, so it's not really a big deal, but do look at it once you have some free time :)
username_0: I just also saw this error in my console:
Tampermonkey: couldn't load @require from URL sox.enhanced_editor.js
tms_074b7a5c_3e4d_4844_995b_51e03ce12a9c @ userscript.html?id=074b7a5c-3e4d-4844-995b-51e03ce12a9c:4090
username_1: @username_0 I'll have a look at it soon, making some more global listeners for all features might help. Also, the error in the console is from a deprecated feature; I must have left the require in, so I'll get rid of that too.
username_0: Seems like that's the issue. Among all the userscripts I currently have, SOX is the only one that has tons of `require` statements in it. I just noticed that because Tampermonkey gives SOX a unique cloud icon on the dashboard. Digging in, SOX seems to require:
```
https://code.jquery.com/jquery-2.1.4.min.js
https://code.jquery.com/ui/1.11.4/jquery-ui.min.js
https://cdn.rawgit.com/timdown/rangyinputs/master/rangyinputs-jquery-src.js
https://cdn.rawgit.com/jeresig/jquery.hotkeys/master/jquery.hotkeys.js
https://cdnjs.cloudflare.com/ajax/libs/jquery-timeago/1.5.3/jquery.timeago.min.js
sox.github.js
sox.dialog.js
sox.features.js
sox.enhanced_editor.js
sox.css
sox.dialog.html
sox.features.info.json
sox.common.info.json
```
I don't understand why:
- the first jQuery import is necessary. SE already uses jQuery, so it needn't be imported.
- there are so many `sox.*` files. Why are they being imported (and that too separately) instead of being shipped as a part of the single userscript code? I would admit that such a separated file structure is better suited for browser addons, not for userscripts.

Also, I am not sure about those `ui.js`, `timeago`, `hotkeys`, and `rangyinput` files. I doubt they are even necessary (SOX is the first text manipulation userscript out of all those out there that uses rangyInput o.O), but I will check them out while refactoring the code later.
username_1: @username_0 If I remember correctly, the jQuery import is there to make it work on FF, because FF didn't used to like using the jQuery from the website itself.
There are so many files because it makes it *so much more* easy to maintain and develop.
It's probably not the reason it delays the script, because (at least in TM) they are cached once they are first downloaded, so they shouldn't really need to be fetched again.
About the other ones, I don't remember if they're being used anymore; I'll have a look.
username_1: I had a look, and those files are being used, albeit only in a few features; they could probably be rewritten to not need them at some point, but I won't be able to do it anytime soon.
username_0: Could you please list those features out? I might be able to help rewrite them to not require those dependencies.
username_1: @username_0 sure! It'd be great if you had time to!
Starting with the easiest: probably `kbdAndBullets`, which uses `hotkeys`:

    $('[id^="wmd-input"]').bind('keydown', 'alt+l', function() {

and

    $('[id^="wmd-input"]').bind('keydown', 'alt+k', function() {

just replacing them with a `keydown` handler on `document` and checking the keycodes should do it (see the sketch at the end of this thread)!
---
`timeago` is used in `quickAuthorInfo`; there is a class `timeago` and it's triggered with `$("time.timeago").timeago();`. Not sure how to do this other than polling every few seconds/minutes?
---
`rangyinputs` is also used in `kbdAndBullets`, which uses `surroundSelectedText`, `insertText`, and `replaceText`. I remember looking into this a while back and thought it's a pain to implement so it works with both Chrome and FF, which is why I used rangyinputs to make it quicker 😆
username_0: I looked into `timeago`, and I am pretty sure I can't [rewrite it](https://github.com/rmm5t/jquery-timeago/blob/master/jquery.timeago.js) into something much more minimal. I could probably eliminate `rangyInputs` though. Could you please confirm it's only used in `kbdAndBullets` and nowhere else?
username_1: @username_0 yes, it's only in `kbdAndBullets`! :)
username_0: Great, now we're left with only jQuery, jQuery UI, and timeago, each of which I find difficult to reduce atm. Might revisit this later, but closing for now. I'll see if the delays arise again.
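For reference, a minimal sketch of the `hotkeys` replacement username_1 describes above: one plain `keydown` listener on `document` instead of the two `jquery.hotkeys` bindings. `addKbd` and `addList` are hypothetical stand-ins for whatever kbdAndBullets actually does on those shortcuts:

```javascript
// Sketch only: one listener replaces both jquery.hotkeys bindings.
// addKbd/addList are hypothetical stand-ins for the real kbdAndBullets actions.
document.addEventListener('keydown', function (e) {
  // Only react inside the markdown editor textareas (id starts with "wmd-input").
  if (!e.target.id || e.target.id.indexOf('wmd-input') !== 0) return;

  if (e.altKey && e.code === 'KeyK') {        // alt+k
    e.preventDefault();
    addKbd(e.target);
  } else if (e.altKey && e.code === 'KeyL') { // alt+l
    e.preventDefault();
    addList(e.target);
  }
});
```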
ReactiveX/RxPY
68742357
Title: Event loops using subjects -- duplicate events Question: username_0: I am trying to use `Subject` to create a loop (connecting the end of a stream to the beginning). I have a real use-case for this, but I wrote a contrived script to isolate my problem. I'm confused, because it seems like there are duplicate events. (Note, I'm running Python 2.7, and I have `tornado` installed.) Here's the code without the loop connected: ```python # coding=utf-8 """ I do not understand why this behaves the way it does. """ from __future__ import print_function import re import time from rx import Observable from rx.concurrency import IOLoopScheduler from rx.subjects import Subject from tornado import ioloop scheduler = IOLoopScheduler() seed_list = ['a', 'b', '1', '2', 'c', ] def check_no_numbers(s): """Filters out strings w/ number characters """ if not re.search(r'\d', s): print("+", s) return True else: print("x", s) return False def dot(s): return s + '.' #+ str(uuid4()) if __name__ == "__main__": xs = Observable.from_(seed_list, scheduler=IOLoopScheduler()) subject = Subject() def push_new(x): print('pushing: {!r}'.format(x)) time.sleep(0.1) subject.on_next(x) xs.subscribe(on_next=push_new) filtered = subject.filter(check_no_numbers).filter(lambda s: len(s) <= 3) results = filtered.map(lambda x: "==> {}".format(x)) news = filtered.map(dot) # news.subscribe(on_next=push_new) # DON'T loop def out(result): print(result) [Truncated] ``` + a + a + a. + a. + a.. + a.. + a... + a... # snipped pushing: 'a' pushing: 'a.' pushing: 'a..' pushing: 'a...' # snipped ``` (You can also uncomment the snippet in the `dot()` function, that calls `uuid4()`, to tag the strings and see the problem again.) Is this a bug? Answers: username_1: I'm not sure I understand the code fully, but I don't think it's a bug. Remember that you have 2 subscriptions going through the `check_no_numbers()` function, so every event will be filtered twice (once for each subscription). Each subscription is it's own functional pipe. But that shouldn't matter if you don't have any side-effects (and you should not have side-effects in `filter`). But your print statements in `check_no_numbers` have side-effects, and that's why you are seing things being printed twice. If this is a problem then you can use `.publish().ref_count()` to force a single subscription through `filter` which will multiplex for `results` and `news` using an internal subject inside `.publish(). Something like: ```python filtered = subject.filter(check_no_numbers).filter(lambda s: len(s) <= 3).publish().ref_count() ``` Does this fix your problem? username_0: Aha! `publish()` does the trick, seems to work right with `ref_count()` or with `connect()`. I know the side-effects are bad, but my goal was to trace the execution. Here's the code with your `publish().ref_count()` suggestion applied: ```python from __future__ import print_function import re import time from rx import Observable from rx.concurrency import IOLoopScheduler from rx.subjects import Subject from tornado import ioloop scheduler = IOLoopScheduler() seed_list = ['a', 'b', '1', '2', 'c', ] def check_no_numbers(s): """Filters out strings w/ number characters """ if not re.search(r'\d', s): return True else: return False def debug_print(s): print("~", s) return s def dot(s): return s + '.' 
if __name__ == "__main__": xs = Observable.from_(seed_list, scheduler=IOLoopScheduler()) subject = Subject() def push_new(x): print('pushing: {!r}'.format(x)) time.sleep(0.1) subject.on_next(x) xs.subscribe(on_next=push_new) filtered = subject.filter(check_no_numbers).map(debug_print).filter(lambda s: len(s) <= 3) \ .publish().ref_count() results = filtered.map(lambda x: "==> {}".format(x)) news = filtered.map(dot) news.subscribe(on_next=push_new) # loop def out(result): print(result) results.subscribe(on_next=out) ioloop.IOLoop.current().start() ``` [Truncated] ~ b... ==> b.. ==> b. ==> b pushing: '1' pushing: '2' pushing: 'c' ~ c pushing: 'c.' ~ c. pushing: 'c..' ~ c.. pushing: 'c...' ~ c... ==> c.. ==> c. ==> c ``` Thanks for your help. I need to chew on this a little more to make sure I understand everything, but I can see that this is not a bug! Status: Issue closed username_0: Hmm. Do you think it'd be good to have some (per-scheduler) test cases for things like these maybe? It seems like a lot of the test-cases are simple. That's good, but it might be good to capture some complex scenarios too.
udacity/sdc-issue-reports
207337739
Title: Suggest to add more AWS information in lesson
Question: username_0: Dear Sir or Madam,
I enrolled in the Self Driving Nanodegree on Udacity. I suggest adding more information about AWS for Project 2 in Term 1. Since I use a Windows system and I'm new to AWS, I spent two days searching for how to upload a dataset to AWS and trying different options; until then, I couldn't train my model. So I suggest adding that information to the lesson, which would benefit future students. Thanks. Have a good day!
<NAME>
Answers: username_1: Thanks for your feedback, we'll look into it!
Status: Issue closed
Kormil/harbour-powietrze
600568712
Title: Powietrze crashes when choosing Warszawa Wokalna
Question: username_0: ```
```
Answers: username_1: I think the cause was some empty data coming from the provider. Powietrze.gios.gov.pl fixed this on their side, but I added a guard for it too.
username_0: Yes, it's working from the powietrze side now. Thanks for fixing this on your side as well.
Status: Issue closed
Google-IO-Extended-Grand-Rapids/conference_android
58487182
Title: Build should determine the URL Question: username_0: Part of the Build system is to allow for build specific properties. Right now the URL for the API is hardcoded into the application. Instead, it should be pulled from the BuildConfig.API_BASE_URL. Use that instead of the hardcoded value. Complexity = 1 (very easy) Estimated effort = 15 min. Expected completion once taken: 2 days Answers: username_1: https://github.com/Google-IO-Extended-Grand-Rapids/conference_android/pull/25 Status: Issue closed username_0: Great job!!! @username_1
WarEmu/WarBugs
672215052
Title: Fortress reservation acting up
Question: username_0: 1/ Fight in Chaos Wastes for a while
2/ Get a Fortress reservation
3/ Switch warbands
4/ Try to get into the fortress 1-2 mins after the reservation
5/ Get pushed back
Answers: username_0: https://i.imgur.com/QiKuEeT.png
username_1: Did the timer run out?
username_0: Yup, for sure; I entered the fortress as I was switching warbands.
username_1: If the timer runs out you will lose your reservation.
Status: Issue closed
username_0: The timer hadn't run out...
godotengine/godot
970975152
Title: CSG nodes are not exported correctly with scenes as GLTF (3.4b3) Question: username_0: ### Godot version 3.4 beta3 ### System information Windows 10 ### Issue description CSG nodes are not exported correctly when exporting a scene as GLTF, these are read as empty axes by Blender and as simple Spatial nodes by Godot. CSG nodes seem to be correctly exported on master (4-dev), except for materials. The original scene tree ![original tree](https://user-images.githubusercontent.com/10215987/129457247-9eee5178-a167-4a56-873b-63c93dc21686.png) Scene view ![original scene](https://user-images.githubusercontent.com/10215987/129457223-eb454f2f-c7da-4e67-b2d3-a8b29a8c5c9d.png) Exported tree (inherited) ![Exported tree](https://user-images.githubusercontent.com/10215987/129457259-49fdd423-372d-479c-803b-9ad347c41a51.png) Scene view of the exported scene in Blender (windows vewer can't open it). ![Opened with blender](https://user-images.githubusercontent.com/10215987/129457324-d1d1f834-3907-4665-aa29-afdf43af33ed.png) ### Steps to reproduce Create CSG nodes in any configuration and amount, then export the scene with the GLTF exporter. ### Minimal reproduction project Simple scene, includes an exported model. [csgexportbug.zip](https://github.com/godotengine/godot/files/6987038/csgexportbug.zip) Answers: username_1: Verified still broken on https://github.com/godotengine/godot/commit/f28199f403df45b1e1276f39afe2d415170adcf5 ![image](https://user-images.githubusercontent.com/32321/137580066-9e50f409-966a-4fb3-bb42-a5d33667aafd.png) username_2: Hey, found a culprit in modules/gltf/gltf_document.cpp the define macro MODULE_CSG_ENABLED is not defined (grid maps seems to also be broken the same way). As a workaround: Adding define MODULE_CSG_ENABLED to gltf_document.h fixed the issue, but that's not a proper way to do it. username_3: Fixed by #54911. Status: Issue closed username_0: Looks like it exports the mesh correctly now, CSG object's materials are missing (material slots/surfaces are correct, though) but need to test it on 4 again to see if it does the same there . username_4: Just a heads up, I've just downloaded the latest Stable: v3.4.stable.official [206ba70f4], but unfortunately the export for _gltf/glb_ for CSG objects just comes up as Plain Axes: ![image](https://user-images.githubusercontent.com/5027750/142135130-8526edbb-915e-408e-8b30-d078316eaf6c.png) username_3: @username_4 This was fixed 5 days ago, which is *later* than the release of Godot 3.4-stable, so that's normal. See milestones and comments on https://github.com/godotengine/godot/pull/54911, the fix is in 3.5 and 3.4.1. username_4: Got it, thanks @username_3!
electron/electron
469648013
Title: typings for MenuItemConstructorOptions.roles are missing possible values Question: username_0: <!-- As an open source project with a dedicated but small maintainer team, it can sometimes take a long time for issues to be addressed so please be patient and we will get back to you as soon as we can. --> ### Preflight Checklist <!-- Please ensure you've completed the following steps by replacing [ ] with [x]--> * [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project. * [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to. * [x] I have searched the issue tracker for an issue that matches the one I want to file, without success. ### Issue Details * **Electron Version:** 5.0.7 * **Operating System:** macOS ### Expected Behavior No typescript error when using for example `recentDocuments`-role for MenuItem ### Actual Behavior Typescript error when using for example `recentDocuments`-role for MenuItem ### To Reproduce Try to use `recentDocuments`-role on a MenuItem ### Additional Information In the [menu-item-md](https://github.com/electron/electron/blob/master/docs/api/menu-item.md#roles) there are the possible roles only for macOS missing. I can still use them if I write `role: 'recentDocuments' as any`. I don't think this is how it should be.<issue_closed> Status: Issue closed
uchicago-bio/2016-Autumn-Forum
180403353
Title: HW #2 Prob. 2-- clarification about output Question: username_0: For problem 2, we're asked to read, validate, and output the FASTA sequences in the following format: - description | sequence - description | sequence ... Does 'description' refer to the whole line including the identifier, or just the description part after the identifier? For example: Option 1 (whole line including ID): gi|117320674|gb|AC188433.3| Pan troglodytes BAC clone CH251-677J9 from chromosome 7, complete sequence versus... Option 2: Pan troglodytes BAC clone CH251-677J9 from chromosome 7, complete sequence Thanks for the clarification! Answers: username_1: Yes, use the entire header line for the description. Status: Issue closed
kubeedge/kubeedge
457332762
Title: using “fieldPath: spec.nodeName” Error. Question: username_0: **What happened**: When I apply daemonset in kubeedge ,I used the "valueFrom: fieldRef: fieldPath: spec.nodeName" to set node name as env value. But I fand the pod's status is false, and got the ERROR log:"F0617 11:47:36.114866 1 main.go:65] MY_NODE_NAME environment variable not set". I check the env value in the container —— is empty, But I check the pod's info as "kubectl get pod <podname> -o yaml |grep fieldPath " is natural. **What you expected to happen**: I need to set each pod's environment variable as the node's host name at deployment time **How to reproduce it (as minimally and precisely as possible)**: ''' - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName ''' **Environment**: k8s master(1.11.6) - KubeEdge version: .0.0.2 - Hardware configuration: arm64 - OS (e.g. from /etc/os-release): ubuntu16.04 - Kernel (e.g. `uname -a`): master :4.4.0-141-generic - Others: Answers: username_1: @username_0 , currently edged doesn't support environment variables. @gpinaik , can you please take a look ? username_2: @username_0 , Environment variables support has been merged in the master branch. Please let us know if you still face the issue. Closing this issue for now Status: Issue closed
getgrav/grav-plugin-admin
630066591
Title: Josefin Slab? Question: username_0: In `grav-plugin-admin/themes/grav/scss/fonts.scss` ``` $font-definitions: ( Josefin+Slab: 400, Roboto: '300,400,500', Inconsolata: '400,700' ); ``` Should it be `'Josefin Slab': 400,` ? Otherwise it won't pass the if clause in `@function admin-font-faces` in `grav-plugin-admin/themes/grav/scss/configuration/fonts/_support.scss` Answers: username_1: probably.. will change Status: Issue closed
woocommerce/woocommerce
508790338
Title: Shipping charge based on product ID Question: username_0: **Is your feature request related to a problem? Please describe.** Different products have different shipping charges. It would be good to have a more advanced method inside shipping to create additional adjustments. **Describe the solution you'd like** I would like a simple way to attach a shipping charge to each product (ID). That can mean selecting shipping (frakt) in each product. (Norwegian language) ![Screen Shot 2019-10-17 at 20 34 17](https://user-images.githubusercontent.com/5323259/67057198-85a46580-f11d-11e9-9896-e7df6ddfe20f.png) **Describe alternatives you've considered** Searching for a code snippet to use for product id in cart. I have also been looking into using one of my tutorials here: https://easywebdesigntutorials.com/adjust-shipping-price-when-quantity-changes/ Adjust shipping fee based on total amount in cart or adjusting shipping fee based on quantity in cart. **Additional context** 1. Adding additional shipping options being able to create a more unique experience. 2. Or adding on to the above tutorial with a way to by product select the shipping charge. Answers: username_1: Per that tutorial sounds like better to just create a new shipping method extending flat rate class to attend your use case. username_0: I am thinking there is a need for some brain storming as to various methods on how to improve the shipping area of WooCommerce today. This could mean having conditional logic using bigger, smaller or equal to. Selecting product ID, amount of products, weight etc. Creating unique shipping options. Example of what is needed. 1 product has a specific shipping price. Purchasing 2 or 3 will have another shipping price. 4 or more will have a default price. I addressed this in the tutorial. I also addressed cart total amount in the cart that gives a specific shipping price. WooCommerce -> Shipping -> Zone. That would mean Shipping methods could do for a more extended way to handle shipping that gives the additional options. Today one clicks "Add shipping method" which is very limited. This could be extended to give additional options. Such as the conditional logic flat rate option. username_2: Did you already have a look to the https://codecanyon.net/item/woocommerce-advanced-shipping/8634573 plugin? username_1: When, there's several plugins and integrations to achieve what you want. Also you can extend the shipping class and create your own shipping method, so you can create the specific rules for your business. I'll close for votes. username_0: Hey @username_1 and @username_2 Bottom line is that we have today the following shortcode one can use in the cart: if ( WC()->cart->get_cart_contents_count() < $threshold ) { if ( WC()->cart-> cart_contents_total < $threshold1 ) { Based on the above one can adjust shipping based on cart count and cart total. I am looking for a product ID so that it would be something like the following fake shortcode I use to express what I am looking for. if ( WC()->cart-> cart_product_id < $threshold1 ) { Does there exist an cart product id today? If one can this be added so that it gives another option on how to adjust the cart shipping fee?
jsk-ros-pkg/jsk_control
55107989
Title: footcoordsの接地判定で力センサ値を座標変換する Question: username_0: @username_1 https://github.com/jsk-ros-pkg/jsk_control/blob/master/jsk_footstep_controller/src/footcoords.cpp の接地判定で,/lfsensor, /rfsensor をそのまま読んでいますが, 力センサ座標系に変換する必要がありそうです. HRP2とSTAROで力センサの座標系が異なっていました. (`rostopic echo /lfsensor`で,HRP2は接地時にZ成分が正の値,STAROは接地時にZ成分が負の値になる.) Rviz上の力センサを表す矢印は,どちらのロボットも正しく出ているのでtfで変換すれば正しくなると思います. Answers: username_1: STAROとHRP2の力センサ座標系ってなんだろう? frame教えてもらえる? おそらく、STAROは `hoge->fuga->wrench`みたいになんか一個挟まってて、その一つ上が知りたいです. たぶんhogeとfugaが反対を向いているんじゃないかな username_1: STAROの力センサ、回転だけじゃなくて位置も変えてるのか,,, 面倒ですねー username_0: STAROの足力センサまわりのtfです. HRP2と同じに見えます,というので合っていますでしょうか. ![image](https://cloud.githubusercontent.com/assets/6636600/5849965/4142e524-a230-11e4-9839-f7ea8730caec.png) username_1: https://github.com/jsk-ros-pkg/jsk_control/issues/202 これで対応しました。 とりあえずlleg_end_coords/rleg_end_coordsと同じ座標系で値を見るようにしましたが、`lfoot_sensor_frame`, `rfoot_sensor_frame`を適切に指定してください username_1: なるほど、このプルリクエストで動くようになると思います username_2: ちなみに接地判定はどういう処理をしていますか? hrpsys-baseの方でも似たような値を計算してますが、取得できるとよかったりしますでしょうか? username_1: lowpass filterかけて、閾値処理ですね。 取得できると良いですね、という話は以前も出てましたが。 username_1: ついでに、ですがhrpsysではエンコーダからルートリンクの速度とかは計算していますか? それも欲しいですね username_0: ありがとうございます. username_1: マージしたので確認してみてください username_0: ``` pnh.param("lfoot_sensor_frame", lfoot_sensor_frame_, std::string("lleg_end_coords")); pnh.param("rfoot_sensor_frame", rfoot_sensor_frame_, std::string("rleg_end_coords")); ``` だと以下のエラーでした. ``` terminate called after throwing an instance of 'tf2::LookupException' what(): "lleg_end_coords" passed to lookupTransform argument source_frame does not exist. [footcoords-28] process has died [pid 15889, exit code -6, cmd /home/leus/ros/hydro/devel/lib/jsk_fo otstep_controller/footcoords __name:=fo ``` ``` pnh.param("lfoot_sensor_frame", lfoot_sensor_frame_, std::string("lfsensor")); pnh.param("rfoot_sensor_frame", rfoot_sensor_frame_, std::string("rfsensor")); ``` では,エラーが出ませんでしたが,座標系が違うままでした. username_1: ふむ、一回目は失敗するのかなぁ、対応するのでちょい待ちです username_0: したら,上がって座標も合っていて解決しました. start_ros_bridgeでいろいろ一緒に上がるなかで, lleg_end_coords のtfがpublishされる前に,footcoordsが始まってしまうのが問題みたいですね. username_0: したら のときも, 5回に3回くらいは, ``` terminate called after throwing an instance of 'tf2::LookupException' what(): "lleg_end_coords" passed to lookupTransform argument source_frame does not exist. Aborted (core dumped) ``` で落ちてしまいます. 5回に2回の上がるときは, ``` [ERROR] [1421902329.922831239]: transform error: Lookup would require extrapolation into the future. Requested time 1421902329.922467916 but the latest data is at time 1421902329.877795528, when looking up transform from frame [lleg_end_coords] to frame [lfsensor] [ERROR] [1421902329.922878461]: failed to resolve tf of force sensor ``` というエラーが5秒くらい出てその後は出なくなって,正しい接地状態が得られます. username_1: 了解です、五分待ってください ✉︎ Ryohei username_1: https://github.com/jsk-ros-pkg/jsk_control/pull/203 これでどうだ? username_1: マージしたので試してみてください。今度こそうまくいくはず username_0: 動きました.ありがとうございました. Status: Issue closed
flood-io/cli
298520695
Title: Fix: Screenshots not generated in CLI Question: username_0: Screenshots work when running a proper flood, but do not work when running through the CLI. They would make it a lot easier to debug while scripting without having to run full-blown tests. Status: Issue closed Answers: username_1: Duplicate of #8
stvmlbrn/node-express-boilerplate
239454031
Title: Log to mongo Question: username_0: If you ever want to log to mongo instead of a file... logger-mongo.js ```javascript const winston = require('winston'); const winstonMongo = require('winston-mongodb').MongoDB; winston.emitErrs = true; const logger = new winston.Logger({ transports: [ new (winston.transports.MongoDB)({ db: 'mongodb://localhost:27017/databasename', collection: 'logs' }) ], exitOnError: false, }); logger.stream = { write(message) { logger.info(message); }, }; module.exports = logger; ```
onnx/onnx
894302877
Title: merge two submodels Question: username_0: # Merge two submodels into one model ### Question in `onnx` we have `onnx.utils.extract_model` to cut model on two different submodels by `inputs` and `outputs` of nodes. Can we merge this two submodels in single model by some method? ### Concat solution with `loop` I think we can solve this by iterating though `inputs`, `nodes`, `outputs` for two models and after this just concat them all to one model, but maybe `onnx` have some interesting way to do it? Answers: username_1: Hi @username_0, As you said, official ONNX does not provide a utility function to merge two submodels, although I did see there are some ways to do it (e.g., https://github.com/scailable/sclblonnx). If you are interested, welcome to contribute :) Status: Issue closed
Pradyuman7/ChocoBar
1020438848
Title: Improve documentation Question: username_0: Currently, the documentation is included in the main ReadMe. Please separate it from there and create a new `Documentation.md` for it - Add documentation for the missing Chocobars (newly added) - Make the documentation look nice and good - Add the documentation file link to main ReadMe Status: Issue closed Answers: username_0: Nice work @JamesEllerbee
backstage/backstage
1087694862
Title: Disallow deletion of non-orphaned entities Question: username_0: ## Feature Suggestion The catalog currently allows deletion of non-orphaned entities, an orphaned entity being one that is not referenced by any other entity and is usually safe to delete. Deleting a non-orphaned entity is really only valid in the case where an entity somehow gets stuck in some way, so it's basically a way to work around bugs in the catalog. The trouble with allowing deletions of non-orphaned entities is that due to caching in the catalog processing, they will remain deleted until something invalidates the cache, which can be tricky to figure out how to do and is not a self-healing problem. We're thinking that allowing deletion of non-orphaned entities is currently worse than not allowing it, as it's causing quite a lot of confusion. It's arguably more correct to not allow deletion, and if there are any cases where entities get stuck, we can focus efforts towards resolving those bugs instead. ## Possible Implementation We're happy to have people from the community chip in on this one, so let us know if you want to take a stab at it and we can provide additional pointers as needed The main change will be to update [`removeEntityByUid`](https://github.com/backstage/backstage/blob/3313e03905c92a4d184a1a91a12093b1fc4d85c1/plugins/catalog-backend/src/service/NextEntitiesCatalog.ts#L206) to reject the deletion if an entity isn't orphaned. We won't want to rely on the orphan annotation at that point though, as that's not the source of truth. We'll instead want to check the incoming references, just like the [Stitcher does](https://github.com/backstage/backstage/blob/3313e03905c92a4d184a1a91a12093b1fc4d85c1/plugins/catalog-backend/src/stitching/Stitcher.ts#L156) when it populates the orphan annotation. With that in place we'll likely hit some issues though, as we now start rejecting deletions that aren't safe. This is particularly likely to break the location unregistration, which assumes that it is safe to [optimistically delete](https://github.com/backstage/backstage/blob/3313e03905c92a4d184a1a91a12093b1fc4d85c1/plugins/catalog-react/src/components/UnregisterEntityDialog/useUnregisterEntityDialogState.ts#L105) all entities referenced by the location. Most likely the best way to solve this is to move the eager deletion to the backend instead, but to be honest it might already be the case that this happens already and just needs a bit of verification :grin: ## Context Plenty of reports through Discord of deleted entities not reappearing Requests such as #8419 Automated cleanup is another way to go about deletions, but we'll likely want a bit of both #7860 Answers: username_1: Ah, so that's why! I've have many issues with this before and never really understood why that happens. How does one invalidates the cache? Just so I know what to do next time this happens. username_0: @username_1 anything that fails [this](https://github.com/backstage/backstage/blob/3313e03905c92a4d184a1a91a12093b1fc4d85c1/plugins/catalog-backend/src/processing/DefaultCatalogProcessingEngine.ts#L141) check in the processing engine for the parent location. The `deferredEntities` entities is that typically the json object of each of the documents in `catalog-info.yaml`, so changing the contents of the yaml file should invalidate the cache. 
If you want to poke at the DB then clearing the [result hash](https://github.com/backstage/backstage/blob/3313e03905c92a4d184a1a91a12093b1fc4d85c1/plugins/catalog-backend/src/database/DefaultProcessingDatabase.ts#L374) of the parent location would be a way to go about it.
drone/docs
133804925
Title: Questions setting up github Question: username_0: I'm going over [this page on setting up github](https://github.com/drone/docs/blob/master/content/setup/remotes/github.md), and have some questions. I managed to figure out that to get a `client_id` and `client_secret`, I need to [register](https://github.com/settings/applications/new) a new Oauth application on github. However, I'm looking at this screen: <img width="740" alt="screen shot 2016-02-15 at 12 06 18 pm" src="https://cloud.githubusercontent.com/assets/296876/13059075/8c81dff2-d3dc-11e5-9d42-881fee1d2565.png"> And not sure what to fill out for the **Authorization callback URL**. (or for that matter, the **Homepage URL**). At the moment I'm trying to setup a self-hosted drone on an anonymous EC2 instance under docker. Any help would be appreciated and I'd be happy to follow up with a PR on the docs. Status: Issue closed Answers: username_1: Check the end of the release/0.5 github reference for details: https://github.com/drone/docs/blob/release/0.5/content/installation/services/github.md Homepage URL is kind of an arbitrary thing that is shown in the Github user's list of authorized apps. It should point to the root URI for your Drone install.
rcdexta/react-event-timeline
383169714
Title: Collapsible not working Question: username_0: Want to use the neat collapse feature which I saw is pretty new, however it does not work as it seems the property is not recognized. Copying the code from your storybook: https://username_1.me/react-event-timeline/?selectedKind=Timeline&selectedStory=TimelineEvent%20with%20collapsible%20content&full=0&down=0&left=1&panelRight=0 gives me error: Warning: Received `true` for a non-boolean attribute `collapsible`. If you want to write it to the DOM, pass a string instead: collapsible="true" or collapsible={value.toString()}. Using version 1.5.4. I noticed that the merge of that feature to master gave a compile error so that's probably why? Answers: username_1: @username_0 Thanks for pointing this out. This is fixed in the latest release v1.6.0 Status: Issue closed
julia-vscode/julia-vscode
646340933
Title: MethodError occur running under viscode-REPL but not under normal REPL Question: username_0: I want to reran the example given in the Julia manual (V1.4.1, PP 190-191) ```` struct SparseArray{T,N} <: AbstractArray{T,N} data::Dict{NTuple{N,Int}, T} dims::NTuple{N,Int} end SparseArray(::Type{T}, dims::Int...) where {T} = SparseArray(T, dims); SparseArray(::Type{T}, dims::NTuple{N,Int}) where {T,N} = SparseArray{T,N}(Dict{NTuple{N,Int}, T}(), dims); Base.size(A::SparseArray) = A.dims Base.similar(A::SparseArray, ::Type{T}, dims::Dims) where {T} = SparseArray(T, dims) Base.getindex(A::SparseArray{T,N}, I::Vararg{Int,N}) where {T,N} = get(A.data, I, zero(T)) Base.setindex!(A::SparseArray{T,N}, v, I::Vararg{Int,N}) where {T,N} = (A.data[I] = v) A = SparseArray(Float64, 3, 3) ```` Everything works fine when the code is ran under a normal REPL, but when it is ran under the vscode-REPL via `view --> Command Palette --> Julia: start REPL`, I get the following error: ```` julia> A = SparseArray(Float64, 3, 3) 3×3 SparseArray{Float64,2}: 0.0Internal Error: ERROR: MethodError: no method matching size(::SparseArray{Float64,2}) The applicable method may be too new: running in world age 27143, while current world is 27151. Closest candidates are: size(::SparseArray) at REPL[4]:1 (method too new to be called from this world context.) size(::AbstractArray{T,N}, ::Any) where {T, N} at abstractarray.jl:38 size(::BitArray{1}) at bitarray.jl:99 ... Stacktrace:0.0 [1] treerender(0.0::SparseArray{ Float64 ,20.0}) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:1250.0 [2] treerender(0.0::VSCodeServer. SubTree ) at 0.0/Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:54 [3] repl_getvariables_request0.0(:: VSCodeServer.JSONRPC.0.0JSONRPCEndpoint, :: Nothing julia> [4] dispatch_msg(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::VSCodeServer.JSONRPC.MsgDispatcher, ::Dict{String,Any}) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/JSONRPC/src/typed.jl:63 [5] macro expansion at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/VSCodeServer.jl:83 [inlined] [6] (::VSCodeServer.var"#53#55"{Bool,String})() at ./task.jl:358 Internal Error: ERROR: MethodError: no method matching size(::SparseArray{Float64,2}) The applicable method may be too new: running in world age 27143, while current world is 27151. Closest candidates are: size(::SparseArray) at REPL[4]:1 (method too new to be called from this world context.) size(::AbstractArray{T,N}, ::Any) where {T, N} at abstractarray.jl:38 size(::BitArray{1}) at bitarray.jl:99 ... 
Stacktrace: [1] treerender(::SparseArray{Float64,2}) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:125 [2] treerender(::VSCodeServer.SubTree) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:54 [3] repl_getvariables_request(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::Nothing) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/request_handlers.jl:88 [4] dispatch_msg(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::VSCodeServer.JSONRPC.MsgDispatcher, ::Dict{String,Any}) at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/JSONRPC/src/typed.jl:63 [5] macro expansion at /Users/lzhan/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/VSCodeServer.jl:83 [inlined] [6] (::VSCodeServer.var"#53#55"{Bool,String})() at ./task.jl:358 ```` Answers: username_1: Will be fixed with https://github.com/julia-vscode/julia-vscode/pull/1388. username_2: I had recently similar issue with smaller MWE but it was working fine couple of days ago. ```julia julia> using ForwardDiff julia> x = ForwardDiff.Dual(0, 0, 0, 0, 0, 0) Dual{Nothing}(0,0,0,0,0,0) julia> Internal Error: ERROR: MethodError: no method matching size(::ForwardDiff.Partials{5,Int64}) The applicable method may be too new: running in world age 27138, while current world is 27147. Closest candidates are: size(::ForwardDiff.Partials{N,V} where V) where N at /home/lutfullah/Programs/homedir/.julia/packages/ForwardDiff/cXTw0/src/partials.jl:21 (method too new to be called from this world context.) size(::AbstractArray{T,N}, ::Any) where {T, N} at abstractarray.jl:38 size(::BitArray{1}) at bitarray.jl:99 ... 
Stacktrace: [1] length at ./abstractarray.jl:206 [inlined] [2] isempty(::ForwardDiff.Partials{5,Int64}) at ./abstractarray.jl:916 [3] typeinfo_prefix(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}) at ./arrayshow.jl:506 [4] show_vector(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}, ::Char, ::Char) at ./arrayshow.jl:447 (repeats 2 times) [5] show(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}) at ./arrayshow.jl:420 [6] _show_default(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Any) at ./show.jl:394 [7] show_default at ./show.jl:377 [inlined] [8] show(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Any) at ./show.jl:374 [9] sprint(::Function, ::ForwardDiff.Dual{Nothing,Int64,5}; context::Nothing, sizehint::Int64) at ./strings/io.jl:105 [10] #repr#339 at ./strings/io.jl:227 [inlined] [11] repr at ./strings/io.jl:227 [inlined] [12] treerender(::ForwardDiff.Dual{Nothing,Int64,5}) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:134 [13] treerender(::VSCodeServer.SubTree) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:54 [14] repl_getvariables_request(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::Nothing) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/request_handlers.jl:88 [15] dispatch_msg(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::VSCodeServer.JSONRPC.MsgDispatcher, ::Dict{String,Any}) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/JSONRPC/src/typed.jl:63 [16] macro expansion at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/VSCodeServer.jl:83 [inlined] [17] (::VSCodeServer.var"#53#55"{Bool,String})() at ./task.jl:358 Internal Error: ERROR: MethodError: no method matching size(::ForwardDiff.Partials{5,Int64}) The applicable method may be too new: running in world age 27138, while current world is 27147. Closest candidates are: size(::ForwardDiff.Partials{N,V} where V) where N at /home/lutfullah/Programs/homedir/.julia/packages/ForwardDiff/cXTw0/src/partials.jl:21 (method too new to be called from this world context.) size(::AbstractArray{T,N}, ::Any) where {T, N} at abstractarray.jl:38 size(::BitArray{1}) at bitarray.jl:99 ... 
Stacktrace: [1] length at ./abstractarray.jl:206 [inlined] [2] isempty(::ForwardDiff.Partials{5,Int64}) at ./abstractarray.jl:916 [3] typeinfo_prefix(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}) at ./arrayshow.jl:506 [4] show_vector(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}, ::Char, ::Char) at ./arrayshow.jl:447 (repeats 2 times) [5] show(::IOContext{Base.GenericIOBuffer{Array{UInt8,1}}}, ::ForwardDiff.Partials{5,Int64}) at ./arrayshow.jl:420 [6] _show_default(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Any) at ./show.jl:394 [7] show_default at ./show.jl:377 [inlined] [8] show(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Any) at ./show.jl:374 [9] sprint(::Function, ::ForwardDiff.Dual{Nothing,Int64,5}; context::Nothing, sizehint::Int64) at ./strings/io.jl:105 [10] #repr#339 at ./strings/io.jl:227 [inlined] [11] repr at ./strings/io.jl:227 [inlined] [12] treerender(::ForwardDiff.Dual{Nothing,Int64,5}) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:134 [13] treerender(::VSCodeServer.SubTree) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/trees.jl:54 [14] repl_getvariables_request(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::Nothing) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/request_handlers.jl:88 [15] dispatch_msg(::VSCodeServer.JSONRPC.JSONRPCEndpoint, ::VSCodeServer.JSONRPC.MsgDispatcher, ::Dict{String,Any}) at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/JSONRPC/src/typed.jl:63 [16] macro expansion at /home/lutfullah/.vscode/extensions/julialang.language-julia-0.16.8/scripts/packages/VSCodeServer/src/VSCodeServer.jl:83 [inlined] [17] (::VSCodeServer.var"#53#55"{Bool,String})() at ./task.jl:358 ``` username_1: Yeah, this is a regression. Status: Issue closed username_3: should be fixed on the master
RayTracing/raytracing.github.io
623646825
Title: [In one Weekend] cam::get_ray input parameters sudden change
Question: username_0: Looking into the camera implementation, the initial parameters were u, v and they changed to s, t without highlighting the change. I assume they were supposed to stay u, v to keep in line with the figures, and s, t was a leftover from a previous change.
![image](https://user-images.githubusercontent.com/22738317/82730305-ee307200-9cfe-11ea-89eb-bcb0f5a5f919.png)
![image](https://user-images.githubusercontent.com/22738317/82730312-f7214380-9cfe-11ea-84bb-7faa4f7c4ee1.png)
Answers: username_1: Thank you. Yes, I actually noticed (and have a correction for) this yesterday, when I went through the entire book. Lots of little mistakes to be fixed in book 1, including this one. Keeping this one open, as I haven't created separate issues for the problems I've encountered.
Status: Issue closed
username_1: Resolved in PR #625
firebase/firebase-admin-node
653500217
Title: Jest tests cannot run when firebase-admin is used.
Question: username_0: | ^
9 | });
10 |
at FirebaseAppError.FirebaseError [as constructor] (node_modules/firebase-admin/lib/utils/error.js:42:28)
at FirebaseAppError.PrefixedFirebaseError [as constructor] (node_modules/firebase-admin/lib/utils/error.js:88:28)
at new FirebaseAppError (node_modules/firebase-admin/lib/utils/error.js:123:28)
at new ServiceAccount (node_modules/firebase-admin/lib/auth/credential.js:118:19)
at new ServiceAccountCredential (node_modules/firebase-admin/lib/auth/credential.js:68:15)
at Object.cert (node_modules/firebase-admin/lib/firebase-namespace.js:219:58)
at Object.<anonymous> (src/libs/firebaseAdmin.ts:7:32)
```
`APP.FIREBASE_ADMIN_KEY` gets the firebase key via the `process.env.FIREBASE_ADMIN` environment variable, which in turn holds the path to the admin key.
#### Relevant Code:
```typescript
import * as admin from 'firebase-admin';

admin.initializeApp({
  credential: admin.credential.cert(APP.FIREBASE_ADMIN_KEY),
});
```
When I start the application, there is no error with firebase and it works fine.
Answers: username_0: I got it working with the following code inside the test:
```typescript
import * as admin from 'firebase-admin';

jest.mock('firebase-admin');
```
Status: Issue closed
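A slightly fuller variant of that workaround uses a manual mock factory so `initializeApp` and `credential.cert` become inert during tests; the `../src/libs/firebaseAdmin` path is an assumption based on the stack trace above:

```javascript
// Stub firebase-admin before the module under test is loaded, so
// initializeApp() never tries to parse real credentials.
jest.mock('firebase-admin', () => ({
  initializeApp: jest.fn(),
  credential: { cert: jest.fn() },
}));

const admin = require('firebase-admin');

test('app module initializes firebase without real credentials', () => {
  require('../src/libs/firebaseAdmin'); // path is an assumption
  expect(admin.initializeApp).toHaveBeenCalled();
});
```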
raftario/filite
512781490
Title: URL Shortening - display the link instead of opening the shortened link in a new tab
Question: username_0: When a URL is shortened, the existing behavior is that the shortened URL is opened in a new tab, which then immediately resolves to the original URL. The purpose of a URL shortener is to shorten a URL and be able to copy it to share with other people. But here, a user has to remember the custom alias they entered, or the random alias that was generated, before hitting the submit button.
Answers: username_1: Yeah that annoyed me too. Adding the option to copy created links to the clipboard would also be nice. Will be looking into that one as soon as possible.
Status: Issue closed
abrensch/brouter
355319410
Title: elevation display problem by round trip
Question: username_0: The elevation box has a problem displaying the elevation when two different lines are too close together on bigger streets, or overlap, as when routing a round trip along the same street. OK, the elevation box is only a module, but I don't remember its name, to file the issue in the right place. Sorry.
Answers: username_1: I think this is a BRouter-web (https://github.com/nrenner/brouter-web / http://brouter.de/brouter-web/) issue. I cannot reproduce it, it seems to have been fixed, see e.g. http://brouter.de/brouter-web/#map=18/48.83556/2.35012/standard,Waymarked_Trails-Cycling&lonlats=2.348977,48.835836;2.350945,48.83649;2.348676,48.835801 cc @nrenner
username_2: As @username_1 mentioned, this issue (if it still applies) should be reported at brouter-web
Status: Issue closed
mctools/mcpl
218446855
Title: review mcxtrace docs Question: username_0: I now created the mcxtrace docs as well: https://mctools.github.io/mcpl/hooks_mcxtrace/ Mostly based on the mcstas docs. A few differences and implied questions: - I replaced mcdoc with mxdoc, hope this is correct. - Version mentioned is 1.4 (what happened to 1.3 btw.) - I could not link to MCPL_input and MCPL_output docs online, I guess they won't be available until version 1.4 of mcxtrace is actually released. - I mention in the second paragraph that "spring 2017" is when 1.4 will be released. Correct? Answers: username_1: Looks good. All of your bullet pts are correct. We decided to bump to 1.4 since it has been so long (far too long) in coming. 1.3 seemed bad luck in a sense...McXtrace 1.4 is due a couple of weeks after McStas 2.4 username_0: Thanks! Status: Issue closed
KyleAMathews/typography.js
303378387
Title: Fairy Gates Theme: Messed up text shadow in default starter project header Question: username_0: This image shows what I see just following along with the tutorial (except I tried to use the `fairy-gates` theme instead of just `lawton` and `bootstrap` like the tutorial suggests). I don't see any issues with `lawton` or `bootstrap`. <img width="841" alt="screen shot 2018-03-07 at 11 03 25 pm" src="https://user-images.githubusercontent.com/9665562/37137853-80347360-225c-11e8-9ef8-e33ef2c8477b.png"> Status: Issue closed Answers: username_1: Inspect the styles for the header. You'll need to remove I think some box-shadow. username_1: Or override more likely is what you'll want to do. username_2: I am having this issue also, I can't override with inline styles, but why is the text shadow applied on this theme? username_3: This theme applies both `text-shadow` and `background-image` probably because of the good appearance when some letters overlaps the background-image (the "underline"), but when [applying the styles](https://github.com/username_1/typography.js/blob/1a3bfc103d41d59f04e89573c2c87e1d95abdd12/packages/typography-theme-fairy-gates/src/index.js#L38), it "implies" that the background will be `#fff`, which I don't think it will be happening in most of the cases, and it's just my opinion. If someone had any problem to change this, simply override `a` tag with: ```css a { text-shadow: none; background-image: none; } ``` username_4: @username_3 Where do I apply the override? Most newbies (like me) just follow the tutorial and apply the theme and then wonder why its looking like that. username_4: @username_3 Where do I apply the override? Most newbies (like me) just follow the tutorial and apply the theme and then wonder why its looking like that. username_4: From https://kyleamathews.github.io/typography.js/ **Customizing themes** ``` fairyGateTheme.overrideThemeStyles = () => ({ a: { textShadow: `none`, backgroundImage: `none`, }, }); const typography = new Typography(fairyGateTheme); ``` fixed the problem.
jprichardson/node-jsonfile
155296625
Title: Shorten the names of read/write functions
Question: username_0: `readFile` and `writeFile` feel kind of given, since the package is called jsonfile and its purpose is to handle files. The same goes for `writeFileSync`.
Is there any reasoning for why they aren't called just `read` and `write`? The whole file suffix seems redundant otherwise.
Answers: username_1: @username_0 I believe that the current names are used to conform with the standard that people expect from nodejs `fs`, so that they don't need to remember that it's different.
Status: Issue closed
username_2: Yep, closing as per @username_1's comment.
mbukosky/SpotifyUnchained
59834182
Title: Security hole Question: username_0: Remove the refresh token from the client user object. ``` javascript $scope.refreshToken = function() { $http.get('/auth/spotify/refresh', { params: { refresh_token: $scope.user.providerData.refreshToken } }).success(function(response) { // Update the user with new tokens $scope.user = Authentication.user = response; $location.path('/'); }).error(function(response) { console.log(response.message); }); }; ``` Answers: username_0: http://jeremymarc.github.io/2014/08/14/oauth2-with-angular-the-right-way/ username_0: [jwt-simple](https://www.npmjs.com/package/jwt-simple) [example](https://github.com/sahat/satellizer/blob/master/examples/server/node/server.js) Status: Issue closed
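One hedged sketch of the fix: keep the refresh token on the server and read it from the authenticated session, so the client never sees or sends it. `ensureAuthenticated` and `refreshSpotifyToken` are illustrative names, not the app's real helpers:

```javascript
// Sketch: server-side refresh; the refresh token never leaves the server.
app.get('/auth/spotify/refresh', ensureAuthenticated, function (req, res) {
  // Token comes from the server-side user record, not from a query parameter.
  var refreshToken = req.user.providerData.refreshToken;

  refreshSpotifyToken(refreshToken, function (err, accessToken) {
    if (err) return res.status(500).json({ message: err.message });
    // Only the short-lived access token is returned to the browser.
    res.json({ accessToken: accessToken });
  });
});
```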
haskell/hackage-server
400983953
Title: Do not autofocus search field on package page Question: username_0: It seems that several months ago the search field in the header of package pages (like hackage.haskell.org/package/base) became autofocused. IMHO this is inconvenient, because it makes navigation harder.
1. I cannot scroll the page by pressing the down arrow. Instead of scrolling, my browser (Firefox) shows me the list of autocompletions for the search field.
2. I cannot invoke quick search by pressing "S". Obviously, with a text box in focus it just appends the letter "S" to its content.
Answers: username_1: +100 Really annoying
username_2: Oops, sorry guys, it was me who proposed the PR. As explained [here](https://github.com/haskell/hackage-server/commit/5039b04a0d48d205b7f3f2311b486607f04b4e58) in an answer to @username_1, I find it much more convenient to be able to type the name of the package you are searching for directly when you land on the home page, and only _then_ use the mouse to scroll (alternatively you can press `Tab` and then use `PgUp/PgDown`), than to _first_ use the mouse to focus the search field, _then_ type, and _then_ use the mouse again to scroll and click on your package. But well, if this is too cumbersome for a majority of people, let's revert :)
username_3: Well, I think the issue is that this change might help on the homepage, but it is inconvenient on other pages, such as package pages. So the "right" patch would be to special-case the homepage, or perhaps the homepage and the browse page or the like, and not apply the same behavior on package pages.
username_2: I understand your point; it's not really an issue for me, as I either scroll with the mouse on the package pages or use `Tab` + `PgUp/PgDown`. I initially made the PR because I found this more convenient and was thinking that it would help other people, too. But if it is causing more discomfort than it relieves, then let's revert, no worries on my side =)
username_0: One can set up a search engine in their favorite browser to avoid loading the Hackage homepage at all. I just type `h base` in the address bar. Super convenient.
So, can we please have 5039b04a0d48d205b7f3f2311b486607f04b4e58 reverted?
username_0: I rarely open the Hackage homepage, but IMO autofocus on a search field makes sense only for Google Search and the like, where there is literally nothing else to do. However, the Hackage homepage has a lot of content, and the search field is only a minor part of it. I find it frustrating not to be able to navigate it with the arrow keys or `PgDown`, because these keystrokes get intercepted by the search field.
Status: Issue closed
username_1: When may this get deployed, btw?
username_1: Ping!
username_0: @username_3 @hvr can we please have this fix deployed finally?
username_4: The autofocus is still there; hope this change will be deployed soon.
username_5: @hvr redeployed hackage-server, the autofocus is no more.
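For the record, the special-casing username_3 suggests could be done by dropping the `autofocus` attribute from the shared template and focusing from a small script instead; a sketch, where the selector and the path test are guesses rather than hackage-server's actual markup:
```javascript
// Focus the search box only on the homepage, so package pages keep
// arrow-key scrolling and the "S" quick-search shortcut.
document.addEventListener('DOMContentLoaded', function () {
  var search = document.querySelector('#searchbox input'); // assumed selector
  if (search && window.location.pathname === '/') {
    search.focus();
  }
});
```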
Club-Alpin-Annecy/collectives
723828445
Title: Automatic resynchronization of FFCAM data at login Question: username_0: A member's data is "easily" editable on https://extranet-clubalpin.com/monespace/
Once the changes have been made on https://extranet-clubalpin.com/monespace/, for them to be taken into account on the collectives site, the member has to log in, go to My profile, and then click the "Resynchroniser mes données" (resynchronize my data) button.
How many people will think to do that? I'm thinking in particular of the email address and the phone number used by the trip leader to reach them. And also of the phone number of the person to contact in an emergency.
The performance impact would need checking, but would it be possible to do an automatic sync at login?
At login, show a message: "Your FFCAM information was recently updated on the federation's site. It has been updated automatically on this collectives site."
If the email was changed, that also changes the login. In that case, show a second message: "Warning! You have changed your email. Your login email is now ...."
Answers: username_1: I also ran into the case of a member registered in the wrong "-25 ans" (under 25) category... After correcting the membership fee... that person did not make the effort to "resynchronize".
Maybe we could resynchronize members in a loop? (But do we know the acceptable usage limits of the extranet API?)
Or else, offer a "resynchronize my FFCAM data" checkbox on the login screen. (Some will always tick it, others won't understand...)
username_0: Hence my idea of resynchronizing automatically at login :)
username_1: Hmm, and so an alert message when the email is modified during a resync? That changes the login credentials, so it seems important to notify the user that at their next login on the site they must change the "email" field. Separate ticket?
username_0: Well no, that's exactly what this one is about :)
username_2: @username_0 The resync already happens automatically at login :) The problem is that most people have a cookie that keeps them logged in, so we would also need to add a timeout to resynchronize periodically in the background, or do it at session creation (sessions have a shorter lifetime than the authentication cookie). We can indeed add a banner when the email changes.