repo_name (stringlengths: 4 to 136)
issue_id (stringlengths: 5 to 10)
text (stringlengths: 37 to 4.84M)
cockpit-project/cockpit-project.github.io
456052099
Title: Update screenshots for PF4-ish reskinning Question: username_0: When https://github.com/cockpit-project/cockpit/pull/11987 lands, we probably want new screenshots on the website. Answers: username_0: To everyone: Feel free to upload screenshots here on the issue. I'll add optimized versions of them on a PR. Thanks! username_1: I looked what we currently have and try to recreate them. I havn't done the one from dashboard as I have only one machine set up. If anyone else would do that that would be great, otherwise I can setup some machines. And also instead of docker I took screenshot of podman containers as it seems to be now the trend :) ![screenshot-podman](https://user-images.githubusercontent.com/12330670/61023899-054ac780-a3ac-11e9-9f4d-2315255c52c2.png) ![screenshot-network](https://user-images.githubusercontent.com/12330670/61024128-eb5db480-a3ac-11e9-812c-538c44a10780.png) ![screenshot-storage](https://user-images.githubusercontent.com/12330670/61023901-054ac780-a3ac-11e9-8b0d-c1193fe7e850.png) ![screenshot](https://user-images.githubusercontent.com/12330670/61023902-05e35e00-a3ac-11e9-8829-6aeafeeb2939.png) username_2: I did a Dashboard screenshot with three machines and three different OSes: ![dashboard](https://user-images.githubusercontent.com/200109/61025642-a38d5c00-a3b1-11e9-85ae-cd557b34a060.png) Annoyingly this has a scrollbar, even though there's no content below the server list. That's something we should fix, too :-) username_0: Thanks for the screenshots! However, there are some issues. @username_1: Your screenshots are using the wrong fonts and have "localhost.localdomain" as the hostname. We cannot use these for that reason. There are also incorrect widgets in some for odd reasons. @username_2: Your screenshot is too wide. We need to update the screenshots according to the screenshot guidelines @ https://github.com/cockpit-project/cockpit-project.github.io/issues/183 username_0: We'll need to update the screenshots with the dark nav and search... and make sure they're in accordance with https://github.com/cockpit-project/cockpit-project.github.io/issues/183 username_0: Resolved in #373. Status: Issue closed
electron/electron
194822679
Title: Setcookies throwing errors Question: username_0: <!-- Thanks for opening an issue! A few things to keep in mind: - The issue tracker is only for bugs and feature requests. - Before reporting a bug, please try reproducing your issue against the latest version of Electron. - If you need general advice, join our Slack: http://atom-slack.herokuapp.com --> * Electron version: 1.4 * Operating system: windows ### Expected behavior After taking from cookies.get I should be able to use the same output to cookies.set ### Actual behavior Error: Setting cookie failed at Error (native) ### How to reproduce When a browser window closes I want to nab it's cookies and store them. I don't want to use your system because the user might want to export the cookies and use them on another machine. I'm developing something to allow people to browse on multiple machines using the same cookies. ``` function ChildBrowserClose(e) { console.log(e.sender); e .sender .webContents .session .cookies .get({}, (error, cookies) => { e.sender.CustomSession.Cookies = JSON.stringify(cookies); var Sessions = app .Session .get('Sessions') Sessions .find({id: e.sender.CustomSession.id}) .assign(e.sender.CustomSession) .value() console.log(error, cookies) }); } ``` ----------------------- later I try to set with ``` for(var i=0;i<Cookies.length;i++){ NewBrowser .webContents .session .cookies .set(Cookies[i], (error) => { console.log(error); }); } ``` Error: Setting cookie failed at Error (native) <!-- For bugs, provide sample code or a repo URL that demos the problem --> Answers: username_1: The `url` property is required for `cookies.set`. I believe this is a duplicate of https://github.com/electron/electron/issues/4422 See https://github.com/electron/electron/issues/4422#issuecomment-182618974 for details on how to generate this property from the values returned from `cookies.get`. Status: Issue closed username_0: Probably is, but if someone finds this and wants the answer: parse url from domain it works fine. You may close
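For illustration, a rough sketch of the fix username_1 points to — rebuilding the required `url` field from each stored cookie's `secure`, `domain` and `path` values before calling `cookies.set` (the domain handling here is the commonly used recipe, not something spelled out in this thread):
```
// Assumes `Cookies` is the array previously saved from cookies.get, as in the code above
for (var i = 0; i < Cookies.length; i++) {
  var cookie = Cookies[i];
  // Strip a leading "." from the domain, then rebuild a URL for cookies.set
  var domain = cookie.domain && cookie.domain.charAt(0) === '.' ? cookie.domain.slice(1) : cookie.domain;
  cookie.url = (cookie.secure ? 'https://' : 'http://') + domain + cookie.path;
  NewBrowser.webContents.session.cookies.set(cookie, function (error) {
    if (error) console.log(error);
  });
}
```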
bioconda/bioconda-recipes
432833317
Title: Trinity: need to remove `@` from paths in perl script Question: username_0: Error is: ``` Possible unintended interpolation of @2 in string at /gpfs1/data/galaxy_server/galaxy-dev/database/dependencies/_conda/envs/[email protected]/bin/Trinity line 46. ``` Had the same problem in the stacks2 wrapper. For variables setting path to binaries that are on the PATH anyway the solution is to set them to an empty string. For the others (like paths to jar files), a fix might involve using some `pwd` magic. Answers: username_1: Is this a Bioconda issue (packaging) or an upstream problem at Trinity (software)? ping @bgruening - if this is galaxy specific, maybe you know who could help. username_0: Kind of both: The cause of the problem is that galaxy uses an `@` character in the environment names (`__name@version`). If perl packages/programs use hard coded paths (e.g. via an install script that uses BIN_DIR as parameter) for dependencies, then the perl interpreter has problems in this case since the @ should be quoted. One solution is to remove the hard coded paths (which I did for stacks) since the binaries are on the PATH anyway. Alternatively the @ could be quoted. Don't know if conda could do this in general for BIN_DIR (or all its path). username_1: So basically, you are saying that `Trinity`'s perl code breaks if Trinity is installed into a directory that has an `@` in it? That sounds like something that needs to be fixed in Trinity. Galaxy is doing something unusual, but nothing wrong. Neither is Conda when it rewrites the paths. How does the proper Perl Quoting look like? It might be something to add to `conda` - rewrite the strings with correct quoting if it's a perl file. username_0: I'm no perl expert, but `\@` should do the trick. username_1: @username_2 Can you pitch in with suggestions here? From `bin/Trinity`: ``` #!/usr/bin/env perl [...] my $TRIMMOMATIC = "/opt/anaconda1anaconda2anaconda3/share/trimmomatic/trimmomatic.jar"; ``` So installing into an env with `$` or `@` in it breaks the script during path rewriting. Do you have code handling this for bash? Is there a chance to add code to `conda` to escape the `@` for perl files? username_2: no, I don't think we have anything for that currently. A PR to add it would be welcome. username_1: Quick pointer where it would have to go? (term that can be `grep`ped suffices) username_1: @username_0 For the time being, can you come up with a patch for Trinity? Using `'` instead of `"` should suffice for the variables in question. A PR against `conda` would fix this for good, but take quite a while until it has arrived at everyone's desks. username_1: (Or use `q[text]` or `q{text}` or `q(text)` instead. Perl has a veritable arsenal of ways to write a string constant) username_1: I've created an issue for now (https://github.com/conda/conda/issues/8601), adding to the nearly 1k open ones over there. username_0: done https://github.com/bioconda/bioconda-recipes/pull/14741 Status: Issue closed
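To make the quoting options discussed above concrete, a minimal Perl sketch — the conda env path is a made-up example of a Galaxy-style `__name@version` directory, not the reporter's actual path:
```
#!/usr/bin/env perl
use strict;
use warnings;

# In double quotes Perl tries to interpolate "@2...", which is exactly the
# "Possible unintended interpolation of @2 in string" warning seen in bin/Trinity:
#   my $bad = "/conda/envs/__trinity@2.8.4/share/trimmomatic/trimmomatic.jar";

# Any of these avoid interpolation: escape the @, or use non-interpolating quotes.
my $escaped = "/conda/envs/__trinity\@2.8.4/share/trimmomatic/trimmomatic.jar";
my $single  = '/conda/envs/__trinity@2.8.4/share/trimmomatic/trimmomatic.jar';
my $qform   = q{/conda/envs/__trinity@2.8.4/share/trimmomatic/trimmomatic.jar};

print "$single\n";
```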
typora/typora-issues
837810058
Title: Additional behaviour for asset folder and images when changing file name Question: username_0: Hello, Recently i started using a feature to paste image to folder with a name of file `{filename.assets}`, this is great option for people like me, who likes to have everything organized, however i found few key improvements that could be done to make it even more appealing. - when we rename note (via Files pane) asset folder should have been renamed too and all paths changed in that file to match new directory - when we move note to different directory (via Files pane) asset folder should move as well - we should have option on File pane to hide directories with assets, because when we have lots of files in one directory there are tons of folders showing on the Pane along with them, there is solution for it to change view from `File view` to `Article view`, but it won't show directories that are not assets folder I hope it's clear enough, Thanks Answers: username_1: duplicate with #2084 #2198 and #1753 Status: Issue closed
derailed/k9s
629501189
Title: Error when benchmarking container: Get "1": unsupported protocol scheme "" Question: username_0: **Describe the bug** When I attempt to benchmark a pod via port-forward, the benchmark fails with the following error: Get "1": unsupported protocol scheme "" **To Reproduce** Steps to reproduce the behavior: 1. Setup port-forward from pod view 2. Goto port forward view 3. <ctrl-l> 4. See error **Expected behavior** The benchmark to be executed and the results generated. **Screenshots** ![image](https://user-images.githubusercontent.com/7509729/83567355-6249f180-a519-11ea-9f3d-03a867d5b305.png) **Versions (please complete the following information):** - OS: OSX - K9s 0.20.2 [2856] - K8s v1.18.0 **Additional context** Love this tool, it would be great if I can get some help getting benchmarking working. Status: Issue closed Answers: username_0: I managed to figure this out by creating a benchmark configuration as described in the documentation here: https://github.com/derailed/k9s#benchmark-your-applications. Love this tool, keep up the good work!
GameServerManagers/LinuxGSM
516721257
Title: sdtdserver pd fails to post details to hastebin Question: username_0: Follow **[this guide](https://linuxgsm.com/support/#guide)** to make sure you post the correct info. For general support visit the **[LinuxGSM-Support](https://github.com/GameServerManagers/LinuxGSM-Support)**. Issues here are **ONLY** for: * LinuxGSM bugs * feature suggestions * code contributions Issues here are **NOT** for: * General support * Specific game server issues (e.g CS:GO, TF2) * Dedicated server issues (e.g Ubuntu, CentOS) * Anything not directly related to LinuxGSM development Any general support issues on GitHub will be migrated to [LinuxGSM-Support](https://github.com/GameServerManagers/LinuxGSM-Support). *Please use the template below* ## User Story As a server administrator, I want to see and share server details to ask better questions so that I can debug server issues myself. ## Basic info * Distro: [Debian 10] * Game: [7 Days To Die] * Command: [pd] ## Further Information Attempt to use the `pd` or `postdetails` command result in an error message. ## To Reproduce Steps to reproduce the behaviour: Go to directory where sdtdserver was installed. Run `./sdtdserver pd` Output: ``` sdtdserver@localhost:~$ ./sdtdserver pd fetching command_postdetails.sh...OK [ INFO ] Postdetails sdtdserver: Check IP: <SERVER_IP> [ OK ] Postdetails sdtdserver: Posting details to hastebin.com for 30D Please share the following url for support: https://hastebin.com/<!DOCTYPE html> <html> <head> width=device-width, initial-scale=1 <title>Application Error</title> html,body,iframe { margin: 0; padding: 0; } html,body { height: 100%; overflow: hidden; } iframe { width: 100%; height: 100%; border: 0; } </style> </head> <body> </body> </html> ``` Answers: username_1: sometime pastebin failed to receive the data sent by lgsm. try few more times Status: Issue closed
department-of-veterans-affairs/va.gov-team
1052357594
Title: 508-defect-2 [SCREENREADER]: Loading indicator should have aria-live="polite" value Question: username_0: # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2) <!-- Enter an issue title using the format [ERROR TYPE]: Brief description of the problem --- [SCREENREADER]: Edit buttons need aria-label for context [KEYBOARD]: Add another user link will not receive keyboard focus [AXE-CORE]: Heading levels should increase by one [COGNITION]: Error messages should be more specific [COLOR]: Blue button on blue background does not have sufficient contrast ratio --- --> <!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. --> ## Feedback framework - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Definition of done 1. Review and acknowledge feedback. 1. Fix and/or document decisions made. 1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix. ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** Noah ## Details The loading indicator needs to announce to screen readers that the content is loading. Adding an aria-live value to the component will read the message to the user automatically ## Acceptance Criteria - [ ] Add `aria-live="polite"` to the loading message container. The spinner itself does not need to be inside `aria-live` Status: Issue closed Answers: username_1: Solved in the linked PR
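A minimal markup sketch of the acceptance criterion — element names and classes are illustrative, not the actual VA.gov component markup:
```
<!-- The polite live region wraps only the text message, so screen readers announce it automatically -->
<div aria-live="polite">
  <span>Loading your information...</span>
</div>

<!-- The spinner stays outside the live region and is hidden from assistive technology -->
<div class="loading-spinner" aria-hidden="true"></div>
```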
KubiGR/ticket-to-ride
849698702
Title: Planned routes should animate (show/hide) Question: username_0: Planned routes should stand out compared to normal player placement. The colors below should be visible. Status: Issue closed Answers: username_0: Planned routes should stand out compared to normal player placement. The colors below should be visible. username_0: Nice feature. Slight bug: lines already established in double tracks: the other track is animating. username_1: Fixed in #0600c2a6eb4a4dad4a6a1ca6ca65e4148f167aef Status: Issue closed
microsoft/CSS-Exchange
969510278
Title: Health Checker should check Recycling settings Question: username_0: Setting Recycling Settings on an IIS vdir will cause the app pool to recycle periodically, resulting in a short outage. Health Checker should validate these settings to make sure we are not configured for self-inflicted outages. Answers: username_1: Related to #537 username_2: Disagree - setting AutoD recycling to recycle every hour or so is good practice in hybrid deployments to provide clients with accurate information after on- or offboarding. username_0: @username_2 We've had a series of support cases from customers getting 503's from Autodiscover - in the latest case case, every 30 minutes - due to recycling. Even if recycling is useful when moving mailboxes between on-prem and cloud, the setting needs to be called out to stop people from opening support cases on hourly (or half-hourly) 503's. Do you have any more information on what Autodiscover caches _for more than an hour_ that makes recycling useful? Is this in an EHLO blog post or a best practices document somewhere? That's a very long cache lifetime, rivalling the good old 2-hour MBI cache. username_1: @username_0 the latest case was actually regarding EAS, they just so happened to also set the AutoD app pool as well to recycle. @username_2 Unless customer are moving mailbox without the wait for complete option, meaning they have to manually cut over the mailboxes or wait for a particular time, there should be no reason to automatically restart AutoD App Pool during the migration process. You should only need to do it 1 time after AD has been properly updated. Even with that being said, AutoD is the only app pool that you can argue for the automatic restarts. Even though it shouldn't be done in my opinion. The other app pools where we have seen actual cases on, like `MSExchangeSyncAppPool` and `MSExchangeMapiMailboxAppPool` this should **never** be set for this feature. username_0: Ah, thanks. Got my app pools confused. username_2: We seldomly onboard customers in batches using the CompleteAfter, given the additional work/fixing that usually also needs to take place. Real-life on/offboarding is a process, involving additional tasks which need to take place pre and post. We never seen customers reporting 503's. We usually configure an interval to to accommodate emergency on/offboarding (eg mailboxes missed or ones that shouldn't have been processed) when we can't remotely trigger a recycle over all Exchange boxes, and when waiting for the (next) scheduled post-migration recycle isn't an option. But since this isn't about the migration, you could check DNS and only report configured recycle settings if the AutoD DNS record points to the outlook-s redirector. username_1: @username_2 What is the reason to recycle the app pool every hour? This should have been fixed within Exchange 2013 CU13 when we switched the cache to be recycled every hour. You are virtually doing the same thing, except with the code fix it still allows clients to get processed. https://support.microsoft.com/en-us/topic/outlook-client-remains-disconnected-after-the-mailbox-is-migrated-to-exchange-server-2013-3bdf4bf9-c524-bd69-b7a3-5e58768173ec the only benefit you have is to recycle right after the cut over. Which you would need to do manually. You can also remotely recycle all the Exchange Servers AutoD app pools. 
Sample: ``` Invoke-Command -ComputerName "ListOfExchangeServers" -ScriptBlock { & "$env:windir\system32\inetsrv\appcmd.exe" recycle apppool MSExchangeAutodiscoverAppPool } ``` Status: Issue closed username_2: Thanks, David. I'm aware of the process and procedures.
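For reference, a rough sketch of the kind of check being proposed — this is not Health Checker's actual implementation, and it assumes Exchange's usual default of `00:00:00` (periodic restart disabled) for these app pools, which should be verified:
```
# Flag Exchange app pools that have a periodic restart (recycle) interval configured
$appPools = "MSExchangeAutodiscoverAppPool", "MSExchangeSyncAppPool", "MSExchangeMapiMailboxAppPool"
foreach ($pool in $appPools) {
    $time = & "$env:windir\system32\inetsrv\appcmd.exe" list apppool $pool /text:recycling.periodicRestart.time
    if ($time -and $time -ne "00:00:00") {
        Write-Warning "$pool recycles every $time - a source of short, self-inflicted outages (503s)"
    }
}
```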
BuilderIO/builder
1048689964
Title: Add `renderImage` to `BuilderComponent` Question: username_0: Like `renderLink`, I think `renderImage` would be a useful addition as well, especially if you're using Next.js which comes with [image optimization](https://nextjs.org/docs/basic-features/image-optimization). Answers: username_1: hey @username_0 - the way to do this is to override the built-in image component. The reason `renderLink` doesn't work this way is we don't have a `Link` component, instead any block can be linked so it needs the separate flavor But for images you will want to just do ```tsx function MyImage(props) { ... } Builder.registerComponent(MyImage, { name: 'Image', // This name is what overrides the built in component named "Image" inputs: [...] }) ``` Status: Issue closed
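A slightly fuller sketch of that override for a Next.js project — the input names and the fixed `next/image` sizing are illustrative assumptions, not Builder's documented schema:
```
import { Builder } from '@builder.io/react';
import Image from 'next/image';

// Hypothetical props; keep them in sync with the inputs registered below
function MyImage(props: { image: string; altText?: string }) {
  return <Image src={props.image} alt={props.altText ?? ''} width={800} height={600} />;
}

Builder.registerComponent(MyImage, {
  name: 'Image', // same name as the built-in component, so it overrides it
  inputs: [
    { name: 'image', type: 'file', allowedFileTypes: ['jpeg', 'jpg', 'png', 'gif'] },
    { name: 'altText', type: 'string' },
  ],
});
```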
DerLuca/MTINVRGame-LMM
558650173
Title: interaction with objects is missing Question: username_0: As a first step towards interacting with objects, a first test version for grabbing an object is missing. Make an object, attach the grabbing script to it, and test it (create a branch to experiment).
cake-contrib/Cake.Issues
808838698
Title: Add support for pull request approval Question: username_0: Original issue: https://github.com/cake-contrib/Cake.Prca/issues/54 Add functionality that the pull request implementation can approve the pull request. - Should be option for pull request implementation to implement (maybe introduce some call where the main addin asks for capabilities) - Add flag to `ReportCodeAnalysisIssuesToPullRequestSettings` - Cake.Issues.PullRequests addin would determine status (maybe we need to add some settings to define the criteria, eg on priority, etc) - Enhance `IPullRequestSystem` with method to approve or decline a PR - Virtual implementation in `PullRequestSystem` which doesn't do anything
AY2021S2-CS2113T-W09-1/tp
845640346
Title: Review comments for your DG Question: username_0: 1. better to include links to your fork ![image](https://user-images.githubusercontent.com/1023494/113077549-ee917780-9203-11eb-8499-0b43ca94f73b.png) 2. Duke? ![image](https://user-images.githubusercontent.com/1023494/113077783-62cc1b00-9204-11eb-843f-bf5eba2c8b08.png) 3. Parser component class diagram: consider removing non-critical attributes and methods. It looks pretty cluttered ![image](https://user-images.githubusercontent.com/1023494/113078186-29e07600-9205-11eb-84e2-5eed647b8423.png) 4. 1 -> one ![image](https://user-images.githubusercontent.com/1023494/113078231-44b2ea80-9205-11eb-8a91-2d74ca8a1f3d.png) 5. Possibly use code formatting (with back ticks)? ![image](https://user-images.githubusercontent.com/1023494/113078326-70ce6b80-9205-11eb-9f7b-17bb53c5ed58.png) 6. indicate that these two paths are due to exception ![image](https://user-images.githubusercontent.com/1023494/113078448-b428da00-9205-11eb-88d7-5a19724b2fcc.png) 7. Possible to split to a couple of diagrams? Improves redability ![image](https://user-images.githubusercontent.com/1023494/113078734-44671f00-9206-11eb-94ed-af9e9682908f.png) 8. Which class is using the enumeration? ![image](https://user-images.githubusercontent.com/1023494/113078805-69f42880-9206-11eb-80ab-cf3aa871a29d.png) 9. Good use of UML notes. Other class diagrams can use this technique to simplify ![image](https://user-images.githubusercontent.com/1023494/113078864-885a2400-9206-11eb-9b30-be029f14a135.png) 1. ??? ![image](https://user-images.githubusercontent.com/1023494/113079051-ec7ce800-9206-11eb-9aff-f43b59c60a66.png) 1. To complete (a few subsections also like this) ![image](https://user-images.githubusercontent.com/1023494/113079190-29e17580-9207-11eb-82c5-4da3574558f3.png) Status: Issue closed Answers: username_1: All done!
LearnersGuild/echo
246129911
Title: Not all cycles started on a Monday Question: username_0: There was a period during which cycles were often launched on a Friday, so the values in the "Week" column of the Projects view in the web UI would be inaccurate for those cycles: they would display the Monday of the previous cycle's week. We can assume that any cycle started on a Fri, Sat or Sun should be considered to be for the week ahead and display the following Monday's date as the "Week" (starting) value. Related: https://github.com/LearnersGuild/echo/pull/1049/files <issue_closed> Status: Issue closed
ermau/Aura
837201001
Title: Example Library Question: username_0: Rather than ship with an empty library, a basic sound set should be included (or maybe prompted to download) to use as an example. The initial set of layers should include: - [ ] Tavern - [ ] Outside market - [ ] Calm woods - [ ] Storm - [ ] Dungeon
coingecko/cryptoexchange
345667275
Title: Implement CoinEx Market Ticker, Orderbook, Trades Question: username_0: https://www.username_4.io/ http://api.username_4.io/ Answers: username_1: I am working on this one. username_1: Can someone confirm if it is possible to work on this one? The documentation states different responses than what is returned. For example, the history call returns ```javascript {"status":1,"message":"Success","data":[{"id":"13","vendor":"ETH","market":"BTC","coins":"28.0000000000","coincost":"13.4890000000"},{"id":"7681","vendor":"ETH","market":"BTC","coins":"0.9650000000","coincost":"14.1906000000"},{"id":"7779","vendor":"ETH","market":"BTC","coins":"0.8460000000","coincost":"14.1906000000"},{"id":"7905","vendor":"ETH","market":"BTC","coins":"0.1590000000","coincost":"14.1906000000"},{"id":"7696","vendor":"ETH","market":"BTC","coins":"0.8790000000","coincost":"14.1902000000"}]} ``` And the documentation says. ```javascript { "result": "true", "data": [ { "tradeID": "27734287", "date": "2017-09-29 11:52:05", "timestamp": "1506657125", "type": "buy", "rate": 0.1, "amount": 0.01, "total": 0.001 } ], "elapsed": "6.901ms" } ``` Fetching tickers takes forever and times out. I am also having trouble mapping fields returned by history call. username_1: @username_3 Even a simple HTTP.get fails from the Ruby console. I am droping this one since something seems broken about it. username_2: @username_3 when you have time, do kindly take over here to see if you can implement this. thank you. username_3: @username_2 ok, sure! username_3: @username_2 turns out I've completed this exchange under Issue #633 Status: Issue closed username_4: Hello team, we are updated with the bug fix and performance optimizations. can you check them with the updates and complete integration process. here are the sample links. http://api.username_4.io/tradeHistory/BTT_BTC http://api.username_4.io/orderBook/BTT_BTC Thanks
FTBTeam/FTB-Academy
690668392
Title: Block placers for 2 mods wont function- please help Question: username_0: <!-- Thanks for wanting to report an issue you've found. Please delete this text and fill in the template below. If unsure about something, just do as best as you're able. Thank you! Note: any external modifications to this modpack will render all support useless, ie; adding mods like optifine to the modpack! So please remove all added content, re-test bug/issue and resubmit! If you are using Twitch, please try using the new FTB Launcher found here: https://www.feed-the-beast.com/, as we will not provide support otherwise. --> * **Academy 1.1.1**: <!-- you must provide the version of the pack this issue happened --> * **Industrial foregoing, and actually additions block placer not functioning, power given, tried all modes of activation, tried feeding them blocks instead of inv to inv. tried directions, reloading pack and world.**: <!-- detailed description of the issue --> * **game didnt crash**: <!-- please use http://paste.feed-the-beast.com/ to paste the text of your log/crash file --> * **yes*: <!-- can you repeat the issue --> * **industrial foregoing and actually additions**: <!-- optional; if any mods are causing the direct issue please provide the name/version of the mod --> * **none**: <!-- optional; if you know of a fix please let me know! Thanks -->
edent/Sercomm-API
240262641
Title: NEW API: Enable Telnet!! Question: username_0: While i haven't been able to determine the username / password credentials as of yet, this is interesting: You can enable a telnet server on the device, and connect to it: /adm/file.cgi?todo=inject_telnetd No joy on actually logging into it yet, i've tried the default user/pass combos and quite a few untraditional ones... If anyone manages to get in please post the user/pass, or a walkthrough if you had to alter data to get in. Answers: username_1: Cool! Could you please let us know what cameras you've tried it on? username_0: I only have access at the moment to an iControl iCamera-1000, most of the api listed here are useful with this camera, i found this page while googling around for the default pass (just in case :D) and somewhere along the lines, i dumped the sercomm name out of the camera (can't seem to find it in my notes at the moment) There are some functional differences between the cameras you've listed, and this camera, namely, this camera doesn't have pan/tilt, nor does it appear to have samba, other than that, it's appearing that there are quite a few firmware similarities, and shared API commands. As soon as i get around to it, i think i'll do a binwalk on the dumped fw.bin, and see if there's anything else... Also, i'm looking into a possible exploit of the FTP server, whereas the password field can be used to execute commands on the camera (the goal at this point is to pen-test it and find a leak, or exploit where i can gain the telnet credentials) As far as i can tell, there's a pretty backdoor on these firmwares, i have tested other similar cameras, and found that if you can get telnet going the default user/pass seems to be root:123456 On this particular camera however, this isn't the case, so i'll have to find a way of coaxing the passwd file out of it, and check the hashed pass against johntheripper. username_0: just ran binwalk on the dumped firmware, looks like they are stripping the passwords from the passwd file :( Booo... but now i've got a decent idea about the folder / file structure, i'll keep posting updates until i get in, or you guys tell me to shut up, or i reach an impassable dead end. :p username_0: Well.. I'm not sure what i borked, but i did.. The first run of binwalker extracted the etc, var, and tmp directories intact, one of which had a backup of passwd, however, my vm crashed, and it was all lost, subsequent attempts at unpacking the squashfs, and bin have not been good, the var, tmp, and etc folders now unpack with symlinks to /mnt/ramdisk/* which leaves them dead ends.. The bin consists of a combination of gzip, and lzma compressed files, easily dumped with binwalker, but unless i can get my hands on a full clean firmware, and can find a way of mounting, rather than dumping the squashfs, i don't think i can go too much further with this... Since the firmware can be dumped, it may be possible to edit, or add some scripts to it, repack it, and upload it back to the device to gain root access that way, who knows, i only have one of these, so i'm not too eager to brick it just yet :D I tried to throw firmware-mod-kit at it, just to see if anything would stick to the wall, no joy, it spit errors, and warnings at me, and i'm a little too tired to trudge through the source to narrow down the issue.
username_0: If anyone wants to tinker around, i found somewhere that had a list of firmware updates for the icamera-1000, here are the first, and last of that list (because no one likes middleware :p) I may open the device later and see if there's a jtag or a known IC i can pin into to dump the full contents of the device.. [ComcrapFirmware.zip](https://github.com/username_1/Sercomm-API/files/1121011/ComcrapFirmware.zip) username_1: People have left links to firmware on my blog post about these cameras - https://shkspr.mobi/blog/2013/11/hacking-around-with-network-cameras/ You may find some to be useful. username_0: Thanks, i've looked over that once before at a cursory glance, seems everyone is pretty much at the same stage i am, trying to repackage the dumped firmware to enable, or extend the feature set of the device.. I've also run into an issue, where the linux distro i'm using has squashfs 4.0, which annoyingly enough, doesn't deal well with older versions (namely, it simply won't mount the squashfs) My hope is to find the firmware source somewhere, or perhaps, if i can tinker it out, do a complete dump of the device, ramdisk and all, and try to pick it apart that way... Still have my hopes up that i can wade through all the comcast / adt nonsense, and "user" manuals using google-fu and find something useful. username_0: Yea, after looking closer, there's really no way for me to reverse this firmware at the moment.. If i get some free time later this month, i'll build a bus pirate and go in and dump the chips, and have a look.. From what i'm seeing, Sercomm like every other mass-producer, is using "tinkered with" or non standard squashfs settings, but on top of that, there are several version checks to keep someone from simply opening the bin, editing the squash, and repackaging everything.. It would be really interesting to see the source stack, and procedures they use to build these firmwares and updates... In all honesty, it probably isn't that difficult to build the updates if you have the source, because Comcast is able to, and from my experience with their hacked up one off firmwares, this is a step or 300 above their usual standard of quality, and security. *shrugs* at this point, i don't really need to keep going, but interest has been piqued, and i'm really quite interested in why there's a telnet backdoor, and what other nefarious odds and ends are packed into this little device. username_0: Because i can't leave well enough alone, i pulled the sucker apart, good thing too, there's an unpopulated jtag header on the bottom PCB. I'm a little too drunk to dink with it tonight, but looking over the passives, it appears it can handle 3.3v - 5v on that header (DON'T QUOTE ME ON THAT!!) Maybe tomorrow, if i'm not in too bad shape, i'll take a stab at it.. that JTAG header means i can more than likely bypass the root login credentials, and have unfettered access to the device, if so, maybe i can dump the entire system to an external device over the network, if there's not enough on-system space to create a proper dump, i may have to tinker around, and see what tools are available... busybox and all that, no clue what tools are available. *secretly hopes for rsync* username_0: oh.. just a heads up... it appears there are backdoors on older sercomm firmware for some of their devices, so if you have a really old one, with factory firmware on it, you might want to look into it. username_0: Well.. Looks like that 4 pin header is [5v] [TX] [RX] [GND] Serial coms connected at 115200 baud.
and dammit.. it's password protected... Bootup infor tho :D `DM36x initialization passed! TI UBL Version: 1.51 Booting Catalog Boot Loader BootMode = SPI DONE Dump mem - 0x81080000: 0xEA000012. 0xE59FF014. 0xE59FF014. 0xE59FF014. Jumping to entry point at 0x81080000. U-Boot 1.3.4-svn4286 (Jun 7 2013 - 11:06:42) DM365 DRAM: 128 MB SF: Got idcode c2 20 18 SF: Detected MX25L12845E with sector size 4096, total 16777216 bytes *** Warning - bad CRC, using default environment In: serial Out: serial Err: serial ARM Clock :- 297MHz DDR Clock :- 270MHz Ethernet PHY: GENERIC @ 0x01 SF: Got idcode c2 20 18 SF: Detected MX25L12845E with sector size 4096, total 16777216 bytes ========================================== Neutral Bootloader V3.08 ========================================== address = 0x0006FFFA MAC address 94:4A:0C:0C:18:B2 Calculate checksum chksum=0xCFC88185, chksum_in_flash=0x30377E7B, 0x0 Done: 0x0. ## Booting kernel from Legacy Image at 80700000 ... Image Name: Linux-2.6.18_SC-DM365 Image Type: ARM Linux Kernel Image (uncompressed) Data Size: 1304788 Bytes = 1.2 MB Load Address: 80008000 Entry Point: 80008000 Verifying Checksum ... OK Loading Kernel Image ... OK OK Starting kernel ... Uncompressing Linux....................................................................................... done, booting the kernel. Linux version 2.6.18_SC-DM365 (selina@ISBU-Compiler-B1) (gcc version 4.2.3) #1 Tue Dec 6 17:26:07 CST 2011 CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00053177 [Truncated] rtusb init out--> rtusb exit ---> <--- rtusb exit Ethernet link is ready now mDNSResponder: _http._tcp. service renamed from "iCamera-0c18b2" to "iCamera-0c18b2 (2)" /etc/init.d/rcS: line 227: /usr/local/bin/nc_qset: not found /etc/init.d/rcS: line 234: /usr/local/bin/jabberlog: not found SinfulCNC login: rtusb init ---> rtusb init out--> Davinci EMAC MII Bus: probed TI DaVinci EMAC Linux version updated 4.0 rtusb exit ---> <--- rtusb exit rtusb init ---> rtusb init out--> : Starting lighttpd mDNSResponder: _http._tcp. service renamed from "iCamera-0c18b2" to "iCamera-0c18b2 (2)"` I'll keep cludging away at this over the next week or so while i have free time.. i also odered a buspirate (too lazy to build one) so i should be able to just dump the chip to gain access to the contents of ramdisk. username_0: Unfortunately, i simply won't have any time for the forseeable future to keep at this. New job mandates i never eat or sleep... friggin heathens! Anywho... As i see it, there's two possibilities here, both force us to unpack and repack the bin and include the following precursory steps: Download the FW.bin, use DD to pull out the squashfs part, decompress it, Once done, you have several methods of attack: 1) edit the RC script to copy /etc/passwd file to a location you can grab it from. 2) edit the RC script to change the root password (might be derpy.. duno) 3) use RC script to change permissions of executables, as well as add shell script frontend web page(s) 4) precompile rsync for the busybox version on the hardware, and add it, then edit rc file to rsync entire live system over to an external device / drive... all of these attack vectors present a couple of major obstacles, and forces us to ask a few tough questions: 1) Does the protected portion of the OS prevent any of the measures from working? If so, how do we circumvent it? 2) The FW upload cgi checks the uploaded file, but what is it checking? Name? embedded version number? bin in etc? CRC? 
If so, we need to understand how this mechanism works, and what we can do to fool the system into accepting our modified FW bin. 3) What exact squashfs settings are being used? This may require quite a lot of trial an error extracting, and repacking the squashfs filesystem, without editing it, until you can get a filesize and CRC match.. once you've accomplished that, you'll know how to edit the squashfs and have it match the format used by the original dev's. I think it's in everyone's best interest if we all continue to poke, prod, and probe these ubiquitous devices. They not only pose privacy concerns, but security concerns as well. Who knows what is being done with the backdoors present in these devices? It's especially repugnant that these devices usually come with secondary "middle-ware" devices that serve no other purpose other than further obfuscating the mechanisms working behind the scene. Case in point, the Xfinity, and ADT versions of these particular devices are often bundled with a "home security router" that is also locked out from the user, and performs several hidden functions, as well as contain several hidden back door access points. The fact that people are using these devices in the privacy of their homes, and businesses should shock us all. They grant unfettered access into every moment of our lives to large corporations most of whom have a less that stellar history when it comes to privacy, and maintain a reputation of collecting, and utilizing private information for monetary gain. username_2: URL http://[IP]/adm/file.cgi?todo=inject_telnetd Telnet username: root Telnet password: <PASSWORD> username_1: Sorry, I should have added this last year. https://shkspr.mobi/blog/2017/11/telnet-and-root-on-the-sercomm-icamera2/ Status: Issue closed
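For reference, the dd/unsquashfs step sketched above usually looks something like this — the offset is a placeholder and has to be read from binwalk's output for the real fw.bin:
```
# Locate the squashfs filesystem inside the firmware image
binwalk fw.bin

# Carve it out; the skip= value is an example only - use the offset binwalk reported
dd if=fw.bin of=rootfs.squashfs bs=1 skip=1441792

# Unpack it (needs a squashfs-tools build that understands this image's squashfs version)
unsquashfs -d rootfs rootfs.squashfs

# Poke around the extracted tree, e.g. for the telnet hook
grep -r "inject_telnetd" rootfs/ 2>/dev/null
```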
PaddlePaddle/Paddle
1162667081
Title: [PaddlePaddle Hackathon 2] 32. Fix Hygon DCU operators for the PaddlePaddle framework Question: username_0: (This issue is a task issue for the second PaddlePaddle Hackathon; for more details see the [PaddlePaddle Hackathon 2 task overview](https://github.com/PaddlePaddle/Paddle/issues/40234)) [Task description] - Task title: Fix Hygon DCU operators for the PaddlePaddle framework - Technical tags: deep learning framework, C++, Python, HIP - Task difficulty: simple - Detailed description: PaddlePaddle supports running on Hygon DCU, but some operators still cannot pass their operator unit tests on Hygon DCU. The table below lists the test cases that currently fail on Hygon DCU. Based on the unit-test error messages, fix the listed operators' C++ files and unit-test Python files so that the operator unit tests in the list pass, without removing any test cases from the Python unit-test files. | OPs with failing unit tests | | -------------- | | test_cast_op | | test_eye_op | | test_median | | test_reduce_op | [Fix suggestions] - None; fix according to each unit test's error messages and failure output. For more suggestions, see the guide "Notes on fixing Hygon DCU operators under the Paddle framework". [Submission process] - Follow the steps described in the [contribution guide](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/index_cn.html) to complete the relevant steps of the PaddlePaddle PR submission process. [Deliverables] - Operator fix code, in the paddle/fluid/operators directory of the Paddle repo - Unit-test fix code, in the python/paddle/fluid/tests/unittests directory of the Paddle repo - After the unit tests are fixed, screenshots of successful runs in a Hygon DCU environment - After the unit tests are fixed, a short description (1-2 sentences is enough) of the problem found and the fix approach for each operator [Merge criteria] - Submit the operator-fix PRs as required by the [contribution guide](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/index_cn.html) and pass all PR CI checks - The PR description must include screenshots of the operator unit tests running successfully in a Hygon DCU environment after the fix - For each unit test, the PR description must give 1-2 sentences on the problem and how it was solved [Technical requirements] - Familiar with AI operator development and testing in the Paddle framework - Familiar with the development interfaces of ROCm, the Hygon DCU software stack - Proficient in C++, Python and HIP [References] - Doc: [Sugon computing platform - compiling Paddle from source and running unit tests](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/sugon/complie_and_test_cn.html) - Doc: [Guide to fixing C86 accelerator operator unit tests under the Paddle framework](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/sugon/paddle_c86_fix_guides_cn.html) - Doc: [Details of adapting Paddle to the C86 accelerator](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/dev_guides/sugon/paddle_c86_cn.html) [Q&A] - For any questions about this task during development, please log in to the [Guanghe developer community](https://developer.hpccube.com/) to discuss; you can post in the community forum under AI applications, in the paddle board. - More task details and technical content can be found on the community's other pages, such as the open-challenge ("揭榜挂帅") and resources & tools sections.
PendalF89/yii2-filemanager
93993702
Title: How to configure which file types can be uploaded? Question: username_0: I'm using your extension to manage images for an object. By default I can upload any file type, such as MP3 audio files, but I only want to allow image files such as JPG, PNG and GIF. Can you help me configure it to upload only image files? Thanks a lot. Answers: username_1: Sorry, but in the current version it is impossible to restrict the file types allowed for upload. username_0: I think it is very important. I hope you can add it soon.
QubesOS/qubes-issues
1154506681
Title: Confusing notifications on attaching USB devices Question: username_0: ### Qubes OS release 4.1 ### Brief summary On attaching a USB device to an AppVM using the USB widget, two notices pop up, one to say the device has been attached, and another to say it has been removed. I understand what is being attempted here but would argue that this is confusing and apparently contradictory (so it has been attached and removed?). The 'removed' message does not convey any useful information in the context, so I would suggest ditching the 'removed' message altogether to avoid confusion and provide a cleaner UX. NB. On mounting a USB device with multiple partitions you get a whole series of these messages, one per partition, which is even more confusing and messy. ### Steps to reproduce Attach a USB storage device using the widget ![Screenshot_2022-02-28_19-57-14](https://user-images.githubusercontent.com/12783198/156051577-1ca8e467-2916-45f3-a2d7-ca562e66aba8.png) Answers: username_1: And when attaching Yubikey, sometime there is also an popup about some "slow keys" (?)
smallbusinesshero/sbh-service
611145039
Title: Make CORS headers configurable Question: username_0: As a software developer I want to deploy feature app branches. Currently, the CORS configuration is hard coded and requires a hard deploy for new frontend URLs to be added to the CORS list. Please make the configuration of CORS URLs configurable, e.g. via an environment variable that can be set in Heroku like this: ACCEPTED_ORIGINS=["url1","url2"] Then the instance only needs to be restarted and we don't need to create commit / deploy
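A hedged sketch of one way to do this, assuming a Spring-style Java service (the framework sbh-service actually uses isn't stated here); the parsing mirrors the suggested ACCEPTED_ORIGINS=["url1","url2"] format:
```
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // ACCEPTED_ORIGINS is expected to look like ["url1","url2"]; defaults to an empty list
        String raw = System.getenv().getOrDefault("ACCEPTED_ORIGINS", "[]");
        String[] origins = raw.replaceAll("[\\[\\]\"\\s]", "").split(",");
        registry.addMapping("/**").allowedOrigins(origins);
    }
}
```
With this in place, adding a new feature-app frontend only requires updating the Heroku config var and restarting the dyno, not a new deploy.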
visgl/deck.gl
1015887257
Title: 8.6 Release Tracker Question: username_0: Target release date 10/12 - [ ] Publish alpha/beta - [ ] Bump all examples to latest alpha/beta - [ ] Stage the website - [ ] Verify whats-new - [ ] Verify upgrade-guide - [ ] Verify all issues in milestone are closed or move out - [ ] Verify all API changes are documented - [ ] Website (docs and example) testing: - [ ] Testing on Chrome - [ ] Testing on Safari - [ ] Testing on Firefox - [ ] Testing on Edge - [ ] Testing on iOS - [ ] Testing on Android - [ ] Cut 8.6-release branch after all fixes - [ ] Update website doc urls to point to `8.6-release` branch - [ ] Publish 8.6 prod version. Answers: username_1: @username_2 can you help us with the website testing? username_1: Staging available at https://username_1.github.io/deck.gl username_2: I have already tested in Chrome, Firefox and Safari from my MAC laptop. (I am not able to check the markers since I am not a collaborater, I guess) A common minor issue to all browsers is that the view https://username_1.github.io/deck.gl/examples/first-person-view/ seems that it doesn't load the play button properly.
influxdata/telegraf
808615278
Title: Secure access from a web server Question: username_0: <!-- WOAHH, hold up. This isn't this best place for support questions. You can get a faster response on slack or forums: Please redirect any QUESTIONS about Telegraf usage to - InfluxData Slack Channel: https://www.influxdata.com/slack - InfluxData Community Site: https://community.influxdata.com Check the documentation for the related plugin including the troubleshooting section if available. https://docs.influxdata.com/telegraf https://github.com/influxdata/telegraf/tree/master/docs --> Status: Issue closed Answers: username_0: error
celzero/rethink-app
775537096
Title: Ability to pause the app Question: username_0: A lot of users find themselves stopping the app often. Makes sense to provide a _pause_ in its stead. Two ways: 1. Pause with disabled firewall and network / OS provided DNS. 2. Pause by tearing down the VPN and restarting it again after a set period of time. Answers: username_0: May be also provide a _pause_ action from the notification drawer? username_0: https://github.com/celzero/rethink-app/issues/21 username_0: Implemented in #354 Status: Issue closed
kingoflolz/mesh-transformer-jax
946957439
Title: Jax TPU Issue Question: username_0: WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) [CpuDevice(id=0)] ``` Somehow `jax` fails to detect the TPU, don't know what's the issue, I am running Google Cloud's TPU VM v3-8 with v2-alpha software version. Would really appreciate your help. Answers: username_1: Try using this script to install your dependencies https://github.com/username_1/mesh-transformer-jax/blob/master/scripts/init_ray.sh username_0: `init_ray.sh` installs out of the virtualenv but even after that running the below code sometime runs and sometimes not like If I ssh into vm and the very firstly I check the `jax.devices_count()` then it shows **8** but if I exit and run it again then it goes back to the `WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)` or simply gets stuck and doesn't do anything. Apart from this when running the `infer` method, it throws this error, ``` Traceback (most recent call last): File "mesh-transformer-jax/inference.py", line 115, in <module> print(infer(context)) File "mesh-transformer-jax/inference.py", line 97, in infer output = network.generate( File "/home/Pranav/mesh-transformer-jax/mesh_transformer/transformer_shard.py", line 337, in generate return self.generate_xmap(self.state, File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 583, in fun_mapped in_axes_flat = flatten_axes("xmap in_axes", in_tree, in_axes) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/api_util.py", line 278, in flatten_axes raise ValueError(f"{name} specification must be a tree prefix of the " ValueError: xmap in_axes specification must be a tree prefix of the corresponding value, got specification (FrozenDict({'shard': 0}), FrozenDict({'batch': 0}), FrozenDict({'batch': 0}), FrozenDict({'batch': 0}), FrozenDict({'batch': 0}), FrozenDict({'batch': 0})) for value tree PyTreeDef(({'opt_state': CustomNode(namedtuple[<class 'optax._src.transform.ScaleState'>], []), 'params': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'causal_transformer_shard/~/embedding_shard/~/linear': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'b': *, 'w': *})], [*, *]), 'causal_transformer_shard/~/layer_0/~/linear': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'w': *})], [*]), 'causal_transformer_shard/~/layer_0/~/linear_1': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'w': *})], [*]), 'causal_transformer_shard/~/layer_0/~/linear_2': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'w': *})], [*]), 'causal_transformer_shard/~/layer_0/~/linear_3': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'w': *})], [*]), 'causal_transformer_shard/~/layer_0/~/linear_4': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'b': *, 'w': *})], [*, *]), 'causal_transformer_shard/~/layer_0/~/linear_5': CustomNode(<class 'haiku._src.data_structures.FlatMap'>[PyTreeDef({'b': *, 'w': *})], ............. 
# let me know if you need the whole error ``` Here is my infer method copied from the collab, ``` def infer(context, top_p=0.9, temp=1.0, gen_len=512): tokens = tokenizer.encode(context) provided_ctx = len(tokens) pad_amount = seq - provided_ctx padded_tokens = np.pad(tokens, ((pad_amount, 0),)).astype(np.uint32) batched_tokens = np.array([padded_tokens] * total_batch) length = np.ones(total_batch, dtype=np.uint32) * len(tokens) start = time.time() output = network.generate( batched_tokens, length, gen_len, {"top_p": np.ones(total_batch) * top_p, "temp": np.ones(total_batch) * temp} ) samples = [] decoded_tokens = output[1][0] for o in decoded_tokens[:, :, 0]: samples.append(f"\033[1m{context}\033[0m{tokenizer.decode(o)}") print(f"completion done in {time.time() - start:06}s") return samples ``` username_1: When you have multiple python instances running at the same time, only the first one that imports jax and is still running will be able to use the TPU. I'm not sure why your infer code is erroring out, can you run the notebook verbatim on the tpu? username_0: Ok makes sense. Have made some progress with the above error, now this error comes when running the infer function, ``` Traceback (most recent call last): File "mesh-transformer-jax/inference.py", line 115, in <module> print(infer(context)) File "mesh-transformer-jax/inference.py", line 97, in infer output = network.generate( File "/home/Pranav/mesh-transformer-jax/mesh_transformer/transformer_shard.py", line 328, in generate return self.generate_xmap(self.state, File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 615, in fun_mapped out_flat = xmap_p.bind( File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 818, in bind return core.call_bind(self, fun, *args, **params) # type: ignore File "/home/Pranav/.local/lib/python3.8/site-packages/jax/core.py", line 1551, in call_bind outs = primitive.process(top_trace, fun, tracers, params) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 821, in process return trace.process_xmap(self, fun, tracers, params) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/core.py", line 606, in process_call return primitive.impl(f, *tracers, **params) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 646, in xmap_impl xmap_callable = make_xmap_callable( File "/home/Pranav/.local/lib/python3.8/site-packages/jax/linear_util.py", line 262, in memoized_fun ans = call(fun, *args) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 673, in make_xmap_callable _check_out_avals_vs_out_axes(out_avals, out_axes, global_axis_sizes) File "/home/Pranav/.local/lib/python3.8/site-packages/jax/experimental/maps.py", line 1454, in _check_out_avals_vs_out_axes raise TypeError(f"One of xmap results has an out_axes specification of " TypeError: One of xmap results has an out_axes specification of ['batch', ...], but is actually mapped along more axes defined by this xmap call: shard ``` username_1: You must make sure jax.__version__ is equal to 0.2.12, reopen if that is actually the case Status: Issue closed username_2: Can you check the initial post? 0.2.12 immediately aborts.
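A quick sanity check distilled from the thread — run it in its own Python process, since only the first process that imports jax can claim the TPU, and note the version pin username_1 mentions:
```
import jax

# This repo expects jax 0.2.12; other versions surface xmap axis/tree errors like the ones above
print(jax.__version__)

# On a healthy v3-8 TPU VM this should list 8 TpuDevice entries, not a single CpuDevice
print(jax.devices())
print(jax.device_count())
```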
mlaursen/react-md
1020073797
Title: DropdownMenu inside TabPanel is displayed incorrectly Question: username_0: When placing a DropdownMenu inside a TabPanel, the open list is below the expected position and is truncated by the end of the tab content. ![DropdownMenu_in_TabPanel](https://user-images.githubusercontent.com/17616120/136397301-77843bf9-f001-4af4-bb51-73b0b513f7ec.gif) **Desktop:** - OS: Windows 10 - Browser Chrome - Version 94.0.4606.71 The sandbox in which the problem is reproduced: https://codesandbox.io/s/tabs-example-basic-usage-forked-nkz14?file=/src/Demo.jsx Is there a way to get around this? Thanks in advance!<issue_closed> Status: Issue closed
kubernetes/kubernetes
102090488
Title: Unable to run cronjobs on kubernetes Question: username_0: I'm having an issue whereby cronjob in kubernetes doesnt seem to work. Below is the test dockerfile used ``` FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron RUN echo 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' | crontab RUN echo '0-59/2 * * * 0-4 export ENV=dev RECIPIENT=<EMAIL>; echo "$(date) ${ENV} ${RECIPIENT}" >> /var/log/cron.log' | crontab CMD ["cron", "-f", "-L", "15"] ``` Using native docker to run the above dockerfile, I could see the output in the logfile but not in kubernetes. Checked the events but didn't notice anything unusual. Below is the replicationcontroller yaml file used ``` apiVersion: v1 kind: ReplicationController metadata: labels: name: cron-test name: cron-test spec: replicas: 1 selector: name: cron-test template: metadata: labels: name: cron-test spec: containers: - name: cron-test image: example/cron-test:latest resources: limits: cpu: 100m memory: 512Mi imagePullPolicy: Always ``` Thanks. Answers: username_1: Please direct your questions to [stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes). We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases. We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on github is making it difficult for us to use issues to identify real bugs. The Kubernetes team scans stackoverflow on a regular basis, and will try to ensure your questions don't go unanswered. Before posting a new question, please search stackoverflow for answers to similar questions, and also familiarize yourself with: * [the user guide](http://kubernetes.io/v1.0/) * [the troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html) Again, thanks for using Kubernetes. The Kubernetes Team Status: Issue closed
YappyBots/YappyGitLab
505111155
Title: Add support for confidential issues Question: username_0: As requested by **ajgeiss0702#0702** on Discord. ![image](https://user-images.githubusercontent.com/13403210/66550341-ddefcc00-eb4d-11e9-95ce-54e87ad389d1.png) POST data is the same for both confidential and non-confidential issues. `X-Gitlab-Event` is different, however - `Confidential Issue Hook` instead of `Issue Hook`. There's a `object_attributes.confidential` field (values: `true`/`false`), however, when testing "Confidential issues events", GitLab might send an event for a non-confidential issue.
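To make the header difference concrete, a hypothetical Express-style handler sketch — the handler and setting names are made up, and only the `X-Gitlab-Event` values and the `object_attributes.confidential` field come from GitLab's webhook payload:
```
// Assumes express.json() body parsing is already applied
app.post('/webhook', (req, res) => {
  const event = req.get('X-Gitlab-Event');
  if (event === 'Issue Hook' || event === 'Confidential Issue Hook') {
    const confidential = req.body.object_attributes && req.body.object_attributes.confidential;
    if (confidential && !channelConfig.showConfidentialIssues) {
      return res.sendStatus(204); // channel opted out of confidential issues
    }
    handleIssueEvent(req.body); // hypothetical shared handler for both event types
  }
  res.sendStatus(200);
});
```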
dresende/node-orm2
463668023
Title: IF condition Question: username_0: Hi, sorry, but I'm somewhat confused about how exactly to write a query condition using IF and IS NULL. I have this particular SQL query: WHERE has_sent = 0 AND messenger_messages.create_date >= IF(users.last_login IS NULL, (Curdate() - interval 1 month), users.last_login + interval 5 minute) Hope you can help. Thanks!
cloudfoundry/cli
81122635
Title: cf runtime error Question: username_0: ran this command: cf push SkyKisses version of CLI: 6.11.2-2a26d55 runtime error: invalid memory address or nil pointer dereference with extended stack trace information (not shown) Answers: username_1: @username_0 What cf-release are you hitting? Can you push again with CF_TRACE=true and provide the output? Thanks username_0: [image: Inline image 1] username_2: Hey there, it looks like your image didn't upload, could you try again please? username_1: @username_0 Hey there, it looks like your image didn't upload, could you try again please? username_2: Closing this. If you continue to experience this, please feel free to reopen. Status: Issue closed
datamade/metro-pdf-merger
386405552
Title: the worker process died mysteriously Question: username_0: metro reported packets not being generated, so i shelled into the metro pdf merger server, and had a look at the logs. i found this: ```bash ubuntu@ip-10-0-0-208:~$ tail -n 200 /tmp/metro-pdf-worker-d-XSTH3EHWO-err.log ... Traceback (most recent call last): File "/usr/local/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/home/datamade/metro-pdf-merger-d-XSTH3EHWO/tasks.py", line 146, in run self.doWork() File "/home/datamade/metro-pdf-merger-d-XSTH3EHWO/tasks.py", line 149, in doWork msg = redis.blpop(REDIS_QUEUE_KEY) File "/home/datamade/.virtualenvs/metro-pdf-merger-d-XSTH3EHWO/lib/python3.5/site-packages/redis/client.py", line 1163, in blpop return self.execute_command('BLPOP', *keys) File "/home/datamade/.virtualenvs/metro-pdf-merger-d-XSTH3EHWO/lib/python3.5/site-packages/redis/client.py", line 578, in execute_command connection.send_command(*args) File "/home/datamade/.virtualenvs/metro-pdf-merger-d-XSTH3EHWO/lib/python3.5/site-packages/redis/connection.py", line 563, in send_command self.send_packed_command(self.pack_command(*args)) File "/home/datamade/.virtualenvs/metro-pdf-merger-d-XSTH3EHWO/lib/python3.5/site-packages/redis/connection.py", line 538, in send_packed_command self.connect() File "/home/datamade/.virtualenvs/metro-pdf-merger-d-XSTH3EHWO/lib/python3.5/site-packages/redis/connection.py", line 442, in connect raise ConnectionError(self._error_message(e)) redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused. ``` sure enough, the worker process had exited at some point the previous day. ```bash ubuntu@ip-10-0-0-208:~$ sudo supervisorctl status ... metro-pdf-merger-d-XSTH3EHWO:metro-pdf-merger RUNNING pid 1974, uptime 3 days, 6:34:30 metro-pdf-worker-d-XSTH3EHWO:metro-pdf-worker EXITED Nov 29 06:48 AM ... ``` there were no alerts in sentry or semaphor that our app had gone down. let's investigate why this happened, and consider putting alerts in place so we can handle this proactively if it happens again. Answers: username_1: @evz and I looked into *why* the worker died, in the first place. We did not find a logical explanation, though we did find a pretty unlikely one: Redis exited on November 29, and restarted itself. Could this brief millisecond, when redis was down, have collided with the same brief millisecond when the Metro PDF worker executed `msg = redis.blpop(REDIS_QUEUE_KEY)`? Possibly, but that seems improbable. ``` [1947 | signal handler] (1543474090) Received SIGTERM, scheduling shutdown... [1947] 29 Nov 06:48:10.278 # User requested shutdown... [1947] 29 Nov 06:48:10.278 * Saving the final RDB snapshot before exiting. [1947] 29 Nov 06:48:10.281 * DB saved on disk [1947] 29 Nov 06:48:10.282 * Removing the pid file. [1947] 29 Nov 06:48:10.282 # Redis is now ready to exit, bye bye... [4352] 29 Nov 06:48:11.874 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 3984 [Redis ASCII-art startup banner omitted: Redis 2.8.4 (00000000/0) 64 bit, running in standalone mode, port 6379, PID 4352, http://redis.io] [4352] 29 Nov 06:48:11.875 # Server started, Redis version 2.8.4 ``` Moving forward, let's be better about logging (as the issue suggests!), and if this occurs again, we might have a better sense of the reason. It seems like a reasonable place to add additional logging is in [the `run` function of the ChildProcessor](https://github.com/datamade/metro-pdf-merger/blob/master/tasks.py#L132) (i.e., the processor [that "does the work" processing messages from Redis](https://github.com/datamade/metro-pdf-merger/blob/master/tasks.py#L151)). username_0: I am betting that Redis got taken out by the OOM Killer (see https://github.com/datamade/la-metro-councilmatic/issues/738), then the worker process died because it couldn't connect to Redis.
arkivverket/noark5-tjenestegrensesnitt-standard
371806529
Title: Add a format code for FLAC to the list of archival formats? Question: username_0: According to the "Regulation with supplementary technical and archival provisions on the handling of public archives (the National Archivist's regulation)"[1], FLAC is accepted when depositing archives, but in the specification for the service interface FLAC is not included in the list of format codes[2]. This is type M701 in the metadata catalogue. According to the definition of the field in dokumentobjekt, "fixed values will be determined later". Could it be an idea to extend the list of format codes in the service interface to cover all formats in the National Archivist's regulation, or at least to add FLAC? See also issue #4 and change proposal #10 for a corresponding request for PNG and a proposal for a format catalogue with an accompanying change protocol/procedure, instead of maintaining the list of formats as part of the API specification. [1] https://lovdata.no/dokument/SF/forskrift/2017-12-19-2286 [2] https://github.com/arkivverket/noark5-tjenestegrensesnitt-standard/blob/master/kapitler/07-tjenester_og_informasjonsmodell.md#format [3] https://github.com/arkivverket/noark5-tjenestegrensesnitt-standard/blob/master/kapitler/07-tjenester_og_informasjonsmodell.md#dokumentobjekt Answers: username_0: To simplify vendor independence and cross-system interoperability, as well as the ability to migrate archive data seamlessly from one solution to another, the format codes must be identical in all such solutions. That is only possible if the format codes are standardised. I think Arkivverket (the National Archives) is the most natural body to maintain such standard code lists, and I therefore propose that these code values are standardised by Arkivverket. If what appears in the service interface specification is only meant as examples, that should be stated explicitly in the text, together with a reference to another source for the complete list of standardised format codes. That is the only way to ensure that archive clients and archive API servers can understand each other across systems when it comes to dokumentobjekt information. -- <NAME> <NAME> username_0: I have started such a register of format codes at https://gitlab.com/username_0/m701-noark5-katalog . It gives everyone who wants to standardise format codes a common place to collect code values. username_1: We do not see the point of taking this into the service interface. We would rather see a reference to the National Archivist's regulation of approved formats. username_0: The National Archivist's regulation lacks format codes, i.e. M701 values. They must be standardised in a common place to ensure interoperability across solutions. username_2: We are not satisfied with this local code list, and want to use the British National Archives' PRONOM (http://www.nationalarchives.gov.uk/PRONOM/), which identifies formats used by many in the archive and library sector, e.g. in DROID (Digital Record Object Identification), a tool for identifying formats. See http://www.nationalarchives.gov.uk/aboutapps/pronom/puid.htm. Our format specialist is now going through all the valid archival formats in the National Archivist's regulation to find their PRONOM identifiers. The PRONOM identifiers (PUIDs) are numbers (currently three digits). username_2: I think SOSI is the only one you will not find there. I do not know how we should handle that. PRONOM is widespread in archives and libraries, but that may simply be based on tradition. Beyond that I do not know whether it has any advantages over MIME types, and I do not know how DROID and JHOVE relate to MIME. username_2: As far as we have been able to establish, PRONOM has much finer granularity than the MIME types and is better suited for maintenance in an archival repository, while MIME types are suited for finding an application that can display the file. PRONOM is used extensively in archival circles. We have decided to use the PRONOM identifiers. We will provide a list of the PRONOM identifiers for all the valid archival formats. The value is written as an integer (decimal) without leading zeros; to avoid changes to the schema it is written as a text string. We are asking Kartverket to register SOSI in PRONOM. username_0: Note that the service interface will need a MIME type for all the formats in any case, since it is used in the HTTP response when uploading and downloading files. I have asked the SOSI secretariat to register a MIME type for SOSI with IANA. How will the list of PRONOM identifiers for all valid archival formats be maintained? Such a list is needed quickly to ensure interoperability across different API implementations, and so that API clients know what to expect from the API. username_0: Where can I find what the format specialist concluded after going through all the valid archival formats in the National Archivist's regulation to find their PRONOM identifiers? Having looked a bit at PRONOM, I propose that the PUID in the format field in Noark 5 is stored as "fmt/123", so that it is easy to recognise it as a PUID. username_0: I checked with PRONOM, and nothing about SOSI had been received there yet. I therefore submitted a proposal to PRONOM asking that SOSI be given a format code, and have received reference number TNA1555078202S60 from PRONOM for my enquiry. We will see whether the PRONOM code we need turns up. :) I hope the SOSI secretariat, if they wish to contribute to the PRONOM description, follows up and submits its own enquiry via the form available at <URL: https://www.nationalarchives.gov.uk/contact-us/submit-information-for-pronom/ >. -- Best regards, <NAME> username_3: This is good. At the National Archives (AV), <NAME> is handling this on our side, and the following was agreed on 24/06/19: PRONOM - some decisions from today: - We (for now, I) maintain the mapping between the regulation and PRONOM formats. - We (I) proofread and synchronise with KDRS and the Noark service interface - The mapping will be published on our website this autumn (we will find a place for it) - Whether it should later go into the regulation itself, we will come back to - KDRS maintains its own table, but we keep the dialogue going - The Noark service interface phases out M701 and "RA-" codes and moves to PRONOM codes - AV administers this - These points will be on the agenda at the Forvaltningsforum/Filformater meeting in September. username_0: I thought the list of PRONOM codes was the description of the legal values in M701, which has been promised for roughly 10 years, not that M701 would be phased out. Have I misunderstood? If so, the description of the format field (M701) in the service interface is wrong. -- Best regards, <NAME>
pytest-dev/pytest
1044190256
Title: `pytest.Instance` is not documented in the API Reference Question: username_0: The `Instance` collector type is part of the public API and is passed to some hooks, e.g. `pytest_pycollect_makeitem`, so we should explain what it is (it is not very obvious!). Answers: username_0: Looking at `Instance`, I think we might want to get rid of it instead of publicizing it more. I'll post a proposal for this after #9273 is merged. username_1: @username_0 I have wanted to remove `Instance` for more than 5 years now ^^ and at the same time introduce FunctionDefinition more directly username_0: This is made irrelevant by #9277. Status: Issue closed
ioBroker/ioBroker.modbus
909149835
Title: New Scale Factor not used immediately, old one used instead Question: username_0: **Describe the bug** I have a Fronius Symo Gen24 with a Fronius SmartMeter and I am reading all important data via Modbus TCP. This is set as int+SF and I'm using the formula and SF feature to calculate the real value out of raw+SF. I transferred the values of interest from an Excel file provided by Fronius to CSV and imported it into the adapter Holding Registers. For most values, the SF has a higher address and is after the int value in the register, for a few it's the other way round. Problem: When the SF is changed, the adapter uses the old SF to calculate the real value out of the int value. On the next poll, when the SF is not changed again, the calculated value is correct. But whenever the SF is changed, the calculated value at the poll where the SF changed is wrong by one order of magnitude. See the screenshots for an example. **Screenshots & Logfiles** My settings: ![image](https://user-images.githubusercontent.com/37954747/120431569-f80c9c80-c378-11eb-8ebd-57a43f654326.png) Part of the Holding Registers with both examples, SF after (e.g. 40083) or before (e.g. 40274) the int value. ![image](https://user-images.githubusercontent.com/37954747/120431668-14a8d480-c379-11eb-81aa-91ff0be96476.png) Example data from influxdb with the jumps marked ![image](https://user-images.githubusercontent.com/37954747/120432125-c0eabb00-c379-11eb-9df5-fb46c7eff6db.png) Corresponding SF logged with influxdb; you see that the jumps above are exactly at the times where the SF changed. ![image](https://user-images.githubusercontent.com/37954747/120432206-df50b680-c379-11eb-9ce1-53f5c75a3798.png) Both values above in Grafana. Even worse, if the SF changes only for one poll and then back, two values in sequence are wrong (example 1 on the Grafana plot). ![image](https://user-images.githubusercontent.com/37954747/120431443-c8f62b00-c378-11eb-9915-25b2bb3b48cc.png) **Versions:** - Adapter version: 3.3.0 - JS-Controller version: 3.2.16 - Node version: 12.20.1 - Operating system: Docker in Synology DS916+ with DSM 6.2.4-25556 **Additional context** The problem is not observed for the values where the SF is before the int value. Maybe a workaround would be to put all SFs first, but how can I change the order of the Holding Registers settings without losing the datapoints created? Answers: username_1: Please enable debug log and post a debug log of such an effect username_0: Thank you. I reproduced the issue with debug logs. Logs (including startup of the adapter at the beginning) [iobroker.2021-06-03.log.txt](https://github.com/ioBroker/ioBroker.modbus/files/6589307/iobroker.2021-06-03.log.txt) Error: 2021-06-03 07:55:31.334 - debug: modbus.0 (2890) Poll holdingRegs DevID(200) address 40087 - 5 bytes 2021-06-03 07:55:31.354 - debug: modbus.0 (2890) Input Value = -14701 2021-06-03 07:55:31.354 - debug: modbus.0 (2890) Formula = x*Math.pow(10, sf['40091']) 2021-06-03 07:55:31.355 - debug: modbus.0 (2890) Scale factor value stored from address 40091 = -2 But at 07:55:31, -1470.1 is stored in the data point instead of -147.01, though the scaling factor is correct according to the debug logs.
![image](https://user-images.githubusercontent.com/37954747/120595809-44241380-c443-11eb-9faf-2fb2b54dd364.png) username_0: New debug log together with the influxdb log to confirm that the wrong value is received: 2021-06-03 09:40:10.804 - debug: modbus.0 (9007) Input Value = -27059 2021-06-03 09:40:10.805 - debug: modbus.0 (9007) Formula = x*Math.pow(10, sf['40091']) 2021-06-03 09:40:10.805 - debug: modbus.0 (9007) Scale factor value stored from address 40091 = -1 2021-06-03 09:40:10.814 - debug: influxdb.0 (8992) Min-Delta reached modbus.0.holdingRegisters.200.40087_W, last-value=-331.9, new-value=-27059, ts=1622706010810 Scale factor before was 0. username_0: Me again. I extended the logging as follows (I hardcoded reading the current SF for one example) `if (regs.config[n].formula) { adapter.log.debug('Input Value = ' + val); adapter.log.debug('Formula = ' + regs.config[n].formula); try { // calculate value from formula or report an error const func = new Function('x', 'sf', 'return ' + regs.config[n].formula); adapter.log.debug('Raw value: ' + val); val = func(val, scaleFactors[regs.deviceId]); adapter.log.debug('Scale facture currently stored: ' + scaleFactors[regs.deviceId]['40091']); adapter.log.debug('Value after formula: ' + val); val = Math.round(val * options.config.round) / options.config.round; adapter.log.debug('Final value: ' + val);` debug log: 2021-06-03 11:00:10.494 - debug: modbus.0 (4053) Input Value = -31809 2021-06-03 11:00:10.494 - debug: modbus.0 (4053) Formula = x*Math.pow(10, sf['40091']) 2021-06-03 11:00:10.495 - debug: modbus.0 (4053) Raw value: -31809 2021-06-03 11:00:10.495 - debug: modbus.0 (4053) Scale facture currently stored: 0 2021-06-03 11:00:10.496 - debug: modbus.0 (4053) Value after formula: -31809 2021-06-03 11:00:10.496 - debug: modbus.0 (4053) Final value: -31809 2021-06-03 11:00:10.497 - debug: modbus.0 (4053) Scale factor value stored from address 40091 = -1 It confirms that on a change of the SF, the adapter uses the old value because it reads the new value later. username_0: I'll switch to German for a moment. I have now implemented a workaround that catches the problem for me for the most part, but not in all cases. From my point of view, though, this is a fundamental problem and certainly a peculiarity of Modbus and int+SF. Workaround: in the function pollFloatBlock() the for loop is now run through twice. In the first pass only the regs.config elements where isScale is true are processed, and the scaleFactor is set accordingly. In the second iteration the rest is done, so that the SFs are written into the sf array before the calculation. Originally I wanted to do it smarter, namely not taking the SF from the array but directly from the current block that was just read. That only works, however, if int and SF are in the same block, which also applies to the workaround above, and that brings us to the fundamental problem. From my point of view, for int and SF to always be in sync, both must be read within the same block. The smarter solution from above does not work for me because in one case that is not true. There, all SFs are read first in one block and the ints afterwards. That is the only case where, in the holding register, the SFs all come together before the ints, because several ints use the same SF; in all other cases the SF always comes after the int.
Now one might think: fine, if the SFs are read as a block before the ints, then they are updated first. Nevertheless it happens that SF and int are not in sync, because by the time the ints are read the SF has already changed again. If one could somehow manage that the int and its corresponding SF are read in one contiguous block, the SF could be read for the calculation directly from that block. In my opinion that would be the cleanest solution. What do you think? I could at least submit my workaround as a pull request as a basis for discussion, if that helps you. At least it does not break the existing solution, it only minimizes the problem. username_1: @username_2, I think the logic for formulas and scaling was provided in a PR from you ... Do you have an opinion here? username_0: Finally, I managed that all ints and corresponding SFs are read in the same block. It was not the case for one part, where the int values for both strings (MPPT1 and 2) and battery charge and discharge shared the same SFs and the SFs came first. I checked the code to understand how the blocks are determined and found out that in case there is a gap of more than 10 bytes between two addresses, a new block is generated. And that was the case above, as between the SFs and each block of MPPT1, etc. I had a gap of 15 bytes. I changed my holding register table and added the ID String of 8 bytes; now the gap is less than 10 bytes, everything is read in one single block, and my workaround is working. The smarter solution would also work. To sum up: I could identify and solve the issue on my side. I would like to raise a PR with this solution. Additional info: Here you see the holding register of the situation described above. After adding the Input ID, the gap between the blocks is less than 10 bytes and everything is read within one block. int and SF are synchronous. ![image](https://user-images.githubusercontent.com/37954747/120753990-51f19b80-c50c-11eb-99e2-06dfbc095fd5.png) To get an idea of the severity of this problem, here you see a Grafana plot from this morning where the above solution was not applied. You see the big jumps by about one order of magnitude in both directions, which doesn't make sense physically. ![image](https://user-images.githubusercontent.com/37954747/120754224-a7c64380-c50c-11eb-9512-94b4fb120a03.png) Last but not least, we are not alone: https://community.openenergymonitor.org/t/logging-solaredge-inverter-data-using-modbus-over-tcp/16341 https://github.com/erikarenhill/solaredge-modbus-hass https://www.loxforum.com/forum/german/software-konfiguration-programm-und-visualisierung/50841-solaredge-wechselrichter-einbinden/page7 username_2: @username_0 Thanks for your detailed analysis. I think there would be another special case to be considered. That is the case where the maximum number of bytes per block is reached and, because of physical limits, the read must be split into multiple blocks. This case we cannot cover. I see some ways to solve it: 1. Do 2 reads, first all scale values and then directly after that the read of the data -> there is still a small risk of a change in the scale factor in that time period, which might be up to 1 or 2 sec depending on the speed of the device and the number of datapoints 2. Read first all data, buffer the data till everything is read, then evaluate the SF and later the formulas 3. Read the datablock and evaluate.
Whenever the system uses an SF from an address which is higher than the current one, the system either directly reads it from the current buffer or buffers the data to process after everything is read... Solutions 2 and 3 are quite tricky to handle. So for me the question is whether that is really needed. I have the feeling that even option 1 should solve the issue for situations which are not too dynamic... username_0: @username_2 Thank you for your suggestions. In my opinion, int+SF must be read within one block; that means when the read is split into multiple blocks, this should be added to the split logic besides the 10-byte gap and the maximum number. In my opinion, it must be possible in all cases, otherwise the holding register is waste and wrongly implemented. 1. In my opinion, this solution will not work. This is what I have in the example above. First, all SFs are read and then the ints. And you can see nicely in the Grafana plot how often it fails. 2. I don't understand it. If int+SF are not read in the same block, in general you have no chance to find out whether both are synchronous. 3. I would generally read it from the buffer, in case it is possible to ensure that int+SF are in the same block. But the sf array, updated first before applying the formulas, might be better as it also covers the cases where the SF cannot be read in the same block. For me, option 1 would not work. I currently have option 3 running, defining the holding register table in a way that the current split logic keeps int+SF within the same block. Of course, it would be nice to have this done automatically, or at least to give a warning to the user if it is not the case. username_2: Btw. I just found out why I did not see this issue. I am using float instead of int+sf. That brings the benefit that in all cases the scale factor needed is before the value to be scaled, and fewer parameters need calculation compared to int+sf username_0: This was also my first idea, but unfortunately, even in the float register, int+SF is used anyway, e.g. for the example above :( Honestly spoken, in my humble opinion the actual error is the dynamic SF, which doesn't make sense technically and physically. int + static SF would do it. To put it bluntly: whoever designed that holding register was an engineer with no idea about physics or programming. username_2: @username_0 I fully agree with your opinion about that dynamic SF. That is strange here... It is true that int+sf is also used for floats, but there at least the order is always guaranteed: the scale factor comes before the value to scale. Splitting into blocks is another issue. Btw. I didn't understand correctly what you meant by "reading scale factors". I thought it was just done in the current block and not through all blocks. How big was the time gap between reading the SF and the rest of the data? Was it really just 1 or 2 seconds, or was that still 2 cycles (e.g. 30 sec in between)? username_0:
2021-06-03 12:22:57.112 - debug: modbus.0 (8356) Poll holdingRegs DevID(1) address 40332 - 27 bytes -> last block with int for Discharge So, the time gap between reading SF and reading last block which uses this SF is ~130 ms in this example. To my information, the register is written every second, so the chance that int+SF changes between reading SF and reading int is not so small. username_2: Ok, but then it is even not ensured in case you are reading all in one block because i am not sure if they have implemented a latching to ensure that nothing changes during the full holding register read... But for sure the risk is much lower. The question is then how to implement. Issue a warning in case the sF is not in the current block during execution on every cycle? username_2: I will issue a ticket to Fronius. Maybe they are willing to solve the issue in any of the future releases... username_0: Good question. In my opinion modbus must ensure that int+SF are in sync if they are read in one block, otherwise you can throw this protocoll in the bin. To give one example, around 8 a.m. I changed my settings that int+SF are read in one block and you see that the issue did not occur so far. ![image](https://user-images.githubusercontent.com/37954747/120767332-6721f680-c51b-11eb-901d-c0c1b156ac05.png) So, maybe we can asume for now, that if int+SF are read in the same block, they are in sync (I will give an update tomorrow if it never occurred). Regarding ticket to Fronius: They implemented the sunspec which also used by other OEMs like SolarEdge. First solution would be to update SF in the sf array first before applying the formulas, when looping through a block. This is what I have currently changed yesterday, and it's working so far. Second would be either how the split the blocks that int+corresponding SF are in the same block (and give a warning to the customer if it is technically not possible) or at least give a warning/hint to the customer that he/she must adjust the settings. username_1: Great progress guys, thank you. I think you are way more deep in that topic then me ... I would be happy to get a PR username_2: @username_0 Could you try my fix and see if that is how you would expect? Thanks username_0: @username_2 I will do it tomorrow. Wanted to raise a PR myself, but you were faster :) and Visual Studio was on strike :( (to be honest it's my first PR) At least, I checked your code, looks very similar to my suggestion. Thanks a lot for the warning, in addition. Questions: Why do you process the formula part in the first iteration? Is a scale factore used together with formula? My first iteration is simply: `for (let n = regBlock.startIndex; n < regBlock.endIndex; n++) { let id = regs.config[n].id; let val = common.extractValue(regs.config[n].type, regs.config[n].len, response.payload, regs.config[n].address - regBlock.start); // If this value is used as scale factor => store it if (regs.config[n].isScale) { scaleFactors[regs.deviceId][regs.config[n].address] = val; adapter.log.debug('Scale factor value stored from address ' + regs.config[n].address + ' = ' + scaleFactors[regs.deviceId][regs.config[n].address]); } }` Is there a reason for choosing 10 bytes as decision gap size to start a new block or not? username_2: I thought that not processing the formula in this case without giving a warning is also not a good idea, therefor is simply processed the formula as well. Time wise it will not hurt as only the id's which are marked as sf are processed. 
Regarding the 10bytes gap: I think thats simply a guess where it make sense to split. But in my case anyhow multiple blocks must read as i have the first block at adress 40000 starting and the last dataset i need is on 40378. And at least on my side approx. 120 registers is the maximum which can be read at once. username_0: I applied your commit and removed one of the IDs from the holding register to check if the warning is working: 2021-06-05 08:31:46.722 - debug: modbus.0 (4515) Poll holdingRegs DevID(1) address 40332 - 27 bytes 2021-06-05 08:31:46.742 - debug: modbus.0 (4515) Input Value = 0 2021-06-05 08:31:46.743 - debug: modbus.0 (4515) Formula = x*Math.pow(10, sf['40255']) 2021-06-05 08:31:46.743 - debug: modbus.0 (4515) Scalefactor adress is = 40255 2021-06-05 08:31:46.743 - debug: modbus.0 (4515) Scalefactor adress is inside current read range = false 2021-06-05 08:31:46.743 - warn: modbus.0 (4515) The current range for reading the values was from adress 40332 up to adress 40359! 2021-06-05 08:31:46.743 - warn: modbus.0 (4515) Please make sure to configure the read process that both adresses are read in the same block! 2021-06-05 08:31:46.743 - warn: modbus.0 (4515) The used scaleFactor from address 40255 is not inside the same read block as the parameter on address 40332 Here, we have the data of the last 24 hours since I use my workaround which works basically same as yours. ![image](https://user-images.githubusercontent.com/37954747/120882603-31434780-c5d9-11eb-8c79-e267b94098a0.png) The jumps did not occur within the last 24 hours, though scaling factor changed a lot. ![image](https://user-images.githubusercontent.com/37954747/120882636-5f288c00-c5d9-11eb-9607-5c85f6fc61c0.png) I will let your commit run and give an update by tomorrow. All in all, thanks a lot for your help, discussion and solution. For sure, we can think about improvements, e.g. to inform the user about int+SF not beeing in the same block when saving the register config. Or to improve the splitting in a way, that this criterion is also covered. But this might not be so easy. username_2: @username_0 Hope that this fix works fine. If you have time, then it would perfect if you could think about a process of defining the blocks automatically in a way that the scaling factors are in the same block as the formulas. In addition checking this behaviour in the admin and give a hint to the user how to solve it (e.g. needed block sizes ...) username_0: @username_2 With your commit issue did not occur within the last 24 hours though SF changed a lot. From my side, you have the go for a PR Regarding taking int+SF in same block into account of splitting the request, I'm already thinking about, but I guess it's not an easy task - even as I only know my case and the holding register of Fronius (which schould basically the same for all which are using the sunspec). My general thoughts: When it loops through the config, it should check if the formula contains sf or if the register itself is marked as sf. In case this is true, it should not split the block until the corresponding sf or all registers using this sf are found (so it should overrule the 10 byte criterion, sure, the maxBlock criterion always wins). If sf comes after the int, this should be easy (assuming that all ints using the sf have lower addresses than the sf itself). In case, sf comes first and than all int using it, it's a little bit tricky, as you need to look for the int using this sf with the highest address. 
An additional challenge are cases where the int with its SF, or the SF itself, is part of the preceding block (gap lower than 10 bytes); then the block cannot be split where the SF was found, but the block gets too big. In those cases the preceding block should be split even though the 10-byte criterion did not match. I'm thinking about whether three criteria for splitting would be better, in the following priority: max block size, int+SF in the same block, minimize the number of blocks. Maybe this is a task for a recursive solution. username_2: @username_0 I think in general your idea is good. I might do it in the following way: - Loop through the config and calculate for each SF the upper and lower limit where it is used. E.g. you find a formula which uses SF 4005, then store this value to SF 4005 as either minimum or maximum, depending on whether a value is already set or not. In case both values are currently not set, then both are set to 4005 - After that loop is finished, you can calculate the possible blocks according to these limits. If it is not possible to create a valid config, just raise a warning after splitting as well as possible. Maybe also give a hint about changing maxBlockSize to a certain value... - The split on the 10-byte gap I would also see as the lowest priority username_3: Fixed with https://github.com/ioBroker/ioBroker.modbus/pull/128 Status: Issue closed
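For readers following the thread, here is a minimal JavaScript sketch of the two-pass idea discussed above: store every register flagged as a scale factor into the `sf` map before evaluating any formula from the same polled block. The entry shape and function names are simplified assumptions for illustration, not the adapter's actual `pollFloatBlock()` code; only the `new Function('x', 'sf', ...)` formula convention mirrors what is quoted in this issue.

```js
// Simplified stand-in for one polled block: each entry carries an address,
// a raw value extracted from the Modbus payload, and either isScale
// (it feeds the sf map) or a formula that may reference sf['<address>'].
function evaluateBlock(entries, scaleFactors) {
  // First pass: update every scale factor delivered in this block, so that
  // formulas evaluated in the same poll never see a stale value.
  for (const entry of entries) {
    if (entry.isScale) {
      scaleFactors[entry.address] = entry.raw;
    }
  }

  // Second pass: evaluate formulas the same way the adapter does
  // (new Function('x', 'sf', 'return ' + formula)), now that sf is current.
  const results = {};
  for (const entry of entries) {
    if (entry.isScale) continue;
    if (entry.formula) {
      const func = new Function('x', 'sf', 'return ' + entry.formula);
      results[entry.address] = func(entry.raw, scaleFactors);
    } else {
      results[entry.address] = entry.raw;
    }
  }
  return results;
}

// Example with the register layout from the bug report (int at 40087, SF at 40091):
const sf = {};
const block = [
  { address: 40087, raw: -14701, formula: "x*Math.pow(10, sf['40091'])" },
  { address: 40091, raw: -2, isScale: true },
];
console.log(evaluateBlock(block, sf)); // -> { '40087': -147.01 } (up to floating-point rounding)
```

Reading the scale factor straight from the block that was just polled only works when a value and its scale factor arrive in the same block, which is exactly why the thread also discusses constraining the block-splitting logic.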
kubeflow/mpi-operator
398765594
Title: Failed to create MPIJob due to invalid values Question: username_0: Hi. I am new and trying to run the tensorflow-benchmark using argo-events (https://github.com/argoproj/argo-events). When the K8s CustomResourceDefinition (CRD) tries to launch the MPIJob, it also includes empty arrays such as below: 1. parameters with empty array such as spec.arguments, spec.arguments.templates (input,output etc) (removed private information) `apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: clusterName: "" creationTimestamp: 2019-01-14T04:52:00Z generateName: sch201901141352001-mwf20190036- generation: 1 labels: pipeline-id: MWF20190036 schedule-id: SCH201901141352001 name: sch201901141352001-mwf20190036-cq66b spec: arguments: {} entrypoint: workflow-steps templates: - inputs: {} metadata: {} name: workflow-steps outputs: {} steps: - - arguments: {} name: job20190038 template: job20190038 - activeDeadlineSeconds: 36000 inputs: {} metadata: labels: job-id: JOB20190038 pipeline-id: MWF20190036 schedule-id: SCH201901141352001 name: job20190038 outputs: {} resource: action: create failureCondition: status.launcherStatus == Failed manifest: | apiVersion: kubeflow.org/v1alpha1 kind: MPIJob metadata: annotations: null clusterName: null creationTimestamp: null deletionGracePeriodSeconds: null deletionTimestamp: null finalizers: null generateName: null generation: null initializers: null labels: null name: sch201901141352001-mwf20190036-job20190038-mpijob namespace: null ownerReferences: null resourceVersion: null selfLink: null uid: null spec: [Truncated] command: null ..... ` 2. error log received (removed private information) `message: |- The MPIJob "sch201901141352001-mwf20190036-job20190038-mpijob" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"kubeflow.org/v1alpha1", "kind":"MPIJob", "metadata":map[string]interface {}{"finalizers":interface {}(nil), "name":"sch201901141352001-mwf20190036-job20190038-mpijob", "ownerReferences":interface {}(nil), "selfLink":"", "uid":"21944a2d-17b8-11e9-b830-9c713a20a9b0", "clusterName":"", "labels":interface {}(nil), "resourceVersion":interface {}(nil), "annotations":interface {}(nil), "generateName":interface {}(nil), "creationTimestamp":"2019-01-14T04:52:01Z", "generation":1}, "spec":map[string]interface {}{"backoffLimit":interface {}(nil), "gpus":interface {}(nil), "launcherOnMaster":interface {}(nil), "replicas":2, "template":map[string]interface {}{"metadata":interface {}(nil), "spec":map[string]interface {}{"initContainers":interface {}(nil), "nodeName":interface {}(nil), "serviceAccountName":interface {}(nil), "subdomain":interface {}(nil), "terminationGracePeriodSeconds":interface {}(nil), "tolerations":interface {}(nil), "volumes":interface {}(nil), "activeDeadlineSeconds":interface {}(nil), "dnsPolicy":interface {}(nil), "hostname":interface {}(nil), "priorityClassName":interface {}(nil), "schedulerName":interface {}(nil), "affinity":interface {}(nil), "hostAliases":interface {}(nil),"nodeSelector":interface {}(nil), "restartPolicy":interface {}(nil), "containers":[]interface {}{map[string]interface {}{"volumeMounts":[]interface {}{map[string]interface {}{"mountPropagation":interface {}(nil), .... 
, "securityContext":interface {}(nil), "lifecycle":interface {}(nil), "ports":interface {}(nil), "terminationMessagePath":interface {}(nil)}}, "dnsConfig":interface {}(nil), "priority":interface {}(nil), "securityContext":interface {}(nil), "serviceAccount":interface {}(nil)}}}, "status":interface {}(nil)}: validation failure list: must validate one and only one schema (oneOf) name: sch201901141352001-mwf20190036-cq66b[0].job20190038 phase: Failed startedAt: 2019-01-14T04:52:00Z templateName: job20190038 type: Pod phase: Failed ` Is there any way to work around the error for invalid values? Many Thanks!<issue_closed> Status: Issue closed
dotnet/SqlClient
806532315
Title: SQL Server handle string literal with N' prefix Question: username_0: Is there a way in SqlClient to perform the below example insert query with chinese characters using a command? ```SQL INSERT INTO [dbo].[ProjectTable] (ProjectId, ProjectName) VALUES ('9781728', N'測試'); ``` The db collaction is: SQL_Latin1_General_CP1_CI_AS Answers: username_1: Hi @username_0 , You can simply use SqlCommand with the same query you use above to insert the Chinese characters into nvarchar column. Sample is like this: ``` string sqlQuery = "INSERT INTO [dbo].[User] (Id, ProjectName) VALUES (12345, N'測試')"; using (SqlConnection conn = new SqlConnection(connStr)) { conn.Open(); SqlCommand command = new SqlCommand(); command.Connection = conn; command.CommandText = sqlQuery; command.ExecuteNonQuery(); Console.WriteLine("Data inserted."); } ``` username_0: Ok, but that would mean the sqlQuery is created from a string concatenation? Any chance SqlParameterCollection can be used? username_0: I've ran this example code ``` C# using (SqlConnection conn = new SqlConnection(connectionString)) { conn.Open(); var item = new { LastName = "測試", FirstMidName = "測試", EnrollmentDate = DateTime.Now }; SqlCommand command = new SqlCommand(); command.Connection = conn; command.CommandText = @$"INSERT INTO [ContosoUniversity].[dbo].[Student] (LastName, FirstMidName, EnrollmentDate) VALUES (@LastName, @FirstMidName, @EnrollmentDate)"; command.Parameters.Add("@LastName", SqlDbType.NVarChar).Value = item.LastName; command.Parameters.Add("@FirstMidName", SqlDbType.NVarChar).Value = item.FirstMidName; command.Parameters.Add("@EnrollmentDate", SqlDbType.DateTime).Value = item.EnrollmentDate; command.ExecuteNonQuery(); Console.WriteLine("Data inserted."); } ``` In SQL Profiler I now see this: ```SQL exec sp_executesql N'INSERT INTO [ContosoUniversity].[dbo].[Student] (LastName, FirstMidName, EnrollmentDate) VALUES (@LastName, @FirstMidName, @EnrollmentDate)',N'@LastName nvarchar(2),@FirstMidName nvarchar(2),@EnrollmentDate datetime',@LastName=N'測試',@FirstMidName=N'測試',@EnrollmentDate='2021-02-12 09:42:46.853' ``` So apparently there's already something automated in place. Thanks! Status: Issue closed
Altinn/altinn-studio
573794156
Title: titletext is missing for components in the app frontend Question: username_0: ## Describe the bug titletext is missing for components in the app frontend. ## To Reproduce Steps to reproduce the behavior: 1. Log in to the SBL test environment and instantiate a deployed app 2. Title text for components shows the keys and NOT the values for the keys ## Expected behavior The titletext should have the values for the defined keys and the key itself. ## Additional info Env: At22 App: ttd/apps-test Browser: Chrome 80 Answers: username_0: the texts are still not as expected. username_0: verified. closing the issue. Status: Issue closed
kids-first/kf-portal-ui
857994831
Title: Fix participant link in variant search page and in variant entity page Question: username_0: For the Variant Search page table: - if one of the studies has < 10 participants => no link on the participants. - if > 10 participants for all studies: fetch the participant ids and add the link. For the Variant Entity page table: - if one of the studies has < 10 participants => no link on any of the participants per study and no link on the total. - if > 10 participants for all studies: fetch the participant ids and add the link to all. Status: Issue closed
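A compact way to express this rule (participant counts become links only when every study in the row has enough participants) is sketched below in JavaScript. The study shape, the treatment of exactly 10 participants, and the link URL are illustrative assumptions, not the portal's actual code.

```js
// Assumed study shape: { id, participantIds }, where participantIds is the
// list of participant identifiers fetched for that study.
const LINK_THRESHOLD = 10;

// Per the issue, a single study with < 10 participants disables links for
// the whole row (exactly 10 is treated as linkable in this sketch).
function canLinkParticipants(studies) {
  return studies.every(s => s.participantIds.length >= LINK_THRESHOLD);
}

// Render either a plain count or the count wrapped in a (hypothetical) link.
function participantCell(study, linkable) {
  const count = study.participantIds.length;
  if (!linkable) return String(count);
  return `<a href="/search/participants?ids=${study.participantIds.join(',')}">${count}</a>`;
}

// Example: one small study disables links for every study in the row.
const studies = [
  { id: 'SD_1', participantIds: Array.from({ length: 25 }, (_, i) => `P${i}`) },
  { id: 'SD_2', participantIds: Array.from({ length: 4 }, (_, i) => `Q${i}`) },
];
const linkable = canLinkParticipants(studies);
console.log(studies.map(s => participantCell(s, linkable))); // [ '25', '4' ], no links
```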
OSC/ood-myjobs
204947360
Title: OOD-ify Job Constructor Question: username_0: OSC specific items in job constructor: 1. use of osc-machete which auto-sets Account String to primary group Torque specific items in job constructor: 1. osc-machete's default job adapter uses torque 2. when filtering list of clusters to use, we do this: <br>`c.resource_mgr_server.is_a?(OodCluster::Servers::Torque)`<br>https://github.com/AweSim-OSC/osc-jobconstructor/blob/f11798a42816aa7d558502a840ab028ce860ad30/config/initializers/ood_appkit.rb#L Minimal steps to remove OSC specific items (but retain Torque Specific items): - [x] 1. Apply the fix to update_status https://github.com/AweSim-OSC/documentation/wiki/Port-an-existing-app-to-OOD#5-explicitly-update-status-of-jobs-on-each-request - [x] 2. Create a OSC::Machete::TorqueAdapter that uses ood_job gem object to pass into all places OSC::Machete::Job.new is called: Workflow#build_jobs and Job#job https://github.com/OSC/osc_machete_rails/blob/57b19d30eccb748162f11f9e4501319a137a0860/lib/osc_machete_rails/statusable.rb#L123-L125 - [x] 3. Add custom Account String field to Job Options for jobs and save this in the Workflow#job_attrs. The custom OSC::Machete::TorqueAdapter that uses ood_job should have the custom script attributes added here - [x] 3a. Display Account string and helpful information in detail pane - [x] 4. Remove OSC specific job templates from the app into a separate repo which are both OSC and Torque specific - [x] 5. Ensure app works when templates directory is empty or does not exist - [x] 6. Update error messages in osc_machete_rails to say "An error occurred" instead of "A PBS::Error occurred" https://github.com/OSC/osc_machete_rails/blob/57b19d30eccb748162f11f9e4501319a137a0860/lib/osc_machete_rails/workflow.rb#L116-L122 - not doing now - mentioned in https://github.com/AweSim-OSC/osc-jobconstructor/issues/159 - [x] 7. Revisit "updating workflows before delete". https://github.com/OSC/ood_job/issues/8 _its a hard problem_<issue_closed> Status: Issue closed
ctimmerm/axios-mock-adapter
386183563
Title: Using regex in config parameters Question: username_0: Using the library, there is a possibility to have Regex in URL. But what about the Regex in config params? For example, I could use something like this: ``` mockAdapter .onPost(/^\/api\/es\/search/, { size: /d+/, // Any number I want query: { 'query' } }) .reply(200, {}); ``` Answers: username_1: I've submitted a PR that should support your use case; #181. Example: ```js mockAdapter .onPost(/^\/api\/es\/search/, expect.objectContaining({ size: expect.stringMatching(/d+/), query: { 'query' } })) .reply(200, {}); ``` Status: Issue closed
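For context, here is a hedged usage sketch of how such a matcher behaves at request time. It assumes Jest's `expect` asymmetric matchers are available and that the body-matching support from #181 is present in the installed version; the request bodies are purely illustrative.

```js
const axios = require('axios');
const MockAdapter = require('axios-mock-adapter');

const mockAdapter = new MockAdapter(axios);
mockAdapter
  .onPost(/^\/api\/es\/search/, expect.objectContaining({
    size: expect.stringMatching(/^\d+$/), // any digit string
    query: { query: 'name' },             // illustrative exact sub-match
  }))
  .reply(200, {});

// Matches: size is a digit string and query is deeply equal, so the mock replies 200.
axios.post('/api/es/search', { size: '25', query: { query: 'name' } })
  .then(res => console.log(res.status)); // 200

// Does not match the handler above (size is not purely digits), so the adapter
// treats it as an unhandled request.
axios.post('/api/es/search', { size: 'twenty', query: { query: 'name' } })
  .catch(() => console.log('no matching mock'));
```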
quarkusio/quarkus
705925652
Title: 0.0.0.0:8080 does not resolve on windows Question: username_0: **Describe the bug** on linux and osx you can just click the link printed in console: 0.0.0.0:8080 and the browser will figure it out - but on windows that gives an error: Can't reach this page. **Expected behavior** that users can copy/paste/click the urls shown in console to open browser **Actual behavior** broken on windows. ...could we convert 0.0.0.0:8080 to 127.0.0.1:8080 on windows or would that be misleading as its technically bound to any of the interfaces on the machine. just wondering if there is a way to make the links work on windows. Answers: username_1: @username_0 A ping to 0.0.0.0 on Linux actually resolves to 127.0.0.1. Under windows this address is not pingable. IMHO it would be more concise to show 127.0.0.1 in the console to click on. That should work locally for all operating systems. And non-locally the 0.0.0.0 does not work for any OS :-) username_0: I think that's what I suggest - convert 0.0.0.0 to 127.0.0.1 on console print. Btw. 127.0.0.1 doesn't work anywhere but locally thus not sure what you mean by non-locally. username_1: I would print 127.0.0.1 instead of 0.0.0.0 for all operating systems. Not just for windows. That 0.0.0.0 is redirecting to 127.0.0.1 on Linux seems to be a hack. In my comment I said that 0.0.0.0 (and not 127.0.0.1) does not work remotely. Therefore, for remote access you don't gain anything if you display 0.0.0.0 in the console (and e.g. copy it). Conclusion: as I am forced to use Windows at work I would be happy to see a link in the console that is working. Every co-worker that starts with Quarkus just wonders why this link that you see there is not working out-of-the-box. And you know: the first impression is the most important one. username_2: Hi @username_0 , do you happen to know who should I ping for a review on this PR? username_0: Pinged on the pr! Thanks!
nhl/link-move
391672483
Title: Upgrade Jackson to 2.9.5 Question: username_0: Need to upgrade Jackson version. There's a [CVE](https://github.com/nhl/link-move/network/alert/pom.xml/com.fasterxml.jackson.core:jackson-databind/open) for the older version. Also would be nice to align versions with Bootique.<issue_closed> Status: Issue closed
paritytech/substrate
817401712
Title: Could not compile Question: username_0: error[E0277]: the trait bound `sp_runtime::generic::Block<sp_runtime::generic::Header<u32, BlakeTwo256>, OpaqueExtrinsic>: sp_runtime::traits::Block` is not satisfied --> node/src/rpc.rs:37:10 | 37 | C::Api: pallet_contracts_rpc::ContractsRuntimeApi<Block, AccountId, Balance, BlockNumber>, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `sp_runtime::traits::Block` is not implemented for `sp_runtime::generic::Block<sp_runtime::generic::Header<u32, BlakeTwo256>, OpaqueExtrinsic>` | ::: /Users/davirain/.cargo/git/checkouts/substrate-7e08433d4c370a21/debec91/frame/contracts/rpc/runtime-api/src/lib.rs:30:1 | 30 | / sp_api::decl_runtime_apis! { 31 | | /// The API to interact with contracts without using executive. 32 | | pub trait ContractsApi<AccountId, Balance, BlockNumber> where 33 | | AccountId: Codec, ... | 66 | | } 67 | | } | |_- required by this bound in `ContractsRuntimeApi` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. error: could not compile `node-template` To learn more, run the command again with --verbose. Answers: username_0: node/Cargo.toml ``` [dependencies] structopt = "0.3.8" sc-cli = { version = "0.9.0", features = ["wasmtime"] , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-core = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-executor = { version = "0.9.0", features = ["wasmtime"], git = "https://github.com/paritytech/substrate.git", rev = "ec498bb" } sc-service = { version = "0.9.0", features = ["wasmtime"] , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-telemetry = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-keystore = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb" } sp-inherents = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-transaction-pool = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-transaction-pool = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-consensus-aura = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-consensus-aura = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-consensus = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-consensus = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-finality-grandpa = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-finality-grandpa = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-client-api = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-runtime = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} # These dependencies are used for the node template's RPCs jsonrpc-core = "15.1.0" sc-rpc = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-api = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-rpc-api = { version = "0.9.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-blockchain = { version
= "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sp-block-builder = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} sc-basic-authorship = { version = "0.9.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} substrate-frame-rpc-system = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} pallet-transaction-payment-rpc = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} pallet-contracts = { version = "3.0.0", default-features = false, git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} pallet-contracts-rpc = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", branch = "master"} # These dependencies are used for runtime benchmarking frame-benchmarking = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", rev = "ec498bb" } frame-benchmarking-cli = { version = "3.0.0" , git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} node-template-runtime = { version = "2.0.0", path = "../runtime" } ``` username_1: I think it is because there is 2 different crate sp-runtime in your dependency. because you make use of: ``` pallet-contracts = { version = "3.0.0", default-features = false, git = "https://github.com/paritytech/substrate.git", rev = "ec498bb"} pallet-contracts-rpc = { version = "3.0.0", git = "https://github.com/paritytech/substrate.git", branch = "master"} ``` which builds on 2 different version of substrate. In you Cargo.lock you should see sp-runtime is imported 2 times. You should make use of cargo patch if you want to use specific version of some crate
Nhowka/Elmish.Bridge
335884506
Title: Q: SignalR for automatic fallback? Question: username_0: Hi @username_1, I have a question on fallbacks. Is there a plan to use / upgrade to https://github.com/aspnet/SignalR so that `Elmish.Bridge` can take advantage of automatic fallbacks? According to https://docs.microsoft.com/en-us/aspnet/core/fundamentals/websockets?view=aspnetcore-2.1, using `Microsoft.AspNetCore.WebSockets` requires the developer (aka either you or the app developer) to implement fallback to alternative transport protocols. Would love to hear your thoughts on this. Thank you Answers: username_1: Good thinking! I'll check the impact on bundle size if any and if I can keep the ServerHub as it is or give some access to SignalR Hub. username_1: I'm not sure if it will be needed... Looks like there is good support for it: https://caniuse.com/#feat=websockets As Elmish doesn't do much with javascript disabled, maybe it will be unnecessary complexity. Status: Issue closed username_2: For us the issue is not the browsers themselves, but rather intercepting proxies / firewalls, which are unfortunately common in corporate environments. Have you already taken a look at how much effort it would be to model Elmish.Bridge on top of SignalR, or whether the SignalR communication layer can be extracted out?
daylightstudio/FUEL-CMS
80892433
Title: Custom Dwoo plugins not bound Question: username_0: We created a custom Dwoo plugin under applications/dwoo/plugins and it was not available, giving this error: ``` Plugin partial can not be found, maybe you forgot to bind it if it's a custom plugin ? ``` We fixed this by modifying MY_Parser.php within the spawn() method to add a directory: ``` $dwoo = new Dwoo; $dwoo->getLoader()->addDirectory(ARION_ROOT.'libraries/dwoo/plugins'); ``` It would be great if it were possible for all advanced module directories to be added at this point as well as we have so far kept everything within a single advanced module directory and would like to continue to do so. p.s. I tried to post this on your forum but the post won't go live -- confirming my email address appears to not have worked properly, it always says "You need to confirm your email address. Click here to resend the confirmation email." Answers: username_1: There have been some updates to the Parsing engine in the develop branch which will be in the next release (such as support for Twig). You access it on the CodeIgniter master object like so: $CI =& get_instance(); $CI->fuel->parser->engine->getLoader()->addDirectory(ARION_ROOT.'libraries/dwoo/plugins'); You can set this code perhaps to a CI hook: https://ellislab.com/codeigniter/user-guide/general/hooks.html That's strange about the forum. I actually see your post here: http://forum.getfuelcms.com/discussion/2097/custom-dwoo-plugins-not-bound Status: Issue closed
florisboard/florisboard
861019478
Title: German Crash Report has Inconsistencies. Question: username_0: Maybe just place the button names in variables and reference them whenever necessary? ![Screenshot_20210419-094946_FlorisBoard.jpg](https://user-images.githubusercontent.com/59611881/115201054-2c900600-a0f5-11eb-99ac-3bba569327ad.jpg) #### Environment information - FlorisBoard Version: 0.3.10 - Install Source: Google PlayStore - Device: Samsung Galaxy S9 - Android: 10 Answers: username_1: This was indeed an issue in v0.3.10. I circumvented it in #704 by not using the exact labels of the keys anymore. For the crash report label, it is saved in an extra variable and never translated, as everything on GitHub is always in English and it is more confusing if I allow this label to be translated. username_0: Ah, correct, we changed that part too! I'll close this issue then. :) Status: Issue closed
OpenSWE1R/swe1r-re
323401989
Title: Add a function concerning RE Textures Question: username_0: There is a second use of "Load textureblock" "sub_42D680(3);" - the first is in the Load Model function, because the model and textures are linked. - the second is in a function which - reads 4 bytes for checking (xx < 1700), else it's an infinite loop. - copies 0x1A90u bytes into a global variable. Answers: username_1: Pretty sure you are talking about this (patched US version): ```c //----- (00447420) -------------------------------------------------------- int sub_447420() { // Open textureblock sub_42D680(3); // Read 4 bytes from the textureblock and swap it; this is the number of textures sub_42D640(3, 0, &dword_E9823C, 4u); dword_E9823C = swap32(dword_E9823C); // Ensure that we have 1700 or less textures if (dword_E9823C > 1700) { while (1); } //FIXME: Unknown memset(dword_E93860, 0, sizeof(dword_E93860)); // Close the textureblock again return sub_42D6F0(3); } ``` username_1: Someone should verify or redo my work and send a PR.
TIBCOSoftware/be-tools
680332662
Title: Parameters in Values.yaml file under Helm should be same as in CDD for consistency. Question: username_0: The initial BE parameters such as unclustered, inmemory, store, cache, sharedNothing, etc. are not consistent with the CDD. They should be equal in upper/lower case, i.e., the string should be the same as seen in the CDD. Answers: username_1: Not really mandatory; in fact it's not a good practice to have fields/variables/parameters on the CLI have spaces, be case sensitive, etc. So long as the names are self-explanatory and have appropriate doc/help details explaining them. The only change I feel we can make is to change "sharedNothing" to "sharednothing". Keep all options lower case. username_2: Fix is available in branch [fix-issue-98](https://github.com/TIBCOSoftware/be-tools/tree/fix-issue-98) Status: Issue closed
conan-io/conan-center-index
1017192914
Title: [conan.io/center] Error downloading binary package Question: username_0: <!-- What is your problem or feature request? Please be as specific as possible! --> I am trying to build a simple package, but I get some network errors. The Cmake files are [here](https://github.com/username_0/cmakelib/blob/29e870462c3ff2f44b864fcac6ab9c4074a40d95/src/Conan.cmake#L20) ```py fmt/8.0.1: ERROR: Error downloading binary package: 'fmt/8.0.1:a5daf53c372b205eeb0dc1f71326497703534363' ERROR: HTTPSConnectionPool(host='center.conan.io', port=443): Max retries exceeded with url: /v1/ping (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)'))) Unable to connect to cci=https://center.conan.io 1. Make sure the remote is reachable or, 2. Disable it by using conan remote disable, Then try again. CMake Error at build/conan.cmake:631 (message): Conan install failed='1' Call Stack (most recent call first): build/_deps/cmakelib-src/src/Conan.cmake:44 (conan_cmake_install) build/_deps/cmakelib-src/src/Index.cmake:155 (run_conan) CMakeLists.txt:16 (cmakelib) ``` I didn't have such issues before, but I got this today. Answers: username_1: This is a certificate problem. You need to update your Conan client to >= 1.40.4 username_2: see https://github.com/conan-io/conan/issues/9695
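For readers hitting the same `CERTIFICATE_VERIFY_FAILED` error, a minimal remediation sketch following the advice above (this assumes the Conan client was installed with pip; adjust to your own install method):
```sh
# upgrade the client past the certificate fix mentioned in the answer
pip install --upgrade "conan>=1.40.4,<2"

# confirm the client version before retrying the build
conan --version
```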
postmanlabs/postman-app-support
202787620
Title: Postman App crashing upon receiving response Question: username_0: 1. Postman Version: 4.9.2 2. App (Chrome app or Mac app): Chrome app, Chrome 49.0.2623.87 3. OS details: linux / x86-64 4. Is the Interceptor on and enabled in the app: No 5. Did you encounter this recently, or has this bug always been there: Started happening about a week ago or so 6. Expected behaviour: Getting an answer from my API endpoint 7. Console logs: There's not much being logged upon crashing sadly, I can't provide anything more than basic logging information like such: http://pastebin.com/muFKFgU1 To reproduce the problem, I can just send a few request to random GET endpoints (haven't tried with POST) and surely at some point one of them will crash the app. It usually takes like 2, 3, up to a maximum of 4 successful requests to make it crash. It only seems to crash upon 200-OK responses, none of the 403-Forbidden responses seem to crash it. Answers: username_1: @username_0 Thanks for posting this issue 👍 . The occurrence of this issue is quite intermittent as reported by few others and hard to reproduce. Could you verify if the issue is reproducible on the native app?. username_0: @username_1 I just tried using the same API endpoints on the native Linux x64 app, the issue doesn't seem to be reproducible there. Status: Issue closed
httpwg/http-extensions
536126376
Title: [SH] dictionary with default True value doesn't accept parameters Question: username_0: In section **4.2.2 Parsing a Dictionary** step 2.4.2 initialises an empty set of parameters, but never parses any from input_string. This corresponds with the ABNF rule `dict-member = member-name [ "=" member-value ]` where `member-value` includes the parameters. This conflicts with a [recently-added test](https://github.com/httpwg/structured-header-tests/pull/24/files#diff-6976ba7e46d6978bde81530178302e1fR112-R118). Related to #992 Answers: username_1: Nope, I just has a moment. Fixed. Status: Issue closed
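For readers without the draft open, a small illustration of the member shape at stake (field name and keys invented; syntax as in the eventual RFC 8941). A dictionary member written without `=` has the implicit value True and may still carry parameters:
```
Example-Dict: a;foo=1;bar=2, b=13, c
```
Here `a` and `c` both have the value True, and `a` carries the parameters `foo` and `bar` — exactly the parameters that step 2.4.2 of the parsing algorithm was dropping.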
gradle/gradle
1046788837
Title: Provide a generic support for .gitignore, .gitattributes, and .editorconfig properties Question: username_0: Execution optimizations have been disabled for task ':release:gitProps' to ensure correctness due to the following reasons: - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:checkstyleMain' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:checkstyleTest' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:compileJava' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:compileTestJava' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:fmppMain' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:jandexMain' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:jandexTest' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. 
- Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:jar' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:javaCCMain' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:processJandexIndex' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:processResources' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:processTestResources' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. - Gradle detected a problem with the following location: '/Users/runner/work/calcite/calcite'. Reason: Task ':release:gitProps' uses this output of task ':babel:sourcesJar' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency for more details about this problem. ... ``` ### Context The issue results in lots of warnings in Apache Calcite, and Apache JMeter projects. Status: Issue closed Answers: username_1: This sounds like great functionality for a Git plugin, but not something we'd support in Gradle as a core feature. username_0: I believe, Gradle provides no way to write such a plugin without running into "Gradle detected a problem with the following location" issues. username_0: Then, `.editorconfig` is a pretty-much standard thing, which covers more ecosystems than `checkstyle`. username_2: @username_0 I think you should be able to change the default excludes in settings to ignore those locations. 
See here: https://docs.gradle.org/current/userguide/working_with_files.html#sec:file_trees username_0: @username_2, the thing is: 1. I do not want to duplicate configuration across `.gitignore` and `gradle.kts` 1. That implies I need to discover `.gitignore` files 1. That implies I need to create a task that "scans all the directories starting from the root, and locates all `.gitignore` files" 1. Unfortunately, I do not remember quite well why I use `[Files.walkFileTree](https://github.com/username_0/username_0-release-plugins/blob/83c85c5faa4c7cd1fe0173b75c1cba5e60c3f209/plugins/crlf-plugin/src/main/kotlin/com/github/username_0/gradle/git/GitignoreFinder.kt#L41-L49)` instead of Gradle's `fileTree(..).matching { exclude {...} }`. Most likely the matching-exclude approach does not allow me to skip subdirectories. For instance, I start from the root, and from the root `.gitignore` file I learn that `/build` should be ignored. AFAIK, there's no way to tell `fileTree` that I do not want to enter `/build`, since it tries to locate files first. I'm not sure that is the case. username_0: An alternative option could probably be something like a `build service` that collects all the files going up the hierarchy. I do not remember why I settled on the top-down approach :-/ username_0: Ok, I released the task on 19 Jul 2019, and Shared Build Services appeared in Gradle 6.1 on Jan 15, 2020 username_2: @username_0 I see that the plugin needs to do quite some work to detect all the `.gitignore` files and parse them. That is another reason why I wouldn't want to build this into Gradle core. Though I also don't object to having a plugin which supports this feature. So the question is if it is already possible to write such a plugin, or if you'd need some additional APIs from Gradle. And if you need some additional APIs from Gradle, maybe we should create them to enable you to write the plugin. username_0: I'm quite sure there's currently no way to create a top-down approach (a task that walks the file tree and discovers `.gitignore` files). On the other hand, a `build service` that answers queries like "is this file ignored?" could be a workable solution. Let me see if that is a good enough solution. username_2: I agree that a task isn't the right solution. You'd need to discover the `.gitignore` files at configuration time, while running `settings.gradle`, since this is the only time you could configure the default excludes.
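A minimal Kotlin sketch of the "build service that answers is-this-file-ignored queries" idea floated above. The class name, service name, and the naive name comparison are placeholders — real `.gitignore` pattern semantics are deliberately elided:
```kotlin
import org.gradle.api.services.BuildService
import org.gradle.api.services.BuildServiceParameters
import java.io.File

// Answers "is this file ignored?" by lazily reading .gitignore files up the hierarchy.
abstract class GitignoreQueryService : BuildService<BuildServiceParameters.None> {
    private val patternsPerDir = mutableMapOf<File, List<String>>()

    fun isIgnored(file: File): Boolean {
        var dir: File? = file.parentFile
        while (dir != null) {
            val d = dir
            val patterns = patternsPerDir.getOrPut(d) {
                File(d, ".gitignore").takeIf { it.isFile }?.readLines().orEmpty()
            }
            // TODO: real gitignore pattern matching; naive name comparison as a placeholder
            if (patterns.any { it.trim() == file.name }) return true
            dir = d.parentFile
        }
        return false
    }
}

// Registration, e.g. in settings.gradle.kts or a Settings plugin:
// val gitignore = gradle.sharedServices.registerIfAbsent(
//     "gitignoreQuery", GitignoreQueryService::class.java) {}
```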
snagcliffs/PDE-FIND
794100656
Title: How to generate data for Burgers, KS, Schrodinger, NLS Question: username_0: Hello Rudy, I am interested in the PDE-FIND and want to see what will happen for PDE-FIND when the space and time of the PDEs are changed. I have tried Python and Matlab (pdetool) to simulate Burgers, KS, Schrodinger and NLS equations, but my results seem to be incorrect. What software/programming did you use to generate the data? And could you please show how to generate data for Burgers, KS, Schrodinger and NLS equations? Thank you! Answers: username_1: Hi Jack, with the exception of Navier-Stokes and Kuramoto-Sivashinsky, all of the datasets were made using ode45 and the fft to evaluate derivatives. We solved KS using a method meant for stiff PDEs described in the paper below. <NAME> and <NAME>. Fourth-order time-stepping for stiff pdes. SIAM Journal on Scientific Computing, 26(4):1214–1233, 2005 Status: Issue closed
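To make the recipe in the answer concrete, here is a hedged Python sketch of the same idea — FFT for the spatial derivatives, an adaptive Runge–Kutta integrator in time — applied to Burgers' equation. Grid size, viscosity, and initial condition are arbitrary choices, not the paper's exact settings:
```python
import numpy as np
from scipy.integrate import solve_ivp

# periodic grid and angular wavenumbers
n, L, nu = 256, 16.0, 0.1
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def burgers_rhs(t, u):
    # u_t = -u u_x + nu u_xx, derivatives evaluated spectrally
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    return -u * u_x + nu * u_xx

u0 = np.exp(-(x + 2) ** 2)
t = np.linspace(0, 10, 101)
sol = solve_ivp(burgers_rhs, (t[0], t[-1]), u0, t_eval=t, method="RK45")
U = sol.y  # (n_space, n_time) snapshot matrix to feed into PDE-FIND
```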
cms-sw/cmssw
381497941
Title: Failure of test wf 1020.0 after the integration of #25237 Question: username_0: A new failure is observed in CMSSW_10_4 2018-11-15-1100 for the test wf 1020.0 after the merge of #25237: https://cmssdt.cern.ch/SDT/cgi-bin/logreader/slc7_amd64_gcc700/CMSSW_10_4_X_2018-11-15-1100/pyRelValMatrixLogs/run/1020.0_AlCaLumiPixels2016H+AlCaLumiPixels2016H+TIER0EXPLP+ALCAEXPLP+ALCAHARVLP+TIER0PROMPTLP/step3_AlCaLumiPixels2016H+AlCaLumiPixels2016H+TIER0EXPLP+ALCAEXPLP+ALCAHARVLP+TIER0PROMPTLP.log#/ The failure is reproducible and isolated to depend on this update Answers: username_0: assign core username_1: I should have a fix to the module shortly. username_1: Fixed in #25263 username_1: Don’t we normally wait to close an issue after the fix has been merged into the release? username_0: the issue in itself looks understood, with a solution provided and the code is technically waiting for a final approval. Anyway...
protocolbuffers/protobuf
899725652
Title: How to delete repeated field with reflection GetRepeatedFieldRef Question: username_0: Hi guys, who knows how to delete a repeated field with reflection? I want to use `GetRepeatedFieldRef` to remove all repeated items, but I cannot find a public function for it.
```c++
template <class T>
void RemoveRepeatedField(Message& parent, const FieldDescriptor* field) {
  auto field_ref = parent.GetReflection()->GetRepeatedFieldRef<T>(parent, field);
  // I can get the iterator of the field, but how do I use the iterator to delete the related field?
  // field_ref.begin();
  // field_ref.end();
}
```
Anyone help, thanks!! Status: Issue closed Answers: username_1: You likely want to do `parent.GetReflection()->GetMutableRepeatedFieldRef<T>(&parent, field).Clear()`.
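To make the accepted answer runnable, a small sketch of both reflection-based options (message construction and field lookup are omitted; `ClearField` is type-agnostic, while the typed ref needs the element type as `T`):
```cpp
#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>

using google::protobuf::FieldDescriptor;
using google::protobuf::Message;
using google::protobuf::Reflection;

// Option 1: type-agnostic — empties a repeated field (or clears a singular one).
void RemoveRepeatedField(Message* parent, const FieldDescriptor* field) {
  const Reflection* reflection = parent->GetReflection();
  reflection->ClearField(parent, field);
}

// Option 2: typed — useful when you also want element-wise access afterwards.
template <class T>
void RemoveRepeatedFieldTyped(Message* parent, const FieldDescriptor* field) {
  auto field_ref =
      parent->GetReflection()->GetMutableRepeatedFieldRef<T>(parent, field);
  field_ref.Clear();
}
```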
nicehash/NiceHashMiner
276854000
Title: NeoScrypt benchmark never ends Question: username_0: After auto-update to 2.0.1.5-beta and Start button click, NeoScrypt benchmark is auto-started and gets stuck at 50%. After stopping benchmark manually, mining details are unavailable. GPU: MSI gtx 1050ti Console output: Excavator v1.3.6a_nvidia GPU Miner for NiceHash. Build number: 99 [0x00002fcc][info] core | Preparing benchmark for MSI GeForce GTX 1050 Ti - NeoScrypt --------- Benchmark never completes (> 4min). Answers: username_1: hey Status: Issue closed
runelite/runelite
397600347
Title: Hide other players' Barrows brothers Question: username_0: It's really annoying doing Barrows with tons of other players stacking up their current brother either in the same tomb or in the chest room making you have to right-click to attack the brother that's attacking you. With the entity hider plugin you can hide NPCs, which helps with the brothers but then you can't see the monsters you need to kill for points. Is there any way to implement a "Hide all other players' brothers" option to the Barrows plugin (or entity hider)? If not, no worries. Just wanted to see if there was any way to increase the efficiency. If so, well I suppose that's what this post is for :) Another potential solution for this, which I feel like would be more under Jagex's belt than RuneLite, would be to send the attacking brother to the top of the list so that you don't have to right-click. Answers: username_1: We are intentionally not hiding specific NPCs/Players, so this will not be implemented Status: Issue closed
cmsd2/codelauf
352540546
Title: [codelauf] [codelauf] [sequoia] Add code to handle reviewer who has committed Question: username_0: <NAME> added the card "[codelauf] [codelauf] [sequoia] Add code to handle reviewer who has committed" to the Bug List list in the codelauf board at August 21, 2018 at 02:17PM (https://trello.com/c/GMtqeH5u). The underlying card is "[sequoia] Add code to handle reviewer who has committed", created by brendadeely. Card description: We had code to handle reminding a reviewer to commit. Now we need code to handle a reviewer who has committed but hasn't submitted their review yet. NB: I will add code documentation in a later commit once the code is totally done. https://ift.tt/2MsxEyH ## Code Review Checklist - [ ] Changes conform to [coding style guide](https://ift.tt/2LKIiBl) - [ ] Commit message is well written and describes _why_ the changes were made - [ ] New unit tests were written or existing ones were updated as appropriate - [ ] Any brand new files have been linted and added to the lint whitelist - [ ] New logs do not contain personal data - [ ] (If new personal data): Any new personal data is essential to operations - [ ] (If new personal data): Any new data is: easy to collect and/or delete View on Trello: https://trello.com/c/GMtqeH5u
rust-lang/libc
1054082456
Title: Describe how to convert a c_char to a native Rust string, and back Question: username_0: I'm trying to work with a FFI library that accepts (and in some cases returns) C strings (represented as `*const c_char`). I need to convert from Rust strings to C strings, and from C strings to Rust strings. I'm finding it fairly difficult to find up to date information on how to accomplish this in a straightforward way. The documentation for `c_char` is pretty sparse: https://docs.rs/libc/0.2.107/libc/type.c_char.html It's literally just this line: ``` pub type c_char = i8; ``` It would be nice if there was an example here of how to generate this and translate it into a Rust string, or even just a link explaining where to find more information about how to use the type. Answers: username_1: Note that the main aim of this crate is just to expose the items, and given how many items we maintain, preparing such a doc for each item is quite hard. IMHO the docs like this should go to the [nomicon](https://github.com/rust-lang/nomicon) or a community article. Closing as wontfix here, thanks for reporting anyway! Status: Issue closed
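Since the thread above mainly points at the missing docs, here is a small sketch of the usual conversions via `std::ffi` (the FFI call in the comment is invented for illustration):
```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// Rust &str -> NUL-terminated buffer you can pass as *const c_char.
fn to_c_string(s: &str) -> CString {
    CString::new(s).expect("string must not contain interior NUL bytes")
}

/// *const c_char (NUL-terminated, valid for the duration of the call) -> owned String.
unsafe fn from_c_string(ptr: *const c_char) -> String {
    CStr::from_ptr(ptr).to_string_lossy().into_owned()
}

fn main() {
    let c = to_c_string("hello");
    // e.g. some_ffi_function(c.as_ptr());
    let roundtrip = unsafe { from_c_string(c.as_ptr()) };
    assert_eq!(roundtrip, "hello");
}
```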
dresden-elektronik/deconz-rest-plugin
860045745
Title: docker setup, Raspi 4B, Ubuntu server, "device state timeout ignored in state" [2|3] Question: username_0: (ttyAMA0 is used by the `agetty` process, with different config.txt settings it's the other way round and S0 is used by agetty) I really like my RaspBee, and am getting a bit desparate here with the home automation blocked-broken. Any help I can get would be much appreciated! Answers: username_1: @username_2 username_2: I notice in your docker compose file that you have 2 devices, is there a reason for that? By passing the environment variable `DECONZ_DEVICE` twice it's most likely you use the last entry. So clean that up and have a look at: https://github.com/dresden-elektronik/deconz-rest-plugin/issues/3974#issuecomment-758165371 perhapse update your firmware to the latest version? username_2: you should check https://www.raspberrypi.org/documentation/configuration/uart.md as well for the config.txt username_0: Guys you rock! It wasn't any the specific hints you gave above, but a follow-up from one of these... on Ubuntu there's no raspi-config to disable the serial console. However it's still activated by the kernel cmdline - as I just figured out. Disabling that gets me deconz running properly on ttyS0! Two more things: 1. Would deconz-RaspBee profit from the better specs of PL011, vs the miniUART? (i.e. I'll try to get it configured to run on ttyAMA0 if that should be the case) 2. I would do a little paragraph-writeup of the steps to enable deconz on an Ubuntu Server [arm64] system with what I learned, to go into the deconz docker installation docs - if you want me to and would put it in... username_2: Documentation is always welcome! username_0: After a reboot, the RaspBee will be available on /dev/ttyAMA0 You will find information about the differences between these two UART options on the raspberrypi documentation page referenced above. Once you have decided on your setup, put in these configuration changes and rebooted your Raspberry, you should be good to go! Just remember to - install the wiringpi package, - add your user account to the dialout group, - set your docker cmdline (or docker-compose file) to use /dev/ttyS0 or /dev/ttyAMA0, depending on which option you chose above. These steps are described in marthoc's documentation, as referenced above. username_3: Hay, thanks for crafting some documentation which probably will help quite some people out there. However, I feel the passage about configuring the `dtoverlay` is "a bit dangerous", so please allow me to explain why I think so. No matter how often I read the referenced raspberry documentation in the past, I still kinda shocked how complicated they make it. First of all, the RaspBee (I or II) must be run on the PL011, as only this is fullfeatured while the mini UART is stripped in capabilities. That being said, it of course also matters which type of RPi you have, as PL011 is not automatically the primary UART as the documentation explains. To add more complexity to this, it also matters if you desire/require BT to run or not. If yes, the required configuration might vary as well. However, there seems to be a savior in form of the overlays. Documentation states: ``` Primary UART | Default state of enable_uart flag ---------------------|---------------------------- mini UART | 0 first PL011 (UART0) | 1 ``` As I recall, upon deCONZ installation, `enable_uart` is always set to 1 in `config.txt`, which should ensure the primary UART is PL011. However, not sure if that could interfere in any way with the below. 
"`disable-bt` disables the Bluetooth device and makes the first PL011 (UART0) the primary UART. You must also disable the system service that initialises the modem, so it does not connect to the UART, using `sudo systemctl disable hciuart`." This is what we want if we **don't want** BT running in parallel. "`miniuart-bt` switches the Bluetooth function to use the mini UART, and makes the first PL011 (UART0) the primary UART. Note that this may reduce the maximum usable baud rate (see mini UART limitations below). You must also set the VPU core clock to a fixed frequency using either `force_turbo=1` or `core_freq=250`." This is what we want if we **want** BT running in parallel. So TLDR, as you might have recognized, it is more the wording in your guide that might need to be amended. For the BT to remain active, it depends on the RPi which additional parameter needs to be added to the configuration. Please note that the above is based on my current understanding of this topic and must not necessarily be correct. But as my RPi is running without any issues while BT is enabled, it cannot be that wrong 😉 username_2: I've copied to a wiki page: https://github.com/dresden-elektronik/deconz-rest-plugin/wiki/Setup-dockerized-deconz-in-Ubuntu,-connecting-to-a-RaspBee-board Feel free to further improve it. username_0: Solved, see here for resolution: https://github.com/dresden-elektronik/deconz-rest-plugin/wiki/Setup-dockerized-deconz-in-Ubuntu,-connecting-to-a-RaspBee-board Status: Issue closed
tipsi/tipsi-stripe
288527473
Title: Android Build failed with RN0.52 Question: username_0: iOS is work fine but Android is not work. I linked the library and tried clean and build again but it's not help. I'm using compileSdkVersion 23 but the error: Could not find com.android.support:appcompat-v7:27.0.0. Could not find com.android.support:support-v4:26.1.0. Could not find com.android.support:design:27.0.0. Could not find com.android.support:support-annotations:27.0.0. Could not find com.android.support:support-v4:26.1.0. After that, I tried change the compileSdkVersion 26 and meet the requirement, but I will get a dozen "No resource found that matches the given name", such as: No resource found that matches the given name 'android:TextAppearance.Material.Widget.Button.Borderless.Colored' Answers: username_1: Hi @username_0 ! Try to set all support libraries to the same version first. for example: ``` Could not find com.android.support:appcompat-v7:27.0.0. Could not find com.android.support:support-v4:26.1.0. Could not find com.android.support:design:27.0.0. Could not find com.android.support:support-annotations:27.0.0. Could not find com.android.support:support-v4:26.1.0. ``` set ``` compileSdkVersion 26 buildToolsVersion "26.0.2" ``` That bug are from 27 version of some support library. If that does not help, post your _dependencies_ and _androis_ parts of **build.gradle** file. username_0: @username_1 I think set to 26 is not help in my case. Changed those of com.android.support.xxx to 26 but error(s) still remain ``` dependencies { compile project(':tipsi-stripe') compile project(':react-native-youtube') compile project(':react-native-push-notification') compile project(':react-native-vector-icons') compile project(':react-native-device-info') compile project(':react-native-image-crop-picker') compile project(':react-native-sound') compile project(':react-native-audio') compile project(':react-native-localization') compile project(':react-native-image-resizer') compile project(':@yfuks/react-native-action-sheet') compile project(':react-native-fetch-blob') compile fileTree(dir: "libs", include: ["*.jar"]) compile "com.android.support:appcompat-v7:26.1.0" <- 23.0.1 before compile "com.android.support:support-annotations:26.1.0" <- Add Manually compile "com.android.support:support-annotations:26.1.0" <- Add Manually compile "com.facebook.react:react-native:+" // From node_modules } ``` username_1: error? That issue looks very similar to [that one](https://stackoverflow.com/questions/44190829/facebook-sdk-android-error-building) username_0: Searched in the following locations: file:/Users/wayne/Library/Android/sdk/extras/android/m2repository/com/android/support/support-v4/26.1.0/support-v4-26.1.0.pom file:/Users/wayne/Library/Android/sdk/extras/android/m2repository/com/android/support/support-v4/26.1.0/support-v4-26.1.0.jar file:/Users/wayne/Documents/ReactNative/testApps/android/sdk-manager/com/android/support/support-v4/26.1.0/support-v4-26.1.0.jar Required by: testApps:app:unspecified > testApps:tipsi-stripe:unspecified > com.google.android.gms:play-services-wallet:10.2.6 > com.google.android.gms:play-services-basement:11.0.4 ``` username_1: @username_0 You post two issues: 1. Could not find com.android.support:appcompat-v7:27.0.0. 2. No resource found that matches the given name 'android:TextAppearance.Material.Widget.Button.Borderless.Colored' That [link](https://stackoverflow.com/questions/44190829/facebook-sdk-android-error-building) are for second one. Not the same but similar. 
For first one you can try to add `allprojects { repositories { maven { url 'https://maven.google.com' } } }` to project level **build.gradle** Look this [example](https://github.com/bugsnag/bugsnag-react-native/blob/master/examples/plain/android/build.gradle ) and this [thread](https://github.com/bugsnag/bugsnag-react-native/issues/177). username_0: @username_1 The first problem has fixed when I added repositories to build.gradle but the second issue occurred. Fortunately, the second issue fixed by changed the version of compileSdkVersion, buildToolsVersion and com.android.support:appcompat-v7 to 26. Thank you for your help. Status: Issue closed
milyasyousuf/emonx_cicd_app
344709750
Title: Docker compose has an issue in building with Jenkins Question: username_0: 11:14:57 Step 6/7 : ADD requirements.txt /code/ 11:14:57 Service 'web' failed to build: ADD failed: stat /var/lib/docker/tmp/docker-builder602047213/requirements.txt: no such file or directory 11:14:57 Build step 'Execute shell' marked build as failure 11:14:57 Finished: FAILURE
MicrosoftDocs/windows-itpro-docs
601318447
Title: Individual User login limit Question: username_0: Please advice the status for Cached login attempts for individual user. What I understand from the article is that a user can use his cached credentials to login as long as Domain Controller isn't available even for infinite count! --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ef720207-c5e6-182e-8a34-361244c45641 * Version Independent ID: 4591df7c-e474-9b85-7c09-ce572a66a0a3 * Content: [Interactive logon Number of previous logons to cache (in case domain controller is not available) (Windows 10) - Windows security](https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available#feedback) * Content Source: [windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available.md](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/public/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available.md) * Product: **w10** * Technology: **windows** * GitHub Login: @Dansimp * Microsoft Alias: **dansimp** Answers: username_1: @username_0 It seems that a possible value of from 0 to 50 has already been included in the article under the following section, https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/interactive-logon-number-of-previous-logons-to-cache-in-case-domain-controller-is-not-available#possible-values Please let me know whether this clarifies your question. Status: Issue closed
emberjs/ember.js
143282771
Title: Confusing error when using `Ember.computed.or` incorrectly Question: username_0: Check the console on [this twiddle](https://ember-twiddle.com/f3d7d9d8f45cc8ca77ab?openFiles=application.controller.js%2C) to see current behavior Answers: username_0: Check the console on [this twiddle](https://ember-twiddle.com/f3d7d9d8f45cc8ca77ab?openFiles=application.controller.js%2C) to see current behavior username_1: @username_0 yeah I see that error in the console of our twiddle example, seems near impossible to match that up to the offending code. This seems like a feature/enhancement request. See [CONTRIBUTING.md#requesting-a-feature](https://github.com/emberjs/ember.js/blob/master/CONTRIBUTING.md#requesting-a-feature) which suggests to use an RFC Issue to suggest a feature/enhancement (no need for a PR with RFC, only an issue in the repo) username_2: Agreed, I think it should be possible to make this assertion more understandable. I'm going to label this and leave it open for a bit to see if we have any takers... username_0: Some changes though are "substantial", and we ask that these be put through a bit of a design process and produce a consensus among the Ember core team. This seemed more like a documentation improvement than a "substantial" feature that would need a design process. Maybe I'm wrong? [Requesting a feature](https://github.com/emberjs/ember.js/blob/master/CONTRIBUTING.md#requesting-a-feature) does seem to imply that *all* feature requests require an rfc. username_2: YA, agree. This is a "bad error message" bug. username_3: I'll take a stab at this one username_0: @username_3 cool! If you didn't know, we do like to unit test asserts. For example, [here][expand-test] is the test for the confusing error we currently get. [expand-test]: https://github.com/emberjs/ember.js/blob/ed298ced885b2f4b4435fbccb26b24b9a24bb68f/packages/ember-metal/tests/expand_properties_test.js#L89 Status: Issue closed
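For readers landing here from the same assertion, a short sketch of correct versus incorrect usage (property names are invented; the incorrect form is the kind of call that produces the unhelpful error discussed above):
```js
import Ember from 'ember';

export default Ember.Controller.extend({
  isAdmin: false,
  isOwner: true,

  // Correct: computed.or takes dependent *key names* as strings.
  canEdit: Ember.computed.or('isAdmin', 'isOwner'),

  // Incorrect: passing values (or anything that is not a property path)
  // is what ends up in the confusing expandProperties assertion.
  // canEdit: Ember.computed.or(true, false)
});
```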
NLog/NLog
287960286
Title: Change Target.ToString Question: username_0: Currently File Target[file] Maybe a bit more explicit: File Target (name=file) Answers: username_0: related https://github.com/NLog/NLog/issues/2506 username_1: Could be nice with some bonus logic, regarding wrappers. When using `async=true`, then the wrapped target will be called `<Name>_wrapped`. When doing manual wrapping, then this auto-naming doesn't occur, so targets remain unnamed, as one usually do this: ``` <target name="EventLogAsyncTarget" xsi:type="AsyncWrapper"> <target xsi:type="RetryingWrapper"> <target xsi:type="EventLog" source="${EventSource}" machineName="." /> </target> </target> ``` This causes the internal-logging to become very vague for the final target. Could also be nice that if the `TargetAttribute` has `IsWrapper=true` or `IsCompound=true`, then it will not get `Target` appended to the end. (Ex `AsyncWrapper(Name=Hello)` and `MailTarget(Name=Hello)`) Status: Issue closed
delvedor/find-my-way
1087676764
Title: Incorrect parametric brother route set up. Question: username_0: Sometimes it sets a non-nearest parametric brother route. Static route `/text/hello` will have `/:c` as a parametric brother, but should have `text/:e/test`. It's not the same issue with (#221) ```js router.on('GET', '/text/hello', () => {}) router.on('GET', '/text/:e/test', () => {}) router.on('GET', '/:c', () => {}) assert.equal(router.find('GET', '/text/hellos/test'), null) // shouldn't be null ``` Answers: username_1: Would you like to send a Pull Request to address this issue? Remember to add unit tests. username_0: If you set up routes in a different order this will work ```js router.on('GET', '/:c', () => {}) router.on('GET', '/text/hello', () => {}) router.on('GET', '/text/:e/test', () => {}) assert.deepEqual(router.find('GET', '/text/hellos/test').params, { e: 'hellos' }) ``` Status: Issue closed
transhumandesign/kag-base
571056165
Title: Highlighting attacked objects w/ smooth shader causes shader issues. Question: username_0: ![image](https://user-images.githubusercontent.com/2775830/75311150-82cc2680-581b-11ea-9221-00eca8d03546.png) Answers: username_1: Probably some z buffer bullcrap username_2: This is caused by RenderStyle::outline, Line 247 BuilderAnim.as username_1: Investigated this, it is because render target switching resets the Z buffer in Irrlicht (contrary to what the documentation says). There doesn't seem to be a way to fix that easily, so I suppose an approach would be to move that kind of outline rendering to before the render target swap (which would make more sense anyway), but that would interfere with the rest of rendering most likely... username_3: This is caused by the issues stated in #89 Status: Issue closed
MATF-RG19/RG67-build-a-tower
628864270
Title: Code Question: username_0: Split the code into multiple files; currently everything is in main.c. Answers: username_1: The code has been split into multiple files: the block struct and the block functions are now in block.c, and the struct and functions for the background towers are in tower.c Status: Issue closed
ramiromagno/gwasrapidd
540154306
Title: Bug in example Question: username_0: There seems to be a bug in the example on the github README.md. It all works until the following: `variants <- get_variants(study_id = 'GCST002305')` Then I get the error: ``` Error: Tibble columns must have consistent lengths, only values of length one are recycled: * Length 0: Columns `chromosome_name`, `chromosome_position` * Length 15: Columns `gene_name`, `distance`, `is_mapped_gene`, `is_closest_gene`, `is_intergenic`, … (and 4 more) Run `rlang::last_error()` to see where the error occurred. ``` Answers: username_1: Thank you for reporting this @username_0! A quick look indicates that there's been a change on the API server side... I will look into this today. username_0: Fantastic, thanks! Since I wasn’t able to follow through with the example, I was just wondering - is this package able to extract formatted full GWAS summary stats from the catalog? They can be downloaded directly from GWAS catalogue, but often they are not consistently formatted, so it would be great to easily access consistently formatted summary stats. Is this package designed for this? Thanks in advance! > username_1: Hi @username_0: unfortunately this package won't retrieve the summary stats. I am about to start a new package that will address that question indeed in the next month. The GWAS Catalog actually has separated the two databases, so summary statistics is provided as a separate service, having its own REST API; given that the two are somewhat different I think it is best to have separate clients to access them. username_1: @username_0: It seems the recent changes to the REST API are more extensive than I expected. The issue here specifically reported should not be too hard to solve but there are other problems popping up now... I have already asked the GWAS Catalog team by email to clarify what were the changes made to server to fully assess what will need fixing. I am sorry for the trouble! It seems you were the first one to detect these problems as it relates to their update. username_1: @username_0: The GWAS Catalog got back to me and seemingly they have inadvertently committed changes to the release branch that led to these breaking changes. Meanwhile they have already to revert these changes, and it seems to me that gwasrapidd is working again. Can you confirm that it works for you too so that I can close this issue? username_1: @username_0 : can you confirm this issue is now solved for you? Status: Issue closed
igordejanovic/parglare
294372859
Title: LAYOUT must allow empty match Question: username_0: * parglare version: bc33add9eedbea64e5177fd716865cdb8360aa15 (Jan 27, 2018) * Python version: 2.5.3 * Operating System: linux, ubuntu ### Description The following specification breaks on missing whitespace at the start of the text: ``` python from parglare import Grammar from parglare import Parser gram = """\ words : word | words word ; word : /[a-z]+/ ; LAYOUT : WS | comment ; comment : /#.*/ ; WS : /[ \t]+/ ; """ text = "abc def" grammar = Grammar.from_string(gram) parser = Parser(grammar) result = parser.parse(text) print(result) ``` produces ``` python traceback Traceback (most recent call last): File "n.py", line 17, in <module> result = parser.parse(text) File "~/compiler3/parglare/parser.py", line 208, in parse position) File "~/compiler3/parglare/parser.py", line 475, in _skipws input_str, position, context=context) File "~/compiler3/parglare/parser.py", line 282, in parse nomatch_error(actions.keys())) parglare.exceptions.ParseError: Error at position 1,0 => "*abc def". Expected: WS or comment ``` If you take out the `LAYOUT` and `comment` rules, it works. It looks like there is a forced `LAYOUT` at the start of the file. There may also be one at the end of the file, but that is not testable currently, I think. I know forcing space between tokens is not normal in programming languages, as people tend to be afraid of using the spacebar and write code like `a+1-4=b`. However, the language I am parsing is a constrained natural language with sentences like `power-supply must provide power`. Optional white space doesn't make much sense there, and (I think) complicates parsing due to non-existing white-space ambiguities that must be resolved (words should never be broken into two pieces). Depending on how you see this problem, several options to fix it exists (for as far as I can see): * Note in the discussion of `LAYOUT` that it must allow to be empty. * Don't require a `LAYOUT` at the start of the file, ie make it optional (the text might start with white-space, but not always). Likely other options exist as well. Answers: username_1: If `LAYOUT` is not provided [`ws` setting](http://www.username_1.net/parglare/parser/#ws) is used. If `LAYOUT` is provided it **must** match between each token, including beginning and end of the input. I've added [a note in the docs](http://www.username_1.net/parglare/grammar_language/#handling-whitespaces-and-comments-in-your-language). Controlling word boundaries is in parglare handled with a [special rule `KEYWORD`](http://www.username_1.net/parglare/grammar_language/#handling-keywords-in-your-language) as the word boundary is not determined only by whitespaces. I think this mechanism should work in your case also. username_0: I found the `WS` and `LAYOUT` explanation in the documentation already (which is quite good, I might add). The confusion part there is that the `WS` setting is also non-empty, and it doesn't fail to match before the first token (just like my example doesn't fail if you take out the `LAYOUT` rule). I guess there is a default `LAYOUT` rule that allows empty matches. I agree that `KEYWORD` is an alternative, and I have used that currently, and made `LAYOUT` allow empty matches. Thank you for confirming that is the desired solution here. username_1: Chars in `ws` setting are always optional and they are used if `LAYOUT` is not defined. [This](https://github.com/username_1/parglare/blob/master/parglare/parser.py#L473) is relevant part of the implementation. 
username_0: Ah right, the code provides the optional matching rather than the pattern. Looking at the code, I noticed the position is always incremented by 1. Does that mean my `WS` definition of `WS : /[ \t]+/ ;` is wrong, or at least inefficient? username_1: No, your definition is perfectly fine. `WS` from your grammar is used in an entirely different way; it has nothing to do with the `ws` Parser param. You can call your rules in the `LAYOUT` part of the grammar any way you like, only the entry rule must be called `LAYOUT`. Matching using grammar rules is done [here](https://github.com/username_1/parglare/blob/master/parglare/parser.py#L541) using regex and string matches from terminal definitions. If the match succeeds the position is incremented by the length of the match, so your `WS` will greedily consume all whitespace ahead in one go. Status: Issue closed username_0: thank you for the explanation
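Putting the resolution together, a sketch of the workaround adopted in the thread — making `LAYOUT` itself allow an empty match — written in the same grammar style as the question (rule names other than `LAYOUT` are free choices; `EMPTY` is parglare's built-in empty match):
```python
from parglare import Grammar, Parser

grammar = r"""
words : word | words word ;
word : /[a-z]+/ ;

LAYOUT : layout_item | LAYOUT layout_item | EMPTY ;
layout_item : WS | comment ;
comment : /#.*/ ;
WS : /[ \t]+/ ;
"""

parser = Parser(Grammar.from_string(grammar))
print(parser.parse("abc def  # trailing comment"))
```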
docker/for-linux
340539591
Title: Docker add owner is not working (also with --chown) Question: username_0: <!-- This issue tracker is for *bug reports* and *feature requests*. For questions, and getting help on using docker: - Docker documentation - https://docs.docker.com - Docker Forums - https://forums.docker.com - Docker community Slack - https://dockercommunity.slack.com/ (register here: http://dockr.ly/community) - Post a question on StackOverflow, using the Docker tag --> * [x] This is a bug report * [ ] This is a feature request * [x] I searched existing issues before opening this one <!-- DO NOT report security issues publicly! If you suspect you discovered a security issue, send your report privately to <EMAIL>. --> ### Expected behavior When adding a file / directory to the container with ADD command, it should be owned by user:group 1000:1000. ### Actual behavior Container with files owned by user:group 1001:1001 Also when executing with ADD --chown=1000:1000 ### Steps to reproduce the behavior <!-- Describe the exact steps to reproduce. If possible, provide a *minimum* reproduction example; take into account that others do not have access to your private images, source code, and environment. REMOVE SENSITIVE DATA BEFORE POSTING (replace those parts with "REDACTED") --> Dockerfile content: ``` FROM debian:stretch RUN useradd -s /bin/nologin -m myuser ENV USER_HOME /home/myuser WORKDIR $USER_HOME ADD resources/jre-7u80-linux-x64.tar.gz /usr/lib/ ENV JAVA_HOME /usr/lib/jre1.7.0_80 ADD --chown=1000:1000 resources/myapp.tgz ./ USER myuser:myuser ENTRYPOINT [ "myapp/bin/startup.sh" ] ``` myapp.tgz is created on the host **with user 1001:1001** Execute `docker build -t test .` to build de image (done with user 1001 and with root) Execute `docker run -ti test bash`, and on the container, ls -al Result: myapp folder is owned by 1001:1001 myuser has uid=1000 gid=1000 /usr/lib/jre1.7.0_80 is **correctly owned** by myuser:myuser I have changed (on the host) the owner of myapp.tgz to 1000:1000 with the same result I need to create the tgz file with user 1000:1000 to get the expected behavior Workaround: execute `RUN chown -R 1000:1000 myapp` after the ADD [Truncated] OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 15.52GiB Name: jicenteno-pc ID: 5IAI:AO5P:7W2Y:X72R:U5PM:EKWI:K3NX:Q2WH:JHOP:PY6J:SSQH:FFRJ Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false WARNING: No swap limit support ``` **Additional environment details (AWS, VirtualBox, physical, etc.)** Answers: username_1: Hi, I'm having the same problem and here you can see what it seems to be a good explanation: https://github.com/moby/moby/issues/35525#issuecomment-413947158
dls-controls/malcolmjs
321507310
Title: Add a configurable footer space to the UI Question: username_0: We need to be able to inject a footer into the index.html page, therefore we need to add a feature switch to allow us to set the height of the main area as 100vh - #px where # is set in the feature switch json. Answers: username_0: Remember to consider the snackbar here, it should appear above the bottom nav bar, if this isn't possible then the snackbar could drop in from the top. We may also want to consider using a toastr instead. Status: Issue closed
rs/zerolog
895289496
Title: Customize colors per log level Question: username_0: Maybe it's possible with custom formatting, but not sure how to achieve that. For example, I'd like to change the colors that render for each log level in the debug console, perhaps even apply that color to the whole line (not just the word which indicates the log level). Thanks
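One way this is commonly approached (a sketch, not an official recommendation): `zerolog.ConsoleWriter` exposes `Format*` hooks, so the level token — and, via the other hooks, the rest of the line — can be wrapped in whatever ANSI colors you want:
```go
package main

import (
	"fmt"
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	output := zerolog.ConsoleWriter{Out: os.Stdout}
	// Customize how the level token is rendered.
	output.FormatLevel = func(i interface{}) string {
		level, _ := i.(string)
		switch level {
		case "error", "fatal":
			return fmt.Sprintf("\x1b[31m%-6s\x1b[0m", level) // red
		case "warn":
			return fmt.Sprintf("\x1b[33m%-6s\x1b[0m", level) // yellow
		default:
			return fmt.Sprintf("%-6s", level)
		}
	}
	// FormatMessage / FormatFieldValue can be overridden the same way
	// if the whole line should be colored, not just the level.
	log.Logger = log.Output(output)

	log.Warn().Msg("this level is rendered in yellow")
	log.Error().Msg("this level is rendered in red")
}
```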
DataDog/datadog-agent
459632379
Title: Pull in latest integrations-core to fix Apache TLS validation errors Question: username_0: I'm trying to use a fix that was applied in `integrations-core` in April 2019 (https://github.com/DataDog/integrations-core/pull/2475). I'm using the `6.11.3-1` version of the datadog-agent RPM, but I don't see this fix yet in `/embedded/lib/python2.7/site-packages/datadog_checks/apache/apache.py`. I'm not sure the process here, but I would like to get this fix into production so I can cut down on the amount of noise from datadog agent. Answers: username_1: hey @username_0 - thanks for that contribution! Looks like it was released with [1.6.0](https://github.com/DataDog/integrations-core/blob/master/apache/CHANGELOG.md#160--2019-05-02) in the Apache check, so you can likely run the `datadog-agent integration install datadog-apache==1.6.0` [command](https://github.com/DataDog/datadog-agent/blob/master/docs/agent/integration.md#install) on the agent. Let me know if that works for you. username_0: Hi, @username_1 I'll give that a go, but I was hoping to get this integration version bundled into the RPM. Will this be bundled into the 6.12 release? username_1: Hey @username_0, sorry! Forgot to mention that at the current moment datadog-apache v1.6.0 should be packaged in with v6.12.0 of the agent. username_0: Thanks, I'll look out for the 6.12.0 RPM when that's built/released. Status: Issue closed
anancds/document
638079956
Title: Positional-only parameters and keyword-only Arguments Question: username_0: def name(positional_only_parameters, /, positional_or_keyword_parameters, *, keyword_only_parameters): Answers: username_0: ![image](https://user-images.githubusercontent.com/4735508/84557833-daf24e80-ad60-11ea-8758-73991322e940.png) username_0: def name(p1, p2, /, p_or_kw, *, kw): def name(p1, p2=None, /, p_or_kw=None, *, kw): def name(p1, p2=None, /, *, kw): def name(p1, p2=None, /): def name(p1, p2, /, p_or_kw): def name(p1, p2, /): username_0: https://www.zhihu.com/question/57726430 username_0: https://blog.csdn.net/littleRpl/article/details/89497670
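A small runnable illustration of that signature shape (requires Python 3.8+; the function and parameter names are arbitrary):
```python
def pow_mod(base, exponent, /, modulus=None, *, verbose=False):
    """base/exponent are positional-only, modulus goes either way, verbose is keyword-only."""
    result = base ** exponent if modulus is None else pow(base, exponent, modulus)
    if verbose:
        print(result)
    return result

pow_mod(2, 10)                    # ok
pow_mod(2, 10, modulus=97)        # ok
pow_mod(2, 10, 97, verbose=True)  # ok
# pow_mod(base=2, exponent=10)    # TypeError: positional-only parameters
# pow_mod(2, 10, 97, True)        # TypeError: verbose is keyword-only
```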
PaddlePaddle/X2Paddle
684457037
Title: OCR paddle2onnx failed. Question: username_0: Hi, there. I'd like to transfer a PaddleOCR model to onnx, the procedure I followed is from https://github.com/PaddlePaddle/PaddleOCR/issues/213 . According to the issue, the author succeeded, but I failed during my trial. ### Model: I downloaded the PaddleOCR model from: https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar And I renamed them to__model__ and__params__, according to issue 213. ### Procedure: ``` git clone https://github.com/PaddlePaddle/X2Paddle.git cd X2Paddle python setup.py install pip install paddlepaddle x2paddle -f paddle2onnx -m ../ch_det_mv3_db -s ./output ``` And the error message is: ``` paddle.__version__ = 1.8.4 Now, onnx2paddle support convert onnx model opset_verison [9, 10, 11],opset_verison of your onnx model is 10, automatically treated as op_set: 10. Translating PaddlePaddle to ONNX... Total:246, Current:1 : feed image Total:246, Current:4 : hard_swish Traceback (most recent call last): File "/usr/local/bin/x2paddle", line 9, in <module> load_entry_point('x2paddle==0.8.1', 'console_scripts', 'x2paddle')() File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.8.1-py3.5.egg/x2paddle/convert.py", line 273, in main paddle2onnx(args.model, args.save_dir, opset_version=args.onnx_opset) File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.8.1-py3.5.egg/x2paddle/convert.py", line 205, in paddle2onnx opset_version=opset_version) File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.8.1-py3.5.egg/x2paddle/op_mapper/paddle2onnx/paddle_op_mapper.py", line 55, in convert node = getattr(self.op_set, op.type)(op, block) File "/usr/local/lib/python3.5/dist-packages/x2paddle-0.8.1-py3.5.egg/x2paddle/op_mapper/paddle2onnx/opset9/opset.py", line 849, in hard_swish min_value = op.attr('min') File "/home/xulingchuan/.local/lib/python3.5/site-packages/paddle/fluid/framework.py", line 2155, in attr return self.desc.attr(name) paddle.fluid.core_avx.EnforceNotMet: -------------------------------------------- C++ Call Stacks (More useful to developers): -------------------------------------------- 0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int) 1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) 2 paddle::framework::OpDesc::GetAttr(std::string const&) const ---------------------- Error Message Summary: ---------------------- Error: Attribute min is not found at (/paddle/paddle/fluid/framework/op_desc.cc:507) ``` Great appreciate if anyone could help me with this. Thanks~ Answers: username_1: @username_0 please try to use x2paddle command with: --onnx_opset 11 username_0: 部署onnx模型花了些时间,已经确认转换成功且可以使用,感谢! Status: Issue closed
primefaces/primereact
841670765
Title: Add itemTemplate property to FileUpload Question: username_0: Used to create custom item elements in the container. ``` <FileUpload itemTemplate={customItemTemplate} ... /> const customItemTemplate = (file, props) => { // file: Current file object. // options.onRemove: Event used to remove current file in the container. // options.previewElement: The default preview element in the container. // options.fileNameElement: The default fileName element in the container. // options.sizeElement: The default size element in the container. // options.removeElement: The default remove element in the container. // options.formatSize: The formated size of file. // options.element: Default element created by the component. // options.props: component props } ```<issue_closed> Status: Issue closed
aws/amazon-sagemaker-examples
891516687
Title: 1_dataprep_dw_job_predmaint.ipynb failed CI Question: username_0: Link to the notebook: https://github.com/aws/amazon-sagemaker-examples/blob/master/use-cases/predmaint/1_dataprep_dw_job_predmaint.ipynb Error: --------------------------------------------------------------------------- Exception encountered at "In [4]": --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-4-c804398dbd2b> in <module> 6 # get the Data Wrangler container associated with our region 7 region = boto3.Session().region_name ----> 8 container_uri = get_dw_container_for_region(region) 9 10 dw_output_path_prm = output_path /opt/ml/processing/input/demo_helpers.py in get_dw_container_for_region(region_in) 53 Get the Data Wrangler container based on the given region 54 ''' ---> 55 container_uri = dw_container_dict[region_in] 56 return container_uri KeyError: 'us-west-2'
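The traceback above is a plain `KeyError` from a region-to-container lookup table that lacks an entry for `us-west-2`. A hedged sketch of a friendlier failure mode for the helper (the name mirrors the notebook's helper, but the dictionary is passed in explicitly here and its contents are unknown):
```python
def get_dw_container_for_region(region_in, container_dict):
    """Return the Data Wrangler container URI registered for `region_in`."""
    try:
        return container_dict[region_in]
    except KeyError:
        raise ValueError(
            f"No Data Wrangler container registered for region '{region_in}'. "
            f"Known regions: {sorted(container_dict)}"
        ) from None
```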
WebAssembly/WASI
626970255
Title: Poll + Callbacks Question: username_0: WASI currently uses a traditional "poll" api for some (in cases, asynchronous) functionality. For the uninitiated, the `poll_once` function writes tagged unions into a user-supplied buffer. This works fine, and is the tried and true method, but a major downside is that it's difficult to expand it to new, optional modules without creating a new copy for each new module. Otherwise, even though the user could ignore tags that they don't understand, the size of the tagged union could grow larger as new modules add new events, which obviously won't work. There are two alternatives that I can think of: 1. Use callbacks instead of polling. This appears to be simpler for programs at first glance, since they don't need to make sure to poll at some point to get events, but it brings up a number of questions that we aren't able to answer yet. For example, are callbacks called on a separate thread? Are they preemptive? I'm sure more answerless questions would appear if this path were pursued. 2. Still use explicit polling, but run callbacks while polling instead of returning events. As a topical example, you could call `webgpu/map_read` (which returns a promise in the browser), and pass it a callback as an argument. However, this won't be called until you explicitly call `wasi/poll` (perhaps with an object returned from `webgpu/map_read`) later on, during which your webgpu callback will be called, as well as other callbacks, on the same thread and stack that you called `poll` from. This issue pertains to #79. Answers: username_1: (2) sounds a bit like the nested event loop model found in popular frameworks like gtk and qt: - https://doc.qt.io/qt-5/qcoreapplication.html#processEvents - https://developer.gnome.org/gtk3/stable/gtk3-General.html#gtk-main-iteration-do My understanding is that these work kind of like the web event loop but with the control inverted. I have no strong opinion as to whether these are good or bad. It does seem to lend itself to unbounded stack use, since callbacks can then trigger processing more callbacks before returning and so on. Maybe thats not a problem in practice? username_0: @username_1 That's an interesting point. We definitely would want to avoid unbounded stack growth here. I think we could enforce that a callback can't call poll, but I'm not sure if that'd ever actually happen in practice. However, I think we should enforce that poll won't trigger callbacks that are registers from callbacks and there's a quite clever way to do that. If `poll` takes a list of `promise`/`future`/etc objects and only calls the callbacks associated with those, then this problem is avoided. I think that would also be quite elegant and would bridge the gap between the async world (with promises) and the syncronous wasm world. For example, the poll module type could be roughly the following (assuming interface types and type imports): ```wat (module $wasi_poll (export "Future" (type $Future)) (@interface func (export "poll") (param $futures (array (ref $Future))) (result $result (variant [ $ok [$nevents u32] $err [$error string] ])) ) ) ``` username_2: In practice this would work the same as any low level C ABI which supports extensions; identifiers are reserved in the registry (in our case a central enum) and the union has the size of the largest possible member which known in the implementation so I don't quite see the argument for how this "would obviously not work"? username_0: @username_2 I don't think I explained that point very well. 
Basically, what I meant was this: The events "returned" from `poll_once` are distinguished by a tagged union. The data associated with each event varies, in both type and size. While additional, optional modules can add new tags and supply new events associated with those tags, the new events can never be larger than some specified size. If core wasi has a timer event and a file event, then a webgpu event can never supply more data than `max(sizeof(timer event), sizeof(file event))`. This may not be an issue in practice, perhaps there's a reasonable max size that can be set to begin with, but, in my eyes, it's quite inflexible, not to mention that it requires complex code to interpret the events, especially in a language that prefers to be safe.
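A minimal sketch of the sizing constraint username_0 describes, using Python's struct module purely for illustration; the event layouts and field widths below are invented stand-ins, not the actual WASI ABI.
```python
import struct

# Hypothetical fixed event layouts: a 1-byte tag, padding, then the payload.
TIMER_EVENT = "<B7xQ"      # tag + 64-bit deadline           -> 16 bytes
FILE_EVENT = "<B7xQQ"      # tag + file descriptor + nbytes  -> 24 bytes
CORE_EVENT_SIZE = max(struct.calcsize(TIMER_EVENT), struct.calcsize(FILE_EVENT))
print(CORE_EVENT_SIZE)     # 24: every slot in the user-supplied buffer is this big

# An optional module later defines a richer event...
WEBGPU_EVENT = "<B7xQQQQ"  # tag + four 64-bit payload fields -> 40 bytes
print(struct.calcsize(WEBGPU_EVENT) <= CORE_EVENT_SIZE)  # False: it no longer fits
# Growing the slot size to make it fit changes the ABI for every existing caller,
# which is the inflexibility the comment above is pointing at.
```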
scalameta/metals
435296170
Title: Not a valid command: metalsEnable Question: username_0: Good day all, I am new to metals and might have something set up wrong, so I apologize in advance. I did try what was mentioned in the troubleshooting section to no avail: Using metals with vscode I get the following error when trying to build a project. (It happens for all the projects I have tried so far).
```
[error] Not a valid command: metalsEnable
[error] Not a valid project ID: metalsEnable
[error] Expected ':'
[error] Not a valid key: metalsEnable
[error] metalsEnable
[error] ^
sbt exit: 1
```
I have added `addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.1.0-M13")` to /home/userd/.sbt/1.0/plugins/plugins.sbt but have also tried it with M14. - Operating system: Arch Linux - Editor: Visual Studio Code - Metals version: v0.5.0 - Coursier installed via AUR: 1.1.0-M14 Kind regards Calvin Answers: username_1: It is a known bug in coursier, please upgrade to `1.1.0-M13-2` username_0: Yes, I saw that from issue #383, which led me to try using 1.1.0-M14, but it does not work. username_1: which sbt and java versions are you using? username_0: The project I am running is the vanilla udash template: sbt new UdashFramework/udash.g8 with javac 1.8.0_212. sbt version = 1.2.8 username_1: Can't reproduce the issue with the project template even with coursier `1.1.0-M14`. Maybe @olafurpg has better ideas username_0: Ok thank you for your effort! I think I have this working now. Had to delete /tmp/metals6902378459201529676/sbt-launch.jar and then restart vscode. (but now I have a different error..) ```Failed to connect with build server, no functionality will work. scala.meta.internal.metals.BloopServers$NoResponse$: no response: bloop bsp``` I will close and browse the issues for this new error. username_1: have you tried deleting your `.metals` and `.bloop` directories as well? username_0: Ok I just tried it: 1) deleted .metals and .bloop then 2) metals import build. It did not work. I closed vscode, deleted them again, opened vscode, and this time it worked.. username_0: I appreciate the help! Status: Issue closed
Roam-Research/issues
609659809
Title: Filter list not updating properly even after I filter some of them out Question: username_0: **Describe the bug**
![image](https://user-images.githubusercontent.com/4127841/80682759-bad43c00-8ae0-11ea-9c4f-b4995c61f3b8.png)
After exclusion, tmp1 and tmp2 are still showing up in the filter list with a count of 1.
![image](https://user-images.githubusercontent.com/4127841/80682806-cde70c00-8ae0-11ea-97d2-e06f63680f8f.png)
This is completely wrong and makes using filters so much more difficult. This was a dummy example that I created, but on bigger pages like TODO this makes using the filters extremely hard. **System Information:** - Device: desktop computer - OS: MacOS Catalina 15.04 - Browser: Brave
fomantic/Fomantic-UI
436395216
Title: [Search] Search ignoring Diacritics docs issue Question: username_0: # Docs Issue ## Steps to reproduce 1. Go to https://fomantic-ui.com/modules/search.html 2. Scroll down to “Search ignoring Diacritics” ## Expected result JS should be in a code block ## Actual result Not in a code block Answers: username_1: Fixed by https://github.com/fomantic/Fomantic-UI-Docs/pull/115 Status: Issue closed
loogart/leburgerweek
239590229
Title: FAQ Page Question: username_0: I think we might have to add another page where people can learn about things. One page with mainly text, and when people want to learn more about Bud and Burgers, Just Eat or Buy a Friend a burger, it's all there. Let me know if that is possible. They can click "learn more" on the index jumbo images and go to that section of the page. We can also have the FAQ in the footer. Answers: username_1: @username_0 do you want this to be an "About" page? I would avoid the FAQ in the footer, put it in the about page for now. username_0: Yes - sure! Status: Issue closed username_1: ![screen shot 2017-06-30 at 2 45 11 pm](https://user-images.githubusercontent.com/7881400/27749735-bda00392-5da2-11e7-93c4-246616b80429.png) Created an about.html page where you can put info, FAQ, and different info sections about LBW. It's a generic page that you can customize to whatever you want @username_0
void-linux/void-packages
357851997
Title: System accounts that don't have _ prefix Question: username_0: - [ ] apache-storm - [ ] apache-tomcat - [ ] at - [ ] avahi - [ ] beanstalkd - [ ] bind - [ ] bitlbee - [ ] boinc - [ ] caddy - [ ] chrony - [ ] clamav - [ ] clockspeed - [ ] colord - [ ] couchdb - [ ] couchpotato - [ ] cups - [ ] dbus - [ ] deluge - [ ] dictd - [ ] dma - [ ] dnscrypt-proxy - [ ] dnsmasq - [ ] dovecot - [ ] elasticsearch - [ ] etcd - [ ] fcron - [ ] gdm - [ ] geoclue2 - [ ] gerbera - [ ] gitolite - [ ] gogs - [ ] gpsd - [ ] h2o - [ ] haproxy - [ ] icinga2 - [ ] inadyn - [ ] inspircd - [ ] jenkins - [ ] kodi-rpi - [ ] kubernetes - [ ] libvirt - [ ] lightdm - [ ] mariadb - [ ] minidlna - [ ] miniflux - [ ] mlocate - [ ] monero - [ ] mongodb - [ ] mopidy - [ ] mpd - [ ] munge - [ ] mysql - [ ] nbd - [ ] ndhc - [ ] nethack - [ ] network-ups-tools - [ ] nginx - [ ] nsd - [ ] nss-pam-ldapd - [ ] ntp [Truncated] - [ ] rspamd - [ ] rtkit - [ ] sddm - [ ] sickbeard - [ ] sndio - [ ] squid - [ ] suricata - [ ] sv-helper - [ ] taskd - [ ] tor - [ ] transmission - [ ] tuntox - [ ] twoftpd - [ ] umurmur - [ ] usbmuxd - [ ] util-linux - [ ] void-updates - [ ] vsftpd - [ ] znc - [ ] shadowsocks-libev
jkuusama/LitePlacer-DEV
117542819
Title: Feature request: Support up-camera component rotation / offset correction Question: username_0: The up-camera shall be used to determine the center and rotation of a component currently held by the nozzle. This could be achieved by detecting the edges and/or pins using computer vision. The software should apply the necessary corrections to accurately place the part. This is a must-have to place large parts, ICs in TQFP or QFN (no leads) packages, and might pave the way towards BGA or fine-pitched parts. Answers: username_1: I would vote for this one also. This would also avoid the ugly events when the nozzle misses the part from the reel and puts the needle inside the paste. Validating that the component was actually picked up is my most wanted feature at the moment. username_2: I would really like to see the up camera check component rotation and correct any off-center issues. We are having issues with TSOP and QFN. username_3: This is one of the basic features for a P&P machine. Sure, it needs time to implement it, but it should be on the roadmap. username_4: The up-camera feature is essential for placing ICs with very small distances between the pins. With the current software you have to jog the machine by hand to its position. This can't be done effectively if you want to produce small series with the liteplacer. username_5: +1 this should get priority username_6: No improvements on this issue so far? I just had a look at this video: https://www.youtube.com/watch?v=PDmg4tf3Aio username_4: May I send you some information in German about that thing? ----- Original Message ----- username_6: of course you may. zeilhofer.co.at/kontakt username_5: +1 for this, please.
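For illustration only, below is one common OpenCV recipe for the measurement requested in this issue: threshold the up-camera frame, take the largest silhouette, and read its center and rotation from the minimum-area rectangle. It assumes OpenCV 4, a backlit frame where the part is the largest bright blob, and a hypothetical file name; it is a sketch of the idea, not LitePlacer's implementation.
```python
import cv2

frame = cv2.imread("frame.png")                     # hypothetical up-camera capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)           # assume the part is the biggest blob
(cx, cy), (w, h), angle = cv2.minAreaRect(part)     # center (px) and rotation (degrees)

img_h, img_w = gray.shape
dx, dy = cx - img_w / 2, cy - img_h / 2             # offset from the nozzle axis (image center)
print(f"offset: ({dx:.1f}, {dy:.1f}) px, rotation: {angle:.1f} deg")
# A placer would convert px to mm with its camera calibration and subtract these
# values from the programmed placement position and rotation before placing.
```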
mapbox/mr-ui
377638215
Title: Card component Question: username_0: The Account Dashboard and Atlas Account Dashboard use a card to display an overview/preview of information that can lead to dedicated pages with more detailed information. This `Card` component should be a wrapper component with children props, can be with or without a header, and can be with or without an actionable icon button in the top right corner. In the past, we've used the actionable icon button to close/remove the card or to collapse the card in view. <img width="738" alt="screen shot 2018-11-05 at 3 59 59 pm" src="https://user-images.githubusercontent.com/9087698/48034448-4cddcd00-e114-11e8-82d7-e3e802c493ae.png"> We're open to discussion about this component! cc: @angel<issue_closed> Status: Issue closed
Ogeon/rust-wiringpi
548417740
Title: Why it's deprecated Question: username_0: The creator of the wrapped WiringPi library has chosen to deprecate it. As part of that, he has taken down the Git repository (at the time of writing), so it's practically impossible to make new builds of rust-wiringpi from the real source. Due to that, I'm no longer going to maintain this library. Gordon himself writes about it on http://wiringpi.com/wiringpi-deprecated/. Thanks, Gordon, for creating WiringPi, and sorry for being one of those who linked it statically. Also, big thanks to everyone who has contributed to rust-wiringpi. Answers: username_1: It's sad to see this deprecated :( there is still the GitHub mirror, though: https://github.com/WiringPi/WiringPi username_0: Good point. I hadn't seen it. I suppose it can be used for reproducing the builds, but other than that I believe there are (or can be) better alternatives for Rust. WiringPi is a bit too C-focused for a smooth integration.
joncoop/pygame-xbox360controller
743433518
Title: Xbox One controller button index problem Question: username_0: I tried the xbox360_controller module on a regular xbox one controller and an xbox adaptive controller and they both had the buttons in the wrong "index". For example, to get a,b,x and y, I have to use the get_pad() function. The dpad buttons are bound to the last 4 indices of the list returned by get_buttons(). PS. this is a really cool project and makes development faster. Answers: username_0: Turns out, it's because pygame2 changed its button mapping. Fixed by using pygame 1 username_1: How to use vibration function of gamepad in pygame ?
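Because the index shift comes from the pygame/SDL layer rather than from the module itself, a plain pygame probe makes the mapping visible; this generic sketch (not part of xbox360_controller) just prints the raw index or hat value for whatever is pressed, so the same controller can be compared under pygame 1 and pygame 2.
```python
import pygame

pygame.init()
pygame.display.set_mode((200, 200))   # a window is needed for the event queue
pygame.joystick.init()
js = pygame.joystick.Joystick(0)
js.init()
print(f"{js.get_name()}: {js.get_numbuttons()} buttons, {js.get_numhats()} hats")

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.JOYBUTTONDOWN:
            print("button index:", event.button)
        elif event.type == pygame.JOYHATMOTION:
            print("hat value:", event.value)   # d-pads often report here, not as buttons
```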
arquivo/pwa-technologies
111178667
Title: Embedded video on home page of Arquivo.pt should link to playlist Question: username_0: The embedded video on the home page of Arquivo.pt should link to the playlist "Arquivo.pt: the Portuguese Web Archive (official)" at: https://www.youtube.com/playlist?list=PLKfzD5UuSdETtSCX_TM02nSP7JDmGFGIE So that if users decide to watch the video on YouTube, when they finish they will get the next video also related to the PWA instead of a random one. Answers: username_0: Also put a link to the English version of the video on the English version of the home page. https://www.youtube.com/watch?v=dqG0VILi3gs&list=PLKfzD5UuSdETtSCX_TM02nSP7JDmGFGIE&index=10 on http://archive.pt Status: Issue closed
mfornet/acmx
599059237
Title: Close open tabs on View Code Question: username_0: When the View Code command is called, the following tabs should be closed: - Tabs about test cases - Tabs about files that are not related to the current project. Nice to have: the layout will change to single and the main solution is opened. Instead of opening the main solution, open the current active document if it is not a test case. Edge case: what about tabs that are not saved and should be closed? Options: 1. Save + Close 2. Ask to save + Close 3. Ignore 4. Close (without saving) (this should be avoided) Answers: username_1: At least in my workflow, I only need the tabs about test cases to be closed, and I think all of them could be saved at the time I want to switch to code view.
RedHatInsights/insights-results-aggregator
601120489
Title: Use "shortened" pagination on top and "full" pagination on bottom Question: username_0: Use "shortened" pagination on top and "full" pagination on bottom Preliminary design: https://marvelapp.com/852jaj9/screen/66458352 Answers: username_1: Will be fixed in https://gitlab.cee.redhat.com/service/uhc-portal/merge_requests/1229 username_1: done Status: Issue closed
jdesboeufs/connect-mongo
398506167
Title: req.logout() isn't logging a user out Question: username_0: Hello! I'm calling the req.logout() function on my logout path, and it isn't logging out my user. I took a look at issue #140 and tried that implementation to no avail. I have also tried using req.session.destroy(), but after I call that I am then unable to log in with any username or password. Any ideas on how to fix it? Thanks! -Craig Answers: username_1: Hello. Your problem is at a higher level than session persistence. You should have the same behavior with the default MemoryStore. Status: Issue closed
hyperopt/hyperopt
380870966
Title: scope.int does not affect fmin() return types Question: username_0:
```
{'colsample_bytree': 0.4933562654213974,
 'gamma': 0.4152873392462757,
 'learning_rate': 0.026329714594146604,
 'max_depth': 5.0,
 'n_estimators': 209.0,
 'reg_lambda': 7.684806554444843,
 'subsample': 0.5908104152037739}
```
This is workable, but less than ideal when I have a large variety of models and search spaces and would rather not have to manually do something like the following for every possible model type:
```python
params = {
    **default_params,
    **best_params,
    **{'max_depth': int(best_params['max_depth']),
       'n_estimators': int(best_params['n_estimators'])}
}
```
Answers: username_1: I created a PR to address the issue upstream. https://github.com/hyperopt/hyperopt/pull/444 Status: Issue closed username_0: Thanks @username_1!
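A common workaround, sketched here under the assumption that the caller still has the space object available: fmin returns only the raw label values (hence the floats above), but hyperopt.space_eval re-evaluates the space with those values, so scope.int wrappers are applied and the manual casts become unnecessary. The space and objective below are made-up stand-ins, not the reporter's model setup.
```python
from hyperopt import fmin, hp, space_eval, tpe
from hyperopt.pyll import scope

space = {
    'max_depth': scope.int(hp.quniform('max_depth', 2, 10, 1)),
    'n_estimators': scope.int(hp.quniform('n_estimators', 50, 300, 1)),
    'learning_rate': hp.loguniform('learning_rate', -5, 0),
}

def objective(params):
    # Dummy loss; a real objective would train a model with these params.
    return (params['max_depth'] - 6) ** 2 + (params['n_estimators'] - 200) ** 2 * 1e-4

best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
print(best)                      # raw label values, e.g. {'max_depth': 5.0, ...}
print(space_eval(space, best))   # same point with scope.int applied, so ints come back
```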
googleads/googleads-mobile-unity
337294190
Title: AdMob doesn't work on Huawei/Apple/Xiaomi phones Question: username_0: Hey, so, I always test my games, apps, etc. on Huawei phones. I just figured out that ads like banners and interstitials don't work on Huawei, Xiaomi and Apple phones - No fill error. Also, I tested the same app on Altacet and Samsung phones - it worked perfectly. I tried the same thing on your sample app Hello World and got the same result (on Huawei, Xiaomi and Apple phones it doesn't work). In the past, like 1 month ago, all ads worked on all phones (Huawei, Apple, Samsung etc...). I tested the app from Android Studio and the game from Unity. I hope you'll help me :) Regards, Jacob Answers: username_1: @username_0 on iOS, are you testing with a bundle id that corresponds to an app published on the App Store? Are you able to load test ads without issue? Status: Issue closed username_1: Closing due to non-response.
krrish94/CarKeypoints
463111928
Title: Request for ASCII mode version of the pretrained model Question: username_0: I just failed to load the pretrained model file and got an error like `Failed to load function from bytecode: binary string: not a precompiled chunk /root/torch/install/bin/lua: /root/torch/install/share/lua/5.3/torch/File.lua:314: bad argument #1 to 'setupvalue' (function expected, got nil)`. Seems that something differs between the platforms we use. Would you please provide an ASCII mode model file? Thanks. Answers: username_1: The model has been trained (and the code has been tested) with CUDA 7 and CuDNN v5. It could be a problem of a mismatch between your CUDA / CuDNN version. Status: Issue closed
wubostc/virtualized-table-for-antd
598795960
Title: size prop does not work Question: username_0: Here is an online demo (removing the components-related code throws an error there, but the issue can be reproduced locally): https://stackblitz.com/edit/react-gdqekr After applying the components from VTComponents, antd's size prop has no effect. Answers: username_1: After applying the components from VTComponents it throws an error right away. username_2: I'll take a look tonight. username_2: It is not compatible with antd 4 at the moment. username_0: OK, switching to the v3 package works now. When the table size is set to small, only the thead gets smaller; the tbody stays at the default size. username_1: Hi, a question: when you drag the browser window to its maximum size, does your table become misaligned? Mine does. username_0: No, mine doesn't. But I did run into misaligned fixed columns when using an antd table with left/right fixed columns; at the time the only fix was to iterate over the columns and set their widths. username_2: You have to adjust the inner styles yourself, because an extra div layer is added; take a careful look at the docs: https://github.com/username_2/virtualized-table-for-antd/blob/master/README.md Status: Issue closed
puma/puma
170956824
Title: Errno::EBADF Bad file descriptor - not a socket file descriptor Question: username_0: Puma 3.6.0 / Ruby 2.3.0-p0/ Rails 5.0.0.1. When running 'rails restart' I encounter the issue below. I checked out [https://github.com/puma/puma-dev/issues/12](https://github.com/puma/puma-dev/issues/12) and was unsuccessful in solving the issue. I also attempted using the master github branch but was still unable to restart using the new shortcut. Here is what I see: ``` Exiting /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/binder.rb:284:in `for_fd': Bad file descriptor - not a socket file descriptor (Errno::EBADF) from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/binder.rb:284:in `inherit_tcp_listener' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/binder.rb:91:in `block in parse' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/binder.rb:85:in `each' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/binder.rb:85:in `parse' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/runner.rb:133:in `load_and_bind' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/single.rb:85:in `run' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/puma/launcher.rb:172:in `run' from /Users/name/.rvm/gems/ruby-2.3.0/bundler/gems/puma-ddea4f345d17/lib/rack/handler/puma.rb:51:in `run' from /Users/name/.rvm/gems/ruby-2.3.0/gems/rack-2.0.1/lib/rack/server.rb:296:in `start' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands/server.rb:79:in `start' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:90:in `block in server' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:85:in `tap' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:85:in `server' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands/commands_tasks.rb:49:in `run_command!' from /Users/name/.rvm/gems/ruby-2.3.0/gems/railties-5.0.0.1/lib/rails/commands.rb:18:in `<top (required)>' from /Users/name/Desktop/airbnb/bin/rails:9:in `require' from /Users/name/Desktop/airbnb/bin/rails:9:in `<top (required)>' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/client/rails.rb:28:in `load' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/client/rails.rb:28:in `call' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/client/command.rb:7:in `call' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/client.rb:30:in `run' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/bin/spring:49:in `<top (required)>' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/binstub.rb:11:in `load' from /Users/name/.rvm/gems/ruby-2.3.0/gems/spring-1.7.2/lib/spring/binstub.rb:11:in `<top (required)>' from /Users/name/Desktop/airbnb/bin/spring:13:in `require' from /Users/name/Desktop/airbnb/bin/spring:13:in `<top (required)>' from bin/rails:3:in `load' from bin/rails:3:in `<main>' ``` Answers: username_1: Did you update to the latest version of puma-dev? That was the fix for that issue. 
username_0: @username_1 yes I installed the latest version of puma-dev and received the error above username_1: Can you run 'puma-dev -V' and also check the full path to puma-dev running via 'ps' username_0: @username_1 ``` MacBook-Pro:app-name name$ puma-dev -V Version: v0.10 (go1.7) MacBook-Pro:app-name name$ ps PID TTY TIME CMD 54326 ttys000 0:00.12 -bash 69600 ttys000 0:09.51 puma 3.6.0 (tcp://localhost:3000) [app-name] 69603 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69604 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69605 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69606 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69607 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69608 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69609 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/con 69610 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69611 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69612 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69613 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69614 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69615 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69616 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69618 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/db 69619 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69620 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69621 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69622 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69623 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69624 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69625 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/app 69626 ttys000 0:00.01 
/Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69627 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69628 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/.rvm/gems/ruby-2.3 69629 ttys000 0:00.01 /Users/name/.rvm/gems/ruby-2.3.0/gems/rb-fsevent-0.9.7/bin/fsevent_watch --latency 0.1 /Users/name/Desktop/app-name/tes 55531 ttys001 0:00.15 -bash MacBook-Pro:app-name name$ ``` username_1: @username_0 Could you run `ps ax | grep puma` for me? Puma-dev isn't showing up at all in the normal `ps` because of the way process sessions work. username_0: @username_1 sure: ``` MacBook-Pro:User user$ ps ax | grep puma 46155 ?? S 0:06.45 /usr/local/Cellar/puma-dev/0.10/bin/puma-dev -launchd -dir ~/.puma-dev -d dev -timeout 15m0s 69600 s000 S+ 0:16.69 puma 3.6.0 (tcp://localhost:3000) [appname] 99391 s003 S+ 0:00.00 grep puma MacBook-Pro:User user$ ``` username_1: You have something very odd going on with spring because puma-dev isn't using that mechanism anymore, no should the app be launched via `rails s` which it appears to be in the backtrace. Oh, I reread your docs and you're using `rails restart`. I'm not certain that will work properly, try using the native mechanism of touching `tmp/restart.txt`. Status: Issue closed username_2: @username_1 I have encountered this problem as well. Rails 5.0.0.1 puma 3.6.0 No puma-dev macOS Sierra Here's my `ps ax | grep puma`: ``` 3335 s002 S+ 0:01.89 puma 3.6.0 (tcp://localhost:3000) [hats] 3342 s002 S+ 0:00.18 puma: cluster worker 0: 3335 [hats] ``` This seems independent from Rails though. If I start the server via `puma -b tcp://localhost:3000` and then `touch tmp/restart.txt` it yields the same issue. If I start it with `puma -b tcp://127.0.0.1:3000`, it has no problem. My guess is something to do with IPv6? I tried commenting out `::1 localhost` in `/etc/hosts` with no luck. I didn't reboot or do anything to clear my DNS though. This is far as my journey goes. 😄 I hope this info helps. username_2: @username_1 I see you fixed this for puma-dev via https://github.com/puma/puma-dev/commit/49f2549c14fb5156255809b30c4273241dc3a231. Is there a solution for those of us who just use puma without puma-dev? Thanks!
transientskp/tkp
97919010
Title: new skyregion created for every image in case of AARTFAAC Question: username_0: When we feed AARTFAAC data to TraP for every image a skyregion is created. This doesn't look right. This is caused by the fact that every AARTFAAC image has a different pointing center due to the rotation of the earth. Answers: username_0: related issue https://github.com/transientskp/banana/issues/89 username_0: I'm not sure yet, but it could be that the source association step is failing because of this, which yields a lot of new sources which in turn cause an explosion of forced fits. username_0: The source association not working correctly was a problem with the positions being wrongly put in the metadata by the AARTFAAC imaging pipeline. Source association works correctly now, still for every image a new sky region is created, not sure if that is intended. Will discuss with Bart about what was his plan here. username_1: Hey @username_0 , I wrote the skyregion code, so it may be easier to ask me! At the moment skyregions are matched very simply - do the central co-ords match up, does the radius match up, that's it. I'm guessing AARTFAAC images have a slowly drifting pointing centre which means their skyregion is always slightly offset to the previous one? username_2: Hi Tim, yes that is the case due to the RADEC of the image center changing with every image. What are the skyregions used for? username_0: I think the idea was to normalize the data (remove redundant information). Until now a lot of the metadata in a observation was the same, like the beam parameters. But we have no win if the center point is constantly moving. The passing around of the `xtr_radius` makes me think there is an unfinished idea of grouping images in skyregion based on the extraction radius? https://github.com/transientskp/tkp/blob/master/tkp/db/sql/statements/functions/getSkyRgn.sql username_1: Well, it's partially about normalization. It's a way of calculating the previous detection limits for a given position on sky, and also used for efficiently checking if there are 'null-detections', i.e. missing sources in an image. To replicate that behaviour for AARTFAAC data it sounds like you may have to de-normalize and check every new source position against every previous image. Sounds slow. username_0: Why would it be more efficient? Also, it looks like the null detection logic is working at the moment right, or do you see problems ahead? username_2: This might be a naive view, but shouldn't the skyregion be tied to a world coordinate? In which case, it shouldn't matter that the center of every image has a different world coordinate. Effect would be that the skyregion moves over the field of view. in the SIN projection, the beam parameters will remain the same over the field of view, although the sensitivity will vary of course...
flathub/net.openra.OpenRA
444087650
Title: heads up: next playtest and release will use Roslyn compiler and have new dependencies Question: username_0: Just thought I'd give you some warning, in case you're unaware of this. The next playtest and release of OpenRA is set to use the Roslyn compiler, which requires Mono 5.10 or later, preferably 5.12 or later. It also requires MSBuild. Answers: username_1: Fixed in 38daa0b6b78b3559886aab331ecdc0d1fd0ecd0b Status: Issue closed