repo_name | issue_id | text
---|---|---
klpdotorg/mobile | 159345600 | Title: Searching Questions while creating Survey
Question:
username_0: Add a filter to survey/id/questiongroup/id/questions where questiongroup.status = "active", so we only get the actively used set of questions. Alternatively, it would help if I could get a sort order where the questions used/associated more frequently are shown on top. @harisibrahimkv |
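A minimal sketch of the filter requested in the issue above, assuming a Django-style ORM; the `Question` model, the `questiongroup` relation, and the `status` field are illustrative guesses, not the project's actual schema:
```python
from django.db.models import Count

# Hypothetical helpers for GET /survey/<sid>/questiongroup/<gid>/questions.
def active_questions(survey_id, questiongroup_id):
    # Option 1: only questions belonging to "active" question groups
    return Question.objects.filter(
        questiongroup__survey_id=survey_id,
        questiongroup_id=questiongroup_id,
        questiongroup__status="active",
    )

def questions_by_usage(survey_id):
    # Option 2: sort questions by how many question groups use them, most used first
    return (
        Question.objects.filter(questiongroup__survey_id=survey_id)
        .annotate(num_groups=Count("questiongroup"))
        .order_by("-num_groups")
    )
```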
carpentries/community-facilitators-program | 977589395 | Title: Mentioning UTC times in email and workshop website
Question:
username_0: In addition to mentioning meeting/workshop timings in local times, it might be helpful for some participants and facilitators to have the same timing mentioned in UTC as well, especially in our emails and on the workshop website.
Our instructor sign up page already mentions UTC times.
Changes to the workshop template have been suggested at https://github.com/carpentries/workshop-template/pull/751
Instructors might wish to mention the time in the website in the following manner:
9:00 am - 4:30 pm CEST (7:00 am - 2:30 pm UTC)
We might wish to adopt similar practices in our emails and teaching plan documents.
[whenisgood.net](https://whenisgood.net/) allows you to get the local time for each respondent of a poll. So when emailing, both UTC and local times may be mentioned as shown below, in addition to sending the calendar invite (a small scripted example follows the listing):
UTC
Date: July 22, 2021
Time: 3 AM
America/New_York
Date: July 22, 2021
Time: 6 PM
Invitees: Ms. Participant A
Ottawa
Date: July 22, 2021
Time: 5 PM
Invitees: Mr. Participant B
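A small sketch of producing such a dual-time listing, assuming Python 3.9+ with the standard-library `zoneinfo` module; the meeting time and zone list below are illustrative values, not taken from a real poll:
```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Render a meeting time in UTC plus each participant's local zone.
meeting = datetime(2021, 7, 22, 15, 0, tzinfo=ZoneInfo("UTC"))

zones = {"UTC": "UTC", "New York": "America/New_York", "Ottawa": "America/Toronto"}
for label, zone in zones.items():
    local = meeting.astimezone(ZoneInfo(zone))
    print(f"{label}: {local:%B %d, %Y %I:%M %p %Z}")
```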
Answers:
username_1: Thanks, @username_0! I will get this recommendation shared with our team and get back to you shortly |
gcobb321/icloud3 | 639419743 | Title: In v2.2.0-rc6, HA restarts are prone to the following errors; restarting HA again is enough to recover
Question:
username_0: 2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error > Not Tracking Devices > iPhone123 (iPhone),
2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error for Meifeng (mei_feng_iphone)/mei_feng_iphone > The iCloud Account for <EMAIL> did not return any device information for this device when setting up Family Sharing.. 1. Restart iCloud3 on the Event_log screen or restart HA.. 2. Verify the devicename on the track_devices parameter if the error persists.. 3. Refresh the Event Log in your browser to refresh the list of devices.
2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error for Lm (iphone_lm)/iphone_lm > The iCloud Account for <EMAIL> did not return any device information for this device when setting up Family Sharing.. 1. Restart iCloud3 on the Event_log screen or restart HA.. 2. Verify the devicename on the track_devices parameter if the error persists.. 3. Refresh the Event Log in your browser to refresh the list of devices.
2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error for 6Gongzuoji (iphone6gong_zuo_ji)/iphone6gong_zuo_ji > The iCloud Account for 18150<EMAIL> did not return any device information for this device when setting up Family Sharing.. 1. Restart iCloud3 on the Event_log screen or restart HA.. 2. Verify the devicename on the track_devices parameter if the error persists.. 3. Refresh the Event Log in your browser to refresh the list of devices.
2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error for Lb (iphone_lb)/iphone_lb > The iCloud Account for <EMAIL> did not return any device information for this device when setting up Family Sharing.. 1. Restart iCloud3 on the Event_log screen or restart HA.. 2. Verify the devicename on the track_devices parameter if the error persists.. 3. Refresh the Event Log in your browser to refresh the list of devices.
2020-06-16 13:26:54 ERROR (SyncWorker_12) [custom_components.icloud3.device_tracker] None (None) iCloud3 Error for <EMAIL> > No devices to track. Setup aborted. Check `track_devices` parameter and verify the device name matches the iPhone Name on the `Settings>General>About` screen on the devices to be tracked.
Answers:
username_1: I've seen this too and have debug code running on my system to try to catch the error. I think it is some type of timing issue getting back device data from iCloud web services. I've restarted HA a zillion times to try to catch it but it always works. I'll probably just add error checking and retry code in the next few days.
username_0: At version v2.2.0-RC5, no error was reported in the log, but iCloud3 stopped getting the location. At that time, the Timestamp also stopped updating. Then I tried to run the icloud3_restart service and got an error, which seemed to be the same one as far as I remember. With previous versions, including 2.1, I have seen this happen many times with no errors reported, and I felt that the icloud3_restart service was basically ineffective when problems occurred. Could you improve this service? I hope this service will be useful in any circumstances, so as to avoid restarting HA.
username_1: The iCloud3 Release Candidate has been updated and can be downloaded here. See the Change Log - Release Candidates for a list of the items that were addressed in this Release Candidate.
The first item below is an important change that corrects the iOS App sensor monitoring function that generates the zone enter/exit and location change triggers which were not being detected after the iOS App and HA were updated.
The only programs that were updated are:
device_tracker.py, icloud3-event-log-card.js
------------------
I just uploaded it. As it turns out, a change to the iOS App and HA broke the trigger monitoring function in iC3. I found it yesterday while out running errands without iC3 loaded and the zone enter/exit and other triggers started coming in.
I'm closing this issue. If you continue to have problems after RC7, please open another issue.
Status: Issue closed
|
dlang/dub-registry | 417150123 | Title: Improve documentation about different linkers and subsystems used on Windows
Question:
username_0: See:
````
digitalmars.D.learn
DUB / compiling same source and config to different windows subsystems in 32/64 bit
````
The main problem is that there is no reference or description of the mapping:
x86 = Optilink
x86_64 = MS-64 coff
x86_ms32coff = MS-32 coff
IMO it would be better to use:
x86 = MS-32 coff
x86_64 = MS-64 coff
x86_optilink = Optilink
Answers:
username_1: See https://github.com/dlang/dub/pull/1661 (it's not exactly what you want, but does tackle the problem of the annoying wrong defaults for beginners). |
red-hat-storage/ocs-ci | 564058336 | Title: Currently our VMware DCs are not able to handle the production config
Question:
username_0: We cannot handle 3x 64GB compute nodes + 3x 16GB master nodes, which requires 240GB of free memory. But I see only these amounts of free memory in DC 11 and DC 12:
- DC 11: 225.68 GB free memory
- DC 12: 180.54 GB free memory
So for now I would go with setting up 32GB for compute nodes and 16GB for masters.<issue_closed>
Status: Issue closed |
amjadafanah/FX-SAAS-16 | 358367023 | Title: FX-SAAS-16 : ApiV1OrgsIdDeleteAnonymousInvalid
Question:
username_0: Project : FX-SAAS-16
Job : DEV
Env : DEV
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Sun, 09 Sep 2018 10:22:01 GMT]}
Endpoint : http://13.56.210.25//api/v1/orgs/s1RnMgM8
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-09T10:22:01.210+0000",
"errors" : true,
"messages" : [ {
"type" : "ERROR",
"key" : "",
"value" : "No class com.fxlabs.fxt.dao.entity.users.Org entity with id s1RnMgM8 exists!"
} ],
"data" : null,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] failed, not expecting [200] but found [200]
--- FX Bot ---
Answers:
username_0: Project : FX-SAAS-16
Job : DEV
Env : DEV
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Sun, 09 Sep 2018 10:22:01 GMT]}
Endpoint : http://172.16.17.32//api/v1/orgs/s1RnMgM8
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-09T10:22:01.210+0000",
"errors" : true,
"messages" : [ {
"type" : "ERROR",
"key" : "",
"value" : "No class com.fxlabs.fxt.dao.entity.users.Org entity with id s1RnMgM8 exists!"
} ],
"data" : null,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] failed, not expecting [200] but found [200]
--- FX Bot ---
Status: Issue closed
username_0: Project : FX-SAAS-16
Job : DEV
Env : DEV
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Sun, 09 Sep 2018 10:22:01 GMT]}
Endpoint : http://13.56.210.25//api/v1/orgs/s1RnMgM8
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-09T10:22:01.210+0000",
"errors" : true,
"messages" : [ {
"type" : "ERROR",
"key" : "",
"value" : "No class com.fxlabs.fxt.dao.entity.users.Org entity with id s1RnMgM8 exists!"
} ],
"data" : null,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] failed, not expecting [200] but found [200]
--- FX Bot --- |
coreos/fedora-coreos-tracker | 1051059058 | Title: add CI testing for Azure images
Question:
username_0: **Describe the enhancement**
We've recently created a Fedora Azure account and received free credits for doing testing. This means we can add Azure to our list of cloud providers we run automated tests against. Let's add CI testing for Azure to our pipeline like we have for AWS/GCP/OpenStack.
Answers:
username_0: found a few test failures when running through this:
```
--- FAIL: rhcos.selinux.boolean.persist (602.32s)
harness.go:413: TIMEOUT[10m0s]
--- FAIL: coreos.ignition.ssh.key (433.11s)
cluster.go:202: "[ ! -e ~/.ssh/authorized_keys.d/afterburn ]" failed: Process exited with status 1; logs from journalctl -t kola:
+ [ ! -e ~/.ssh/authorized_keys.d/afterburn ]
--- FAIL: non-exclusive-tests/ext.config.networking.nic-naming (3.58s)
harness.go:888: kolet failed: : kolet run-test-unit failed: Error: Unit kola-runext-37.service exited with code 1
systemctl status kola-runext-37.service:
× kola-runext-37.service
Loaded: loaded (/etc/systemd/system/kola-runext-37.service; static)
Active: failed (Result: exit-code) since Thu 2021-12-16 16:18:32 UTC; 1s ago
Process: 3390 ExecStart=/usr/local/bin/kola-runext-nic-naming (code=exited, status=1/FAILURE)
Main PID: 3390 (code=exited, status=1/FAILURE)
CPU: 7ms
Dec 16 16:18:32 kola-bf218837-8486216325 systemd[1]: Started kola-runext-37.service.
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3391]: + ip link
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3392]: + grep -o -e ' eth[0-9]:'
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3392]: eth0:
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3390]: + fatal 'detected eth* NIC naming on node'
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3390]: + echo 'detected eth* NIC naming on node'
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3390]: detected eth* NIC naming on node
Dec 16 16:18:32 kola-bf218837-8486216325 kola-runext-nic-naming[3390]: + exit 1
Dec 16 16:18:32 kola-bf218837-8486216325 systemd[1]: kola-runext-37.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 16:18:32 kola-bf218837-8486216325 systemd[1]: kola-runext-37.service: Failed with result 'exit-code'.
```
username_0: - `rhcos.selinux.boolean.persist` failure seems to be a flake. Passed in a re-run.
- `coreos.ignition.ssh.key` fix in https://github.com/coreos/coreos-assembler/pull/2618
- `ext.config.networking.nic-naming` fix in https://github.com/coreos/fedora-coreos-config/pull/1377 |
sr320/LabDocs | 201309321 | Title: RE Proteomics MSMS sample collection and storage
Question:
username_0: Is there a recommended sampling and storage procedure for this protocol?
https://github.com/sr320/LabDocs/blob/82c0f06869baa574a85479f24b7421aa1b76d5ca/protocols/ProteinprepforMSMS.md
I am interested in the potential to store coral samples from remote locations in RNAlater.
E.g., http://onlinelibrary.wiley.com.offcampus.lib.washington.edu/doi/10.1002/pmic.201100328/full
@username_1 do you have concerns about RNAlater?
Answers:
username_1: I'm glad you found that article! I had read that RNAlater works for microbes, but I was curious about multicellular organisms. It sounds like RNAlater may even be a better choice than snap freezing, although I would want to look at the methods of some of the other papers more closely.
I'm curious about the temperature at which the samples were stored after being preserved in RNAlater. They don't specify in this article, but I am going to check out the other ones.
I would also like to use a better MS technology to assess protein identifications. We are able to identify thousands of proteins with the machines we use, so I'd like to make sure the robustness of identification holds up with both preservation methods. This may be a good thing to discuss with Brook and @sr320 to see if we can do a simple experiment with different tissue preservation methods and make sure our pipeline yields the results we expect if we use RNAlater.
username_1: I did a bit of poking around and I found this article especially convincing: http://www.sciencedirect.com/science/article/pii/S2212968515300222
Note that in most of the RNAlater protocols, after letting the tissue stew in the RNAlater for 24 hours they are stored at -80C. Sometimes -20C. I'm guessing that would be difficult in a field situation as well.
username_0: Thanks @username_1 This is actually feasible in Moorea and what I had them do for the last samples collected (https://username_0.github.io/Putnam_Lab_Notebook/Sample_Collection_RNALater/). They were then transported on dry ice, so in theory should be good samples for proteomics. I will likely not be able to get samples for snap frozen RNALater comparison until April when I travel with a dry shipper, but would be super interested in collaborating on a comparison!
username_1: The comparison would probably be easier to do with relatively unimportant tissue in the lab, but I will definitely keep you in the loop. Based on that paper I found, though, it does seem that RNAlater preservation still allows for the same number of protein IDs as snap freezing, even with a high mass accuracy instrument that will be detecting proteins across the abundance spectrum. Given those findings, I'm not sure if it is even necessary to investigate this further, but I will see how the PIs feel.
username_0: Sounds good, Thanks!
Status: Issue closed
|
sitespeedio/browsertime | 511740112 | Title: Unable to start Android session on remote server (Browserstack)
Question:
username_0: sitespeed.io version = 10.2.0
OS = macOS Catalina 10.15
Trying to run simple test
`module.exports = async function (context, commands) {
return commands.measure.start('https://google.com');
};`
### speedsite.config.json
`{
"browsertime":{
"iterations": 1,
"selenium": {
"url": "http://user:[email protected]/wd/hub",
"capabilities": {
"browserName" : "chrome",
"device" : "SamsungGalaxyS8",
"realMobile" : "true",
"browserstack.console":"info",
"browserstack.networkLogs": true,
"browserstack.appium_version" : "1.6.5",
"name": "Sitespeed.io performance audit",
"enablePerformanceLogging": true
}
},
"visualMetrics": false,
"video": false,
"headless": false,
"screenshot": true
}
}
`
### Running with NPM script
` "test:perf": "sitespeed.io ./src/integration/performance/perf.js -n=1 --config=speedtest.config.json --multi",
`
### Got an error
`[2019-10-22 18:03:30] ERROR: [browsertime] BrowserError: Appium error: unknown error: unhandled inspector error: {"code":-32601,"message":"'Browser.setWindowBounds' wasn't found"}
(Session info: chrome=76.0.3809.89)
(Driver info: chromedriver=76.0.3809.68 (420c9498db8ce8fcd190a954d51297672c1515d5-refs/branch-heads/3809@{#864}),platform=Linux 3.19.8-100.fc20.x86_64 x86_64)
(Session info: chrome=76.0.3809.89)
(Driver info: chromedriver=76.0.3809.68 (420c9498db8ce8fcd190a954d51297672c1515d5-refs/branch-heads/3809@{#864}),platform=Linux 3.19.8-100.fc20.x86_64 x86_64)
at SeleniumRunner.start (/Users/username_0/Documents/GitHub/qa_automation_ts/node_modules/sitespeed.io/node_modules/browsertime/lib/core/seleniumRunner.js:121:13)
at process._tickCallback (internal/process/next_tick.js:68:7)
[2019-10-22 18:03:33] DEBUG: [browsertime] Closed the browser.
[2019-10-22 18:03:33] ERROR: [browsertime] No data to collect
`
Link to Appium logs: https://gist.github.com/username_0/26c640060d184aa35d60105eae22065c
If I add `android : true` to the config, I get another error:
`[2019-10-22 18:12:18] INFO: Versions OS: darwin 19.0.0 nodejs: v10.16.3 sitespeed.io: 10.2.0 browsertime: 6.1.3 coach: 4.0.2
[2019-10-22 18:12:18] ERROR: TypeError: Cannot read property 'id' of undefined
at Android.initConnection (/Users/username_0/Documents/GitHub/qa_automation_ts/node_modules/sitespeed.io/node_modules/browsertime/lib/android/index.js:20:28)
at process._tickCallback (internal/process/next_tick.js:68:7)
`
Answers:
username_0: I tried this solution. Did not help
username_1: The problem is that when we use Chrome we lean heavily on the Chrome DevTools Protocol via https://github.com/cyrus-and/chrome-remote-interface, since the Selenium support for CDP is limited (you can set but not get, if I remember correctly).
Is there a way to get the IP/host to use for CDP from Browserstack? Setting it to hub-cloud.browserstack.com doesn't seem to work.
username_1: To make this work someone needs to add https://github.com/sitespeedio/selenium/blob/4a71646fc54880025217da8b35b69be35482d0b4/chrome.js#L821-L836 into Selenium. At the moment Selenium only supports sending a devtools command, but we also get information from devtools. We used to run our own "fixed" version of Selenium but gave that up because we wanted to run the latest stable (or at least the latest release) and then moved over to the current setup.
username_1: That is merged in Selenium but not released yet.
username_1: This is merged in Selenium so maybe we should have a new go and test if it works. |
mozilla/addons-server | 572433010 | Title: Handle updating an existing block with a regex
Question:
username_0: When we import a block from the v2 blocklist that contains a regex in the guid field, we expand the regex to a separate Block instance for each matching guid. This is correct (and what we want). What is currently undetermined is the behaviour if that block is later updated or deleted - if changing a single block on AMO results in changes to a block on kinto that targeted multiple guids, then we're making unexpected changes; if we create a new block (for updates), then the existing block remains.
Options:
1. Don't make changes on kinto to blocks imported from a regex - add a warning message and expect the change to be manually made via kinto admin webui by the admin. Simplest to implement but places an extra burden on the admin; also leaves kinto admin webui as a dependency until v2 blocklist is decommissioned.
2. Attempt to replace the existing regex block on kinto with multiple single-guid blocks. More complex (so more edge cases and bugs), and it could make adding the blocks take a lot more time; expanding the blocks could significantly (and inadvertently) increase the size of the v2 blocklist - e.g. deleting a block on AMO that was imported from a kinto block that was a regex covering 100 guids would mean deleting the existing kinto block and adding 98 new block records.
I don't think adjusting the existing regex automatically from AMO is a feasible option.
(1) would be the option to choose if we think editing/deleting existing blocks in the v2 blocklist will be a rare task; (2) would ease the administrative burden if it was a frequent task - but at the cost of complexity and bloating the v2 blocklist.
Answers:
username_0: @username_2 @username_1
username_1: I'm ok with doing (1) if the period where we run v2 and v3 isn't very long (say less than half a year).
I agree that (2) is too complex and probably not worth the effort given it is supposed to be of use for a transitional period only.
username_2: Yeah, (1) sounds like the most reasonable approach, at least given the current plan of shipping v3 on ESR. If that changes we will need to re-evaluate.
Status: Issue closed
username_3: I've found some blocks coming from a regex submission among the guids that were batch imported on -dev.
The edit page for these blocks will show the following warning:

The editing and sign-off part (if applicable) are still working as before. |
stfalcon-studio/ChatKit | 213449912 | Title: Ordering dialogs
Question:
username_0: Hi!
I'm new on android development... I'm using this library and is amazing!, it works perfectly, but now I want to keep the dialog list ordered by last messages.
My problem is not the SQL query or how to get the List<Dialog> ordered, but how to set the order in this library (I thought it was ordered by looking at the lastMessage's createdAt, but it seems not)...
Can you help me, please?
Thanks!
Answers:
username_1: If I'm not mistaken, it is ordered by the order you insert the messages using addToStart; the ordering has nothing to do with the createdAt field.
Just load the messages by order and add them one by one, from the oldest to the newest
username_0: Sorry, but I'm talking about the DialogsList...
I don't speak English well, so I hope you understand me... I want to keep the dialogs list ordered all the time (I'm using websockets so the order can change, but I'm also using EventBus, if that is needed)
Thanks!
username_2: I haven't seen a proper way of achieving this, nor have I tried, but off the top of my head, a way around the limitation would probably be to first delete the dialog by invoking ```adapter.deleteById(String id)``` and then re-add it at the top of the adapter. As per the documentation this can be achieved by invoking ```adapter.addItem(int position, DIALOG dialog)``` where ```position``` indicates the position of the new dialog.
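A minimal sketch of that workaround, assuming ChatKit's `DialogsListAdapter` and the two methods quoted above; the helper class and the dialog's `getId()` accessor are illustrative:
```java
import com.stfalcon.chatkit.commons.models.IDialog;
import com.stfalcon.chatkit.dialogs.DialogsListAdapter;

// Hypothetical helper: when a dialog gets a new message, move it to the top
// by deleting it and re-inserting it at position 0, as described above.
public final class DialogOrderHelper {
    private DialogOrderHelper() {}

    public static <D extends IDialog<?>> void moveToTop(DialogsListAdapter<D> adapter, D updatedDialog) {
        adapter.deleteById(updatedDialog.getId()); // remove the stale entry, if present
        adapter.addItem(0, updatedDialog);         // re-add at the top of the list
    }
}
```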
Status: Issue closed
username_0: Thanks! But I moved to my own implementation of RecyclerView since I need a much simpler and more "focused case" implementation... Though it's an amazing library! I'll close this issue... |
bcgov/entity | 696291544 | Title: AUTO-ANALYZE-Add Year Check to Well-Formed Name for BC
Question:
username_0: ## Description:
The year at the end of a name must be the current year. Please discuss with Genevieve to get the details.
Acceptance for a Task:
- [ ] Requires deployments
- [ ] Add/ maintain selectors for QA purposes
- [ ] Test coverage acceptable
- [ ] Linters passed
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed
Answers:
username_1: These are the rules from the manual. @username_2, where did the rules in the ticket come from? If it was a change of name it would follow the rules below, which would allow an older year. Were the rules in the ticket just for incorporations? Also, the year is required to be in brackets as per the manual? I emailed the examiners for a response as well :)
- Numbers may be used in company names, but only as the distinctive element (e.g. 123456 Ltd.)
- A year may be used in a name:
  - If it is at the beginning of the name there are no restrictions (e.g. 1910 Dog Grooming Ltd.)
  - If it is at the end of the name it must be the year of incorporation or amalgamation and it must be in brackets (e.g. Pacific Enterprises (1989) Ltd.) - a rough sketch of this check follows below
- The incorporation number may be used as the name of a BC company. The accepted format is “345678 B.C. Ltd.”
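A rough sketch of the year-at-the-end check quoted above; the regex and the list of corporate endings are assumptions for illustration, not the registry's actual name-examination rules:
```python
import re
from datetime import date
from typing import Optional

# Matches a bracketed 4-digit year just before a corporate ending, e.g. "(1989) Ltd."
YEAR_AT_END = re.compile(r"\((\d{4})\)\s+(LTD|LIMITED|INC|INCORPORATED|CORP)\.?$", re.IGNORECASE)

def year_at_end_ok(name: str, allowed_year: Optional[int] = None) -> bool:
    allowed_year = allowed_year or date.today().year
    match = YEAR_AT_END.search(name.strip())
    if match is None:
        return True  # no bracketed year at the end, nothing to flag
    return int(match.group(1)) == allowed_year

# "Pacific Enterprises (1989) Ltd." -> False today, True if allowed_year=1989
```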
username_2: @username_1 I believe Lorna got the requirements from the Names Examiners. Probably the google sheet they've been completing. It was raised at a check in as well and Sienna and I would have run it by Lorna.
I've only ever seen the year with the brackets around them, never without.
Great questions about the request types and dates. **We'll need the examiners to confirm, but here are my thoughts.**
Incorporations - must be current year or if >Dec 15th current year +1
Continuations - same as incorporations because they have to change their names to conform to BC's naming conventions
Restorations - I'm pretty sure that if it was restoring/reinstating an entity with the same name that had a year in it, we would take an older year.
Change of name - same as incorporations but probably some exceptions (ie. did a vertical or horizontal amalgamation and now want to change the name to the name of the amalgamating company whose NOA they didn't adopt)
Amalgamations - same as incorporations but probably some exceptions (ie. want to use name of one of the amalgamating companies and it has a year in its name)
Also, I'm not sure about other entities (GPs, LLPs, societies, etc.) - whether they can use the year in their name.
Status: Issue closed
|
OpenBCI/OpenBCI_GUI | 444048513 | Title: LSL stream cannot be started.
Question:
username_0: ## Problem
In OpenBCI_GUI, the LSL stream cannot be started. Pressing the "start" button makes the application's GUI freeze.
## Expected
When the button start is pressed, the LSL stream starts and the OBCI_GUI's GUI keeps running.
## Operating System and Version
Linux ubuntu 16.10, Windows 10

## GUI Version
410, 407
## Running standalone app
Are you running the downloaded app or are you running from Processing 3
I am not running anything special, and the application is not running from Processing 3.
## Type of OpenBCI Board
Ganglion
## Are you using a WiFi Shield?
No
## Log
`
java.lang.InterruptedException: sleep interrupted
LSL selected from Protocol Menu
Hub: apparent sampleIndex jump from Serial data: 16 to 21. Keeping packet. (112)
numPacketsDropped = 5
Hub: apparent sampleIndex jump from Serial data: 156 to 161. Keeping packet. (113)
numPacketsDropped = 5
Hub: apparent sampleIndex jump from Serial data: 6 to 11. Keeping packet. (114)
numPacketsDropped = 5
Stream update numChan to 4
nPointsPerUpdate 8
dataToSend len: 32
[TimeSeries, obci_eeg1, EEG, 4, 1]
java.lang.RuntimeException: java.lang.NoClassDefFoundError: com/sun/jna/Platform
at processing.opengl.PSurfaceJOGL$2.run(PSurfaceJOGL.java:412)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: com/sun/jna/Platform
at edu.ucsd.sccn.LSL.<clinit>(LSL.java:976)
at edu.ucsd.sccn.LSL$StreamInfo.<init>(LSL.java:123)
at OpenBCI_GUI$Stream.openNetwork(OpenBCI_GUI.java:28160)
at OpenBCI_GUI$Stream.start(OpenBCI_GUI.java:27571)
at OpenBCI_GUI$W_networking.startNetwork(OpenBCI_GUI.java:27389)
at OpenBCI_GUI$W_networking.mouseReleased(OpenBCI_GUI.java:27113)
at OpenBCI_GUI$WidgetManager.mouseReleased(OpenBCI_GUI.java:28957)
at OpenBCI_GUI.mouseReleased(OpenBCI_GUI.java:12447)
at processing.core.PApplet.mouseReleased(PApplet.java:2777)
at processing.core.PApplet.handleMouseEvent(PApplet.java:2686)
at processing.core.PApplet.dequeueEvents(PApplet.java:2599)
[Truncated]
at jogamp.opengl.GLAutoDrawableBase$2.run(GLAutoDrawableBase.java:443)
at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1293)
at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:1147)
at com.jogamp.newt.opengl.GLWindow.display(GLWindow.java:759)
at com.jogamp.opengl.util.AWTAnimatorImpl.display(AWTAnimatorImpl.java:81)
at com.jogamp.opengl.util.AnimatorBase.display(AnimatorBase.java:452)
at com.jogamp.opengl.util.FPSAnimator$MainTask.run(FPSAnimator.java:178)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: java.lang.ClassNotFoundException: com.sun.jna.Platform
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 24 more
Hub: apparent sampleIndex jump from Serial data: 168 to 173. Keeping packet. (115)
numPacketsDropped = 5
Hub: apparent sampleIndex jump from Serial data: 74 to 77. Keeping packet. (116)
numPacketsDropped = 3
`
Answers:
username_1: Please try running the GUI from Processing IDE, rather than using the standalone,
https://docs.openbci.com/OpenBCI%20Software/01-OpenBCI_GUI#the-openbci-gui-running-the-openbci-gui-from-the-processing-ide
Regards,
William
Status: Issue closed
username_0: Running the GUI from Processing is not an option, in the case of workshops for example; Processing really complicates the setup. Also, I have run into other problems trying to make OBCI_GUI run from the Processing IDE. Thanks anyway username_1.
In order to face one problem at a time: it seems that 4.1.2 fixes the problem. I tested it on Ubuntu 16.10 and:
`Networking: LSL selected from Protocol Menu
++++Opening dropdown dataType1
Stream update numChan to 4
nPointsPerUpdate 8
dataToSend len: 32
StringList size=5 [ "TimeSeries", "obci_eeg1", "EEG", "4", "1" ]
[DEFAULT]: Network Stream Started`
..and the GUI doesn't freeze!
https://github.com/OpenBCI/OpenBCI_GUI/releases/tag/v4.1.2-beta.2
(fixed already in v4.1.2-beta.0)
username_2: @username_0 There was a missing dependency for the LSL feature, including the library in the standalone explicitly fixed the issue. Thanks for confirming this! |
postgis/docker-postgis | 564405033 | Title: After building the Dockerfile, the Docker container doesn't run
Question:
username_0: I used this Dockerfile to build:
https://github.com/postgis/docker-postgis/tree/master/11-2.5
the docker cmd is:
docker run -t --name postgis -e POSTGRES_PASSWORD=<PASSWORD> -p 5432:5432 -d postgis:11-2.5
then the error is :
/docker-entrypoint-initdb.d/postgis.sh: line 17: --dbname=template_postgis: command not found

Answers:
username_1: @username_0 I can run this image as follows on my Mac.
```
$ docker run -t --name postgis -e POSTGRES_PASSWORD=<PASSWORD> -p 5432:5432 -d postgis/postgis:11-2.5
a24586a80e563107ca099a54198e5e92064b400a788ff15f9a6695bd09fde3d6
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a24586a80e56 postgis/postgis:11-2.5 "docker-entrypoint.s…" 21 seconds ago Up 20 seconds 0.0.0.0:5432->5432/tcp postgis
$ docker logs a24586a80e56
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start....2020-02-13 03:00:26.429 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-02-13 03:00:26.448 UTC [47] LOG: database system was shut down at 2020-02-13 03:00:26 UTC
2020-02-13 03:00:26.452 UTC [46] LOG: database system is ready to accept connections
done
server started
/usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/postgis.sh
CREATE DATABASE
UPDATE 1
Loading PostGIS extensions into template_postgis
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
Loading PostGIS extensions into postgres
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
waiting for server to shut down...2020-02-13 03:00:31.006 UTC [46] LOG: received fast shutdown request
.2020-02-13 03:00:31.008 UTC [46] LOG: aborting any active transactions
2020-02-13 03:00:31.011 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
2020-02-13 03:00:31.027 UTC [48] LOG: shutting down
2020-02-13 03:00:31.228 UTC [46] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2020-02-13 03:00:31.326 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-02-13 03:00:31.326 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-02-13 03:00:31.331 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-02-13 03:00:31.347 UTC [83] LOG: database system was shut down at 2020-02-13 03:00:31 UTC
2020-02-13 03:00:31.353 UTC [1] LOG: database system is ready to accept connections
```
What is your environment?
username_0: My environment is CentOS 7.6, but I want to change the Dockerfile and add some .so files to it, so I must build it myself. How do I fix this error:
/docker-entrypoint-initdb.d/postgis.sh: line 17: --dbname=template_postgis: command not found
username_0: And when I use your cmd to run the container, it fails:

I can't find postgis/postgis:11-2.5
username_1: Please paste your current Dockerfile.
Note: postgis/postgis:11-2.5 is built by the `make` command.
username_0: the Dockerfile is:
FROM postgres:11
LABEL maintainer="PostGIS Project - https://postgis.net"
ENV POSTGIS_MAJOR 2.5
ENV POSTGIS_VERSION 2.5.3+dfsg-3.pgdg90+1
RUN apt-get update \
&& apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR \
&& apt-get install -y --no-install-recommends \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION \
postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /docker-entrypoint-initdb.d
COPY ./PGSQLEngine.so /usr/lib/postgresql/11/lib
COPY ./st_geometry.so /usr/lib/postgresql/11/lib
COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/postgis.sh
COPY ./update-postgis.sh /usr/local/bin
username_2: @username_0 :
* Do you using the latest `postgres:11` image?
* try `docker pull postgres:11`
* or add '--pull' options for [docker build](https://docs.docker.com/engine/reference/commandline/build/)
* what are your image hashes?
* `docker images | grep postgis`
* your environment?
* `docker version`
username_0: The others haven't changed, for example the initdb-postgis.sh and update-postgis.sh

username_1: @username_0 If you remove following line, can you `docker run`?
```
COPY ./PGSQLEngine.so /usr/lib/postgresql/11/lib
COPY ./st_geometry.so /usr/lib/postgresql/11/lib
```
username_0: docker version:

the postgres:11 is the latest

username_0: Just now I removed the 2 lines:
COPY ./PGSQLEngine.so /usr/lib/postgresql/11/lib
COPY ./st_geometry.so /usr/lib/postgresql/11/lib
then build again:
[root@docker postgis-12]# docker build -t postgis:test .
Sending build context to Docker daemon 12.82MB
Step 1/8 : FROM postgres:12
---> cf879a45faaa
Step 2/8 : LABEL maintainer="PostGIS Project - https://postgis.net"
---> Using cache
---> cbf1dc0d309b
Step 3/8 : ENV POSTGIS_MAJOR 2.5
---> Using cache
---> efabfb522b6c
Step 4/8 : ENV POSTGIS_VERSION 2.5.3+dfsg-3.pgdg100+1
---> Using cache
---> 2a2251d70f7f
Step 5/8 : RUN apt-get update && apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR && apt-get install -y --no-install-recommends postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 0a0ee9ae8d64
Step 6/8 : RUN mkdir -p /docker-entrypoint-initdb.d
---> Using cache
---> 034399f26db4
Step 7/8 : COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/postgis.sh
---> c794d9c57ed4
Step 8/8 : COPY ./update-postgis.sh /usr/local/bin
---> 0d774ec22d98
Successfully built 0d774ec22d98
Successfully tagged postgis:test
docker run -it --rm postgis:test
[root@docker postgis-12]# docker run -it --rm postgis:test
****************************************************
WARNING: No password has been set for the database.
This will allow anyone with access to the
Postgres port to access your database. In
Docker's default configuration, this is
effectively any other container on the same
system.
Use "-e POSTGRES_PASSWORD=<PASSWORD>" to set
it in "docker run".
****************************************************
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
[Truncated]
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start....2020-02-13 03:26:35.565 UTC [45] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2020-02-13 03:26:35.566 UTC [45] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-02-13 03:26:35.587 UTC [46] LOG: database system was shut down at 2020-02-13 03:26:35 UTC
2020-02-13 03:26:35.592 UTC [45] LOG: database system is ready to accept connections
done
server started
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/postgis.sh
Loading PostGIS extensions into template_postgis
/docker-entrypoint-initdb.d/postgis.sh: line 17: --dbname=template_postgis: command not found
the same error
username_1: @username_0 Is this problem with postgis 12-2.5 or 11-2.5?
username_2: Strange; I have different hashes ... `0b4b85e22843 ` for `postgis:11-2.5`
```
$ docker images | grep postgis | grep 11 | grep 2.5
postgis/postgis 11-2.5 0b4b85e22843 6 minutes ago 467MB
$ docker images | grep postgres | grep 11
postgres 11 2c963c0eb8c6 10 days ago 332MB
```
my hashes for an input files
```
/docker-postgis/11-2.5$ sha256sum *
sha256sum: alpine: Is a directory
983343d72d4502fdf5e17844f4e9967287110f40f4d6525c1f3266b0a1635dd9 Dockerfile
c91a52f8333e9d6fb814235dc2d7bd533761743b1485e924cdee3599ea04aa0e initdb-postgis.sh
d6cb9278db7215f2c65c10e6075961490dabdcc93c6615eaea78ffa9030f319e README.md
ffb6a41cff824c8187a321feb0776cfe81bd3b27948acaf4d3c1532fd8cdbebf update-postgis.sh
```
username_0: I downloaded them from here, didn't change any other line, only added 2 lines:
COPY ./PGSQLEngine.so /usr/lib/postgresql/11/lib
COPY ./st_geometry.so /usr/lib/postgresql/11/lib

username_1: @username_0 Please do `docker pull postgres:11` and build.
username_2: Please rebuild .. and show the new image hashes ...
* `docker build --pull -t postgis:11-2.5 .`
* query the new hashes ...
```
$ docker images postgis:11-2.5
REPOSITORY TAG IMAGE ID CREATED SIZE
postgis 11-2.5 0b4b85e22843 20 minutes ago 467MB
```
username_2: @username_0 :
* sorry the hash id is not stable .. changing every build `docker build --no-cache --pull -t postgis:11-2.5 . `
* but in theory after you rebuild .. it should work ..
username_0: [root@docker 11-2.5]# docker build --no-cache --pull -t postgis:11-2.5 .
Sending build context to Docker daemon 16.9kB
Step 1/8 : FROM postgres:11
11: Pulling from library/postgres
Digest: sha256:6f2062ab11d720f4756f17da4d0a64534346cce33b7cdea9d7ac4f43eed9fc02
Status: Image is up to date for postgres:11
---> 2c963c0eb8c6
Step 2/8 : LABEL maintainer="PostGIS Project - https://postgis.net"
---> Running in 4749417a7c86
Removing intermediate container 4749417a7c86
---> 2bb98ac96922
Step 3/8 : ENV POSTGIS_MAJOR 2.5
---> Running in f9472cd18c53
Removing intermediate container f9472cd18c53
---> 7991fa132605
Step 4/8 : ENV POSTGIS_VERSION 2.5.3+dfsg-3.pgdg90+1
---> Running in 229589ee2298
Removing intermediate container 229589ee2298
---> f9b89e410510
Step 5/8 : RUN apt-get update && apt-cache showpkg postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR && apt-get install -y --no-install-recommends postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR=$POSTGIS_VERSION postgresql-$PG_MAJOR-postgis-$POSTGIS_MAJOR-scripts=$POSTGIS_VERSION && rm -rf /var/lib/apt/lists/*
---> Running in eed4af3f2266
Get:1 http://security.debian.org/debian-security stretch/updates InRelease [94.3 kB]
Ign:2 http://deb.debian.org/debian stretch InRelease
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [517 kB]
Get:5 http://apt.postgresql.org/pub/repos/apt stretch-pgdg InRelease [51.4 kB]
Get:6 http://deb.debian.org/debian stretch Release [118 kB]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [27.9 kB]
Get:8 http://deb.debian.org/debian stretch Release.gpg [2,410 B]
Get:9 http://deb.debian.org/debian stretch/main amd64 Packages [7,083 kB]
Get:10 http://apt.postgresql.org/pub/repos/apt stretch-pgdg/11 amd64 Packages [2,585 B]
Get:11 http://apt.postgresql.org/pub/repos/apt stretch-pgdg/main amd64 Packages [211 kB]
Fetched 8,199 kB in 54s (151 kB/s)
Reading package lists...
Package: postgresql-11-postgis-2.5
Versions:
2.5.3+dfsg-3.pgdg90+1 (/var/lib/apt/lists/apt.postgresql.org_pub_repos_apt_dists_stretch-pgdg_main_binary-amd64_Packages.lz4)
Description Language:
File: /var/lib/apt/lists/apt.postgresql.org_pub_repos_apt_dists_stretch-pgdg_main_binary-amd64_Packages.lz4
MD5: 9644687edef0c323059284467775c0c4
Reverse Depends:
postgresql-11-postgis-2.5-dbgsym,postgresql-11-postgis-2.5 2.5.3+dfsg-3.pgdg90+1
postgresql-11-postgis-2.5-scripts,postgresql-11-postgis-2.5
Dependencies:
2.5.3+dfsg-3.pgdg90+1 - postgresql-11 (0 (null)) postgresql-11-postgis-2.5-scripts (0 (null)) libc6 (2 2.14) libgdal20 (2 2.0.1) libgeos-c1v5 (2 3.7.0) libjson-c3 (2 0.11) liblwgeom-2.5-0 (2 2.5.0~beta1) libpcre3 (0 (null)) libproj12 (2 4.9.0) libprotobuf-c1 (2 1.0.1) libsfcgal1 (2 1.2.0) libxml2 (2 2.7.4) postgis (3 1.2.1) postgis (0 (null))
Provides:
2.5.3+dfsg-3.pgdg90+1 - postgresql-postgis (= ) postgresql-11-postgis (= )
Reverse Provides:
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
fontconfig-config fonts-dejavu-core libaec0 libarmadillo7 libarpack2
libblas-common libblas3 libboost-atomic1.62.0 libboost-chrono1.62.0
libboost-date-time1.62.0 libboost-filesystem1.62.0
libboost-program-options1.62.0 libboost-serialization1.62.0
libboost-system1.62.0 libboost-test1.62.0 libboost-thread1.62.0
libboost-timer1.62.0 libcgal13 libcurl3-gnutls libdap23 libdapclient6v5
[Truncated]
Setting up odbcinst1debian2:amd64 (2.3.4-1) ...
Setting up odbcinst (2.3.4-1) ...
Setting up libgdal20 (2.1.2+dfsg-5) ...
Setting up postgresql-11-postgis-2.5 (2.5.3+dfsg-3.pgdg90+1) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Removing intermediate container eed4af3f2266
---> f6817120e4a5
Step 6/8 : RUN mkdir -p /docker-entrypoint-initdb.d
---> Running in e8c249729f1b
Removing intermediate container e8c249729f1b
---> 911f08d9c69c
Step 7/8 : COPY ./initdb-postgis.sh /docker-entrypoint-initdb.d/postgis.sh
---> f069bc026922
Step 8/8 : COPY ./update-postgis.sh /usr/local/bin
---> 73b73d765bb8
Successfully built 73b73d765bb8
Successfully tagged postgis:11-2.5
[root@docker 11-2.5]# docker images |grep postgis
postgis 11-2.5 73b73d765bb8 3 minutes ago 467MB
username_0: It works. Now can I add my files to the Dockerfile, then rebuild it?
username_1: @username_0 You don't need to rebuild.
Create a new directory, copy your .so files, and create a Dockerfile like the following:
```
FROM postgis:11-2.5
COPY ./PGSQLEngine.so /usr/lib/postgresql/11/lib
COPY ./st_geometry.so /usr/lib/postgresql/11/lib
```
Then, you build this Dockerfile.
But we can't follow up on your .so files.
I recommend building the .so files from source code in your Dockerfile.
username_0: Thanks very much, now it works
Status: Issue closed
|
dhilt/ngx-ui-scroll | 720359763 | Title: [Issue]: wrong Datasource 'get' method behaviour after appended/prepended elements
Question:
username_0: Hello @username_1
I found one issue with the lib, related to wrong Datasource 'get' method behaviour after appending/prepending elements.
I have prepared a Stackblitz demo so it's easy to reproduce it
https://stackblitz.com/edit/angular-ngx-ui-scroll-crash-on-append-x5e1pr?file=src/app/app.component.ts
Steps to reproduce:
1) Use inverted & reversed datasource flow (I believe the issue is also reproducible with normal direction)
2) Once the demo has run, the Datasource 'get' method has requested the following ranges:
-9 -> 0
1 -> 10
11 -> 20
21 -> 30
3) append 25 items
4) start scrolling to top
Result:
Instead of requesting the next batch (30+25) -> (40+25), it requests 31 -> 40, which is wrong,
so the same elements will be displayed twice (see screenshot)
<img width="325" alt="Screenshot 2020-10-13 at 17 40 48" src="https://user-images.githubusercontent.com/57942649/95875968-43872180-0d7b-11eb-84a4-a072bd34cdcb.png">
It seems the Datasource 'get' method is not aware of newly appended/prepended elements and does not adjust its internal counters, so the issue pops up.
Please let me know if any other details are required.
Thank you in advance
Answers:
username_1: @username_0 Thanks for the demo, it makes the problem clear. And this is not a bug, this is about the difficulties of the Datasource API applied to the case of inversion. Point 4 reveals the details and the solution.
1. Update please ngx-ui-scroll to v1.8.5 as it has some stability improvements related to `Adapter.relax`.
2. `Adapter.relax` + `Adapter.isLoading`, this double protection is not necessary. The `isLoading` flag will always be `false` inside the `relax` callback. If it's not true in some case and you can reproduce it, I would really appreciate if you could open a separate issue for this.
3. If you are not at the bottom line of the viewport and there are some virtual items after the last rendered one, then appending should be done virtually. You may read about it [here](https://username_1.github.io/ngx-ui-scroll/#/adapter#append-prepend-sync). This behavior is achieved via `eof` setting:
```js
this.datasource.adapter.append({ items, eof: true });
```
4. The main problem is the indexes inconsistency. Initially you have 100 items with natural indexes from 0 to 99. But the datasource indexes are different. Due to inversion, you have a list of items with indexes from -100 to -1. The mapping you implemented in the `Datasource.get` method works fine until you break it by appending new zero-positive items. After appending, say, 25 items, your Datasource consists of 125 items with indexes from -100 to 24. But natural indexes of the data array are strongly positive: from 0 to 124. You have to manage this shift at the `Datasource.get` level in order to provide correct mapping after append:
```js
const start = this.addedCount - index - count;
```
The updated demo is here: https://stackblitz.com/edit/angular-ngx-ui-scroll-crash-on-append-fork (a fuller sketch of the shifted `get` follows below).
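A compact sketch pulling points 3 and 4 together; the assumptions are that `data` is the backing array and `addedCount` is maintained by the app and incremented on every append - the exact mapping still depends on the app's own inversion scheme:
```ts
import { Datasource } from 'ngx-ui-scroll';

const data: string[] = Array.from({ length: 100 }, (_, i) => `item #${i}`);
let addedCount = 0; // bump this every time items are appended after the initial load

const datasource = new Datasource({
  get: (index: number, count: number, success: (items: string[]) => void) => {
    // shift the index-to-array mapping by the number of appended items
    const start = addedCount - index - count;
    const items = start < 0 ? [] : data.slice(start, start + count).reverse();
    success(items);
  }
});
```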
username_0: @username_1 Thanks for your reply, please see my comments:
1 - will do
2 - indeed this is a real issue, and in big projects it can easily be hit without much effort. By big projects I mean ones with complex UI components to render, which require more CPU/render time to display. For example, I have a chatting app, similar to Slack, with complex UI components for chat messages, with dozens of sub-elements. And sometimes we face such an issue with 'relax'. I found a similar issue https://github.com/username_1/ngx-ui-scroll/issues/187
Also, please find a stackblitz demo
https://stackblitz.com/edit/angular-ngx-ui-scroll-relax-issue
I was not able to find a simple way to repro it using these simple examples,
but here are the clear steps re how to reproduce a crash:
1) press 'Append' button 5 times, so it will start 5 threads to append elements
2) then just scroll to top and you will get the issue
<img width="1300" alt="Screenshot 2020-10-13 at 22 18 41" src="https://user-images.githubusercontent.com/57942649/95905885-100bbd80-0da2-11eb-9555-c9171284f603.png">
And then, if you uncomment that 'isLoading' check return, it will start working well.
So I'm looking forward to finding a way to mitigate it.
Basically, 2 ideas here:
1) fix 'relax' method
2) fix 'relax' method and create a new set of 'safe' methods, e.g. 'safeAppend', 'safePrepend', 'safeReload', where internally they will be using a fixed version of the 'relax' method and do all the dirty job, so the end users do not need to care about it at all
3 - thanks. I know about this API, but was not able to properly understand its usage; will try here again
4 - I had the same idea in my mind. The only drawback I see here is that now we need to handle the case of resetting this shift, e.g. when we do 'reload' or other operations which lead to reloading/resetting the datasource internals. It all must be done carefully.
Thanks for your help!
username_0: As for the info, the `this.addedCount` shift should be added to the lastVisible/firstVisible values as well to get proper values
```
let lastVisibleIndex = this.datasource.adapter.lastVisible.$index;
let firstVisibleIndex = this.datasource.adapter.firstVisible.$index;
```
Status: Issue closed
username_1: Great! I think we can continue the investigation in the issue https://github.com/username_1/ngx-ui-scroll/issues/220. Closing this as the original (datasource) issue is resolved. |
quasarframework/quasar | 520913394 | Title: QMenu: re-apply target listeners/custom position
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I want to use QMenu with QColor. At the same time i want to show the current selected color for the element that opens the QMenu.
- Mutating data while selecting a color also re-renders the QMenu when it is used as a child.
- If QMenu is placed outside of the re-render the target listeners are lost
**Describe the solution you'd like**
- #1 Use `QMenu.updatePosition()` to re-apply target listeners
- #2 Another idea: Use `QMenu.updatePosition(x,y)` with the x/y coords from a @click event or custom values
**Additional context**
Here is a screenshot of the scenario. Currently you can click on a color field and then click on the card section (wrapped) with the listener attached to open the picker next to the mouse

Please share your ideas :)
Answers:
username_1: Please add a codepen with the problem in order to test it. Thank you.
username_0: I understand, not clear enough.
https://jsfiddle.net/edeltraud/q8pbz3cn/28/
Issue: Display QColor within a QMenu that opens on click at a color div that displays the current color (use while source data mutates)
- In fiddle QMenu uses the parent div, but the parent shouldn't be the target.
- The target should be the color div. But the re-renders destroy the listeners set by QMenu.
- QMenu can't be a child of each div, as it would re-render during QColor use (or is there a way to block that?)
- Probably I'm missing something. Can the click events be captured from the parent by QMenu? That would also be a solution.
Thank you for your advice
username_1: I think you want something like this: https://jsfiddle.net/n4vhbp5j/
Status: Issue closed
username_0: Genius!
Countless hours, and the solution is so simple. It seems I need to think more in Vue. Thank you, next time I should visit the forum first :)
shstrophies/SHSTrophies | 626233278 | Title: Fix title of page when you tap onto a trophy in the trophy page
Question:
username_0: Make sure the name of the trophy shows up when you click on the trophy. This used to work, but now it just says "Basketball Trophy Award(s)" as shown in the photo below.
<issue_closed>
Status: Issue closed |
cuhacking/atlas | 646999964 | Title: Set up Mapbox Android SDK
Question:
username_0: Set up integration with the Mapbox SDK for both the Android app (`android` module) as well as for the Kotlin Multiplatform Mapbox bindings (`mapbox` module).
For the Android app module, a Mapbox API Key should be read in from the `local.properties` file and included as a build config field in the `common` module so that it can be accessed from each platform frontend. (You can use [BuildKonfig](https://github.com/yshrsmz/BuildKonfig) for this). The `mapbox` module does not require an API key or any extra setup.<issue_closed>
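A rough sketch of wiring the key through BuildKonfig in the `common` module's `build.gradle.kts`; the property name, package name, and exact DSL details are assumptions based on BuildKonfig's documented usage, not this repo's actual setup:
```kotlin
import com.codingfeline.buildkonfig.compiler.FieldSpec
import java.util.Properties

plugins {
    id("com.codingfeline.buildkonfig")
}

// Read the key from local.properties so it never gets committed.
val localProps = Properties().apply {
    val file = rootProject.file("local.properties")
    if (file.exists()) file.inputStream().use { load(it) }
}
val mapboxKey: String = localProps.getProperty("MAPBOX_API_KEY") ?: ""

buildkonfig {
    packageName = "com.cuhacking.atlas.common" // illustrative package name
    defaultConfigs {
        buildConfigField(FieldSpec.Type.STRING, "MAPBOX_API_KEY", mapboxKey)
    }
}
```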
Status: Issue closed |
MicrosoftDocs/azure-docs | 642155555 | Title: Hive / Hive Metastore version is not listed
Question:
username_0: [Enter feedback here]
Hi, can we include Hive and Hive Metastore versions as well?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 13a63d7b-aac1-ac22-cee8-e2873407c251
* Version Independent ID: cb2b1a56-5313-4bad-1854-72e8525583e3
* Content: [Release notes for Azure HDInsight](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-release-notes)
* Content Source: [articles/hdinsight/hdinsight-release-notes.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/hdinsight/hdinsight-release-notes.md)
* Service: **hdinsight**
* GitHub Login: @username_2
* Microsoft Alias: **hrasheed**
Answers:
username_1: @username_0 Thanks for the feedback! We are currently investigating and will update you shortly.
username_2: Hello @username_0 [this page](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#apache-components-available-with-different-hdinsight-versions) does mention the current version of Hive. If you think that the component versioning page should document versions of more components, please reach out to Yanan Cai (alias yanacai).
username_3: #please-close
Status: Issue closed
|
cf-convention/cf-conventions | 225547069 | Title: Typos in Appendix H
Question:
username_0: <NAME> noticed some anomalies in Appendix H, section H.6.3. He saw them in version 1.7, but they have been around for some time. I will fix them. Here is his email on the cf-metadata list:
These appear to be typos near the end of the current 1.7 draft:
1. 'projectory' should be 'trajectory'
2. omit 'section = 3;'
3. 'section:standard_namecf_role = "trajectory_id" ;' should be 'cf_role = "trajectory_id" ;'
Answers:
username_1: just adding myself to this thread...
username_0: The typo corrections have been merged into the master branch and will be part of the CF Conventions version 1.7.
Status: Issue closed
|
facebook/react-native | 117404762 | Title: Run babel transformation before finding dependencies
Question:
username_0: Right now the React Native packager builds the dependency tree and then runs the Babel transformation. This can cause issues if you have Babel plugins that manipulate the path of required modules (e.g. resolve ~/my_module to your root directory). Is it possible to run Babel first and then build the dependency tree?
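For context, a `.babelrc` using a path-rewriting plugin such as babel-plugin-module-resolver might look like the sketch below; the plugin choice and the alias mapping are illustrative, not taken from this issue:
```json
{
  "plugins": [
    ["module-resolver", {
      "root": ["./"],
      "alias": { "~": "./" }
    }]
  ]
}
```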
Answers:
username_1: Some plugins like https://github.com/codemix/babel-plugin-macros also need this.
username_2: This would probably be a substantial change to the packager / Haste -- @username_3 is this something you think could be done in a few days or is it too big of a project to invest time in?
username_3: @username_4 was planning to work on this when he gets some free bandwidth :)
username_4: I am working on it. Should be in master this week
username_5: Hey @username_4,
This would be really useful for me as well. Do you have an idea when it's likely to get in?
Cheers!
username_4: I hope it gets in tomorrow. I have it mostly working. It was an awful lot of work.
username_5: Awesome news. Thanks David.
Status: Issue closed
|
hzcsTeam/fund | 281960391 | Title: Front end - donation details, donation list, personal center
Question:
username_0: # 捐款数据没变化


<issue_closed>
Status: Issue closed |
scalameta/scalafmt | 213642841 | Title: don't align across argument lists
Question:
username_0: This template is a guideline, not a strict requirement.
- **Version**: dbd369c11bce6d2b8dca855c8a807bc3967b0f91 (> 0.6.2)
- **Integration**: library
- **Configuration**:
```scala
val c = ScalafmtConfig.default
c.copy(
maxColumn = 110,
spaces = c.spaces.copy(
inImportCurlyBraces = true,
inParentheses = true
),
continuationIndent = ContinuationIndent(2, 2),
align = c.align.copy(
tokens = AlignToken.default ++ Set(
AlignToken(":", "Param")
)))
```
## Problem
Scalafmt formats code like this:
```scala
def discoverStaticMethodForced[Arg, Result](
cls: Class[_],
name: String
)( implicit Result: ClassTag[Result], Arg: ClassTag[Arg] ): StaticMethod[Arg, Result] = {
```
## Expectation
I would like the formatted output to look like this:
```scala
def discoverStaticMethodForced[Arg, Result](
cls: Class[_],
name: String
)( implicit Result: ClassTag[Result], Arg: ClassTag[Arg] ): StaticMethod[Arg, Result] = {
``` |
breedx2/cuspid | 119310740 | Title: Add support for HTML5 playing video
Question:
username_0: The texture should also be able to support rendering of HTML5 video.
Three.js Image docs refer to this: http://threejs.org/docs/#Reference/Textures/Texture
Answers:
username_1: Looks like there's a `VideoTexture` class. Live example is here: http://threejs.org/examples/webgl_video_panorama_equirectangular.html
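A rough sketch of how that could be wired up (assuming the global `THREE` from three.js is loaded, as in the linked example; the video path is hypothetical):
```js
// Drive an HTML5 <video> element and wrap it in a VideoTexture
const video = document.createElement('video');
video.src = 'media/clip.webm'; // hypothetical asset path
video.loop = true;
video.muted = true;
video.play();

const texture = new THREE.VideoTexture(video);
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;

// Use it like any other texture, e.g. on the quad's material
const material = new THREE.MeshBasicMaterial({ map: texture });
```
The texture pulls new frames from the playing video automatically, so the rest of the render loop stays unchanged.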
Status: Issue closed
username_0: Now supported! |
deeplearning4j/deeplearning4j | 341494113 | Title: ND4J CPU throws IllegalStateException not OutOfMemoryError on OOM, DL4J CrashReportingUtil doesn't catch
Question:
username_0: It should be (or at least extend) OutOfMemoryError.
This matters here, for example - this means that OOM crash reporting in DL4J currently doesn't work for CPU:
https://github.com/deeplearning4j/deeplearning4j/blob/385353ac245ca974beacd295b31f374aaf0668cc/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/multilayer/MultiLayerNetwork.java#L3186-L3188<issue_closed>
Status: Issue closed |
umijs/qiankun | 627730992 | Title: qiankun 2.0: when a sub-project fails to load, switching to a sub-app that redirects leaves the loading state abnormal
Question:
username_0: ## What happens?
In qiankun 2.0, when a sub-project fails to load and you then switch to a sub-app that redirects, the loading state becomes abnormal.
I use qiankun 2.0's loader to get the loading state:
`const loader = loading => render({ loading });`
If the previous sub-app failed to load, switching to a sub-app that redirects leaves the loading state incorrect.

## Mini Showcase Repository(REQUIRED)
Reproducible with a slight modification of the official example:
add redirect code to src/main.js of the vue sub-app.
## How To Reproduce
**Steps to reproduce the behavior:** 1. 2.
**Expected behavior** 1. 2.
## Context
- **qiankun Version**:
- **Platform Version**:
- **Browser Version**:
Status: Issue closed
Answers:
username_1: This is a single-spa bug; it has already been fixed: https://github.com/single-spa/single-spa/pull/638
microsoft/fastformers | 843388417 | Title: Add possibility to fine-tune on other tasks
Question:
username_0: # 🚀 Feature request
Hi,
Currently this repo only supports fine-tuning for SuperGlue tasks, am I right? Are you going to enable fine-tuning for other tasks as, for example, a generic sequence classification problem?
## Motivation
I believe that fine-tuning on SuperGlue tasks only strongly limits the applicability of Fastformers
Answers:
username_1: Thanks for the comment, @username_0!
The main purpose of this repository is to demonstrate the models from the FastFormers paper.
At the moment, we are not planning to expand the scope.
Status: Issue closed
|
openshift/origin | 287060575 | Title: Upgrade from 1.5.1 to v3.6 fails with "Storage backend in /etc/origin/master/master-config.yaml must be set to 'etcd3'"
Question:
username_0: I'm trying to upgrade to openshift origin v3.6 from openshift origin v1.5.1.
##### Version
oc v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://t1.mdevlab.com:8443
openshift v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
##### Steps To Reproduce
Run:
ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml
##### Current Result
TASK [fail] *****************************************************************************************************************************************
fatal: [t1devocm3.mdevlab.com]: FAILED! => {"changed": false, "msg": "Storage backend in /etc/origin/master/master-config.yaml must be set to 'etcd3' before the upgrade can continue"}
fatal: [t1devocm1.mdevlab.com]: FAILED! => {"changed": false, "msg": "Storage backend in /etc/origin/master/master-config.yaml must be set to 'etcd3' before the upgrade can continue"}
fatal: [t1devocm2.mdevlab.com]: FAILED! => {"changed": false, "msg": "Storage backend in /etc/origin/master/master-config.yaml must be set to 'etcd3' before the upgrade can continue"}
to retry, use: --limit @/root/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.retry
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=11 changed=0 unreachable=0 failed=0
t1devocm1.mdevlab.com : ok=28 changed=0 unreachable=0 failed=1
t1devocm2.mdevlab.com : ok=24 changed=0 unreachable=0 failed=1
t1devocm3.mdevlab.com : ok=24 changed=0 unreachable=0 failed=1
t1devocn1.mdevlab.com : ok=21 changed=0 unreachable=0 failed=0
t1devocn2.mdevlab.com : ok=21 changed=0 unreachable=0 failed=0
Failure summary:
1. Hosts: t1devocm1.mdevlab.com, t1devocm2.mdevlab.com, t1devocm3.mdevlab.com
Play: Verify master processes
Task: fail
Message: Storage backend in /etc/origin/master/master-config.yaml must be set to 'etcd3' before the upgrade can continue
##### Expected Result
Suceed
##### Additional Information
Answers:
username_1: Looks like an installer issue that should be reported at https://github.com/openshift/openshift-ansible
CC @sdodson
Status: Issue closed
|
alfredtorres/3DFacePointCloudNet | 612523975 | Title: Question about the uploaded pretrained checkpoint
Question:
username_0: Thanks for your good work!
I tried to use your pretrained model in train_cls.py; however, when I load the checkpoint I get a low accuracy during validation. Is there something wrong with the .pth file?
I get such results:
Test set: Average loss: 10.843366970735438, Accuracy: (0.001176470541395247)
Answers:
username_1: Could you tell me which dataset you validated the model on?
username_2: I tried to use your pretrained model in train_triplet to test NN0 vs NNx on the Bosphorus database without fine-tuning. I processed the Bosphorus database like this: the viewpoint in PCL is (0, 0, 0), then I orient the generated normals toward the positive z-axis. Using your pretrained model on NN0 vs NNx I get: top1 = 40.72%, auc = 83.04%, tpr = 0.0464, fpr = 0.0010. In contrast, without loading the pretrained model, the result is: top1 = 31.96%, auc = 74.64%, tpr = 0.1443, fpr = 0.0014.
In addition, I tried to use your pretrained model in train_cls to test the database generated by GPMM with num_class=1000. The difference is that I didn't orient the generated normals toward the positive z-axis for this database. I get the same result username_0 describes.
I think orienting the generated normals toward the positive z-axis is not the key issue. Could you tell me what I'm doing wrong?
Thank you very much! |
ITEA3-Measure/MeasurePlatform | 258030976 | Title: Installing elastic search on an older linux version
Question:
username_0: By default, elastic search requires a 3.5+ Kernel with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER options compiled, or the engine won't start.
To avoid this, edit the config/elasticsearch.yml file, adding the following line (strictly identical):
bootstrap.seccomp: false
Answers:
username_1: Many thanks again @username_0
username_2: NA
Status: Issue closed
|
sailro/Dexer | 464926199 | Title: Find a bug in ReadAnnotationSetRefList
Question:
username_0: https://github.com/username_1/Dexer/blob/master/Dexer/IO/DexReader.cs#L285-L286
```C#
var size = reader.ReadUInt32();
for (uint i = 0; i < size; i++)
{
var offset = reader.ReadUInt32();
result.Add(ReadAnnotationSet(reader, offset));
}
```
there should be
```C#
for (uint i = 0; i < size; i++)
{
	var offset = reader.ReadUInt32();
if (offset == 0)
result.Add(new List<Annotation>(0));
else
		result.Add(ReadAnnotationSet(reader, offset));
}
```
Status: Issue closed
Answers:
username_1: Good catch! Thank you |
jlippold/tweakCompatible | 594397340 | Title: `WiCellSwitcher` working on iOS 13.3.1 pop
Question:
username_0: ```
{
"packageId": "com.brunonfl.wicellswitcher",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.brunonfl.wicellswitcher",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/com.brunonfl.wicellswitcher/",
"iOSVersion": "13.3.1",
"packageVersionIndexed": true,
"packageName": "WiCellSwitcher",
"category": "Tweaks",
"repository": "BigBoss",
"name": "WiCellSwitcher",
"installed": "1.1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.brunonfl.wicellswitcher",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Switch between Wi-Fi and Cellular the Smart Way!",
"latest": "1.1",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
Estimote/iOS-Proximity-SDK | 315390488 | Title: Xcode 9.3 with EstimoteProximitySDK (0.12.0) error Thread 1: EXC_BAD_ACCESS (code=1, address=0x10)
Question:
username_0: Running but after a while app is crashing
<img width="1154" alt="screen shot 2018-04-18 at 4 54 31 pm" src="https://user-images.githubusercontent.com/3851075/38921850-4f6adf4c-4329-11e8-800a-49e0e1ea9da9.png">
<img width="497" alt="screen shot 2018-04-18 at 4 56 49 pm" src="https://user-images.githubusercontent.com/3851075/38921933-869237cc-4329-11e8-9efd-52356bac36cf.png">
Answers:
username_1: +1
username_2: @username_0 I'm getting the same crash caused by ` https://github.com/Estimote/iOS-Proximity-SDK/blob/a6a84f820ded5a4211499d0c1fb58dd232a0dca3/EstimoteProximitySDK/EstimoteProximitySDK.framework/PrivateHeaders/EPXTimerAnalyticsHeartbeatGenerator.h `
username_3: I've just run into the same problem, and I believe this happens when your ProximityObserver gets released/deallocated without you calling `stopObservingZones` first.
username_4: I'm also facing the same issue. Any solution?
username_2: @username_4 make sure your not calling observe zones one then once, some how in my code i was calling it twice, that fixed the error for me.
username_4: Thanks @username_2
Can I call **startObservingZones** again after calling **stopObservingZones** ?
username_2: @username_4 why would you want to stop it? but yeah I guess you can.
username_5: please confirm with https://github.com/Estimote/iOS-Proximity-SDK/releases/tag/v0.13.1 that you don't have this issue
Status: Issue closed
|
fullcontent/sistemacelic | 1100296189 | Title: Error with linked pending items
Question:
username_0: Good morning @username_1
I am unable to link more than one service to a pending item; details below:
Screen 01: Starting the link to work order (O.S.) 066.

- [ ] Screen 02: **Error 01**
After clicking Add to link O.S. 066, the field goes blank; there's the **first error.**

Screen 03:
Starting the link to the second O.S., 069.

- [ ] Screen 04: **Error 02**
After clicking Add to link the second O.S. (069), the field also goes blank.

- [ ] Screen 05: **Error 03**
After saving the pending item and going back to it, only the last link made (069) shows as linked; the first one made in screens 01 and 02 (O.S. 066) does not appear.

- [ ] Screen 06: **Error 04**
Only after saving can I link a second service. Note that after clicking the Add button the linked-service field stays blank; only after saving and returning to the pending-item screen is it possible to confirm the link (screens 07 and 08). It wasn't like this before; I used to be able to link several at once.

Screen 07:

Screen 08:

Answers:
username_1: @username_0
I repeated the test here and at first did not find the error.
Could you please send a video explaining it in more detail?
Status: Issue closed
username_0: @username_1
Now that I understand the logic: I had thought that clicking Add would move the field with the O.S. I created down the list, but now I understand it doesn't; when I click Add I am not adding a new O.S. link, but a new field. It's clear now, my mistake. Ticket closed.
elastic/elasticsearch-net | 49878317 | Title: Prevent indexing from changing the mapping
Question:
username_0: I'm seeing some data in my index that belongs to some property in the object I'm indexing but I didn't define in my mapping. Here's a small repro where I see this unexpected behavior:
```csharp
using System;
using System.Collections.Generic;
using Nest;
using Nest.DSL.Visitor;
namespace NESTMapping
{
public class UnwantedObject
{
public int DoNotWant { get; set; }
}
public class SomeObject
{
public UnwantedObject Unwanted { get; set; }
public string Name;
}
class MappingVisitor: IMappingVisitor
{
private readonly int id;
public MappingVisitor(int id)
{
this.id = id;
}
public int Depth {
get { throw new NotImplementedException(); }
set { throw new NotImplementedException(); }
}
public void Visit(AttachmentMapping mapping) {
throw new NotImplementedException();
}
public void Visit(GeoShapeMapping mapping)
{
throw new NotImplementedException();
}
public void Visit(GeoPointMapping mapping)
{
throw new NotImplementedException();
}
public void Visit(IPMapping mapping)
{
throw new NotImplementedException();
}
public void Visit(MultiFieldMapping mapping)
{
throw new NotImplementedException();
}
[Truncated]
}
}
}
```
This throws `Object mapping detected 2`, i.e. there is no object mapping when I define the mapping, but the mapping is then modified when I index an object.
I can see this too when requesting the mapping directly from Elasticsearch, e.g. after defining the mapping but before indexing, http://localhost:9200/someindex/_mapping shows this:
```
{"someindex":{"mappings":{"someindex":{"properties":{"name":{"type":"string"}}}}}}
```
And after indexing http://localhost:9200/someindex/_mapping is:
```
{"someindex":{"mappings":{"someindex":{"properties":{"name":{"type":"string"},"unwanted":{"properties":{"doNotWant":{"type":"long"}}}}}}}}
```
Is there any way to tell NEST/Elasticsearch to never ever modify mappings, i.e. only send to the server what has been defined in the mapping?
Answers:
username_1: Can we get a visitor sample in future docs :)
username_2: @username_1 we have some docs for 2.x here https://www.elastic.co/guide/en/elasticsearch/client/net-api/2.x/auto-map.html#applying-conventions-through-the-visitor-pattern
For 1.x, check out the unit tests https://github.com/elastic/elasticsearch-net/blob/1.x/src/Tests/Nest.Tests.Integration/Mapping/MappingVisitorTests.cs |
nipy/nibabel | 489987819 | Title: BUG: Simple save-load round-trip emits warning
Question:
username_0: /home/username_0/python/nibabel/nibabel/analyze.py:1012: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
data = self.get_data()
```
This suggests that there is some design issue in this `nib.save` traceback, which leads up to the warning emission:
```
../nibabel/nibabel/loadsave.py:98: in save
img.to_filename(filename)
../nibabel/nibabel/filebasedimages.py:334: in to_filename
self.to_file_map()
../nibabel/nibabel/analyze.py:1012: in to_file_map
data = self.get_data()
```
Answers:
username_1: Yup, looks like we'll need to purge internal uses of `get_data()`. Should probably have done that in #794.
Status: Issue closed
|
MicrosoftDocs/azure-docs | 683254006 | Title: missing unsupported scenario | UEFI boot - Secure Boot feature
Question:
username_0: Page is missing following unsupported scenario:
UEFI boot - secure boot
I created a pull request to fix this:
https://github.com/MicrosoftDocs/azure-docs/pull/61304
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 57139c50-3254-d78a-486c-bea6cbd5885d
* Version Independent ID: 6e1a0226-4bea-f720-08cd-77e6dadac264
* Content: [Support for VMware migration in Azure Migrate - Azure Migrate](https://docs.microsoft.com/en-us/azure/migrate/migrate-support-matrix-vmware-migration)
* Content Source: [articles/migrate/migrate-support-matrix-vmware-migration.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/migrate/migrate-support-matrix-vmware-migration.md)
* Service: **azure-migrate**
* GitHub Login: @rayne-wiselman
* Microsoft Alias: **raynew**
Answers:
username_1: @username_0 Thank you for bringing this to our attention. Greatly appreciate your contribution for doc-enhancement through PR-#61304. Post review by the content owner, the appropriate changes would go live.
Thanks once again for your contribution!
username_1: @rayne-wiselman Any update on this #61304?
username_1: @rayne-wiselman Could you please review the PR and take necessary action to merge it?
Current status:

username_1: @rayne-wiselman Any update on the PR?
Status: Issue closed
username_0: closeing this issue
opening in azure-docs-pr |
cosimoNigro/agnpy | 1159832089 | Title: Incompatibility between astropy 5.0.1 and agnpy 0.1.8 ?
Question:
username_0: Hi,
I found an error when installing `agnpy` from `pip`, which brings in the latest version of `astropy` (v5.0.1).
When importing `agnpy` it raises the following error (see below). I downgraded `astropy` to 4.2 and `agnpy` can be imported correctly.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/opt/conda/lib/python3.8/site-packages/astropy/units/_typing.py in <module>
8 try: # py 3.9+
----> 9 from typing import Annotated
10 except (ImportError, ModuleNotFoundError): # optional dependency
ImportError: cannot import name 'Annotated' from 'typing' (/opt/conda/lib/python3.8/typing.py)
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-1-4024036b86cf> in <module>
----> 1 import agnpy
/opt/conda/lib/python3.8/site-packages/agnpy/__init__.py in <module>
----> 1 from .spectra import *
2 from .emission_regions import *
3 from .targets import *
4 from .synchrotron import *
5 from .compton import *
/opt/conda/lib/python3.8/site-packages/agnpy/spectra/__init__.py in <module>
----> 1 from .spectra import *
/opt/conda/lib/python3.8/site-packages/agnpy/spectra/spectra.py in <module>
1 # module containing the electron spectra
2 import numpy as np
----> 3 import astropy.units as u
4 from ..utils.math import trapz_loglog
5 from ..utils.conversion import mec2
/opt/conda/lib/python3.8/site-packages/astropy/units/__init__.py in <module>
38
39 from .structured import *
---> 40 from .decorators import *
41
42 del bases
/opt/conda/lib/python3.8/site-packages/astropy/units/decorators.py in <module>
11 import numpy as np
12
---> 13 from . import _typing as T
14 from .core import (Unit, UnitBase, UnitsError,
15 add_enabled_equivalencies, dimensionless_unscaled)
/opt/conda/lib/python3.8/site-packages/astropy/units/_typing.py in <module>
16
17 else:
---> 18 from typing_extensions import * # override typing
19
20 HAS_ANNOTATED = Annotated is not NotImplemented
AttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'
```
Answers:
username_1: Thanks @username_0, I will look into it. |
unisonweb/unison | 1055286961 | Title: keyword fragments appearing in doc links are treated as their Unison entities
Question:
username_0: If you're writing a `Doc` link, as in `{{I am a doc which links to a term called {abilityPatterns} in the codebase}}` the UCM will error with
```
I couldn't find a type for Patterns.
9 | {abilityPatterns}
```
Indicating that it's looking for an ability named Patterns.
Likewise with terms that are prefixed with `type` as in `{{ I am a doc which links to a term called [Type signatures](typeSignatures)}}`
The UCM will fail to typecheck with a message about how a type named Signatures could not be found in the codebase.
As long as your keyword-fragment is not at the top level of the codebase, you can work around this by qualifying the term, as in `languageReference.abilityPatterns`, but this may not be ideal in the long run.
Answers:
username_1: Interesting!
username_0: Possibly related: https://github.com/unisonweb/unison/issues/2829 linking to UCM commands in Docs may also be causing tdnr issues.
username_2: This is an issue in the lexer. I can take this if you want, since it's on the codepath I'm already debugging, and should be a super quick fix.
username_0: @username_2 that would be amazing! 🤩 Go for it! It's all you! |
smartinkc/Multilingual | 549697702 | Title: Missing characters when pasting survey questions
Question:
username_0: When pasting question information into the Multilingual tool, I found that a single character is being removed in certain places if the text is preceded by CRLFs. For instance, if you were to paste in text such as:
@p1000lang{"Spanish":"<font color = navy> <i> Test.<br>
La test2.<br>
Asegúrese test3."}
The "L" and the "A" were both captured on the second and third lines due to the code on line 175 of the multilingual_setup.js file,
`questions = JSON.parse(tags[id].replace('p1000lang','').replace(/\n([^@])/g, "<br>"));`
Changing the code to match the other instances with the replace having the "<br>$1" as on lines 161 and 275 will resolve the issue, however, I just want to make sure that this code is not trying to prevent something that I have not encountered yet.
Additionally, if the intent is to remove new line characters shouldn't the regex be greedy,
`.replace(/\n+([^@])/g, "<br>$1"));`
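For illustration, the difference on a simplified version of the text above (plain JavaScript):
```js
const text = "Test.\nLa test2.\nAsegúrese test3.";
// Current code: the character captured after each newline ("L", "A") is dropped
text.replace(/\n([^@])/g, "<br>");   // "Test.<br>a test2.<br>segúrese test3."
// With the back-reference it is preserved
text.replace(/\n([^@])/g, "<br>$1"); // "Test.<br>La test2.<br>Asegúrese test3."
```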
Answers:
username_1: I'm not 100% sure if it's on purpose or not. I will take a look when I have a minute. I do know that if you put a <br> tag in the field annotation (with or without the multilingual module enabled), REDCap replaces it with a carriage return. It's possible that it has something to do with the data dictionary, so if you do test it, take a look at the data dictionary after your changes and see if it gets uploaded or not. I'll try and take a look this week.
username_2: I have the same issue. When saving "Hello<br><br>world" as my first question, then I go back editing this field, it shows "Hello<br><br>orld". Looks like $1 is fixing the issue, but I don't know why.
 |
antlr/antlr4 | 299278393 | Title: Parser throws error in IE 11
Question:
username_0: The ANTLR 4 generated parser (JavaScript target) throws an error in IE 11 if it is minified with UglifyJsPlugin in webpack.
If I remove the UglifyJsPlugin plugin, it works as expected. I was wondering what could be the issue.
The parser is failing in the checkUUID() method:
`(G.indexOf(t)<0)
{
throw"Could not deserialize ATN with UUID: "+t+" (expected "+J+" or a legacy UUID).",t,J
}`
any suggestions?
Answers:
username_1: Does it work in other browsers ?
« +t+ » is definitely not a UUID so you might need to exclude the serialized ATN from uglification
Sent from my iPhone
username_0: The uglified code works in other browsers... it even works in IE Edge.
username_0: `G.prototype.checkUUID = function () {
var t = this.readUUID();
if (X.indexOf(t) < 0) throw "59627784-3BE5-417A-B9EB-8131A7286089";
this.uuid = t
}`

username_1: you might want to try throwing an error instead of a string in the source code and see if it improves
Status: Issue closed
username_0: Found the fix:
new UglifyJsPlugin({
uglifyOptions: {
output: {
ascii_only: true
}
}
}),
username_2: `<meta charset="utf-8">` also fixes the issue
akveo/nebular | 945521841 | Title: nb-date-timepicker event to listen to??
Question:
username_0: ### Issue type
**I'm submitting a ...** (check one with "x")
* [ ] bug report
* [ ] feature request
### Issue description
**Current behavior:**
Can somebody please provide some info on what event is fired when a selection is made for the nb-date-timepicker component?
Unless I have missed it, I have looked at the documentation and could not find it.
**Expected behavior:**
A pointer or example is greatly appreciated.
**Steps to reproduce:**
I am currently using it as
*.component.html
```
<----- Working Range picker example -------------------->
<input nbInput placeholder="Filter by date" [nbDatepicker]="msgrangepicker" fullWidth fieldSize="small">
<nb-rangepicker #msgrangepicker (rangeChange)="msgDateChange($event)" format="MM/DD/YYYY" ></nb-rangepicker>
<----- what should be the syntax here for DateTime picker -------------------->
<input nbInput placeholder="Filter by date" [nbDatepicker]="msgdatetimepicker" fullWidth fieldSize="small">
<nb-date-timepicker #msgdatetimepicker (????.....)="msgDateTimeChange($event)" format="MM/DD/YYYY HH:mm:ss" ></nb-date-timepicker>
```
*.component.ts
```
msgDateChange($event) {
console.log($event);
};
msgDateTimeChange($event) {
console.log($event);
}
```
app.module.ts
```
imports: [
NbDatepickerModule.forRoot(),
NbTimepickerModule.forRoot(),
```
### Other information:
```
Angular 11
Nebular 7.0.0
```<issue_closed>
Status: Issue closed |
predictive-technology-laboratory/sensus | 336937747 | Title: Fix System.NullReferenceException in 0xf7000 + 30895840
Question:
username_0: ### Version 15.6.0(1528347784) ###
### Stacktrace ###
### Reason ###
System.NullReferenceException
### Link to App Center ###
* [https://appcenter.ms/orgs/uva-predictive-technology-lab/apps/sensus-iOS-1/crashes/groups/30f77e3ade36d4e23b9575618c0a4f9d1f660c14](https://appcenter.ms/orgs/uva-predictive-technology-lab/apps/sensus-iOS-1/crashes/groups/30f77e3ade36d4e23b9575618c0a4f9d1f660c14)<issue_closed>
Status: Issue closed |
hugsy/gef | 189078187 | Title: Massive documentation (re-)work
Question:
username_0: With the many requests I get regarding `gef` installation and usage, I need to write more documentation.
Idea list:
* [ ] Video tutorials for install __and__ command use
* [ ] Exhaustive description of commands and options
* [ ] Move away from Markdown for ReadTheDocs to RsT (??)
* [ ] Write-up a quick FAQ for frequent install error/warning messages (ex: Python support not compiled, wrong Python version, etc.)
Answers:
username_0: Add a PEDA compatibility page to show how to find an equivalent for the commands in PEDA
username_1: @username_0 I just want to say thanks for the work you have spent on the documentation, and most importantly, to the project itself! I migrated from pwndbg to gef because of the documentation and the interface looks much more slick & beautiful. Thanks! :smile:
username_0: Related to #161 : make the doc clearer that GEF needs a locale of `en_*.UTF-8` (not just UTF-8)
username_0: Tutorial screencasts will be publicly released after Black Hat Arsenal.
Status: Issue closed
|
jonathanwalkerscripts/OppChasers | 827083295 | Title: issue
Question:
username_0: Environment (Devices, Browser, User type)
Device: macOS Catalina - Version 10.15.7
Browser: Google Chrome Version 88.0.4324.192 (Official Build) (x86_64)
User type: User
Description
This is where you will type a short description of the issue. It is a general summary of issue, and should include any important details not listed elsewhere in the report.
Expected Behavior
This is where you will enter what you expected to happen before the issue occurred. It helps give context to the mindset of the tester/user, and how they believed the application would behave.
Actual Behavior
This is where you will enter a short summary of what actually happened when you executed your test. This should be some sort of 'Unexpected Behavior', which is different from the 'Expected Behavior' listed in the previous section.
Steps to Reproduce Issue
This is where you will give step-by-step instructions on how to make this bug happen. It's important to be descriptive, and make sure you don't leave out any steps. Somebody that has never used the application should be able to follow your steps and reproduce this issue.
Navigate to (Website Name) at (Website Link)
Log in using your Company credentials provided by @jonathanwalkerscripts
Click the green button on the far right side of the Navbar to open the 'Edit Listing' page for a listing you've already created
In the job description field, type a description using punctuation such as quotations " periods . commas , and slashes /
Click the green 'Save' button to save the listing you just edited
The page should refresh and display the listing, including the changes you've made
Notice that the saved text has additional slashes not included by the user
Screenshots
SlashesBeforeOnward
SlashesAfterOnward
Notes
The screenshots show the text before and after saving (additional slashes in after). Team confirmed the bug occurs on both Windows and Mac OS.
@jonathanwalkerscripts jonathanwalkerscripts added the bug label 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts self-assigned this 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts added good first issue help wanted labels 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts added this to In progress in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to To do in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from To do to In progress in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to To do in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts added the wontfix label 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from To do to Blocked in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from Blocked to To do in Opp Chasers Initial Testing Run 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts added documentation and removed wontfix labels 5 days ago
@jonathanwalkerscripts jonathanwalkerscripts assigned brianifoster 5 days ago
@brianifoster brianifoster mentioned this issue 5 days ago
Issue # 2 #30
Open
0 of 14 tasks complete
@jonathanwalkerscripts jonathanwalkerscripts moved this from To do to In progress in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to Blocked in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from Blocked to In progress in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to Blocked in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from Blocked to In progress in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to Blocked in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from Blocked to In progress in Opp Chasers Initial Testing Run 2 hours ago
@jonathanwalkerscripts jonathanwalkerscripts moved this from In progress to To do in Opp Chasers Initial Testing Run 2 hours ago
[Truncated]
flathub/org.qbittorrent.qBittorrent | 314553975 | Title: Magnet Link clicks on sites/downloaded Torrent files(on double-click) ... not triggering torrent add window !!!
Question:
username_0: Hi,
I have moved to a flatpak version of qBittorrent ... ever since, the magnet links and torrent files aren't working ... ideally, clicking a magnet link or double-clicking a torrent file should add the downloads to qBittorrent.
When one clicks a magnet link or double-clicks a torrent file, the add-download window should be triggered ... but it isn't happening ... I am not sure if it's an issue with the flatpak system or a problem with the qBittorrent flatpak version!
thanks.
Answers:
username_0: does this mean the flatpak environment does not support this basic qBittorrent feature?
username_0: reported in April ... no word yet?
username_1: Works for me.
Did you associate the app with opening magnet links and torrent files?
Here's how make flatpakked qbittorrent to open magnet urls:
`xdg-settings set default-url-scheme-handler magnet org.qbittorrent.qBittorrent.desktop`
And here's how to make flatpakked qbittorrent to open .torrent files:
`xdg-mime default org.qbittorrent.qBittorrent.desktop application/x-bittorrent`
username_0: @username_1 ... sorry didn't seem to work for me ...
the former command gave the error ... `xdg-settings: unknown desktop environment`
the latter command ran without error ... but it just wouldn't open the application when a downloaded .torrent file was double-clicked.
username_1: what is your DE/WM?
username_0: `export DE=gnome`
what does this command do ... does it just spoof 'gnome' as my DE, or does it really switch to gnome?
username_1: It just changes the env variable that gets read by xdg-open. I looked at similar bug reports and found that it might help.
Status: Issue closed
username_0: nope ... no luck ... same response even after the export command ...
username_2: Works here (Kubuntu 20.04, Plasma 5.18.4, Firefox 75), must be a problem with Deepin.
username_3: I have the same problem as @username_0
qBittorrent flatpac used to work fine
OS is ClearLinux, DE is gnome.
xdg-settings set default-url-scheme-handler magnet
and
xdg-mime default org.qbittorrent.qBittorrent.desktop application/x-bittorrent
Have no errors and no other response.
Browser suggests opening with "system handler", and I cannot select the app proper. (browser is flathub firefox)
Any help/thoughts? |
iPazooki/NopCommerce-PersianCalendar | 474108733 | Title: datepicker box quickly hiding problem in nopCommerce 3.9
Question:
username_0: hi, thanks for your repo
I have this problem in version 3.9:
when I click the picker button, the date-picking box fades away quickly, so I can't pick a date.
I used the original kendo.web.min.js library and it worked fine with the Gregorian calendar.
I will be grateful if you could help me with this.
Answers:
username_1: Hi @username_0
I'm terribly sorry, this plugin is just for v3.8 and I have not tested it for v3.9
I'm afraid to say I don't have time to create a new version |
tensorflow/tensorflow | 169486438 | Title: GTX 1070 source install bazel build -- Unsupported gpu architecture 'compute_61'
Question:
username_0: Here are my specs:
- Ubuntu 15.10 (potential conflict with CUDA 7.5?)
- NVIDIA GTX 1070 (compute capability 6.1)
- CUDA 7.5
- Cudnn v5.0
Error message:
```
ERROR: /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/core/kernels/BUILD:1496:1: error while parsing .d file: /home/volcart/.cache/bazel/_bazel_root/109ad80a732aaece8a87d1e3693889e7/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/core/kernels/_objs/batchtospace_op_gpu/tensorflow/core/kernels/batchtospace_op_gpu.cu.d (No such file or directory).
nvcc warning : option '--relaxed-constexpr' has been deprecated and replaced by option '--expt-relaxed-constexpr'.
nvcc fatal : Unsupported gpu architecture 'compute_61'
Target //tensorflow/cc:tutorials_example_trainer failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 86.269s, Critical Path: 84.86s
```
It is clearly complaining about the compute capability. Is a compute capability of 6.1 supported? Surely.
I am installing from source.
Some dependency versions:
```
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow_src$ bazel version
Extracting Bazel installation...
.
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Jan 01 00:00:00 1970 (0)
Build timestamp: Thu Jan 01 00:00:00 1970 (0)
Build timestamp as int: 0
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow_src$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.9/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.9.3-5ubuntu1' --with-bugurl=file:///usr/share/doc/gcc-4.9/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.9 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.9 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.9-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.9-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.9.3 (Ubuntu 4.9.3-5ubuntu1)
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow_src$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17
```
When I execute `sudo ./configure` I have tried different varients. Namely, the default for all options -- I only have one version of CUDA and Cudnn installed.
I've also tried the following confirguration -- to the same end...
```
volcart@volcart-Precision-Tower-7910:/usr/local/lib/python2.7/dist-packages/tensorflow_src$ sudo ./configure
Please specify the location of python. [Default is /usr/bin/python]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with GPU support? [y/N] y
[Truncated]
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 5
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.1
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Setting up CUPTI include
Setting up CUPTI lib64
Configuration finished
```
I noticed CUDA 7.5 only supports Ubuntu 15.04 -- could this be a problem since I'm on Ubuntu 15.10?
Answers:
username_0: I should note that a virtual_env install works totally fine, so maybe this isn't an issue?
username_1: CUDA 7.5 doesn't support Pascal cards, so you'll need CUDA 8.0
TensorFlow doesn't support CUDA 8.0, but you can make it work with extra
tweaks like in https://github.com/tensorflow/tensorflow/issues/3431
Also CUDA 8.0 doesn't support Ubuntu 15.10, so you may need to upgrade to
16.04, or fix a bunch of compilation errors by hand during install
username_0: Do you know if the nightly wheels support CUDA 8.0?
username_1: They do not
username_0: For verification, when I add
`cxx_builtin_include_directory: "/usr/local/cuda-8.0/include"`
to
`third_party/gpus/crosstool/CROSSTOOL`
Am I adding it to the first line of the `toolchain {...` block
I'm having [this](http://stackoverflow.com/questions/38794497/tensorflow-bazel-build-cuda-8-0-gtx-1070-fails-w-gcc-error) issue now.
Status: Issue closed
username_2: Our wheel files now come built for cuda 8.0 and cudnn 5.1
I think this issue is now obsolete, thus closing. |
lay126/fuding_wewe | 102619860 | Title: SERVER : GET_PROFILE, summary of request and response values
Question:
username_0: # Request URL

Answers:
username_0: # Request URL

username_0: # Full response payload

username_0: # Determining whether it is my profile
['me_flag'] = "yes"
['me_flag'] = "no"
# Determining whether I am following
dic['follow_flag'] = "no"
dic['follow_flag'] = "yes"
username_1: user_followers: the number of people following me
user_followings: the number of people I am following
Status: Issue closed
|
marcglasberg/i18n_extension | 558729905 | Title: Feature Request: Integration with Flutter Localizations
Question:
username_0: Flutter localizations are still required to translate Flutter widgets. It would be great to have this integration realized within i18n.
Please have a look at the example app from https://github.com/djarjo/flutter_input
I was able to combine changing both localizations with `InputLanguage`.
But it required to have _I18n_ as topmost widget and even one more instance variable of `Locale`. Maybe there is a much more elegant solution by providing something like `I18n.locale` to `MatterialApp `even if `I18n `is a child of `MaterialApp`?
Answers:
username_1: I'm not sure I understand what you want to do, sorry. Could you be more clear, and maybe provide code examples here?
username_1: I will close this, since you did not reply so far, and I can't understand what you want. Please feel free to reopen it if you have more info to help me. Thanks.
Status: Issue closed
|
acidanthera/bugtracker | 595470285 | Title: No function buttons after sleep
Question:
username_0: Hello.
From the very beginning of using VoodooPS2 I have had a problem with the function buttons after sleep. When I start my PC all buttons work perfectly, but only until the computer falls asleep. After waking up, macOS doesn't detect any input from the function keys (I checked it in Karabiner), but I don't have any problems with the F1-F12 buttons.
Now I'm using the latest release of VoodooPS2 and VoodooInput on OpenCore, but I had this same issue on the previous release from ca. November on Clover.<issue_closed>
Status: Issue closed |
pandas-dev/pandas | 244702592 | Title: read_json with lines=True not using buff/cache memory
Question:
username_0: I have a 3.2 GB json file that I am trying to read into pandas using pd.read_json(lines=True). When I run that, I get a MemoryError, even though my system has >12GB of available memory. This is Pandas version 0.20.2.
I'm on Ubuntu, and the `free` command shows >12GB of "Available" memory, most of which is "buff/cache".
I'm able to read the file into a dataframe by iterating over the file like so:
```python
import itertools
from io import StringIO

import pandas as pd

dfs = []
with open(fp, 'r') as f:
    while True:
        lines = list(itertools.islice(f, 1000))
        if lines:
            lines_str = ''.join(lines)
            dfs.append(pd.read_json(StringIO(lines_str), lines=True))
        else:
            break
df = pd.concat(dfs)
```
You'll notice that at the end of this I have the original data in memory **twice** (in the list and in the final df), but no problems.
It seems that `pd.read_json` with `lines=True` doesn't use the available memory, which looks to me like a bug.
Answers:
username_1: @username_0 : that behavior does sound buggy to me, but before I label it as such, could you provide a minimal reproducible example for us?
username_0: Happy to, but what exactly would constitute an example here? I can provide an example json file, but how would you suggest I reproduce the memory capacity and allocation on my machine?
username_1: Just provide the smallest possible JSON file that causes this `MemoryError` to occur.
username_2: The ``lines=True`` impl is currently not designed this way. If you substitute your solution into the current impl, does it pass the test suite?
username_2: cc @aterrel
username_0: @username_1 I'm still not sure exactly what would be most helpful for you here.
I tried doing `head -n 10 path/to/file | testing.py`, where testing.py contains: `df = pd.read_json(sys.stdin, lines=True)`, and then varying how many lines to pass.
**Results:**
Every million lines is about .8 GB, according to `head -n 1000000 path/to/file | wc -c`. And I did these each a few times in varying orders, always the same results.
- 1Million lines: success.
- 1.3M lines: success
- 2M lines: got "Killed" and also killed a `watch` in another terminal window, with message "unable to fork process: Cannot allocate memory"
- 3M lines: got "MemoryError" (I had a `watch` running here too, no problems at all)
- Full file: got "MemoryError"
username_0: @username_2 I think your question was for me, but I don't know how to do what you described. Are there instructions you could point me to?
username_1: Yikes! That's a pretty massive file. That does certainly help us with regards to what we would need to do to reproduce this issue.
username_1: <a href="https://pandas.pydata.org/pandas-docs/stable/contributing.html">Here</a> is the documentation for making contributions to the repository. Essentially @username_2 is asking if you could somehow incorporate your workaround in your issue description into the implementation of `read_json`, which you can find in `pandas/io/json/json.py`.
A quick glance there indicates what might be the issue: we're putting ALL of the lines into a list in memory! Your workaround might be able to address that.
username_0: Thanks! I added it for one of the possible input types. You can see it [here](https://github.com/username_0/pandas/commit/fe1cfbd00a0795c40a4b725ea8e95aaa522705b4). It passes all the existing tests, and I'm now able to use it to load that file.
I think this is *much* slower than the previous implementation, and I don't know whether it can be extended to other input types. We could make it faster by increasing the chunk size or doing fewer `concat`s, but at the cost of more memory usage.
username_1: I think it would make sense to add such a parameter. We have it for `read_csv`. Try adding that and let us know how that works! This looks pretty good so far.
username_0: Using the chunksize param in `read_csv` returns a `TextFileReader`, though, right? Won't that be confusing?
username_1: @username_0 : IMO, it would not because there's more confusion when people try to pass in the same parameters to one `read_*` function that they're used to passing in for another and found out they don't work or don't exist. Thus, you would be doing all `read_json` users a service by adding a similar parameter. 😄
username_0: @username_1 Makes sense. [Here's the latest](https://github.com/username_0/pandas/commit/0a474731fc680bb41e339e328dd687a577118962) with the chunksize param.
I still don't know how to make it work on any of the other filepath_or_buffer branches or really what are the input types that would trigger those. I would need an explanation of what's happening there to extend this.
username_1: Certainly. We accept three types of inputs for `read_json`:
* file-path (this option BTW is not clearly documented, so a PR to make this clearer is welcome!)
* file-object
* valid JSON string
Your contribution would address the first two options. You have at this addressed the first one. The second comes in the conditional that checks if the `filepath_or_buffer` has a `read` method. Thus, you should also add your logic there under that check (we'll handle refactoring later).
username_0: Okay @username_1 , Thanks for your help. I added it to that conditional you mentioned as well. Latest [here](https://github.com/username_0/pandas/commit/68063d82c6b03ff5833ae8c93848e99de8eef444). Passes the tests.
I also changed the behavior so that if chunksize is not explicitly passed, we try to read it all at once. My thinking is that using chunksize changes the performance drastically, and better to let people make this tradeoff explicitly without changing the default behavior.
From here, what are the next steps? There's probably a bit of cleanup you'd like me to do -- let me know. Thanks again!
username_1: @username_0 : Sure thing. Just submit a PR, and we'll be happy to review!
username_0: Here goes: https://github.com/pandas-dev/pandas/pull/17168.
Status: Issue closed
username_3: Hi,
I am experimenting with Json of various sizes.
I am using the Pandas read_json with lines=True and noticing a very large memory footprint in the parsing phase.
Using a chunksize of 10.000 to experiment :
For example :
**Input Json** : 280 Mb **Memory Usage** : up to 2.6 Gb **Resulting Data Frame** : 400 Mb (because of dtypes, not much I can do with this)
**Input Json** : 4Gb **Memory Usage** : up to 28 Gb **Resulting Data Frame** : 6Gb .
It seems the memory necessary to parse the Json is definitely too much ( not sure if there are better ways to read big Json in Pandas).
Furthermore it seems this memory remains allocated to the Python process.
Now I am a Python newbie, so this may be perfectly fine and this memory may just remain for Python in a buffer to be used in case of necessity ( it doesn't grow up when the data frame start getting processed).
But it looks suspicious.
Let me know if you noticed the same and find out any tips or tricks for that!
Thanks in advance
username_0: @username_3
I've definitely experienced some of what you're describing.
First, the read_json function probably uses more memory overall than it needs to. I don't fully know why that is or how to improve it - that probably belongs in a separate issue if it's important to what you're doing.
Second, when lines=True, I think you're right that all the memory isn't actually being used, it's just not being released back to the OS, so it's a bit spurious.
Third, if you read with lines=True and a small chunksize, you should be fine either way.
username_3: hi @username_0 , thank you for the kind answer.
I just noticed that even using simpler approaches such as :
```python
with open(interactions_input_file) as json_file:
    data_lan = []
    for line in json_file:
        data_lan.append(pd.io.json.loads(line))
all_columns = data_lan[0].keys()
print("Size " + str(len(data_lan)))
interactions = pd.DataFrame(columns=all_columns, data=data_lan)
```
I have similar memory outputs.
I will stop the conversation here as it's slightly off topic.
Should I assume that parsing json lines in Python is just that expensive ?
we are talking about 5-7 times more ram than the initial file...
username_4: I'm having a similar experience with this function as well, @username_3. I ended up regenerating my data to use **read_csv** instead, which is using a dramatically smaller amount of ram.
username_3: thanks @username_4 , I have a small update if that helps...
My file was heavily String and Lists based ( each line was a Json object with a lot of Strings and lists of Strings).
For a matter of fact, those Strings were actually Integer ids, so, after I got that information I switched the Strings to Int and Lists of int.
This first of all brought down the size of the Json from 4.5 Gb to 3 Gb and the memory output from 30 GB to 10 GB.
If I end up with stricter memory requirements I will definitely take a look to the csv option.
Thanks!
username_5: The problem still exists. I am loading a 5 GB json file with 16 GB of RAM, but I still get a memory error. The lines=True attribute still does not work as expected.
username_6: If anyone is going to implement a better version of this, it's worth looking at these libraries to do an out-of-memory json read:
https://pypi.org/project/jsonslicer/#description
https://github.com/ICRAR/ijson
I experienced heavy memory issues just opening a JSON file, but these libraries fixed this issue, and added parsing functionality on top of it. Without being too expensive :D
username_7: json.loads for a single item creates about a 50K dictionary for me for each of the 7,000 lines. The resultant dataframe is 38 MB. The memory issue is that 314 MB gets tied up in memory for intermediate processing and stays as a high-water mark. The statement is:
` data_frame = pd.read_json(in_file_name, lines=True) `
username_0: Just to be clear, the [PR](https://github.com/pandas-dev/pandas/pull/17168) that closed this issue did NOT solve the underlying issue of memory usage when reading json. Instead it added a parameter `chunksize` to the `read_json` method which allows you to balance speed vs memory usage for your usecase. See the [docs on line-delimited json](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#line-delimited-json) for more info. |
jackc/pgx | 302099638 | Title: pgtype.CharOID is not correctly formated or inserted
Question:
username_0: Given the following
```
CREATE TABLE "public"."tb" (
"id" Character( 40 ) NOT NULL,
"d" JSONB NOT NULL,
PRIMARY KEY ( "id" ) );
;
```
```
c.batch.Queue("INSERT INTO tb(id, d) VALUES ($1,$2) ON CONFLICT (id) DO NOTHING",
[]interface{}{hash, somejson},
[]pgtype.OID{pgtype.CharOID, pgtype.JSONBOID},
nil,
)
```
hash is a SHA1
Only the first character of the hash is inserted, however switching to `pgtype.VarcharOID` solved the problem.
Answers:
username_1: That is because internal and external names for a few PostgreSQL types are confusing. pgtype.CharOID is a single character.
https://github.com/postgres/postgres/blob/master/src/include/catalog/pg_type.h#L294
pgtype.BPChar is what you want for a fix length multiple character field.
https://github.com/postgres/postgres/blob/master/src/include/catalog/pg_type.h#L502
You might consider changing your table to `varchar(n)`. In general, `varchar(n)` is preferred over `char(n)` in PostgreSQL. The only significant difference is `char(n)` does blank padding.
Actually, it looks like you are storing the hex encoding of a hash. You could store it in half the space if you were to skip the hex encoding and decoding and store the hash as a `bytea` instead.
username_0: Well the id is a sha1 hash which is exactly 40 characters so CHAR(40) is exactly what I need.
Status: Issue closed
|
cypress-io/cypress | 765108303 | Title: 兰州新茶到店品茶学生外围上课
Question:
username_0: 兰州新茶到店▋▋薇87O9.55I8唯一靠谱▋▋欣米河浩呵哪有酒店荤,桑拿服务全套SPA,水疗会所保健,品茶学生工作室洋妞泉州喜播传媒有限公司是一家专注于互动娱乐艺人运营,包装,发展的公司。公司与抖音、斗鱼、快手、、陌陌等平台,与综艺、优酷、爱奇艺、唱片、商演等渠道建立长期合作关系。公司有独特的艺人商业运作模式,立志培养属于自己的艺人。泉州喜播传媒是深圳喜播的兄弟公司,截止年月,两者旗下共有线上签约主播余名,并正以每个月超名的签约速度在递进。年月进驻泉州以来,喜播即将重新定义泉州的娱乐新风潮,带来整套的先进管理模式和先进运营经验。在艺人甄选、艺人培训、艺人包装、艺人发展等为艺人提供全方位的升级服务。泉州喜播兰州新茶到店https://github.com/sjslhs6<issue_closed>
Status: Issue closed |
umijs/hox | 584930088 | Title: How do I store data returned from a request with hox?
Question:
username_0: ### Use case
After the user logs in, store the user info globally.
### Question
Previously I always used `dva`: after fetching the user info, I could use the `payload` in `effects` to store it in `redux` for global use. `hox` can't take parameters, so what should I do to store the data returned by an API?
### Attempt
1. I tried exposing a method in createModel that accepts a parameter:
```js
import { useState } from 'react';
import { createModel } from 'hox';
function useTopMenuModel() {
const [menuList, setMenuList] = useState([]);
const saveMenuList = payload => setMenuList(payload);
return { menuList, saveMenuList };
}
export default createModel(useTopMenuModel);
```
2. Then call it in a page:
```js
import useTopMenuModel from '@/hox/menuModel';
const modelData = useTopMenuModel();
console.log(modelData);
modelData.saveMenuList(menuList); // menuList is the data returned by the API
```
Doing this does store menuList, but it throws an error:

This is my first time using hooks and I couldn't find anything related online; I'd appreciate some guidance, thanks~
Answers:
username_0: The error I reported above goes away if I put the method call inside useEffect:
```js
useEffect(() => {
modelData.saveMenuList(menuList);
}, [menuList]);
```
I referenced this answer: https://github.com/facebook/react/issues/18147#issuecomment-592267650
So is this the way to store global state with hox, or is there a better approach?
username_1: Using it that way is fine.
But it's best to put the requests directly into the hox model.
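A minimal sketch of what that could look like (the `/api/menu` endpoint and the `userId` parameter below are only illustrative):
```js
import { useState, useCallback } from 'react';
import { createModel } from 'hox';

function useTopMenuModel() {
  const [menuList, setMenuList] = useState([]);

  // The parameterized request lives inside the model;
  // components call fetchMenuList(userId) and read menuList.
  const fetchMenuList = useCallback(async userId => {
    const res = await fetch(`/api/menu?user=${userId}`);
    setMenuList(await res.json());
  }, []);

  return { menuList, fetchMenuList };
}

export default createModel(useTopMenuModel);
```
The model hook itself still takes no parameters; the parameters go to the exposed function instead.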
username_0: Thanks for the answer~
Initially I wanted to put the requests in hox, but most of them take parameters, and I saw in the docs that hox can't take parameters, so I wrote them outside.
Status: Issue closed
|
NLnetLabs/unbound | 520597756 | Title: [Help] unbound-control local_zone <name> <type>
Question:
username_0: Hi guys, I would like a little help to figure out the right way to set this command with unbound-control
I'm trying to add a local-zone as
```
unbound-control local_zones "2mdn.net" always_nxdomain
unbound-control local_zones 2mdn.net always_nxdomain
unbound-control "local_zones 2mdn.net always_nxdomain"
```
According to `unbound-control -h` this seems like the right syntax, but when I hit enter the console leaves me hanging, and nothing seems to happen before I hit `CTRL + C`.
I'm running on Ubuntu 19.04 with Unbound 1.9.4
Compiled as written here https://gitlab.com/rpz-zones/toolbox/issues/18#configure-options
Could someone please explain what's wrong with the syntax, and maybe give me an example of the right syntax for this case.
PS: This could also be a good example to add to the documentation :smiley:
Answers:
username_1: Hi,
For these kind of questions, we have an excellent mailing list for and from the Unbound users community. The registration and archive of the unbound-user mailing list can be found here: https://nlnetlabs.nl/mailman/listinfo/unbound-users
Thanks for your suggestion to include this in the Unbound documentation. :-)
Best,
-- Benno
Status: Issue closed
username_0: And for those who actually don't use mails? |
scalameta/scalagen | 283184274 | Title: User driven logging
Question:
username_0: As we always wanted to implement this in scalameta/paradise. I feel it needs to happen here.
# Error
Kills the generation process & Logs to console
# Warning
Logs to console
# Info
Logs to console
Status: Issue closed
Answers:
username_0: Note: Instead of just these 3...
I've added 5:
- Abort, will abort generation
- Error/Warning/Info/Debug to match SLF4J
We may want to add a `skip` also, which skips the current generator without expanding. I'll create a ticket |
rust-lang/cargo | 639047969 | Title: Better Docker support via build plan filter and replay
Question:
username_0: Hi.
This is a follow-up feature request to the long discussion over at #2644 and in particular [this comment](https://github.com/rust-lang/cargo/issues/2644#issuecomment-635365756).
#### The problem
Docker.
It's kind of hard to write Dockerfiles such that the build of dependencies can be saved into a docker layer and reused later when only the code of the crate at hand changes, saving quite a lot of time & resources. At present people have to rely on hacks such as replacing sources with dummy content, `touch`-ing sources files, etc. These tricks also don't scale terribly well to complex workspaces and/or larger projects.
#### The solution
I believe a viable approach to solve this could consist of:
- The ability to filter the build plan such that it only contains steps to build dependencies
- The ability to load & replay a serialized build plan
As noted by @username_1 in the linked thread, the build plan wouldn't even need to be checked into VCS; it could be generated on demand via Docker's staged builds feature. I.e. the strategy would be:
1. Copy workspace into docker, serialize dependencies build plan.
2. Start a second stage, copy build plan from previous stage, build dependencies. This way, the second deps-building stage depends **on the build plan file only, not on the workspace sources** (this is very important in order to not have dependencies build trashed by changes in workspace sources).
3. Start a third stage, copy `target` from the previous stage (with dependencies built), copy the workspace (again), perform a regular build.
4. Profit.
I believe the format of a build plan file could be entirely private to cargo and not guaranteed to be stable in any way, since it would only be meant to be used by the same cargo binary just a little later.
I'm writing this issue so that the idea doesn't vanish in yet more comments in that thread, and also in the hope of a more constructive discussion about how to best tackle the Docker problem. Also, this is more of a long-term wish; I realize this is unlikely to have people jump on it & implement it tomorrow. Let me know what you think :-)
Answers:
username_0: cc @Eh2406
username_1: Nicely written, good idea to create a separate issue for this.
Little minor enhancement (not affecting the proposal, just ease of its use) - I think point 3 can be further simplified to:
3. Copy the full workspace sources into the second stage (ensuring `target` is not overwritten [1]), perform a regular build.
[1] It should be an existing good practice to add `target` directory into `.dockerignore`.
Note that @username_0's proposal makes cool use of the fact the even for multi-crate workspaces, there is a single `target` directory (and presumably a single build plan). It should be therefore well usable also for workspace'd projects.
username_2: I believe this is now possible with `cargo-chef` (author here):
```dockerfile
FROM rust as planner
WORKDIR app
RUN cargo install cargo-chef
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM rust as cacher
WORKDIR app
RUN cargo install cargo-chef
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
FROM rust as builder
WORKDIR app
COPY . .
COPY --from=cacher /app/target target
RUN cargo build --release --bin app
FROM rust as runtime
WORKDIR app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]
```
Does it capture what you had in mind @username_0?
username_0: @username_2 That looks very promising, I'm looking forward to try `cargo-chef` out... |
octokit/webhooks | 845581510 | Title: `installation.created` webhook event payload has incorrect type
Question:
username_0: <!-- Please replace all placeholders such as this below -->
**What happened?**
The `installation.created` webhook event payload type is incorrect.
**What did you expect to happen?**
I expected the `requester` property to be of type `User | undefined`. Instead, it is declared as type `null | undefined`.
**What the problem might be**
The Github API documentation also doesn't acknowledge this property's existence. Also cross posted to https://github.com/octokit/webhooks.js/issues/520 (where I was advised to post an issue in this repo as well).
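In other words, the expected shape would be roughly the following (a hypothetical sketch; the interface and referenced type names are placeholders, not the package's actual exports):
```ts
// Placeholder types standing in for the payload object shapes.
type User = Record<string, unknown>;
type Installation = Record<string, unknown>;
type Repository = Record<string, unknown>;

interface InstallationCreatedEvent {
  action: "created";
  installation: Installation;
  repositories?: Repository[];
  requester?: User | null; // currently declared as `null | undefined`
  sender: User;
}
```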
**Example Payload**
```json
{
"action":"created",
"installation":{
"id":15819815,
"account":{
"login":"instantish-dev",
"id":71402524,
"node_id":"MDEyOk9yZ2FuaXphdGlvbjcxNDAyNTI0",
"avatar_url":"https://avatars.githubusercontent.com/u/71402524?v=4",
"gravatar_id":"",
"url":"https://api.github.com/users/instantish-dev",
"html_url":"https://github.com/instantish-dev",
"followers_url":"https://api.github.com/users/instantish-dev/followers",
"following_url":"https://api.github.com/users/instantish-dev/following{/other_user}",
"gists_url":"https://api.github.com/users/instantish-dev/gists{/gist_id}",
"starred_url":"https://api.github.com/users/instantish-dev/starred{/owner}{/repo}",
"subscriptions_url":"https://api.github.com/users/instantish-dev/subscriptions",
"organizations_url":"https://api.github.com/users/instantish-dev/orgs",
"repos_url":"https://api.github.com/users/instantish-dev/repos",
"events_url":"https://api.github.com/users/instantish-dev/events{/privacy}",
"received_events_url":"https://api.github.com/users/instantish-dev/received_events",
"type":"Organization",
"site_admin":false
},
"repository_selection":"all",
"access_tokens_url":"https://api.github.com/app/installations/15819815/access_tokens",
"repositories_url":"https://api.github.com/installation/repositories",
"html_url":"https://github.com/organizations/instantish-dev/settings/installations/15819815",
"app_id":81168,
"app_slug":"instantish-github-app-dev",
"target_id":71402524,
"target_type":"Organization",
"permissions":{
"issues":"write",
"members":"read",
"metadata":"read",
"single_file":"write",
"pull_requests":"write",
"repository_projects":"write",
"organization_projects":"write"
},
"events":[
"issues",
[Truncated]
"id":16073505,
"node_id":"MDQ6VXNlcjE2MDczNTA1",
"avatar_url":"https://avatars.githubusercontent.com/u/16073505?v=4",
"gravatar_id":"",
"url":"https://api.github.com/users/username_0",
"html_url":"https://github.com/username_0",
"followers_url":"https://api.github.com/users/username_0/followers",
"following_url":"https://api.github.com/users/username_0/following{/other_user}",
"gists_url":"https://api.github.com/users/username_0/gists{/gist_id}",
"starred_url":"https://api.github.com/users/username_0/starred{/owner}{/repo}",
"subscriptions_url":"https://api.github.com/users/username_0/subscriptions",
"organizations_url":"https://api.github.com/users/username_0/orgs",
"repos_url":"https://api.github.com/users/username_0/repos",
"events_url":"https://api.github.com/users/username_0/events{/privacy}",
"received_events_url":"https://api.github.com/users/username_0/received_events",
"type":"User",
"site_admin":false
}
}
```<issue_closed>
Status: Issue closed |
eurodatacube/eodash | 677422042 | Title: NEW Feature - Ad-hoc indicator generation based on user defined AOI
Question:
username_0: The user can draw a bounding box to derive the indicator for that area on the fly.
This would work with the precomputed / wall-to-wall Truck detections, which are provided as GeoPackage point layers
But in principle also for the air quality and potentially the water quality layers.
Answers:
username_1: Info for S5P data (N1) from SH: Need to use the Statistical API on the BYOD that has time configured.
username_1: @dzelge actions:
* load truck detection data
* provide example for spatial query
* provide access mechanism using an ID (hash)
username_2: @username_3 @username_4 Just a quick question. I tried to access the SH FIS api on an example N3_CUSTOM_TRILATERAL layer (water quality BYOC), and I am receiving a statistics of the viewed RGBA composite, not the source data. Do I understand it right, that a different layer needs to be configured for FIS api to be used? (I tried setting STYLE=INDEX and STYLE=SENSOR), but the results are the same.
example url used: ```<url>/ogc/fis/<instance_id>?STYLE=SENSOR&LAYER=N3_CUSTOM_TRILATERAL&CRS=CRS:84&TIME=2020-08-01/2020-08-25&RESOLUTION=100m&GEOMETRY=POLYGON((12.73%2045.4,13.3%2045.4,13.3%2045,12.73%2045,12.73%2045.4))```
username_1: @dzelge do you have any updates on any of your 3 actions? thanks
username_3: Hi @username_2,
yes, the approach is to create another configuration that returns the raw band values and then use that one for the FIS request.
username_2: Hi, thank you for the clarification. All clear.
Status: Issue closed
username_4: @username_3 @username_5
I think we need support to create this new layer; there is not much information on Sentinel Hub about it, or at least I was not able to find any. So we tried ourselves to generate this custom script to use the raw data - is it fine, or is there a more reliable way to do it?
```js
//VERSION=3
function setup() {
  return {
    input: ["tropno2", "weight", "dataMask"],
    output: { bands: 4,
              sampleType: "FLOAT32" }
  };
}
function evaluatePixel(sample) {
  var valt = sample.tropno2;
  var valw = sample.weight;
  // if (val>180) return [0,0,0,0]; // no data values
  return [valt, valt, valw, 255];
}
```
username_5: In general, this seems OK. Not sure, why you want to have "valt" output in two bands, but if this is what you need, that should be fine.
If you are not getting the result you are expecting, please let us know, what you would like to get, if possible, with examples, then also attach one sample URL request, so that we can debug, and we will help you create a custom script.
username_4: @username_5
EOX needs to access the raw data. I checked and the raw data are in the `tropno2` band, so I changed the script to the simplest form:
```js
//VERSION=3
function setup() {
  return {
    input: ["tropno2"],
    output: { bands: 1,
              sampleType: "FLOAT32" }
  };
}
function evaluatePixel(sample) {
  return [sample.tropno2];
}
```
So it should be enough this way, I presume!
username_5: Agreed, this should work.
username_2: @username_3 @username_5 We have a question regarding the script. Currently for one band this script works, but nodata value was not set on the source cog inserted via byoc - due to a presumed limitation that nodata value must have been integer. So ingested cogs have values set as 9.9E36 for pixels, where this should be marked as nodata. Is there a way to modify this resulting script so it would ignore these "nodata" 9.9E36 values in FIS API response completely (without setting it to some arbitrary value, like 0 - which would skew the resulting dashboard chart)?
username_5: No, unfortunately not. FIS will take into account "no data" values in the raw imagery.
Status: Issue closed
username_1: NO2 drawing is tracked in #589. Closing this as it is done for the trucks. |
jitsi/jitsi-meet | 603615129 | Title: youtube livestreaming error: connect streaming software to start preview
Question:
username_0: *This Issue tracker is only for reporting bugs and tracking code related issues.*
Before posting, please make sure you check community.jitsi.org to see if the same or similar bugs have already been discussed.
General questions, installation help, and feature requests can also be posted to community.jitsi.org.
## Description
Hi, this is my first time trying Jitsi for livestreaming (I used to use hangouts on air, since discontinued of course).
I only managed to make it work using the legacy version of the youtube stream now, not the new "live dashboard"
---
## Current behavior
When using the new design of the live dashboard, the stream connected successfully but failed to show a preview (so it refused to start streaming).
The error was:
```
connect streaming software to start preview
```
---
## Expected Behavior
That the livestreaming would work both ways (I understand it may be YouTube's problem, but it's worth investigating IMHO).
---
## Possible Solution
For now, just use the old interface
---
## Steps to reproduce
Go to the new live dashboard, paste the stream key from Jitsi.
---
# Environment details
Chrome on Mac OS 10.14
---
Answers:
username_1: I will close this with reference to this: #5146
See also: #5403
---
*I'm a community contributor and not directly affiliated with the Jitsi team.*
Status: Issue closed
|
encode/uvicorn | 625952306 | Title: Invalid argument was supplied error on windows
Question:
username_0: There seems to be an issue when I increase the worker process count greater than 1.
**command** -> `uvicorn --port 9000 --workers 2 MyProject.asgi:application`
**error log** ->
```
INFO: Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
INFO: Started parent process [17696]
INFO: Started server process [20428]
INFO: Started server process [15476]
INFO: Waiting for application startup.
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Application startup complete.
Process SpawnProcess-2:
Traceback (most recent call last):
File "c:\users\arnab\appdata\local\programs\python\python37-32\Lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "c:\users\arnab\appdata\local\programs\python\python37-32\Lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "t:\my projects\scholify-web\env\lib\site-packages\uvicorn\subprocess.py", line 62, in subprocess_started
target(sockets=sockets)
File "t:\my projects\scholify-web\env\lib\site-packages\uvicorn\main.py", line 382, in run
loop.run_until_complete(self.serve(sockets=sockets))
File "c:\users\arnab\appdata\local\programs\python\python37-32\Lib\asyncio\base_events.py", line 579, in run_until_complete
return future.result()
File "t:\my projects\scholify-web\env\lib\site-packages\uvicorn\main.py", line 399, in serve
await self.startup(sockets=sockets)
File "t:\my projects\scholify-web\env\lib\site-packages\uvicorn\main.py", line 433, in startup
create_protocol, sock=sock, ssl=config.ssl, backlog=config.backlog
File "c:\users\arnab\appdata\local\programs\python\python37-32\Lib\asyncio\base_events.py", line 1393, in create_server
server._start_serving()
File "c:\users\arnab\appdata\local\programs\python\python37-32\Lib\asyncio\base_events.py", line 282, in _start_serving
sock.listen(self._backlog)
OSError: [WinError 10022] An invalid argument was supplied
```
Answers:
username_1: this one has a fix I really don't understand
Status: Issue closed
|
jlippold/tweakCompatible | 301084155 | Title: `tweakCompatible` working on iOS 11.1.2
Question:
username_0: ```
{
"packageId": "bz.jed.tweakcompatible",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "bz.jed.tweakcompatible",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/bz.jed.tweakcompatible/",
"iOSVersion": "11.1.2",
"packageVersionIndexed": true,
"packageName": "tweakCompatible",
"category": "Tweaks",
"repository": "BigBoss",
"name": "tweakCompatible",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 4 working reports.",
"id": "bz.jed.tweakcompatible",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.4",
"shortDescription": "Adds a way to check tweak compatibility in cydia",
"latest": "0.0.4",
"author": "treAson",
"packageStatus": "Working"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
alacritty/alacritty | 642735504 | Title: Control as modifier key makes CTRL+key combinations undetectable
Question:
username_0: ### System
OS: Fedora Linux
Version: `alacritty 0.4.3`
Linux/BSD: Wayland, Sway
### Logs
Answers:
username_1: But `Ctrl +C` aka `^C` was sent correctly from what I see, since it's indeed 0x03 in `showkey -a`? You also pressed `Ctrl + d` indeed, but it was sent a `d` char from `ReceivedChar` I'd guess. I also don't see `Alt` pressed?
username_0: @username_1 Ctrl+C is set to a different type, therefore not applying to the Ctrl+Alt rule I made, therefore being sent correctly. I'm not pressing Alt at all; it is not relevant as I'm talking about Ctrl+KEY combinations on alacritty specifically breaking from my Ctrl+Alt rule (maybe it's breaking because I set Ctrl as a modifier?)
username_1: What it shows in [wev](https://git.sr.ht/~sircmpwn/wev/)?
username_0: ```
[14: wl_keyboard] key: serial: 96069; time: 30051059; key: 37; state: 1 (pressed)
sym: Control_L (65507), utf8: ''
[14: wl_keyboard] modifiers: serial: 0; group: 0
depressed: 00000004: Control
latched: 00000000
locked: 00000000
[14: wl_keyboard] key: serial: 96071; time: 30051281; key: 54; state: 1 (pressed)
sym: c (99), utf8: ''
[14: wl_keyboard] key: serial: 96072; time: 30051380; key: 54; state: 0 (released)
sym: c (99), utf8: ''
[14: wl_keyboard] key: serial: 96073; time: 30052054; key: 40; state: 1 (pressed)
sym: d (100), utf8: 'd'
[14: wl_keyboard] key: serial: 96074; time: 30052116; key: 40; state: 0 (released)
sym: d (100), utf8: ''
[14: wl_keyboard] key: serial: 96075; time: 30052789; key: 37; state: 0 (released)
sym: Control_L (65507), utf8: ''
[14: wl_keyboard] modifiers: serial: 0; group: 0
depressed: 00000000
latched: 00000000
locked: 00000000
```
Status: Issue closed
username_0: Hey! I'm making my own keyboard layout, and made Control+Alt act as LevelThree (AltGr) because my keyboard doesn't have it. However, alacritty doesn't seem to properly pass that down to the terminal, making key combinations like ^C/^D unusable. This doesn't happen in kitty and gnome-terminal.
### System
OS: Fedora Linux
Version: `alacritty 0.4.3`
Linux/BSD: Wayland, Sway
### Logs
In this example, Ctrl+D and Ctrl+C is pressed (only D is set to act on Ctrl+Alt.)
```
[2020-06-22 01:13:01.830125530] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 29, state: Pressed, virtual_keycode: Some(LControl), modifiers: (empty) }, is_synthetic: false } }
[2020-06-22 01:13:02.414619226] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 32, state: Pressed, virtual_keycode: Some(D), modifiers: CTRL }, is_synthetic: false } }
[2020-06-22 01:13:02.495600541] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 32, state: Released, virtual_keycode: Some(D), modifiers: CTRL }, is_synthetic: false } }
[2020-06-22 01:13:03.460369164] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 46, state: Pressed, virtual_keycode: Some(C), modifiers: CTRL }, is_synthetic: false } }
[2020-06-22 01:13:03.521030264] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 46, state: Released, virtual_keycode: Some(C), modifiers: CTRL }, is_synthetic: false } }
[2020-06-22 01:13:03.923114770] [INFO] glutin event: WindowEvent { window_id: WindowId(Wayland(WindowId(94416137087840))), event: KeyboardInput { device_id: DeviceId(Wayland(DeviceId)), input: KeyboardInput { scancode: 29, state: Released, virtual_keycode: Some(LControl), modifiers: CTRL }, is_synthetic: false } }
```
As you can see, alacritty seems to detect it fine, however, `showkey -a` says something else:
```
d 100 0144 0x64
^C 3 0003 0x03
```
username_1: Seems like everything is the same as wev is showing? We don't have a binding for `Ctrl + d`, which should send `0x04` normally, however you can see that `wev` is explicitly getting `d`. So it seems like everything is correct on our side, or `wev` also has a 'bug'. You can bind `Ctrl + d` to send `0x04` in a config.
Status: Issue closed
username_0: I would assume this is normal behaviour, except that I don't see it happening in neither kitty or gnome-terminal, therefore I thought this might be an issue here. If it isn't, do you know how I can modify my layout to not make this happening, while still mantaining Ctrl+Alt -> AltGr?
username_1: Does it put `d` in something else than terminal emulators, like firefox, etc?
username_0: It doesn't put d in Telegram, neither does in Firefox (falls back to the Ctrl+D keybinding; bookmarks something)
username_1: I'd assume that it just tries to process the binding and filters the character. The thing is that `Ctrl + D` works due to the character it's sending, so if you want a `Ctrl + d` binding in alacritty you should bind it yourself. It'll still work with your layout, as I said.
So, just read the `keybindings` section and make it send the proper chars on `Ctrl + d`.
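For reference, such bindings in `alacritty.yml` could look roughly like this (a sketch; `\x04`/`\x03` are the usual EOT/ETX control bytes):
```yaml
key_bindings:
  # Explicitly send the control characters regardless of the layout's modifier tricks
  - { key: D, mods: Control, chars: "\x04" }
  - { key: C, mods: Control, chars: "\x03" }
```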
username_0: :thinking: okay, it just seems weird to me that it works on every other app except alacritty |
Automattic/wp-desktop | 283339190 | Title: OSX: Define menu keeps popping up.
Question:
username_0: The Define/Cut/Copy/Paste/Select All menu keeps popping up after performing one of the actions.
See capture of behavior: https://youtu.be/I2cURXChiTA
Answers:
username_1: I was unable to reproduce the problem using the following testing steps:
1. Open any post for editing.
1. Highlight a word.
1. Right-click the highlighted word and select Copy.
1. Move the cursor and click somewhere else within the body of the post.
<strong>Result:</strong> the contextual menu disappears in my test as expected.
I tested with WPDesktop 3.1.0 on Mac OS X 10.13.1.
@username_0 may I ask which version of the desktop app you tested with?
Does this problem also happen if you try editing a WordPress.com post directly in a browser too?
Does this problem happen for you in any other editing apps that you know of?
username_0: @username_1
WPD 3.1.0 on Mac OS X 10.13.2
Problem only occurs in this app, nowhere else.
username_2: Closing this as stale since we have not been able to repro and no other reports of this issue have been submitted since. Please feel free to re-open or submit a new issue if you encounter this bug in our latest release (v5.0.0-beta2 as of this writing).
Status: Issue closed
|
Puzzlepart/prosjektportalen365 | 748716948 | Title: Test of v1.2.4
Question:
username_0: ## Testing the new version before creating new release
Checklist for testing the additions, changes and fixes
# Version 1.2.4
## Added
- [ ] Added "default" option for extensions, similar to list content #328
- [ ] Added info message if there are unpublished statusreports #340
- [ ] Added published/unpublished indicators for statusreports in dropdown and ribbon #341
- [ ] Added possiblity to delete unpublished statusreports #343
- [ ] Added PNG snapshot when publishing project status #337
## Fixed
- [ ] Improved failure handling for PlannerConfiguration task in Project Setup #329
- [ ] Support adding AD groups to get porfolio insights from SP group #332
- [ ] Change to latest statusreport when creating a new statusreport #343
After everything has been checked and approved a release of the new version can be created. It is important that the changelog, this issue and the release notes are equal.
Answers:
username_1: Issues registered when testing this release:
#348 Install Log
#349 Display Name extensions
#350 Project status page not loading
#352 Portfolio content not limited to access level
Fix #331 tested by creating 3 new projects in an upgraded installation, and 2 new projects in a clean installation of PP365 v1.2.4
username_1: All tests have been completed, and we can proceed with the release of 1.2.4
Status: Issue closed
username_1: ## Testing the new version before creating new release
Latest version can be found here: https://puzzlepart.sharepoint.com/sites/pp365/SitePages/Home.aspx
This site is upgraded automatically after push on dev branch.
Checklist for testing the additions, changes and fixes
# Version 1.2.4
## Added
- [x] Added "default" option for extensions, similar to list content #328
- [x] Added info message if there are unpublished statusreports #340
- [x] Added published/unpublished indicators for statusreports in dropdown and ribbon #341
- [x] Added possiblity to delete unpublished statusreports #343
- [x] Added PNG snapshot when publishing project status #337
## Fixed
- [x] Improved failure handling for PlannerConfiguration task in Project Setup #331
- [x] Support adding AD groups to get porfolio insights from SP group #332 #354
- [x] Change to latest statusreport when creating a new statusreport #343
- [x] Issue were user couldn't exit the portfolio filter pane #353
After everything has been checked and approved a release of the new version can be created. It is important that the changelog, this issue and the release notes are equal.
username_2: Tested the last change we included, #357, in a new site and in an existing site.
Status: Issue closed
|
tensorflow/tensorflow | 375153399 | Title: Failed to load
Question:
username_0: <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>
**System information**
- OS Platform and Distribution: Windows 10 x64
- TensorFlow installed from (source or binary):
- TensorFlow version: tensorflow_gpu-1.11.0
- Python version: Python 3.6.7
- Installed using: pip
- CUDA/cuDNN version: cuda 9.0 and cudnn-9.0
- GPU model and memory: 1080 ti 11 GB
Running import tensorflow as tf returns:
Traceback (most recent call last):
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: Could not find the module.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 1, in <module>
import tensorflow as tf
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Sebastian\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: Could not find the module.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
I guess my problem is something with the paths. I have added the following:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64
Answers:
username_1: @username_0 - Hi, try installing Microsoft Visual C++ 2015 Redistributable Update 3 and see if you still run into this error.
Also please try with CUDA9 and cuDNN 7.0.5
username_0: Hi, thank you for your answer.
Installing Microsoft Visual C++ 2015 Redistributable Update 3 did resolve the problem.
I'm pretty sure something is wrong with my path variables. I will read your link later, thanks.
username_2: I met the same problem as you. Have you got any solution?
username_0: @username_1 I have a working installation now by following the guide you linked, thanks alot!
@username_2 Follow the link [above](https://www.pugetsystems.com/labs/hpc/The-Best-Way-to-Install-TensorFlow-with-GPU-Support-on-Windows-10-Without-Installing-CUDA-1187/) to install tensorflow with gpu-support through conda.
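For anyone else landing here, the conda route from that guide boils down to roughly the following (package versions omitted; conda pulls in a matching CUDA/cuDNN build):
```
conda create --name tf-gpu
conda activate tf-gpu
conda install tensorflow-gpu
```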
username_1: @username_0 - Great !
Status: Issue closed
username_1: Are you satisfied with the resolution of your issue?<br> [Yes](https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=23355)<br> [No](https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=23355) |
autogoal/autogoal | 711835590 | Title: pip install autogoal[all]
Question:
username_0: The installation command 'pip install autogoal[all]' misses **sklearn** and **tqdm** packages
Status: Issue closed
Answers:
username_1: Related with #10 . Solving that would solve this as well.
username_1: The installation command 'pip install autogoal[all]' misses **sklearn** and **tqdm** packages
username_1: Except that we would need to add `all` to the optional dependencies. So I'm leaving this open for when the comment comes :sweat_smile:
Status: Issue closed
username_1: Closed, finally solved by #17 |
AleksandrRogov/DynamicsWebApi | 305199942 | Title: 401 Not authorized - successful token from adal-node
Question:
username_0: Having some trouble doing a basic query. I've hooked up `adal-node`, and when I call `acquireTokenWithClientCredentials` I successfully get back an access token, which I then pass to my dynamicsWebApi callback. Unfortunately, even with a valid token, I'm still getting an `unauthorized` error.
Am I supposed to pass the entire token response to the callback, or just the `accessToken` itself? (Doesn't work either way, just trying to narrow down possibilities.)
```
function acquireToken(dynamicsWebApiCallback) {
// a callback for adal-node (auth)
function adalCallback(error, token) {
if (!error) {
// call dynamicsWebApi callback only when token recieved
console.log(token) // this successfully shows the token response
dynamicsWebApiCallback(token); // this always returns 401
} else {
console.log(`Whoops broke it: ${error}`);
}
}
context.acquireTokenWithClientCredentials(
resource,
applicationId,
clientSecret,
adalCallback
)
}
const dynamicsWebApi = new DynamicsApi({
webApiUrl: `${resource}api/data/v8.0`,
onTokenRefresh: acquireToken,
});
dynamicsWebApi.retrieveMultiple("accounts").then(function (response) {
console.log(response);
}).catch(function(error){
console.log(error.message);
});
```
Answers:
username_1: Hi @username_0,
Did you try to copy-paste that token and make a request using something like Fiddler or Curl? Does it work?
username_1: By the way, are you sure that the resource you are using in `webApiUrl` is the same as you provide for adal authentication?
username_0: Hmmm just tried and still getting a 401. Likely a Microsoft/Azure/AD/some other permissions issue?
And the resource for `adal-node` is just the resource without the `api/data/v8.0`
username_1: Oh yeah? Strangely for me they are different. Adal resource is something like: org.crm.dynamics.com; and web api url is: org.**api**.crm.dynamics.com
username_0: I just made that change and it didn't change anything code-wise.
`adal-node` resource: `https://org.crm.dynamics.com`
`webApiUrl`: `https://org.api.crm.dynamics.com`
username_1: token does not work still?
username_0: Nope I'm still getting,
```
{ tokenType: 'Bearer',
expiresIn: 3599,
expiresOn: 2018-03-14T16:30:22.522Z,
resource: 'https://org.crm.dynamics.com/',
accessToken: 'token',
isMRRT: true,
_clientId: 'XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX',
_authority: 'https://login.microsoftonline.com/{XXXX-XXXXX-XXXXX-XXXXX}/oauth2/token' }
{ message: 'HTTP Error 401 - Unauthorized: Access is denied\r\n',
status: 401,
statusText: undefined }
GET /api/dynamics 400 4989.352 ms - 90
```
username_1: you can try to parse JWT in [jwt.io](https://jwt.io) and see what information is there.
```js
{
"aud": "https://org.crm.dynamics.com",
"iss": "https://sts.windows.net/XXX-XX-XX-XXX/",
"iat": 1521041577,
"nbf": 1521041577,
"exp": 1521045477,
"acr": "1",
"aio": "some random key here",
"amr": [
"pwd"
],
"appid": "XXXX-XXX-XX-XXX",
"appidacr": "0",
"e_exp": 262800,
"family_name": "User lastname",
"given_name": "User firstname",
"ipaddr": "Your IP",
"name": "User full name",
"oid": "XXXX-XXX-XXX-XXXX",
"puid": "random id here",
"scp": "user_impersonation",
"sub": "Random number here",
"tid": "XXXX-XXXX-XXXX-XXXX",
"unique_name": "Email address",
"upn": "Email address",
"uti": "Random number here",
"ver": "1.0"
}
```
username_0: Ok I see. I was able to successfully use postman to get a token, and make a request with that token manually. It looks like I need to pass both a username and password AND a clientSecret - if I just use the function you mentioned, `acquireTokenWithUsernamePassword`, I get the following error:
```
Error: Get Token request returned http error: 401 and server response: {"error":"invalid_client","error_description":"AADSTS70002: The request body must contain the following parameter: 'client_secret or client_assertion'. ...}
```
username_1: Interesting. That could be a problem in Azure AD App Registration Settings somewhere. I always use that function in each Node.Js application I create and never had a problem.
I think it may depend on what permissions you are granting to your application. Here are permissions I usually use:

username_1: Also, forgot to mention, the application type I use is "Native".
username_1: Closing the issue because of inactivity.
Status: Issue closed
|
baltimorecounty/baltimorecountymd.gov-assets | 443398996 | Title: Search input layout issue on iPhone
Question:
username_0: ### Description
The search input on all BCG subpages has rounded corners on the left side on iOS devices. The search button and input also don't align.
### To Reproduce
Steps to reproduce the behavior:
1. View any subpage on an iPhone.
### Expected behavior
All corner should be square (no radius). The input and button should be the same height.
### Screenshots
**Currently looks like:**

**Should look like:**

## Device and Browser
- Safari and Chrome on iPhone 8, iOS 12.2
- Does not affect PCs -- not sure about Android or Mac. |
superduper/BrightnessMenulet | 9445086 | Title: It only put my monitor on 100% of brightness, but no more changes
Question:
username_0: Hi,
I've got a Dell U2410 and I would like to use this app with it. The problem is that when I move the % on the menulet, the brightness of my screen goes to 100% and never changes to other values. I'm using a DVI cable with the original mini-DP --> DVI converter.
If I could help you with the development with any logs, tell me and I'll try to do it with Xcode (I'm an iOS developer).
Cheers!
Status: Issue closed
Answers:
username_2: @username_1 I believe this issue was accidentally closed by the big merge (#56)
partyshark/partyShark | 140504076 | Title: Users that can't be players should be alerted
Question:
username_0: Mobile users that can't load the Deezer web player should be alerted that they shouldn't try to be the party player.
Status: Issue closed
Answers:
username_2: Instead of alerting them just make it so they can't request to be the player, make the button unclickable |
STEllAR-GROUP/hpx | 46982765 | Title: Install failure on OS X -- duplicate rpath entries
Question:
username_0: I am using OS X Yosemite. I currently cannot build HPX from source. This used to work fine on Mavericks.
The build itself goes fine. The install then emits this error message:
```
/opt/local/bin/install_name_tool: object: /Users/username_0/SIMFACTORY/hpx-master/install/lib/hpx/libparcel_coalescing.0.9.10.dylib malformed object (load command 39 cmdsize is zero)
```
If I use "otool -l" on this library, then I see that it is indeed broken; I see the same error message.
Using "otool -l" on the original library (the one in the build directory), I see that it is healthy. Just before load command 39 (the load commands are numbered) are the definitions for LC_RPATH, i.e. the path names provided via the -rpath command. I see that the path "/Users/username_0/SIMFACTORY/boost-1_56_0/install/lib" appears twice.
According to my interpretation of the man page of "install_name_tool" (and Google), this should not be the case -- each path should be unique.
It appears that the commands in the file "cmake_install.cmake" should rewrite these LC_RPATH paths, changing the HPX build directory to the HPX install directory. In my build, these commands are in lines 43 to 59 of this file, and read
```
execute_process(COMMAND "/opt/local/bin/install_name_tool"
-id "libparcel_coalescing.0.dylib"
-change "/Users/username_0/SIMFACTORY/hpx-master/build/lib/libhpx.0.dylib" "libhpx.0.dylib"
-change "/Users/username_0/SIMFACTORY/hpx-master/build/lib/libhpx_serialization.0.dylib" "libhpx_serialization.0.dylib"
"${file}")
execute_process(COMMAND /opt/local/bin/install_name_tool
-delete_rpath "/Users/username_0/SIMFACTORY/hpx-master/build/lib"
"${file}")
execute_process(COMMAND /opt/local/bin/install_name_tool
-delete_rpath "/Users/username_0/SIMFACTORY/boost-1_56_0/install/lib"
"${file}")
execute_process(COMMAND /opt/local/bin/install_name_tool
-add_rpath "/Users/username_0/SIMFACTORY/hpx-master/install/lib"
"${file}")
execute_process(COMMAND /opt/local/bin/install_name_tool
-add_rpath "/Users/username_0/SIMFACTORY/boost-1_56_0/install/lib"
"${file}")
```
It appears that these calls to "install_name_tool" somehow break the library. I think that this is an error either in install_name_tool, or in another system tool, or in clang 3.5 that I am using to build HPX, or in cmake 3.0.
Would it be possible to manually uniquify the -rpath options that are passed to the linker, so that each path is listed only once? The file CMakeFiles/parcel_coalescing_lib.dir/link.txt contains indeed two identical -rpath options for the offending path. Unfortunately, my cmake-fu (or lack thereof) doesn't allow me to try this myself.
See also <https://github.com/pandegroup/openmm/issues/295>, which seems to describe exactly my situation.
Answers:
username_1: Is this still an issue after the recent build system changes?
username_0: Yes, I confirm that the issue is still present.
username_2: This has been fixed by disabling the full RPATH setting of cmake, which seemed to have caused the problem (see eaf73b9).
Buildbot is able to install all libraries just fine (http://hermione.cct.lsu.edu/builders/hpx_appleclang_5_1_boost_1_56_osx_x86_64_release/builds/242/steps/install/logs/stdio).
Please reopen if it is still not working for you.
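For context, CMake's "full RPATH" handling usually refers to knobs along these lines (a sketch of the standard recipe, not necessarily the exact change made in eaf73b9):
```cmake
# Standard CMake RPATH knobs that the "full RPATH" recipe toggles:
set(CMAKE_SKIP_BUILD_RPATH FALSE)                 # use an RPATH in the build tree
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)         # relink with the install RPATH at install time
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)       # add linked external dirs (e.g. Boost) to the RPATH
```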
Status: Issue closed
username_0: The build and install steps are now working fine. However, I see this error when I build and run a simple HPX test:
```
mpirun -np 1 ./hello-cc
dyld: Library not loaded: libhpx.0.dylib
Referenced from: /Users/username_0/SIMFACTORY/hpx-master/test/test-0/./hello-cc
Reason: no suitable image found. Did find:
/Users/username_0/SIMFACTORY/hpx-master/install/lib/libhpx.0.dylib: malformed mach-o image: load command #40 length (0) too small in /Users/username_0/SIMFACTORY/hpx-master/install/lib/libhpx.0.dylib
```
This is essentially the same error message as before.
Also, 99% of the HPX self-tests fail. Here is the beginning of the respective output (run before installing HPX):
```
Running tests...
Test project /Users/username_0/SIMFACTORY/hpx-master/build
Start 1: tests.regressions.id_type_ref_counting_1032
1/223 Test #1: tests.regressions.id_type_ref_counting_1032 ...............................................***Failed 0.16 sec
Start 2: tests.regressions.multiple_init
2/223 Test #2: tests.regressions.multiple_init ...........................................................***Failed 0.05 sec
Start 3: tests.regressions.id_type_ref_counting_1032_4
3/223 Test #3: tests.regressions.id_type_ref_counting_1032_4 .............................................***Failed 0.05 sec
```
username_2: I am not entirely sure what's going on there. Looking at buildbot, it seems mostly fine:
http://hermione.cct.lsu.edu/builders/hpx_appleclang_5_1_boost_1_56_osx_x86_64_release/builds/242
Though buildbot does indeed not try running the installed binaries.
Just tried to reproduce the error with the following commands:
```
bash-3.2$ cd /Users/buildbot/slave/hpx_appleclang_5_1_boost_1_56_osx_x86_64_debug/install/
bash-3.2$ cd bin/
bash-3.2$ ./async_io
dyld: Library not loaded: libhpxd.0.dylib
Referenced from: /Users/buildbot/slave/hpx_appleclang_5_1_boost_1_56_osx_x86_64_debug/install/bin/./async_io
Reason: image not found
Trace/BPT trap: 5
bash-3.2$ export DYLD_LIBRARY_PATH=/Users/buildbot/slave/hpx_appleclang_5_1_boost_1_56_osx_x86_64_debug/install/lib:$DYLD_LIBRARY_PATH
bash-3.2$ ./async_io
OS-thread: Write this string to std::cout
HPX-thread: The asynchronous IO operation returned: 0
bash-3.2$
```
This seems to work. So you can either have the rpath error preventing you from installing at all, or set the DYLD_LIBRARY_PATH environment variable to contain the correct path. This looks like a cmake bug I have no idea how to work around otherwise. What do you suggest?
username_0: The error I see indicates that the library was installed in the wrong way, as the library is found but contains an error: `malformed mach-o image`. This indicates that the library was destroyed while installing it, but the installer did not notice. The problem is `install_name_tool`, which cannot handle duplicate rpath entries.
Setting `DYLD_LIBRARY_PATH` would work fine for me -- I'm using it for other libraries -- but as I mentioned, this is not the problem here. |
NYCPlanning/labs-zap-search | 534017476 | Title: Stage: User is able to submit hearing date with incorrect field
Question:
username_0: Actual Result: User is able to submit hearing date with incorrect field
Expected Result: User should not be able to submit a hearing date with an incorrect field


Answers:
username_1: What does "incorrect" mean? What's wrong with which field?
Looking at the screenshot, it seems like the hearing location is blank. Is that what you mean by incorrect?
The only way I'm able to reproduce that is by entering a space in the location field. Our validation is just making sure that _any_ text has been entered in the location field. Perhaps we should instead check for ≥ 1 alphanumeric character.
This is an enhancement not a bug, and should not block launch.
username_1: Hmm… I read the screenshot wrong; didn't notice the highlighted second location field.
So the issue is that there is text in the location field, but the validation doesn't think the text is valid, and the confirmation screen shows that field as blank. It seems that validation is not working correctly on the second location field.
username_2: Yeah, I think this is a UX issue.
We can tackle this problem if and only if we get time to refactor the code behind this.
username_3: @username_0 Would you mind providing a step-by-step on how to recreate this bug? I've been trying to recreate it and have not been able to
username_3: @username_0 @username_1 @username_2 Has anyone else been able to recreate this bug, and if so, would they mind providing the steps? Haven't been successful at recreating it
username_3: Unable to reproduce this issue. Closing the issue, will reopen if we run into this bug again.
Status: Issue closed
|
NCEAS/metacatui | 833072372 | Title: Enhance filter group tab editing in the portal editor data page
Question:
username_0: Once basic editing of custom search filter groups is supported in the portal editor (issue #1685), the following enhancements can be added:
- [ ] Allow users to re-order tabs (just like the [portal section tabs](https://github.com/NCEAS/metacatui/blob/504ba5d5b66fa22f052ebe4b4bb9a116910b2bd3/src/js/views/portals/editor/PortEditorSectionsView.js#L258-L269))
- [ ] Support adding, editing, and removing a description and icon for each tab group
<img width="535" alt="Screen Shot 2021-03-16 at 13 53 16" src="https://user-images.githubusercontent.com/26600641/111356716-02af7380-865f-11eb-9511-9c53dc9ab846.png">
- [ ] Enable moving filters between filter groups. Originally we came up with a design that allowed adding existing filters using a dropdown interface, shown below. However, it might be more intuitive to just allow users to drag and drop filters into new tab groups.
<img width="550" alt="Screen Shot 2021-03-16 at 13 55 19" src="https://user-images.githubusercontent.com/26600641/111357007-4a35ff80-865f-11eb-89e0-eae9d8507c18.png">
Related mockups are [here](https://invis.io/8U10HTJHGXHF) |
lianetoolkit/liane-toolkit | 307782758 | Title: "geolocation" field keeps the text after typing
Question:
username_0: If the list of suggested entries in the autocomplete on the geolocation creation page has more than one suggestion, then after selecting one the system keeps the previously typed text in the text field.
The behavior is not consistent when you type something that returns a single autocomplete option.<issue_closed>
Status: Issue closed |
apache/servicecomb-java-chassis | 988986846 | Title: Questions about the new version 2.5.0
Question:
username_0: 1. RestTemplate: I see you seem to have added configurable retries; how should they be applied?
2. For third-party interfaces, how do we access them through the registry?
3. Could you provide a complete set of 2.5.0 yaml covering all features?
Answers:
username_1: These questions are too broad; please raise specific questions where possible.
Some materials for reference: [Developers Guide](https://servicecomb.apache.org/references/java-chassis/zh_CN/), [Learning and application practice](https://blog.csdn.net/looook/article/details/116271333)
username_0: The main thing is that the official docs don't make clear how the 2.5.0 yaml should be configured. We were using 1.3.0 before, and now some of the configuration approaches have changed, which gives me a headache. So I need documentation specifically for 2.5.0; the current official docs mix everything together. Could a dedicated one be published?
username_1: Can you list the specific problems? 2.5.0 should be able to recognize all 1.3.0 configuration.
username_0: Some parts have changed; I'll provide a full set of yaml configuration later. Another idea: for third-party interfaces, I suggest supporting dynamic changes. Third-party interfaces are not under our control (addresses change and so on), so if consuming third-party interfaces depends on the registry, there will be problems.
username_0: @username_1 I upgraded to java-chassis 2.5.0, deployed it, and still ran into the 1.3.0 problem, so I'd like to ask: how should the asynchronous registration described in https://xie.infoq.cn/article/c402d62a1cd95457a9a0f0621 be used?
username_1: 1. The InfoQ article describes the registry's internal implementation logic. It has nothing to do with using java-chassis and is not visible to developers.
2. What do you mean by third-party registration? The feature described here? https://servicecomb.apache.org/references/java-chassis/zh_CN/build-consumer/3rd-party-service-invoke.html
username_0: Yes, that feature. When the interface's URL or address changes, is seamless switching now supported without restarting the service?
username_0: So does 2.5.0 use long-connection heartbeats now? My understanding is that if multiple registry IPs are configured, those IPs are cached and heartbeats are continuously sent to the registry; when the heartbeat to the first IP fails, does it switch to the second?
username_1: 1. Seamless switching is not supported. You can write the configuration item as a placeholder; after it changes, a restart is required.
2. The heartbeat between the microservice and the registry is a short connection; heartbeats are sent to each instance in turn (including the failing ones).
username_0: Is seamless switching being considered for a future version? Also, how is highway supposed to be used?
username_1: For how to use highway, you can first look at the [learning materials](https://blog.csdn.net/looook/article/details/116271333) I gave earlier.
The discussion in this issue is no longer focused. For a single specific problem you can create a new issue. Closing this one.
Status: Issue closed
|
i-RIC/prepost-gui | 215095854 | Title: Import GeoTIFF error. Proj. with GeoTIFF crashes upon re-open of saved file
Question:
username_0: iRIC v3 opens this geotif file (in attached zip file) but doesn't show elevations correctly. Also when project with this file is saved, it crashes when reopened. The file in .zip file opens correctly in other software such as Global Mapper or ARC. Thanks Keisuke
[E.zip](https://github.com/i-RIC/prepost-gui/files/851521/E.zip)
Answers:
username_1: This problem is now fixed. GeoTIFF I/O had a bug.
username_2: Did you confirm this problem is fixed?
username_1: Rich and Keisuke checked, and it worked.
Status: Issue closed
|
jupyter/notebook | 113407883 | Title: Document 4.1 new features
Question:
username_0: See https://github.com/jupyter/notebook/issues/653; we added things to the what's-new section.
We should actually have longer descriptions.
Answers:
username_1: I can help out here if someone can point me to where the doc to edit lives. Are we talking about just extending the changelog descriptions or the doc site or something else?
username_0: Docs are in the [docs folder](https://github.com/jupyter/notebook/tree/master/docs/source), auto published [there](http://jupyter-notebook.readthedocs.org/en/latest/) once merged in master.
I think Min already did a bit of work [there](https://github.com/jupyter/notebook/pull/786).
Thanks !
(also tell us which incompatibility you have with 4.1)
username_2: I got everything I could think of in #786, but if anyone has other things I missed, by all means PR away!
username_0: We can likely have a section for each of these things that describes in more detail how to use them (like the shortcut and position in the menu, icon in the toolbar..)
username_0: Advertise the tool for generating certificate and password.
username_1: Still planning to take a crack at this tonight or tomorrow when I have time. (But don't let me hold anything up if you're looking to get it done sooner!)
username_0: Don't worry we can document during beta phase so we have 2 weeks :-)
username_3: Perhaps this should be a new documentation issue or part of a blog post on 4.1. The GIF here demonstrates copying and pasting a cell using the new command palette. At the end, a cell is deleted using command DD (though the GIF doesn't reflect this well - a brief screencast with audio would be better). It could be helpful as folks go through the beta period to make small GIFs of some of these items. Perhaps adding a GIFs folder to the docs source or, more simply, a GIF issue where folks could place their GIFs. FWIW, the GIF helped more than text or a static image to see what was going on with the new feature.

username_0: The issue with doing that while developing (beta should be fine though) is that the UI changes; for the blog post I had to re-record the gif 3 times.
username_3: @carreau Thanks for the feedback. I'll open a separate issue about documentation and helping users understand the user interface.
username_1: I added a bit more detail about how to trigger the major new UI features to the changelog. I also went through all the PRs and issues closed for 4.1 and picked out a few that seemed worthy of changelog mention (IMHO). For all the rest, I included links to the full lists of milestone 4.1 PRs and issues on GitHub for folks that want that level of detail.
username_2: closed by #871. Thanks @username_1!
Status: Issue closed
|
NLnetLabs/ldns | 890095441 | Title: Default TTL is SOA MINIMUM unless $TTL is specified
Question:
username_0: [Reported by @username_1]
[RFC 1035](https://datatracker.ietf.org/doc/rfc1035/), Section 5.1, says 'Omitted class and TTL values are default to the last explicitly stated values.'
It appears ldns does not honour that sentence from RFC 1035 - instead it defaults to 3600 if there is no $TTL.
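A minimal zone-file illustration of the behaviour being reported (names and values are made up):
```
$ORIGIN example.com.
@    3600 IN SOA ns1 hostmaster 1 7200 3600 1209600 300
www  7200 IN A   192.0.2.1
; the next record has no TTL: per RFC 1035 it should inherit the last stated
; value (7200), but without a $TTL line ldns assigns 3600 instead
ftp       IN A   192.0.2.2
```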
Answers:
username_1: I think 'SOA MINIMUM' in the issue title should be '3600' :)
username_0: You're right. Maybe it would be better for the default TTL to be the SOA MINIMUM...?
username_1: I don't think SOA MINIMUM should be involved at all.
username_2: I remember that the soa minimum got redefined into the TTL for negative caching by RFC 2308. According to Wikipedia, the recommended value is 3600 and they refer to ripe-209. Reading that, but doesn't really seem to recommend any value.
jaap
username_1: Indeed, 2308 un-defines the 'default TTL for records without TTL in the zone' meaning - but I learned today that that meaning never came from 1035. It appears to have come solely from BIND 8's behaviour.
I also note that all tools I looked at, except BIND, have a 3600 default for the case that there is no `$TTL` -and- no record TTL -and- no previous record to grab a TTL from, so it does seem like we have consensus there, for the case where the admin really gives the software nothing.
username_3: FWIW this issue causes problems for ZONEMD. Maybe that's also how @username_1 found it. I'm adding a test case to https://github.com/verisign/zonemd-test-cases
username_1: i think it was unrelated for me, but that's a very good point - implicit semantics are a risk for ZONEMD.
username_4: The 22-lots-rr-types ZONEMD test case in https://github.com/verisign/zonemd-test-cases has a bad ZONEMD record (1 1 1 EFC8D45A563B1E858D58060835E25830E9695A983EE21C3F62F89B9C27DB0B6745F9C6213E37EB236847110FD747F705 rather than 1 1 1 664046D77F36F640B1C5297FA56A695C180F9B688C6E8D915EFF8FDAD9B7BBFC00A833B77812B9F0785CC1EBFB57D709 - if my code is correct) because there are records without an explicit TTL field in the test file and they have been set to 3600 for the computation.
The zone file does not have a $TTL so RFC 1035 rules should apply. |
youzan/vant | 706134434 | Title: The List component can be used together with the PullRefresh component
Question:
username_0: Describe the problem
After setting a height on the List component, any pull-down gesture triggers the pull-to-refresh, and the list itself cannot be scrolled.
<img width="404" alt="微信图片_20200922150848" src="https://user-images.githubusercontent.com/19701281/93852992-9d994780-fce5-11ea-8adf-b792ea3572cd.png">
Answers:
username_1: You shouldn't need to set a height. Also, check whether html or body has overflow: hidden set; if so, remove it.
Status: Issue closed
|
prometheus/pushgateway | 983137094 | Title: Label max length
Question:
username_0: I got interesting behaviour with the maximum label length.
I made a code sample:
```
#!/usr/bin/env python
import os
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
pushgateway_endpoint = os.getenv('pushgateway_endpoint', 'localhost:9091')
labels = [
    'instance_id',
    'state',
    'subnetId',
    'tag_name',
    'tag_application'
]
gauge_test_metrics = Gauge("test_metrics", "Random count", labels, registry=registry)
test_list = [
    {
        "instance_id": "i-00942494faabe122c",
        "state": "running",
        "subnetId": "subnet-04d1dfd4bffc19a49",
        "tag_name": "us-east-1-vpc-012312312337f727d62234234234234234dw3d2wqd23d2sd2dsasdcasf23",
        "tag_application": "kafka-broker"
    },
    {
        "instance_id": "i-00942494faabe122b",
        "state": "running",
        "subnetId": "subnet-04d1dfd4bffc19a4s",
        "tag_name": "us-east-1-vpc-012312312337f727d62234234234234235f23fsdfsdf23rwedfasfcscsdc",
        "tag_application": "kafka-broker1"
    }
]

for instance in test_list:
    gauge_test_metrics.labels(
        instance['instance_id'],
        instance['state'],
        instance['subnetId'],
        instance['tag_name'],
        instance['tag_application']
    ).set(10)
    push_to_gateway(pushgateway_endpoint, job='ec2_stats', grouping_key={"instance": instance['instance_id']}, registry=registry)
```
and as a result of running it I got the following:
```
curl -s http://localhost:9091/metrics | grep "test_metrics"
# HELP test_metrics Random count
# TYPE test_metrics gauge
test_metrics{instance="i-00942494faabe122b",instance_id="i-00942494faabe122b",job="ec2_stats",state="running",subnetId="subnet-04d1dfd4bffc19a4s",tag_application="kafka-broker1",tag_name="us-east-1-vpc-012312312337f727d62234234234234235f23fsdfsdf23rwedfasfcscsdc"} 10
test_metrics{instance="i-00942494faabe122b",instance_id="i-00942494faabe122c",job="ec2_stats",state="running",subnetId="subnet-04d1dfd4bffc19a49",tag_application="kafka-broker",tag_name="us-east-1-vpc-012312312337f727d62234234234234234dw3d2wqd23d2sd2dsasdcasf23"} 10
test_metrics{instance="i-00942494faabe122c",instance_id="i-00942494faabe122c",job="ec2_stats",state="running",subnetId="subnet-04d1dfd4bffc19a49",tag_application="kafka-broker",tag_name="us-east-1-vpc-012312312337f727d62234234234234234dw3d2wqd23d2sd2dsasdcasf23"} 10
```
As you see, there are 3 different metrics stored in the pushgateway, but there should be only 2 metrics.
This issue appears because tag_name is longer than 32 chars.
Is this a limitation, or is it a bug in the latest version?
Answers:
username_1: There is no limit on label length. Above, you have listed three distinct metrics. The 2nd and 3rd differ in the instance label. Your code is dynamically setting the instance label, so you got the 3rd one from a push from a different instance.
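For reference, a minimal sketch of the pattern the sample seems to be aiming for, assuming the goal is one Pushgateway group per instance: build a fresh registry inside the loop, so each push carries only that instance's sample instead of re-pushing earlier instances into a new group (this reuses pushgateway_endpoint, labels and test_list from the sample above).
```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

for instance in test_list:
    registry = CollectorRegistry()  # fresh registry per instance/group
    gauge = Gauge("test_metrics", "Random count", labels, registry=registry)
    gauge.labels(
        instance['instance_id'],
        instance['state'],
        instance['subnetId'],
        instance['tag_name'],
        instance['tag_application']
    ).set(10)
    push_to_gateway(
        pushgateway_endpoint,
        job='ec2_stats',
        grouping_key={"instance": instance['instance_id']},
        registry=registry
    )
```
With this, each grouping key ends up holding exactly one test_metrics series, regardless of how long tag_name is.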
Status: Issue closed
|
sfbrigade/stop-covid19-sfbayarea | 648379493 | Title: Add Code for San Francisco website to social links
Question:
username_0: **Describe the solution you'd like**
In the social links at the bottom, I'd add a link to the website for "Code for San Francisco".
**Additional context**
The other links on there all pertain to the Org, but are missing the central hub so to speak.
Answers:
username_1: I can take this issue if nobody has started working on it already!
username_1: I noticed that "About Code 4 San Francisco" now links to the website -- would it be redundant to also include a website link in the social section?
username_2: @username_0 Can you please check the website and let us know what we can improve for this ticket? |
primefaces/primefaces | 445423213 | Title: DatePicker: range not working with - as date part separator
Question:
username_0: With the pattern set to "yyyy-MM-dd" it's impossible to use selectionMode "range". DatePickerRenderer has a hardcoded separator string in line 186, and since the number of elements in the parts array isn't equal to 2, it doesn't even try to parse those dates. Please allow specifying a different separator string for the range and multiple selectionModes.
## 1) Environment
- PrimeFaces version: 7.0.2
- Application server + version: WildFly 16.0.0
- Affected browsers: All
## 2) Expected behavior
Getting list with two elements.
## 3) Actual behavior
Getting empty list.
Answers:
username_1: Good debugging. I think you mean this line 194: https://github.com/primefaces/primefaces/blob/master/src/main/java/org/primefaces/component/datepicker/DatePickerRenderer.java#L194
username_0: In my sources for 7.0.2 (we have commercial support, and probably there is different license header) the line number is different, but it's the same piece of code.
username_0: By the way, I wanted to write my own converter to a custom object (a range with two fields "from" and "to"), and with the way this code is written it's impossible to do.
username_2: Same issue here. An easy fix would be to change the default separators to include the spaces. I don't know of a date/time pattern that uses ", " or " - ", so this might be a permanent fix unless you decide to customize the separators.
Proposed code looks like this:
```
case "multiple": {
    String[] parts = submittedValue.split(", ");
```
and
```
case "range": {
    String[] parts = submittedValue.split(" - ");
```
username_1: @username_2 not a bad suggestion. @username_4 WDYT? I can submit a PR if you think it's OK.
username_3: Please note the same issue exists in PF 7.0.4 in datepicker.js. `text.split(/-| - /)` is used instead of `text.split(' - ')`
username_1: @username_4 the JS piece @username_3 is talking about above is in the shared DatePicker code between PrimeNG, PrimeReact, and PrimeFaces so I am hesitant to touch it. Can you please review?
username_4: Hi @username_1,
We can add "rangeSeparator" attribute for this issue.
Status: Issue closed
username_4: Hi @username_0
Please use rangeSeparator attribute.
Best regards, |
MichalLytek/type-graphql | 818226483 | Title: Allow disabling inferring the default values from properties initializers
Question:
username_0: In some cases, the default value should only be used internally, like the current date for some optional date filter input field.
In order to not introduce a breaking change, a schema option should be introduced to disable that mechanism.
Answers:
username_0: Closing by e8e164d 🔒
Status: Issue closed
username_1: Upgraded to 1.2.0-rc.1 today and this seems to work really well for me. Thanks 👍 |
nodejs/node-gyp | 270315929 | Title: Error when installing node package 'oracledb'
Question:
username_0: C:\Users\[username]\Desktop\Oracle\node_modules\oracledb>if not defined npm_config_node_gyp (node "C:\Program Files\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild ) else (node "" rebuild )
gyp info it worked if it ends with ok
gyp verb cli [ 'C:\\Program Files\\nodejs\\node.exe',
gyp verb cli 'C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js',
gyp verb cli 'rebuild' ]
gyp info using [email protected]
gyp info using [email protected] | win32 | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "C:\Python27" in the PATH
gyp verb `which` failed Error: not found: C:\Python27
gyp verb `which` failed at getNotFoundError (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:13:12)
gyp verb `which` failed at F (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:68:19)
gyp verb `which` failed at E (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:80:29)
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:89:16
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\node_modules\isexe\index.js:44:5
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\node_modules\isexe\windows.js:29:5
gyp verb `which` failed at FSReqWrap.oncomplete (fs.js:112:15)
gyp verb `which` failed C:\Python27 { Error: not found: C:\Python27
gyp verb `which` failed at getNotFoundError (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:13:12)
gyp verb `which` failed at F (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:68:19)
gyp verb `which` failed at E (C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:80:29)
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\which.js:89:16
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\node_modules\isexe\index.js:44:5
gyp verb `which` failed at C:\Program Files\nodejs\node_modules\npm\node_modules\which\node_modules\isexe\windows.js:29:5
gyp verb `which` failed at FSReqWrap.oncomplete (fs.js:112:15) code: 'ENOENT' }
gyp verb could not find "C:\Python27". checking python launcher
gyp verb check python launcher python executable found: "C:\\Python27\\python.exe"
gyp verb check python version `C:\Python27\python.exe -c "import platform; print(platform.python_version());"` returned: "2.7.1\r\n"
gyp verb get node dir no --target version specified, falling back to host node version: 7.4.0
gyp verb command install [ '7.4.0' ]
gyp verb install input version string "7.4.0"
gyp verb install installing version: 7.4.0
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp verb install version is already installed, need to check "installVersion"
gyp verb got "installVersion" 9
gyp verb needs "installVersion" 9
gyp verb install version is good
gyp verb get node dir target node version installed: 7.4.0
gyp verb build dir attempting to create "build" dir: C:\Users\[username]\Desktop\Oracle\node_modules\oracledb\build
gyp verb build dir "build" dir needed to be created? C:\Users\[username]\Desktop\Oracle\node_modules\oracledb\build
gyp verb build/config.gypi creating config file
gyp verb build/config.gypi writing out config file: C:\Users\[username]\Desktop\Oracle\node_modules\oracledb\build\config.gypi
gyp verb config.gypi checking for gypi file: C:\Users\[username]\Desktop\Oracle\node_modules\oracledb\config.gypi
gyp verb common.gypi checking for gypi file: C:\Users\[username]\Desktop\Oracle\node_modules\oracledb\common.gypi
gyp verb gyp gyp format was not specified; forcing "msvs"
gyp info spawn C:\Python27\python.exe
gyp info spawn args [ 'C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\gyp\\gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'msvs',
gyp info spawn args '-G',
gyp info spawn args 'msvs_version=auto',
gyp info spawn args '-I',
gyp info spawn args 'C:\\Users\\[username]\\Desktop\\Oracle\\node_modules\\oracledb\\build\\config.gypi',
gyp info spawn args '-I',
gyp info spawn args 'C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\addon.gypi',
gyp info spawn args '-I',
[Truncated]
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the oracledb package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs oracledb
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls oracledb
npm ERR! There is likely additional logging output above.
npm verb exit [ 1, true ]
npm ERR! Please include the following file with any support request:
npm ERR! C:\Users\[username]\Desktop\Oracle\npm-debug.log
```
</details>
<!-- Any further details -->
I need some help understanding the error: when I configured node-gyp for msvs2013, why am I getting an error saying that I need MS Visual Studio 2005 or the .NET Framework 2.0 SDK? I am more interested in what is going on behind the scenes when installing this 'oracledb' package. I have Python 2.7 installed on my system and added to the PATH variable.
Status: Issue closed
Answers:
username_1: And it's printed by msbuild (i.e., not node-gyp.)
You probably need to run vcvarsall.bat from VS 2013 to set up the environment and if that doesn't work, try https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#prerequisites
username_0: I did the steps mentioned in 'Option 1' and the 'windows-build-tools' was installed successfully. All the details and the errors that I shared above are after completing the 'Option 1'.
Based on this information, do you want to share some more guidance on what I should do to fix this issue?
username_1: I don't have much to add to what I wrote earlier. In a nutshell, it's a configuration issue local to your system.
Aside: you are using an unsupported version of node.js. |
nathanmarks/jss-theme-reactor | 178637115 | Title: Differences ?
Question:
username_0: Hi
looks like promising lib but i'm wondering what are main differences against other inline JSstylesheet libs like Aphrodite ?
cheers
Answers:
username_1: This is wrapping JSS.
The difference to Aphrodite are outlined by the JSS author here: https://medium.com/@oleg008/aphrodite-vs-jss-a15761b91ee3#.g3pvlrwug
No IE < 10 support was a bit surprising.
Status: Issue closed
|
django-money/django-money | 317412661 | Title: Error in using the money_localize in the template.
Question:
username_0: Error in using the money_localize in the template.
sequence item 7: expected str instance, Money found
2.0.4
TypeError
sequence item 7: expected str instance, Money found
/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py in render, line 218
/usr/local/bin/python
3.6.5
----
$ pip install django-money
INSTALLED_APPS = [
...,
'djmoney',
...
]
{% load djmoney %}
...
{% money_localize money %}
Am I doing something wrong?
Thank you
Answers:
username_1: I've made a PR on this. Does #423 fix your issue?
Status: Issue closed
|
schemaorg/schemaorg | 226262866 | Title: Using the rdf version of schema.org with Protégé
Question:
username_0: Hello,
I was really happy to find an updated version of the data model that I can open in Protégé at http://schema.org/docs/developers.html.
Just a question: it opens OK in Protégé and I can see the whole class hierarchy and the annotation properties, but I could not visualize a property like schema:familyName in Protégé, even though it is present in the RDF file. Do you know why?
Kind regards
Jean
[schema.nt.zip](https://github.com/schemaorg/schemaorg/files/976064/schema.nt.zip)
Answers:
username_1: I'm not a Protégé user so I do not know why, but my guess would be that in Schema.org we use _domainIncludes_ and _rangeIncludes_ not _domain_ and _range_ which Protégé may be expecting.
username_2: Correct.
The RDF version of schema.org is just an RDF file that borrows vocabulary from RDFS. It's neither an OWL model nor a pure RDFS model.
As a workaround to quickly make it work in Protégé you can simply:
- replace schema:domainIncludes with rdfs:domain
- replace schema:rangeIncludes with rdfs:range
- optionally convert the schema:DataType statements into rdfs:Datatype statements
I have code to convert the schema.rdf files into more standard RDFS and OWL models if needed.
N.
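A minimal rdflib sketch of that kind of conversion (an illustration of the workaround above, not the code username_2 mentions; the file name and the http://schema.org/ namespace are assumed from the attached schema.nt, and the optional schema:DataType step is omitted):
```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.parse("schema.nt", format="nt")

# rewrite schema:domainIncludes / schema:rangeIncludes as rdfs:domain / rdfs:range
for prop, cls in list(g.subject_objects(SCHEMA.domainIncludes)):
    g.add((prop, RDFS.domain, cls))
    g.remove((prop, SCHEMA.domainIncludes, cls))

for prop, cls in list(g.subject_objects(SCHEMA.rangeIncludes)):
    g.add((prop, RDFS.range, cls))
    g.remove((prop, SCHEMA.rangeIncludes, cls))

g.serialize(destination="schema-rdfs.ttl", format="turtle")
```
The output should then load in Protégé as a more conventional RDFS model.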
username_3: Thanks @username_2. Yes, it would be great if you could share the code somewhere as people do periodically ask about this.
username_0: Thanks a lot for your very useful answer.
Jean
username_4: Hi,
FYI, there is also the OWL mapping of schema.org here: http://topbraid.org/schema/
Judging from the last update date, it does not cover the version 3.2.
Greetings
Umut
username_0: thanks for the info
Jean
Status: Issue closed
username_3: Closing as no action on this is planned.
@username_2 - did you ever post your code?
username_5: Hi!
Could you please reopen the folder?
Br
Timo
username_1: For info, there is also a [Schema.org OWL](https://schema.org/docs/schemaorg.owl) file listed in the experimental section of the [Developers](https://schema.org/docs/developers.html) documentation. This will import directly into Protégé. |
North-Seattle-College/ad440-winter2020-tuesday-repo | 556664606 | Title: Review: Simulated Device -> sends error info: (broken water, water flow issue, missing coffee (scheduled to run every 1 minute))
Answers:
username_1: I copied the Python code into a VS Code file and saved it to a local folder on my machine.
Connected to our Azure portal and opened the Azure PowerShell terminal.
I used the command to add the Azure CLI &
the command to run on the Azure CLI.
Then in my command prompt I set the path to the local folder with the Python code & entered `python SimulatedDevice.py` to run it.
The messages (Broken water, Water flow issue, Missing coffee) appear every minute as instructed.
Bugs: Messages appear with a backslash before each word. That's not how we want it to appear to our client.
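A guess at the backslash issue, since the device script itself isn't pasted here: the symptom looks like a JSON payload being encoded twice before it is sent. A tiny illustration (the payload name is made up, not taken from our SimulatedDevice.py):
```python
import json

payload = {"error": "Broken water"}

encoded_once = json.dumps(payload)         # '{"error": "Broken water"}'
double_encoded = json.dumps(encoded_once)  # '"{\\"error\\": \\"Broken water\\"}"'

print(encoded_once)    # clean JSON
print(double_encoded)  # backslashes in front of every quoted word
```
If the device code already formats the message as a JSON string, passing it through json.dumps again would produce exactly this kind of escaped output.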
Status: Issue closed
|
open-policy-agent/contrib | 541297855 | Title: pam_authz -> opa security
Question:
username_0: There needs to be an example or documentation on securing opa while still allowing pam_authz to function properly.
Answers:
username_1: @username_0 this might be a good starting point: https://www.openpolicyagent.org/docs/latest/security/#hardened-configuration-example. WDYT?
username_0: Yeah. I implemented something like that for the helm chart. What I'm not sure about is the pam side. How does one configure the pam module to talk to opa securely?
username_0: Looks like opa supports unix sockets: https://github.com/open-policy-agent/opa/pull/752
That could be part of the solution if the pam side supports it.
Can you configure the authz module in opa to base auth on unix socket user?
username_0: Just glancing at the code, it does not look like it supports unix sockets, but it would be pretty easy to tweak the curl config in http.c to support it:
https://curl.haxx.se/libcurl/c/CURLOPT_UNIX_SOCKET_PATH.html
username_0: PR for implementing unix socket support here: https://github.com/open-policy-agent/contrib/pull/89
username_0: Between that PR, and controlling the permissions on the parent directory the socket is in, access to OPA can be restricted to root on the same host.
It works for ssh for sure. For other pam contexts it's not clear if the pam module runs as the user instead of root, which would fail. It would be good if opa in socket mode could read the remote user of the socket (unix sockets support this) and allow authz based on it.
username_0: It's also unclear how Prometheus metrics can be made to work in this configuration.
username_0: https://github.com/open-policy-agent/opa/issues/1975 filed for unix socket peercred support. |
phillipadsmith/daily | 126106849 | Title: Daily log for Monday, Jan 11, 2016
Question:
username_0: ID: (2016-01-11T22:13:52-08:00)
## Log
- [ ] Meditate
- [ ] Two sun salutations
- [ ] Exercise
- [ ] FL
- [ ] 90-minutes of creative work
- [ ] Follow-up/check-in with a friend
## Details
Meditate:
Exercise:
90-minutes of creative work:
Follow-up/check-in with a friend:<issue_closed>
Status: Issue closed |
fabricjs/fabric.js | 412512047 | Title: loadSVGFromString adds strokeWidth to text objects
Question:
username_0: <!-- BUG TEMPLATE -->
## Version
2.6.0
## Test Case
https://jsfiddle.net/fuL39810/3/
## Information about environment
Browser - Chrome
## Steps to reproduce
In the console log we can see that the object has been created with stroke width set to "1", and by clicking the button (reset stroke width to 0) we see the text on the canvas change position. I think this only happens with text objects imported from SVG.
## Expected Behavior
The object should have stroke width = 0?
## Actual Behavior
The object has stroke width = 1
Answers:
username_1: This is expected, since the stroke contributes to the object size.
If you want to avoid this, you need to read the center of the object before setting the stroke, and then set it back afterwards. Or you can work with originX and originY set to center.
to read the center
`var point = Object.getCenterPoint()`
to set it back where it was:
`Object.setPositionByOrigin(point, 'center', 'center')`
Status: Issue closed
username_0: Thank you for your explanation. Sorry for the inconvenience |
porres/pd-cyclone | 200827080 | Title: to do for coll
Question:
username_0: warnings https://github.com/username_0/pd-cyclone/issues/209
threaded https://github.com/username_0/pd-cyclone/issues/210
insert messages https://github.com/username_0/pd-cyclone/issues/114
doc: filetype / threaded example
Answers:
username_0: some extra information about coll
up to alpha 57, it had no threaded option and the 3rd outlet would bang
cyclone 02 uses pd-l2ork's version, that has a threaded number argument in any order and doesn't bang when not in threaded mode.
we fixed and reverted it, by making it bang in the unthreaded mode, and removed the warning saying whether it did or did not load a file, because it wasn't necessary
username_0: how does coll behave in single threaded CPUs? Raspberry pi, chip and stuff?
username_0: max doesnt break detemrinism and doesnt choke the audio

username_1: - maybe coll_dump isn't threadable, in terms of outlets, it does more than just bang, it spits stuff out of the main outlet. and right now our only callback coll_tick bangs out of the 3rd outlet. maybe something to bring up with the others?
- so what is filetype supposed to do? it exists in the code, just does nothing.
username_0: don't know about filetype... doesn't seem to be working in Max, the reference says:
"Sets the file types which can be read and written into the coll object. The message filetype with no arguments restores the default file behavior."
I guess we could make it work like "filetype" from buffer, which just specifies a filetype for when you write it to a file and give it a name that has no extension...
username_1: i've done A, haven't pulled it yet. So what should C do? Doesn't nosearch only mean it shouldn't look for the file on instantiation? Afterwards you can definitely tell them to read something and then it should bang. Trying to think of how to isolate it so it only bangs the one that received the read message, since the way it's set up now, the method doesn't know which particular object is calling it...
username_0: you've done B too, right?
username_0: yeah, about C) if you send it a read message, it should read. Maybe that's not a relevant detail; the thing is that, **for unthreaded mode only**, a read message in one coll seems to affect all colls with the same name...
if the method doesn't know which particular object is calling it... well, this IS WORKING fine for the threaded mode, so just make it the same in unthreaded ;)
username_1: note to self things to do: [readagain] bang not working, nosearch bangs when it's not supposed to
username_1: also put back general bangs in collcommon_doread, need for threaded and unthreaded
Status: Issue closed
|
openfl/lime | 161134854 | Title: Android app may start more than once without clearing previous state when Activity is killed by OS
Question:
username_0: You can debug this with "Don't keep activities" in developers option.
When an android app goes to background and other apps request more memory, OS may
1. Destroy activity without killing app's process (onDestroy is always called)
2. Kill the app's process (onDestroy may not be called, or the app may be killed even in the middle of execution)
A problem might occur with 1. The current implementation of SDLActivity tries to stop the thread by sending a quit event to the C++ side. When the app is reopened, some global state may still be stored in memory, and that may cause issues.
You might be able to avoid this issue by manually calling dlopen/dlclose to load/unload libApplicationMain.so, but this only works on cpp. I couldn't find a way to unload assemblies on Xamarin.Android. Xamarin.Android's runtime automatically loads assemblies into the default AppDomain when the app is launched, which makes it difficult to unload assemblies by loading them into a separate AppDomain.
I'm avoiding this issue currently by keeping SDLThread/SDLSurface when onDestroy is called with `isFinishing()` = false.
https://developer.android.com/training/basics/activity-lifecycle/stopping.html
Status: Issue closed
Answers:
username_1: Calling `System.exit(0)` is the best approach I've found so far; as we try and exit from the Haxe side, we want a clean restart when we come back
cyberbotics/webots | 935485629 | Title: Nao room in guided tour is failing
Question:
username_0: The `nao_room.wbt` launched from the guided tour is failing with the following error (on Windows):
```
WARNING: Background: Texture dimension mismatch between leftUrl and rightUrl.
WARNING: "python.exe" was not found.
Webots requires Python version 3.9, 3.8, 3.7 or 2.7 (64 bit) from python.org in your current PATH.
To fix the problem, you should:
1. Check the Python command set in the Webots preferences.
2. Check the COMMAND set in the [python] section of the runtime.ini file of your controller program if any.
3. Fix your PATH environment variable to use the required Python 64 bit version (if available).
4. Install the required Python 64 bit version and ensure your PATH environment variable points to it.
```
Answers:
username_1: On macOS as well. However, adding the full path to the Python executable solves the problem. It seems that Webots ignores the PATH environment somehow
username_0: Anyhow, the guided tour should not have any dependency on Python, as it is not installed by default on Windows.
Status: Issue closed
|
antmicro/screen-recorder | 744396213 | Title: Give alternative to record a video, not GIF
Question:
username_0: But don't make it as complicated as the original RecordRTC demo.
GIF/video should be a switch and not dropdown (GIF by default)
Resolution should be autodiscovered (see #2)
Answers:
username_0: Actually this works better than GIFs now ;)
Status: Issue closed
|
TheRealRobertShields/CPW213-TinyClothes | 570713835 | Title: Add shopping cart functionality
Question:
username_0: - Add a "Add to cart" button next to each product.
- User should be able to add multiple products to their shopping cart.
Shopping cart data should be stored in a cookie.
- Add a shopping cart icon to the navigation.<issue_closed>
Status: Issue closed |
sebfz1/wicket-jquery-ui | 386428524 | Title: DropDown column datatable
Question:
username_0: I use inline edit and it works fine, but I can't find an inline-editable column component for a dropdownchoice. Can you help me with an example?
Answers:
username_1: I'm not sure I see your problem, can you give more detail? You want an editable dropdown? (a combobox...)
username_0: I'm using the Kendo datatable inline editing; in the editable row I need to use a dropdown combo with selectable choices! I need an example or any ideas!
username_1: Hi, sorry for the very late reply. Did you had a look at `DropDownListEditor`?
username_0: Yes I tried, but it isn't an IColumn. Please can you give me an example of how to use it in a column?
username_1: It's the editor to be used by the column in edit mode, you just have to specify it like:
```
new PropertyColumn("myDD", "dd", 150) {
    @Override
    public IKendoEditor getEditor()
    {
        return new DropDownListEditor(new String[] {"myvalue-1", "myvalue-2"});
    }
}
```
Status: Issue closed
|
cf-convention/discuss | 683450489 | Title: Standard names: shallow convection variables
Question:
username_0: **Proposer's names** <NAME> and <NAME>
**Date** 2020/08/21
In atmospheric numerical models, non-precipitating cumulus clouds with cloud tops below 3000 m above the surface are treated separately (see the explanation in the AMS glossary: http://glossary.ametsoc.org/wiki/Shallow_convection_parameterization). Therefore, we want to apply for several quantities in this context:
Analogue to mass_fraction_of_convective_cloud_liquid_water_in_air: mass_fraction_of_shallow_convective_cloud_liquid_water_in_air
-**Term** mass_fraction_of_shallow_convective_cloud_liquid_water_in_air
-**Definition** "Mass fraction" is used in the construction "mass_fraction_of_X_in_Y", where X is a material constituent of Y. It means the ratio of the mass of X to the mass of Y (including X). A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Shallow convective cloud is nonprecipitating cumulus cloud with cloud tops below 3000 m above the surface that produced by the convection schemes in an atmosphere model. "Cloud liquid water" refers to the liquid phase of cloud water. A diameter of 0.2 mm has been suggested as an upper limit to the size of drops that shall be regarded as cloud drops; larger drops fall rapidly enough so that only very strong updrafts can sustain them. Any such division is somewhat arbitrary, and active cumulus clouds sometimes contain cloud drops much larger than this. Reference: AMS Glossary http://glossary.ametsoc.org/wiki/Cloud_drop.
-**Units** 1
Analogue to 'convective_cloud_base_altitude': 'shallow_convective_cloud_base_altitude'
-**Term** 'shallow_convective_cloud_base_altitude'
-**Definition** cloud_base refers to the base of the lowest cloud. Altitude is the (geometric) height above the geoid, which is the reference geopotential surface. The geoid is similar to mean sea level. Shallow convective cloud is nonprecipitating cumulus cloud with cloud tops below 3000 m above the surface that produced by the convection schemes in an atmosphere model.
-**Units** m
Analogue to 'convective_cloud_top_altitude'
-**Term** 'shallow_convective_cloud_top_altitude'
-**Definition** cloud_top refers to the top of the highest cloud. Altitude is the (geometric) height above the geoid, which is the reference geopotential surface. The geoid is similar to mean sea level. Shallow convective cloud is nonprecipitating cumulus cloud with cloud tops below 3000 m above the surface that produced by the convection schemes in an atmosphere model.
-**Units** m
Answers:
username_1: This looks good to me. I agree on the proposed new terms 'mass_fraction_of_shallow_convective_cloud_liquid_water_in_air', 'shallow_convective_cloud_base_altitude' and 'shallow_convective_cloud_top_altitude'.
username_2: Hi @username_0
Thank you for your proposal and to @username_1 for your support.
I think the definition for mass_fraction_of_shallow_convective_cloud_liquid_water_in_air looks fine. As you say, it is similar to 'mass_fraction_of_convective_cloud_liquid_water_in_air'. It looks like we do not have a phrase for shallow_convective_cloud as we do not have any terms relating to it. The only thing I can find about shallow_convective is in 'shallow_convection_precipitation_flux' the phrase 'Some atmosphere models differentiate between shallow and deep convection.' Not sure whether it might be necessary to add this in after your proposed phrase for shallow_convective_cloud. I like the phrase you have given for this:
Shallow convective cloud is nonprecipitating cumulus cloud with cloud tops below 3000 m above the surface that produced by the convection schemes in an atmosphere model.
I would just amend it slightly to:
Shallow convective cloud is nonprecipitating cumulus cloud with a cloud top below 3000m above the surface produced by the convection schemes in an atmosphere model.
The second term looks fine and obviously we will match the shallow convective cloud phrase to both.
username_0: OK, fine with me, thanks!!
username_2: Ok great. So we have the following definitions:
Term: mass_fraction_of_shallow_convective_cloud_liquid_water_in_air
Definition: "Mass fraction" is used in the construction "mass_fraction_of_X_in_Y", where X is a material constituent of Y. It means the ratio of the mass of X to the mass of Y (including X). A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Shallow convective cloud is nonprecipitating cumulus cloud with a cloud top below 3000m above the surface produced by the convection schemes in an atmosphere model. Some atmosphere models differentiate between shallow and deep convection. "Cloud liquid water" refers to the liquid phase of cloud water. A diameter of 0.2 mm has been suggested as an upper limit to the size of drops that shall be regarded as cloud drops; larger drops fall rapidly enough so that only very strong updrafts can sustain them. Any such division is somewhat arbitrary, and active cumulus clouds sometimes contain cloud drops much larger than this. Reference: AMS Glossary http://glossary.ametsoc.org/wiki/Cloud_drop.
Term: shallow_convective_cloud_base_altitude
Definition: The phrase "cloud_base" refers to the base of the lowest cloud. Altitude is the (geometric) height above the geoid, which is the reference geopotential surface. The geoid is similar to mean sea level. Shallow convective cloud is nonprecipitating cumulus cloud with a cloud top below 3000m above the surface produced by the convection schemes in an atmosphere model. Some atmosphere models differentiate between shallow and deep convection.
Term: shallow_convective_cloud_top_altitude
Definition: The phrase "cloud_top" refers to the top of the highest cloud. Altitude is the (geometric) height above the geoid, which is the reference geopotential surface. The geoid is similar to mean sea level. Shallow convective cloud is nonprecipitating cumulus cloud with a cloud top below 3000m above the surface produced by the convection schemes in an atmosphere model. Some atmosphere models differentiate between shallow and deep convection.
If there are no further comments on these in the next 7 days these can be accepted into the next update.
Thanks
username_2: These have now been accepted for the next update. Thanks
username_2: These terms have now been added to the standard name table v77.
Status: Issue closed
|